For performance a single replica is best: many things take twice as long
with two replicas, e.g. storing a file.

We will be coming out with write-back caching; this will hugely improve
"tar zxvf linux.tgz". To see how much the improvement will be, you can do
the following:

 0. Use 4.6.5.
 1. venus-init server 200000    (200 MB is a midsize cache)
 2. clog    (VITAL)
 3. cfs wd -age 5 -time 5
    This write-disconnects the client, starts writing back at most 1 sec
    later, and then allows quite a lot of stuff to be sent in one blow
    (namely for 5 secs).
 4. Now: tar zxvf your-kernel.tgz

- Peter -

On Wed, 16 Sep 1998, Troy Benjegerdes wrote:

> Well, I set up two coda servers and a volume replicated on both servers,
> and ran the Bonnie filesystem benchmark on nfs and coda. I also untarred
> a linux kernel to check file creation times.
>
> The machines were new Asus P2DS motherboards with 100 MHz RAM, one
> Pentium II 450, and a 4 GB Western Digital IDE drive. Each machine has
> 256 MB of RAM. The machines were connected via Fast Ethernet and a Bay
> Networks switch.
>
> The venus cache size was also set to 20 MB.
>
> Both machines were running codasrv and venus, and bonnie was run on the
> second machine.
>
> For the 30 MB file size, coda actually beat nfs for block writes. On
> block reads for the 30 MB size, NFS was over 10 times faster... I
> believe this to be because NFS is using the linux buffer cache to its
> advantage. Does the coda fs module use the buffer cache as much? I am
> using the module that comes with the 2.1.121 linux kernel.
>
> For the untarring, coda was *much* slower. I'm assuming this is because
> file creation has a lot of overhead and such.
>
> All in all, I am quite impressed, and coda looks quite promising as a
> base filesystem for a Beowulf-type cluster environment. My next goal is
> to get 6 more identical machines set up (for a cluster of 8) and check
> how coda performs. Does anyone have any suggestions on how many servers
> I should run?
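The client-side steps above can be sketched as a shell session. This is a
sketch, not a verbatim recipe: the server name "server" and the tarball
path are placeholders for your own setup, and the commands assume the Coda
4.6.5 client tools (venus-init, clog, cfs) are already installed.

```shell
# 1. Initialize venus with a 200 MB (midsize) client cache.
#    "server" is a placeholder for your Coda server's hostname.
venus-init server 200000

# 2. Authenticate to the servers with clog -- this step is vital,
#    since write-back needs valid tokens.
clog

# 3. Write-disconnect the client: mutations are logged locally and
#    reintegrated in batches (age threshold 5 s, interval 5 s).
cfs wd -age 5 -time 5

# 4. Time the untar inside /coda to measure the improvement.
#    The tarball name below is a placeholder.
time tar zxvf your-kernel.tgz
```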
> I believe two is the minimum for data redundancy, and 8 (one on each
> machine) would be overkill.
>
> Here are the results:
>
> Coda filesystem, replicated on 2 servers, 300 MB test file
>
>         -------Sequential Output-------- ---Sequential Input-- --Random--
>         -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>      MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>     300  1474  8.3  1629  2.3  1022  4.7  1420  5.8  1380  3.4  19.1  0.2
>
> NFS filesystem, 300 MB test file
>
>         -------Sequential Output-------- ---Sequential Input-- --Random--
>         -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>      MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>     300  2268 13.7  2226  2.4  1150  3.9  2702 13.2  2603  4.7 219.8  2.0
>
> Coda filesystem, replicated on 2 servers, 30 MB test file
>
>         -------Sequential Output-------- ---Sequential Input-- --Random--
>         -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>      MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>      30  3256 17.9  4405  5.7  3134  7.7  2007  7.3  3474  3.1 378.6  2.0
>
> NFS filesystem, 30 MB test file
>
>         -------Sequential Output-------- ---Sequential Input-- --Random--
>         -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>      MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec  %CPU   /sec %CPU
>      30  2831 13.0  2216  3.3  3181  8.4 28403 99.9 115432 101.5 2790.6 18.8
>
> Untarring linux-2.1.121 on nfs:
>
> [troybenj_at_mos11 test]$ time tar zxvf /tmp/linux-2.1.121.tar.gz
>
> 4.18user 3.97system 1:14.74elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (3163major+9197minor)pagefaults 0swaps
>
> --------------------------------------------------------------------------
> | Troy Benjegerdes | troybenj_at_iastate.edu | hozer_at_drgw.net |
> | Unix is user friendly... You just have to be friendly to it first. |
> | This message composed with 100% free software. http://www.linux.org |
> --------------------------------------------------------------------------

Received on 1998-09-16 15:54:41