Hi,

I wonder if it is possible to replicate the root filesystem? I have tried it
the following way (all versions coda-*-6.0.2-1.i386.rpm):

scm:
  vice-setup
  adapt /vice/db/servers:
    172.16.1.1  1
    172.16.3.1  2
  adapt /vice/db/VSGDB:
    E0000100 172.16.1.1 172.16.3.1
  start auth2, rpc2portmap, updatesrv

non-scm:
  vice-setup, pointing to 172.16.1.1
  start updateclnt -h `cat /vice/db/scm`, auth2 -chk, updatesrv, startserver &

scm:
  startserver &

client:
  start venus, pointing to 172.16.1.1
  ...
  17:26:52 Found 5 realms
  17:26:52 starting VDB scan
  17:26:52       6 volume replicas
  17:26:52       3 replicated volumes
  17:26:52       0 CML entries allocated
  17:26:52       2 CML entries on free-list
  17:26:52 starting FSDB scan (4166, 20000) (25, 75, 4)
  17:26:52       11 cache files in table (0 blocks)
  17:26:52       4155 cache files on free-list
  17:26:52 starting HDB scan
  17:26:52       0 hdb entries in table
  17:26:52       0 hdb entries on free-list
  17:26:52 Mounting root volume...
  17:26:52 Venus starting...
  coda_read_super: rootfid is (0xff000001.0x1.0x1)
  coda_read_super: rootinode is 1353984905 dev 9
  17:26:52 /coda now mounted.

5 Realms????

scm:
  createvol_rep rootvol E0000100 /vicepa
  Getting initial version of /vice/vol/BigVolumeList.
  V_BindToServer: binding to host 172.16.1.1
  GetVolumeList finished successfully
  V_BindToServer: binding to host 172.16.1.1
  Servers are (172.16.1.1 172.16.3.1 )
  V_BindToServer: binding to host 172.16.3.1
  GetVolumeList finished successfully
  HexGroupId is 7f000000
  creating volume rootvol.0 on 172.16.1.1 (partition /vicepa)
  V_BindToServer: binding to host 172.16.1.1
  creating volume rootvol.1 on 172.16.3.1 (partition /vicepa)
  V_BindToServer: binding to host 172.16.3.1
  Fetching volume lists from servers:
  V_BindToServer: binding to host 172.16.3.1
  GetVolumeList finished successfully
  172.16.3.1 - success
  V_BindToServer: binding to host 172.16.1.1
  GetVolumeList finished successfully
  172.16.1.1 - success
  V_BindToServer: binding to host 172.16.1.1
  VLDB completed.
  <echo rootvol 7f000000 2 1000001 2000001 0 0 0 0 0 0 E0000100 >> /vice/db/VRList.new>
  V_BindToServer: binding to host 172.16.1.1
  VRDB completed.
  Do you wish this volume to be Backed Up (y/n)? [n]

After creating a volume I usually get a message at the client; this time I did
not, and when I try to access the filesystem I get:

  [root@redhat80 coda]# cd /coda
  [root@redhat80 coda]# ls
  17:29:37 Resolved realm '172.16.1.1'
  172.16.1.1      [blinking -> volume not recognized]
  [root@redhat80 coda]# ls
  17:31:39 VSG change for volume [1] 0 -> 5086a908
  coda_upcall: Venus dead on (op,un) (10.14) flags 10
  ls: 172.16.1.1: No such device
  No pseudo device in upcall comms at a0239e40
  You have new mail in /var/spool/mail/root

On another client:

  [root@redhat80 coda]# cd 172.16.1.1
  17:37:13 Resolved realm '172.16.1.1'
  17:37:13 VSG change for volume [1] 0 -> 5086a648
  coda_upcall: Venus dead on (op,un) (10.7) flags 10
  No pseudo device in upcall comms at a0239e40
  No pseudo device in upcall comms at a0239e40
  -bash: cd: 172.16.1.1: No such device or address

Venus is, however, still running on both clients; it is not dead. Something
similar happens when doing a cfs lv /coda/172.16.1.1.

In http://www.coda.cs.cmu.edu/maillists/codalist/codalist-2001/3952.html
Jan Harkes wrote:

  "you probably want to have at least a separate rootvolume because it is a
  pain repairing any conflicts that occur in the root of your /coda tree as
  we can't turn the /coda object into a symlink without losing access to a
  control file that we need to do the necessary repair operations."
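If I read that advice correctly, the intended layout would be a small,
non-replicated root volume plus a replicated volume mounted somewhere below
it. A rough sketch of what I think that would look like (untested; the
single-server VSG name E0000101, the volume name datavol and the mount point
are placeholders I made up):

  # /vice/db/VSGDB on the scm: keep the two-server group and add a
  # single-server group for the root volume (E0000101 is a made-up name)
  E0000100 172.16.1.1 172.16.3.1
  E0000101 172.16.1.1

  # root volume stored only on the scm, data volume replicated on both servers
  createvol_rep rootvol E0000101 /vicepa
  createvol_rep datavol E0000100 /vicepa

  # on a client, mount the replicated volume inside the realm root
  cfs mkmount /coda/172.16.1.1/data datavol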
Is this still up to date with version coda-server-6.0.2-1.i386.rpm? So does
this mean that one should not replicate the rootvolume, or am I missing the
point?

Thanks for feedback!

Cheers,
Josef

--
Josef Schwarz
BT Exact Distributed Computing Research
tel: +44(0)1473 606172
mob: +44(0)7792 605408
pp 13 MLBG, Adastral Park, Martlesham, Ipswich IP5 3RE, UK

Received on 2003-10-27 13:47:30