Hi all,

I have a working installation of two Coda servers: one is the SCM, the other a replica. I successfully copied data to the Coda mount point and saw it replicated in both /vicepa partitions.

Before using this for anything productive, I wanted to test a failure scenario, so I stopped the replication server. I could indeed still use the mounted files, so I tried to copy data onto the mounted Coda partition. My assumption was that the data would be written to the online server and replicated to the other server once it came back online. But nothing happened: the client got stuck in the cp command, and the /vicepa of the online server did not change at all (checked with df -h). So I cancelled the command and brought the second server back online, which worked. But after restarting the client services I got the following log messages:

[ X(00) : 0000 : 12:28:23 ] Coda Venus, version 6.9.1
[ X(00) : 0000 : 12:28:23 ] Logfile initialized with LogLevel = 0 at Thu Jun 7 12:28:23 2007
[ X(00) : 0000 : 12:28:23 ] E StatsInit()
[ X(00) : 0000 : 12:28:23 ] L StatsInit()
[ X(00) : 0000 : 12:28:23 ] ***** FATAL SIGNAL (11) *****

12:28:23 Coda Venus, version 6.9.1
12:28:23 /var/lib/coda/LOG size is 2706432 bytes
12:28:23 /var/lib/coda/DATA size is 10821456 bytes
12:28:23 Loading RVM data
12:28:23 Last init was Wed Jun 6 13:02:41 2007
12:28:23 Last shutdown was clean
12:28:23 Starting RealmDB scan
12:28:23 Found 3 realms
12:28:23 starting VDB scan
12:28:23 Fatal Signal (11); pid 3301 becoming a zombie...
12:28:23 You may use gdb to attach to 3301

Hmm... which failure scenarios really do work?

A second question: in my scenario I do not really need ACLs, since I am only interested in using Coda for shared data, and I have firewalls so that only privileged hosts can access the file servers. Can I somehow configure the volumes so that everybody can access them without needing to clog? Are there some good pointers I can RTFM for this case?

If I get the above to work, Coda is really cool!

Thanks in advance,
Jakob
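
PS: For reference, this is roughly what the test looked like; the file name and realm path are placeholders, and cfs checkservers is something I only found in the docs afterwards:

    # on the replica (non-SCM) server: stop the codasrv file server
    # (I used the distribution's service script for this)

    # on the client:
    cp big.tar /coda/myrealm/shared/    # this hung indefinitely

    # on the surviving server:
    df -h                               # /vicepa did not grow at all

    # on the client, what I would check next time:
    cfs checkservers                    # lists which servers venus considers up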
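
Related to that: how do I best recover the crashed client? My understanding from the docs is that venus can be reinitialized with venus -init, which discards the local cache state (so presumably also any modifications that were never reintegrated):

    # stop venus first, then reinitialize its cache and RVM state:
    venus -init

Is that the recommended way, or is there something less drastic?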
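
And for the ACL question, from the manual it looks like granting rights to the System:AnyUser group on the shared directories might be what I want; a minimal sketch, with the path standing in for my realm:

    # as an authenticated administrator (after clog), per directory:
    cfs setacl /coda/myrealm/shared System:AnyUser rlidwk
    cfs listacl /coda/myrealm/shared    # verify the new ACL

But I am not sure whether that removes the need for clog completely, or whether unauthenticated clients are still restricted in other ways. Pointers welcome.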