(see also http://www.coda.cs.cmu.edu/maillists/codalist/codalist-2002/4202.html)

Credits for the original ideas go to Christer Bernerus, who has been the project leader for DFS deployment at Chalmers. We have also recently had some discussion about the following.

What is the easiest way to avoid dump/restore and still have both disaster resistance and data history available at all times? The first is ensured by Coda replication; we get it for free. The second would need some changes (presumably nothing fundamental) in the way backup volumes are created, so that it would be possible to keep many "generations" of backup volumes, sharing most of the disk data.

One further problem is that backup volumes are strongly connected to the corresponding r/w replica, sharing the disk. If we lose/recreate an r/w replica because of a disaster, or if we just change volume replication (not possible yet, but according to Jan it is feasible to implement), then we do not have any backup volumes == data history connected to the newly (re)created replica.

I am curious whether we would gain anything by defining a new operation, "ensure replica consistency", like

    volutil lock               [as we do now]
    volutil resynch            [may fail]
    volutil backup-if-synched  [fails if the above fails, always unlocks]

Then the backup volumes produced by such an operation would be logically the same on all servers concerned. *Maybe* we could transfer them and "connect" them to the newly created replicas? (A rough sketch of such a wrapper is appended at the end of this message.)

Another approach to recreating data history that I can think of would be an incremental restore that creates a new volume sharing data with the original one, instead of updating the original one. It would make it possible to "read-only replicate" old "backup-volume sets" on new servers.

Any comments?

Best regards,
--
Ivan
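
P.S. To make the idea concrete, here is a minimal sketch of how such a wrapper could look, assuming a hypothetical "resynch" subcommand and approximating "backup-if-synched" with the existing lock/backup/unlock commands; the "-h <server>" option and the server/replica names are only placeholders, not a definitive implementation.

#!/usr/bin/env python3
# Sketch only: "resynch" is a *proposed* volutil subcommand, not an existing
# one. The existing lock/backup/unlock commands plus the hypothetical resynch
# approximate the suggested "backup-if-synched" semantics. The "-h <server>"
# option and the example server/replica names below are assumptions.
import subprocess

def volutil(server, *args):
    # Run one volutil command against a given server, raising on failure.
    subprocess.run(["volutil", "-h", server, *args], check=True)

def backup_if_consistent(replicas):
    """replicas: list of (server, replica_id) pairs for one replicated volume."""
    try:
        for server, rid in replicas:
            volutil(server, "lock", rid)       # as we do now
        for server, rid in replicas:
            volutil(server, "resynch", rid)    # proposed step; may fail
        # Only reached if every resynch succeeded, so the backup clones
        # should be logically the same on all servers concerned.
        for server, rid in replicas:
            volutil(server, "backup", rid)
    finally:
        for server, rid in replicas:
            # Always unlock, even if resynch or backup failed.
            subprocess.run(["volutil", "-h", server, "unlock", rid])

if __name__ == "__main__":
    backup_if_consistent([("server1.example.org", "1000001"),
                          ("server2.example.org", "2000001")])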