Hello Jan,

thanks for the explanation!

> So technically we don't need clone

That is what I wanted to hear :-)

> (in your case you could use <volume>/daily or something weirder)

:-)

> 'volutil backup' could simply alias as 'volutil clone <vol> <vol>.backup'.

Wow, what a nice feeling: things advance and become more manageable at the
same time. That does not happen often :)

> I'm not entirely sure why we need a separate 'lock' operation, does
> anyone with AFS experience know if there is some reason for that?
> Considering that backup implicitly unlocks, shouldn't it also just
> implicitly lock?

Just a guess: maybe historically backup/clone was even less "atomic" and
needed (or was expected to need) more operations between lock and unlock?

> Ah, but logically there is no 'history' for a newly created replica. I
> don't think that it is necessarily realistic to allow someone to
> 'rewrite history' by attaching previously made (random) backup volumes
> to a new volume.

What we are really interested in is not the "replica history" but a
*replicated* "volume history". Creating backup volumes provides a nice way
to have it (including different replication for different dates' backups).

If we could manage to read-only replicate (I mean by copying the data by
some means like dump/restore) *collections* of backup volumes, i.e. sets of
backup volumes sharing common disk data, we would be mostly fine, except
that we would lose the sharing with a newly created r/w replica.

Then, would it be feasible to populate a newly created r/w replica given
the "precreated history"? Like an inverted backup: we proclaim that the
read/write replica contains the same data as the "prefilled" backup, and
then just let the clients trigger any necessary updates? The result would
be the same efficient storage use for all replicas, and the freedom to have
and manage replicated backups.

Another step in making things transparent would be treating backups like
readonly-replicated volumes and mounting ..../data.daily instead of
..../data.0.daily and ..../data.1.daily, but then we get into that problem
with consistency...

> > "ensure replica consistency"
> You're talking about server-initiated resolution as opposed to our
> current client-initiated resolution. Tread lightly, I heard that there
> is probably some form of IP in that area.

Ajjj. Ok, forget it. Anyway, I am not sure it would be a good thing. But if
we needed it, we could possibly just check whether the replicas are
consistent instead? :) A suitable cron job on a client would do the rest.

Turning things the other way around, could it be a client that triggers the
backup cloning? :)

Cheers,
--
Ivan
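
P.S. A rough command-level sketch of the "backup is just lock + clone +
unlock" idea, and of copying a backup volume to another server via
dump/restore, as discussed above. This is only an illustration: the exact
volutil argument forms, flags and paths here are my assumptions, so please
check the volutil documentation for your Coda version before taking any of
it literally.

    # assumed equivalence (argument forms not verified):
    volutil lock <vol>
    volutil clone <vol> <vol>.backup   # clone shares on-disk data with <vol>
    volutil unlock <vol>               # 'volutil backup' unlocks implicitly

    # copying a backup volume to another server "by some means like
    # dump/restore" (dump file and partition paths are made up):
    volutil dump <vol>.backup /vice/backup/<vol>.dump
    # ...transfer the dump file to the other server, then on that server:
    volutil restore /vice/backup/<vol>.dump /vicepa <vol>.backup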