> > > There have been, of course, a number of research projects on similar
> > > things (see Darrell Long's Swift, TickerTAIP, and CHEOPS, to name just
> > > three). AFAIK, none of that has been integrated with Coda. Would make
> > > an interesting project.
> >
> > It would. It's actually one I started some serious thinking about at one
> > point, but the reality is that I don't have the time or the knowledge of
> > the Linux kernel to actually undertake a project like that.
>
> 99% of the Coda code is userspace, and that one percent only involves
> the interaction between a Coda client and the kernel. So hacking Coda
> servers doesn't require any Linux-kernel knowledge.

Yeah, it's probably not as big a job as it seems to me, but a hole in
one's knowledge always makes a problem seem bigger.

> Besides, if you really need to hack the kernel for such a project, you
> have to keep in mind that we don't just have a Linux kernel module, but
> also kernel modules for FreeBSD, NetBSD, Solaris, Win9x and
> WinNT/2000/XP.

Good to know. That opens other possibilities, too.

> > A new server in the group would initially start out as just a blank
> > holding disk. When it needed a file, that file would be fetched and
> > cached locally.
>
> The client would have a hard time keeping track of which servers it
> might be able to find a copy on. It also makes it harder to make sure
> that all copies are updated, and to keep track of various versioning
> changes. So it would add a considerable amount of overhead just to
> track where recent copies are and, when something is updated, which
> server(s) are responsible for notifying the clients.

Yup. But that's what makes it interesting, too! <grin> Some of us are
just suckers for punishment.

> Sure it is possible, but it is kind of a radical change compared to the
> current replication strategies...

That's what I really expected to hear. It didn't sound like Coda could
do it; it's not its design intent.
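Just to make the idea concrete, here's a rough sketch of what I have in
mind. This is purely hypothetical illustration, not Coda code: the class,
the in-memory `holders` map standing in for real metadata, and the
minimum-copy threshold are all my own inventions for the sake of argument.
Each server caches files on demand, announces to its peers when it gains a
copy, and only purges a copy when it believes enough other copies survive.

```python
# Hypothetical sketch of cache-style server replication -- NOT Coda code.
# Peers are ordinary objects here, standing in for what would be RPCs.

class CacheServer:
    MIN_COPIES = 3  # purge only if this many copies survive elsewhere

    def __init__(self, name):
        self.name = name
        self.cache = {}    # path -> file contents held locally
        self.holders = {}  # path -> set of server names believed to hold a copy
        self.peers = {}    # name -> CacheServer (stand-in for the network)

    def fetch(self, path):
        """Return the file, pulling it from a believed holder on a cache miss."""
        if path not in self.cache:
            for peer_name in self.holders.get(path, set()):
                peer = self.peers[peer_name]
                if path in peer.cache:
                    self.cache[path] = peer.cache[path]
                    self._announce(path)
                    break
        return self.cache.get(path)

    def _announce(self, path):
        """Tell peers we now hold a copy, so their holder maps stay current."""
        self.holders.setdefault(path, set()).add(self.name)
        for peer in self.peers.values():
            peer.holders.setdefault(path, set()).add(self.name)

    def purge(self, path):
        """Drop our copy only if enough other copies are believed to exist."""
        others = self.holders.get(path, set()) - {self.name}
        if path in self.cache and len(others) >= self.MIN_COPIES:
            del self.cache[path]
            for peer in self.peers.values():
                peer.holders.get(path, set()).discard(self.name)
            self.holders[path].discard(self.name)
            return True
        return False
```

The holder maps are exactly the metadata-tracking overhead being pointed
out above, and a disconnected server would silently miss the announcements,
which is why some audit/anti-entropy pass would still be needed.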
> > Once held on this new server, another server could purge its copy to
> > make space for something else it would like to cache. As long as there
> > were at least three (for example) other copies out there somewhere, a
> > server would know that it was free to purge its local copy and the
> > system as a whole would still meet the minimum required redundancy.
>
> Then you need to keep track of where the copies are, which means either
> a centralized location, which would still give you the same scalability
> bottleneck (data transfers are pretty efficient; it is the metadata that
> hinders scalability), or some distributed thingy, which would be a whole
> other research project by itself.

I think each machine could keep track of which other machines have
copies of the file. They would then push change notifications when the
file is modified. A peer wouldn't have to fetch the updated file
immediately; it could even purge its copy if that seemed wise. Of
course, any disconnected server would not be able to receive that
notification, so some sort of audit process would also be needed. But
those are just some of the challenges that would make such a project
"fun".

Brian
( bcwhite_at_precidia.com )

-------------------------------------------------------------------------------
I don't suffer from insanity... I enjoy every minute of it.

Received on 2003-02-26 17:06:15