Coda File System

Re: reliability, performance?

From: Michel Brabants <michel.brabants_at_euphonynet.be>
Date: Sun, 3 Sep 2006 20:35:27 +0200
Hello,

so if I got it right:
 * Coda is normally reliable ... although this is still a little bit 
troubling. OK, I plan to have replication, which should help.
 * I can't have files bigger than 2 GB because of how Coda is implemented; it 
doesn't have anything to do with the client-side cache. (Or does it allow 
for files as big as the cache? Even then it would still be a not-so-nice limit: 
the cache I would create could be multiple gigabytes.)

If I deploy Coda, I will normally be using JFS for the client-side caches. Is 
there any advantage to using a journalling file system on the servers? Does 
Coda do any journalling of its own?
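
For reference on the cache sizing I have in mind: a minimal sketch of what I 
assume a multi-gigabyte client cache would look like in venus.conf (the 
cacheblocks/cachedir option names and the values below are my assumptions, not 
verified settings):

    # /etc/coda/venus.conf -- example values only, assumed option names
    # cacheblocks is counted in 1 KB blocks; 4194304 blocks is roughly a 4 GB cache
    cacheblocks=4194304
    # the container files would then live on the JFS partition mentioned above
    cachedir=/usr/coda/venus.cache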

Thank you for your time and answer. Greetings,

Michel

On Wednesday 30 August 2006 10:06, michel.brabants_at_euphonynet.be wrote:
> Hello,
> 
> For what it is worth, here are my thoughts about Coda. I really want to deploy
> Coda (it seems like a good distributed filesystem), but the following things
> are stopping me from doing so:
> 
>  * The 2 GB file limit on Linux. I've read that there is a patch for Windows,
> but the question is how long it will take before a stable patch is merged into
> the Linux code.
>  * The faulty reintegration that you mentioned, which seems to happen from
> time to time. Is this a bad one, or can it be detected and resolved manually
> (at least it would be detected and solvable then)? I read about a Coda
> rewrite; is this already done in Coda 6.x?
> 
> We need the distributed filesystem to eventually (could be within a month
> or later) store terabytes of data. I don't think that this is a problem
> with Coda.
> Coda would be my filesystem of choice for the moment, if it weren't
> for the above things.
> I want to add one last thing. I read that when a client is in disconnected
> mode, up to 2x the space of a file can be used, because a copy of the latest
> connected version is kept? Maybe I misunderstood, and maybe you do this
> already, but couldn't you, for example, only record the blocks (on the
> filesystem) that have changed? Maybe there are better solutions.
> ASR seems to be nice. In my case, reintegration of Subversion repositories
> could maybe be done by using a Subversion merge or something similar.
> 
> Greetings,
> 
> Michel
> 
> > On Sat, Aug 26, 2006 at 01:06:50AM -0400, Jan Harkes wrote:
> >> operations. We also take a hit on the network transfers; the last time I
> >> measured, I saw writes on the order of 3MB/s to a single replica, 6MB/s
> >> to a doubly replicated volume, and 7.5 or so to a triply replicated
> >> volume. I'm not 100% sure anymore, and this was a couple of years ago on
> >> a 100Base-T network, before we added things like encryption.
> >
> > Btw, I am not terribly concerned about not being able to saturate a
> > gigabit network. In my mental model the local cache is large enough to
> > cache everything I care about, the periodic prefetching (hoarding) will
> > pull in anything that may have changed on the servers, and the
> > write-disconnected operation will send any local changes back to the
> > servers in the background.
> >
> > Jan

Received on 2006-09-03 14:38:40