Coda File System

Re: CODA starting questions

From: Rene Rebe <rene.rebe_at_gmx.net>
Date: Tue, 9 Oct 2001 23:40:28 +0200
Hi!

On Tue, 9 Oct 2001 15:30:14 -0400
Jan Harkes <jaharkes_at_cs.cmu.edu> wrote:

> On Tue, Oct 09, 2001 at 11:53:09AM +0200, René Rebe wrote:

[...]

> It will be extremely fast when they are already in the cache. How do you
> expect to work with a file while disconnected from the network if you only
> have a random few bits and pieces cached locally? And then, when both the
> local and the server version of the file are updated, how would you
> reconstruct a consistent, complete version of the local copy?
> 
> Also, if you want to work with file-chunks, we suddenly would need to
> make more upcalls from the kernel to userspace when a file is accessed
> to check whether the stuff is present. Upcalls are pretty expensive,
> just compare creating 100 files in /coda compared to creating 100 files
> on the local disk. (while Coda is write-disconnected, otherwise you're
> measuring client-server communication overhead).
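
A minimal sketch of that comparison in Python (the /coda/tmp path is an
assumption; point it at any writable directory in your Coda mount):

    import os, time

    def time_create(directory, n=100):
        """Create n small files in directory; return the elapsed seconds."""
        start = time.time()
        for i in range(n):
            with open(os.path.join(directory, "upcall-%d" % i), "w") as f:
                f.write("x")
        elapsed = time.time() - start
        for i in range(n):  # clean up afterwards
            os.remove(os.path.join(directory, "upcall-%d" % i))
        return elapsed

    print("coda : %.3fs" % time_create("/coda/tmp"))  # assumed mount point
    print("local: %.3fs" % time_create("/tmp"))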

Sure. Dealing with file chunks would be difficult and wouldn't leave much
room for disconnected mode. But I wouldn't need to have such big files in my
local cache. Copying would be painfully slow anyway: open()ing an uncached
600MB file over 100Mbit Ethernet takes 600MB / 12.5MB/s, i.e. at least 48
seconds :-(. It would be nice to have a size threshold for cached files ...

BTW: is my assumption that Coda can only handle files that completely fit
into the available cache space correct?

> Partial caching just won't work if you want any guarantees on
> consistency; just look at the mess AFS3 made of it. A small example for
> AFS users: open a file for read/write, write 100KB, sleep for a bit. In
> the meantime, truncate the file and write a few bytes of data from
> another client. Then continue writing on the first client. You end up
> with... the few bytes written by the second client, almost 64KB of 0's,
> and then the tail of the file as written by the first client. Similar
> things apply to NFS, which basically caches on a page basis, and earlier
> writes can possibly replace more recent updates. Just google for the
> horror stories.
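
To make the sequence concrete, a sketch of the two clients' actions in
Python (the path and the sleep duration are assumptions; run each function
on a different AFS client against the same file):

    import time

    def client1(path):
        f = open(path, "r+b")       # open an existing file read/write
        f.write(b"A" * 100 * 1024)  # write 100KB
        f.flush()
        time.sleep(30)              # while we sleep, client 2 runs below
        f.write(b"B" * 1024)        # continue writing at our old offset
        f.close()

    def client2(path):
        f = open(path, "wb")        # truncate the file
        f.write(b"a few bytes")     # write a few bytes
        f.close()

With AFS3's chunk-based caching the merged result is client 2's few bytes,
a run of zeros up to a chunk boundary, and then client 1's tail.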

Neither of the two major Linux NFS solutions works for me without corruption
... although I do not see that any bits are cached ... :-(

> > In one document I found that the cache shouldn't be larger than ~300MB. Is this
> > the latest info? Especially for the larger files (mentioned above) and hoarding
> > on my laptop this would not be that cool ...
> 
> That is derived from an average file size of 24KB, i.e. we're really
> concerned about the number of cacheable files. 300MB is approximately
> 12000-13000 cacheable files. If your files are on average 600MB, the
> cachesize can be several terabytes (never tried, but...) as long as you
> override the default calculation by defining values for both cacheblocks
> _and_ cachefiles in /etc/coda/venus.conf.

I'll test it in the next few days.
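
As a sketch of that override, assuming the usual key=value format of
/etc/coda/venus.conf (the numbers are illustrative, not recommendations):

    # Large cache for big files: 4GB of cache blocks (1KB units), but
    # capped at a few thousand cache files, overriding the default
    # cachefiles calculation derived from the cache size.
    cacheblocks=4194304
    cachefiles=5000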

> the reply-to address. Mutt typically handles this stuff pretty well, so
> I expect it is some override to avoid breaking standards.

:-( OK, fixed.

> Jan

Many thanks for the reply!


-- 

eMail:    rene.rebe_at_gmx.net
          rene.rebe_at_rocklinux.org

Homepage: http://www.rene.rebe.myokay.net/

Anyone sending unwanted advertising e-mail to this address will be
charged $25 for network traffic and computing time. By extracting my
address from this message or its header, you agree to these terms.
Received on 2001-10-09 17:35:24