Coda File System

Re: Data Corruption

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Mon, 26 Jul 2010 00:42:27 -0400
On Sun, Jul 25, 2010 at 11:31:34PM -0400, Zetas wrote:
> When transferring larger files (I've tried 150MB and 900MB) I seem to get a
> corrupted file on all systems including the node that sent it. It's not
> corrupted before I copy it to the /coda directory but somewhere along the line
> something happens.

Is the file larger than the size of the Coda client cache? Because of
the whole-file semantics, the client will refuse to fetch a file that
is too big to fit in the cache.

In the other direction, when the file is initially written it is created
as a zero-length file and then opened for writing, so the client doesn't
actually know the size until the write has completed. At that point it
will not discard the (too large) file; it sends it to the server and then
drops it from the local cache. After this it will refuse to fetch the
file back.

If you want to run with a large cache (gigabytes?), you should probably
also specify the number of cache files in venus.conf. Otherwise the
client assumes that a file is about 24KB on average and pre-allocates
far too much state when the cache is that large. The client will then
freeze up for a while every time it rebalances the cache during/after
each automatic hoard walk.
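
For example, here is a minimal venus.conf sketch, assuming an 8GB cache
and the cacheblocks/cachefiles option names (illustrative values only;
check the venus.conf shipped with your client for the exact option names
and defaults):

    # venus.conf (illustrative values, not a recommendation)

    # Cache size in 1KB blocks: 8GB = 8388608 blocks.
    cacheblocks=8388608

    # Without this line venus would assume ~24KB per file and
    # pre-allocate roughly 8388608 / 24 ~= 350000 cache file slots.
    # Set an explicit, more reasonable number of cache files.
    cachefiles=65536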

Jan
Received on 2010-07-26 00:42:42