Coda File System

Re: FAQs update please

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Thu, 31 Jan 2002 14:25:37 -0500
On Thu, Jan 31, 2002 at 11:03:02AM -0800, Drew Perttula wrote:
> Could you speak a little more about what goes wrong with large amounts
> of data/users? Is it possible to surmount the big-data restriction by
> running a few (separated) codas?  What happens with lots of users--
> too many errors due to simultaneous access to the same file?

The servers store metadata (file attributes, directories) in VM; for a
"typical" filesystem this amounts to approximately 5% overhead. So with
2GB of 'usable' memory space on a 32-bit machine, any single Coda server
can handle about 40GB of file data. If you want to deal with a terabyte,
you'd need to run about 25 servers (if they are not replicating), or
more when you use replication and volumes are stored on multiple
servers. Clearly this is not very attractive administratively.
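The sizing math above can be sketched as a quick back-of-the-envelope
calculation. The numbers (5% metadata overhead, 2GB of usable VM per
32-bit server) are the approximations from this mail, not hard limits:

```python
import math

def servers_needed(total_file_data_gb, vm_gb=2.0, overhead=0.05, replication=1):
    """Rough estimate of how many Coda servers a given amount of file
    data requires, assuming metadata held in VM is about `overhead`
    (~5%) of the file data and each server has `vm_gb` (~2GB on a
    32-bit machine) of usable memory for it."""
    per_server_gb = vm_gb / overhead  # ~40GB of file data per server
    return math.ceil(total_file_data_gb / per_server_gb) * replication

print(servers_needed(1000))                 # 1TB, no replication -> 25
print(servers_needed(1000, replication=3))  # triply replicated volumes -> 75
```

With replication the count scales by the replication factor, which is
why a terabyte quickly becomes an administrative burden.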

Most simultaneous accesses are actually between a user accessing his
files from his desktop and from his laptop (or from home). People tend
to keep a private copy of most files they collaborate on; maybe it's the
UNIX environment (diff/patch), maybe it's a result of Coda's conflicts,
which can be annoying enough to change a user's behaviour.

But with many users there are more conflicts, some of which a user
cannot easily fix himself, such as a reintegration that is blocked or in
conflict due to a server-server conflict, or as a result of a Coda bug.
There is also more volume-related administration, where an administrator
moves or splits a user's volume to distribute the load or disk usage
across different servers.

Jan
Received on 2002-01-31 14:25:51