Coda File System

Re: OpenAFS or Coda?

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Mon, 9 Dec 2002 10:29:35 -0500
On Sat, Dec 07, 2002 at 05:34:42PM -0800, Michael Loftis wrote:
> Here's the situation I've got: two sites, 100+ GB of files that need to be
> shared between the two sites by a design group.  The sites are/will be
> connected via a point-to-point T-1 forming a private network.
> 
> My question is which would be better at handling this sort of scenario, and 
> how best to handle it.  All of the Coda systems I've examined so far are 
> either a client or a server, so setting up a server on either side as a 
> replicated (READ+WRITE) volume is a non-starter.

The 'recommended' Coda setup would be one of the following:

a) Store all files on a server at one site, and rely on the cache of the
   Coda clients at the remote site to avoid refetching unchanged data.
b) Find a 'split' of the data set such that files typically used and/or
   modified by users at site A are stored on a server at site A (and
   similarly for site B); a rough sketch for estimating such a split
   follows below.
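
For option (b), something along these lines could help you see how the
existing tree splits between the two groups. This is purely illustrative;
the /data/designs path and the uid-to-site mapping are made up and would
have to match your environment.

    #!/usr/bin/env python
    # Rough sketch: tally bytes per top-level directory, grouped by the
    # site of each file's owner, to see whether a clean volume split exists.
    # ROOT and SITE_OF_UID are placeholders for this example.
    import os

    ROOT = "/data/designs"                            # hypothetical shared tree
    SITE_OF_UID = {1001: "A", 1002: "A", 2001: "B"}   # placeholder uid -> site

    totals = {}                                       # (top-level dir, site) -> bytes
    for dirpath, dirnames, filenames in os.walk(ROOT):
        top = os.path.relpath(dirpath, ROOT).split(os.sep)[0]
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))
            except OSError:
                continue
            site = SITE_OF_UID.get(st.st_uid, "unknown")
            totals[(top, site)] = totals.get((top, site), 0) + st.st_size

    for (top, site), nbytes in sorted(totals.items()):
        print("%-30s site %-8s %10.1f MB" % (top, site, nbytes / 1e6))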

Either way is probably still quite bandwidth-intensive, but it is still
better than the 'most logical' yet worst possible setup, which is to have
a single volume replicated across both sites. This is bad because clients
have no concept of a 'local' or 'remote' replica, so they will fetch data
from either server, and server-server resolution is performed by shipping
all the data to one site, which resolves the differences and then
redistributes the result back to the various replicas.
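
To put a rough number on 'bandwidth-intensive': dragging the full 100+ GB
data set across a nominal T-1 (1.544 Mbps), ignoring protocol overhead,
works out to about six days of a saturated link:

    # Back-of-envelope: time to move the whole data set over a T-1 link.
    # 1.544 Mbps is the nominal T-1 line rate; overhead is ignored.
    data_bytes = 100e9            # ~100 GB data set from the question above
    t1_bits_per_sec = 1.544e6     # nominal T-1 rate

    seconds = data_bytes * 8 / t1_bits_per_sec
    print("full transfer: %.0f hours (~%.1f days)" % (seconds / 3600,
                                                      seconds / 86400))
    # -> roughly 144 hours, i.e. about 6 days with the link saturated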

If the files aren't modified all that much, you might want to look into an
rsync- or unison-based solution, which will be much better with respect to
bandwidth usage.
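
As a very rough illustration of that route (nothing Coda-specific), a
nightly one-way push from site A to site B could be wrapped like this;
the hostname and paths are made up, and a two-way tool such as unison
would be needed if both sites modify files:

    #!/usr/bin/env python
    # Minimal sketch of a periodic one-way rsync push from site A to site B.
    # "siteb.example.com" and the paths are placeholders; run from cron at A.
    import subprocess, sys

    SRC = "/data/designs/"                      # trailing slash: copy contents
    DST = "siteb.example.com:/data/designs/"    # hypothetical mirror at site B

    cmd = ["rsync",
           "-az",            # archive mode, compress over the wire
           "--delete",       # propagate deletions to the mirror
           "-e", "ssh",      # run over ssh
           SRC, DST]

    sys.exit(subprocess.call(cmd))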

Jan
Received on 2002-12-09 10:30:37