Coda File System

Re: Some questions...

From: Ivan Popov <pin_at_math.chalmers.se>
Date: Tue, 24 Sep 2002 22:13:01 +0200 (MET DST)
Hello Derek,

I will try to answer some of your questions based on my own experience;
your mileage may vary.

On Tue, 24 Sep 2002, Derek Simkowiak wrote:

> load-balanced cluster.  They will drag and drop files to this WebDAV
> share, which will result in Apache/mod_dav writing a file to the CODA
> filesystem, which will result in that file appearing on all of the nodes
> in the cluster, which will result in any future clients seeing that file
> regardless of which load-balanced node they actually get served by.

A word of caution: if two of your clients happen to drag-and-drop the
same file at the same time (so that open-write-close on one Apache server
overlaps with open-write-close on another), you will get a conflict and
the file will be unavailable without manual intervention. But you have
read the documentation and are probably aware of that.
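For what it's worth, a conflicted object typically shows up in the Coda
tree as a dangling symlink, so a cheap way for a script to notice trouble
is to look for dangling symlinks under the mount point. A minimal sketch
(the /coda path is the conventional mount point; adjust for your setup):

```shell
# List dangling symlinks (how conflicts usually appear) under a directory.
scan_conflicts() {
  find "$1" -type l ! -exec test -e {} \; -print
}

# No output means no conflicts were spotted; errors are silenced in case
# the mount point does not exist on this machine.
scan_conflicts /coda 2>/dev/null || true
```

Actually resolving a conflict still needs the interactive repair tool;
this only tells you that something needs attention.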

> 	Reading over the docs, some questions came up.  The first one
> is... what should the "big picture" of this setup look like?  Would each
> node in the cluster need to run both the CODA server _and_ client

No, you need a client on each node and one server (or probably several
servers for redundancy), preferably on separate machines.

> to itself?  Or would it be a client to the master CODA server (i.e., just
> one of the nodes chosen at random), requiring a config change/failover if

Each client is a client of all the servers in some sense, i.e. if a
server goes down, the client can talk to another one without
intervention. (I am not sure how long the timeouts are before it
switches; you can see which servers a client considers up with
"cfs checkservers".)

> 	Next question: I want to serve about 80 Gigs of space via WebDAV
> (on the CODA filesystem).  Has CODA ever been used to serve up more than 2
> Gigs at a time?  The documentation says nothing about large volumes.

I have run one server with 4G and another with about 10G, for a long time.

> 	Next, the HOWTO says to set aside 4% of your total volume space
> for RVM metadata storage.  For my setup, that would be 3.2 Gigs of RVM

Unfortunately, you cannot count on having more than about 1G of RVM per
server process. That means you will have to run at least 3 server
processes, each serving 1/3 of your disk space.
Do not confuse this with replication. These 3 processes serve different
parts of the file space, while possibly running on the same host. You
can set up another server with 3 processes that replicate these 3, or
more such servers, but either way the total number of processes will be
three times the number of your data replicas. (Or you can think of your
data as three independent chunks, each replicated twice or more.)

This Coda limitation will probably go away with time, but it is the only
way right now.
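The arithmetic behind that is simple; a throwaway sketch for an 80G
store, using the HOWTO's 4% metadata rule and the ~1G-per-process RVM
ceiling described above:

```shell
# 1 GB of RVM at the 4% rule covers 1 / 0.04 = 25 GB of file data,
# so the number of server processes is total data / 25, rounded up.
total_gb=80
data_per_proc_gb=25            # 1 GB RVM ceiling / 4% metadata rule
procs=$(( (total_gb + data_per_proc_gb - 1) / data_per_proc_gb ))
echo "$procs server processes for ${total_gb} GB"
```

Note that with a strict 1G ceiling 80G rounds up to 4 processes; it is
the "about 1G" slack that lets you squeeze into 3.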

> space for CODA.  But I want to use a file, not a raw partition (for
> several reasons).  Is using a huge 3.2 gig file for RVM metadata a
> plausable option?  Or should I just give up now and use the raw partition

I cannot tell you anything about the performance impact. But note that
you will actually have to have three 1G files or partitions instead of
one 3.2G file.
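Sketch of the layout, with hypothetical paths; the files are created
sparse here only to keep the example fast, and I cannot say whether Coda
is happy with sparse RVM data files, so for a real server you may want
to write the zeros out for real:

```shell
# Create three separate ~1 GB RVM data files, one per server process,
# instead of a single 3.2 GB file.  Paths are made up for illustration.
mkdir -p /tmp/rvm-demo
for i in 1 2 3; do
  # count=0 with seek extends the file to 1 GiB without writing data
  dd if=/dev/zero of=/tmp/rvm-demo/rvm_data_$i bs=1M count=0 seek=1024 \
     2>/dev/null
done
ls -l /tmp/rvm-demo
```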

> 	Next is about the Virtual Memory.  My servers each have 1 gig of
> RAM.  But according to the HOWTO, I would need 3.2 Gigs (4% of my 80 Gig

RVM for the Coda server lives on the Coda server, which shouldn't be the
same machine as your Web servers.

Your Web servers - the Coda clients - do need some RVM and corresponding
virtual memory, but that is calculated from the cache size, not from the
amount of data.
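For illustration, the client cache size is set in the client's
venus.conf; the value is in 1 kB blocks (the path and exact key may
differ on your installation):

```
# /etc/coda/venus.conf on each client: cache size in 1 kB blocks
cacheblocks=300000    # ~300 MB client cache
```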

> 	The next question comes from the HOWTO.  The HOWTO says "Do not go
> above 300 Meg on the [client-side] cache."  What is that 300 Meg limit?

I typically run with a 900000-block (roughly 900 MB) cache.
I do not know how dangerous that is :)

> 	Basically, I want to make sure CODA is at least theoretically able
> to handle my needs, before I spend a bunch of time learning CODA, doing

Theoretically - yes. I cannot tell for sure in reality :)
There are some not-yet-fixed issues that may become showstoppers for
you, but they are few. Good luck!

Regards,
--
Ivan
Received on 2002-09-24 16:14:31