Coda File System

Re: automatic authentication, replicated servers

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Tue, 22 May 2001 20:19:38 -0400
On Tue, May 22, 2001 at 06:21:35PM -0500, Kelly Corbin wrote:
> I have two questions:
> 
> 1.  The documentation states that clog authentication tokens expire 
> after about 25 hours.  Is there any way to extend that or to 
> authenticate every so often in a cron job or something?

For read-only access you don't need tokens; the default ACL on the root
directory of a newly created volume allows 'rl' access to
System:AnyUser, and this is inherited by directories created
later on.
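
For example (the volume path below is just illustrative), you can check
and, if needed, grant that access with cfs:

    # show the current ACL on the volume root
    cfs listacl /coda/example.com/web

    # explicitly allow anonymous read/lookup access
    cfs setacl /coda/example.com/web System:AnyUser rl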

On our webserver we have several cron scripts that update the
mailing list and webglimpse indexes. These run under their own userid,
which is different from the one the http server uses. Their cron
scripts start with the following:

    echo "password" | clog -pipe mailuser
    hypermail .....
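
If you do need tokens for longer-running write access, a crontab entry
along these lines (the username and password file are placeholders)
refreshes them well before the ~25 hour expiry. Keeping the password in
a mode-600 file also avoids exposing it in the process listing:

    # re-authenticate once a day, before the ~25 hour token expiry
    0 4 * * *   cat /home/mailuser/.codapass | clog -pipe mailuser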

> 2.  Is there a way to force a client to access a particular server for a 
> replicated volume?
> 
> I'm trying to use Coda as a DFS on a number of web servers and I want 
> the content to be the same on all of them.  It seems that the only way 
> to insure 100% file availability is to have the client and server on 
> each machine and replicate a 'web' volume across all servers.  For 
> performance, it would make sense for each client to access the server on 
> the same machine it's running on.  I know that each client is supposed 
> to choose the 'strongest' server, but how is that decided?  How can I 
> force each client to use their respective server?

The strongest server is the one with the lowest average response time.
In fact it is in many cases better to use a remote Coda server:
fetching over the network means fewer context switches and less cache
pressure on the machine running the www-server/coda-client. Popular
pages are already in the kernel's pagecache, and less popular ones can
be pulled from the client cache, so for many hits you don't need a
local server at all. Only infrequently accessed pages are fetched from
the Coda server, and if that server is located on the same machine it
can only send the file to the client by adding an extra copy to the
kernel's pagecache. This added pagecache pressure evicts more popular
data from the kernel caches.

Coda clients only fetch data from the strongest server, but attributes
still come from all available servers to detect missed updates and
conflicts. Every time a server is considered unreachable (even for a
short time), the client needs to revalidate the version-vectors on all
available replicas to detect potential missed updates and conflicts.
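
There is no supported way to pin a client to one particular replica,
but you can see how the client currently views the servers (the path
below is just an example):

    # probe all known servers and report any that are down
    cfs checkservers

    # list the servers holding replicas of a given object
    cfs whereis /coda/example.com/web/index.html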

Also, Coda has a write-all replication strategy, so writes are faster
when there are only a few Coda servers (1-3). And last (but not
least), resolution is faster when there are fewer replicas to compare.
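
As a sketch of the server side (the volume name, VSG entry and
partition are placeholders, assuming the VSG-based createvol_rep
syntax), a doubly-replicated web volume is created on the SCM with
something like:

    # /vice/db/VSGDB must already contain an entry naming both servers, e.g.
    #   E0000101  server1.example.com server2.example.com
    createvol_rep web E0000101 /vicepa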

The other day we were reinitializing one of our Coda servers; the whole
process took about 5 hours. It held one of the replicas of the web
volume, but the webserver continued uninterrupted, simply using the
second replica. The web-related traffic as seen by the servers is really
low due to client-side caching.

Jan
Received on 2001-05-22 20:19:40