Coda File System

Re: Venus dying on file create by xemacs or Star Office 5.0 (new

From: LEE, Yui-wah <clement+_at_cs.cmu.edu>
Date: Thu, 24 Jun 1999 10:31:18 +0800 (CST)
mattwell_at_us.ibm.com wrote:
> 
> ...
>
> I obtained the latest source and managed to get it to compile
> far enough to produce a venus binary. I installed that and it
> worked much better. I was able to (for the first time) save a file
> from StarOffice. Great! Unfortunately I started to have some
> other problems. For no good reason I can see I began to get
> conflicts. The repair utility refuses to operate on the conflicts.
> I have a token and I am the coda "superuser." If I use the
> cfs beginrepair filename command I get the directory and
> am able to view the local and global files. I don't know how
> to proceed at that point. A cfs endrepair filename causes the
> directory to revert to a "broken" file. So I re-installed coda and
> started clean.
> 

The user interface of repair is a bit confusing.  In your case, if you
have local--global conflicts, you can choose to have the local or the
global version of your files/dirs as the repaired version.  That's why
there are commands like preservelocal, discardlocal, discardalllocal, etc.
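For example, a local--global repair session looks roughly like this (just a
sketch -- the exact set of cfs subcommands depends on your Coda release, and
the path is a placeholder for whatever object is in conflict):

```
% cfs beginrepair /coda/usr/joe/notes    # expose the local and global versions
% cfs listlocal                          # (assumed) show the local-side mutations
% cfs discardalllocal                    # drop ALL local changes, keep the global state
% cfs endrepair /coda/usr/joe/notes     # leave repair mode; object returns to normal
```

Use preservelocal/discardlocal instead of discardalllocal if you want to
decide mutation by mutation rather than all at once.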

> So far so good. Then I went to put all the files back into
> their volumes. I am using tar remotely from a client. This
> started fine but then coda started spitting out cache overflow
> messages. Note that there was plenty of space left and
> a df was reporting negative numbers on the coda
> filesystem - a very nice feature BTW. Then tar started
> complaining about "unable to create file - filesystem full"
> or something like that. So it would create a few files and
> then skip a few. I think that, if possible, venus should at
> this point block on serving the files until the server catches
> up with its requests. By writing a little script that sends the
> tar process a STOP signal, waits a second, then sends
> a CONT signal, waits two seconds etc. I was able to
> successfully untar my tape!
> 
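That STOP/CONT trick can be captured in a small shell function (a sketch;
the name `throttle` and the one/two second delays are my own placeholders --
tune the delays to how fast your server drains the cache):

```shell
#!/bin/sh
# throttle: alternately stop and resume a process so venus has time
# to push pending stores to the server between bursts of writes.
# Usage: throttle <pid-of-tar>
throttle() {
    pid=$1
    while kill -0 "$pid" 2>/dev/null; do   # loop while the process still exists
        kill -STOP "$pid"                  # pause tar
        sleep 1                            # give venus a second to catch up
        kill -CONT "$pid"                  # resume tar
        sleep 2                            # let it make some progress
    done
}
```

You would start the extraction in the background and hand its PID to the
function, e.g. `tar xvf /dev/st0 & throttle $!` (in an interactive shell,
which reaps the child promptly when it exits).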
> Here are the possible sources of problems I can see:
> 
> 1. weak server machine (P100, 8meg ram)
> 2. venus is of a younger vintage than the server binaries
> 3. I used 2Meg for the log value - the script suggested
>     that amount but the default in the square brackets is 12M.
>     Is the 2Meg value a typo in the script or is 2M o.k. for
> serving 3gig with a 130M DATA partition? All partitions
> are bigger than the values given to the setup
>     script, e.g. 140M for DATA, 3M for LOG.

RVM logs are wrap-around logs, so a bigger log means the log will
wrap-around less frequently, which is in general better.  Of course,
you should not make your log larger than the partition hosting the log.
I don't understand why the vice-setup-rvm script suggests an upper limit
of 30M.  Maybe it is just trying to be environmentally friendly ;-) ? I
recall some machines at CMU used to have RVM logs of 400MB -- this is
not unreasonable, as those RVM log devices lived on their own dedicated
disks with dedicated heads anyway.

Also, note that both the servers and the clients use RVM logs.
Since it was your client that was complaining about "cache overflow",
if the problem was due to an RVM log being too small (I
am not sure), then the RVM log in question would be the one on
the client, not the one on the server.  Venus chooses the RVM log
(and data) size automatically (they are both regular files located in
/usr/coda/{LOG,DATA}), according to your client cache size (as designated
in /usr/coda/etc/vstab).

-- Clement

======================================================================
Yui-wah LEE (Clement)                              Tel: (852)-26098412
Department of Computer Science and Engineering,    Fax: (852)-26035024 
The Chinese University of Hong Kong     Email: clement_at_cse.cuhk.edu.hk
======================================================================
Received on 1999-06-23 22:38:36