Coda File System

Re: Coda development

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Wed, 4 May 2016 21:01:00 -0400
On Wed, May 04, 2016 at 07:43:46PM -0400, Greg Troxel wrote:
>   I think it's critical to have a FUSE client so that coda can run
>   anywhere that supports FUSE (the high-level interface, preferably).  I
>   think it's perfectly ok if performance suffers a bit for this; working
>   at 25% speed is 90% of the utility function for me, if not 95%.  And
>   with modern CPUs the performance issues are not going to be a big deal;
>   glusterfs on NetBSD (userspace FS, FUSE) is doing most of 1 Gb/s.  I
>   think it's fine to have in-kernel implementations for those that
>   really care about speed, but it seems the number of supported
>   platforms has dwindled to Linux and NetBSD.

Fuse would be nice, but its support is very uneven across platforms, and
it will never be possible to extend the fuse api with a cross-platform
pioctl-style interface. So pioctls would have to be implemented as some
sort of virtual filesystem (similar to /proc or /sys), probably somewhere
under the same /coda root. We already use a hidden /coda/.CONTROL for
global pioctls, so maybe something like that could become the root
directory of a pioctl-fs, where reading and writing virtual files would
replace the current set of ioctl operations while still maintaining the
proper kernel-level tagging of user identity. That tagging is more
important than ever with namespaces: you cannot just connect over a tcp
or unix domain socket and prove you are in the same filesystem namespace
as your /coda mountpoint. Again, this is a big project.
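
To make that idea a bit more concrete, here is a rough sketch of what a
pioctl could look like as a plain write() on a virtual control file. The
layout under /coda/.CONTROL and the request format here are illustrative
assumptions, not an existing interface:

    /* Hypothetical: a "pioctl" expressed as a write to a virtual control
     * file instead of an ioctl.  The path and the request format are made
     * up for illustration; they are not the current Coda interface. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    /* Ask venus to flush a cached object, identified by its path, by
     * writing a one-line request to a virtual file.  The kernel sees an
     * ordinary write() on the /coda mountpoint, so the caller's identity
     * gets tagged the same way as for any other file operation there. */
    static int pioctl_via_controlfs(const char *object_path)
    {
        int fd = open("/coda/.CONTROL/flushobject", O_WRONLY); /* hypothetical */
        if (fd < 0) { perror("open control file"); return -1; }

        char req[4096];
        int len = snprintf(req, sizeof(req), "%s\n", object_path);
        int rc = (len > 0 && write(fd, req, len) == len) ? 0 : -1;

        close(fd);
        return rc;
    }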

>   Coda's behavior of doing a store when one does
>   open-for-write/read/close is really awkward.  Arguably programs should
>   not do that, but they do.   So I think it's necessary to not store
>   then, even if that does result in calling in the read locks.
>   Alternatively, open-for-write can be open-for-read, and upgraded on
>   the first write, but I think just not storing is 90% of the win.

This is both simple and expensive. We are already partly there because
of lookaside caching; we just need to make sure we keep a valid checksum
of the last known data version for every cache file. Then, when a file
is closed after the open-for-write/read/close cycle and we have to
recompute the checksum to update it, we can first check against the old
value and, if it hasn't changed, not send the store.
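
As a rough sketch of that close-time check, assuming a per-cache-file
record that remembers the checksum of the last known data version (the
names are illustrative, and a simple FNV-1a hash stands in for the real
checksum just to keep the example self-contained):

    #include <cstdint>
    #include <fstream>
    #include <string>

    struct CacheFileInfo {
        std::string container_path; // local container file backing the object
        uint64_t    last_checksum;  // checksum of the last stored/fetched version
    };

    /* FNV-1a over the container file contents; stands in for the real
     * checksum algorithm. */
    static uint64_t checksum_file(const std::string &path)
    {
        std::ifstream f(path, std::ios::binary);
        uint64_t h = 1469598103934665603ULL;
        for (char c; f.get(c); )
            h = (h ^ static_cast<unsigned char>(c)) * 1099511628211ULL;
        return h;
    }

    /* Called on close after an open-for-write: only queue a Store when
     * the data actually changed since the last known version. */
    static bool should_send_store(CacheFileInfo &cf)
    {
        uint64_t now = checksum_file(cf.container_path);
        if (now == cf.last_checksum)
            return false;           // contents identical, suppress the Store
        cf.last_checksum = now;     // the new version becomes the known one
        return true;
    }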

The only problem is that write-optimizations on the CML occur when a
file is opened for writing, so that we do not send back data that will
be replaced soon anyway. That fact needs to be tracked so we can still
force a writeout in case a Store CML record was cancelled during the
open. A minor detail, probably not too hard; we just need to make sure
it isn't forgotten.
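
Building on the sketch above, the extra bookkeeping could look roughly
like this, with a hypothetical flag that remembers whether the CML
write-optimization dropped a pending Store when the file was opened for
writing:

    /* Illustrative only: if a pending Store record was cancelled at open
     * time, the close-time checksum check must not suppress the new
     * Store, because the old record no longer exists in the CML. */
    struct OpenForWriteState {
        bool store_cancelled_on_open = false; // set by the CML optimization
    };

    static bool must_send_store(bool data_unchanged, const OpenForWriteState &os)
    {
        if (os.store_cancelled_on_open)
            return true;            // force the writeout anyway
        return !data_unchanged;     // otherwise only store when data changed
    }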

Jan
Received on 2016-05-04 21:01:12