Jan Harkes <jaharkes_at_cs.cmu.edu> writes:

> When I last looked there were several windows fuse implementations, and
> in the back of my mind I recall the OS X variant had broken because of
> some kernel change.

FUSE on OS X seems to be ok. I just mounted a remote machine using sshfs.
The website is at:

  https://osxfuse.github.io/

There used to be multiple successors to MacFUSE, osxfuse and
https://fuse4x.github.io/, but they have been unified in osxfuse.

> But there were two ways to use fuse, and one, the high level api using
> libfuse, which was the most portable across platforms, used a separate
> thread to handle each request, which doesn't mesh well with Coda's LWP
> threading, and the low-level api was either not available for every
> platform or needed platform specific tweaks, details are unclear.

It may well be true that the low-level API is less portable in practice.
On NetBSD, there is native puffs (which is semantically FUSE-like) and a
compatibility library for the high-level FUSE API called librefuse. There
is also a daemon that provides a /dev/fuse for the low-level API. I am
unclear on the low-level API on OS X.

> The individual read/write accesses used to be an issue when systems were
> single core and context switches were expensive. Each system call would
> require saving the page table state for the user's process, then context
> switching to the venus process, handling the IO, and context switching
> back. And something like a write would involve the original data copy in
> the app (1), copied to the kernel (2), copied in-kernel passing on the
> upcall message queue (3), copied to venus (4), copied back to the kernel
> for write out to the container file (5), actual copy to disk (6?).

Agreed that it used to be a big deal. I believe it isn't now, based on
reports of near-GbE speed for glusterfs.

> Things have improved in modern kernels, cpu caches are larger, copies
> are more efficient, context switch overhead is much improved, there is
> zero-copy IO, we have multiple cores so both the app and venus can keep
> running at the same time and available memory is measured in the
> gigabytes instead of megabytes. We can push gigabytes per second as
> individual reads or writes through a fuse filesystem, although having a
> well behaved application using page-sized/page-aligned IO probably helps.

Indeed, lots of resources now.

>> reason why it can't be done. I see what you mean about providing
>> identity, but one could always have the user program obtain a key or
>> auth token via a magic path and use that to authenticate a user/venus
>> channel. But magic paths seem like an ok solution.
>
> That is basically how clog passes the obtained Coda token to venus,
> using a pioctl. Or did you mean the other way around, where we could pull
> the Coda (or some special one-time-use) token back from venus and then
> use that to authenticate that user over a unix domain or tcp (https?)
> connection.

I meant talking to venus and obtaining a secret that could be used to
authenticate some channel that doesn't convey uid reliably. But I think
doing pioctl via FUSE will not be that hard and seems to be the way to go.
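To make the magic-path idea concrete, here is a rough sketch of the
application side: read a one-time secret from a special path in the
mounted tree and present it on a side channel that doesn't carry a uid.
The token path and socket name are made up for illustration; neither is
an existing venus interface.

/* Sketch only: fetch a per-user secret from venus via a magic path,
 * then use it to authenticate a channel that does not convey the uid.
 * "/coda/.venus-auth-token" and "/var/run/venus.ctl" are hypothetical. */
#include <sys/socket.h>
#include <sys/un.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    char token[128];
    ssize_t n;

    /* 1. Read the token; venus would expose it only to the calling uid. */
    int fd = open("/coda/.venus-auth-token", O_RDONLY);
    if (fd == -1) {
        perror("open magic path");
        return 1;
    }
    n = read(fd, token, sizeof(token) - 1);
    close(fd);
    if (n <= 0) {
        perror("read token");
        return 1;
    }
    token[n] = '\0';

    /* 2. Present the token on a unix-domain socket; venus can match it
     *    against the uid it handed the token to. */
    int s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s == -1) {
        perror("socket");
        return 1;
    }
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/var/run/venus.ctl", sizeof(addr.sun_path) - 1);
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("connect");
        return 1;
    }
    write(s, token, strlen(token));
    /* ... pioctl-style requests would follow on this channel ... */
    close(s);
    return 0;
}

The point is just that venus hands the secret out through normal
filesystem permissions, so the side channel inherits the user's identity
without the transport itself having to convey a uid.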
>> For me, if I can't run coda on all the systems I use, then it just
>> doesn't work. So I tried out unison, and I am now using Syncthing
>> instead. My take on requirements for coda is that being able to run it
>
> Both unison and syncthing try to get all clients to store a complete
> copy of all the data. I guess it is like Coda without the System:AnyUser
> acl and an aggressive hoard setup that always tries to cache everything;
> I never actually tried to use Coda that way. Of course syncthing chunks
> up a file in 128KB blocks and only sends modified ones, so it will be
> more efficient at propagating updates if only parts of a file change.

Yes, the space of filesystems is pretty complicated. As local storage
gets bigger, I'm finding that having full copies of smallish (10GB :-)
sets of bits is very useful. I really do like the Coda model, and would
find it useful again if it can run ~everywhere.
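As an aside, the block-level propagation Jan describes amounts to
something like the toy sketch below: checksum each 128KB block of a file
and only ship blocks whose checksum changed since the last scan. The
checksum and the bookkeeping are stand-ins for illustration, not what
syncthing actually does internally.

/* Toy sketch of block-level change detection: split a file into 128KB
 * blocks and checksum each one; a syncer would compare these against
 * the checksums from the previous scan and transfer only the blocks
 * that differ. FNV-1a here is just a placeholder hash. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE (128 * 1024)

static uint32_t
block_sum(const unsigned char *buf, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++)
        h = (h ^ buf[i]) * 16777619u;
    return h;
}

int
main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "rb");
    if (fp == NULL) {
        perror(argv[1]);
        return 1;
    }
    unsigned char *buf = malloc(BLOCK_SIZE);
    if (buf == NULL) {
        perror("malloc");
        fclose(fp);
        return 1;
    }
    size_t n, blockno = 0;
    while ((n = fread(buf, 1, BLOCK_SIZE, fp)) > 0) {
        /* In a real syncer, a changed checksum marks this block dirty. */
        printf("block %zu: %zu bytes, sum %08x\n",
            blockno++, n, (unsigned)block_sum(buf, n));
    }
    free(buf);
    fclose(fp);
    return 0;
}

Only the modified blocks cross the network, which is why partial updates
to large files propagate so much faster than whole-file copying.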