On Tue, Aug 05, 2014 at 07:16:57AM -0400, Greg Troxel wrote:
> u-codalist-z149_at_aetey.se writes:
>
> >> But seriously, in 2010+ all other serious distributed filesystems except
> >> NFS seem to be "FUSE first". A particular case in point is GlusterFS.
> >> Someone in NetBSD has been getting Gluster ported to NetBSD, and has
> >> reported file read rates over the network (GbE) from remote servers at a
> >> substantial fraction (60%? more?) of the GbE rate, on a
> >> normal-but-fairly-high-end amd64 box.
> >>
> >> So I don't think the performance issues are really that big a deal any
> >> more.
> >
> > The performance of read/write in Coda (note we are not talking about
> > open() and friends) does not relate to the ability to "fill" the network.
> >
> > It is about processes efficiently accessing the _local_ (cached) files.
> > So your comparison may not be exactly applicable.
> > (I recall ZFS on Linux has remarkably worse performance via FUSE compared
> > to natively.)
>
> That's a fair point. What I really mean is that the speed of FUSE is
> high enough that it doesn't seem likely to be an issue.

Actually it isn't really an issue for local workloads when you have enough
memory and a well-working page cache. In another project we've been using
FUSE to demand-page virtual machine disk and memory images, and the latency
is well within bounds.

For pioctls it depends; most of them can probably be exposed as extended
attributes, for instance to set ACLs, or to query object location,
CML/volume information, etc. Another option is to create a /proc-like
virtual filesystem, but there it is harder to associate actions with
particular objects, so it is mostly useful for high-level switches, getting
statistics that are currently dumped into venus.log, and maybe toggling
connected/disconnected states. I prefer to use rpc2's Lua scripting for
that nowadays, though.

Jan

Received on 2014-08-05 07:54:11
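[Editor's note: the pioctls-as-extended-attributes idea above could be sketched roughly as below. The attribute names (user.coda.*) and the returned values are hypothetical illustrations, not Coda's actual interface; a real FUSE filesystem would dispatch like this from its getxattr callback.]

```python
# Sketch: dispatching extended-attribute reads to pioctl-style handlers,
# as a FUSE getxattr callback might. All names/values here are hypothetical.

PIOCTL_HANDLERS = {
    # hypothetical ACL query, in the style of "cfs listacl"
    "user.coda.acl": lambda path: "System:AnyUser rl",
    # hypothetical object-location query
    "user.coda.server": lambda path: "server1.example.com",
}

def getxattr(path, name):
    """Look up the handler for an attribute name and run it for `path`."""
    handler = PIOCTL_HANDLERS.get(name)
    if handler is None:
        # a real FUSE filesystem would return -ENODATA here
        raise KeyError(name)
    return handler(path)

print(getxattr("/coda/example/file", "user.coda.acl"))
```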