Coda File System

Re: Caching mechanism

From: Stephen J. Turnbull <>
Date: Mon, 26 Nov 2012 15:19:27 +0900
Dhaivat Pandya writes:

 > I would agree that this is true for many situations. However, with virtualized
 > environments like Amazon EC2 with extremely low latency and high bandwidth
 > between nodes, I'm taking a guess that predictive pre-fetching might be
 > feasible.

But then what benefit does it have?  

Pre-fetching makes a lot of sense in the web environment, where you
have very small, *explicit* fan-out for the search, human-perceptible
latency and bandwidth constraints, and infrequent accesses (from the
machine's point of view).  But if you have low latency, high
bandwidth, and frequent access to uniformly distributed files, it
seems to me caching is going to beat prefetch every time.  This is not
the same as the situation with, say, spinning disks, where bandwidth
is cheap and latency is relatively high, and with appropriate physical
organization you can use the bandwidth to prefetch sectors in sequence
very efficiently, with rather high accuracy for many applications.
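To make that concrete, here is a toy simulation (my own sketch, not
anything in Coda -- the function name and parameters are invented for
illustration): uniform random access to a pool of files through an LRU
cache, where a "predictive" prefetcher fetches a few guessed files on
every miss.  When the access pattern has no exploitable structure, the
guesses hit no more often than chance, so the hit rate barely moves
while the fetch traffic multiplies.

```python
import random

def simulate(n_files=1000, cache_size=100, accesses=20000,
             prefetch=0, seed=0):
    """Uniform random file access through an LRU cache.

    On each miss, fetch the requested file plus `prefetch` randomly
    guessed files (a stand-in for predictive pre-fetching when the
    access pattern carries no structure the predictor can exploit).
    Returns (hit_rate, total_fetches).
    """
    rng = random.Random(seed)
    cache = []          # front of list = most recently used
    hits = fetches = 0
    for _ in range(accesses):
        f = rng.randrange(n_files)
        if f in cache:
            hits += 1
            cache.remove(f)     # move to front (LRU update)
            cache.insert(0, f)
        else:
            fetches += 1
            cache.insert(0, f)
            # "Predict" future accesses by guessing; under a uniform
            # workload a guess is right only cache_size/n_files of
            # the time, so this mostly burns bandwidth.
            for _ in range(prefetch):
                g = rng.randrange(n_files)
                if g not in cache:
                    fetches += 1
                    cache.insert(0, g)
            del cache[cache_size:]  # evict least recently used
    return hits / accesses, fetches
```

Running it with prefetch=0 versus prefetch=4 shows roughly the same
hit rate (about cache_size/n_files in both cases) but several times
the fetch traffic for the prefetching variant, which is the point
about caching beating prefetch when accesses are uniform.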

I don't see how a file system can guess accurately about the next file
(or even do reasonably well by fetching many files).  So I'm with Rune:

 > On Sun, Nov 25, 2012 at 11:18 AM, <> wrote:
 > > My conclusion based on my experience (YMMV) is that lazy fetching
 > > is a very efficient strategy. Given the "high cost" of accessing
 > > a file without a reason, the less prefetching the better.

I also agree with Rune that if you need a quantum leap in performance
from Coda, it is worth trying -- practical experience beats pure
theory quite frequently for that kind of thing.  But the theory
suggests that without a breakthrough you're not going to get much out
of prefetching.
Received on 2012-11-26 01:45:01