Thanks for all the help. Right now I'm doing a general update and will
start setting up Coda tonight.

Brian

On Thu, 9 Sep 2004 12:08:18 -0400, Jan Harkes <jaharkes_at_cs.cmu.edu> wrote:
> On Thu, Sep 09, 2004 at 04:31:21AM -0500, Brian Finney wrote:
> > Curious, what is considered to be the current max size for the venus cache?
>
> As far as the amount of data is concerned, probably about as much as
> your hard drive can handle. The limiting factor is the number of files.
> More files mean that we need more memory to store the metadata, we need
> more CPU time to scan for possible replacements, and it takes longer to
> revalidate the cache after a disconnection.
>
> Now there is an estimated formula built into venus that assumes that
> a typical user has an average file size of 24KB, about 4 files per
> directory, 256 files in a volume, etc. So with the default ratios, the
> cache probably shouldn't be larger than several hundred megabytes.
> 200MB is approximately 8000 files and requires only about 20MB of RVM.
> I've heard several reports that 500MB caches (20000 files) can
> sometimes suffer from considerable stalls, so memory doesn't really
> seem to be the limiting factor here.
>
> However...
>
> > I'm wanting to try Coda starting with using it between my laptop and
> > server, primarily for use with collections of movies, mp3s, and other
> > files that would easily surpass these limits, and then if everything
>
> Your average file size is several orders of magnitude higher than our
> assumed 24KB/file. So it would make sense to tweak some of the values
> in /etc/coda/venus.conf:
>
>     cacheblocks=10000000    # 10 GB cache
>     cachefiles=10000        # 10k files
>     cml_entries=40000       # 4 log entries per file
>     hoard_entries=1000      # I 'think' this should be enough :)
>
> (I really should change the cml_entries and hoard_entries defaults to
> be relative to the # of cachefiles and not the # of cacheblocks...)
>
> Jan
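For concreteness, here is a rough back-of-the-envelope sketch (in
Python, not part of Coda itself) of the sizing arithmetic Jan
describes above. The 24KB average file size is his stated assumption;
the ~2.5KB of RVM per cached file is inferred from "200MB is
approximately 8000 files and requires only about 20MB of RVM".

    # Rough venus cache sizing, using the ratios quoted above:
    # 24 KB average file size, and ~2.5 KB of RVM metadata per
    # cached file (inferred, not an official venus constant).
    AVG_FILE_KB = 24        # assumed average file size
    RVM_PER_FILE_KB = 2.5   # inferred RVM cost per cached file

    def estimate(cache_mb):
        cachefiles = cache_mb * 1024 // AVG_FILE_KB
        rvm_mb = cachefiles * RVM_PER_FILE_KB / 1024
        return cachefiles, rvm_mb

    for cache_mb in (200, 500):
        files, rvm = estimate(cache_mb)
        print(f"{cache_mb} MB cache -> ~{files} files, ~{rvm:.0f} MB RVM")

This reproduces the figures in the message (200MB gives roughly 8500
files and ~21MB of RVM; 500MB gives roughly 21000 files and ~52MB).
It also shows why Jan pins cachefiles=10000 explicitly for a 10GB
cache: for large media files the 24KB/file ratio would wildly
overestimate the file count, and the file count is what actually
costs memory and CPU time.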
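And a minimal sketch of the ratio-based defaults Jan's closing remark
hints at, assuming the factors implied by the example settings (these
are read off the example, not actual venus defaults):

    # Derive the log/hoard limits from cachefiles rather than
    # cacheblocks, as suggested above. Factors are inferred:
    #   cml_entries   = 4 * cachefiles   ("4 log entries per file")
    #   hoard_entries = cachefiles // 10 (1000 entries for 10k files)
    def derived_limits(cachefiles):
        return 4 * cachefiles, cachefiles // 10

    print(derived_limits(10_000))   # -> (40000, 1000)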