On Sun, Dec 16, 2007 at 06:27:41PM +0100, u+codalist-p4pg_at_chalmers.se wrote:
> Note that rvm is still 32-bit, so it does not matter if the processor is
> capable of 64-bit addressing.

On a 64-bit system, RVM actually uses 64-bit memory addressing. It pretty
much has to, because it presents itself to the application as a memory
region with transactional properties (commit/rollback and persistence).

It is definitely the number of files that matters. One of our servers
stores a total of 82GB of file data according to "df /vicepa". There are
roughly 715000 inodes in use, so we store the metadata for somewhere
between 715K and 1.4M files. Because we run backups, there may be up to
two 'vnodes' pointing at the same container file, but we don't back up
all volumes, so I can't get a precise number for this.

The actual amount of RVM used is 406MB, which I got from the server
logfile after running 'volutil printstats'. The machine is configured for
1GB of RVM, so we should be able to scale to about 2 million files, which
would (in our deployment) be around 160-200GB of file data.

> > My reading of the archive mail indicates that 32 bit memory limits
> > max file system size to about 25G.
> > Ref: http://www.coda.cs.cmu.edu/maillists/codalist/codalist-2002/4440.html

One of the problems with the early estimates is that they are based on a
'golden ratio' assumption that RVM overhead is about 4% of the file data
size, but the average size of a file has increased considerably since
then.

A more precise method is to use 'rvmsizer', which calculates how much RVM
would be needed to store the metadata for a given file tree. So if you
have a representative sample of data, it gives a basis for extrapolation.
Currently it is a C program that is installed as part of the server
install in /usr/bin. It probably should become a shell, perl, or python
script so that people can run it before they build or install a Coda
server.

Jan

Received on 2007-12-17 12:20:58
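
As a rough illustration of the kind of pre-install estimator suggested in
the mail above, here is a minimal Python sketch that walks a file tree and
guesses the RVM needed for its metadata. The per-file and per-directory
byte costs are placeholder assumptions, loosely extrapolated from the
406MB-for-roughly-a-million-files figure quoted above; they are not the
constants the real rvmsizer uses.

#!/usr/bin/env python
# Rough rvmsizer-style estimate: walk a tree and guess how much RVM a
# Coda server would need for its metadata.  The constants below are
# assumed averages, not values taken from rvmsizer.
import os
import sys

BYTES_PER_FILE = 500    # assumed RVM cost per plain file (vnode, etc.)
BYTES_PER_DIR = 4096    # assumed RVM cost per directory; directory
                        # contents are kept in RVM as well

def estimate_rvm(root):
    files = dirs = 0
    for path, subdirs, names in os.walk(root):
        dirs += 1
        files += len(names)
    return files, dirs, files * BYTES_PER_FILE + dirs * BYTES_PER_DIR

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    files, dirs, total = estimate_rvm(root)
    print("%d files, %d directories" % (files, dirs))
    print("estimated RVM: %.1f MB" % (total / (1024.0 * 1024.0)))

Run against a representative subtree, the result can then be scaled up to
the full data set, in the same way the 406MB / 1GB figures above are
extrapolated to about 2 million files.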