Hi,

I have spent several days out of reach of the Internet, quite an exciting experience in its own right. It also became a field test of Coda disconnected operation: a laptop running Linux 2.6.12 with Coda 6.0.14 and rpc2-2, with a Venus cache of 2500000 KB and 104166 cache files. My home directory is on Coda, and all the software I use is on Coda as well.

Hoarding unfortunately does not scale to the level needed for hoarding the software, so I was relying on the cache being big enough to contain every object I ever use. Before leaving I accessed my documents and mail files (on Coda as well) so that my mail archives would travel with me, and did the same with several programs I do not routinely use (so they might have been missing from the cache) but thought I might need during the trip. The cache still had over 1 GB free.

So far so good: it worked as expected and I enjoyed access to everything I needed - documents, mail, software.

One day near the end of the trip I happened to run Gimp on a batch of high-resolution pictures. That very useful program is unfortunately Coda-unfriendly: it used a lot of space for its own swapping under my home directory, which filled up the Venus cache. At that moment my precious cached files were thrown out of the cache, including all programs and libraries not in use by the processes running at that moment. Afterwards I was unable even to start the same environment, as the startup procedure could not find its pieces.

Worse than that, at the next boot Venus did not start at all:

13:40:15 Coda Venus, version 6.0.14
13:40:15 /...../LOG size is 66451968 bytes
13:40:15 Recov_InitRVM: RVM_INIT failed (RVM_EINTERNAL)

So I had to reinit the client and lose all the files on Coda I had created or modified during the trip. (I was prepared, though, and had copies outside Coda as well.)

The conclusions I draw from this experience: disconnected mode works quite well, given that your cache is unlimited.
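The failure mode described above follows directly from a size-bounded cache whose eviction ignores how valuable an object is: a single process writing enough scratch data pushes everything else out. A minimal simulation of that cascade (plain Python with a toy LRU policy; this is an illustration, not Venus's actual replacement code, and all paths are made up):

```python
from collections import OrderedDict

class LruCache:
    """Toy size-bounded LRU cache, illustrating the eviction cascade."""
    def __init__(self, capacity_kb):
        self.capacity_kb = capacity_kb
        self.files = OrderedDict()  # path -> size_kb, oldest first
        self.used_kb = 0

    def touch(self, path, size_kb):
        # Re-inserting moves the file to the most-recently-used end.
        if path in self.files:
            self.used_kb -= self.files.pop(path)
        self.files[path] = size_kb
        self.used_kb += size_kb
        # Evict least-recently-used files until the cache fits again.
        while self.used_kb > self.capacity_kb:
            victim, vsize = self.files.popitem(last=False)
            self.used_kb -= vsize

cache = LruCache(capacity_kb=1000)
for i in range(10):                      # "precious" files: programs, mail, docs
    cache.touch(f"/coda/usr/bin/tool{i}", 50)
for i in range(20):                      # Gimp's scratch-file flood
    cache.touch(f"/coda/home/.gimp-swap/{i}", 50)

# Every precious file has been evicted to make room for scratch data.
print(all(p.startswith("/coda/home/.gimp-swap/") for p in cache.files))  # True
```

With no notion of priority, the scratch flood wins simply by being the most recent traffic, which is exactly what happened to the programs and libraries on the laptop.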
I want to be able to hoard lots of objects without Venus spending all its time and memory on bookkeeping for them. I want to be able to "lazy hoard", that is, mark some directories as "descendants always in cache", without them being populated or refreshed except by explicit read operations initiated by myself. In other words, set protection against expunging, nothing else.

I hope cache maintenance can be reimplemented in a more flexible way. The cache size limit in blocks should be some percentage of the available disk space, determined dynamically, not otherwise. The objects in the cache would be taggable with a "persistency priority":

"red"    - never reuse their space in the cache
"yellow" - reuse as a last resort (possibly?)
"green"  - reuse at will (as Venus usually does)

If I had my files tagged as "red", Gimp would bail out with a "disk full" error but no harm would be done. If I occasionally filled up the cache with "red" files, I would selectively unpin some of them and free the cache (disk space and/or slots in RVM).

I have not felt a need for keeping cached files up to date the way the current hoard walk does. "Lazy hoarding" is sufficient for a lot of situations and can probably be done without the overhead of the current hoarding implementation.

Thanks for the great file system! It does lots of things right, in contrast to other networked file systems. And it does in fact work disconnected.

Regards,
Rune

Received on 2006-06-27 03:37:59
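The red/yellow/green scheme proposed above could be prototyped roughly as follows. This is a sketch in plain Python, not Venus code; the class, the pin API, and the paths are all hypothetical. Red files are never chosen for eviction, so when only red files remain and space runs out, the writer gets a "disk full" error (ENOSPC) instead of the cache cannibalizing pinned data; the limit is supplied as a callable so it can track a percentage of the free disk space:

```python
import shutil

RED, YELLOW, GREEN = 0, 1, 2   # lower value = more strongly pinned

def dynamic_limit_kb(cache_dir, percent=20):
    """Cache limit as a percentage of the disk space currently free."""
    return shutil.disk_usage(cache_dir).free * percent // (100 * 1024)

class PinAwareCache:
    """Sketch of eviction that honors per-file persistency priorities."""
    def __init__(self, limit_kb):
        self.limit_kb = limit_kb   # a callable, so the limit can be dynamic
        self.files = {}            # path -> [priority, size_kb]
        self.used_kb = 0

    def insert(self, path, size_kb, priority=GREEN):
        while self.used_kb + size_kb > self.limit_kb():
            victim = self._pick_victim()
            if victim is None:
                # Only red files remain: refuse the write instead of
                # throwing out pinned data.
                raise OSError(28, "No space left on device", path)
            self.used_kb -= self.files.pop(victim)[1]
        self.files[path] = [priority, size_kb]
        self.used_kb += size_kb

    def _pick_victim(self):
        # Reuse green space first, yellow as a last resort, red never.
        for wanted in (GREEN, YELLOW):
            for path, (prio, _) in self.files.items():
                if prio == wanted:
                    return path
        return None

    def unpin(self, path):
        # Selectively demote a red file so its space can be reclaimed.
        self.files[path][0] = GREEN

cache = PinAwareCache(limit_kb=lambda: 1000)
for i in range(20):                               # the pinned environment
    cache.insert(f"/coda/usr/bin/tool{i}", 50, RED)
try:
    cache.insert("/coda/home/.gimp-swap/0", 50)   # scratch write fails...
except OSError as e:
    print(e.errno)                                # -> 28 (ENOSPC)
cache.unpin("/coda/usr/bin/tool0")                # ...until something is unpinned
cache.insert("/coda/home/.gimp-swap/0", 50)
```

Under this policy the Gimp incident above would have stopped at a "disk full" error, with the environment intact; the real design question for Venus would be where the red/yellow/green tags live (RVM, most likely) and how they interact with reintegration.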