Just a thought on this: keep e2compr in mind. For those not familiar with e2compr, it is transparent compression for ext2. I have used it to keep my laptop's Coda cache compressed without noticeable slowdown (for my needs, anyway). I know Coda has provisions for compressing files in the cache, but I never could figure out how to make it work transparently.

Jan Harkes <jaharkes_at_cs.cmu.edu> on 03/08/2000 11:49:01 AM
Please respond to codalist_at_TELEMANN.coda.cs.cmu.edu

To: codalist_at_TELEMANN.coda.cs.cmu.edu
cc:
Subject: Re: compression

On Wed, Mar 08, 2000 at 07:54:37AM -0500, Greg Troxel wrote:
> Now that I'm running Coda over IPsec, I realize it would be very nice
> if Coda were able to apply gzip or something to the bulk file
> transfers. I realize I could use IPcomp, but it seems better to allow
> reuse of compression context across the whole application data unit.
> I don't know how hard this would be, but I thought I'd mention it so
> that it can get put on the wish-list for future redesign thoughts - I
> suspect keeping this in mind when redoing side-effects for TCP would
> make an eventual implementation of file-level compression a lot
> easier.
>
> Greg Troxel <gdt_at_ir.bbn.com>

Hi Greg,

Good suggestion. I've actually looked at compressing, with zlib, the shadow files that we currently always make before backfetching. It should not be that hard. The only thing is that the server double-checks the size of the transferred data, so I would either always decompress after receiving an update on the server, or add a compressed-flag/size field to some of the internal structures. If the server doesn't have to do the compression/decompression itself, that saves server CPU cycles.

On the other hand, weakly connected clients might be interested in fetching compressed files even if they were not originally compressed on the server, while a strongly connected client might not want to waste its time on compression/decompression.

Jan

Received on 2000-03-08 12:54:47
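As a rough illustration of the shadow-file idea above, here is a minimal sketch using zlib's one-shot API. The shadow_blob structure and compress_shadow() function are invented for the example and are not Coda code; the point is simply that keeping both the original and compressed lengths would let the server's existing size check still be satisfied.

    /* Sketch only: compress a shadow file's data with zlib before transfer.
     * Uses zlib's one-shot compress2(); shadow_blob and compress_shadow()
     * are hypothetical names, not actual Coda structures. */
    #include <stdlib.h>
    #include <zlib.h>

    struct shadow_blob {
        unsigned long orig_len;   /* uncompressed size, for the server's check */
        unsigned long comp_len;   /* size actually sent over the wire */
        unsigned char *data;      /* compressed bytes */
    };

    int compress_shadow(const unsigned char *src, unsigned long src_len,
                        struct shadow_blob *out)
    {
        uLongf bound = compressBound(src_len);
        unsigned char *buf = malloc(bound);
        if (!buf)
            return -1;

        /* Z_BEST_SPEED keeps the CPU cost low on the client. */
        if (compress2(buf, &bound, src, src_len, Z_BEST_SPEED) != Z_OK) {
            free(buf);
            return -1;
        }

        out->orig_len = src_len;
        out->comp_len = bound;
        out->data     = buf;
        return 0;
    }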
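The first of the two server-side options mentioned above (always decompress after receiving an update) could look roughly like this. expected_len stands in for whatever size the server already double-checks, and receive_compressed_update() is a hypothetical name chosen for the example.

    /* Sketch: server side, option 1 -- always decompress, then let the
     * existing size check run.  expected_len is whatever size the server
     * already tracks for the file. */
    #include <stdlib.h>
    #include <zlib.h>

    int receive_compressed_update(const unsigned char *wire,
                                  unsigned long wire_len,
                                  unsigned long expected_len,
                                  unsigned char **plain_out)
    {
        uLongf plain_len = expected_len;
        unsigned char *plain = malloc(expected_len);
        if (!plain)
            return -1;

        if (uncompress(plain, &plain_len, wire, wire_len) != Z_OK ||
            plain_len != expected_len) {   /* the usual size double-check */
            free(plain);
            return -1;
        }

        *plain_out = plain;
        return 0;
    }

The alternative (a compressed-flag/size field in the internal structures) would avoid this server-side work entirely, at the cost of touching the on-wire and in-memory formats.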
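The weak- versus strong-connectivity trade-off could boil down to a small client-side policy: ask for compressed transfers only when the estimated bandwidth is low, so a well-connected client never burns CPU on it. The threshold and the get_estimated_bandwidth() hook below are assumptions for illustration, not existing Coda interfaces.

    /* Sketch: decide whether a fetch should ask the server for compressed
     * data.  The 64 kB/s threshold and get_estimated_bandwidth() are
     * placeholders, not part of Coda. */
    #define WEAK_BANDWIDTH_BPS (64 * 1024)

    extern unsigned long get_estimated_bandwidth(void);  /* bytes/second */

    static int want_compressed_fetch(void)
    {
        /* Weakly connected: trade CPU for fewer bytes on the wire. */
        return get_estimated_bandwidth() < WEAK_BANDWIDTH_BPS;
    }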