> A larger cache also has disadvantages: longer startup time, and more
> data to revalidate after a disconnection. Typically I try to keep the
> cache size between 1x and 2x the 'active working set'. But caches larger
> than about 200MB are sure to tickle a few problems, as some cache-wide
> operations don't scale nicely.

Thanks for all the threads today... most of my questions were answered. Now correct me if I'm wrong, but this is my new idea: the Dell PowerEdge 2550 with 60 GB of disk will be the primary Coda server (running on ext2, maybe with some journaling for fast reboots and data guarantees). A small PowerEdge 350, also with 60 GB of disk, will be the second Coda server, and the two will be kept in sync. The 'web clients' will mount the Coda filesystem from the primary Coda server.

What happens if the primary server fails? Is it possible to configure a client so that it fails over to the second Coda server and keeps running in 'production'?

And what if the second server fails too (network outage, ...)? Is it possible for the Coda cache to hold 'everything'? Say the fileserver has 60 GB: 20 GB for web, 10 GB for mail, and 30 GB free. Could the 'web client' keep a cache of 20 GB, so that if the server fails it can rely on its local cache, and when the server comes back online it syncs the data again? That way the local clients would always have a working cache and an exact mirror of the production environment.

Am I dreaming here, or is this setup possible? Does anyone know a better failover/high-availability solution for us?

Sorry for the 'newbie-style' questions, but Coda is rather new to me... thanks in advance for helping us out here...

Cain

Received on 2001-07-19 14:52:26
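For reference, the cache size discussed in the quoted advice is set on the Coda client, typically via the `cacheblocks` option in venus.conf (or `venus -c` on the command line), counted in 1 KB blocks. A minimal sketch of the 20 GB cache idea from the question; the file path and exact option names may differ between Coda releases:

```
# /etc/coda/venus.conf -- client cache sizing sketch
# (path and option names may vary by Coda release)

# cacheblocks is measured in 1 KB blocks; 20 GB is roughly 20000000 blocks.
# Note the quoted warning: caches much beyond ~200 MB were reported to
# hit scaling problems in cache-wide operations (startup, revalidation).
cacheblocks=20000000
```

Given that warning, a 20 GB client cache is two orders of magnitude past the point where those problems start, so keeping the entire 20 GB web data set hot in every client's cache may not be practical in this era of Coda, even if it is configurable.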
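On the failover question: Coda's usual answer is server-side replicated volumes rather than client-side switching. A volume is created on a volume storage group (VSG) spanning both servers; clients mount /coda and transparently read from and write to whichever replicas are reachable, resynchronizing the lagging replica when it returns. A hedged sketch of what that setup looked like on a Coda 5.x-era SCM (the hostnames and the VSG number `E0000100` here are made up, and the exact syntax varies between Coda releases -- check the manual for yours):

```
# /vice/db/VSGDB on the SCM -- define a volume storage group that
# spans both servers (hypothetical hostnames):
E0000100 poweredge2550.example.com poweredge350.example.com

# Then create a volume replicated across that group, stored under
# the /vicepa partition on each server:
createvol_rep webdata E0000100 /vicepa
```

With such a volume, a client that loses the primary server keeps operating against the second replica without reconfiguration; only when *all* replicas are unreachable does it fall back to disconnected operation on its local cache.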