Thanks; we're looking at starting with a little over 4TB of storage, and
AFS/DFS/EFS may be our only option at this point. I appreciate your
response. I look forward to testing the new big-filesystem version of
Coda. Will you need to use a better (logged/transactional/journaling)
filesystem, or would UFS-type filesystems be adequate for the job? I
wonder if Be, Inc. would allow BeFS, their 64-bit journaling file
system, to be ported to Linux...

-----Original Message-----
From: Peter J. Braam [mailto:braam_at_cs.cmu.edu]
Sent: Sunday, May 09, 1999 10:39 PM
To: codalist_at_TELEMANN.CODA.CS.CMU.EDU
Subject: Re: Larger then 10GB Coda server?

Michael,

This is per server -- so it's NOT a good thing. We will be able to beef
things up, but it will be really hard to get beyond memory-mapping 2GB,
which only gives something like an 80GB file server. At this size I'm
expecting other problems to start cropping up. AFS does not (I think)
have this limit.

I suspect a major change to deal with large server sizes will have to
come onto the agenda; it's doable, but a lot of work.

- Peter -
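(As a rough check on the numbers above, here is a minimal
back-of-envelope sketch in C. The per-file metadata cost and average
file size are assumed values, chosen only to reproduce the roughly 1:40
metadata-to-data ratio implied by "2GB of RVM gives ~80GB served"; they
are not figures taken from the Coda sources.)

    /* Back-of-envelope RVM sizing for a Coda server.
     * ASSUMPTIONS (not from the Coda sources): ~400 bytes of RVM
     * metadata per file, 16KB average file size -- picked to match
     * the 2GB RVM -> ~80GB ratio quoted above. */
    #include <stdio.h>

    int main(void)
    {
        double rvm_bytes      = 2.0e9;  /* 32-bit mmap ceiling */
        double meta_per_file  = 400.0;  /* assumed RVM bytes per file */
        double avg_file_bytes = 16e3;   /* assumed average file size */

        double files     = rvm_bytes / meta_per_file;
        double served_gb = files * avg_file_bytes / 1e9;

        printf("files supported: %.0f\n", files);        /* 5 million */
        printf("data served: ~%.0f GB\n", served_gb);    /* ~80 GB    */
        return 0;
    }

(Rerunning with rvm_bytes = 1.0e9 gives ~40GB under the same
assumptions, in the ballpark of the ~30GB figure Peter gives below.)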
Michael Rothwell wrote:
>
> Is the limit 10GB per volume, or 10GB per server? Is it a consequence
> of the meta-data limits? Does AFS have similar limits?
>
> Thanks!
>
> -----Original Message-----
> From: Peter J. Braam [mailto:braam_at_cs.cmu.edu]
> Sent: Sunday, May 09, 1999 10:38 AM
> To: codalist_at_TELEMANN.CODA.CS.CMU.EDU
> Subject: Re: Larger then 10GB Coda server?
>
> Hi Rob,
>
> First of all, as I do with everyone, I really urge you to first play
> with Coda on a smaller scale; you may not like it, and you will need
> to gain some experience. What you are asking is currently well beyond
> what we feel comfortable doing ourselves -- and 150GB leads to
> metadata well beyond what can be memory-mapped on 32-bit
> architectures (unless your files are large on average).
>
> It looks like 150GB is out of reach without a major re-engineering of
> Coda metadata on the server; however, ~30GB might well be possible
> (using something like 1GB of RVM data). [You'd need to do things
> differently from the current installation, but I have successfully
> done rapid startups of such servers; contact me for some patches.]
> The issues are described in
>
> http://www.coda.cs.cmu.edu/cgi-bin/coda-fom?file=3
>
> [What you shouldn't forget is that backup volumes also take space;
> they take 500B/file and 500B/dir.]
>
> How many users are going to be using this installation? Commonly we
> would have a Coda volume for each user, and there is a limit of 1024
> volumes (or 1000, I forget). This could also be an issue, but it is
> easy to change.
>
> - Peter -
>
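(The 500B/file and 500B/dir backup-volume figures above translate
directly into an RVM estimate. A short sketch in the same vein; the
file and directory counts are purely illustrative assumptions for a
server of roughly the ~30GB size discussed here.)

    /* Backup-volume metadata overhead, using the 500B/file and
     * 500B/dir figures quoted above. The counts below are assumed,
     * not measured. */
    #include <stdio.h>

    int main(void)
    {
        long files = 2000000;  /* assumed: ~2 million files    */
        long dirs  = 100000;   /* assumed: ~100k directories   */

        double overhead_gb = (500.0 * files + 500.0 * dirs) / 1e9;
        printf("backup overhead: ~%.2f GB\n", overhead_gb); /* ~1.05 */
        return 0;
    }

(Under these assumptions, backup volumes cost about as much RVM as the
live server metadata itself, so they roughly double the RVM budget.)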
> On Fri, 7 May 1999, Rob Towner wrote:
>
> > I am interested in implementing Coda on my network, and I would
> > like to ask some questions on this list. I have 4 campuses
> > connected by T1 lines, and I would like to dedicate 30 GB of disk
> > space in Coda to each campus. My current thinking of the ideal
> > configuration is as follows: one Coda server with 150 GB in Coda on
> > a RAID set. Each of the four campuses would have a server which
> > runs the Coda client and serves files via Samba to Windows 95
> > workstations.
> >
> > I am aware that currently Coda only scales up to 10 GB per server.
> > Would I be totally wasting my time if I tried to get Coda working
> > with 150 GB, or is there a chance that I could make it work? Would
> > putting a 30 GB Coda server at each campus be a better solution?
> >
> > The Coda HOWTO says not to make a venus cache larger than 300 MB.
> > The setup script says I can use half the available space on the
> > partition for the venus cache. How large can I make a venus cache?
> > I want to use a very large cache because my WAN is very slow. In
> > fact, I would like to use a 30 GB venus cache on the campus
> > servers.
> >
> > By now you probably think I'm crazy. :)
> >
> > Rob Towner
> > CIS, Network Assistant
> > Yuma Union High School District
> > Yuma Educational Consortium

Received on 1999-05-09 22:33:40