Hello Ryan,

Trying to answer. If I miss something or take it wrong, hopefully somebody will comment. It looks like you have done a fair job of researching.

> I don't want replication and I don't have disconnected client needs.

Then you do not have the two major reasons for using Coda. The third one, the global file name space, is also unique to Coda, though it is vital for you mostly if you do not have administrative power over the users' workstations, or whatever they are using for processing their data.

> all I really want is global name space

Certainly Coda is the best fs in that aspect. The question is what you mean by "globalness". Some approximation, like the one AFS provides, can be OK.

> soon. The two current file servers have 1.3TB in RAID5 and 2TB in RAID5
> respectively. The new file server will have 4TB in RAID5.
> Question #2: Will Coda play nice with such large RAID devices?

No problem with big data, but a problem with a lot of objects. Big files (up to 2GB) are OK, and you can have, say, a hundred thousand 2GB files, which would give you 200TB of storage - but you cannot have more than several hundred thousand to a million files per server instance. (Jan, give me a hand: what is a reasonable estimate of the number of files we can have per server? You mentioned it but I can't find the reference.) It is a headache when there are lots of small files.

> Our largest user currently has a 187GB home directory and many other

As a real-life reference, my "homedir data" on Coda consists mostly of 6551 and 5418 files on two volumes, occupying 83MB and 6.7GB respectively.

> users follow closely behind in the size department. Size isn't the only
> concern, it's also the number of files. The user with 187GB has approximately
> 58,000 files.

It should work even with a single _volume_, but a couple of dozen such users will saturate one _server_, so you would have to run multiple servers (physical machines or server processes, each eating about 1GB of virtual memory).

> Question(s) #3: Is it even possible for Coda to support such massive volumes
> with tens of thousands of files effectively? If so, how can growing data sizes
> and file numbers be dealt with? What if this user "outgrows" their current
> volume size? Will the max suggested RVM size handle all this or am I just
> dreaming?

With the current implementation (it will surely change at some point) you can have at most 1GB of RVM per server. That corresponds to some maximum number of files per _server_. The number of files per _realm_ is limited by the maximum number of servers per realm (currently about 200).

> Question #4: If users need to work with 10GB or greater files, will the client
> cache manager be able to deal with that or will everything just come crashing
> down when they try?

No. No files bigger than 2GB. Are you aware that open() on a not-yet-cached file blocks until the whole file is fetched into the cache? That takes at least 2 minutes for a 1GB file on a 100Mbit connection.

> I know I'm asking a lot, but all I really want to know is if I should bother
> to continue educating myself in Coda or if Coda just isn't the right solution
> for me.

It is only you who know your needs and can make the decision. AFS is pretty stable and in wide use. As you do not need write-replication or disconnected mode, it might be right for you.

Best regards,
--
Ivan
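P.S. If you want to sanity-check the arithmetic above, here is a quick sketch. The 2GB file limit and the 100Mbit link come from the mail; the file count and the wire-speed (no overhead) assumption are mine - real fetches will be slower, hence "at least 2 minutes".

# Back-of-the-envelope check of the figures quoted above.
MAX_FILE_GB = 2        # Coda's per-file limit mentioned in the mail
FILES = 100_000        # hypothetical count of maximum-size files
LINK_MBIT_S = 100      # the client's network link

storage_tb = FILES * MAX_FILE_GB / 1000
print(f"{FILES} files x {MAX_FILE_GB}GB = {storage_tb:.0f}TB")  # -> 200TB

# open() blocks until the whole file is cached, so the lower bound on
# opening an uncached 1GB file is the raw transfer time at wire speed:
fetch_s = 1 * 8000 / LINK_MBIT_S   # 1GB is roughly 8000Mbit
print(f"1GB fetch >= {fetch_s:.0f}s (~{fetch_s / 60:.1f}min)")  # -> 80s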