Three machines:

- Server (Debian)
- ClientW, a sessile system on the net via cable modem (RedHat). Has a large
  media filesystem (i.e., big files).
- ClientR, a notebook (RedHat)

ClientW & ClientR are both running Coda clients. Server is the Coda server.

On ClientW, I am slowly copying a set of large files into /coda/Server/shivers.
By "slowly," I mean that I copy a file, then sleep 8 min, then copy another.
This gives the client plenty of time to upload the bits to Server. (I have a
100 MB venus cache on ClientW, which is larger than the total write size
(~46 MB), so there *shouldn't* be an issue, but see below.)

On ClientR, I can see the files I am copying on ClientW as they are uploaded
to Server with

    ls -l /coda/Server/shivers

Cool!

Things go wrong when I just say on ClientW

    cp <12-big-files> /coda/Server/shivers/.

The system writes about three files, then things get screwy & disconnected.
What happens, I *think*, is that the write goes in two stages: it's written
into the venus cache quickly, then dribbles out over my cable modem slowly.
When this bottleneck causes enough reintegration data to build up, blammo.

The lossage is as I described in my last message: cfs lv shows the system in
some kind of disconnected state, and cfs wr won't make it reconnect.

So the message seems to be that if I don't press the system hard, it works.
Under pressure, it falls over. For me, that's progress. Now I want to
understand the current hosage. Can anyone help?

-Olin
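
P.S. In case it helps anyone reproduce this, the "slow" copy was essentially a
shell loop along these lines (big-file-* is just a stand-in for my actual file
names):

    for f in big-file-*; do
        cp "$f" /coda/Server/shivers/
        sleep 480    # 8 minutes; plenty of time for venus to push the file to Server
    done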