> NFS/Samba have a completely different model, they are block based. If
> you read a very large uncached file in Coda, you have to wait until it
> has been fetched completely.

I take it that this is an implementation detail. I'm surprised that, since you have the benefit of a multi-threaded implementation, you don't just feed the file's bytes to the reading application while saving them to the cache at the same time; that way the initial latency of opening large files is eliminated. I'm sure this would be much more complex to implement than I have stated here, but are there any semantic reasons why it couldn't be done this way?

Are there any other places in the Coda client that could be significantly improved if the implementation weren't prohibitively complex? The reason I ask is that I am seriously considering implementing the Coda client in Java, and I would like to know about these wish-list items up front so I can take them into consideration in my designs. I think that Java's strong support for multi-threaded code, and the fact that stream-based I/O is a very common file access method in Java applications, will make a Java port of Coda quite useful.

Later,
-Justin
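
P.S. To make the streaming idea a bit more concrete, here is a rough Java sketch of what I have in mind: a tee-style input stream that hands bytes to the reading application while appending the same bytes to the local cache file. The names here (CacheTeeInputStream, the "cache" stream) are made up for illustration and are not part of Coda or any actual design.

    import java.io.*;

    // Hypothetical sketch: give the application the file's bytes as they
    // arrive from the server, while copying them into the local cache
    // file at the same time, so open() need not wait for the whole fetch.
    public class CacheTeeInputStream extends FilterInputStream {
        private final OutputStream cache; // local cache file being filled

        public CacheTeeInputStream(InputStream fromServer, OutputStream cache) {
            super(fromServer);
            this.cache = cache;
        }

        @Override
        public int read() throws IOException {
            int b = super.read();
            if (b != -1) cache.write(b);        // copy the byte into the cache
            return b;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            int n = super.read(buf, off, len);
            if (n > 0) cache.write(buf, off, n); // copy the chunk into the cache
            return n;
        }

        @Override
        public void close() throws IOException {
            try {
                super.close();
            } finally {
                cache.close(); // cache now holds everything the app consumed
            }
        }
    }

In a real client the fetch would presumably run in its own thread, so the cache keeps filling even if the application stops reading early or only reads part of the file; this sketch only covers the simple read-through path.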