On Tue November 16 2004 8:57 pm, Stephen J. Turnbull wrote:
> >>>>> "Jerry" == Jerry Amundson <jerry_at_pbs.com> writes:
>
>     Jerry> I have several GB of maildirs in a compressed tar
>     Jerry> file. When I try to untar (to venus on the SCM), it gets a
>     Jerry> ways before giving me "File too large" errors. But the SCM
>     Jerry> has plenty of space.
>
> "File too large" presumably means *file* too large. Is there a chance
> that you've got a file (maybe a backup tar or something) > 100MB in
> there? You aren't trying to copy the tarfile itself to Coda, are you?
>
> The other likely candidate, since these are maildirs, and you get
> "quite a ways" but don't seem to have an intuition as to why it broke
> just *there*, is that you have a *directory* that is too large. Since
> this limit is given in bytes (256K, IIRC), not dentries, I suppose
> that a somewhat imprecise "file too large" error might be issued when
> you run out of space for more entries in a directory.
>
>     Jerry> Can I bypass venus, or manipulate cache size somehow?
>
> No, you can't bypass venus. There are no directories you can access
> without venus, it all lives in RVM, which is a BLOB.

[snip]

> If it's a directory, fire up your editor of choice and start hacking
> Coda source. Horrible, I know, but there you are; it's an unfortunate
> (in hindsight) design choice made many years ago. Jan et al. have
> other priorities at the moment, although I'm pretty sure Jan has said
> that this restriction is on his "must go away someday" list.

Arrgh.

[root_at_monamie root]# find /vicepa/0 | wc -l
2717

That looks like it.

Well, I can't imagine it's a simple fix, else it would have been done by
now... But if someone points me in the vicinity, I can take a peek at
it...

jerry

Received on 2004-11-17 02:34:24
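For anyone hitting the same wall, a rough way to see which maildir
directory is likely tripping the ~256K directory-size limit mentioned
above is to count entries per directory straight from the tar listing,
before extracting anything into Coda. This is only a sketch; the
archive name below is a placeholder, and the count is approximate
(it lumps subdirectory entries in with file entries):

    # List archive members, strip the last path component to get each
    # entry's parent directory, then count entries per directory and
    # show the most crowded ones first.
    tar tzf maildirs.tar.gz \
        | sed 's|/[^/]*$||' \
        | sort | uniq -c | sort -rn | head

Since the limit is on bytes of directory data rather than on the number
of entries, and maildir filenames tend to be fairly long, a directory
with only a few thousand messages can plausibly exceed 256K of
directory data even though the entry count itself looks modest.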