It does narrow it down quite a bit. It is not related to object demotion, but to cache revalidation / hoarding.

First it makes sure we have valid attributes for all cached objects. As we are not seeing any getattr or validateattr RPC2 calls, they must all be valid. Of course we are iterating through the complete list of cached objects, but this is O(n) and even for 30K objects it should be reasonably efficient.

This is followed by a second phase in which the hoard contexts are checked: we walk all hoard bindings and check whether each object still has the right hoard priority assigned to it. As hoard bindings are name based, this involves looking up the path name for each object and then checking whether the object has been assigned the correct hoard priority values. This is pretty complex code, so it is most likely where we are using 100% CPU.

I must be the first person to try out the use case where I have a cache significantly bigger than all of my volumes and I hoard everything, thus guaranteeing that I have access to the entire volume when disconnected. I'm surprised -- with modern drive sizes, that seemed like a good way to use Coda on a notebook. My completely hoarded volume is about 27,000 files taking up about 2.5 GB of storage; it represents about 1/8 of my current home directory, which is about 386,000 files comprising 20 GB of space.

Are we concluding that this kind of use is simply not workable for Coda, or is it that there is a bug which, if fixed, would make this kind of use OK? I don't have a good understanding of how critical these O(n) context-checking operations are -- do they have to happen, are they optional and performed only when connectivity is good, or what?

-Olin

Received on 2007-05-15 10:57:24
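
Roughly, the two hoard-walk phases described above amount to the following (a simplified sketch, not the actual venus source; all of the type and function names here are illustrative):

// Sketch of the two hoard-walk phases. Names are illustrative only.
#include <string>
#include <vector>
#include <unordered_map>

struct CachedObject {
    std::string path;        // name under which the object was cached
    bool attrs_valid;        // do we currently hold valid attributes?
    int hoard_priority;      // hoard priority currently assigned
};

struct HoardBinding {
    std::string path;        // hoard bindings are name based
    int priority;            // priority requested by the hoard profile
};

// Phase 1: make sure every cached object has valid attributes.
// A single O(n) pass; a getattr / validateattr RPC2 call would only be
// needed for objects whose attributes are stale, which is why no such
// calls show up when everything is already valid.
static void validate_attributes(std::vector<CachedObject>& cache) {
    for (auto& obj : cache) {
        if (!obj.attrs_valid) {
            // (in the real client: revalidate attributes with the server)
            obj.attrs_valid = true;
        }
    }
}

// Phase 2: walk all hoard bindings and check that the object each name
// resolves to still carries the right hoard priority. Because bindings
// are name based, every binding costs a path lookup before the priority
// can even be compared -- this is the expensive part.
static void check_hoard_bindings(std::vector<CachedObject>& cache,
                                 const std::vector<HoardBinding>& bindings) {
    // Index by path to keep the sketch short; the real lookup is a
    // component-by-component name resolution.
    std::unordered_map<std::string, CachedObject*> by_path;
    for (auto& obj : cache) by_path[obj.path] = &obj;

    for (const auto& b : bindings) {
        auto it = by_path.find(b.path);
        if (it != by_path.end() && it->second->hoard_priority != b.priority)
            it->second->hoard_priority = b.priority;  // reassign priority
    }
}

// The hoard walk is simply phase 1 followed by phase 2.
void hoard_walk(std::vector<CachedObject>& cache,
                const std::vector<HoardBinding>& bindings) {
    validate_attributes(cache);
    check_hoard_bindings(cache, bindings);
}

With every object hoarded, both the cache list and the binding list cover the whole volume, so the per-binding path lookup in phase 2 is where a fully hoarded 27,000-file cache would spend most of its time.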