The SrvErr log file contains nothing, and the SrvLog follows at the end of this message. Also, this is not an upgrade of any kind: I installed RH 6.2 from scratch and then installed a fresh 5.3.6-1 server as the SCM, the first Coda server of the cell. Others are having the same problem on RH 6.2 as well. I am planning on setting up another machine with RH 6.1 and Coda server 5.3.5-1 or 5.3.6-1 (or one of each, if I can), and seeing if the clients that cannot connect to this server can connect to the other(s). One more note: the clients that fail to connect to my server can connect to the Coda test server, so it seems that the clients are OK. Also, the client installed on the same (RH 6.2) machine as the server has no problem connecting to that server. Maybe it has something to do with the network?

Thanks,
Gus Scheidt

SrvLog:
------------------------------->
11:50:47 New SrvLog started at Wed Apr 19 11:50:47 2000
11:50:47 Resource limit on data size are set to 2147483647
11:50:47 Server etext 0x80ff56a, edata 0x8138
11:50:47 RvmType is Rvm
11:50:47 Main process doing a LWP_Init()
11:50:47 Main thread just did a RVM_SET_THREAD_DATA
11:50:47 Setting Rvm Truncate threshhold to 5.
Partition /vicepa: inodes in use: 0, total: 262144.
11:51:01 Partition /vicepa: 240131K available (minfree=5%), 235970K free.
11:51:01 The server (pid 11642) can be controlled using volutil commands
11:51:01 "volutil -help" will give you a list of these commands
11:51:01 If desperate, "kill -SIGWINCH 11642" will increase debugging level
11:51:01 "kill -SIGUSR2 11642" will set debugging level to zero
11:51:01 "kill -9 11642" will kill a runaway server
11:51:01 VCheckVLDB: could not open VLDB
11:51:01 VInitVolPackage: no VLDB! Please cre
11:51:01 Vice file system salvager, version 3.0.
11:51:01 SanityCheckFreeLists: Checking RVM Vnode Free lists.
11:51:01 DestroyBadVolumes: Checking for destroyed volumes.
11:51:01 Salvaging file system partition /vic
11:51:01 Force salvage of all volumes on this partition
11:51:01 Scanning inodes in directory /vicepa...
11:51:01 SalvageFileSys completed on /vicepa
11:51:01 Attached 0 volumes; 0 volumes not attached
11:51:01 CheckVRDB: could not open VRDB
lqman: Creating LockQueue Manager.....LockQueue Manager starting .....
11:51:01 LockQueue Manager just did a rvmlib_set_thread_data()
done
11:51:01 CallBackCheckLWP just did a rvmlib_set_thread_data()
11:51:01 CheckLWP just did a rvmlib_set_thread_data()
11:51:01 ServerLWP 0 just did a rvmlib_set_thread_data()
11:51:01 ServerLWP 1 just did a rvmlib_set_th
11:51:01 ServerLWP 2 just did a rvmlib_set_thread_data()
11:51:01 ServerLWP 3 just did a rvmlib_set_thread_data()
11:51:01 ServerLWP 4 just did a rvmlib_set_thread_data()
11:51:01 ServerLWP 5 just did a rvmlib_set_th
11:51:01 ResLWP-0 just did a rvmlib_set_thread_data()
11:51:01 ResLWP-1 just did a rvmlib_set_thread_data()
11:51:02 VolUtilLWP 0 just did a rvmlib_set_thread_data()
11:51:02 VolUtilLWP 1 just did a rvmlib_set_t
11:51:02 Starting SmonDaemon timer
11:51:02 File Server started Wed Apr 19 11:51:02 2000
11:51:06 VGetPartition Couldn't find partition /vicepa/
11:51:06 VCreateVolume: Cannot find partition /vicepa/. Bailing out.
11:51:06 Unable to create the volume; aborted
11:51:06 create: volume creation failed for volume 3000001
11:51:06 status = (103)
11:51:48 VN_GetDirHandle NEW Vnode 0 Uniq 0 cnt 1
11:51:48 VN_PutDirHandle: Vn 1 Uniq 1: cnt 0, vn_cnt 0
11:51:48 Creating new log for root vnode
11:51:48 VAttachVolumeById: vol 3000001 (coda
11:51:48 create: volume 3000001 (coda:root.0) VLDB created. Search lengths: RO 0, RW 0, BK
11:51:48 /vice/vol/AllVolumes written
11:51:49 VRDB created, 1 entries
11:52:11 client_GetVenusId: got new host 127.
11:52:11 Building callback conn.
11:52:11 No idle WriteBack conns, building new one
11:52:11 Writeback message to 127.0.0.1 port 2430 on conn 2cea4082 succeeded
11:52:11 RevokeWBPermit on conn 2cea4082 returned 0
11:53:04 client_GetVenusId: got new host 199.111.154.254:2430
11:53:04 Building callback conn.
11:53:04 No idle WriteBack conns, building new one
11:53:04 Writeback message to 199.111.154.254 port 2430 on conn 4e3e4f8 succeeded
11:59:17 Callback failed RPC2_DEAD (F) for ws 199.111.154.254, port 2430
11:59:17 Unbinding RPC2 connection 263752943
12:51:32 SmonDaemon timer expired
12:51:32 Entered CheckRVMResStat
12:51:32 Starting SmonDaemon timer
12:53:41 Building callback conn.
12:53:41 RevokeWBPermit on conn 4e3e4f8 returned -2016
12:53:41 No idle WriteBack conns, building new one
12:53:41 Writeback message to 199.111.154.254 port 2430 on conn b10bf3b succeeded
13:01:17 Callback failed RPC2_DEAD (F) for ws 199.111.154.254, port 2430
13:01:17 Unbinding RPC2 connection 126336813
13:52:02 SmonDaemon timer expired
13:52:02 Entered CheckRVMResStat
13:52:02 Starting SmonDaemon timer
14:52:32 SmonDaemon timer expired
14:52:32 Entered CheckRVMResStat
14:52:32 Starting SmonDaemon timer
15:52:32 SmonDaemon timer expired
15:52:32 Entered CheckRVMResStat
15:52:32 Starting SmonDaemon timer
------------------------------>

Jan Harkes wrote:
> On Wed, Apr 19, 2000 at 01:53:18PM -0400, Karl G Scheidt wrote:
> > I have been exploring Coda for the past few weeks, and today I am
> > stumped on a client problem. When I start venus from any client, it
> > appears to start up correctly. But:
> >
> ...
> > It hangs right here, sleeps for about 15 to 20 seconds, then dies.
> > The venus.log file looks like this:
> ...
> > I also tried going back to 5.3.5-1, both on the client and on the
> > server. The client has the same problem; venus.log is:
> ...
>
> This is not the same problem, have you read the announcement for 5.3.6?
>
> SPECIAL NOTES FOR PEOPLE UPGRADING FROM PREVIOUS VERSIONS
> ---------------------------------------------------------
> ! The layout of container files in venus.cache has changed. Before
>   upgrading a client make sure all logged modifications have been
>   reintegrated with the servers. Before starting the new client do the
>   following:
>     # rm -rf /usr/coda/venus.cache/*
>     # touch /usr/coda/venus.cache/INIT
>
> The container file cache is not compatible between the two versions, you
> can't simply switch back and forth between the two and expect things to
> keep working.
>
> You probably had a network problem when 5.3.6 was starting and it was
> trying to do hostname lookups for the servers. If the servers are not
> listed in /etc/hosts, this process can take a very long time to
> complete.
> (30 seconds * number of servers * number of nameservers in
> /etc/resolv.conf)
>
> > **The biggest difference between what I have done today and what I have
> > done in the past is that the server (either version has the problem) is
> > running on a machine with RH 6.2, rather than 6.1 (suggesting a 6.2
> > problem?). I have not tried reverting to 6.1 and trying again; I wanted
> > to see if there is a known solution for my problem first.
>
> Did you look at any of the server logs?
>
> Jan
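A quick way to act on Jan's hint about hostname lookups is to make sure each Coda server resolves locally on the client, so venus does not sit through DNS timeouts at startup. The following is only a sketch, not something from this thread: coda-scm.example.edu and 192.0.2.10 are placeholders for the real SCM's name and address, and the commands are ordinary shell run as root on the client.

    # Check whether the server name resolves without depending on DNS;
    # if it does not, pin it in /etc/hosts (substitute the real name/address).
    getent hosts coda-scm.example.edu || \
        echo "192.0.2.10  coda-scm.example.edu  coda-scm" >> /etc/hosts

For scale, Jan's formula says even one server with two nameservers in /etc/resolv.conf can cost up to 30 * 1 * 2 = 60 seconds of stalling before venus gives up, so this is worth ruling out before suspecting the server itself.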