Coda File System

Re: coda does not replicate

From: Markus Markert <mma_at_suchtreffer.de>
Date: Wed, 28 May 2003 16:48:42 +0200
> Hello Markus,
>
> > my Coda worked really well for a few days. Now I did the following,
> > and replication no longer works.
>
> do you mean that replication did work before?

Yes, it worked very well and fast :-)

> > I stopped all of the Coda processes (non-SCM, then SCM).
> > Then I created a symlink pointing into /coda (ln -s /coda/WWW /info/WWW).
>
> Why are you talking about shutting down the processes and about this
> symlink? Anyway, that link shouldn't be able to damage anything :)

Because that was the last thing I had done, so I thought it might be the
cause of the Coda problem. But apparently it is not.

OK, here are the details:

1 non-SCM (ds10), 1 SCM (ds11)

[files]
	/vice/db/servers
	ds11            11
	ds10            10

	/vice/db/vicetab
	ds11   /vicepa   ftree   width=256,depth=3
	ds10   /vicepa   ftree   width=256,depth=3

	/vice/db/VSGDB
	E0000100 ds11
	E0000102 ds10
	E0000104 ds11 ds10

	/etc/hosts
	192.168.0.160	ds10
	192.168.0.161	ds11
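
As a sanity check (this is a hypothetical helper, not a Coda tool), the files above can be cross-checked: every server named in a /vice/db/VSGDB entry should also appear in /vice/db/servers, since a mismatch between those two files is a common source of replication trouble. A minimal sketch, using the file contents quoted above:

```python
# Hypothetical consistency check for /vice/db/servers vs. /vice/db/VSGDB.
# Not part of Coda -- just a sketch over the contents quoted above.

SERVERS = """\
ds11            11
ds10            10
"""

VSGDB = """\
E0000100 ds11
E0000102 ds10
E0000104 ds11 ds10
"""

def check_vsgdb(servers_text: str, vsgdb_text: str) -> list:
    """Return a list of problems: VSG members missing from the servers file."""
    known = {line.split()[0] for line in servers_text.splitlines() if line.split()}
    problems = []
    for line in vsgdb_text.splitlines():
        fields = line.split()
        if not fields:
            continue
        vsg, members = fields[0], fields[1:]
        for member in members:
            if member not in known:
                problems.append("%s: member %s not in servers file" % (vsg, member))
    return problems

if __name__ == "__main__":
    print(check_vsgdb(SERVERS, VSGDB) or "VSGDB looks consistent")
```

With the files shown in this mail, the check comes back clean, so the static configuration at least looks self-consistent.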

I created the last volume, on VSG E0000104, with volrep.
All was fine: Venus mounted the volume under /coda, and I touched some
files in it. After the touches, Coda replicated them to the other
server. It was great :-)

The logs say this:

[ds11-/vice/srv/SrvLog]
*******************************************************************************************
16:30:54 New SrvLog started at Wed May 28 16:30:54 2003

16:30:54 Resource limit on data size are set to -1

16:30:54 RvmType is Rvm
16:30:54 Main process doing a LWP_Init()
16:30:54 Main thread just did a RVM_SET_THREAD_DATA

16:30:54 Setting Rvm Truncate threshhold to 5.

Partition /vicepa: inodes in use: 1, total: 16777216.
16:31:47 Partition /vicepa: 25606788K available (minfree=0%), 25573948K free.
16:31:47 The server (pid 697) can be controlled using volutil commands
16:31:47 "volutil -help" will give you a list of these commands
16:31:47 If desperate,
                "kill -SIGWINCH 697" will increase debugging level
16:31:47        "kill -SIGUSR2 697" will set debugging level to zero
16:31:47        "kill -9 697" will kill a runaway server
16:31:47 Vice file system salvager, version 3.0.
16:31:47 SanityCheckFreeLists: Checking RVM Vnode Free lists.
16:31:47 DestroyBadVolumes: Checking for destroyed volumes.
16:31:47 Salvaging file system partition /vicepa
16:31:47 Force salvage of all volumes on this partition
16:31:47 Scanning inodes in directory /vicepa...
16:31:47 SFS: There are some volumes without any inodes in them
16:31:47 Entering DCC(0xb000001)
16:31:47 DCC: Salvaging Logs for volume 0xb000001

16:31:47 done:  2 files/dirs,   5 blocks
16:31:47 SalvageFileSys:  unclaimed volume header file or no Inodes in volume b000002
16:31:47 SalvageFileSys: Therefore only resetting inUse flag
16:31:47 SalvageFileSys completed on /vicepa
16:31:47 VAttachVolumeById: vol b000001 (codaclient.0) attached and online
16:31:47 VAttachVolumeById: vol b000002 (codaclient.0) attached and online
16:31:47 Attached 2 volumes; 0 volumes not attached
lqman: Creating LockQueue Manager.....LockQueue Manager starting .....
16:31:47 LockQueue Manager just did a rvmlib_set_thread_data()

done
16:31:47 CallBackCheckLWP just did a rvmlib_set_thread_data()

16:31:47 CheckLWP just did a rvmlib_set_thread_data()

16:31:47 ServerLWP 0 just did a rvmlib_set_thread_data()

16:31:47 ServerLWP 1 just did a rvmlib_set_thread_data()

16:31:47 ServerLWP 2 just did a rvmlib_set_thread_data()

16:31:47 ServerLWP 3 just did a rvmlib_set_thread_data()

16:31:47 ServerLWP 4 just did a rvmlib_set_thread_data()

16:31:47 ServerLWP 5 just did a rvmlib_set_thread_data()

16:31:47 ResLWP-0 just did a rvmlib_set_thread_data()

16:31:47 ResLWP-1 just did a rvmlib_set_thread_data()

16:31:47 VolUtilLWP 0 just did a rvmlib_set_thread_data()

16:31:47 VolUtilLWP 1 just did a rvmlib_set_thread_data()

16:31:47 Starting SmonDaemon timer
16:31:47 File Server started Wed May 28 16:31:47 2003

16:33:47 client_GetVenusId: got new host 192.168.0.160:32770
16:33:47 Building callback conn.
16:33:47 No idle WriteBack conns, building new one
16:33:47 Writeback message to 192.168.0.160 port 32770 on conn 12977d28 succeeded
16:33:47 RevokeWBPermit on conn 12977d28 returned 0
16:33:47 VGetVnode: vnode b000001.3 is not allocated
16:33:47 ViceValidateAttrs: (b000001.3.383) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.4 is not allocated
16:33:47 ViceValidateAttrs: (b000001.4.29b) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.7 is not allocated
16:33:47 ViceValidateAttrs: (b000001.7.384) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.14 is not allocated
16:33:47 ViceValidateAttrs: (b000001.14.1b7) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.18 is not allocated
16:33:47 ViceValidateAttrs: (b000001.18.2a0) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.2c is not allocated
16:33:47 ViceValidateAttrs: (b000001.2c.2a5) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.30 is not allocated
16:33:47 ViceValidateAttrs: (b000001.30.2a6) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.34 is not allocated
16:33:47 ViceValidateAttrs: (b000001.34.2a7) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.3c is not allocated
16:33:47 ViceValidateAttrs: (b000001.3c.1c1) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.3c is not allocated
16:33:47 ViceValidateAttrs: (b000001.3c.2a9) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.40 is not allocated
16:33:47 ViceValidateAttrs: (b000001.40.1c2) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.40 is not allocated
16:33:47 ViceValidateAttrs: (b000001.40.2aa) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.44 is not allocated
16:33:47 ViceValidateAttrs: (b000001.44.1c3) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.48 is not allocated
16:33:47 ViceValidateAttrs: (b000001.48.1c4) failed (GetFsObj 1)!
16:33:47 VGetVnode: vnode b000001.86 is not allocated
16:33:47 ViceValidateAttrs: (b000001.86.fb) failed (GetFsObj 1)!
16:33:47 ComputeCompOps: fid(0x7f000000.1.1)

16:33:47 COP1Update: VSG not found!
*******************************************************************************************
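
The repeated failures in the log above can be tallied with a short script (a sketch, assuming only the SrvLog line patterns shown above; the function name is made up). It counts the unallocated-vnode and failed-validation lines and flags the "COP1Update: VSG not found!" message, which is the line that most likely points at the real problem:

```python
# Sketch: summarize the recurring error patterns in a Coda SrvLog excerpt.
# Assumes the plain-text log format shown in this mail; not a Coda tool.

def summarize_srvlog(log_text: str) -> dict:
    """Count occurrences of the three error patterns seen in the log."""
    counts = {
        "vnode_not_allocated": 0,   # "VGetVnode: vnode ... is not allocated"
        "validate_attrs_failed": 0, # "ViceValidateAttrs: (...) failed ..."
        "vsg_not_found": 0,         # "COP1Update: VSG not found!"
    }
    for line in log_text.splitlines():
        if "VGetVnode" in line and "is not allocated" in line:
            counts["vnode_not_allocated"] += 1
        elif "ViceValidateAttrs" in line and "failed" in line:
            counts["validate_attrs_failed"] += 1
        elif "COP1Update: VSG not found" in line:
            counts["vsg_not_found"] += 1
    return counts

SAMPLE = """\
16:33:47 VGetVnode: vnode b000001.3 is not allocated
16:33:47 ViceValidateAttrs: (b000001.3.383) failed (GetFsObj 1)!
16:33:47 COP1Update: VSG not found!
"""

if __name__ == "__main__":
    print(summarize_srvlog(SAMPLE))
```

Run against the full ds11 log above, this would show fifteen validation failures and, crucially, one "VSG not found" hit, suggesting the server's in-RVM notion of the VSG no longer matches the VSGDB file.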

[ds10-/vice/srv/SrvLog]
*******************************************************************************************
16:33:25 New SrvLog started at Wed May 28 16:33:25 2003

16:33:25 Resource limit on data size are set to -1

16:33:25 RvmType is Rvm
16:33:25 Main process doing a LWP_Init()
16:33:25 Main thread just did a RVM_SET_THREAD_DATA

16:33:25 Setting Rvm Truncate threshhold to 5.

Partition /vicepa: inodes in use: 14, total: 16777216.
16:33:40 Partition /vicepa: 23924876K available (minfree=5%), 23892024K free.
16:33:40 The server (pid 690) can be controlled using volutil commands
16:33:40 "volutil -help" will give you a list of these commands
16:33:40 If desperate,
                "kill -SIGWINCH 690" will increase debugging level
16:33:40        "kill -SIGUSR2 690" will set debugging level to zero
16:33:40        "kill -9 690" will kill a runaway server
16:33:40 Vice file system salvager, version 3.0.
16:33:40 SanityCheckFreeLists: Checking RVM Vnode Free lists.
16:33:40 DestroyBadVolumes: Checking for destroyed volumes.
16:33:40 Salvaging file system partition /vicepa
16:33:40 Force salvage of all volumes on this partition
16:33:40 Scanning inodes in directory /vicepa...
16:33:40 SFS: There are some volumes without any inodes in them
16:33:40 SFS:No Inode summary for volume 0xa000001; skipping full salvage
16:33:40 SalvageFileSys: Therefore only resetting inUse flag
16:33:40 Entering DCC(0xa000002)
16:33:40 DCC: Salvaging Logs for volume 0xa000002

16:33:40 done:  17 files/dirs,  22 blocks
16:33:40 SalvageFileSys:  unclaimed volume header file or no Inodes in volume a000003
16:33:40 SalvageFileSys: Therefore only resetting inUse flag
16:33:40 SalvageFileSys completed on /vicepa
16:33:40 VAttachVolumeById: vol a000001 (codaclient.1) attached and online
16:33:40 VAttachVolumeById: vol a000002 (codaroot.1) attached and online
16:33:40 VAttachVolumeById: vol a000003 (codaclient.1) attached and online
16:33:40 Attached 3 volumes; 0 volumes not attached
lqman: Creating LockQueue Manager.....LockQueue Manager starting .....
16:33:40 LockQueue Manager just did a rvmlib_set_thread_data()

done
16:33:40 CallBackCheckLWP just did a rvmlib_set_thread_data()

16:33:40 CheckLWP just did a rvmlib_set_thread_data()

16:33:40 ServerLWP 0 just did a rvmlib_set_thread_data()

16:33:40 ServerLWP 1 just did a rvmlib_set_thread_data()

16:33:40 ServerLWP 2 just did a rvmlib_set_thread_data()

16:33:40 ServerLWP 3 just did a rvmlib_set_thread_data()

16:33:40 ServerLWP 4 just did a rvmlib_set_thread_data()

16:33:40 ServerLWP 5 just did a rvmlib_set_thread_data()

16:33:40 ResLWP-0 just did a rvmlib_set_thread_data()

16:33:40 ResLWP-1 just did a rvmlib_set_thread_data()

16:33:40 VolUtilLWP 0 just did a rvmlib_set_thread_data()

16:33:40 VolUtilLWP 1 just did a rvmlib_set_thread_data()

16:33:40 Starting SmonDaemon timer
16:33:40 File Server started Wed May 28 16:33:40 2003

16:33:47 client_GetVenusId: got new host 192.168.0.160:32770
16:33:47 Building callback conn.
16:33:47 No idle WriteBack conns, building new one
16:33:47 Writeback message to 192.168.0.160 port 32770 on conn 39e972f7 succeeded
16:33:47 RevokeWBPermit on conn 39e972f7 returned 0
16:33:47 Entering RecovDirResolve (0x7f000000.0x1.0x1)

16:33:47 ComputeCompOps: fid(0x7f000000.1.1)

16:33:47 RS_ShipLogs - returning 0
16:34:02 Going to spool log entry for phase3

16:35:19 client_GetVenusId: got new host 192.168.0.161:32772
16:35:19 Building callback conn.
16:35:19 No idle WriteBack conns, building new one
16:35:19 Writeback message to 192.168.0.161 port 32772 on conn 436e73b succeeded


I hope this is enough.

> Would you go into details, otherwise I am rather confused about your
> setup.
>
> Best regards,

Thanks, and I hope you can help me :-)

-- 
-----------------------------------------------------------
Suchtreffer AG
Bleicherstrasse 20
D-78467 Konstanz
Germany

fon:       +49-(0)7531-89207-17
fax:       +49-(0)7531-89207-13
e-mail:   mma_at_suchtreffer.de
internet: http://www.suchtreffer.de
-----------------------------------------------------------
In a world without walls and fences,
who needs gates?
Received on 2003-05-28 10:52:01