On Fri, Feb 09, 2007 at 12:36:11AM -0500, Davor Ocelic wrote:
> On the client side (also from CVS head), I edited venus.conf
> to point to the proper device file (/dev/.static/dev/cfs0) and
> edited the 'realms' file to include a line like:
>
> re.alm server.re.alm
>
> Venus started successfully and typing
> cfs lv /coda/server.re.alm/ produced meaningful output.

If you address your server directly like that, you don't need a
realms file at all.

Technically, when a Coda client wants to talk to a group of Coda
servers, it needs to connect to one or more servers in the group and
ask those servers where volumes are located. So if you connect to
/coda/server.re.alm, your client simply asks 'server.re.alm' what the
root volume is, how many replicas there are, and where those replicas
are located. Given that one server, a client can discover that some
volume replicas may be located on other servers in your domain. In
fact, we used to have a single root server with no volumes; its only
function was to handle volume location queries and redirect clients
to the servers that did store data.

The only issue with this is that if that one server goes down,
clients cannot discover the location of any Coda volumes in the realm
for which they don't have cached information, even when the rest of
the servers are available. So for high availability purposes we added
DNS SRV record lookup: a client now checks the DNS record for
_coda._udp.re.alm and ideally gets back a list of several servers
that can be used for volume location queries. (Technically, every
Coda server has a copy of all the critical data in /vice/db, so every
server is capable of responding to volume location requests.)

So, great, we've got DNS SRV record lookup and we can publish a
couple of servers as representatives for our realm. However, our DNS
servers do not support SRV records; I don't really know why. So we
added an alternative lookup mechanism: similar to how /etc/hosts
works, we first check /etc/coda/realms. The realms file entry you
specified simply says that whenever that specific client tries to
access '/coda/re.alm/', it should get volume location information
from server.re.alm. (I'll sketch examples of both lookup mechanisms
further down.)

> I ran clog user@re.alm , but I got a message along the
> lines of "Cannot resolve any auth2 servers in realm re.alm".
> Repeated invocations of the command prompted for a password

This wouldn't have gotten you the token you wanted anyway; the
resulting token would have given you permission to access files in
/coda/re.alm/ and not /coda/server.re.alm/. If you access
'/coda/server.re.alm/', you should obtain tokens as
'clog user@server.re.alm' (example below). Although both really map
to the same server and root volume, the client doesn't realize this.
And the really messy part is that the server is unable to distinguish
requests for /coda/re.alm from /coda/server.re.alm and will only set
up a single callback connection. As a result, the client ends up with
connectivity issues (server reachability is defined as having an
established callback connection) and possibly stale data in either of
the two trees.

Now, since you built from source, did you override the configure
prefix, or did everything get installed in /usr/local/{bin,sbin}? It
is quite likely that clog is trying to use /usr/local/etc/coda/realms
and as such doesn't see your definition, falling back on trying to
resolve DNS SRV records, which don't exist.
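If that is what's happening, the quickest way to rule it out is to
give the /usr/local install a view of the same realms file at the
path it expects. A minimal sketch, assuming an out-of-the-box
configure run put everything under /usr/local:

    # let the /usr/local install see the same realms file
    mkdir -p /usr/local/etc/coda
    ln -s /etc/coda/realms /usr/local/etc/coda/realms

Or rebuild with './configure --prefix=/usr --sysconfdir=/etc' (the
usual autoconf switches) so the tools and your config files agree on
a single layout.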
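For reference, a realms file entry is just the realm name followed by
the server(s) to ask for volume location information. Something like
the following should work; 'server2.re.alm' is made up here, just to
show a realm with more than one representative:

    # /etc/coda/realms
    # realm        volume location server(s)
    re.alm         server.re.alm server2.re.alm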
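And if your DNS did support it, the roughly equivalent SRV records in
the re.alm zone would look like this (BIND zone file syntax; 2432 is
the usual codasrv port, and 'server2.re.alm' is again made up):

    ; volume location servers for the re.alm realm
    _coda._udp   IN SRV 0 0 2432 server.re.alm.
    _coda._udp   IN SRV 0 0 2432 server2.re.alm.

You can check what a client would get back with
'dig -t srv _coda._udp.re.alm'.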
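Once the realm name resolves, the rule of thumb is that the realm in
the token has to match the realm in the path you are accessing
('user' stands in for your Coda username):

    # realm in the token must match the realm in the path
    clog user@re.alm            # for files under /coda/re.alm/
    clog user@server.re.alm     # for files under /coda/server.re.alm/
    ctokens                     # lists the realms you hold tokens for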
> without giving the error message again, but the authentication
> didn't succeed ('ctokens' reported "...venus ioctl: Not authenticated..."
> in its output).
>
> I tried providing -host option to clog, both as a FQDN server
> name and an IP address, but the outcome was the same.

Another thing to check is whether the auth2 daemon is actually
running on server.re.alm.
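Something along these lines on the server should tell you; 370/udp is
the registered codaauth2 port:

    # on server.re.alm
    ps ax | grep '[a]uth2'      # is the auth2 daemon running?
    netstat -anu | grep ':370'  # is anything bound to the codaauth2 port?

Jan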