On Wed, 2003-06-11 at 18:45, Ivan Popov wrote:
> On Wed, 11 Jun 2003, Jan Harkes wrote:
> > > the expected behaviour is that the clients contact the server that they
> > > assume to be more responsive - based on their checks.
> > > That is, the choice is dynamic, and depends not only on the net topology.
> > > Should be fine in your case, anyway.
> >
> > The actual file fetches occur from the server they think they have the
> > best connection to. But attribute information, stores, etc. go to both
> > servers. Until... a client decides that one or more members in the
> > replication group are slow. Then the client switches to weakly-connected
> > mode.
>
> Aargh. Thanks Jan for correcting me!
>
> Of course, it is "mostly read-only" data that suits well to be put on
> geographically distinct servers.
> Updates in strongly connected mode would not create extra overhead, but
> the speed then is limited by the slowest client-server link.
>
> So, data being heavily updated is more efficiently kept in one place
> (possibly replicated, but replicated "within good connectivity").

It is a bit difficult to decide whether my situation counts as "heavily
updated". People work at different locations and want access to all of
their documents. Mostly they work on OpenOffice documents, and after
saving such a document it should be available at both locations. However,
an hour's delay between the first update and its replication is no problem
at all, since the travel time between the locations is at least 45 minutes.

So, to avoid this kind of problem, could I disconnect the servers and
reconnect them regularly (every hour or so) to allow replication to catch
up, or would that get me into another mess? The result will of course be
some conflicts when people work on the same document at both sites, but I
think that is the lesser evil, and it should only happen seldom.

Kind regards,

Dick

Received on 2003-06-12 07:23:26
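[A possible client-side variant of the hourly-batching idea, rather than
disconnecting the servers themselves: Coda's venus supports a
write-disconnected mode, where `cfs writedisconnect` queues updates in the
client's modification log and `cfs writereconnect` lets them reintegrate.
The crontab sketch below assumes the Coda client tools are installed,
`/coda` is the mount point, and that a ten-minute window is enough for
reintegration - all of which would need checking against cfs(1) and the
actual workload:]

```shell
# crontab fragment (sketch, not tested):
# once an hour, re-enable write propagation so venus reintegrates the
# queued updates, allow some time for reintegration to finish, then
# return to write-disconnected operation until the next hour.
# m h dom mon dow  command
0 * * * *  cfs writereconnect /coda && sleep 600 && cfs writedisconnect /coda
```

[This keeps both servers strongly connected for reads while batching the
write traffic, so the slow inter-site link is only exercised once an hour.
Whether the conflict behaviour is acceptable would depend on how often two
sites really touch the same document within one window.]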