Ok, that's interesting. But what if you have, e.g., 4 webservers? Each
webserver always writes to the same master, and a user always stays on
the same webserver. The risk of inconsistency is currently holding me
back from that!
For the delete statement I fully agree, but I never execute statements
like that, except perhaps on session tables, which are currently
excluded from replication.
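For what it's worth, excluding the session tables is just a filter rule on the replicating side; a minimal my.cnf sketch, where the database and table names are placeholders:

```ini
# my.cnf on the replicating server: skip the session table entirely
[mysqld]
replicate-ignore-table=myapp.sessions
```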
With my master-master setup I wanted to spread the MySQL traffic to
raise the maximum number of possible connections, and, as you already
said, the second aim of course is to have a failover.
I must say that I don't know much about partitioning (yet). It sounds
promising, though.
In your configuration it would even be possible with Heartbeat 2 and a
virtual IP; that way it does the failover automatically and sends error
reports.
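To illustrate the virtual-IP idea, here is a minimal resource sketch in the v1-style haresources format, which Heartbeat 2 can still read in its compatibility mode; the node name, IP, and service name are placeholders:

```
# /etc/ha.d/haresources (v1-style, also accepted by Heartbeat 2)
# preferred node, the floating virtual IP, and the service to manage:
db1 192.168.0.100 mysql
```

Clients then connect to 192.168.0.100; whichever node currently holds the IP answers.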
What other possibilities do you have? You can split read and write
statements across different servers.
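Splitting read and write statements can be as simple as routing by the SQL verb. A minimal sketch in Python, with hypothetical host names and round-robin reads; the actual connection handling is left out:

```python
# A minimal sketch of read/write splitting, assuming one master takes all
# writes while reads rotate over both; host names are placeholders.
import itertools

WRITE_HOST = "master1.example.com"                           # all writes go here
READ_HOSTS = ["master1.example.com", "master2.example.com"]  # reads rotate

_read_cycle = itertools.cycle(READ_HOSTS)

def route(sql: str) -> str:
    """Return the host a statement should be sent to, based on its verb."""
    verb = sql.lstrip().split(None, 1)[0].lower()
    if verb in ("select", "show"):
        return next(_read_cycle)   # spread read load round-robin
    return WRITE_HOST              # inserts/updates/deletes hit the write master
```

In practice the routing also has to be transaction-aware (reads inside a write transaction must go to the write master), which is exactly where a proxy layer earns its keep.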
You can use MySQL Proxy, which, as far as I know, is not yet stable, is
it?
You can use SQLRelay, with which I honestly don't have any experience
yet, but which I guess can provide a similar solution.
Mit freundlichen Grüßen / Kind regards
Marcus Bointon wrote:
> On 21 Nov 2007, at 14:30, Christian Schramm wrote:
>> Of course you may get problems with duplicate entries and, depending
>> on the server performance, also a synchronisation lag.
>> Does anyone on the list have experience with multi-master setups like
>> this? Any recommendations?
> Yes. I do this in several installations. This kind of replication is
> not about scaling, it's about redundancy. You gain nothing by being
> able to write to two servers because all masters have to execute the
> same queries anyway; in fact it will actually cost you slightly in
> performance.
> So, don't write to your secondary master, and don't do critical (e.g.
> within transactions) reads from it either, that way your data will
> stay sane. Propagation delay is very, very small for small operations
> (microseconds), but if you post a big transaction, it may cause the
> secondary to fall several minutes behind (depending on transaction
> size). A classic example might be to do a million inserts in a
> transaction, then immediately try to read one of them back from the
> secondary - it won't be there.
> Note the new replication features in 5.1. Row based replication is
> important - imagine you have a statement like:
> delete from mytable order by rand() limit 50;
> This can delete different records on each master! 5.1 fixes that.
> 5.1's partitioning CAN give you big improvements in write speed.
> Multi-master really helps when it comes to failover - you just point
> your clients at the new master, and that's it. No reconfiguration of
> slaves required. When you fix the original master, start it up and it
> will catch up to the new master, at which point it can safely become
> the new failover target.
>> I want to realize the current project I'm working on with
>> replication, because the database is not optimized to run on e.g.
>> NDBCluster.
> NDBCluster solves a different problem - and you can't have
> transactions or foreign key support on it.
> --Marcus Bointon
> Synchromedia Limited: Creators of http://www.smartmessages.net/
> UK resellers of info@hand CRM solutions
> marcus@stripped | http://www.synchromedia.co.uk/