List: Replication
From: Mats Kindahl
Date: June 8 2010 6:54pm
Subject: Re: Replication between one master and multiple slaves
Hi Mark,

There is currently no "good" support for doing what you want beyond what
you are already suggesting: running one dedicated server for each web
site and having it create its own binlog files, which you then stream
off-site.
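
For illustration, a minimal my.cnf sketch of one such dedicated
per-site server (the server-id, port, paths, and names below are
placeholders):

  [mysqld]
  server-id = 11                                # unique among all replicating servers
  port      = 3307
  socket    = /var/run/mysqld/mysqld-site1.sock
  datadir   = /var/lib/mysql-site1
  log-bin   = /var/log/mysql/site1-bin          # the site's own binlog, streamed off-site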

There are some experiments with the "scriptable replication"
(http://forge.mysql.com/wiki/ReplicationFeatures/ScriptableReplication)
that would allow more advanced filtering on the master server, but that
is currently just a prototype and definitely nothing that I would
recommend for production usage.

To set up multiple servers on the same machine, one usually uses the
mysqld_multi script. You can see a blog post about how I did it at
http://mysqlmusings.blogspot.com/2009/12/using-mysqldmulti-on-karmic.html.
There is also a mysqld_multi.server script that can be installed in
/etc/init.d to start the servers at boot, but I haven't tried that.
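
A minimal sketch of the my.cnf layout that mysqld_multi expects (the
ports, sockets, and data directories below are placeholders):

  [mysqld_multi]
  mysqld     = /usr/bin/mysqld_safe
  mysqladmin = /usr/bin/mysqladmin

  [mysqld1]
  server-id = 1
  port      = 3306
  socket    = /var/run/mysqld/mysqld1.sock
  datadir   = /var/lib/mysql1

  [mysqld2]
  server-id = 2
  port      = 3307
  socket    = /var/run/mysqld/mysqld2.sock
  datadir   = /var/lib/mysql2

The instances are then managed by their group numbers, for example:

  mysqld_multi start 1,2
  mysqld_multi report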

Just my few cents,
Mats Kindahl

On 06/08/2010 11:28 AM, Mark Rogers wrote:
> Sorry, I think I just worded it badly and we're talking about the same
> thing.
>
> What I meant was: on the main (physical) server I run the main database
> instance (30 databases, 5 of them binary-logged), but also 5 additional
> slave instances that use the BLACKHOLE storage engine to replicate one
> database each and create their own binlog files. The servers at the
> client sites then replicate from those 5 slaves.
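>
> For illustration, a minimal sketch of one such blackhole relay
> instance, assuming its tables have been altered to the BLACKHOLE
> engine (the names, port, and paths below are placeholders):
>
>   [mysqld]
>   server-id       = 101
>   port            = 3311
>   datadir         = /var/lib/mysql-relay1
>   replicate-do-db = client1      # replicate only this client's database
>   log-bin         = relay1-bin   # the relay's own binlog for the client's slave
>   log-slave-updates              # write the replicated events into that binlog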
>
> The advantage of this over moving the databases to their own server
> instances outright is that there are minimal changes to the main
> server config, so the chances of me screwing up a working website are
> minimised, even if I manage to screw up the replication. It does mean
> duplicate logs, but the main instance only needs to hold its binlogs
> long enough for the local slaves to keep up.
>
> However, to do this I still need help working out the "best" way to
> run multiple database instances under Debian/Ubuntu using their
> standard packages; failing that, I want to look at writing some
> software to split the binlogs into distinct files per database, if
> that is possible (and fairly straightforward). Or I'm simply looking
> for suggestions as to how other people do this.
>
> Mark Rogers
>
> On 07/06/10 18:57, Rick James wrote:
>> Or...
>> 5 slaves that you control.  Each would be configured with a single
>> "replicate-do-db=...".  The database named would correspond to the
>> client.  The client machine would then replicate from that slave,
>> and not receive any data other than his database.
>> (For more efficiency, your master could have "binlog-do-db=..." 5
>> times -- this would limit how much is sent to your 5 slaves.)
>> BLACKHOLE is not needed with the above design.  Nor do I see a way
>> to use BLACKHOLE while sending _only_ the client's data to his machine.
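>>
>> A minimal sketch of that layout (database names are placeholders;
>> note that binlog-do-db and replicate-do-db filter on the current
>> default database under statement-based replication):
>>
>>   # master: log only the five client databases
>>   [mysqld]
>>   server-id    = 1
>>   log-bin      = master-bin
>>   binlog-do-db = client1
>>   binlog-do-db = client2    # ...repeated once per client database
>>
>>   # one of the five intermediate slaves
>>   [mysqld]
>>   server-id       = 2
>>   log-bin         = slave1-bin
>>   log-slave-updates             # so the client machine can replicate from this slave
>>   replicate-do-db = client1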
>>
>> On 6/7/10 6:24 AM, Mark Rogers wrote:
>>> Situation:
>>> Shared server (over which I have control), running about 30
>>> databases (for 30 websites). I would like to replicate (say) 5 of
>>> those databases to their (separate) client's premises.
>>>
>>> Which is the best way to do this? The host server runs Ubuntu.
>>> Actually it's a virtual server with 512MB RAM, which is easily
>>> adequate for its current load.
>>>
>>> Obviously, if I log all 5 databases to a single binlog and replicate
>>> from there, each of the 5 clients will see the data for all 5
>>> databases, which is a bandwidth and security issue. I know that
>>> there are two standard options to avoid this: run those 5 databases
>>> in 5 separate database server instances, or run 5 additional
>>> instances using BLACKHOLE tables, which generate their own binlogs.
>>> Doing either with a stock Ubuntu (Debian) install doesn't seem
>>> particularly straightforward, and I've not seen any documentation
>>> for it (other than doing a fresh install from source), which is not
>>> practical on an existing server (and in any case I would like to
>>> stick with packages from the repositories that are easy to keep up
>>> to date). I also have concerns about the memory cost of creating
>>> extra instances (or is this not a significant issue)?
>>>
>>> Other options include writing something that can interpret the
>>> binlog and split it into separate binlog files for separate
>>> databases (would that be hard?).
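>>>
>>> For what it's worth, mysqlbinlog can already filter its decoded
>>> output to a single database, though it emits SQL text rather than a
>>> new binlog file (the file and database names below are placeholders):
>>>
>>>   mysqlbinlog --database=client1 master-bin.000001 > client1.sql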
>>>
>>
>
>


-- 
Mats Kindahl
Lead Software Developer
MySQL Replication, Oracle
