Sorry, I think I just worded it badly and we're talking about the same thing.
What I meant was: on the main (physical) server I run the main
database instance (30 databases, with binary logging enabled for the 5
to be replicated), but also 5 additional slave instances that each use
the Blackhole engine to replicate one database and write their own
binlog files. The client-site servers then replicate from those 5
slaves.
The advantage of this over moving the databases to their own server
instances outright is that there are minimal changes to the main
server config, so the chances of me screwing up a working website are
minimised even if I manage to screw up the replication. It does mean
duplicate logs, but the main instance only needs to hold binlog data
long enough for the local slaves to keep up.
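Concretely, something like this on the main instance should keep the
duplicate logs short-lived (the retention value is just an example):

    [mysqld]
    expire_logs_days = 2   # purge binlogs after 2 days; the relay
                           # slaves are on the same box, so they
                           # should never lag that far behind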
However, to do this I still need help working out the "best" way to
run multiple database instances under Debian/Ubuntu using their
standard packages (a rough sketch of one approach follows below).
Failing that, I want to look at writing some software to split the
binlogs into distinct files per database, if that is possible (and
fairly straightforward). Or I'm simply looking for suggestions as to
how other people do this.
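For what it's worth, the closest thing I've found in the stock
packages is mysqld_multi, which reads numbered [mysqldN] groups from
my.cnf. A rough, unverified sketch (paths and ports illustrative):

    [mysqld_multi]
    mysqld     = /usr/bin/mysqld_safe
    mysqladmin = /usr/bin/mysqladmin

    [mysqld1]
    server-id  = 11
    port       = 3311
    socket     = /var/run/mysqld/mysqld1.sock
    pid-file   = /var/run/mysqld/mysqld1.pid
    datadir    = /var/lib/mysql1

    # ... [mysqld2] to [mysqld5] along the same lines ...

and then:

    mysqld_multi start 1
    mysqld_multi report

On the binlog-splitting idea: mysqlbinlog already has a per-database
filter, though as I understand it, for statement-based logs it only
matches on the default (USE'd) database:

    mysqlbinlog --database=client1_db mysql-bin.000042 > client1.sql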
On 07/06/10 18:57, Rick James wrote:
> 5 slaves that you control. Each would be configured with a single
> "replicate-do-db=...". The database named would correspond to
> the client. The client machine would then replicate from that slave,
> and not receive any data other than his database.
> (For more efficiency, your master could have "binlog-do-db=..."
> 5 times -- this would limit how much is sent to your 5 slaves.)
> Black Hole is not needed with the above design. Nor do I see a way to
> use Black Hole while sending _only_ the client's data to his machine.
> On 6/7/10 6:24 AM, Mark Rogers wrote:
>> Shared server (over which I have control), running about 30
>> databases (for 30 websites). I would like to replicate (say) 5 of
>> those databases to their (separate) clients' premises.
>> What is the best way to do this? The host server runs Ubuntu;
>> actually it's a virtual server with 512MB RAM, which is easily
>> adequate for its current load.
>> Obviously, if I log all 5 databases to a single binlog and
>> replicate from there, each of the 5 clients will see the data for
>> all 5 databases, which is a bandwidth and security issue. I know
>> there are two standard options to avoid this: run those 5 databases
>> in 5 separate database server instances, or run 5 additional
>> instances using the Blackhole engine, which generate their own
>> binlogs. Doing this with a stock Ubuntu (Debian) install doesn't
>> seem particularly straightforward and I've not seen any
>> documentation for it (other than by doing a fresh install from
>> source), which is not practical on an existing server (and in any
>> case I would like to stick with packages from the repositories that
>> are easy to keep up to date). Also, I have concerns about the
>> memory cost of the extra instances (or is that not a significant
>> issue)?
>> Other options include writing something that can interpret the binlog
>> and split it into separate binlog files for separate databases (would
>> this be hard?)
Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG