5 slaves that you control. Each would be configured with a single
"replicate-do-db=..." naming the database that corresponds to one
client. The client's machine would then replicate from that slave, and not
receive any data other than his own database.
(For more efficiency, your master could have "binlog-do-db=..." 5
times -- this would limit how much is sent to your 5 slaves.)
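A minimal sketch of that layout (the database names client1/client2 and the server-id values are made up for illustration; option names are from the MySQL manual):

```ini
# On the master (my.cnf) -- log only the per-client databases:
[mysqld]
server-id       = 1
log-bin         = mysql-bin
binlog-do-db    = client1    # repeat once per client database
binlog-do-db    = client2

# On the intermediate slave dedicated to client1:
[mysqld]
server-id       = 11
log-bin         = mysql-bin  # this slave must produce its own binlog
log-slave-updates            # so replicated events reach that binlog
replicate-do-db = client1    # keep only client1's data
```

Note that log-slave-updates is what makes the chained setup work: without it, the intermediate slave would not write the replicated events to its own binlog, and the client's machine downstream would receive nothing.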
The BLACKHOLE engine is not needed with the above design. Nor do I see a
way to use BLACKHOLE while sending _only_ the client's data to his machine.
On 6/7/10 6:24 AM, Mark Rogers wrote:
> Shared server (over which I have control), running about 30 databases
> (for 30 websites). I would like to replicate (say) 5 of those
> databases to their (separate) client's premises.
> Which is the best way to do this? The host server is Ubuntu. Actually
> it's a virtual server with 512MB RAM, which is easily adequate for
> its current load.
> Obviously, if I dump all the 5 databases to a binlog then replicate
> from there, each of the 5 clients will see all of the data for all 5
> databases, which is a bandwidth and security issue. I know that there
> are two standard options to avoid this: run those 5 databases in 5
> separate database server instances, or run 5 additional instances
> using Blackhole databases which generate their own binlogs. Doing this
> with a stock Ubuntu (Debian) install doesn't seem particularly
> straightforward and I've not seen any documentation for doing this
> (other than by doing a fresh install from source), which is not
> practical on an existing server (and in any case I would like to stick
> with packages from the repositories that are easy to keep up to date).
> Also, I have concerns about creating extra instances from a memory
> point of view (or is this not a significant issue)?
> Other options include writing something that can interpret the binlog
> and split it into separate binlog files for separate databases (would
> this be hard?)
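On the last question: the stock mysqlbinlog tool can already do a crude
per-database split of an existing binlog file, which may be enough for an
offline approach (file name and database name below are made up):

```shell
# Extract only the statements logged against database client1
# from one binlog file, as replayable SQL:
mysqlbinlog --database=client1 /var/log/mysql/mysql-bin.000123 > client1.sql
```

This is not a substitute for live replication filtering -- it works on one
log file at a time, and the --database filter has the same caveats as
binlog-do-db with cross-database statements -- but it avoids writing a
binlog parser from scratch.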
Rick James - MySQL Geek