I'm building a MySQL replication solution for one of our clients, and I
would like to minimize downtime during the setup.
We've dealt with replication before, but in this case I would like to
avoid a full locked dump of the database if possible, since the database
is quite large.
My idea is to start the slave in sync with a single table, then add the
remaining databases/tables one by one.
That way, the only downtime at any point is while one table is being dumped.
In theory it should work; does anybody have experience with a
solution like this?
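For reference, this is the kind of replication filter the approach relies on in the slave's my.cnf (the database and table names here are just placeholders):

```ini
# /etc/mysql/my.cnf on the slave (Server2); names are placeholders
[mysqld]
replicate-do-table=mydb.Table1
# each run of the script below appends one more line, e.g.:
# replicate-do-table=mydb.Table2
```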
I've put together a script that adds a table to a running replication setup.
It assumes that replication is already working for one table and that the
databases have been created in advance.
I've removed the safety checks to keep it short.
echo "Step 1. stop slave @ Server2"
echo "STOP SLAVE SQL_THREAD;" | mysql
echo "Step 2. export table with locks in place and the --master-data option @ Server1, writing the dump to Server2"
ssh -i sshkey user@Server1 mysqldump --lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 $1 $2 > /opt/$1.$2.sql
binlogFile=$(head -n 25 /opt/$1.$2.sql | grep "CHANGE MASTER" | cut -d "'" -f 2)
binlogLine=$(head -n 25 /opt/$1.$2.sql | grep "CHANGE MASTER" | cut -d "=" -f 3- | cut -d ";" -f 1)
head -n 25 /opt/$1.$2.sql | grep "CHANGE MASTER"
echo "Parsed file: $binlogFile and position: $binlogLine."
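As a sanity check (outside the script), the two cut pipelines can be exercised against a hypothetical CHANGE MASTER comment line of the kind mysqldump --master-data=2 writes near the top of the dump; the file name and position here are made up:

```shell
# Hypothetical header line as written by mysqldump --master-data=2
sample="-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=1337;"

# Same parsing as the script: the file name sits between single quotes,
# the position is everything after the second "=" up to the ";"
binlogFile=$(echo "$sample" | grep "CHANGE MASTER" | cut -d "'" -f 2)
binlogLine=$(echo "$sample" | grep "CHANGE MASTER" | cut -d "=" -f 3- | cut -d ";" -f 1)

echo "file=$binlogFile pos=$binlogLine"
# prints: file=mysql-bin.000042 pos=1337
```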
echo "Step 3a. run slave until the sync point @ Server2"
echo "START SLAVE SQL_THREAD UNTIL MASTER_LOG_FILE = '$binlogFile', MASTER_LOG_POS = $binlogLine;" | mysql
echo "Step 3b. load table dump"
mysql $1 < /opt/$1.$2.sql
echo "Step 3c. add $1.$2 to my.cnf"
echo "replicate-do-table=$1.$2" >> /etc/mysql/my.cnf
echo "Step 4. wait until the slave reaches the sync point"
while [ "$(echo "SHOW SLAVE STATUS \G" | mysql | grep -c "Slave_SQL_Running: No")" -ne 1 ]; do
    sleep 5
done
echo "Step 5. restart mysql so the new replicate-do-table filter takes effect"
service mysql restart
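The step-4 wait can be sanity-checked in isolation by stubbing out the mysql call; slave_status below is a hypothetical stand-in for the real SHOW SLAVE STATUS pipeline:

```shell
# Stub standing in for: echo "SHOW SLAVE STATUS \G" | mysql
# It pretends the SQL thread has already stopped at the UNTIL position.
slave_status() { echo "    Slave_SQL_Running: No"; }

# Same polling logic as step 4: loop until the SQL thread reports stopped
while [ "$(slave_status | grep -c 'Slave_SQL_Running: No')" -ne 1 ]; do
    sleep 5
done
echo "slave reached the sync point"
```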
Thank you for your help,