So I followed your steps and here is what I get (when I try to do the --apply-log):
130718 10:28:02 InnoDB: Error: all log files must be created at the same time.
130718 10:28:02 InnoDB: All log files must be created also in database creation.
130718 10:28:02 InnoDB: If you want bigger or smaller log files, shut down the
130718 10:28:02 InnoDB: database and make sure there were no errors in shutdown.
130718 10:28:02 InnoDB: Then delete the existing log files. Edit the .cnf file
130718 10:28:02 InnoDB: and start the database again.
xtrabackup: innodb_init(): Error occured.
innobackupex: ibbackup failed at /usr/bin/innobackupex line 389.
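For what it's worth, that InnoDB error generally means the ib_logfile* files in the restored data directory don't all have the same size (or don't match the innodb_log_file_size setting in the server's .cnf file). A quick sanity check could look like this sketch (the /var/lib/mysql path is an assumption for a typical install):

```shell
# Sketch only: print the distinct sizes of the InnoDB redo log files in
# a datadir. More than one line of output means the ib_logfile* files
# were not created together, which triggers the error quoted above.
check_log_sizes() {
  for f in "$1"/ib_logfile*; do
    wc -c < "$f"
  done | sort -u
}

# Example (assumed datadir path):
# check_log_sizes /var/lib/mysql
```

If the sizes differ, the fix InnoDB suggests is to shut down cleanly, remove the old ib_logfile* files, and let the server recreate them.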
Systems Services Librarian
Vanderbilt University Library
From: Marcus Bointon [mailto:marcus@stripped]
Sent: Thursday, July 18, 2013 8:56 AM
To: McGranahan, Jamen
Subject: Re: 2 slaves - one Master; continues to fail
On 18 Jul 2013, at 15:29, "McGranahan, Jamen" <jamen.mcgranahan@stripped> wrote:
> Thanks Marcus. I will try that, though I just realized this morning that I missed a
> step - the --apply-log step. Though when I tried it, I got even more errors. This is
> making me mad/frustrated. This should not be this complicated.
Here's my procedure (I'm also using mmm and monit, which complicates things a little, but
you can ignore those bits):
Assuming you've got the latest backup from the master (daily.1.tgz) on your slave.
* If you're using mmm, mark the dead slave as offline on the monitor host: mmm_control
set_offline db2
* Copy the backup file to the broken slave (use scp -l 400000 in order not to starve the
master of network bandwidth for live clients)
* Turn off process monitoring so mysql will not be restarted automatically: monit
unmonitor mysql
* On the slave, stop mysql: service mysql stop
* On the master (if you're doing circular replication), stop the mysql slave: STOP SLAVE;
* Move the old MySQL data directory out of the way: mv /var/lib/mysql /var/lib/mysql2 (or
compress it somewhere if you're short on disk space)
* Make a new data directory and move into it: mkdir /var/lib/mysql; cd /var/lib/mysql
* Untar the backup file into the new folder: tar xvzif daily.1.tgz. Note the i option on
the tar operation - it will not work without it. This will take up to about 40 minutes.
* Run the innobackupex tool on the extracted files: innobackupex --apply-log
--use-memory=6G --ibbackup=xtrabackup /var/lib/mysql. This only takes a few seconds; use a
smaller memory amount on a smaller server.
* Assuming that completes successfully, delete the backup and set ownership of the files:
rm daily.1.tgz; chown -R mysql:mysql /var/lib/mysql
* Restart mysql: service mysql start
* Get the master log file name and position of the backup (note: NOT the slave info in
xtrabackup_slave_info): cat xtrabackup_binlog_info. It will say something like
db-bin.000916 13889427.
* Connect to MySQL and check that your databases and tables are all there.
* Reset the replication settings using the details you got about the logs: CHANGE MASTER
TO MASTER_HOST='192.168.1.10', MASTER_USER='replica', MASTER_PASSWORD='password',
MASTER_LOG_FILE='db-bin.000916', MASTER_LOG_POS=13889427; (change these to match your real
DB server details).
* Restart the slave: START SLAVE;
* Check the status of the slave as it catches up with the master until
'seconds_behind_master' is 0: SHOW SLAVE STATUS\G
* Re-enable monitoring: monit monitor mysql
* Once it's caught up, you can restart circular replication, if needed:
* On the slave:
* FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS; Note the log file name and position
(something like mysql-bin.000031 and 17244785).
* On the master:
* CHANGE MASTER TO MASTER_HOST='192.168.1.11', MASTER_USER='replica',
MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000031', MASTER_LOG_POS=17244785;
inserting the values from the slave we just looked at.
* START SLAVE;
* On the slave:
* UNLOCK TABLES;
* Tell mmm the slave is back online: mmm_control set_online db2
* ...and you're done.
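The CHANGE MASTER step above is the easiest one to fumble when retyping coordinates by
hand. As a sketch (the host, user, and password are just the example values from this
message, and the file layout assumed is the usual two whitespace-separated fields in
xtrabackup_binlog_info), a tiny helper can read the file and print the matching statement:

```shell
# Sketch: build the CHANGE MASTER TO statement from an
# xtrabackup_binlog_info file (path passed as $1). Host, user, and
# password are placeholder example values, not real credentials.
build_change_master() {
  # The file holds the master log file name and position separated by
  # whitespace, e.g.: db-bin.000916  13889427
  read log_file log_pos _ < "$1"
  cat <<SQL
CHANGE MASTER TO MASTER_HOST='192.168.1.10', MASTER_USER='replica',
MASTER_PASSWORD='password', MASTER_LOG_FILE='$log_file', MASTER_LOG_POS=$log_pos;
SQL
}

# Usage: build_change_master /var/lib/mysql/xtrabackup_binlog_info | mysql
```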
I've used this process many times.
Synchromedia Limited: Creators of http://www.smartmessages.net/ UK info@hand CRM solutions
marcus@stripped | http://www.synchromedia.co.uk/