Just one more suggestion for doing full backups of large databases:
- Use a dedicated slave (either a physical machine, or a disk cabinet attached
over iSCSI to a machine with just a bunch of RAM, etc.).
- Keep the slave delayed by a certain time (e.g. 1 hour, 2 hours... depends on
your workload) using Percona Toolkit (pt-slave-delay).
- Make sure that slave has log-slave-updates in my.cnf.
- Make sure the master keeps the binary logs necessary for point-in-time
recovery if needed.
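In config terms, the setup above looks roughly like this (the option names are the standard MySQL/Percona ones, but the values and the 1-hour delay are placeholders to tune for your workload); you would then run `pt-slave-delay --delay 1h --interval 15s localhost` against the slave to hold it back:

```ini
# my.cnf on the delayed slave
[mysqld]
log-bin          = mysql-bin   # needed so log-slave-updates has somewhere to write
log-slave-updates              # record replicated statements in the slave's own binlog

# my.cnf on the master: keep enough binlogs for point-in-time recovery
[mysqld]
expire_logs_days = 7           # how far back you may need to roll forward
```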
If something goes wrong you have a full copy of your database, just delayed
a few hours. If you want to recover from a DROP DATABASE (or any big failure
executed from the MySQL CLI) on the master, you can sync your delayed
slave up to that specific statement: go through your master's binlog, locate
the "DROP DATABASE" statement, and run START SLAVE SQL_THREAD UNTIL
master_log_file = 'mysql-bin.XXX', master_log_pos=XXX; with the position
right before the bad one. That way you'd have your database exactly as it was
just before the wrong statement was executed.
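The binlog scan can be sketched in a few lines. Decode the binlog with mysqlbinlog, then look for the last "# at N" position header before the bad statement. The sample text and the find_stop_position helper below are illustrative only (real mysqlbinlog output has more fields), but the position headers are what you would actually grep for:

```python
import re

def find_stop_position(binlog_text, bad_statement):
    """Scan mysqlbinlog text output and return the log position of the
    event containing bad_statement.  Replaying with UNTIL master_log_pos
    set to this value stops right before executing it."""
    last_pos = None
    for line in binlog_text.splitlines():
        # event headers look like: "# at 4711"
        m = re.match(r"#\s+at\s+(\d+)", line)
        if m:
            last_pos = int(m.group(1))
        elif bad_statement.lower() in line.lower():
            return last_pos
    return None

# Illustrative excerpt of mysqlbinlog output:
sample = """\
# at 120
#121101 16:49:45 server id 1  end_log_pos 245  Query
use `shop`;
INSERT INTO orders VALUES (1, 'ok');
# at 245
#121101 16:50:02 server id 1  end_log_pos 330  Query
DROP DATABASE shop;
"""

print(find_stop_position(sample, "DROP DATABASE"))  # -> 245
```

You would then plug that number into START SLAVE SQL_THREAD UNTIL master_log_pos on the delayed slave.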
As Rick said - if you're thinking about doing snapshots, make sure you stop
your mysql daemon on the delayed slave so that everything is committed
to disk. Otherwise you can end up with a corrupted DB which won't start when
you need it.
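That stop-snapshot-start sequence is short enough to script; a sketch assuming LVM snapshots (the volume group, logical volume name and snapshot size are placeholders for your setup):

```
sudo service mysql stop                     # let InnoDB flush everything to disk
sudo lvcreate --snapshot --size 10G \
     --name mysql-backup /dev/vg0/mysql     # point-in-time copy of the datadir volume
sudo service mysql start                    # slave catches up from the master again
# ...then mount the snapshot read-only and copy the datadir off-host
```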
And lastly, but probably most importantly: test your backups periodically!!
Hope this helps
2012/11/1 Karen Abgarian <abvk@stripped>
> For doing backups on the primary database, I know nothing better than have
> your tables in InnoDB and use Innobackup (or MySQL Enterprise backup).
> This, however, still has the possibility of hanging as it is using FLUSH
> TABLES WITH READ LOCK for taking backups of MyISAM tables. One may want
> to script it to kill the backup if the wait exceeds some threshold. The
> backup taken this way also supports incremental backups, which may reduce the
> backup time and size.
> For offloading the backups to a replica, there exist more options because
> the replica can be frozen and/or shut down. For an InnoDB database, it
> has to be shut down for taking a consistent backup. If it is not, it will
> result in cute little inconsistencies unless a DBA is one lucky guy and
> always wins playing roulette.
> Combining the two, I like the idea of doing EM backup on a replica and
> having all tables in InnoDB.
> After a backup has been taken, it will eventually need to be restored
> unless someone just likes taking them. For this reason, it will have to
> be brought to the recovered system. Unless somebody knows in advance when
> the database would need to be recovered (e.g. it is known that a bad guy
> always corrupts it on Monday mornings), the backup will need to be
> available for restore always. These considerations usually imply things
> like shared filesystems between primary and replica, rejecting backups for
> recoveries across datacenters and the like.
> Backing up the binary logs as well provides continuous recovery coverage
> instead of discrete points.
> On 01.11.2012, at 8:53, machiel.richards@stripped wrote:
> > Well, the biggest problems we have to answer for the clients are the
> > following:
> > 1. The backup method shouldn't take long and shouldn't impact the system.
> > 2. The restore needs to be done as quickly as possible in order to
> minimize downtime.
> > One client is running master-master replication with the master server
> in the USA and the slave in South Africa. They need the master backup to be
> done in the States.
> > Sent via my BlackBerry from Vodacom - let your email find you!
> > -----Original Message-----
> > From: Reindl Harald <h.reindl@stripped>
> > Date: Thu, 01 Nov 2012 16:49:45
> > To: mysql@stripped<mysql@stripped>
> > Subject: Re: Mysql backup for large databases
> > good luck
> > i would call snapshots on a running system much more dumb
> > than "innodb_flush_log_at_trx_commit = 2" on systems with
> > 100% stable power, instead of wasting IOPS on shared storage
> > On 01.11.2012 16:45, Singer Wang wrote:
> >> Assuming you're not doing dumb stuff like innodb_flush_log_at_trx_commit = 0
> or 2, etc., you should be fine. We have been
> >> using the trio: flush tables with read lock, xfs_freeze, snapshot for
> months now without any issues. And we test
> >> the backups (we load the backup into a staging once a day, and dev once
> a week)
> >> On Thu, Nov 1, 2012 at 11:41 AM, Reindl Harald
> <h.reindl@stripped> wrote:
> >>> Why do you need downtime?
> >> because mysqld has many buffers in memory and there
> >> is no atomic "flush buffers in daemon and freeze backend FS"
> >> a short while ago there was a guy on this list who had to learn
> >> this the hard way, with a corrupt slave taken from a snapshot
> >> that's why i would ALWAYS do master/slave, which means ONE time
> >> down (rsync; stop master; rsync; start master) for a small
> >> time window, and after that you can stop the slave, take a
> >> 100% consistent backup of its whole datadir and start
> >> the slave again, which will replay all transactions from the
> >> binary log that happened in the meantime
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe: http://lists.mysql.com/mysql