The FLUSH TABLES WITH READ LOCK issue has to do with copying the files
off the disk, not with using mysqldump's SQL. The problem is that FLUSH
TABLES WITH READ LOCK is enough to prevent clients from modifying
InnoDB, but it doesn't make InnoDB "hold still": background threads
continue to modify the data files. If you do a FLUSH TABLES WITH READ
LOCK and then copy the data files, the copies will almost certainly be
corrupted, because different parts of the files are copied at different
times while InnoDB is still writing to them.
If you use mysqldump --master-data, it turns on --lock-all-tables
automatically, giving you a consistent snapshot across all storage
engines. The downside is that mysqldump has to hold a read lock the
entire time it's dumping data. If you're only using InnoDB, you can
specify --master-data --single-transaction instead: mysqldump will hold
FLUSH TABLES WITH READ LOCK only long enough to record the master
coordinates and start a transaction. With the transaction started,
InnoDB will only return rows that were committed before the transaction
began; other transactions can proceed, and the dump won't see their rows.
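For reference, the two invocations described above look roughly like
this (the database selection and output filenames are placeholders, not
from the thread; run against your own server):

```shell
# Consistent across ALL storage engines, but --master-data implies
# --lock-all-tables, so a global read lock is held for the whole dump:
mysqldump --master-data --all-databases > full_backup.sql

# InnoDB-only setups: the read lock is held just long enough to record
# the binlog coordinates and open a consistent-snapshot transaction:
mysqldump --master-data --single-transaction --all-databases > innodb_backup.sql
```

The dump file then starts with a CHANGE MASTER TO statement containing
the binlog coordinates, so you don't need a separate SHOW MASTER STATUS
step when setting up the slave.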
On Fri, Apr 18, 2008 at 4:14 PM, Rick James <rjames@stripped> wrote:
> Rumor has it that InnoDB used to still have some things in RAM after the "FLUSH
> TABLES WITH READ LOCK", thereby disallowing it as a safe way to dump InnoDB tables.
> Another rumor has it that that has been fixed. [Sorry for lack of details; hope someone
> will respond with details.]
> I prefer to stop mysql, copy the necessary directories, then restart mysql. An
> improvement on that is to use Snapshots (eg LVM on Linux). They make that extremely fast,
> hence very practical. (The copy can be done from the snapshot _after_ restarting mysql.)
> > -----Original Message-----
> > From: Marcus Bointon [mailto:marcus@stripped]
> > Sent: Friday, April 18, 2008 2:42 AM
> > To: replication@stripped
> > Subject: Re: innodb replication
> > On 18 Apr 2008, at 10:27, Ed W wrote:
> > >> I'm going to set up an InnoDB database replication, and I have
> > >> some doubts about the step of backing up the master database.
> > >>
> > >> Normally, I do a "FLUSH TABLES WITH READ LOCK", back up the data
> > >> files (or use mysqldump), get the current log coordinates with
> > >> "SHOW MASTER STATUS", and unlock the tables. Do exactly the same
> > >> steps work for InnoDB databases?
> > >>
> > >> According to some texts, it seems not, but I haven't found a
> > >> clear cookbook.
> > >>
> > >>
> > >> Is just doing a "mysqldump --master-data --single-transaction"
> > >> enough to replace the above steps?
> > >>
> > >>
> > >
> > > How would you put these two things together, in the case of a
> > > server using mixed table types, for a clean backup that can be
> > > used to restart synchronisation?
> > You can't. Only InnoDB supports single-transaction, so although it
> > would still generate a dump file for other table types, you'd have
> > no assurance that they are clean, because writes could be happening
> > as you do it. This is one of the advantages of using InnoDB.
> > You can of course do separate dumps for other DBs, and because that
> > wouldn't need to include the innodb tables, it would at least reduce
> > the time that your tables are locked.
> > You could also look at maatkit's parallel dump/load scripts.
> > Marcus
> > --
> > Marcus Bointon
> > Synchromedia Limited: Creators of http://www.smartmessages.net/
> > UK resellers of info@hand CRM solutions
> > marcus@stripped | http://www.synchromedia.co.uk/
> > --
> > MySQL Replication Mailing List
> > For list archives: http://lists.mysql.com/replication
> > To unsubscribe:
> > http://lists.mysql.com/replication?unsub=1