That is also an option, but you can never be 100% sure that everything
in memory has made it to disk (buffers etc...),
nor that the copy is definitely good for disaster recovery.
What I meant is that the only way to have a 100% guaranteed consistent
binary backup is when the database is shut down.
Of course this is almost never an option, unless (tada) you have a slave
dedicated for that.
One remark on your note:
Just a note though, I noticed someone added replication to a slave as a backup option. I
really discourage that. Replication makes no guarantees that the data on your slave is
the same as the data on your master. Unless you're also checking consistency, a slave
should be treated as a somewhat unreliable copy of your data.
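Checking consistency usually means comparing per-table checksums between master and slave (for example the output of MySQL's CHECKSUM TABLE, or a tool like Percona's pt-table-checksum). As a sketch only, a small hypothetical helper that diffs two such result sets (table name mapped to checksum) could look like this:

```python
def diff_checksums(master, slave):
    """Compare {table: checksum} maps taken from master and slave.

    Returns the set of table names whose checksums differ, or that
    exist on one side but not the other. Inputs are plain dicts,
    e.g. parsed from CHECKSUM TABLE output on each server.
    """
    tables = set(master) | set(slave)
    return {t for t in tables if master.get(t) != slave.get(t)}

# Hypothetical example: one table drifted, one missing on the slave.
master = {"users": 1111, "orders": 2222, "logs": 3333}
slave = {"users": 1111, "orders": 9999}
print(sorted(diff_checksums(master, slave)))  # ['logs', 'orders']
```

An empty result means the checked tables matched at the moment the checksums were taken; it is still only a point-in-time check, not a guarantee.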
While it is true that replication makes no guarantees, if your slave is
not the same as the master and you rely on it for production, try
going to the business and saying "our slave (which at least 50% of our
applications use to read data) is not really in sync" and watch their
facial expressions.
Believe me, in many production environments the method used for backups
relies on the slave, not on the master.
It is so useful and important that you should put all your effort into
having a consistent read-only slave 'dedicated' only to backups, with
no other client messing with it.
Just my two cents
Gavin Towey wrote:
You can make binary backups from the master using filesystem snapshots. You only need to
hold a global read lock for a split second.
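A rough sketch of that approach, assuming the datadir sits on an LVM volume (the volume names, sizes, and paths below are all placeholders, not a tested recipe). The key detail is that FLUSH TABLES WITH READ LOCK is released when its connection closes, so the session must be kept open for the instant the snapshot is created:

```shell
#!/bin/sh
# Hold the global read lock in a background session; the SLEEP keeps
# the connection (and therefore the lock) alive while we snapshot.
mysql -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(60);" &
LOCK_PID=$!
sleep 2    # crude wait; a real script should poll for the lock instead

# Snapshot the volume holding the MySQL datadir (placeholder names).
lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql

kill $LOCK_PID    # closing the session releases the global read lock

# Copy the datadir from the snapshot at leisure, then drop the snapshot.
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ /backups/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```

The lock is held only for the seconds it takes lvcreate to run; the slow part (copying the data) happens against the snapshot with the server fully unlocked.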