From: Gavin Towey
Date: April 21 2010 5:26pm
Subject: RE: better way to backup 50 Gig db?
Claudio,

So InnoDB may not be consistent between memory and disk at all times, but that's
actually not important.  What is important is that the files on disk are consistent
with a specific binlog position.  That's all that is needed for a consistent backup,
and it can be done with filesystem snapshots.

InnoDB may continue to do background flushing even during a FLUSH TABLES WITH READ LOCK,
but it always keeps consistency between its log files, the binlog, and the tablespaces.

When you load your snapshot back into a MySQL instance, you'll often see it go through
crash recovery as it applies log records to the tablespaces and rolls back any
transactions that were still open when the snapshot was taken; once that's done, your
data will be consistent with the binlog position that was recorded while the read lock
was held.
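
Concretely, the sequence looks something like this (the volume names, sizes, and
paths here are only placeholders for your own setup):

mysql>    FLUSH TABLES WITH READ LOCK;
mysql>    SHOW MASTER STATUS;    -- record File and Position alongside the backup
unixhost# lvcreate -s -L 5G -n mysql-snap /dev/vg0/mysql
mysql>    UNLOCK TABLES;
unixhost# mount /dev/vg0/mysql-snap /mnt/snap
unixhost# tar czf /backups/mysql-backup.tar.gz -C /mnt/snap .
unixhost# umount /mnt/snap
unixhost# lvremove -f /dev/vg0/mysql-snap

The read lock is held only for the instant it takes to create the snapshot; the
tar of the snapshot volume can then run for as long as it needs without blocking
the server.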

I do this every day on many servers, both for backups and for creating new slaves.
It always works.  Neither of the links you gave contradicts this; in fact, they both
essentially say "this works great, as long as you're aware of the caveats".

Regards,
Gavin Towey


-----Original Message-----
From: Claudio Nanni [mailto:claudio.nanni@stripped]
Sent: Wednesday, April 21, 2010 1:51 AM
To: Johan De Meersman
Cc: mysql@stripped
Subject: Re: better way to backup 50 Gig db?

Johan,
Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK
not enough? :)
InnoDB has buffers and background activity going on even after you have locked the
tables, and you cannot be sure its buffers are on disk when you take the snapshot.
Again, you might get lucky and trust InnoDB recovery; what I am saying is that
there is only one 100% guaranteed safe way to take binary backups.

Have a look at these; they are very interesting:
http://forge.mysql.com/w/images/c/c1/MySQL_Backups_using_File_System_Snapshots-2009-02-26.pdf
http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

Cheers

Claudio



2010/4/21 Johan De Meersman <vegivamp@stripped>

> How does the procedure below not guarantee 100% that everything that can be
> consistent, is?
>
> mysql> flush tables with read lock;
> unixhost# sync
> unixhost# lvm create snapshot
> mysql> unlock tables;
>
> I agree that there may be data inconsistencies within MyISAM, as it has no
> transactions, but there's also no guarantee that there isn't an application
> in the middle of multiple insert statements the moment you shut down your
> database or your application. You can't magically get full consistency out
> of your database if your application hasn't been designed for it.
>
> Shutting systems down for backup may be fine for small setups, but on a
> server that hosts multiple critical applications, you just can't do that.
> You back up as consistently as you can without downtime, and pick the time
> of night with the lowest likelihood of activity for it.
>
>
> On Wed, Apr 21, 2010 at 8:12 AM, Claudio Nanni <claudio.nanni@stripped> wrote:
>
>> Gavin,
>> Right,
>> that is also an option, but you are never really 100% sure that everything
>> in memory has made it to disk (buffers etc...), even if it is definitely
>> good enough for disaster recovery.
>> What I meant is that the only way to get a 100% guaranteed consistent
>> binary backup is when the database is shut down.
>> Of course this is almost never an option, unless (tada) you have a slave
>> dedicated to that.
>> One remark on your note:
>>
>>
>> Just a note though, I noticed someone added replication to a slave as a
>> backup option.  I really discourage that.  Replication makes no guarantees
>> that the data on your slave is the same as the data on your master.  Unless
>> you're also checking consistency, a slave should be treated as a somewhat
>> unreliable copy of your data.
>>
>> While it is true that replication makes no guarantees, if your slave is
>> not the same as the master and you rely on it for production, you have
>> bigger problems:
>> try going to the business and saying "our slave (which at least 50% of our
>> applications use to read data) is not really in sync", and watch their
>> facial expressions!
>> Believe me, in many production environments the backup method relies on
>> the slave, not on the master.
>> It is so useful and important that it is worth putting your effort into
>> having a consistent read-only slave 'dedicated' only to backups, with no
>> other client messing with it.
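>>
>> For that consistency checking, something like Maatkit's mk-table-checksum
>> can compare a master with its slaves (the hostname and checksum table name
>> below are just placeholders for your own environment):
>>
>>   mk-table-checksum --replicate test.checksum h=master-host
>>   mk-table-checksum --replicate-check 2 h=master-host
>>
>> The first command checksums the tables through replication; the second
>> walks the slaves and reports any tables whose checksums differ from the
>> master's.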
>>
>> Just my two cents
>>
>> Claudio
>>
>>
>>
>> Gavin Towey wrote:
>>
>> You can make binary backups from the master using filesystem snapshots.
>>  You only need to hold a global read lock for a split second.
>>
>> Regards,
>> Gavin Towey
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:    http://lists.mysql.com/mysql?unsub=1
>>
>>
>
>
> --
> Beer with grenadine
> Is like mustard with wine
> She who drinks it is a prude
> He who drinks it soon becomes an ass
>



--
Claudio
