List: General Discussion
From: Johan De Meersman
Date: April 21 2010 6:50am
Subject: Re: better way to backup 50 Gig db?
How does the sequence below not guarantee me 100% that everything that
can be consistent, is?

mysql> FLUSH TABLES WITH READ LOCK;
unixhost# sync
unixhost# lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql-data  # VG/LV names are examples
mysql> UNLOCK TABLES;
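
That sequence is easy to script, too. A rough bash sketch, assuming the
datadir lives on /dev/vg0/mysql-data and credentials come from ~/.my.cnf
(all names are examples, and the sleep is a crude stand-in for properly
waiting on the lock):

#!/bin/bash
# Hold FLUSH TABLES WITH READ LOCK open in one client session while the
# snapshot is taken; the lock lives only as long as that session.
exec 3> >(mysql)                  # one long-lived mysql session on fd 3
echo "FLUSH TABLES WITH READ LOCK;" >&3
sleep 5                           # crude: give the server time to grant the lock
sync                              # flush dirty OS buffers to disk
lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql-data
echo "UNLOCK TABLES;" >&3
exec 3>&-                         # close the session, releasing the lock for sure
# now mount the snapshot read-only and copy the files off at leisure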

I agree that there may be data inconsistencies within MyISAM, as it has no
transactions, but there's also no guarantee that an application isn't in
the middle of a series of insert statements the moment you shut down your
database or your application. You can't magically get full consistency out
of your database if your application hasn't been designed for it.
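
To make "designed for it" concrete: with InnoDB, wrapping related statements
in one transaction means a snapshot taken mid-write recovers to before or
after the whole unit, never halfway (a minimal sketch, table names invented):

mysql> START TRANSACTION;
mysql> INSERT INTO orders (id, customer) VALUES (42, 'acme');
mysql> INSERT INTO order_lines (order_id, item) VALUES (42, 'widget');
mysql> COMMIT;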

Shutting systems down for backups may be fine for small setups, but on a
server that hosts multiple critical applications, you just can't do that.
You back up as consistently as you can without downtime, and pick the time
of night with the lowest likelihood of activity for it.


On Wed, Apr 21, 2010 at 8:12 AM, Claudio Nanni <claudio.nanni@stripped> wrote:

> Gavin,
> Right, that is also an option, but you can never be 100% sure that
> everything in memory has made it to disk (buffers etc.), even if it is
> certainly good enough for disaster recovery.
> What I meant is that the only way to get a 100% guaranteed consistent
> binary backup is when the database is shut down.
> Of course that is almost never an option, unless (tada) you have a slave
> dedicated for that.
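>
> A cold-copy sketch of that (paths and init commands are examples, adjust
> for your distro):
>
> slavehost# mysqladmin shutdown
> slavehost# tar czf /backup/mysql-data.tar.gz /var/lib/mysql
> slavehost# /etc/init.d/mysql start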
> One remark on your note:
>
>
> Just a note though, I noticed someone added replication to a slave as a
> backup option.  I really discourage that.  Replication makes no guarantees
> that the data on your slave is the same as the data on your master.  Unless
> you're also checking consistency, a slave should be treated as a somewhat
> unreliable copy of your data.
>
> While it is true that replication makes no guarantees, if your slave is not
> the same as the master and you rely on it for production, you have bigger
> problems. Try telling the business "our slave, which at least 50% of our
> applications read from, is not really in sync" and watch their facial
> expressions!
> Believe me, in many production environments the backup method relies on the
> slave, not on the master. It is so useful and important that it is worth
> putting your effort into maintaining a consistent read-only slave dedicated
> only to backups, with no other client messing with it.
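>
> A simple spot check for that consistency (CHECKSUM TABLE is standard MySQL;
> the database and table names are invented, and writes must be quiet while
> you compare):
>
> master> CHECKSUM TABLE shop.orders;
> slave>  CHECKSUM TABLE shop.orders;
>
> If the two values differ, the slave has drifted and should be rebuilt
> before its backups are trusted.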
>
> Just my two cents
>
> Claudio
>
>
>
> Gavin Towey wrote:
>
> You can make binary backups from the master using filesystem snapshots.
>  You only need to hold a global read lock for a split second.
>
> Regards,
> Gavin Towey
>
>


-- 
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon an ass
