From:Reindl Harald Date:November 1 2012 4:00pm
Subject:Re: Mysql backup for large databases
as i said:

use a replication slave dedicated to backups;
you can even let a slave write a binlog and
sync another slave from it

* rsync backups transfer only the differences
* they are extremely fast after the first run
* a dedicated backup-slave has ZERO impact on production

i have been doing daily rsync backups of 1.5 TB of data
over a WAN link for years, and the real traffic is
between 2 and 5 GB per day

On 01.11.2012 16:53, machiel.richards@stripped wrote:
> Well, the biggest problem we have to answer for the clients is the following:
> 1. A backup method that doesn't take long and doesn't impact the system.
> 2. The restore needs to be done as quickly as possible in order to minimize
> downtime.
> 
> The one client is running master-master replication, with the master server in the USA and
> the slave in South Africa. They need the master backup to be done in the States.
> 
> 
> -----Original Message-----
> From: Reindl Harald <h.reindl@stripped>
> Date: Thu, 01 Nov 2012 16:49:45 
> To: mysql@stripped<mysql@stripped>
> Subject: Re: Mysql backup for large databases
> 
> good luck
> 
> i would call snapshots on a running system much dumber
> than "innodb_flush_log_at_trx_commit = 2" on systems with
> 100% stable power, instead of wasting IOPS on shared storage
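For reference, the setting being argued about lives in my.cnf; this is a sketch of its standard meanings, not a recommendation from this thread:

```ini
# my.cnf fragment (sketch) -- the durability/IOPS trade-off under discussion
[mysqld]
# 1 = flush and sync the InnoDB log at every commit (full durability, most IOPS)
# 2 = write at commit, sync roughly once per second (up to ~1s of commits
#     can be lost on an OS crash or power loss)
# 0 = write and sync roughly once per second (up to ~1s can be lost even
#     on a mysqld crash alone)
innodb_flush_log_at_trx_commit = 2
```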
> 
> On 01.11.2012 16:45, Singer Wang wrote:
>> Assuming you're not doing dumb stuff like innodb_flush_log_at_trx_commit = 0 or 2
>> etc., you should be fine. We have been using the trio FLUSH TABLES WITH READ LOCK,
>> xfs_freeze, snapshot for months now without any issues. And we test the backups
>> (we load the backup into staging once a day, and into dev once a week).
>>
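The quoted trio (FLUSH TABLES WITH READ LOCK, xfs_freeze, snapshot) could be sketched roughly as below; the LVM volume, mount point, snapshot size, and the background-session trick for holding the lock are all assumptions, not Singer's actual script:

```shell
# snapshot_backup DATADIR LV -- sketch of the lock/freeze/snapshot sequence.
snapshot_backup() {
    datadir=$1
    lv=$2
    # Hold the global read lock in a background client session
    # (the lock is released when that session ends).
    mysql -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(60);" &
    lock_pid=$!
    sleep 2                                   # crude wait until the lock is taken
    xfs_freeze -f "$datadir"                  # freeze the FS for a point-in-time image
    lvcreate -s -L 10G -n mysql-snap "$lv"    # LVM snapshot (assumed layout)
    xfs_freeze -u "$datadir"                  # thaw the filesystem
    kill "$lock_pid"                          # drop the read lock
}
```

For example: `snapshot_backup /var/lib/mysql /dev/vg0/mysql` (requires root, an XFS datadir, and free extents in the volume group for the snapshot).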
>> On Thu, Nov 1, 2012 at 11:41 AM, Reindl Harald <h.reindl@stripped> wrote:
>>
>>     > Why do you need downtime?
>>
>>     because mysqld has many buffers in memory and there
>>     is no atomic "flush the buffers in the daemon and freeze the backend FS"
>>
>>     a short while ago there was a guy on this list who had to learn
>>     this the hard way, with a corrupt slave taken from a snapshot
>>
>>     that's why i would ALWAYS do master/slave, which means being down
>>     ONE time (rsync; stop master; rsync; start master) for a small
>>     time window; after that you can stop the slave, take a
>>     100% consistent backup of its whole datadir and start
>>     the slave again, which will replay all transactions from the
>>     binary log that happened in the meantime
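In script form, the slave-side part of that procedure might look like the sketch below; the paths, the use of mysqladmin/systemctl, and the function name are assumptions, not the author's actual commands:

```shell
# slave_backup -- sketch of: stop replication, copy the datadir while
# mysqld is down, restart, and let the slave catch up from the binlog.
slave_backup() {
    mysql -e "STOP SLAVE;"
    mysqladmin shutdown                       # stop mysqld cleanly on the slave
    rsync -a /var/lib/mysql/ /backup/mysql/   # 100% consistent copy of the datadir
    systemctl start mysqld                    # bring the slave back up (assumed init system)
    mysql -e "START SLAVE;"                   # replays the binlog events missed meanwhile
}
```

The point of doing this on a slave is that the master never stops: the slave's downtime is invisible to the application, and replication closes the gap automatically afterwards.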

