List: General Discussion
From: Steve Edberg
Date: August 19 2006 11:09am
Subject: Re: Managing big mysqldump files
At 4:03 PM +0530 8/19/06, Anil wrote:
>Hi List,
>
>We are facing a problem managing our mysqldump output file, which is
>currently 80 GB and growing by 2-3 GB daily, but we have a Linux
>partition of only 90 GB. Our backup process first generates a
>mysqldump of the whole database, then compresses the dump file and
>removes the original. Is there any way to get a compressed dump file
>directly, instead of generating the dump file and compressing it
>afterwards? Any ideas or suggestions, please?
>
>Thanks
>
>Anil


Short answer: Yes -

mysqldump <mysqldump options> | gzip > outputfile.gz
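
Should you need to restore from it later, you can reverse the pipe - 
something like this, substituting your own connection options and 
database name:

gunzip < outputfile.gz | mysql <mysql options> databasename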

Other alternatives:

You could direct output to a filesystem that is larger than the 90GB 
filesystem you mention (perhaps NFS mounted?).
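
For example, assuming the larger filesystem is mounted at /mnt/backup 
(the path here is just a placeholder):

mysqldump <mysqldump options> | gzip > /mnt/backup/outputfile.gz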

You could pipe the output of gzip through ssh to a remote server.
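
For example (the user, host and path are placeholders):

mysqldump <mysqldump options> | gzip | \
	ssh backupuser@remotehost 'cat > /path/to/outputfile.gz'

That way the full dump never has to fit on the local disk at all.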

You could use bzip2, which compresses substantially better than gzip 
but is significantly slower (that is, do 
mysqldump <mysqldump options> | bzip2 > outputfile.bz2).

Try 'man gzip' and 'man bzip2' for more info.

	steve

-- 
+--------------- my people are the people of the dessert, ---------------+
| Steve Edberg                                http://pgfsun.ucdavis.edu/ |
| UC Davis Genome Center                            sbedberg@stripped |
| Bioinformatics programming/database/sysadmin             (530)754-9127 |
+---------------- said t e lawrence, picking up his fork ----------------+