I never experienced a dump that was slow due to the indexes. The indexes
aren't dumped anyway; they will be recreated when you import the dump
back, so it shouldn't matter. (And dropping them will cause problems if
the db is running.) So I wouldn't drop the indexes on your table if I
were you...
You're getting a high compression ratio (2.7G => 270 MB). Is it possible
that your dump is CPU bound? I have seen this quite often when using
bzip2, for example, which makes the dump take very long! You can see that
from top while the dump is running. If that's the case you could try gzip,
which takes much less CPU (but will give a bigger dump file).
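A rough way to check the compressor cost on your own machine (just a
sketch: `sample.txt` is stand-in data, not your real dump, and the
mysqldump pipeline at the end assumes a reachable server, so it is shown
commented out):

```shell
# Generate ~10 MB of compressible stand-in data (not your real dump).
head -c 10000000 /dev/zero | tr '\0' 'a' > sample.txt

# gzip: fast, moderate compression; bzip2: slower, tighter compression.
# The `time` output shows where the CPU goes.
time gzip  -k -f sample.txt   # writes sample.txt.gz
time bzip2 -k -f sample.txt   # writes sample.txt.bz2
ls -l sample.txt.gz sample.txt.bz2

# The real pipeline would look something like this (dbname is a placeholder):
#   mysqldump -q --single-transaction dbname | gzip > dump.sql.gz
```

If gzip finishes much faster here, the same will hold for the dump
pipeline, at the price of a larger output file.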
Also, about using mysqldump 5.0 against a MySQL 4.1 server... hmm, I'm
not sure what side effects that may have! I usually use the version that
comes with the server...
GPG keys available @ http://rounoff.darktech.org
Adrian Bruce wrote:
> Try dropping the indexes first if you can; that would save you about half the
> time, and then re-build them after the dump finishes. Obviously you
> would need to do it at a quiet time though, when the DB is not being
> used. Is a binary backup not an option? At 29G it is a large text file to
> David Sparks wrote:
>> I'm trying to dump some bigger tables without much luck. Anyone have
>> any advice to dump larger tables?
>> mysqldump starts guns blazing, but quickly it isn't doing anything as
>> viewed by strace.
>> After 1 day trying to dump a MyISAM table with a 2.7G .MYD and a 5.3G .MYI,
>> the dumpfile is 270MB compressed and it seems to be dumping 1K per
>> After 12 hours trying to dump an InnoDB table with a 29G .ibd, same
>> problem ... data is trickling out.
>> I'm using mysqldump from 5.0.26 dumping a 4.1.21 server. I've tried
>> several incarnations of options; the latest is (-e and -q *should* be
>> enabled by default):
>> mysqldump -e --no-create-db --skip-add-drop-table -q --single-transaction
>> -v database
>> How to speed this up?