On Tue, July 17, 2007 13:31, Baron Schwartz wrote:
> Mogens Melander wrote:
>> On Tue, July 17, 2007 04:29, Baron Schwartz wrote:
>>> B. Keith Murphy wrote:
>>>> The problem is that I am realizing that this dump/import is going to
>>>> take hours, and in some cases days. I am looking for any way to speed this up.
>>>> Any suggestions?
>>> The fastest way I've found is to do SELECT INTO OUTFILE on the master,
>>> which writes a sort of tab-delimited format by default -- don't specify
>>> options like field terminators or whatnot. This file can then be loaded
>>> directly with LOAD DATA INFILE, again without options.
>>> I think this is faster than loading files full of SQL statements, which
>>> have to be parsed, query-planned, etc.
That method has proven very quick in the past.
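For anyone following along, a minimal sketch of the round-trip described above (table, database, and file names are made up; the OUTFILE path must be writable by the mysqld process, and the file has to be copied to the target host between the two steps):

```shell
# On the master: mysqld itself writes the file, tab-delimited by default.
mysql mydb -e "SELECT * INTO OUTFILE '/tmp/orders.txt' FROM orders"

# On the target, after copying the file over: load it back. With no
# FIELDS/LINES options, both statements agree on the same default format.
mysql mydb -e "LOAD DATA INFILE '/tmp/orders.txt' INTO TABLE orders"
```

Note this skips the SQL parser and planner for each row, which is where the speedup over a statement dump comes from.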
>>> I thought mysqldump had an option to dump this way, but I can't see it
>> I think you are looking for the --single-transaction option :)
> I found the option I meant:
> -T, --tab=name Creates tab separated textfile for each table to given
> path. (creates .sql and .txt files). NOTE: This only
> works if mysqldump is run on the same machine as the
> mysqld daemon.
Yup, that was what I was trying to write 8^) Use this option together with the other.
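Putting the two together, the dump-and-reload might look something like this (database and directory names are hypothetical; /tmp/dump must be writable by mysqld, and per the NOTE above, mysqldump has to run on the server host):

```shell
# --tab writes one .sql file (schema) and one .txt file (tab-delimited
# data) per table into the given directory.
mysqldump --tab=/tmp/dump mydb

# On the target: recreate the schemas, then bulk-load the data.
# mysqlimport maps each file name (orders.txt -> table orders) and
# runs LOAD DATA INFILE under the hood.
cat /tmp/dump/*.sql | mysql mydb
mysqlimport mydb /tmp/dump/*.txt
```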