List: General Discussion
From: Baron Schwartz  Date: July 17 2007 11:31am
Subject:Re: speeding imports
Mogens Melander wrote:
> On Tue, July 17, 2007 04:29, Baron Schwartz wrote:
>> B. Keith Murphy wrote:
>>> Hey all. I have a bit of a problem. We are upgrading from ver 4.1.x to
>>> 5.0.45. That's not the problem :). The problem is that MySQL recommends
>>> dumping and importing your data when doing this upgrade to make sure
>>> that
>>> things go as smoothly as possible. We have quite a bit of data spread
>>> over
>>> multiple servers. We have slaves off each master server, and the plan
>>> is to upgrade the slaves and then make them the masters.
>>> The problem is that I am realizing that this dump/import is going to
>>> take
>>> hours and in some cases days. I am looking for any way to speed this up.
>>> Any
>>> suggestions?
>> The fastest way I've found is to do SELECT INTO OUTFILE on the master,
>> which selects into a tab-delimited format by default -- don't specify
>> any options like field terminators or whatnot.  This file can then be
>> imported directly with LOAD DATA INFILE, again without options.
>> I think this is faster than loading files full of SQL statements, which
>> have to be parsed and query-planned, etc.
>> I thought mysqldump had an option to dump this way, but I can't see it
>> now.
> I think you are looking for the --single-transaction option :)

I found the option I meant:

   -T, --tab=name      Creates tab separated textfile for each table to given
                       path. (creates .sql and .txt files). NOTE: This only
                       works if mysqldump is run on the same machine as the
                       mysqld daemon.
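For reference, the dump-and-reload approach described above can be sketched roughly as follows. The database and table names (`mydb.t1`) and the `/tmp` path are placeholders, not anything from the thread:

```sql
-- On the old master: dump the table data in the default tab-delimited
-- format.  With no FIELDS/LINES options, the output matches what
-- LOAD DATA INFILE expects by default.
SELECT * INTO OUTFILE '/tmp/t1.txt' FROM mydb.t1;

-- On the upgraded server: recreate the table first (e.g. from a
-- schema-only mysqldump), then load the raw data back in.
LOAD DATA INFILE '/tmp/t1.txt' INTO TABLE mydb.t1;
```

The `mysqldump --tab=/tmp mydb` form quoted above automates the same thing per table, writing a `.sql` schema file plus a `.txt` data file, with the caveat the help text notes: it must run on the same host as mysqld, since the server process writes the data files.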
