From: B. Keith Murphy    Date: July 10 2007 8:36pm
hey all. Let me throw out a quick question. We are upgrading from some version of 4.1 to
the latest version of 5.0. MySQL recommends that you dump and re-import data with this
type of move. Not only do they recommend the dump/import, it would also be a good chance to
optimize all our tables at the same time!!

I have done a test dump of a couple of gigs of data with just a straight mysqldump. That
worked, but the import is happening slower than Christmas. I know that one technique for
speeding up the import is to configure the dump so that it generates file(s) with the
schema for the table(s) and file(s) with the actual data. Then you can use the LOAD
DATA INFILE command to import the data after generating the new tables (maybe by using
'mysql < table1.sql').
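For what it's worth, that split dump can be produced in one step with mysqldump's --tab option (the directory and database name below are just examples; the server itself writes the data files, so it needs write access to that directory):

```shell
# --tab makes mysqldump write, for each table, a tablename.sql file
# (the CREATE TABLE statement) and a tablename.txt file (tab-delimited
# data) into the given directory, instead of one big SQL stream.
mysqldump --tab=/tmp/dump --user=root -p mydb
```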

So here is the problem. It worked, but it was slow. Not because of the speed of the import
itself, but because I am having to issue a command (and type in the root password) for each
table import. I could do 'cat *.sql > database.sql' to get a complete schema
for all the tables in the database and then just 'mysql < database.sql' to set up all
the tables at the same time. You can't do that with the import of the actual data, though.
You have to load the data into each table individually, with something like 'load data
infile ...' per table.

This is fine for one table, but what happens when you have 100? I can't take the time to
sit there and hit the up-arrow to retrieve the command and then type in the new file name
each time. Is there some way to automate this data import?
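One sketch of an answer, with names and paths as assumptions: since each data file is named after its table, a small shell function can generate the LOAD DATA statements, and piping them all into one mysql session means the password is typed only once. (mysqlimport, the command-line wrapper around LOAD DATA INFILE, derives table names from file names the same way and may do this for you out of the box.)

```shell
# build_load_stmt: derive the table name from a data file's name
# (everything before the .txt suffix) and print the matching
# LOAD DATA INFILE statement. Paths here are hypothetical examples.
build_load_stmt() {
  table=$(basename "$1" .txt)
  printf "LOAD DATA INFILE '%s' INTO TABLE %s;\n" "$1" "$table"
}

# Emit one statement per data file, then feed them all to mysql in a
# single session so the password prompt appears only once:
#   for f in /tmp/dump/*.txt; do build_load_stmt "$f"; done | mysql -u root -p mydb
```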



B. Keith Murphy 
Database Administrator 
2635 Meridian Parkway, 2nd Floor 
Durham, North Carolina 27713 
(o) 919-433-0786 
(c) 850-637-3877 
