From: Gavin Towey
Date: January 3, 2011, 6:46pm
Subject: RE: This just seems to slow
I much prefer LOAD DATA INFILE to mysqlimport.  It looks like the problem is that you have
a file with two columns and a table with three.  You will probably need to be more specific
about which columns in the table map to which fields in the file.  Please report any errors
along with the exact commands you ran.
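
For example, a LOAD DATA INFILE statement with an explicit column list might look like the
sketch below (it reuses the table and file names from your message; the path and line
terminator are assumptions you would adjust for your system):

```sql
-- Load the two CSV fields into the two non-auto-increment columns;
-- t_dmu_history_id is omitted from the list, so AUTO_INCREMENT fills it in.
LOAD DATA LOCAL INFILE 't_dmu_history.txt'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
(DM_History_DM_ID, DM_History_Customer_ID);
```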

Also, most importantly: how slow is "slow"?  Have you measured the import speed in rows
per second?

The largest factor I have found influencing overall import speed is
innodb_buffer_pool_size.  Make sure you're not running with the default size.  A buffer
pool large enough to hold the table's secondary indexes will also help a lot.
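
As a rough illustration (the size below is a placeholder, not a recommendation -- tune it
to your RAM and data), the buffer pool is set in my.cnf/my.ini, and a few session settings
can also speed up a one-time bulk load into an empty table:

```sql
-- In my.cnf / my.ini (on MySQL 5.1 this requires a server restart):
-- [mysqld]
-- innodb_buffer_pool_size = 1G

-- For the loading session only (reasonable for a bulk load into an empty table):
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- ... run the LOAD DATA INFILE (or the INSERT script) here ...
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```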



-----Original Message-----
From: Jerry Schwartz [mailto:jerry@stripped]
Sent: Monday, January 03, 2011 7:56 AM
To: 'mos'; mysql@stripped
Subject: RE: This just seems to slow

Okay, I have a confession to make: I have never gotten Load Data Infile or
mysqlimport to work.

Here's my CSV file, named "t_dmu_history.txt":

13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
...

Here's my mysqlimport command:

mysqlimport -uaccess -pxxx --delete --local --silent
  --columns=`dm_history_dm_id`,`DM_History_Customer_ID`
  --fields-terminated-by=',' --lines-terminated-by='\r\n'
  --host=localhost maintable_usa t_dmu_history.txt

I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for
Win32 (ia32)".

It runs for a while, but I wind up with only one record:

localhost >select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)

Obviously mysqlimport is parsing the input file incorrectly, but I don't know
why.

Here's the table itself:

+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+

       Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
  `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
  `DM_History_DM_ID` int(11) DEFAULT NULL,
  `DM_History_Customer_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`t_dmu_history_id`),
  KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: jerry@stripped
Web site: www.the-infoshop.com


>-----Original Message-----
>From: mos [mailto:mos99@stripped]
>Sent: Sunday, January 02, 2011 11:42 PM
>To: mysql@stripped
>Subject: Re: This just seems to slow
>
>Jerry,
>       Use "Load Data Infile" when loading a lot of data. Whoever is giving
>you the data should be able to dump it to a CSV file. Your imports will be
>much faster.
>
>Mike
>
>At 07:51 PM 1/2/2011, you wrote:
>>I'm trying to load data into a simple table, and it is taking many hours
>>(and
>>still not done). I know hardware, etc., can have a big effect, but NOTHING
>>should have this big an effect.
>>
>>=====
>>us-gii >show create table t_dmu_history\G
>>*************************** 1. row ***************************
>>        Table: t_dmu_history
>>Create Table: CREATE TABLE `t_dmu_history` (
>>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>>   PRIMARY KEY (`t_dmu_history_id`),
>>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>>=====
>>
>>Here's a snip of what the input file looks like:
>>=====
>>SET autocommit=1;
>>
>>#
>># Dumping data for table 'T_DMU_History'
>>#
>>
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299519);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299520);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299521);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299522);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299524);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299526);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299527);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299528);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299529);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299531);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299532);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299533);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299534);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13071, 299535);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298880);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298881);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298882);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298883);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298884);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298885);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298886);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298887);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298889);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298890);
>>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>>VALUES (13073, 298891);
>>=====
>>
>>There are about 870,000 records.
>>
>>I realize that using one INSERT per row is going to hurt, but I don't
>>control
>>the format of the incoming data.
>>
>>Besides, I'd have thought this would be pretty quick regardless of how
>>clumsy
>>the method was.
>>
>>Is that "autocommit" a problem? This is a bulk load into an empty table, so
>>I'm not worried about ACID.
>>
>>Any suggestions?
>>
>>
>>
>>
>>
>>Regards,
>>
>>Jerry Schwartz
>>Global Information Incorporated
>>195 Farmington Ave.
>>Farmington, CT 06032
>>
>>860.674.8796 / FAX: 860.674.8341
>>E-mail: jerry@stripped
>>Web site: www.the-infoshop.com
>>
>>
>>
>>
>>
>>--
>>MySQL General Mailing List
>>For list archives: http://lists.mysql.com/mysql
>>To unsubscribe:    http://lists.mysql.com/mysql?unsub=1
>
>






