On Mon, 1999-10-04 11:18:07 +1000, Michael Kat wrote:
> 1. Is it faster to import a file with thousands of INSERT statements
> like this eg.
> % mysql -h myhost -u me -pmypassword myDB < fileWithInserts.txt
> 2. Or, to hard code the INSERT syntax into a C program, for example (I
> have a C program to extract data from file with fixed length fields),
> and insert the data straight into a table (with a loop)? I've done it
> this way eg.
> mysql_query(sock, "insert into temp (product_no, product_name, qty)
> values (%s, '%s', %s)"
Both methods are equally fast on the MySQL side, because both issue the
same SQL commands: one INSERT per row.
(There might be tiny performance differences due to different string
handling in the two programs, but this _most_ probably isn't relevant.)
But you can gain speed if you insert many (or all) rows of data with a
single multiple-row INSERT:
  INSERT INTO yourtable
  VALUES (1, 'data one'), (2, 'data two'), ... ... ...;
> I don't have file privs in the 'user' grant table, so I cannot use
> LOAD DATA INFILE.
In V3.22.6 and later you can also use LOAD DATA LOCAL INFILE, which
doesn't need the FILE privilege!
Of the three methods available to you, this is the fastest!
From slowest to fastest:
- single row INSERT
- multiple row INSERT
- LOAD DATA LOCAL INFILE
- LOAD DATA INFILE
On tables with many indexes, each of these methods can gain speed if
you first drop the indexes, then add all the new rows, and then
re-create the indexes.
Martin Ramsch <m.ramsch@stripped> <URL: http://home.pages.de/~ramsch/ >
PGP KeyID=0xE8EF4F75 FiPr=5244 5EF3 B0B1 3826 E4EC 8058 7B31 3AD7