----- Original Message -----
> From: "Gary Aitken" <mysql@stripped>
> surprising, as the source did not enforce uniqueness. My problem is
> that LOAD DATA simply dies without indicating which line of the input
> file was in error; the error message refers to line 3, which is not
> even the line of the LOAD DATA statement:
Would it not refer to line 3 of the datafile? Not sure, just guessing.
> So... I wanted to read the data line at a time and use a plain
> INSERT statement. That way I could check for duplicate keys and
> discover where the duplicate records are. However, I can't find a
> way to read input from the console or a file. What am I missing? I
> know I could write a java or C++ program to do this, but it seems
> like overkill for what should be a trivial task.
Yeah, that would be overkill :-p
You could easily use sed or awk to transform the input file into a list of INSERT
statements. Another solution is to disable all keys on your target table - or load into a
copy of the table without any keys - and, after the import, run a
SELECT <unique key fields> ... GROUP BY <unique key fields> HAVING COUNT(*) > 1
to list the duplicated key values.
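The awk route might look something like this (a minimal sketch - the table name `mytable`, the columns `id` and `name`, and the tab-separated input are made-up examples, and the quoting is naive, assuming values contain no single quotes):

```shell
# Sample two-column, tab-separated input; replace with your real file.
printf '1\tfoo\n2\tbar\n' > data.txt

# Turn each line into an INSERT statement for a hypothetical table.
awk -F'\t' '{ printf "INSERT INTO mytable (id, name) VALUES (%s, '\''%s'\'');\n", $1, $2 }' \
    data.txt > inserts.sql

cat inserts.sql
```

Feeding inserts.sql to the mysql client should then fail on the exact statement - and therefore the exact input line - that violates the unique key.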
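And the keyless-table approach could be sketched like this - SQLite stands in for MySQL here just to keep the demo self-contained, and the table and column names are invented:

```python
import sqlite3

# Load everything into a staging table with no unique key, then ask
# which key values occur more than once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, name TEXT)")  # no keys

rows = [(1, "foo"), (2, "bar"), (1, "baz")]  # id 1 appears twice
conn.executemany("INSERT INTO staging (id, name) VALUES (?, ?)", rows)

# The GROUP BY ... HAVING COUNT(*) > 1 query reports the duplicates
# that LOAD DATA would have choked on.
dupes = conn.execute(
    "SELECT id, COUNT(*) FROM staging GROUP BY id HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # -> [(1, 2)]
```

The same GROUP BY/HAVING query works verbatim in MySQL against the real staging table.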
[Flemish signature rhyme, roughly:]
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it soon becomes a donkey