The problem could very well be in iterator optimization. What I have done is
bypass the MySQL++ iterator and use mysql_fetch_row() directly on the returned
result set: after obtaining the ResUse object, I retrieve the MYSQL_RES
with the mysql_result() member call on the ResUse object. That saves about
half of the processing time. However, it is still not as good as loading the
data from a file; right now it takes double the time of loading from the file.
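For reference, the bypass described above might look roughly like the sketch below. The connection parameters and table name are placeholders, and the accessor is named mysql_result() as in this message; some MySQL++ versions expose the raw handle under a different name (e.g. raw_result()), so adjust to your version:

```cpp
// Sketch: run the query through MySQL++, then walk the raw MYSQL_RES
// with the C API so no mysqlpp::Row object is constructed per record.
// Connection parameters and table name are placeholders.
#include <mysql++.h>
#include <mysql.h>
#include <cstdio>

int main()
{
    mysqlpp::Connection con("testdb", "localhost", "user", "password");
    mysqlpp::Query query = con.query("SELECT * FROM oldtab");
    mysqlpp::ResUse res = query.use();      // streaming ("use") result set

    MYSQL_RES* raw = res.mysql_result();    // underlying C result handle
    MYSQL_ROW row;
    unsigned long count = 0;
    while ((row = mysql_fetch_row(raw)) != NULL) {
        // row[i] is a plain char*; process fields here without any
        // per-row C++ object construction.
        ++count;
    }
    std::printf("fetched %lu rows\n", count);
    return 0;
}
```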
At the same time, if I go into mysql and copy all rows from one table into
another, it takes approximately 1-2 seconds:
mysql> create table abc select * from oldtab;
Query OK, 397913 rows affected (1.12 sec)
Records: 397913 Duplicates: 0 Warnings: 0
So I am thinking: if MySQL internally takes 1-2 seconds to walk through the
old table and create the new one, I should get similar performance. Maybe I
am a little naïve, but I did not expect it to be 2 to 3 times worse than what
I have seen in the db.
From: Chris Frey [mailto:cdfrey@stripped]
Sent: Wednesday, October 26, 2005 2:11 PM
Subject: Re: Retrieving 300K+ records
On Wed, Oct 26, 2005 at 10:46:49AM -0700, Earl Miles wrote:
> >Before you jump completely to C, you might try using the .at() function
> >to reference fields by index instead of by name. This removes the
> >name lookup on each cycle of your loop.
> The name lookups aren't what cause the slowdown (for me, at least) -- it's
> translating from a mysql row to a MySQL++ Row object that takes all the time.
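For reference, the indexed access suggested in the quoted advice above would look roughly like this sketch. The column list is hypothetical, and the exact iteration idiom and ColData accessors vary across MySQL++ versions:

```cpp
// Sketch of referencing fields by index with .at() instead of by name,
// avoiding a field-name lookup on every iteration of the loop.
// The columns (id, name) are hypothetical examples.
mysqlpp::Query query = con.query("SELECT id, name FROM oldtab");
mysqlpp::ResUse res = query.use();
mysqlpp::Row row;
while (row = res.fetch_row()) {
    // row["id"] searches the field list by name on each call;
    // row.at(0) / row.at(1) go straight to the column by position.
    long id = atol(row.at(0).c_str());
    const char* name = row.at(1).c_str();
    // ... process id and name ...
}
```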
Interesting. Looking at the code, there are about 3 objects, one of which
contains a vector of strings, involved in moving the data.
I think this goes back to that iterator optimization that I never got
around to. :-) There might have even been a patch floating around if
MySQL++ Mailing List
For list archives: http://lists.mysql.com/plusplus