A program of mine is continuously updating a remote MySQL database
table with a text file once per minute. Until last week my program
was doing this:
- gzip compression of the text file
- sending the .gz to the remote server using FTP
- an HTTP request to a CGI on the remote server, which does:
  - gzip -d of the text file
  - LOAD DATA INFILE into the table.
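For reference, the client side of that old cycle can be sketched roughly like this in Python (the hostnames, credentials, filenames, and CGI URL are made up for illustration; the original program may well be in another language):

```python
import gzip
import shutil
import ftplib
import urllib.request

def compress_file(src_path, dst_path):
    """gzip-compress the text file before sending it."""
    with open(src_path, "rb") as src, gzip.open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst)

def upload_via_ftp(gz_path, host, user, password):
    """Send the .gz file to the remote server over FTP."""
    with ftplib.FTP(host, user, password) as ftp:
        with open(gz_path, "rb") as fh:
            ftp.storbinary("STOR data.txt.gz", fh)

def trigger_cgi(url):
    """Hit the CGI that gunzips the file and runs LOAD DATA INFILE."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```

Each minute the program would call `compress_file`, then `upload_via_ftp`, then `trigger_cgi` in sequence.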
This worked quite well, but I did not find it very 'clean'.
I had to do it that way because when I first wrote the
prog. there was no "LOAD DATA _LOCAL_ INFILE".
This week, I upgraded to MySQL-3.22.21 and decided to use LOAD
DATA LOCAL to update the remote table.
My new prog. is connecting to the remote MySQL server with the
CLIENT_COMPRESS option enabled.
But it seems that the CLIENT_COMPRESS protocol compresses a lot less
effectively than gzip.
I have MRTG monitoring the bandwidth used by the server that is
sending the data to the remote server. Compared to last week,
when my prog was using gzip+ftp to send the data, the new LOAD DATA
LOCAL program uses about twice as much bandwidth, so the transfer
takes correspondingly longer.
Is this normal? Are there plans to optimize the CLIENT_COMPRESS
protocol to get faster transfers? Why not use the gzip algorithm
(it's GNU, isn't it?)?
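One plausible explanation (my own guess, not something from the MySQL docs): both gzip and MySQL's compressed protocol are zlib/deflate based, so the algorithm is essentially the same; but the protocol compresses each packet separately, while gzip compresses the whole file as one stream and so gives the compressor far more context. A quick stdlib sketch of the effect, with an assumed 16 KB packet size chosen purely for illustration:

```python
import zlib

# Repetitive text, shaped like a typical LOAD DATA input file.
data = b"12345,some field,another field,2024-01-01,0.5\n" * 20000

# Whole-stream compression, as gzip does (gzip is zlib plus a small header).
whole = len(zlib.compress(data))

# Per-packet compression: each chunk is compressed independently, so the
# compressor's dictionary is reset at every packet boundary and each
# packet pays its own zlib header/trailer overhead.
PACKET = 16 * 1024  # assumed packet size, for illustration only
packets = [data[i:i + PACKET] for i in range(0, len(data), PACKET)]
per_packet = sum(len(zlib.compress(p)) for p in packets)

print("whole stream:", whole, "bytes")
print("per packet  :", per_packet, "bytes")
```

On data like this the per-packet total comes out noticeably larger than the whole-stream total, which would fit what MRTG is showing, though it would not by itself explain a full 2x difference.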
Another question :
- When a client does a 'LOAD DATA INFILE' command, all other clients
  doing a SELECT on that table are locked out during the insert.
  What I wonder is: with 'LOAD DATA LOCAL INFILE', are the other
  clients locked while the data is being sent from the client to the
  server, or only once all the data has been transferred and the
  server is doing the insert?