>>>>> "BAUMEISTER" == BAUMEISTER Alexandre <alex@stripped>
MW> MySQL does use the gzip algorithm for the compression. The difference
MW> is that MySQL does a separate gzip for each packet, so each packet is
MW> compressed on its own. You can get better compression simply by
MW> increasing the 'max_allowed_packet' variable in your client.
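For example, the client-side maximum can be raised in the option file (a sketch, assuming a typical my.cnf layout; the 16M value is only illustrative):

```ini
# my.cnf -- raise the client's packet size so a whole file can be
# sent (and compressed) as one packet instead of many small ones
[client]
max_allowed_packet=16M
```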
BAUMEISTER> In fact, most of the files which are uploaded to the MySQL table
BAUMEISTER> are about 45 KB; the biggest are about 400 KB. And I found in the
BAUMEISTER> documentation:
BAUMEISTER> "The client's default maximum is 24M, but the default maximum in the
BAUMEISTER> server is 1M."
BAUMEISTER> So I suppose that the whole file is sent in one go and that the
BAUMEISTER> compression should already be at its maximum, shouldn't it ?
I just checked the code; The current 'send_file_to_server' code
reads the file in 15K packets and sends these (compressed) one by one.
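A minimal sketch of why per-packet compression hurts the ratio (plain Python zlib, not the actual MySQL protocol code; the 15K chunk size mirrors the packet size mentioned above):

```python
import os
import zlib

# 45 KB of data with long-range redundancy: a 5 KB block repeated 9 times.
block = os.urandom(5000)
data = block * 9

# Compress the whole file as one stream (what a large enough packet allows).
whole = len(zlib.compress(data))

# Compress in separate 15K chunks, restarting the compressor each time,
# so redundancy that spans chunk boundaries is lost.
chunked = sum(len(zlib.compress(data[i:i + 15000]))
              for i in range(0, len(data), 15000))

print(whole, chunked)
assert whole < chunked  # one stream compresses far better here
```

The random block is incompressible on its own, so each separate chunk costs roughly a full block, while the single stream pays for it only once.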
MW> Have you checked that client/server protocol is really compressed?
MW> (In other words: Are you sure your server and client both are
MW> compiled with compression ?)
BAUMEISTER> Server compiled with compression ? I didn't see that option in
BAUMEISTER> ./configure --help .
BAUMEISTER> Both the server and the client are 3.22.21. I thought compression
BAUMEISTER> was by default allowed by server and on the demand using
BAUMEISTER> CLIENT_COMPRESS with mysql_real_connect() in the client. Isn't it ?
Yes, if the server and client are compiled with the compression
libraries. If not, the protocol will automatically fall back to not
using compression.
The easiest way to check this is to monitor the traffic between the
client and the server.
MW> MySQL inserts the rows at the same time it gets them; In other words,
MW> your other clients are locked during the whole transfer.
BAUMEISTER> Wow. That sounds really horrible for my setup. In fact I have
BAUMEISTER> about 4 little programs which are continuously updating a remote
BAUMEISTER> table with quotation updates (about once per minute for each
BAUMEISTER> program, taking the quotations from little text files which are
BAUMEISTER> well suited to the LOAD DATA command). And on the server side, we
BAUMEISTER> have many web sites querying the MySQL quotation database (a lot
BAUMEISTER> of SELECTs with joins ...). This setup can't tolerate a long lock
BAUMEISTER> delay while inserting the updates.
In this case, you will probably be better off by using LOAD DATA.
BAUMEISTER> Isn't there a simple way to add an option which would tell the
BAUMEISTER> server to do the insert into the table only once it has received all
BAUMEISTER> the data (for LOAD DATA _LOCAL_) ?
Not really. This would involve buffering the whole file (which may be
of ANY size).
BAUMEISTER> If for example someone uses LOAD DATA LOCAL to update a remote
BAUMEISTER> table with large files over a slow network, the time during which
BAUMEISTER> all SELECTs on this table are locked may be a REAL problem. Don't
BAUMEISTER> you agree ?
Yes; In this case it's better to not use LOCAL. Have you thought
about using multi-line inserts instead ?
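With a multi-row INSERT, many rows travel in one statement, so the table is locked only while the server executes it rather than for the whole network transfer. A sketch, using a hypothetical quotation table and columns:

```sql
-- hypothetical table/columns for illustration
INSERT INTO quotes (symbol, price, quoted_at) VALUES
  ('ABC', 10.50, '1999-06-01 10:00:00'),
  ('DEF', 22.10, '1999-06-01 10:00:00'),
  ('GHI',  5.75, '1999-06-01 10:01:00');
```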
BAUMEISTER> For my part, yesterday I reached for the first time the
BAUMEISTER> max_connections limit of 256 which I configured on the server, and
BAUMEISTER> I was really surprised. But now I know why ! With the table being
BAUMEISTER> SELECT-locked for 3-4 seconds due to the network transfer delay,
BAUMEISTER> the number of blocked clients increases rapidly.