Jay Pipes wrote:
> Chris Boot wrote:
>> Rolando Edwards wrote:
>>> Also consider
>>> wait_timeout in my.ini
>> This is set to 28800. I don't consider this a problem since I've
>> hardly ever got anywhere near my connection limit.
>>> ----- Original Message -----
>>> From: "Rolando Edwards" <redwards@stripped>
>>> To: "Chris Boot" <bootc@stripped>
>>> Cc: mysql@stripped
>>> Sent: Friday, March 2, 2007 2:19:38 PM (GMT-0500) Auto-Detected
>>> Subject: Re: Heavily-loaded MySQL: Resource temporarily unavailable?
>>> I hope I am not asking dumb questions
>>> but please consider the following:
>>> 1) Do you mysql_close every connection when rendering a page ?
>>> 2) Have you ever tried mysql_pconnect (which does not require doing
>>> a mysql_close to any persistent connection) ?
>> I have, and these suck up my connections like you wouldn't imagine.
>> There are lots of different sites hosted on the server, and using
>> persistent connections would require thousands of connections.
> Yeah, don't use pconnect.
>>> 3) Do you have interactive_timeout set (in seconds) in my.ini
>>> (default is 28800 [8 hours]) ?
>> It's set to the default. Once again I don't consider this an issue,
>> since connections don't stay open long enough for this to even be
>> relevant.
>>> 4) Do you have 'max_user_connections' set to zero (default anyway)
>>> in my.ini ?
>> I do, this is set to 0.
>> I've done a few things since my first post that seem to have helped,
>> though I'm not sure quite yet. I've upped the back_log from 50 to
>> 128, upped thread_cache from 20 to 32, thread_concurrency from 4 to
>> 32 (does this do anything on Linux?).
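For clarity, those changes correspond to a my.ini fragment along these
lines (the [mysqld] section header is assumed, and on 5.0+ the variable
is spelled thread_cache_size):

```ini
[mysqld]
back_log           = 128
thread_cache_size  = 32
thread_concurrency = 32
```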
> The thread_cache (I believe you mean *thread_cache_size*?) is the
> number of cleared THD instances (the connection class) that the server
> keeps internally, so that the memory allocation cost of creating a new
> THD for each connection is mitigated. But unless you have a very high
> rate of new connections, this likely will not make much of a
> difference. Look at the difference between the Connections and
> Threads_created status variables to see what percentage of your
> connections are being created from scratch (as opposed to recycled via
> the thread_cache).
phpMyAdmin claims my thread cache hit rate is 99.82%, so I doubt that
change made much difference.
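For reference, the hit rate phpMyAdmin reports can be recomputed from
the raw status counters; the numbers below are made-up examples chosen
to match the 99.82% figure, so substitute your own Connections and
Threads_created values from SHOW GLOBAL STATUS:

```shell
# Hypothetical counters; replace with your own SHOW GLOBAL STATUS values.
connections=125000     # total connection attempts
threads_created=225    # threads built from scratch (cache misses)

# Hit rate = fraction of connections served by a cached thread.
awk -v c="$connections" -v t="$threads_created" \
    'BEGIN { printf "thread cache hit rate: %.2f%%\n", 100 * (1 - t / c) }'
```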
> thread_concurrency won't do anything on Linux. It's for Solaris.
I thought as much. Thanks.
> However, innodb_thread_concurrency is different. It's the maximum (or
> infinite if set to 0 or more than 20) number of operating system
> threads that InnoDB can use in its queueing system. I wouldn't
> recommend changing this.
I haven't, and although I use InnoDB, most of the heavily-loaded tables
are all MyISAM. But none of the above should really affect my connection
problems, which are, at the moment, the only place I'm seeing
performance issues.
>> Those in themselves didn't seem to make much difference, however I
>> have now made some of the busier sites connect to 127.0.0.1 (using
>> TCP instead of UNIX sockets) which has either slowed connections
>> enough to make the problem go away, or is making better use of
>> back_log which I'm not sure is used for UNIX sockets. Any ideas?
> You may want to try reverting that and simply turning off networking
> entirely (skip-networking in my.ini), using unix sockets for
> everything. This should provide a good reduction in connection time;
> see the manual's description of that option for more information.
Hmm, but that doesn't explain why I've stopped seeing the refused
connections, or does it? DNS shouldn't really have much of an impact
thanks to the host cache, no? Is there some kernel parameter or other
that I can use to tune how UNIX sockets work?
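One kernel knob that does apply here: net.core.somaxconn caps the
backlog passed to listen() for both TCP and UNIX-domain sockets, so a
back_log larger than it is silently clamped. A quick sketch to check it
(the 512 in the comment is just an illustrative value):

```shell
# net.core.somaxconn caps the listen() backlog for TCP *and* UNIX sockets;
# if it is smaller than MySQL's back_log, the larger value is silently clamped.
backlog_cap=$(cat /proc/sys/net/core/somaxconn)
echo "kernel accept-backlog cap: $backlog_cap"
# To raise it (as root): sysctl -w net.core.somaxconn=512
```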
> Another variable you may want to look at is table_cache (or
> table_open_cache in 5.1+): make sure it has enough room to deal with
> 600 connections * the number of tables in an average SQL statement
> executed against the server. If this value is really low, you could be
> experiencing file descriptor thrashing, as so many threads rapidly
> open and close file descriptors.
table_cache is set to 512, which is probably a little low for my
environment, but it seems OK for now (Opened_tables is about 3K after 4
hours of MySQL uptime).
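As a sanity check on those figures, the churn rate works out to well
under one table re-open per second (using the approximate numbers
quoted above):

```shell
# Approximate figures from above: ~3000 Opened_tables over 4 hours of uptime.
opened_tables=3000
uptime_seconds=$((4 * 3600))

# Table re-open rate; anything near zero means the cache is large enough.
awk -v o="$opened_tables" -v u="$uptime_seconds" \
    'BEGIN { printf "table re-opens per second: %.2f\n", o / u }'
```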
Thanks for all the help,
>> Thanks again,
>>> ----- Original Message -----
>>> From: "Chris Boot" <bootc@stripped>
>>> To: mysql@stripped
>>> Sent: Friday, March 2, 2007 1:34:40 PM (GMT-0500) Auto-Detected
>>> Subject: Heavily-loaded MySQL: Resource temporarily unavailable?
>>> I'm running a loaded MySQL server and every so often MySQL seems to
>>> refuse connections in batches, which manifests itself as the
>>> following errors in PHP:
>>> mysql_connect(): Can't connect to local MySQL server through socket
>>> (11) in [path to file] on line [x]
>>> I've got a carefully tuned my.ini and Apache configuration, which
>>> can theoretically process 400 connections at a time, and this works
>>> most of the time. MySQL is set to accept 600 simultaneous
>>> connections, though this limit is never reached (according to
>>> phpMyAdmin's stats, anyway). I've upped the open files limit on
>>> MySQL and Apache (PHP runs as a module).
>>> At this stage I'm completely out of ideas as to what I can do to fix
>>> my problem.
>>> Any ideas? What extra information can I provide that could help?
>>> Many thanks,