At 04:42 24/05/99, Michael Widenius wrote:
> >>>>> "Terry" == Terry Brown <Terry.Brown@stripped> writes:
>Terry> Hi All,
>Terry> I have a query I'm hoping this list's experience can come up with a
>Terry> solution to.
>Terry> I'm running MySQL 3.22.22 on Sparc Solaris 2.6. It's a dual-processor
>Terry> machine with 384MB real RAM and 1.5GB swap (I had a lot of spare disk!)
Specs have changed over the last couple of days; we now have 512MB RAM and
4 250MHz CPUs (I don't know if this makes any difference!)
>Terry> We have a WWW cgi program which hits the database with
>Terry> concurrent connections (selects/inserts/updates) and keep those
>Terry> going for approximately 20 mins (though not necessarily persistently).
>Terry> Using Apache Benchmark to test the script, access to the server is
>Terry> fine; we can achieve concurrency levels of approx 400 users without
>Terry> any problems (load average up around 180 but still going strong!)
>Terry> I'm running MySQL with the following startup options
>Terry> [parsed for readability]
>Terry> -O back_log = 45000
>Terry> -O max_connections=45000
>Terry> -O connect_timeout=240
>Terry> -O wait_timeout=240
>Terry> -O key_buffer=65M
>Terry> -O tmp_table_size=2000000
>Terry> -O thread_stack=200000
>Terry> -O delayed_queue_size=8128
>Terry> -O table_cache=65M
>Terry> -O sort_buffer=65M
>Terry> -O record_buffer=65M
>Terry> -l &
>You will run into trouble with the last three values; try instead:
>-O table_cache=600 -O sort_buffer=4M -O record_buffer=2M
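For context on why those three values cause trouble: sort_buffer and record_buffer are allocated per connection in MySQL, so 65M values multiply out badly at 400 concurrent users. A rough back-of-the-envelope sketch (the exact allocation depends on the query mix, so treat the totals as worst-case ceilings):

```python
MB = 1024 * 1024
connections = 400

# Original settings: 65MB sort_buffer + 65MB record_buffer, per connection
orig_per_conn = (65 + 65) * MB
orig_total = connections * orig_per_conn
print(orig_total // MB, "MB")   # 52000 MB -- far beyond 512MB of RAM

# Monty's suggestion: 4MB sort + 2MB record per connection
new_per_conn = (4 + 2) * MB
new_total = connections * new_per_conn
print(new_total // MB, "MB")    # 2400 MB, and only if every connection
                                # is sorting/scanning at the same instant
```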
I changed to the values you suggested, but I'm still having the same problems.
I can run Apache benchmark with standard HTML pages on our server with
concurrency levels of 1000 users getting a single page.
The problem still arises whenever we do any interactions with the database.
I'm running Apache Benchmark with 400 concurrent users against one script which
does multiple SELECTs from a single $dbh using the Mysql libraries for Perl.
The web server error logs give "Can't connect to Mysql server", but from
looking (with top) the server definitely doesn't go away, it just seems to be
Should the above configuration (with Monty's suggestions implemented) be
able to deal with that sort of concurrency?
I know that when we had the 2 processors and 384MB of RAM, it succeeded at 400
users, but obviously, with the extra processors and memory, I've knocked
about 100 seconds off the Apache Benchmark timings of the process.
I think what I need is for MySQL to accept as many connections as can be thrown
at it and then work through them at its own pace (preferably as fast as
possible) without refusing anyone.
Does anyone have any suggestions for configurations for this kind of
scaling? The amount of memory the Mysql daemon takes up is not an issue,
we have plenty now.
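If the server keeps refusing connections beyond its backlog, one alternative is to queue the work on the client side, in the CGI layer, so requests wait for a slot instead of hitting the server all at once. A minimal sketch of that idea (in Python rather than the original Perl, with a hypothetical do_query stand-in for the real database call):

```python
import threading

# Allow at most N database conversations at once; extra requests
# block here instead of being refused by the server.
MAX_DB_CONCURRENCY = 50
db_slots = threading.BoundedSemaphore(MAX_DB_CONCURRENCY)

def run_query(do_query, sql):
    """Wait for a free database slot, then run the query."""
    with db_slots:
        return do_query(sql)

# Demo with a fake query function standing in for the real driver:
def fake_query(sql):
    return "rows for " + sql

results = []
threads = [
    threading.Thread(
        target=lambda i=i: results.append(run_query(fake_query, "SELECT %d" % i))
    )
    for i in range(10)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 10 -- every request served, none refused
```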
Hope someone can help.
>I also think that you don't have to set the 'thread_stack' variable at all.
>Terry> When we have users in the command-line mysql client, every so often they
>Terry> get the message "The mysql server has gone away, retrying OK" and are
>Terry> able to continue on their merry way.
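That "gone away, retrying OK" message is the client reconnecting after the connection was dropped and retrying the statement. A sketch of that reconnect-and-retry pattern, using a fake connection class for illustration (the real client does this internally):

```python
class ServerGoneAway(Exception):
    pass

def query_with_retry(connect, conn, sql):
    """Run sql; if the connection has been dropped, reconnect once and retry."""
    try:
        return conn, conn.query(sql)
    except ServerGoneAway:
        conn = connect()              # the "retrying OK" step
        return conn, conn.query(sql)

# Fake connection that has timed out, plus a reconnect that works.
class FakeConn:
    def __init__(self, alive):
        self.alive = alive
    def query(self, sql):
        if not self.alive:
            raise ServerGoneAway("MySQL server has gone away")
        return ["row1"]

conn = FakeConn(alive=False)
conn, rows = query_with_retry(lambda: FakeConn(alive=True), conn, "SELECT 1")
print(rows)  # ['row1'] -- the retry on the fresh connection succeeded
```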
>Terry> With the basic, out-of-the-box configuration of mysqld, this doesn't
>Terry> happen. I know, using top, that mysqld isn't taxing the system too much
>Terry> (approx 69M in size but only 4M of that resident). Even so, the memory
>Terry> never really filled.
>Terry> Can anyone think of any reason the above configuration would make the
>Terry> server go away and come straight back?
>Can you check whether the problem is that the client only loses the
>connection, or that the MySQL server really goes down?
>As you have 'wait_timeout' set to 240, this means that any 'mysql' users will
>get the above message every 4 minutes.
>Terry> Many thanks for any responses. I'm tearing my hair out here and I don't
>Terry> have much left :-(
Terry Brown http://numedsun.ncl.ac.uk/
C&IT Development Officer
UNIX System Administrator
Faculty of Medicine Computing Centre, University of Newcastle,
NE2 4HH Tel: +44 191 222 5116 Fax: +44 191 222 5016
PGP: 22A4 6205 0F2D 9DD7 5614 DD77 99AD FAD9 D766 E18F