On Wed, 2003-10-22 at 14:15, Lokesh K wrote:
> As part of performance bench marking, we started testing MySQL for
> Linux. But, in our testing we found that the readings between any two
> runs are not consistent. So, we are puzzled like how to compare the
> performance of the database.
> Please, let us know, if there is a way to get consistent results
> between two runs.
The results you're getting are quite expected. For various reasons you
can't get exactly the same results each time you run the benchmark:
files can be allocated in different places on the file system,
scheduling can differ slightly, and you never know when the OS decides
to flush delayed writes or do other internal work.
Basically, results are never 100% the same as those of the previous run.
As you might see, the accuracy differs between tests; this is because
each test loads the system a different way, giving these random factors
more or less chance to affect the result.
So I would not be surprised by this variation even between runs on the
same system, let alone on different ones.
The first thing to do in order to get comparable results is to run the
benchmarks in as similar an environment as possible: avoid side load as
much as you can; use the same kernel, glibc, and other software
versions, configured the same way; use the same MySQL version with the
same runtime and compile-time options; and use the same partitions with
the same file systems (disk speed is normally uneven between the inner
and outer tracks).
All of this still will not give you 100% the same result on each run,
but the difference should be minimal.
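As a rough illustration, a small script like the following (a
hypothetical helper, not part of SQL-bench) can record an environment
fingerprint before each run, so you can later verify that two runs were
actually made in comparable conditions:

```shell
#!/bin/sh
# Record the environment before a benchmark run, so two runs
# can be checked for comparability afterwards.
# (Hypothetical helper; adjust commands and paths to your setup.)

echo "kernel:  $(uname -r)"
echo "glibc:   $(ldd --version 2>/dev/null | head -n 1)"

# MySQL client version, if installed on this machine
if command -v mysql >/dev/null 2>&1; then
    echo "mysql:   $(mysql --version)"
fi

# File systems and mount options for the partitions under test
mount 2>/dev/null | grep -E 'ext2|ext3|reiserfs' || true
```

Diffing the saved output of two runs quickly shows whether a kernel,
library, or mount-option change slipped in between them.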
The next step toward usable results is to understand how accurate the
results are: run the benchmark a few times (make sure to restart the
MySQL server and remount the file system in between to clear the
caches, or just reboot the whole system). Compute the average result as
well as the maximum deviation from it.
Finally you'll get something like "421 +-5 sec" as the result.
This allows you to compare different results and judge whether a
difference is within the range possible for a single configuration
or represents a real change.
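To sketch the arithmetic, here is a minimal Python example (the
function names are my own) that turns a series of run times into the
"average +- maximum deviation" form described above, and uses the error
intervals to decide whether two results actually differ:

```python
def summarize_runs(times):
    """Reduce a list of benchmark run times (seconds) to
    (average, maximum deviation from the average)."""
    avg = sum(times) / len(times)
    max_dev = max(abs(t - avg) for t in times)
    return avg, max_dev

def comparable(a, b):
    """Two results are indistinguishable when their error
    intervals overlap; only then is a difference meaningless."""
    (avg_a, dev_a), (avg_b, dev_b) = a, b
    return abs(avg_a - avg_b) <= dev_a + dev_b

# Four runs of the same test on the same configuration:
runs = [418, 421, 426, 419]
avg, dev = summarize_runs(runs)
print("%.0f +-%.0f sec" % (avg, dev))  # prints "421 +-5 sec"
```

With that summary in hand, a new result of, say, 424 +-4 sec overlaps
421 +-5 sec and proves nothing, while 440 +-5 sec would be a real change.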
There are other possible approaches, depending on the goal; for
example, you may use the best result from a series of runs.
The second important thing to note is that you should not compare the
totals of these large "tests" to judge performance. The per-operation
statistics (stored in the RUN file and printed at the end of the
benchmark) are much more useful.
They give you a clue which operations got faster and which got slower.
The tests are not balanced to represent any typical load, so comparing
them any other way could lead you to wrong conclusions.
Imagine, for example, you had the following benchmark results:
insert 1000 iterations 10sec
select 10000 iterations 20sec
After changes you got the following:
insert 1000 iterations 5sec
select 10000 iterations 30sec
Total: 35 sec vs 30 sec before (5 sec worse)
On the other hand, a workload with a different balance, say 1000
inserts and 1000 selects, would give you (assuming the same time per
iteration) 12 sec before vs 8 sec after, which is an improvement.
So depending on the mix of operations used, you can get quite different
conclusions.
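The arithmetic above can be sketched as follows (a toy calculation on
the imaginary numbers from this example, not real SQL-bench output):
per-iteration costs are derived from each run's totals and then
re-weighted by a different operation mix:

```python
# Per-operation timings from the two imaginary benchmark runs:
# op -> (iterations, total seconds).
before = {"insert": (1000, 10.0), "select": (10000, 20.0)}
after  = {"insert": (1000, 5.0),  "select": (10000, 30.0)}

def total_for_mix(run, mix):
    """Total time for a workload 'mix' (op -> count), assuming the
    per-iteration cost measured in 'run'."""
    return sum(count * run[op][1] / run[op][0] for op, count in mix.items())

bench_mix = {"insert": 1000, "select": 10000}  # the benchmark's own mix
app_mix   = {"insert": 1000, "select": 1000}   # a different application mix

# Benchmark mix: 30.0 sec before vs 35.0 sec after (looks worse)
print(total_for_mix(before, bench_mix), total_for_mix(after, bench_mix))
# Application mix: 12.0 sec before vs 8.0 sec after (actually better)
print(total_for_mix(before, app_mix), total_for_mix(after, app_mix))
```

The same per-operation data supports opposite verdicts, which is why
the per-operation statistics, not the totals, are what you should
compare.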
> Attached document gives the results between two runs of SQL-bench
> against MySQL running on Linux.
> Following is the configuration used for testing:
> 1. MySQL server and SQL-bench are running on different machines.
> 2. Both client and server are in isolated network. And there is no
> other machine in the network.
> 3. Both machines are running MySQL 4.0.15
> 4. Both machines are running Redhat 9.0
> 5. SQL-bench was run to execute all the test cases
> Let me know, if I am doing something wrong
> Thanks in Advance,
> SQL Server Benchmarks Mailing List
Peter Zaitsev, Full-Time Developer
MySQL AB, www.mysql.com
Are you MySQL certified? www.mysql.com/certification