I wrote a micro-benchmark to test the performance of my MySQL server.
In the benchmark, which is written in C#, I create several threads,
each with its own connection to the MySQL server, and use them to
insert rows into the same table. In total, 3,200 rows are inserted.
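For reference, here is a minimal sketch of the kind of benchmark I mean. The connection string, table name (`bench`), and column layout are just placeholders, not my actual setup; it uses MySQL Connector/NET and divides the 3,200 rows evenly among the threads:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using MySql.Data.MySqlClient; // MySQL Connector/NET

class InsertBenchmark
{
    const int TotalRows = 3200;

    static void Main(string[] args)
    {
        int threadCount = int.Parse(args[0]);
        int rowsPerThread = TotalRows / threadCount;
        var threads = new Thread[threadCount];
        var sw = Stopwatch.StartNew();

        for (int i = 0; i < threadCount; i++)
        {
            threads[i] = new Thread(() =>
            {
                // One dedicated connection per thread, as in my benchmark.
                using (var conn = new MySqlConnection(
                    "Server=localhost;Database=test;Uid=bench;Pwd=bench;"))
                {
                    conn.Open();
                    for (int r = 0; r < rowsPerThread; r++)
                    {
                        using (var cmd = new MySqlCommand(
                            "INSERT INTO bench (val) VALUES (@v)", conn))
                        {
                            cmd.Parameters.AddWithValue("@v", r);
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            });
            threads[i].Start();
        }

        foreach (var t in threads) t.Join();
        sw.Stop();
        Console.WriteLine(
            $"{TotalRows} rows in {sw.ElapsedMilliseconds} ms " +
            $"using {threadCount} threads");
    }
}
```

I time the whole run with a `Stopwatch` and compute throughput as rows divided by elapsed time.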
When I vary the number of C# threads, I find that the time taken to
finish the benchmark decreases, i.e. the throughput increases. The
throughput grows almost linearly with the number of threads, until I
reach 100 threads, which is the maximum number of connections allowed
by my server.
This is quite unexpected, since the server has only two processor
cores. I would expect the throughput to grow when going from one
connection to two, but not to keep growing beyond two connections.
Why is this the case?
My server has one Intel Xeon X3360 CPU with two cores running at
2.83 GHz and 8 GB of main memory. It runs Windows Server 2008 R2. The
MySQL version is 5.5.15, x64 edition.