In the last episode (Apr 15), Charles Q. Shen said:
> This question is about tuning the innodb_buffer_pool_size in an
> experiment related to my earlier post. I am running MySQL 4.1.11 with
> innoDB on RedHatEnterprise Linux, kernel 2.6.9-22. The table contains
> 50M records, with a total data_length of 9GB and index_length of
> 9.5GB. I repeatedly measured the query latency of 50000 randomly
> selected records. The latency remains relatively small and is about
> the same regardless of whether I restart MySQL or not. As pointed out
> earlier by Philippe, this could be caused by hitting the FS cache.
> 1) Does that mean the MySQL innodb_buffer_pool_size setting will
> virtually have no effect at all? (if it is true, then it should
> happen quite commonly). In my test, I tried to change the
> innodb_buffer_pool_size from its default value (8MB) to half of the
> physical memory (4GB). I found almost no difference in latency in
> these two cases.
For a 50000-record test using the primary key to look up values, it may
not make a difference at all. You can basically ignore the
index_length, since for innodb that only applies to secondary indexes.
The primary index is counted as data.
If you were to graph lookup latency versus buffer_pool size, you would
probably see two bends: the first happens when your cache is large
enough to store most of the levels of the index you're using (since you
have to seek for each level), and the second happens when your cache is
large enough to store the data pages as well (so you go to 0 seeks per
record). I don't know how big your primary key is, but assuming 8
bytes (and a 4-byte pointer to the next index page), that'd need
50M*(8+4) = 600MB. The nice thing about indexes, though, is that the
first levels get hit an awful lot, so it doesn't take many random
lookups to fetch them. You would need 9GB of cache to reach the next bend for
your particular table, and even then you would have to run a lot of
50000-record tests in sequence before the cache filled up enough to
show it (or do a full table scan beforehand to pull everything into
the cache).

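A quick sanity check of those numbers (just a sketch; the 16KB page
size, 8-byte key, and 4-byte pointer are assumptions, not measurements
from your table):

```python
import math

RECORDS = 50_000_000
ENTRY_BYTES = 8 + 4        # assumed key + pointer per index entry
PAGE_BYTES = 16 * 1024     # InnoDB default page size

# Total size of the index entries -- the ~600MB figure above.
index_bytes = RECORDS * ENTRY_BYTES
print(f"index entries: {index_bytes / 1e6:.0f} MB")   # 600 MB

# Approximate B-tree depth: entries per page gives the fan-out,
# and each level multiplies the reach by that fan-out.
fanout = PAGE_BYTES // ENTRY_BYTES
depth = math.ceil(math.log(RECORDS, fanout))
print(f"fan-out ~{fanout} entries/page, depth ~{depth} levels")
```

With those assumptions you get a tree only about 3 levels deep, which
is why the upper levels stay hot after just a few lookups.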
Your testing is made more difficult by your 8GB RAM size, since even if
you make your innodb cache ridiculously small, you still have a >50%
chance of the lookup taking 0ms because it's in the OS cache. It's
always more efficient to cache inside mysql, though, since read
syscalls aren't free.
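Very roughly, with uniformly random lookups that effect looks like
this (a sketch; it assumes the kernel can devote nearly all 8GB to
page cache, which is optimistic):

```python
RAM_GB = 8.0     # machine RAM, mostly available as OS page cache
DATA_GB = 9.0    # data_length of the table

# Once the page cache is warm, the chance a randomly chosen data
# page is already cached is roughly cache size over data size.
hit_rate = min(RAM_GB / DATA_GB, 1.0)
print(f"rough OS-cache hit rate: {hit_rate:.0%}")   # ~89%
```

So even with an 8MB innodb_buffer_pool_size, most lookups never touch
the disk, which would hide any latency difference between your two
configurations.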