Thanks! Please see comments inline.
> -----Original Message-----
> From: Dan Nelson [mailto:dnelson@stripped]
> Sent: Saturday, April 15, 2006 2:51 PM
> To: Charles Q. Shen
> Cc: 'Philippe Poelvoorde'; 'MySQL General'
> Subject: Re: MySQL cache problem - innodb_buffer_pool_size
> and FS cache
> In the last episode (Apr 15), Charles Q. Shen said:
> > This question is about tuning the innodb_buffer_pool_size in an
> > experiment related to my earlier post. I am running MySQL
> 4.1.11 with
> > innoDB on RedHatEnterprise Linux, kernel 2.6.9-22. The
> table contains
> > 50M records, with a total data_length of 9GB and index_length of
> > 9.5GB. I measured repeatedly the query latency of 50000 randomly
> > selected records. The latency remains relatively small and
> are about
> > the same regardless of whether I restart MySQL or not. As
> pointed out
> > earlier by Philippe, this could be caused by hitting the FS cache.
> > 1) Does that mean the MySQL innodb_buffer_pool_size setting will
> > virtually have no effect at all? (if it is true, then it
> should happen
> > quite commonly). In my test, I tried to change the
> > innodb_buffer_pool_size from its default value (8MB) to half of the
> > physical memory (4GB). I found almost no difference in latency in
> > these two cases.
> For a 50000-record test using the primary key to look up
> values, it may not make a difference at all.
Sorry, I did not make it clear: the 50000-record tests use a secondary index, not the primary one. And I repeated the tests on the same set of 50000 records several times.
> You can
> basically ignore the index_length, since for innodb that only
> applies to secondary indexes.
> The primary index is counted as data.
If the primary index is counted as data and becomes part of data_length, and as I understand it each secondary-index entry stores the primary key value for its row, could you please explain why index_length can be larger than data_length? (Does that mean the data, primary index, and secondary indexes are all mixed together?) Thanks.
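My current understanding, as a back-of-envelope sketch (the row and key widths below are made-up numbers, not measurements from my table): if each secondary-index entry is (secondary key + primary key), then a table with several or wide secondary indexes can easily report an index_length larger than its data_length.

```python
# Illustrative sizing only -- not actual InnoDB internals.
# The clustered (primary) index holds full rows and is reported as data_length;
# each secondary index entry holds only (secondary key + primary key).

ROWS = 50_000_000                    # row count from the test table
ROW_BYTES = 180                      # hypothetical average row size -> ~9 GB of data
PK_BYTES = 4                         # INT primary key
SECONDARY_KEY_BYTES = [40, 60, 94]   # hypothetical widths of three secondary keys

data_length = ROWS * ROW_BYTES
index_length = sum(ROWS * (k + PK_BYTES) for k in SECONDARY_KEY_BYTES)

print(f"data_length  ~ {data_length / 2**30:.1f} GB")   # ~8.4 GB
print(f"index_length ~ {index_length / 2**30:.1f} GB")  # ~9.5 GB, larger than data
```

With those assumed widths, index_length comes out larger than data_length even though the secondary entries never duplicate whole rows.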
> If you were to graph lookup latency verses buffer_pool size,
> you would probably see two bends: the first happens when your
> cache is large enough to store most of the levels of the
> index you're using (since you have to seek for each level),
Is this the size of index_length, or merely the size of the index levels themselves (for example, the ~600MB primary-key index you estimate below, plus whatever size the secondary keys take)?
> and the second happens when your cache is large enough to
> store the data pages as well (so you go to 0 seeks per
So, given that my data_length is 9G and index_length is 9.5G, what do you think the memory threshold for this would be (9G, 9.5G, or 18.5G)?
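To make the "two bends" concrete, here is a rough B-tree sizing sketch (the page size, entry width, and fanout are my assumptions, not InnoDB internals): the non-leaf levels of a 9GB index are tiny, so the first bend should arrive at a very small buffer pool, while the second bend needs roughly the full leaf size cached.

```python
# Illustrative B-tree level sizing under assumed constants.
PAGE = 16 * 1024            # assumed 16 KB index pages
ENTRY = 12                  # assumed bytes per internal entry (key + child pointer)
LEAF_BYTES = 9 * 10**9      # ~9 GB of leaf pages (the reported data_length)

fanout = PAGE // ENTRY      # internal entries per page (~1365)
pages = LEAF_BYTES // PAGE  # number of leaf pages
levels = []
while pages > 1:
    pages = -(-pages // fanout)   # ceil division: pages at the next level up
    levels.append(pages)

non_leaf_bytes = sum(levels) * PAGE
print(f"fanout ~ {fanout}, non-leaf pages per level: {levels}")
print(f"non-leaf index size ~ {non_leaf_bytes / 2**20:.1f} MB")
```

Under these assumptions the non-leaf levels total only a few MB (the first bend), whereas caching the leaves needs on the order of 9GB for the clustered index alone, and closer to 18.5G if a secondary-index lookup must keep both the secondary tree and the row pages hot.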
> I don't know how big your primary key is,
It is an INT.
> assuming 8 bytes (and a 4-byte pointer to the next index
> page), that'd need 50MB*(8+4)=600MB. The nice thing about
> indexes, though, is the first levels get hit an awful lot, so
> it doesn't take many random lookups to fetch them. You would
> need 9GB of cache to reach the next bend for your particular
> table, and even then you would have to run a lot of
> 50000-record tests in sequence before the cache filled up
> enough to show it (or do a full table scan beforehand to pull
> everything into memory).
I tested the same 50000 records repeatedly. I also tried a smaller database with a data_length of 87M and an index_length of 96M. In that case I tested after a full table scan, but encountered the same problem as reported. Probably the reason is still the OS cache you mention below.
> Your testing is made more difficult by your 8GB RAM size,
> since even if your make your innodb cache ridiculously small,
> you still have a >50% chance of the lookup taking 0ms because
> it's in the OS cache. It's always more efficient to cache
> inside mysql, though, since read syscalls aren't free.
> Dan Nelson