Thanks again :-)
From: Joerg Bruehe <joerg.bruehe@stripped>
To: Nunzio Daveri <nunziodaveri@stripped>; MySQL General List
Sent: Fri, July 30, 2010 1:31:54 PM
Subject: Re: Indexes larger than RAM (was: Do you know who can answer this
question I posted yesterday please?)
I am no InnoDB and tuning expert, so I had intended to stay away from
this question. Ok, I'll give some general remarks:
Nunzio Daveri wrote:
> All, I was running slamdb against one of our QA boxes and noticed that the
> innodb database is 190Gb in size BUT the worrying issue is that the indexes are
> 30GB in size!!! When I hit this server hard, it tanks on memory but still
> performs, slower of course ;-)
Having indexes which are larger than RAM is (in itself) not critical.
IMO, it becomes bad only when accesses to these indexes are spread so
wide that even the index pages become subject to frequent IO.
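One way to gauge whether index (and data) pages are in fact being evicted and re-read from disk is the InnoDB buffer pool hit rate. A minimal sketch, assuming you take the two counters from SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%' (the numbers below are made up for illustration):

```python
# Innodb_buffer_pool_read_requests = logical page reads,
# Innodb_buffer_pool_reads         = reads that had to go to disk.

def buffer_pool_miss_rate(read_requests: int, disk_reads: int) -> float:
    """Fraction of logical reads that missed the buffer pool."""
    if read_requests == 0:
        return 0.0
    return disk_reads / read_requests

# Hypothetical counter values as reported by SHOW GLOBAL STATUS:
miss = buffer_pool_miss_rate(read_requests=1_000_000_000, disk_reads=5_000_000)
print(f"miss rate: {miss:.2%}")
```

A sustained miss rate of more than a percent or two under load would suggest the working set (indexes included) no longer fits in the cache.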
> Any suggestions on what I should do? I am
> thinking of doing one of these:
Whether any action is needed, and which, depends on the problem you have:
- If the system as a whole (both CPU and disk) has a significant idle
percentage, it isn't yet maxed out, and I don't expect that adding
resources would improve performance significantly.
- If your CPUs have significant "waiting for IO" percentage, then data
accesses need speedup. This could be done by faster disks, but I would
expect more results from adding RAM for larger caches.
This holds especially if your disk throughput is close to the possible maximum.
(Assuming your bulk work is read/select. If it is insert/update, then
*removing* indexes might reduce the workload, as there are fewer
indexes to maintain.)
- If your CPUs are "busy", then I don't expect any increase of caching to help.
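The three cases above can be sketched as a small decision helper. This is a simplification: the thresholds are arbitrary illustration values, and the idle/iowait percentages would come from a tool such as vmstat or iostat:

```python
def diagnose(idle_pct: float, iowait_pct: float) -> str:
    """Very rough bottleneck triage from overall CPU percentages.
    The thresholds (30 / 20) are arbitrary, for illustration only."""
    if idle_pct > 30:
        return "not saturated: adding hardware is unlikely to help much"
    if iowait_pct > 20:
        return "IO-bound: more RAM for caches (or faster disks)"
    return "CPU-bound: larger caches won't help; distribute or reduce load"

print(diagnose(idle_pct=50, iowait_pct=5))
print(diagnose(idle_pct=5, iowait_pct=40))
print(diagnose(idle_pct=5, iowait_pct=2))
```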
> 1. Remove all indexes, run for a few days, look at the slow query logs and then
> find those queries that really need them and index those specifically for
Makes sense (only) if you have indexes which aren't really helpful for
accesses, so they just add maintenance load. If you do few
inserts/updates, an unused index should be paged out and not do much harm.
Comes with the cost of reduced performance during that test time, and
the need to rebuild the essential indexes afterwards. Has the benefit of
getting rid of unused indexes (which just cause maintenance load).
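For the slow-query-log part of that approach, the relevant my.cnf settings (MySQL 5.1 variable names; the file path and threshold are just example values) would look roughly like:

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log   # example path
long_query_time     = 1   # seconds; lower it to catch more candidates
log_queries_not_using_indexes = 1
```

The last setting also logs queries doing full scans, which helps spot where an index is genuinely missing rather than merely unused.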
> 2. Split the single server into two servers both with 16 gb and 2 quad core
> cpu's. One master the other a slave.
Makes sense if your CPUs are busy, *and* you can distribute the read
accesses to the two servers (= most accesses are select). If most load
is insert/update, I don't expect a real improvement.
Biggest cost in hardware and admin effort, so I would do this only after
a decent analysis. OTOH, it gives you some (hardware) fault tolerance, which
could be an important argument depending on your requirements.
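Should you go the master/slave route, the minimal configuration (a sketch only; server IDs and log name are arbitrary, and the slave still needs a CHANGE MASTER TO statement plus an initial data copy) is along these lines:

```ini
# master my.cnf (sketch)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf (sketch)
[mysqld]
server-id = 2
read_only = 1   # keep application writes off the slave
```

Remember the application then has to route selects to the slave itself; replication alone does not spread the load.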
> 3. Just add another 16gb (32GB total) and that should take care of the indexing
Makes sense if the disks are the bottleneck (CPUs are in "waiting for
IO"), so that larger caches will avoid disk accesses.
Assumes your machine supports that amount of RAM (many mainboards have a
limit at 16 GB, AIUI).
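If you do add RAM, the main knob on a dedicated InnoDB box is innodb_buffer_pool_size. A common rule of thumb (not a hard rule) is around 70-80% of RAM, so with 32 GB something like the following, which on 5.1 takes effect only after a server restart:

```ini
[mysqld]
innodb_buffer_pool_size = 24G   # ~75% of 32 GB; leaves room for the OS and per-connection buffers
```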
> Anyone had this problem before???
> Oh this is a single box, 100% mysql only and it talks to 3 front end iPlanet
> servers that hit it with a few hundred queries per second.
For a specific answer, the distribution of accesses between read and
write is needed, as well as information about which resource is close to
its limit.
Joerg Bruehe, MySQL Build Team, joerg.bruehe@stripped
(+49 30) 417 01 487
ORACLE Deutschland B.V. & Co. KG, Komturstrasse 18a, D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603