----- Original Message -----
> From: "Antonio Fernández Pérez" <antoniofernandez@stripped>
> Subject: Advices for work with big tables
> I'm writing to the list because I need your advice.
> I'm working with a database that has some tables with a lot of rows; for
> example, one table holds 8GB of data.
> What can I do to keep working fluidly with this table?
The two easiest points of optimisation are:
* Make sure your queries can use indexes as much as possible.
  * Percona Toolkit has good tooling for figuring that out; have a look at pt-query-digest.
* Provide enough memory to your MySQL server to fit the entire database (and don't forget
  to allocate it to your InnoDB buffer pool if that applies :-) ).
  * Failing that, provide at least enough memory to keep the most frequently used data set
    in memory.
  * Failing *that* (by then you're already running into disk bottlenecks), provide enough
    memory to keep your indexes in memory.
* Use faster disks for data that doesn't fit in memory (SSD, FusionIO etc).
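As a sketch, a server dedicated to MySQL might allocate most of its RAM to the buffer pool along these lines. The values below are illustrative assumptions for a machine with enough RAM to hold the 8GB dataset, not recommendations for your hardware:

```ini
# /etc/mysql/my.cnf (illustrative sizes -- tune to your own RAM and workload)
[mysqld]
# Large enough to hold the whole 8GB dataset plus some growth headroom:
innodb_buffer_pool_size = 10G
# Larger redo logs help write-heavy workloads with a big buffer pool:
innodb_log_file_size    = 512M
```

Remember to leave RAM for the OS, connection buffers and filesystem cache; the buffer pool shouldn't be sized to the last byte.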
Memory required for the full dataset:
  select sum(data_length+index_length) from information_schema.tables;
Memory required for indexes:
  select sum(index_length) from information_schema.tables;
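To turn the totals those queries return into a buffer pool setting, you can add some headroom and round up to InnoDB's allocation granularity. A minimal sketch (the 25% headroom factor and the helper name are my assumptions, not an official rule of thumb; 128MB matches the default innodb_buffer_pool_chunk_size in recent MySQL versions):

```python
# Sketch: derive a rough innodb_buffer_pool_size from the
# sum(data_length+index_length) figure reported by information_schema.

def suggest_buffer_pool_bytes(data_plus_index_bytes, headroom=0.25):
    """Add headroom to the dataset size, then round up to the nearest
    128MB, since InnoDB allocates the buffer pool in chunks."""
    chunk = 128 * 1024 * 1024
    target = int(data_plus_index_bytes * (1 + headroom))
    return ((target + chunk - 1) // chunk) * chunk

# e.g. an 8GB dataset, as in the original question:
dataset = 8 * 1024**3
print(suggest_buffer_pool_bytes(dataset) / 1024**3, "GB")
```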
There's no easy way to figure out your active data set size, that depends on what queries
are performed most often.
Depending on what type of disk cabinet you have, it may be possible to replace some drives
with full SSDs, or with disks that have a built-in SSD cache.
Unhappiness is discouraged and will be corrected with kitten pictures.