>>>>> "Eric" == Eric Kemp <kemp@stripped> writes:
Eric> I read a quote at the www.mysql.com site saying that some databases (under MySQL)
Eric> have even reached the point of 50 million records. I take that as bragging rights. But
Eric> my database background is mostly from mainframe DB2, where 50 million records would NOT be
Eric> bragging rights.
Are you talking about rows in a table or in a database ?
Here at TCX we don't have many real big tables (as we only recently
fixed the 4G problem); I think our biggest table is about 30M rows.
From the MySQL manual:
We have now been using *MySQL* since 1996 in an environment with more
than 40 databases containing 10,000 tables, of which more than 500
have more than 7 million rows. This is about 100 gigabytes of data.
The above describes the tables used in a production environment at
one of our customers.
Eric> Have there been any real tests to see how MySQL performs at larger numbers of
Eric> records? Or has anyone gone any higher than 50 million? How about performance charts /
Eric> benchmarks / competitive analysis at different high-volume counts?
MySQL uses B-tree indexes to find rows. This means that, as long as you
are using key accesses, lookup time grows with log_n(rows), where n
is the number of keys that fit into a 1024-byte key block. If we
forget the key cache for a moment and search on an integer column,
n should be about 85:
85^4 = 52M rows. This means that to find a row with a select that uses a
key, MySQL will need at most 5 disk seeks (4 for the index levels
and 1 for the data). Normally, even for big tables, the first 2
levels of the key blocks are in the key cache, which means that it will
take only 3 seeks to find a random row.
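The arithmetic above can be sketched in a few lines of Python (the 85
keys-per-block figure and the assumption that the top 2 index levels sit
in the key cache are taken straight from the text; this is a
back-of-the-envelope model, not MySQL's actual cost estimator):

```python
import math

def btree_levels(rows, keys_per_block=85):
    """Index levels needed to address `rows` rows with a B-tree fanout
    of `keys_per_block` (about 85 integer keys per 1024-byte block)."""
    return max(1, math.ceil(math.log(rows, keys_per_block)))

def seeks_per_lookup(rows, cached_levels=2, keys_per_block=85):
    """Disk seeks for one keyed lookup: the index levels not held in
    the key cache, plus 1 seek to fetch the data row itself."""
    levels = btree_levels(rows, keys_per_block)
    return max(0, levels - cached_levels) + 1

rows = 52_000_000
print(btree_levels(rows))                       # 4 index levels (85^4 > 52M)
print(seeks_per_lookup(rows, cached_levels=0))  # 5 seeks, cold cache
print(seeks_per_lookup(rows, cached_levels=2))  # 3 seeks, top 2 levels cached
```

This reproduces the numbers in the text: 4 index levels plus 1 data seek
gives at most 5 seeks cold, dropping to 3 once the top levels are cached.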
Eric> I just don't want to be surprised if/when I reach 60 million. So whether it be
Eric> the need to restructure my database, move to something like DB2 at that time, or (better
Eric> yet) stay with MySQL because it will continue to be a great performer out to... ???
Eric> million records.
MySQL should be as fast as DB2 (or even faster, as MySQL has a lot of
nice key compression methods to make the index smaller) for key lookups.
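To illustrate the idea behind one such method, here is a toy sketch of
prefix key compression: since index keys are stored in sorted order,
each key can be stored as the length of the prefix it shares with the
previous key plus the differing suffix. (This is only an illustration of
the principle; it is not MySQL's actual on-disk index format.)

```python
def compress_keys(sorted_keys):
    """Encode sorted string keys as (shared_prefix_len, suffix) pairs."""
    out, prev = [], ""
    for key in sorted_keys:
        shared = 0
        while shared < min(len(prev), len(key)) and prev[shared] == key[shared]:
            shared += 1
        out.append((shared, key[shared:]))
        prev = key
    return out

def decompress_keys(pairs):
    """Rebuild the original keys from the (prefix_len, suffix) pairs."""
    keys, prev = [], ""
    for shared, suffix in pairs:
        key = prev[:shared] + suffix
        keys.append(key)
        prev = key
    return keys

keys = ["janitor", "january", "japan", "jargon"]
packed = compress_keys(keys)
print(packed)   # [(0, 'janitor'), (3, 'uary'), (2, 'pan'), (2, 'rgon')]
assert decompress_keys(packed) == keys
```

Because neighbouring keys in a sorted index usually share long prefixes,
this kind of encoding lets many more keys fit into each key block, which
shrinks the index and cuts the number of disk seeks per lookup.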
If you are not using keys, then MySQL will have to scan the table, but
this is a different problem and not that common when you have 50 M
rows. (In normal cases MySQL should scan 1M rows/min on a moderate