>We are thinking of moving to MySQL. We have a table of several tens
>of millions of rows, with two indices, which will be accessed by
>roughly 100 different processes. At any one time, 5 or so of the
>processes will be doing selects on the table, while 40 or so will be
>doing updates. However, no two processes will ever try to update the
>same row at once.
>Can MySQL handle this efficiently and without allowing the table or
>indices to become corrupt?
>(The total throughput we need is on the order of 100 indexed updates
>per second; currently we are running a single 900 MHz Athlon with generic
>IDE disk but would buy more processors if it would help).
MySQL has only table-level locking: each update locks the entire
table, so updates are serialized even when they touch different rows.
A couple of open-source table handlers for MySQL are supposed to offer
row-level locking, but they aren't generally available yet.
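To see why this matters for the workload described above, here is a
minimal Python sketch (with a hypothetical in-memory "table" and a
single lock standing in for MySQL's table lock) showing that all
writers contend on the one lock even though no two of them ever touch
the same row:

```python
import threading

# One lock for the whole table, mirroring table-level locking.
table_lock = threading.Lock()
table = {i: 0 for i in range(1000)}  # toy table: row id -> value

def update_row(row_id):
    # Every writer must take the single table lock, so the 40
    # concurrent updaters serialize here, even though each one
    # touches a different row.
    with table_lock:
        table[row_id] += 1

threads = [threading.Thread(target=update_row, args=(i,))
           for i in range(40)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With per-row locks instead of one table lock, the 40 updaters could
proceed in parallel; under a single table lock, total update
throughput is bounded by how fast one writer at a time can run.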
Know thyself? Absurd direction!
Bubbles bear no introspection. -Khushhal Khan Khatak