List: General Discussion
From: Michael Widenius
Date: March 15 1999 6:33pm
Subject: scaling over a few hundred million rows.
>>>>> "Jim" == Jim Crumpler <Jim.Crumpler@stripped> writes:

Jim> I have some scaling questions for anyone who would like to assist.
Jim> Consider the following requirements for a data repository.

Jim> * a continuous input stream of data at a maximum rate of 500 rows per second.
Jim> (43 million rows per day).
Jim> * each row contains two timestamps, a key and a value.  An index would be
Jim> required for one timestamp and the key.
Jim> * a number of clients must read from this data in various ways.

Jim> I thought MySQL could be handy for this task, since it's relatively
Jim> lightweight, it's multithreaded and it offers the flexibility of SQL.  I
Jim> built a small-scale test server (Solaris 2.6, PII-400, 256MB RAM, a few
Jim> 9GB drives striped onto two ultra-wide controllers).  I've been working
Jim> on a test database for a few months now and I'm almost ready to go to the
Jim> next level and build a bigger test machine.

<cut>

Hi!

If you only do INSERT and SELECT, the new INSERT DELAYED option may
help solve this problem.  Have you tried this?
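
For example, with a table shaped like the one you describe (the table
and column names below are just placeholders):

   INSERT DELAYED INTO samples (recv_ts, orig_ts, sample_key, sample_value)
   VALUES (NOW(), NOW(), 'some-key', 42);

The client gets control back immediately; the server queues the row in
memory and writes it when the table is not in use, which helps a lot
when a continuous insert stream is mixed with reads.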

Another option that may help is to use a bigger table cache:
(mysqld -O table_cache=256)
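
To make this permanent, the set-variable syntax in an option file
should do the same thing (assuming your server reads /etc/my.cnf):

   [mysqld]
   set-variable = table_cache=256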

The 1K blocks for the b-trees should be quite good for most
applications.  You can, however, change this by editing the variable:

nisam_block_size

in isam/static.c

Note that this must be divisible by 1024.
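
The variable is a plain global in the source, so this is a one-line
edit plus a recompile.  Roughly like this (the exact type is from
memory, so treat it as a sketch):

   /* isam/static.c: default index block size for the NISAM b-trees */
   uint nisam_block_size=1024;	/* e.g. 2048; must be a multiple of 1024 */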

Regards,
Monty