On Sat, 1999-09-11 23:38:15 +0300, Tim Groove wrote:
> > For example, why not just use a single table, and then
> > SELECT COUNT(*) FROM tab WHERE uid=###;
> I can't because this table is very big (its size is about 200MB
> every evening).
> Each new INSERT takes a second or two once the table becomes big
> enough (it also has indexes and several "char" fields), and other
> processes are locked, waiting until the INSERT is done. I need
> about 400 new INSERTs each minute, so I decided to split the one
> big table into 24 parts to speed up the INSERT process.
Tim, unfortunately I don't have much experience with tables that big,
so what I'm going to say is based only on what I've read, not on my
own experience:
- Can you maybe get rid of some of the indexes? Each index takes
  some time to update on every table change (but I'm quite sure
  you're aware of that).
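  Just to illustrate (the index and table names here are made up,
  not from your schema), dropping an index you don't really need
  could look like:

  ```sql
  -- Check which indexes the table has first:
  SHOW INDEX FROM tab;
  -- Then drop one that isn't used by your SELECTs
  -- (idx_info is a hypothetical name):
  ALTER TABLE tab DROP INDEX idx_info;
  ```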
- Maybe INSERT DELAYED is an option for you?
There's been some discussion here in the list on this topic ...
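  A sketch of what I mean (column names are made up): with INSERT
  DELAYED the server queues the row and returns to the client right
  away, instead of making it wait for the index updates.

  ```sql
  -- The client gets control back immediately; the server writes
  -- the queued rows when the table is free. Note the drawback:
  -- the row isn't visible to SELECTs until it's actually written.
  INSERT DELAYED INTO tab (uid, info) VALUES (12345, 'some data');
  ```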
- And a maybe crazy idea: Because I assume that updating the indexes
  is the main bottleneck, you could maybe go with two identical
  tables, one with only an index on an auto_increment field id, the
  other fully indexed. And then, every five or ten minutes,
      INSERT INTO indexedtable
        SELECT * FROM fasttable WHERE id > lastrememberedid;
  This "batch" insert should be faster than 1000s of single inserts.
  So (in my theory) you should have the fastest inserts possible
  on the input side (non-indexed table) and fast indexed SELECTs
  on the output side, with the only drawback that the latter data
  is only updated every few minutes.
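  The whole scheme might look like this (all table and column names
  are made up for illustration; you'd keep the highest copied id
  somewhere between runs, e.g. in your cron script):

  ```sql
  -- Fast input table: only the auto_increment key is indexed,
  -- so INSERTs stay cheap.
  CREATE TABLE fasttable (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    uid  INT NOT NULL,
    info CHAR(40)
  );

  -- Fully indexed copy that the SELECTs run against.
  CREATE TABLE indexedtable (
    id   INT NOT NULL PRIMARY KEY,
    uid  INT NOT NULL,
    info CHAR(40),
    INDEX (uid)
  );

  -- Every few minutes, copy over only the new rows in one batch
  -- (lastrememberedid = highest id copied in the previous run):
  INSERT INTO indexedtable
    SELECT * FROM fasttable WHERE id > lastrememberedid;
  ```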
- Another, very different approach to your speed problem might be
  fiddling with some sort/buffer sizes and the like, but there are
  others more experienced who should step in on this ... :)
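  Just as a pointer (the exact variable names depend on your MySQL
  version, so check the manual for yours), such settings go in the
  [mysqld] section of my.cnf, e.g.:

  ```sql
  -- my.cnf fragment (values here are only examples, not a
  -- recommendation -- tune them to your RAM and workload):
  --   [mysqld]
  --   set-variable = key_buffer=64M
  --   set-variable = record_buffer=1M
  -- You can inspect the current values from the client with:
  SHOW VARIABLES;
  ```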
Martin Ramsch <m.ramsch@stripped> <URL: http://home.pages.de/~ramsch/ >
PGP KeyID=0xE8EF4F75 FiPr=52 44 5E F3 B0 B1 38 26 E4 EC 80 58 7B 31 3A D7