List: General Discussion
From: Sasha Pachev
Date: May 14 1999 11:01pm
Subject: Re: architectural/performance question
Ben Margolin wrote:
> 
> I am building a web application and plan to use MySQL on the backend
> for now, and Oracle eventually (if necessary). It uses persistent DB
> connections (via mod_perl Apache::DBI and php3), but will have to contend
> with massive data rates. I'd like some opinions about this design.
> 
> My records are about 200 bytes long and have two keys (one a char(24)
> and the other an int unsigned). I'm expecting to have 10,000,000 rows.
> Update speed is *critical*; insert speed is not terribly important.
> Basically, the update I want to do modifies a varchar (usually about
> 60 chars) and an int unsigned, and I need this to be fast.
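
For concreteness, a table matching that description might look roughly
like this (the column names are invented for the sketch, and the filler
column just stands in for the rest of the ~200-byte row):

    CREATE TABLE records (
      id      INT UNSIGNED NOT NULL,   -- the int unsigned key
      name    CHAR(24)     NOT NULL,   -- the char(24) key
      payload VARCHAR(60),             -- the varchar the hot update rewrites
      counter INT UNSIGNED,            -- the int the hot update rewrites
      filler  CHAR(100),               -- rest of the ~200-byte row
      PRIMARY KEY (id),
      KEY name_idx (name)
    );

    -- the update that has to sustain 120+ per second
    UPDATE records
    SET payload = 'new text', counter = 42
    WHERE id = 123456;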
> 
> I ran some tests with 1,000,000 rows (latest MySQL on a PII-300 with
> 128M RAM, IDE, RH Linux 6) and insert speed was exceptional, around 650-700
> inserts/second. BUT updates (random records) were awful: around 15-20
> updates/second! Is this reasonable? I need to be able to sustain at least
> 120 updates/second, or think up something else. (I tried with char() and
> varchar() fields, thinking it would make a difference, and it didn't --
> not much. Surprisingly, varchars were faster! I expected the opposite.)
> 
> Oh, I should mention that the production hardware is a dual P3 with 256M
> RAM and SCSI disks, by the way; it is not quite configured so I haven't
> gotten to run tests on it yet. It will be dedicated to the DB, not doing
> anything else of importance.
> 
> Is anyone doing apps that sustain large UPDATE rates? Is MySQL capable of
> this on this kind of hardware? Suggestions for lightening the load...?
> Would breaking up the table (fewer rows) help? I am thinking of breaking it
> into several (say, 10? maybe up to 100) tables, and seeing if that would
> make any difference. Before I do, would it? The traffic is just as heavy,
> but the indices etc. would of course be smaller. Is it 6 of one, half
> dozen of the other, performance-wise?
> 
> Greatly appreciate any advice! Thank you!
> 
> # ben margolin - ben@stripped - http://www.prince.org/
> # prince.org is a completely unauthorized, unofficial, independent fansite

Suggestions:

- Splitting the table would help a great deal if you do
it the smart way: come up with a good hash function that will
distribute the data evenly across the pieces (see the first
sketch below).

- Also, see if you can get rid of char and use enum or
int in its place: less disk space, faster searches (see the
second sketch below).

- And of course get as much RAM as possible.

- Last, try to think like MySQL. It will help you come
up with a lot of optimizations for your code.
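
For the splitting, a minimal sketch, assuming the hypothetical records
layout sketched above and assuming the int key is spread evenly enough
that a plain modulo works as the hash:

    -- ten identical tables, records_0 .. records_9, each with the same
    -- definition as the single records table (one of them shown here)
    CREATE TABLE records_6 (
      id      INT UNSIGNED NOT NULL,
      name    CHAR(24)     NOT NULL,
      payload VARCHAR(60),
      counter INT UNSIGNED,
      filler  CHAR(100),
      PRIMARY KEY (id),
      KEY name_idx (name)
    );

    -- the application picks the table with a cheap, even hash,
    -- e.g. table number = id MOD 10, so id 123456 goes to records_6
    UPDATE records_6
    SET payload = 'new text', counter = 42
    WHERE id = 123456;

Each piece then holds roughly a tenth of the rows and a much smaller
index, so the random updates are more likely to hit keys that are
already cached.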
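
For the enum idea, a sketch that assumes the char(24) column only ever
holds values from a small, fixed set (the three values below are just
placeholders; if the column holds arbitrary strings, an enum does not
apply and a surrogate int key is the alternative):

    -- an ENUM value is stored as a 1- or 2-byte number instead of
    -- 24 bytes of text, so rows shrink and key comparisons get cheaper
    ALTER TABLE records
    CHANGE name name ENUM('typeA', 'typeB', 'typeC') NOT NULL;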

-- 
Sasha Pachev
http://www.sashanet.com/ (home)
http://www.direct1.com/ (work)
Thread
architectural/performance question - Ben Margolin (15 May)
  • Re: architectural/performance question - Eduardo Arino de la Rubia (15 May)
  • Re: architectural/performance question - Sasha Pachev (15 May)