List: General Discussion
From: Eduardo Arino de la Rubia
Date: May 14 1999 10:19pm
Subject: Re: architectural/performance question
suggestion:

RAM, RAM, RAM.

The more RAM, the faster the updates.

Ed

(128 MB is not NEAR enough for a DB of that size... I am not sure 256 MB is
either... I would think 512 MB a good point to be at.)
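
(More precisely: more RAM only helps if mysqld is told to use it. A minimal
sketch of what I mean, assuming the 3.22-era key_buffer variable -- the 64M
figure here is an illustrative guess, not a tested value:

    # /etc/my.cnf -- let the index cache hold the keys in RAM, so the
    # lookup half of each UPDATE doesn't have to hit the disk
    [mysqld]
    set-variable = key_buffer=64M

    # roughly equivalent at server startup:
    # mysqld -O key_buffer=64M

The point is that the key cache, not total system RAM, is what the index
lookups behind those updates actually use.)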

-------------------------------------------------------------------------------
1 + 1 = 3, for large values of 1. |     ed@stripped    | arino@stripped
-------------------------------------------------------------------------------

On Fri, 14 May 1999, Ben Margolin wrote:

> 
> I am building a web application and plan to use mysql on the backend
> for now, and Oracle eventually (if necessary). It uses persistent db
> connections (via mod_perl Apache::DBI and php3), but will have to contend
> with massive data rates. I'd like some opinions about this design.
> 
> My records are about 200 bytes long and have two keys (one a char(24),
> the other an int unsigned). I'm expecting to have 10,000,000 rows.
> Update speed is *critical*; insert speed is not terribly important.
> Basically, the update I want to do modifies a varchar (usually about
> 60 chars) and an int unsigned, and I need this to be fast.
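> A concrete sketch of what I mean (all names here are made up for
> illustration; the filler column just stands in for the rest of the
> ~200-byte row):
> 
>     CREATE TABLE records (
>       tag     CHAR(24)     NOT NULL,  -- first key
>       id      INT UNSIGNED NOT NULL,  -- second key
>       payload VARCHAR(60),            -- the column the update changes
>       counter INT UNSIGNED,           -- ditto
>       filler  CHAR(100),              -- rest of the ~200-byte record
>       PRIMARY KEY (id),
>       INDEX (tag)
>     );
> 
>     -- the hot path: I need to sustain 120+ of these per second
>     UPDATE records
>     SET payload = 'new value', counter = counter + 1
>     WHERE id = 123456;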
> 
> I ran some tests with 1,000,000 rows (latest mysql on a PII-300 with
> 128M RAM, IDE, RHlinux6) and insert speed was exceptional, around 650-700
> inserts/second. BUT updates (of random records) were awful: around 15-20
> updates/second! Is this reasonable? I need to be able to sustain at least
> 120 updates/second, or think up something else. (I tried with both char()
> and varchar() fields, thinking it would make a difference, and it didn't
> make much of one. Surprisingly, varchars were faster! I expected the
> opposite.)
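> 
> (One check worth making, sketched here with the made-up names from
> above: that the lookup half of the update can actually use an index.
> If EXPLAIN on the equivalent SELECT shows type=ALL, every update is
> doing a full table scan.)
> 
>     EXPLAIN SELECT * FROM records WHERE id = 123456;
>     -- want type=const (keyed lookup), not type=ALL (table scan)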
> 
> Oh, I should mention that the production hardware is a dual P3 with 256M
> RAM and SCSI disks; it is not quite configured yet, so I haven't gotten
> to run tests on it. It will be dedicated to the DB, not doing anything
> else of importance.
> 
> Is anyone doing apps that sustain large UPDATE rates? Is mysql capable of
> this on this kind of hardware? Any suggestions for lightening the load?
> Would breaking up the table (fewer rows per table) help? I am thinking of
> breaking it into several (say, 10, maybe up to 100) tables and seeing if
> that would make any difference. Before I do: would it? The traffic would
> be just as heavy, but the indices etc. would of course be smaller. Is it
> six of one, half a dozen of the other, performance-wise?
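> 
> Concretely, I mean something like this (sketch only; the table count
> and the modulo hash are arbitrary choices):
> 
>     -- ten identical tables, records_0 .. records_9; the application
>     -- picks the table by hashing the int key: 123456 % 10 = 6
>     UPDATE records_6
>     SET payload = 'new value', counter = counter + 1
>     WHERE id = 123456;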
> 
> Greatly appreciate any advice! Thank you!
> 
> # ben margolin - ben@stripped - http://www.prince.org/
> # prince.org is a completely unauthorized, unofficial, independent fansite 
