List: General Discussion
From: Sasha Pachev  Date: June 22 1999 3:28pm
Subject: Re: Update, Insert and Select
Dan Nelson wrote:
> 
> In the last episode (Jun 22), David Johnson said:
> > Sasha Pachev <sasha@stripped> said:
> > > I have noticed that unlike selects, updates and inserts are more
> > > disk I/O bound than CPU bound. Setting key_buffer to 0 in this
> > > situation actually improved performance. Regardless of the buffer
> > > settings, disk I/O is very intensive and mysqld uses only a small
> > > fraction of the CPU.
> > >
> > > Does that mean that if someone is doing a lot of inserts and
> > > updates they need to focus on a fast disk? Or maybe there is a way
> > > to tweak the settings to avoid it (other than mounting a ramdisk
> > > on /usr/local/mysql/data/db_name)? What do you guys think?
> >
> > We are quickly running into the I/O boundary problem.  We do a lot
> > of updates and have recently figured out that we will hit the I/O
> > wall long before we hit the CPU wall.
> >
> > We would be VERY interested in avoiding that problem.
> 
> You're pretty much stuck with it, I'm afraid.  Inserts and updates have
> to modify both the table and any affected indexes, and have to sync the
> data to avoid corruption in case of a crash.  You can make selects
> faster by adding RAM (and caching more of your table/indexes), but more
> RAM doesn't help updates much at all.
> 
> Bigger databases (like Oracle) overcome this problem by using redo logs
> and background database writer processes to optimize writes, and by
> using row or block-level locking so that multiple updates can happen
> simultaneously.
> 
> MySQL has neither redo logs nor fine-grained locking, so you'll
> just have to buy the fastest disks you can find, and/or get a
> nonvolatile disk cache like Prestoserve.
> 

I have done some more benchmarks and discovered that splitting the data
into a reasonable number of smaller tables can speed up inserts and
updates. In my case:

one large table with 1 million records: about 70 updates/s
the same table split into 64 smaller tables: 190 updates/s in the worst
case, sometimes 520 updates/s.
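
To make the split concrete, here is a rough sketch in Python (purely
illustrative: the table naming scheme, key column, and hash choice are
all assumptions for the example, not what the benchmark above used):

    import hashlib

    NUM_TABLES = 64

    def subtable_for(key):
        # Hash the record key and route it to one of the 64 sub-tables.
        # The naming scheme mytable_00 .. mytable_63 is hypothetical.
        h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        return "mytable_%02d" % (h % NUM_TABLES)

    def update_sql(key, value):
        # Build the UPDATE statement against the right sub-table.
        return ("UPDATE %s SET value=%s WHERE id=%s"
                % (subtable_for(key), value, key))

    print(update_sql(123456, 42))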

One way to improve performance further would be to schedule the queries
better: queue up several updates, sort them by the table they will
modify, and flush the queue periodically. The advantage is that you can
then lock each table once with LOCK TABLES, and the writes will hit the
disk in a more orderly pattern, with less random seeking.
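
Here is a rough sketch of that queue in Python (again illustrative
only; send_sql() is a made-up stand-in for whatever client call
actually executes a statement):

    from collections import defaultdict

    def send_sql(stmt):
        # Stand-in for a real client call; just print the statement.
        print(stmt)

    class UpdateQueue:
        # Buffer updates, then flush them grouped by target table, so
        # each table is locked once and its rows are written together.
        def __init__(self, flush_at=1000):
            self.pending = defaultdict(list)  # table name -> SQL list
            self.count = 0
            self.flush_at = flush_at

        def add(self, table, sql):
            self.pending[table].append(sql)
            self.count += 1
            if self.count >= self.flush_at:
                self.flush()

        def flush(self):
            for table in sorted(self.pending):
                send_sql("LOCK TABLES %s WRITE" % table)
                for stmt in self.pending[table]:
                    send_sql(stmt)
                send_sql("UNLOCK TABLES")
            self.pending.clear()
            self.count = 0
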
-- 
Sasha Pachev
http://www.sashanet.com/ (home)
http://www.direct1.com/ (work)