List: General Discussion
From: Ryan R. Tharp    Date: April 24 2003 11:19pm
Subject:Re: performance
I/O bound, I believe: I've seen the controller hit 74 MB/s (it averages around
30 MB/s), while the load average stays under 1.

Here are the queries from the quick benchmark test I put together:

"seq 1000 row Select"=>"select * from <table> limit 1000",
"seq 1000 row Select starting at random point"=>"select * from <table> limit
$rand,1000",
"ran 1000 row Select"=>"select * from <table> order by rand() limit 1000",
"seq 1 row Select"=>"select * from <table> limit 1",
"starting at ran 1 row Select"=>"select * from <table> limit $rand,1",
"ran 1 row Select"=>"select * from <table> order by rand() limit 1",
"1 row Insert Indexed columns"=>"insert into <table>
(encCC,username,hitreferrerid) values
('deleteme','deleteme',".rand(1,100).")",
"1 row Insert NonIndexed columns"=>"insert into <table>
(city,state,intoprice) values ('deleteme','deleteme',".rand(1,100).")",
"1 row Update Indexed columns"=>"update <table> set encCC='DELETME',
username='USERNAME', hitreferrerid=".rand(1,100)." where encCC='deleteme'",
"1 row Update NonIndexed columns"=>"update <table> set city='DELETME',
state='STATE', intoprice=".rand(1,100)." where city='deleteme' and
state='deleteme'",
"1 row Delete Indexed columns"=>"delete from <table> where encCC='DELETME'
and username='USERNAME' limit 1",
"1 row Delete NonIndexed columns"=>"delete from <table> where city='DELETME'
and state='STATE' limit 1"

On the tables that have indexes, we have 9 single-column indexes and 2
multi-column indexes (one spans 3 fields, the other 2).
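For what it's worth, Dan's question about whether the queries are actually
using those indexes can be answered directly in the mysql client. A sketch
(substitute the real table name for <table>; the encCC predicate is just one
of the benchmark conditions above):

EXPLAIN select * from <table> where encCC='deleteme';
    -- the "key" column names the index chosen; NULL means a full table scan

SHOW INDEX FROM <table>;
    -- lists every index on the table with its columns and cardinality

With 11 indexes on the table, every insert/update/delete has to maintain all
of them, which may be part of why the write benchmarks look slow.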


----- Original Message -----
From: "Dan Nelson" <dnelson@stripped>
To: "Ryan R. Tharp" <mysql@stripped>
Cc: <mysql@stripped>
Sent: Thursday, April 24, 2003 4:14 PM
Subject: Re: performance

> In the last episode (Apr 24), Ryan R. Tharp said:
> > Running MySQL 4.0.12 on a dual Xeon 2.8ghz with 2gb memory running
> > FreeBSD 4.8 on a Hardware RAID 0+1 SCSI HDDs
> >
> > Have this table that I'm trying to optimize for. It has 1.1million
> > rows with 29 fields averaging 506 Bytes per row. We've considered
> > splitting the table but we don't have enough developer time available
> > to convert all the code that uses the table. So we'd like just to
> > throw hardware/memory at it for the time being.
> >
> > Some software tricks I've tried are playing with:
> > Dynamic sized rows and static sized rows
> > InnoDB and MyISAM
> > Indexes and No Indexes
> > Primary Keys and No Primary Key (Just an Index for the auto-increment)
> >
> > I wrote a benchmark that tests most of these scenarios and then I
> > began to adjust the various mysql buffers but to my amazement the
> > benchmarks ran best when I didn't set any buffers in the my.cnf at
> > all.
> >
> > Also the fastest benchmark numbers aren't that fast. Is this table
> > too big? Any other optimization tips to try? Any one have any ideas
> > or suggestions?
> We need more data.  Are you currently CPU bound?  I/O bound?  "top" and
> "systat -v"  will tell you that.  Are you doing many small queries, or
> mainly complicated selects?  Lots of updates, or lots of selects?  How
> many indexes, and are your queries using them?
> --
> Dan Nelson
> dnelson@stripped
