"select * from <table> limit 1000"
Are you sure you need all the column data? Also, if the webserver
calling for this data isn't local, you've got to push all of it over the
network; you might want to take a look at that. It could turn out to be a
large amount of data depending on the actual row size...
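Back-of-the-envelope, using the 506 bytes/row average from the original mail (a rough sketch; real transfer size also includes protocol overhead and depends on which columns are actually fetched):

```python
# Rough cost of shipping "select * from <table> limit 1000" to a remote webserver.
# 506 bytes is the average row size reported below; 1000 rows per query.
row_bytes = 506
rows = 1000
payload = row_bytes * rows  # raw row data per query, before protocol overhead
print(f"{payload} bytes ~= {payload / 1024:.0f} KiB per query")
```

So every one of those 1000-row selects moves roughly half a megabyte of row data across the wire, which adds up fast under load.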
Any chance you could estimate your query load percentages?
amount select: %
amount update: %
amount delete: %
amount insert: %
I've got a PHP script that does this; you just feed it a MySQL text query
log, if you need one...
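The idea is simple enough to sketch. The poster's tool is a PHP script that isn't included here, so this is an illustrative Python version under assumed inputs: it tallies the first SQL verb on each `Query` line of a MySQL general query log (the sample log lines below are made up, and the verb-after-`Query` pattern is an assumption about the log format):

```python
import re
from collections import Counter

VERBS = ("select", "insert", "update", "delete")

def query_mix(lines):
    """Return the percentage of select/insert/update/delete among logged queries."""
    counts = Counter()
    for line in lines:
        # General-log lines look roughly like: "<date> <time> <id> Query <sql>";
        # grab the first word of the SQL after the "Query" command marker.
        m = re.search(r"\bQuery\s+(\w+)", line, re.IGNORECASE)
        if m and m.group(1).lower() in VERBS:
            counts[m.group(1).lower()] += 1
    total = sum(counts.values()) or 1  # avoid division by zero on an empty log
    return {v: 100.0 * counts[v] / total for v in VERBS}

# Hypothetical sample log lines, just to show the shape of the output:
sample = [
    "030424 16:14:01   42 Query  select * from t limit 1000",
    "030424 16:14:02   42 Query  update t set a=1",
    "030424 16:14:03   42 Query  select 1",
    "030424 16:14:04   42 Query  insert into t values (1)",
]
print(query_mix(sample))  # select 50%, update 25%, insert 25%, delete 0%
```

A read-heavy mix points at different tuning (key/query caches, indexes) than a write-heavy one (fewer indexes, batched inserts), which is why the percentages matter before throwing hardware at it.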
On Thu, 24 Apr 2003, Ryan R. Tharp wrote:
> IO bound I believe, I've seen the controller hit 74MBps (averages around
> 30MBps) where the load average is always under 1.
> Here are the queries from the quick benchmark test I put together:
> "seq 1000 row Select"=>"select * from <table> limit 1000",
> "seq 1000 row Select starting at random point"=>"select * from <table>
> "ran 1000 row Select"=>"select * from <table> order by rand() limit 1000",
> "seq 1 row Select"=>"select * from <table> limit 1",
> "starting at ran 1 row Select"=>"select * from <table> limit $rand,1",
> "ran 1 row Select"=>"select * from <table> order by rand() limit 1",
> "1 row Insert Indexed columns"=>"insert into <table>
> (encCC,username,hitreferrerid) values
> "1 row Insert NonIndexed columns"=>"insert into <table>
> (city,state,intoprice) values ('deleteme','deleteme',".rand(1,100).")",
> "1 row Update Indexed columns"=>"update <table> set encCC='DELETME',
> username='USERNAME', hitreferrerid=".rand(1,100)." where encCC='deleteme'",
> "1 row Update NonIndexed columns"=>"update <table> set city='DELETME',
> state='STATE', intoprice=".rand(1,100)." where city='deleteme' and
> "1 row Delete Indexed columns"=>"delete from <table> where encCC='DELETME'
> and username='USERNAME' limit 1",
> "1 row Delete NonIndexed columns"=>"delete from <table> where
> and state='USERNAME' limit 1"
> On the tables that have indexes, we have 9 single column indexes, and 2
> multicolumn indexes (one is comprised of 3 fields, the other is 2 fields).
> ----- Original Message -----
> From: "Dan Nelson" <dnelson@stripped>
> To: "Ryan R. Tharp" <mysql@stripped>
> Cc: <mysql@stripped>
> Sent: Thursday, April 24, 2003 4:14 PM
> Subject: Re: performance
> > In the last episode (Apr 24), Ryan R. Tharp said:
> > > Running MySQL 4.0.12 on a dual Xeon 2.8ghz with 2gb memory running
> > > FreeBSD 4.8 on a Hardware RAID 0+1 SCSI HDDs
> > >
> > > Have this table that I'm trying to optimize for. It has 1.1million
> > > rows with 29 fields averaging 506 Bytes per row. We've considered
> > > splitting the table but we don't have enough developer time available
> > > to convert all the code that uses the table. So we'd like just to
> > > throw hardware/memory at it for the time being.
> > >
> > > Some software tricks I've tried are playing with:
> > > Dynamic sized rows and static sized rows
> > > InnoDB and MyISAM
> > > Indexes and No Indexes
> > > Primary Keys and No Primary Key (Just an Index for the auto-increment)
> > >
> > > I wrote a benchmark that tests most of these scenarios and then I
> > > began to adjust the various mysql buffers but to my amazement the
> > > benchmarks ran best when I didn't set any buffers in the my.cnf at
> > > all.
> > >
> > > Also the fastest benchmark numbers aren't that fast. Is this table
> > > too big? Any other optimization tips to try? Any one have any ideas
> > > or suggestions?
> > We need more data. Are you currently CPU bound? I/O bound? "top" and
> > "systat -v" will tell you that. Are you doing many small queries, or
> > mainly complicated selects? Lots of updates, or lots of selects? How
> > many indexes, and are your queries using them?
> > --
> > Dan Nelson
> > dnelson@stripped
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe: http://lists.mysql.com/mysql?unsub=1