In the last episode (Apr 24), Ryan R. Tharp said:
> Running MySQL 4.0.12 on a dual Xeon 2.8 GHz with 2 GB memory, running
> FreeBSD 4.8 on hardware RAID 0+1 SCSI HDDs.
> I have a table that I'm trying to optimize. It has 1.1 million
> rows with 29 fields, averaging 506 bytes per row. We've considered
> splitting the table but we don't have enough developer time available
> to convert all the code that uses the table. So we'd like just to
> throw hardware/memory at it for the time being.
> Some software tricks I've tried are playing with:
> Dynamic sized rows and static sized rows
> InnoDB and MyISAM
> Indexes and No Indexes
> Primary Keys and No Primary Key (Just an Index for the auto-increment)
> I wrote a benchmark that tests most of these scenarios and then I
> began to adjust the various mysql buffers but to my amazement the
> benchmarks ran best when I didn't set any buffers in my.cnf at all.
> Also, the fastest benchmark numbers aren't that fast. Is this table
> too big? Any other optimization tips to try? Anyone have any ideas
> or suggestions?
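(For context, the "various mysql buffers" mentioned above would be set in my.cnf. A minimal sketch for a 2 GB box of this vintage follows; the variable names match the MySQL 4.0 era as best I recall, and the values are illustrative assumptions only, not recommendations for this workload:)

```ini
# my.cnf sketch -- illustrative values, tune against the actual workload
[mysqld]
key_buffer_size  = 512M   # MyISAM index block cache (the big win for MyISAM)
table_cache      = 256    # open table handles
sort_buffer_size = 2M     # per-connection sort buffer; keep this small
# If the table is InnoDB instead of MyISAM:
# innodb_buffer_pool_size = 1G
```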
We need more data. Are you currently CPU bound? I/O bound? "top" and
"systat -v" will tell you that. Are you doing many small queries, or
mainly complicated selects? Lots of updates, or lots of selects? How
many indexes, and are your queries using them?
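On that last point, EXPLAIN will tell you whether a given SELECT actually uses an index. A sketch, with made-up table and column names:

```sql
-- Placeholder names: big_table, customer_id, etc. are hypothetical.
-- In the EXPLAIN output, type=ref or type=range means an index is
-- being used; type=ALL means a full scan of all 1.1M rows.
EXPLAIN SELECT id, name
FROM   big_table
WHERE  customer_id = 42;

-- List the indexes that currently exist on the table:
SHOW INDEX FROM big_table;
```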