In the last episode (Jun 22), Peter Hicks said:
> I have a relatively simple table of 200+ network devices and 60+
> sites. This is accessed about 30 times a minute by various perl
> scripts to pick up device/site information.
> Will I see any noticeable benefit from creating a cached copy of this
> table as a MEMORY table? The data doesn't change often, so slightly
> stale cached data isn't an issue.
30 times a minute is only 1 query every 2 seconds. Unless you mean
each script is doing 30/minute and you have 50 scripts, I don't think
you really need to worry about optimizing mysql. What's the total time
spent waiting for mysql vs the total runtime of the script?
Anyway, it's easy enough for you to test: copy the table to a backup
name, "ALTER TABLE devices ENGINE=MEMORY", then benchmark. If you are
doing the same queries repeatedly, try enabling the query cache first.
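The test described above might look like this in SQL, assuming the table is named "devices" and using a hypothetical backup name "devices_disk" (on MySQL versions before 8.0, which still have the query cache):

```sql
-- Keep a disk-based copy first: MEMORY tables lose their
-- rows when mysqld restarts, so the original data must
-- survive somewhere durable.
CREATE TABLE devices_disk LIKE devices;
INSERT INTO devices_disk SELECT * FROM devices;

-- Convert the working copy to the MEMORY engine, then
-- re-run the scripts and compare timings.
ALTER TABLE devices ENGINE=MEMORY;

-- Before converting, it may be worth benchmarking with the
-- query cache instead (size is illustrative; enabling the
-- cache type may require a server restart on some versions).
SET GLOBAL query_cache_size = 16777216;
```

Note that MEMORY tables default to HASH indexes, which are fast for equality lookups but useless for range scans; add BTREE indexes explicitly if the queries need ranges or sorting.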
Also make sure your tables are appropriately indexed. If the table's
small, chances are mysql has fully cached the index and the OS has
cached the table data. If the queries are simple, an appropriate
multi-column index will let mysql return results directly from the
index without touching the table at all.
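A sketch of such a covering index, using hypothetical column names (a lookup of a device's site by device name):

```sql
-- Index both the lookup column and the returned column so
-- the query is answered entirely from the index; EXPLAIN
-- shows "Using index" when this happens.
CREATE INDEX idx_name_site ON devices (name, site_id);

EXPLAIN SELECT site_id FROM devices WHERE name = 'router1';
```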