List: General Discussion
From: matthew mcglynn
Date: March 24 1999 3:55am
Subject: RAM and caching
We have a production server that sees up to 400 transactions
per second. None of the tables are monstrous -- 600k to a million
records in the largest.

On disk, the database is just under a gig in size.

I'd like to be able to cache much or most of this in RAM
to speed queries, as this server just keeps getting busier.
My question is this: what is the algorithm for determining
how much RAM is necessary to slurp this db into memory?
What I'm thinking is that there may be some unknown factors
that make this answer more complicated than "you need just
under a gig of RAM".
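As a first approximation (a sketch only, not a MySQL-documented formula), the on-disk size of the table files gives a lower bound on the RAM needed, since per-thread and per-connection buffers add overhead on top of any cached data. The snippet below simply walks a data directory and sums file sizes; the directory path is a hypothetical example, not taken from the message above:

```python
import os

def db_disk_footprint(datadir):
    """Rough lower bound on the RAM needed to cache a database:
    the total on-disk size of its table files. Actual memory use
    will be higher because of per-thread/per-connection buffers."""
    total = 0
    for root, _dirs, files in os.walk(datadir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# Hypothetical data-directory path, for illustration only:
# db_disk_footprint("/var/lib/mysql/mydb")
```

This only bounds the data-and-index footprint; it says nothing about how the server actually apportions cache memory, which is what the questions below are probing.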

For example, are the disk files compressed in any way? Or
are the RAM copies compressed? If multiple mysqld threads
open a table, does it occupy more memory than if only a single
thread opens the table?

The host OS is Linux 2.0.36. Right now we're using 
v. 3.22.12-beta, although we'll be upgrading to whatever
the current binary release is in a few days.

Thanks for any assistance.

