In the last episode (Nov 22), Ryan Chan said:
> I've often heard that if you have a large table (MyISAM, in my case),
> you need enough memory to hold the key/index in memory for good
> performance; otherwise a table scan on disk is slow.
> But how do I estimate how much memory I need?
> Consider a simple case: a MyISAM table that is 10GB in size, with a
> 2GB index. How much memory do I need?
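(As an aside, you don't have to guess at those two numbers; information_schema
reports them per table. A sketch, run against a live server; the schema and
table names below are placeholders for your own.)

```sql
-- On-disk data and index footprint for one table, in GB
-- ('mydb' and 'mytable' are placeholder names).
SELECT data_length  / POW(1024, 3) AS data_gb,
       index_length / POW(1024, 3) AS index_gb
FROM   information_schema.tables
WHERE  table_schema = 'mydb'
  AND  table_name   = 'mytable';
```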
If by "table scan" you mean a full table scan with no index usage, your RAM
is irrelevant unless you have at least 10GB (enough to cache the entire
table). With anything less than that, you will have to read the entire table
from disk every time, and in that case your disks' sequential throughput is
the only thing that matters: at, say, 100MB/s, a 10GB scan takes on the
order of 100 seconds no matter how much RAM you have.

If you're more interested in random record lookups: the ideal case is 12GB
(10GB data + 2GB index, so no disk reads are ever needed). Next best would
be 2GB (enough to cache the entire index, but not the data, so you need one
disk seek per lookup). Next best would be enough to cache all but the leaf
level of the index (requiring one index seek and one table seek per lookup);
this depends on your key size, but 200MB should be enough.
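One caveat when translating that into server settings: MyISAM's key cache
(key_buffer_size) holds only index blocks; data blocks are cached by the OS
filesystem cache, so leave RAM free for that as well. A sketch, assuming the
2GB index above and that the server has the RAM to spare:

```sql
-- Size the MyISAM key cache to hold the whole 2GB index.
-- key_buffer_size is a dynamic variable; changing it at
-- runtime requires the SUPER privilege.
SET GLOBAL key_buffer_size = 2 * 1024 * 1024 * 1024;
```

To make the setting survive a restart, put key_buffer_size = 2G in the
[mysqld] section of my.cnf instead.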