>>>>> "Rick" == Rick James <rjames@stripped> writes:
Rick> Cache segmentation is a bad idea -- at least if you are deciding that
Rick> certain things go in each segment. It's like the difference between
Rick> pre-assigning disks for different purposes versus using RAID striping.
Rick> You end up with some segments that are too busy, and some segments that
Rick> are too idle. That is, you have free blocks you can't use.
The 'basic' plan we have is to first segment the hash tables where we
look up things. This would be more like RAID striping than
pre-assigned disks, as the buffers are shared among all entries.
There would still need to be a global mutex for the LRU, but this would
be held for a very short time, and we may even be able to use atomic
instructions to avoid the mutex altogether.
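To make the idea concrete, here is a rough sketch of a lock-striped
lookup index (names, segment count and key layout are illustrative
assumptions, not our actual code): lookups on different segments never
contend, while the buffers behind the index stay shared.

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <mutex>
#include <unordered_map>

// Illustrative segment count; a real implementation would tune this.
constexpr std::size_t N_SEGMENTS = 16;

struct PageKey {
  uint32_t file_no;
  uint64_t block_no;
  bool operator==(const PageKey &o) const {
    return file_no == o.file_no && block_no == o.block_no;
  }
};

struct PageKeyHash {
  std::size_t operator()(const PageKey &k) const {
    // Simple mixing of file number and block number.
    return std::hash<uint64_t>()(
        (uint64_t)k.file_no * 0x9e3779b97f4a7c15ULL ^ k.block_no);
  }
};

class SegmentedCacheIndex {
  struct Segment {
    std::mutex lock;  // per-segment lock: only this stripe is held
    std::unordered_map<PageKey, void *, PageKeyHash> map;
  };
  std::array<Segment, N_SEGMENTS> segments_;

  Segment &segment_for(const PageKey &k) {
    return segments_[PageKeyHash()(k) % N_SEGMENTS];
  }

 public:
  void insert(const PageKey &k, void *page) {
    Segment &s = segment_for(k);
    std::lock_guard<std::mutex> g(s.lock);
    s.map[k] = page;
  }

  void *lookup(const PageKey &k) {
    Segment &s = segment_for(k);
    std::lock_guard<std::mutex> g(s.lock);
    auto it = s.map.find(k);
    return it == s.map.end() ? nullptr : it->second;
  }
};
```

The LRU list is not shown; in this sketch it would still sit behind one
global mutex (or atomic operations), which is exactly the part we hope
to keep short.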
Rick> OTOH, if you meant having a lock for each, say, 10MB of the cache
Rick> irrespective of what is stored where, then segmentation sounds like a
Rick> good idea. This decreases lock contention (by having more locks), but
Rick> does not restrict which page is stored where.
We have briefly discussed doing some tests where we split the cache
based on a hash of file number + block. This would be more along the
lines of pre-assigned disks, but as things are likely to be spread
quite uniformly this way, it may still work decently.
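The splitting itself is trivial; something like this (constants and
names are illustrative assumptions) would send every access to a given
page to the same fixed partition, each with its own hash table, LRU and
free list:

```cpp
#include <cstdint>

// Illustrative partition count, not a tuned value.
constexpr uint32_t N_PARTITIONS = 8;

// Every (file, block) pair always maps to the same partition, so each
// partition can own its buffers outright.  The downside Rick describes
// applies: a partition holding hot pages can run out of free buffers
// while other partitions sit idle.
uint32_t cache_partition(uint32_t file_no, uint64_t block_no) {
  uint64_t h = (uint64_t)file_no * 0x9e3779b97f4a7c15ULL ^ block_no;
  h ^= h >> 32;  // fold the high bits into the low bits before modulo
  return (uint32_t)(h % N_PARTITIONS);
}
```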
I personally think it's better to avoid depending on where pages are
stored, but as the above is quite easy to test, I think we should at
least try it to get some experience while working on something we
think is a better solution.