Most RAID controllers will happily do elevator scheduling like you mentioned.
So will Linux.
For MySQL + RAID, a Linux elevator strategy of 'deadline' or 'noop' is optimal. (The
default, 'cfq', is not as good.)
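On most Linux systems the elevator can be inspected and changed per block device through sysfs; the kernel shows the active choice in brackets. A minimal sketch (the device name 'sda' and the classic noop/deadline/cfq scheduler set are assumptions for illustration):

```shell
# Extract the active elevator from the kernel's scheduler-file format,
# where the current choice is bracketed, e.g. "noop [deadline] cfq".
active_sched() {
  sed 's/.*\[\(.*\)\].*/\1/' <<< "$1"
}

# On a real system you would read and write the sysfs file directly:
#   cat /sys/block/sda/queue/scheduler
#   echo deadline > /sys/block/sda/queue/scheduler    # as root
active_sched "noop [deadline] cfq"   # -> deadline
```

Note the change is per-device and does not survive a reboot unless set via the kernel command line or an init script.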
A RAID controller with multiple drives striped (RAID-10) or striped with parity (RAID-5),
plus a BBU (Battery-Backed Write Cache), is excellent for I/O.
I don't know about "chronologically later". InnoDB "does the right thing", as long as the
OS does not cheat on fsync, etc.
Only 16 subdirectories per directory? I would expect 256 to be more efficient overall,
because it needs fewer levels: scanning 256 entries is probably cheaper than traversing an
extra level. (Yeah, again, I can't _prove_ it in _your_ environment.)
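One common way to get a flat 256-way fan-out is to bucket each file by the first two hex digits of a hash of its name. The md5-based scheme below is an assumption for illustration, not something from the original thread:

```shell
# Map a filename to one of 256 subdirectories (00..ff) using the first
# two hex digits of its MD5 -- one flat 256-way fan-out instead of
# nested 16-way levels.
bucket() {
  printf '%s' "$1" | md5sum | cut -c1-2
}

bucket "foo"   # -> "ac" (md5("foo") begins acbd18db...)
# A file named "foo" would then be stored as ./ac/foo
```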
4K tables on a single machine -- that is beginning to get into 'big' with respect to
ulimit, table_open_cache, etc. That is, if you went much past that, you would start
hitting new areas of inefficiency.
I do not like splitting a database "table" into multiple tables, except by PARTITIONing.
PARTITIONing would also provide an 'instantaneous' way of purging old data. (DROP
PARTITION + REORGANIZE PARTITION)
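As a sketch of that purge pattern, assuming a hypothetical log table RANGE-partitioned by month (all table, partition, and column names here are made up for illustration):

```sql
-- Drop the oldest month: effectively instantaneous, no row-by-row DELETE.
ALTER TABLE log DROP PARTITION p2013_06;

-- Carve the next month out of the catch-all MAXVALUE partition.
ALTER TABLE log REORGANIZE PARTITION pMAX INTO
    (PARTITION p2013_09 VALUES LESS THAN (TO_DAYS('2013-10-01')),
     PARTITION pMAX     VALUES LESS THAN MAXVALUE);
```

Keeping an empty MAXVALUE partition at the top means the REORGANIZE only splits an empty partition, so it, too, is cheap.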
Almost always (again, no proof for your case), a single table is more efficient than many
tables. This applies to PARTITIONing, too, though there can be other gains from using it.
Note that InnoDB has a 64TB limit per PARTITION.
> -----Original Message-----
> From: william drescher [mailto:william@stripped]
> Sent: Saturday, July 27, 2013 4:32 AM
> To: mysql@stripped
> Subject: Re: hypothetical question about data storage
> On 7/26/2013 6:58 PM, Chris Knipe wrote:
> > The issue that we have identified is caused by seek time - hundreds of
> > clients simultaneously searching for a single file. The only real way
> > to explain this is to run 100 concurrent instances of bonnie++ doing
> > random read/writes... Your disk utilization and disk latency
> > essentially goes through the roof resulting in IO wait and insanely
> > high load averages (we've seen it spike to over 150 on a 8-core Xeon -
> > at which time the application (at a 40 load average already) stops
> > processing requests to prevent the server crashing).
> back in the day (many years ago) when I worked for IBM we had disk
> controllers that would queue and sort pending reads so that the heads
> would seek from low tracks across the disk to high tracks and then back to
> low. This resulted in very low seek _averages_.
> The controller was smart enough to make sure that if a write occurred,
> chronologically later reads got the right data, even if it had not been
> physically written to disk yet.
> Is there such a controller available now?
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe: http://lists.mysql.com/mysql