One of our community testers has noticed that changing
innodb_log_files_in_group from 2 to 6 gives dbt2 a ~8%
gain with 10 warehouses and 32 connections.
That hints at an optimization opportunity for Falcon's single
serial log, which shows heavy syncWrite contention coming from
SerialLog::flush, SRLUpdateIndex::append, and
SRLUpdateRecords::append.
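
To make the contention point concrete, here is a minimal C++ sketch of
the pattern, assuming hypothetical names (SingleSerialLog, with
syncWrite as a plain mutex); it is not Falcon's actual code, only an
illustration of why every appender and every flush serialize on one lock.

    #include <cstdint>
    #include <mutex>
    #include <vector>

    class SingleSerialLog {
    public:
        // Every connection's appenders funnel through this one lock.
        void append(const std::vector<uint8_t> &record) {
            std::lock_guard<std::mutex> guard(syncWrite);
            buffer.insert(buffer.end(), record.begin(), record.end());
        }
        // Flush takes the same lock, so commits stall concurrent appends.
        void flush() {
            std::lock_guard<std::mutex> guard(syncWrite);
            // write() + fsync() of buffer would happen here, under the lock
            buffer.clear();
        }
    private:
        std::mutex syncWrite;          // the single point of contention
        std::vector<uint8_t> buffer;
    };
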
> Ann Harrison noted:
> One important aspect of the Serial Log is that it is
> *serial* - events are logged in the order they occur.
> It may be possible to maintain that property while
> writing to two different files in parallel, but it
> will certainly complicate managing the log.
> We need to find another way to reduce contention.
It would be good to find another way to reduce contention, but I want
to explore a little further how difficult multiple serial logs would be.
The guts of the changes in one repeatable-read transaction can be
separated from the guts of another transaction as long as the order of
completion is known and recovery applies the changes of those
transactions back in that completion order. Suppose there were one
central, ordered serial log containing commit messages and several
other serial logs, each containing the changes of several transactions
(with each transaction dedicated to a particular log). Then a
transaction's 'guts' log would need to be flushed before the central
log. That is two flushes instead of one!
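
A minimal sketch of that two-log arrangement, assuming hypothetical
GutsLog and CentralCommitLog types (none of this is Falcon code). The
point is the ordering rule: the guts must reach disk before the commit
record does, so recovery never sees a commit whose changes are missing.

    #include <cstdint>
    #include <mutex>
    #include <vector>

    struct GutsLog {          // one of several; a transaction sticks to one
        std::vector<uint8_t> pending;
        void append(const std::vector<uint8_t> &rec) {
            pending.insert(pending.end(), rec.begin(), rec.end());
        }
        void flush() {
            // write() + fsync() of this log's file -- flush #1
            pending.clear();
        }
    };

    struct CommitRecord { uint64_t transactionId; int gutsLogId; };

    struct CentralCommitLog { // the single ordered log of commit messages
        std::mutex sync;
        std::vector<CommitRecord> records;
        void logCommit(uint64_t transactionId, int gutsLogId) {
            std::lock_guard<std::mutex> guard(sync);
            records.push_back({transactionId, gutsLogId});
            // write() + fsync() of the central log -- flush #2
        }
    };

    void commitTransaction(uint64_t txnId, int gutsLogId, GutsLog &guts,
                           CentralCommitLog &central) {
        guts.flush();                        // changes durable first,
        central.logCommit(txnId, gutsLogId); // then the ordering record
    }
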
The other alternative I can think of is to flush the transaction's
commit message together with the rest of its 'guts' to the same log,
adding to it the end-transaction event that Olav is implementing in the
new dependency manager. The recovery thread would then need to juggle
all of these logs so that it can replay the transactions in commit
order.
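
Here is a sketch of what that juggling might look like, under the
assumption that each commit record carries a global commit sequence
number (all names here are invented for illustration). Recovery then
becomes a k-way merge across the logs, replaying whole transactions in
ascending commit order.

    #include <cstdint>
    #include <queue>
    #include <vector>

    struct LoggedTransaction {
        uint64_t commitSequence;   // global order stamped at commit time
        int      logId;            // which serial log holds its changes
    };

    struct ByCommitSequence {      // min-heap on commit sequence
        bool operator()(const LoggedTransaction &a,
                        const LoggedTransaction &b) const {
            return a.commitSequence > b.commitSequence;
        }
    };

    // Assumes logs[i] holds that log's transactions in the order they
    // committed, each tagged with logId == i.
    void replayInCommitOrder(
            const std::vector<std::vector<LoggedTransaction>> &logs) {
        std::priority_queue<LoggedTransaction,
                            std::vector<LoggedTransaction>,
                            ByCommitSequence> heap;
        std::vector<size_t> next(logs.size(), 1);

        for (const auto &log : logs)       // seed with each log's head
            if (!log.empty()) heap.push(log.front());

        while (!heap.empty()) {
            LoggedTransaction txn = heap.top();
            heap.pop();
            // applyChanges(txn) -- replay this transaction's guts here
            size_t cursor = next[txn.logId]++;
            if (cursor < logs[txn.logId].size())
                heap.push(logs[txn.logId][cursor]);
        }
    }
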
However, the changes made by concurrent read-committed transactions
must be replayed in exactly the same order at the physical change
level. I do not know how to separate the guts of read-committed
transactions into several logs.

Ideas and comments welcome.
Falcon Team Lead