For the actual question, I agree with the points Johan mentioned. MySQL, to my
knowledge, does not have an option to use raw devices for binary logs. Even if it did,
it would not give Chao the benefit he is seeking. There is indeed a tradeoff between
losing transactions and performance: if the goal is performance, a raw device would be
slower, since every write would have to actually reach the device instead of leaving the
block in the OS cache. The best result is probably achieved with a battery-backed write
cache: the server can be configured not to lose transactions and still complete the work fast.
As for tweaking sync_binlog, I find it difficult to use values other than 0
and 1. With 0, the server simply never fsyncs the binlog, and the number of transactions
lost is at the mercy of the OS cache. With 1, every transaction is on disk before control
returns to the user. I cannot make sense of the documentation's remark that this would
lose 'at most one transaction', and I assume it is a mistake.
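For reference, this is roughly what the two settings I trust look like in my.cnf (a minimal sketch; I show innodb_flush_log_at_trx_commit only because it is the companion InnoDB knob for the same durability/speed tradeoff, not because Chao asked about it):

```ini
[mysqld]
# sync_binlog = 0  -> never fsync the binlog from the server;
#                     durability is left entirely to the OS cache
# sync_binlog = 1  -> fsync the binlog before each commit returns
sync_binlog = 1

# Companion setting for the InnoDB redo log; 1 = fsync at every commit
innodb_flush_log_at_trx_commit = 1
```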
With a value of, say, 10, what I expect to happen is that the server attempts an
fsync every 10 transactions. Say 10 transactions are in the binary log buffer and the
server starts an fsync. What happens to the other transactions that keep arriving? If
they commit in memory and return, the statement that sync_binlog syncs every 10
transactions is false. If they wait, the wait is as long as the wait for the disk
write, and the result is that all transactions end up waiting for disk writes.
If somebody can shed more light on this, I would like to hear it.
On Mar 17, 2011, at 12:14 AM, Johan De Meersman wrote:
> ----- Original Message -----
>> From: "Chao Zhu" <zhuchao@stripped>
>> One Q: Can mysql binlog use raw device on Linux?
> Mmm, good question. Don't really know; but I'm not convinced you'll get huge benefits
> from it, either. Modern filesystems tend to perform pretty close to raw throughput.
> From a just-thinking-it-through point of view, I'd guess no - mysqld never seems to
> open binlogs for append, it always opens a new one. This may have something to do with the
> way replication works; not to mention the question of what'll happen if the log is full -
> it's not a circular buffer.
>> Can we use asynch IO for binlog writing? sequential non-qio fsync is slowing our
> Mmm... Theoretically, yes, you could use an async device (even nfs over UDP if you're
> so inclined) but async means that you're going to be losing some transactions if the
> server crashes.
> You can also tweak sync_binlog
> - basically, this controls how often the binlog fsyncs. Same caveat applies, obviously:
> set this to ten, and you'll have a tenth as many fsyncs, but you risk losing ten
> transactions in a crash.
> If your binlogs are async, then you also risk having slaves out of sync if your
> master crashes.
> Personally, if your binlogs are slowing you down, I would recommend putting them on
> faster storage. Multiple small, fast disks in RAID10 are going to be very fast, or you
> could invest in solid state disks - not all that expensive anymore, really. Maybe even
> just a RAM disk - you'll lose data when the machine crashes (and need an initscript for
> save/load of the data on that disk), but not if just the mysqld crashes.
> Weigh the benefits of each option very, very carefully against the risk of losing
> data before you go through with this.
> Beer with grenadine
> Is like mustard with wine
> She who drinks it is a prude
> He who drinks it is soon an ass
> MySQL General Mailing List