Really sorry I didn't give an update here. I finally root-caused this
issue to what I believe is a DBT2 kit bug.
My analysis process was as follows:
a) The likely reason is that Repeatable-Read (RR) needs to read rows
that were deleted by other concurrent transactions, while
Read-Committed (RC) does not. For inserted rows, RC should read more
than RR, not less. So all the extra rows read under RR should be
related to the DELETE query.
b) Among the DBT2 queries, only one DELETE exists, in the DELIVERY transaction.
c) According to the TPC-C specification of the DELIVERY transaction
profile, DELIVERY selects one row from the NEW-ORDER table and deletes it.
However, the DBT2 DELIVERY implementation selects thousands of rows
from the NEW-ORDER table and forwards only the first one to the DELETE.
So under higher concurrency, many of those rows have already been
deleted by other transactions, while one transaction still needs to
read the *old* row versions. Since DBT2 transactions are fairly
long-running (probably hundreds of milliseconds or even seconds), many
such stale rows can accumulate.
d) So according to the specification, the DELIVERY_1 query should be
changed from "SELECT no_o_id FROM new_order WHERE no_w_id = %d AND
no_d_id = %d" to "SELECT no_o_id FROM new_order WHERE no_d_id = %d AND
no_w_id = %d ORDER BY no_o_id ASC LIMIT 1 FOR UPDATE". "FOR UPDATE"
should be used to avoid a possible deadlock.
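As a side-by-side sketch (both queries are the ones quoted above; the table and column names come from the DBT2 schema):

```sql
-- Original DBT2 DELIVERY_1: scans every NEW-ORDER row for the
-- district, even though only the oldest order is actually delivered.
SELECT no_o_id FROM new_order
 WHERE no_w_id = %d AND no_d_id = %d;

-- Per the TPC-C profile: fetch only the oldest undelivered order and
-- lock it, so concurrent DELIVERY transactions cannot race on it.
SELECT no_o_id FROM new_order
 WHERE no_d_id = %d AND no_w_id = %d
 ORDER BY no_o_id ASC
 LIMIT 1
 FOR UPDATE;
```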
e) After this modification, RR and RC read the same number of rows.
f) To ease the benchmarking work, by avoiding the noise of extra
instructions being executed, I filed a DBT2 kit bug at
Hope the dbt2 owner fixes it soon.
Guilhem, I also considered your guess. Going by the meaning of
"innodb_rows_read", it counts "physical" rows, not "record versions".
However, since more record versions exist, RR still needs to execute
more instructions than RC (it has to scan the version chain and return
the proper version).
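The version-chain cost can be pictured with a toy model (this is only an illustrative sketch of MVCC visibility, not InnoDB's actual code; the function and variable names are made up):

```python
# Toy model of MVCC version-chain traversal: each row keeps a chain of
# versions, newest first, tagged with the transaction id that wrote them.

def read_row(versions, snapshot_trx, isolation):
    """Return (visible_value, versions_scanned).

    versions: list of (trx_id, value), newest first, all committed.
    READ-COMMITTED:  the newest committed version is always visible.
    REPEATABLE-READ: only versions written at or before the reader's
                     snapshot are visible; newer ones must be skipped.
    """
    scanned = 0
    for trx_id, value in versions:
        scanned += 1
        if isolation == "READ-COMMITTED":
            return value, scanned          # newest committed version wins
        if trx_id <= snapshot_trx:         # REPEATABLE-READ visibility check
            return value, scanned
    return None, scanned                   # row did not exist at snapshot time

# A row updated twice after our transaction took its snapshot at trx 10.
chain = [(30, "v3"), (20, "v2"), (10, "v1")]

rc_value, rc_work = read_row(chain, 10, "READ-COMMITTED")
rr_value, rr_work = read_row(chain, 10, "REPEATABLE-READ")
print(rc_value, rc_work)  # RC stops after scanning 1 version
print(rr_value, rr_work)  # RR skips 2 too-new versions before returning
```

Both isolation levels return one logical row, but RR touches more versions along the way, which matches more instructions being executed even when "innodb_rows_read" counts the same physical rows.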
On Thu, Jul 30, 2009 at 5:20 AM, Guilhem Bichot<guilhem@stripped> wrote:
> Xuekun Hu wrote, on 14.07.2009 08:02:
>> Hi, Guys
>> I set transaction isolation level to REPEATABLE-READ and
>> READ-COMMITTED separately to test DBT2 10 warehouses on 16 cores
>> system with 32 connections. The dbt2 performance with REPEATABLE-READ
>> was 9% lower than with READ-COMMITTED. By checking status, I found the
>> REPEATABLE-READ mode was reading 64% more "innodb_rows_read" than
>> READ-COMMITTED mode.
> A wild guess: maybe a transaction which wants to read a row starts by
> looking at the latest version of the row; assuming this row is not dirty
> (has been committed by some other transaction):
> - if in READ COMMITTED, the row is good
> - if in REPEATABLE READ, the row may be newer than the moment when the
> transaction started, then the transaction has to throw away this too-new row
> and read the previous version of the row; and so on until it has reached the
> proper point in the past.
> That could explain why REPEATABLE READ reads more rows.
> But it's just a wild guess, I cannot be more helpful, don't know enough of
> Mr. Guilhem Bichot <guilhem@stripped>
> Sun Microsystems / MySQL, Lead Software Engineer
> Bordeaux, France
> www.sun.com / www.mysql.com