From: Venu Kalyan
Date: April 23 2010 6:11am
Subject: Re: HA_EXTRA_WRITE_CAN_REPLACE flag
Hi Zardosht

Someone can correct me if I am mistaken, but here is my understanding:

- On a plain INSERT (or LOAD DATA ... IGNORE ...), duplicates are always
errored out, provided the storage engine returned HA_ERR_FOUND_DUPP_KEY
- On REPLACE statements (or LOAD DATA ... REPLACE ..., or ON DUPLICATE
KEY), the server first sets the flags HA_EXTRA_WRITE_CAN_REPLACE and
HA_EXTRA_IGNORE_DUP_KEY using ::extra()
  - If the storage engine finds a duplicate match, it should return
HA_ERR_FOUND_DUPP_KEY while keeping track of the errored key position
  - The server then gets the errored key by calling info(HA_STATUS_ERRKEY)
  - Once it has the key, it reads the duplicate row's values by doing an
index read on that key
  - The server now has the old row (returned by the index read) and the
new row (from the REPLACE statement); it calls update_row, passing both
old_row and new_row, and the storage engine can take the appropriate
action (see the sketch after this list)
  - If there is no duplicate, the engine simply commits the change
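
Roughly, the server-side handling looks something like the sketch below.
It is loosely modelled on write_record() in sql_insert.cc; buffer
management, triggers and the HA_DUPLICATE_POS / rnd_pos() path are left
out, so treat it as an illustration rather than the exact server code:

  /* Simplified sketch of the REPLACE path in the SQL layer (not the
     literal server code; error handling and corner cases omitted). */
  int replace_record_sketch(TABLE *table)
  {
    handler *file= table->file;
    int error= file->ha_write_row(table->record[0]);  /* try plain insert */
    if (!error)
      return 0;                                       /* no duplicate */
    if (error != HA_ERR_FOUND_DUPP_KEY)
      return error;                                   /* real failure */

    /* Which unique key clashed? get_dup_key() uses info(HA_STATUS_ERRKEY). */
    uint key_nr= file->get_dup_key(error);

    /* Build a search key from the new row and fetch the existing (old)
       row into record[1] with an index read on the duplicated key. */
    uchar key[MAX_KEY_LENGTH];
    key_copy(key, table->record[0], table->key_info + key_nr, 0);
    error= file->index_read_idx_map(table->record[1], key_nr, key,
                                    HA_WHOLE_KEY, HA_READ_KEY_EXACT);
    if (error)
      return error;

    /* Overwrite the old row with the new one (REPLACE semantics). */
    return file->ha_update_row(table->record[1], table->record[0]);
  }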

If one uses ON DUPLICATE KEY, then the new_row from the second case above
changes, but the same logic applies (the server also will not set
HA_EXTRA_WRITE_CAN_REPLACE, as that is only for REPLACE cases; not sure
why, since the logic is more or less the same).

Also, before committing (or after the index read), the server resets both
flags (HA_EXTRA_NO_IGNORE_DUP_KEY, HA_EXTRA_WRITE_CANNOT_REPLACE).
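
In code, that bracketing looks roughly like this (simplified; the real
calls live in the SQL layer around the insert loop, not in the engine):

  /* Rough sketch of how the SQL layer brackets a REPLACE with ::extra()
     calls; simplified, not the literal server code. */
  table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE); /* duplicates may be overwritten */
  table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);    /* don't abort on dup key */

  /* ... write_row()/update_row() loop for each incoming row ... */

  table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE); /* restore normal semantics */
  table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);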

So the storage engine simply can't do an inline replace in the write_row
call, as that violates the interface semantics. But maybe one day we can
expect a replace_row call, so that the server does not need to make two
different calls to the storage engine, and the engine can do an efficient
inline replace, since it can handle it in the first call itself without
any buffering overhead. (That is why the storage engine should watch for
these flags and take appropriate action during the index traversal, so
that the subsequent call to replace the last-read row is not expensive;
InnoDB keeps a write lock on the row during the write_row call when
HA_EXTRA_WRITE_CAN_REPLACE is passed.)
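
If such a replace_row ever showed up, the handler hook might look
something like the following -- purely hypothetical, not part of today's
handler API:

  /* Purely hypothetical -- NOT an existing handler method. The idea is
     to let the engine overwrite a duplicate in one call and report back
     whether a row was replaced, so the server can still count
     "rows deleted + rows inserted" correctly for REPLACE. */
  virtual int replace_row(uchar *new_data, bool *was_replaced);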

Thanks
Venu Anuganti


On Wed, Apr 21, 2010 at 7:10 PM, Zardosht Kasheff <zardosht@stripped> wrote:
>
> Hello all,
>
> If I understand correctly, the flag HA_EXTRA_WRITE_CAN_REPLACE implies
> that handler::write_row can overwrite existing rows, if a duplicate is
> found. That is what the comment states, and that is what I heard NDB
> does in MySQL Cluster. If this is the case, then I assume that
> handler::write_row returns success even when the row is overwritten.
>
> If that is the case, how is this part of the contract honored, as
> stated by the MySQL 5.1 reference manual
> (http://dev.mysql.com/doc/refman/5.1/en/replace.html):
> "The REPLACE statement returns a count to indicate the number of rows
> affected. This is the sum of the rows deleted and inserted."
>
> Is something in my understanding not correct? Because it seems that if
> handler::write_row silently overwrites an existing row, as
> HA_EXTRA_WRITE_CAN_REPLACE seems to allow, then the counts will not be
> accurate.
>
> Thanks
> -Zardosht
>