
List: Commits
From: ingo  Date: June 7 2006 2:04pm
Subject: bk commit into 4.0 tree (ingo:1.2182) BUG#14400
Below is the list of changes that have just been committed into a local
4.0 repository of mydev. When mydev does a push, these changes will
be propagated to the main repository and, within 24 hours after the
push, to the public repository.
For information on how to access the public repository

  1.2182 06/06/07 16:04:36 ingo@stripped +2 -0
  Bug#14400 - Query joins wrong rows from table which is subject of "concurrent insert"
  It was possible that fetching a record by an exact key value could
  return a record with a different key value. This happened only
  if a concurrent insert added a record with the searched key
  value after the fetching statement locked the table for read.
  The search succeeded on the key value, but the record was
  rejected because it was past the file length that was remembered
  at the start of the fetching statement. In other words, it was
  rejected as being a concurrently inserted record.
  The action to recover from this problem is to fetch the record
  that is pointed at by the next key of the index. (This is
  repeated until a record below the file length is found.)
  However, it was forgotten to check whether this next key had the
  requested key value when an exact key match was requested.
  I have now added this check. If no record with that key value can
  be found below the file length, a "key not found" error is
  returned.

    1.114 06/06/07 16:04:35 ingo@stripped +3 -3
    Bug#14400 - Query joins wrong rows from table which is subject of "concurrent insert"
    Fixed some DBUG_ENTER strings.

    1.15 06/06/07 16:04:35 ingo@stripped +18 -1
    Bug#14400 - Query joins wrong rows from table which is subject of "concurrent insert"
    Added key comparison after selecting a record by the next key
    value after a concurrently inserted record was found.

# This is a BitKeeper patch.  What follows are the unified diffs for the
# set of deltas contained in the patch.  The rest of the patch, the part
# that BitKeeper cares about, is below these diffs.
# User:	ingo
# Host:	chilla.local
# Root:	/home/mydev/mysql-4.0-bug14400

--- 1.14/myisam/mi_rkey.c	2005-09-23 10:15:08 +02:00
+++ 1.15/myisam/mi_rkey.c	2006-06-07 16:04:35 +02:00
@@ -66,6 +66,7 @@ int mi_rkey(MI_INFO *info, byte *buf, in
   if (fast_mi_readinfo(info))
     goto err;
   if (share->concurrent_insert)
@@ -79,19 +80,35 @@ int mi_rkey(MI_INFO *info, byte *buf, in
     while (info->lastpos >= info->state->data_file_length)
+      uint not_used;
      /*
        Skip rows that are inserted by other threads since we got a lock
        Note that this can only happen if we are not searching after an
        exact key, because the keys are sorted according to position
      */
       if  (_mi_search_next(info, keyinfo, info->lastkey,
+      /*
+        If the next key does not have the same key value,
+        but should have, then the result is "key not found".
+      */
+      if ((search_flag == HA_READ_KEY_EXACT) &&
+          (info->lastpos != HA_OFFSET_ERROR) &&
+          _mi_key_cmp(keyinfo->seg, info->lastkey, key, key_len, SEARCH_FIND,
+                      &not_used))
+      {
+        my_errno= HA_ERR_KEY_NOT_FOUND;
+        info->lastpos= HA_OFFSET_ERROR;
+        break;
+      }
   if (share->concurrent_insert)

--- 1.113/sql/	2005-11-03 18:24:00 +01:00
+++ 1.114/sql/	2006-06-07 16:04:35 +02:00
@@ -477,7 +477,7 @@ bool select_send::send_data(List<Item> &
   List_iterator_fast<Item> li(items);
   String *packet= &thd->packet;
-  DBUG_ENTER("send_data");
+  DBUG_ENTER("select_send::send_data");
   /* We may be passing the control from mysqld to the client: release the
@@ -611,7 +611,7 @@ select_export::prepare(List<Item> &list)
 bool select_export::send_data(List<Item> &items)
-  DBUG_ENTER("send_data");
+  DBUG_ENTER("select_export::send_data");
   char buff[MAX_FIELD_WIDTH],null_buff[2],space[MAX_FIELD_WIDTH];
   bool space_inited=0;
   String tmp(buff,sizeof(buff)),*res;
@@ -828,7 +828,7 @@ bool select_dump::send_data(List<Item> &
   String tmp(buff,sizeof(buff)),*res;
   Item *item;
-  DBUG_ENTER("send_data");
+  DBUG_ENTER("select_dump::send_data");
   if (thd->offset_limit)
   {						// using limit offset,count