
List: Commits
From: dlenev
Date: June 30 2006 1:33pm
Subject: bk commit into 5.0 tree (dlenev:1.2203) BUG#20728
Below is the list of changes that have just been committed into a local
5.0 repository of dlenev. When dlenev does a push these changes will
be propagated to the main repository and, within 24 hours after the
push, to the public repository.
For information on how to access the public repository
see http://dev.mysql.com/doc/mysql/en/installing-source-tree.html

ChangeSet
  1.2203 06/06/30 17:32:58 dlenev@stripped +18 -0
  Proposed fix for bug#18437 "Wrong values inserted with a before update
  trigger on NDB table".
  
  The SQL layer was not marking fields used in triggers as such. As a
  result, these fields were not always properly retrieved/stored by the
  handler layer, so one might get wrong values or lose changes in triggers
  for NDB, Federated and possibly InnoDB tables.
  This fix solves the problem by marking fields used in triggers
  appropriately.
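
  The mechanism can be sketched as a toy model (illustrative only: the
  Field::query_id convention matches the 5.0 codebase, but the surrounding
  structs here are invented for the sketch). The handler layer transfers a
  column's value only when the field's query_id equals the running
  statement's query_id, so fields touched by triggers must be stamped
  before the statement executes:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy model of the 5.0-era field mark-up protocol: the handler layer
// retrieves/stores a column only when the field's query_id matches the
// running statement's query_id.
struct Field {
  uint64_t query_id = 0;
};

struct Table {
  std::vector<Field> fields;
  std::vector<size_t> trigger_field_idx;  // fields referenced in trigger bodies
};

// Analogue of Table_triggers_list::mark_fields_used(): stamp every field a
// trigger reads or sets, so the handler will transfer its value.
void mark_fields_used(Table &t, uint64_t current_query_id) {
  for (size_t idx : t.trigger_field_idx)
    t.fields[idx].query_id = current_query_id;
}

bool handler_transfers(const Field &f, uint64_t current_query_id) {
  return f.query_id == current_query_id;
}
```

  Before the fix, trigger fields were simply never stamped, so the handler
  skipped them and triggers saw garbage OLD values or lost NEW assignments.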
  
  Also this patch contains the following cleanup of ha_ndbcluster code:
  
  We no longer rely on reading the LEX::sql_command value in the handler to
  determine whether we can enable the optimization that handles REPLACE
  statements more efficiently by doing replaces directly in the write_row()
  method without reporting an error to the SQL layer.
  Instead, we rely on the SQL layer informing us whether this optimization is
  applicable by calling the handler::extra() method with the
  HA_EXTRA_WRITE_CAN_REPLACE flag.
  As a result, we no longer apply this optimization in cases where it should
  not be used (e.g. if the table has ON DELETE triggers) and do use it in
  some additional cases where it is applicable (e.g. for LOAD DATA REPLACE).
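
  The new handshake can be modelled like this (the flag names mirror
  include/my_base.h in this patch; the handler class and helper are
  stand-ins invented for the sketch, not the real ha_ndbcluster code):

```cpp
#include <cassert>

// The two new flags, mirroring include/my_base.h in this patch.
enum ha_extra_function {
  HA_EXTRA_WRITE_CAN_REPLACE,
  HA_EXTRA_WRITE_CANNOT_REPLACE
};

// Stand-in for a storage engine handler: instead of peeking at
// LEX::sql_command, it simply remembers what the SQL layer told it.
class toy_handler {
  bool write_can_replace = false;  // off by default, as my_base.h documents
 public:
  void extra(ha_extra_function op) {
    if (op == HA_EXTRA_WRITE_CAN_REPLACE)
      write_can_replace = true;
    else if (op == HA_EXTRA_WRITE_CANNOT_REPLACE)
      write_can_replace = false;
  }
  // write_row() may silently overwrite a duplicate-key row only when the
  // SQL layer has granted permission; otherwise it reports a duplicate key.
  bool may_replace_duplicate() const { return write_can_replace; }
};

// The SQL layer grants the optimization only for REPLACE-like statements on
// tables without ON DELETE triggers (those triggers must see the old row).
void prepare_replace(toy_handler &h, bool has_delete_triggers) {
  if (!has_delete_triggers)
    h.extra(HA_EXTRA_WRITE_CAN_REPLACE);
}
```

  Pushing the decision to the SQL layer means the same handler code now works
  for every REPLACE-like statement, including LOAD DATA REPLACE, without the
  handler knowing which statement is running.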
  
  Finally this patch includes fix for bug#20728 "REPLACE does not work
  correctly for NDB table with PK and unique index".
  
  This was yet another problem caused by improper field mark-up. During row
  replacement, fields that were not explicitly used in the REPLACE statement
  were not marked as fields to be saved (updated), so they retained values
  from the old row version. The fix is to mark all table fields as set for
  REPLACE statements. Note that in 5.1 we already solve this problem by
  notifying the handler that it should save values for all fields only when
  a real replacement happens.
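
  A toy model of this bug (illustrative only; the column and row structures
  are invented for the sketch): with only the listed columns marked as set,
  a replacement row silently keeps the old row's values for unlisted
  columns, whereas marking every column makes them revert to defaults:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Column {
  int value;          // current row image
  int default_value;  // column default
};

// Replace an existing row. listed[i] says whether column i appeared in the
// REPLACE statement; mark_all models the fix (mark every column as set).
void replace_row(std::vector<Column> &row, const std::vector<int> &new_vals,
                 const std::vector<bool> &listed, bool mark_all) {
  for (size_t i = 0; i < row.size(); ++i) {
    if (listed[i])
      row[i].value = new_vals[i];           // explicitly supplied value
    else if (mark_all)
      row[i].value = row[i].default_value;  // with the fix: reset to default
    // else: the bug -- the old row's value silently survives the replacement
  }
}
```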

  mysql-test/t/ndb_trigger.test
    1.1 06/06/30 17:32:53 dlenev@stripped +92 -0
    Added test for bug#18437 "Wrong values inserted with a before update trigger
    on NDB table".

  mysql-test/r/ndb_trigger.result
    1.1 06/06/30 17:32:53 dlenev@stripped +119 -0
    Added test for bug#18437 "Wrong values inserted with a before update trigger
    on NDB table".

  sql/sql_update.cc
    1.191 06/06/30 17:32:53 dlenev@stripped +6 -0
    Mark fields which are used by ON UPDATE triggers so the handler will
    retrieve and save values for these fields.

  sql/sql_trigger.h
    1.20 06/06/30 17:32:53 dlenev@stripped +8 -0
    Table_triggers_list:
      Added a mark_fields_used() method which marks fields read/set by
      triggers as such, so handlers will be able to properly retrieve/store
      values in these fields. To implement this method, added a
      'trigger_fields' member, which is an array of lists linking items for
      all fields used in triggers, grouped by event and action time.

  mysql-test/t/ndb_trigger.test
    1.0 06/06/30 17:32:53 dlenev@stripped +0 -0
    BitKeeper file /home/dlenev/mysql-5.0-bg18437-2/mysql-test/t/ndb_trigger.test

  mysql-test/r/ndb_trigger.result
    1.0 06/06/30 17:32:53 dlenev@stripped +0 -0
    BitKeeper file /home/dlenev/mysql-5.0-bg18437-2/mysql-test/r/ndb_trigger.result

  sql/sql_trigger.cc
    1.51 06/06/30 17:32:52 dlenev@stripped +42 -2
    Added a Table_triggers_list::mark_fields_used() method which marks
    fields read/set by triggers as such, so handlers will be able to
    properly retrieve/store values in these fields.

  sql/sql_table.cc
    1.317 06/06/30 17:32:52 dlenev@stripped +6 -13
    Got rid of the handle_duplicates argument in the mysql_alter_table() and
    copy_data_between_tables() functions. These functions were always called
    with handle_duplicates == DUP_ERROR and thus contained dead (and
    probably incorrect) code.

  sql/sql_parse.cc
    1.556 06/06/30 17:32:52 dlenev@stripped +3 -4
    mysql_alter_table() function no longer takes handle_duplicates argument.

  sql/sql_load.cc
    1.96 06/06/30 17:32:52 dlenev@stripped +10 -0
    Explicitly inform the handler that we are doing LOAD DATA REPLACE (using
    the ha_extra() method) in cases when it can promote the insert operation
    done by write_row() to a replace.
    Also, when we do a replace we want to save (replace) values for all
    columns, so we should inform the handler about it.
    Finally, to properly execute LOAD DATA for a table with triggers we
    should mark fields used by ON INSERT triggers as such so the handler can
    properly store values for these fields.

  sql/sql_insert.cc
    1.195 06/06/30 17:32:52 dlenev@stripped +76 -1
    Explicitly inform the handler that we are doing REPLACE (using the
    ha_extra() method) in cases when it can promote the insert operation
    done by write_row() to a replace.
    Also, when we do REPLACE we want to store values for all columns, so we
    should inform the handler about it.
    Finally, we should mark fields used by ON UPDATE/ON DELETE triggers as
    such so the handler can properly retrieve/store values in these fields
    during execution of REPLACE and INSERT ... ON DUPLICATE KEY UPDATE
    statements.

  sql/sql_delete.cc
    1.175 06/06/30 17:32:52 dlenev@stripped +6 -0
    Mark fields which are used by ON DELETE triggers so the handler will
    retrieve values for these fields.

  sql/mysql_priv.h
    1.396 06/06/30 17:32:52 dlenev@stripped +3 -3
    mysql_alter_table() function no longer takes handle_duplicates argument.
    Added declaration of mark_fields_used_by_triggers_for_insert_stmt() function.

  sql/item.cc
    1.225 06/06/30 17:32:52 dlenev@stripped +7 -2
    Item_trigger_field::setup_field():
      Added comment explaining why we don't set Field::query_id in this method.

  sql/ha_ndbcluster.cc
    1.263 06/06/30 17:32:52 dlenev@stripped +15 -12
    We no longer rely on reading the LEX::sql_command value in the handler
    to determine whether we can enable the optimization that handles REPLACE
    statements more efficiently by doing replaces directly in the
    write_row() method without reporting an error to the SQL layer.
    Instead, we rely on the SQL layer informing us whether this optimization
    is applicable by calling the handler::extra() method with the
    HA_EXTRA_WRITE_CAN_REPLACE flag.
    As a result, we no longer apply this optimization in cases where it
    should not be used (e.g. if the table has ON DELETE triggers) and do use
    it in some additional cases where it is applicable (e.g. for LOAD DATA
    REPLACE).

  mysql-test/t/ndb_replace.test
    1.7 06/06/30 17:32:52 dlenev@stripped +37 -0
    Added test for bug #20728 "REPLACE does not work correctly for NDB table
    with PK and unique index".

  mysql-test/t/federated.test
    1.23 06/06/30 17:32:52 dlenev@stripped +42 -0
    Additional test for bug#18437 "Wrong values inserted with a before update
    trigger on NDB table"

  mysql-test/r/ndb_replace.result
    1.6 06/06/30 17:32:52 dlenev@stripped +46 -1
    Added test for bug #20728 "REPLACE does not work correctly for NDB table
    with PK and unique index". Updated wrong results from older test.

  mysql-test/r/federated.result
    1.27 06/06/30 17:32:52 dlenev@stripped +28 -0
    Additional test for bug#18437 "Wrong values inserted with a before update
    trigger on NDB table"

  include/my_base.h
    1.78 06/06/30 17:32:52 dlenev@stripped +10 -1
    Added HA_EXTRA_WRITE_CAN_REPLACE and HA_EXTRA_WRITE_CANNOT_REPLACE, new
    parameters for the ha_extra() method. We use them to inform the handler
    that a write_row() which tries to insert a new row into the table and
    encounters an already existing row with the same primary/unique key can
    replace the old row with the new row instead of reporting an error.

# This is a BitKeeper patch.  What follows are the unified diffs for the
# set of deltas contained in the patch.  The rest of the patch, the part
# that BitKeeper cares about, is below these diffs.
# User:	dlenev
# Host:	jabberwock.site
# Root:	/home/dlenev/mysql-5.0-bg18437-2

--- 1.77/include/my_base.h	2006-03-10 17:04:04 +03:00
+++ 1.78/include/my_base.h	2006-06-30 17:32:52 +04:00
@@ -152,7 +152,16 @@
     other fields intact. When this is off (by default) InnoDB will use memcpy
     to overwrite entire row.
   */
-  HA_EXTRA_KEYREAD_PRESERVE_FIELDS
+  HA_EXTRA_KEYREAD_PRESERVE_FIELDS,
+  /*
+    Informs handler that write_row() which tries to insert new row into the
+    table and encounters some already existing row with same primary/unique
+    key can replace old row with new row instead of reporting error (basically
+    it informs handler that we do REPLACE instead of simple INSERT).
+    Off by default.
+  */
+  HA_EXTRA_WRITE_CAN_REPLACE,
+  HA_EXTRA_WRITE_CANNOT_REPLACE
 };
 
 	/* The following is parameter to ha_panic() */

--- 1.224/sql/item.cc	2006-05-18 22:25:38 +04:00
+++ 1.225/sql/item.cc	2006-06-30 17:32:52 +04:00
@@ -5350,9 +5350,14 @@
 void Item_trigger_field::setup_field(THD *thd, TABLE *table,
                                      GRANT_INFO *table_grant_info)
 {
+  /*
+    There is no sense in marking fields used by trigger with current value
+    of THD::query_id since it is completely unrelated to the THD::query_id
+    value for statements which will invoke trigger. So instead we use
+    Table_triggers_list::mark_fields_used() method which is called during
+    execution of these statements.
+  */
   bool save_set_query_id= thd->set_query_id;
-
-  /* TODO: Think more about consequences of this step. */
   thd->set_query_id= 0;
   /*
     Try to find field by its name and if it will be found

--- 1.395/sql/mysql_priv.h	2006-06-26 21:19:04 +04:00
+++ 1.396/sql/mysql_priv.h	2006-06-30 17:32:52 +04:00
@@ -726,9 +726,7 @@
                        TABLE_LIST *table_list,
                        List<create_field> &fields,
                        List<Key> &keys,
-                       uint order_num, ORDER *order,
-                       enum enum_duplicates handle_duplicates,
-                       bool ignore,
+                       uint order_num, ORDER *order, bool ignore,
                        ALTER_INFO *alter_info, bool do_send_ok);
 bool mysql_recreate_table(THD *thd, TABLE_LIST *table_list, bool do_send_ok);
 bool mysql_create_like_table(THD *thd, TABLE_LIST *table,
@@ -764,6 +762,8 @@
                   bool ignore);
 int check_that_all_fields_are_given_values(THD *thd, TABLE *entry,
                                            TABLE_LIST *table_list);
+void mark_fields_used_by_triggers_for_insert_stmt(THD *thd, TABLE *table,
+                                                  enum_duplicates duplic);
 bool mysql_prepare_delete(THD *thd, TABLE_LIST *table_list, Item **conds);
 bool mysql_delete(THD *thd, TABLE_LIST *table_list, COND *conds,
                   SQL_LIST *order, ha_rows rows, ulonglong options,

--- 1.174/sql/sql_delete.cc	2006-05-26 12:47:46 +04:00
+++ 1.175/sql/sql_delete.cc	2006-06-30 17:32:52 +04:00
@@ -194,6 +194,10 @@
   deleted=0L;
   init_ftfuncs(thd, select_lex, 1);
   thd->proc_info="updating";
+
+  if (table->triggers)
+    table->triggers->mark_fields_used(thd, TRG_EVENT_DELETE);
+
   while (!(error=info.read_record(&info)) && !thd->killed &&
 	 !thd->net.report_error)
   {
@@ -507,6 +511,8 @@
 	transactional_tables= 1;
       else
 	normal_tables= 1;
+      if (tbl->triggers)
+        tbl->triggers->mark_fields_used(thd, TRG_EVENT_DELETE);
     }
     else if ((tab->type != JT_SYSTEM && tab->type != JT_CONST) &&
              walk == delete_tables)

--- 1.194/sql/sql_insert.cc	2006-06-27 03:34:07 +04:00
+++ 1.195/sql/sql_insert.cc	2006-06-30 17:32:52 +04:00
@@ -241,6 +241,33 @@
 }
 
 
+/*
+  Mark fields used by triggers for INSERT-like statement.
+
+  SYNOPSIS
+    mark_fields_used_by_triggers_for_insert_stmt()
+      thd     The current thread
+      table   Table to which insert will happen
+      duplic  Type of duplicate handling for insert which will happen
+
+  NOTE
+    For REPLACE there is no sense in marking particular fields
+    used by ON DELETE trigger as to execute it properly we have
+    to retrieve and store values for all table columns anyway.
+*/
+
+void mark_fields_used_by_triggers_for_insert_stmt(THD *thd, TABLE *table,
+                                                  enum_duplicates duplic)
+{
+  if (table->triggers)
+  {
+    table->triggers->mark_fields_used(thd, TRG_EVENT_INSERT);
+    if (duplic == DUP_UPDATE)
+      table->triggers->mark_fields_used(thd, TRG_EVENT_UPDATE);
+  }
+}
+
+
 bool mysql_insert(THD *thd,TABLE_LIST *table_list,
                   List<Item> &fields,
                   List<List_item> &values_list,
@@ -400,6 +427,17 @@
   thd->proc_info="update";
   if (duplic != DUP_ERROR || ignore)
     table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+  if (duplic == DUP_REPLACE)
+  {
+    if (!table->triggers || !table->triggers->has_delete_triggers())
+      table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
+    /*
+      REPLACE should change values of all columns so we should mark
+      all columns as columns to be set. As nice side effect we will
+      retrieve columns which values are needed for ON DELETE triggers.
+    */
+    table->file->extra(HA_EXTRA_RETRIEVE_ALL_COLS);
+  }
   /*
     let's *try* to start bulk inserts. It won't necessary
     start them as values_list.elements should be greater than
@@ -428,6 +466,8 @@
     error= 1;
   }
 
+  mark_fields_used_by_triggers_for_insert_stmt(thd, table, duplic);
+
   if (table_list->prepare_where(thd, 0, TRUE) ||
       table_list->prepare_check_option(thd))
     error= 1;
@@ -598,6 +638,9 @@
   thd->next_insert_id=0;			// Reset this if wrongly used
   if (duplic != DUP_ERROR || ignore)
     table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+  if (duplic == DUP_REPLACE &&
+      (!table->triggers || !table->triggers->has_delete_triggers()))
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
 
   /* Reset value of LAST_INSERT_ID if no rows where inserted */
   if (!info.copied && thd->insert_id_used)
@@ -1879,7 +1922,8 @@
 {
   int error;
   ulong max_rows;
-  bool using_ignore=0, using_bin_log=mysql_bin_log.is_open();
+  bool using_ignore= 0, using_opt_replace= 0;
+  bool using_bin_log= mysql_bin_log.is_open();
   delayed_row *row;
   DBUG_ENTER("handle_inserts");
 
@@ -1941,6 +1985,13 @@
       table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
       using_ignore=1;
     }
+    if (info.handle_duplicates == DUP_REPLACE &&
+        (!table->triggers ||
+         !table->triggers->has_delete_triggers()))
+    {
+      table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
+      using_opt_replace= 1;
+    }
     thd.clear_error(); // reset error for binlog
     if (write_record(&thd, table, &info))
     {
@@ -1953,6 +2004,11 @@
       using_ignore=0;
       table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
     }
+    if (using_opt_replace)
+    {
+      using_opt_replace= 0;
+      table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
+    }
     if (row->query && row->log_query && using_bin_log)
     {
       Query_log_event qinfo(&thd, row->query, row->query_length, 0, FALSE);
@@ -2198,6 +2254,12 @@
   thd->cuted_fields=0;
   if (info.ignore || info.handle_duplicates != DUP_ERROR)
     table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+  if (info.handle_duplicates == DUP_REPLACE)
+  {
+    if (!table->triggers || !table->triggers->has_delete_triggers())
+      table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
+    table->file->extra(HA_EXTRA_RETRIEVE_ALL_COLS);
+  }
   thd->no_trans_update= 0;
   thd->abort_on_warning= (!info.ignore &&
                           (thd->variables.sql_mode &
@@ -2207,6 +2269,10 @@
          check_that_all_fields_are_given_values(thd, table, table_list)) ||
         table_list->prepare_where(thd, 0, TRUE) ||
         table_list->prepare_check_option(thd));
+
+  if (!res)
+    mark_fields_used_by_triggers_for_insert_stmt(thd, table,
+                                                 info.handle_duplicates);
   DBUG_RETURN(res);
 }
 
@@ -2372,6 +2438,7 @@
 
   error= (!thd->prelocked_mode) ? table->file->end_bulk_insert():0;
   table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+  table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
 
   /*
     We must invalidate the table in the query cache before binlog writing
@@ -2601,6 +2668,12 @@
   thd->cuted_fields=0;
   if (info.ignore || info.handle_duplicates != DUP_ERROR)
     table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+  if (info.handle_duplicates == DUP_REPLACE)
+  {
+    if (!table->triggers || !table->triggers->has_delete_triggers())
+      table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
+    table->file->extra(HA_EXTRA_RETRIEVE_ALL_COLS);
+  }
   if (!thd->prelocked_mode)
     table->file->start_bulk_insert((ha_rows) 0);
   thd->no_trans_update= 0;
@@ -2640,6 +2713,7 @@
   else
   {
     table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
     VOID(pthread_mutex_lock(&LOCK_open));
     mysql_unlock_tables(thd, lock);
     /*
@@ -2673,6 +2747,7 @@
   if (table)
   {
     table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
     enum db_type table_type=table->s->db_type;
     if (!table->s->tmp_table)
     {

--- 1.95/sql/sql_load.cc	2006-05-26 12:47:46 +04:00
+++ 1.96/sql/sql_load.cc	2006-06-30 17:32:52 +04:00
@@ -225,6 +225,8 @@
       DBUG_RETURN(TRUE);
   }
 
+  mark_fields_used_by_triggers_for_insert_stmt(thd, table, handle_duplicates);
+
   uint tot_length=0;
   bool use_blobs= 0, use_vars= 0;
   List_iterator_fast<Item> it(fields_vars);
@@ -357,6 +359,13 @@
     if (ignore ||
 	handle_duplicates == DUP_REPLACE)
       table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+    if (handle_duplicates == DUP_REPLACE)
+    {
+      if (!table->triggers ||
+          !table->triggers->has_delete_triggers())
+        table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
+      table->file->extra(HA_EXTRA_RETRIEVE_ALL_COLS);
+    }
     if (!thd->prelocked_mode)
       table->file->start_bulk_insert((ha_rows) 0);
     table->copy_blobs=1;
@@ -381,6 +390,7 @@
       error= 1;
     }
     table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
     table->next_number_field=0;
   }
   ha_enable_transaction(thd, TRUE);

--- 1.555/sql/sql_parse.cc	2006-06-27 03:34:07 +04:00
+++ 1.556/sql/sql_parse.cc	2006-06-30 17:32:52 +04:00
@@ -3072,8 +3072,7 @@
 			       lex->key_list,
 			       select_lex->order_list.elements,
                                (ORDER *) select_lex->order_list.first,
-			       lex->duplicates, lex->ignore, &lex->alter_info,
-                               1);
+			       lex->ignore, &lex->alter_info, 1);
       }
       break;
     }
@@ -7011,7 +7010,7 @@
   DBUG_RETURN(mysql_alter_table(thd,table_list->db,table_list->table_name,
 				&create_info, table_list,
 				fields, keys, 0, (ORDER*)0,
-				DUP_ERROR, 0, &alter_info, 1));
+                                0, &alter_info, 1));
 }
 
 
@@ -7029,7 +7028,7 @@
   DBUG_RETURN(mysql_alter_table(thd,table_list->db,table_list->table_name,
 				&create_info, table_list,
 				fields, keys, 0, (ORDER*)0,
-				DUP_ERROR, 0, alter_info, 1));
+                                0, alter_info, 1));
 }
 
 

--- 1.316/sql/sql_table.cc	2006-06-27 03:34:07 +04:00
+++ 1.317/sql/sql_table.cc	2006-06-30 17:32:52 +04:00
@@ -35,9 +35,7 @@
 static bool check_if_keyname_exists(const char *name,KEY *start, KEY *end);
 static char *make_unique_key_name(const char *field_name,KEY *start,KEY *end);
 static int copy_data_between_tables(TABLE *from,TABLE *to,
-				    List<create_field> &create,
-				    enum enum_duplicates handle_duplicates,
-                                    bool ignore,
+                                    List<create_field> &create, bool ignore,
 				    uint order_num, ORDER *order,
 				    ha_rows *copied,ha_rows *deleted);
 static bool prepare_blob_field(THD *thd, create_field *sql_field);
@@ -3141,8 +3139,7 @@
                        HA_CREATE_INFO *create_info,
                        TABLE_LIST *table_list,
                        List<create_field> &fields, List<Key> &keys,
-                       uint order_num, ORDER *order,
-                       enum enum_duplicates handle_duplicates, bool ignore,
+                       uint order_num, ORDER *order, bool ignore,
                        ALTER_INFO *alter_info, bool do_send_ok)
 {
   TABLE *table,*new_table=0;
@@ -3737,8 +3734,7 @@
   {
     new_table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;
     new_table->next_number_field=new_table->found_next_number_field;
-    error=copy_data_between_tables(table,new_table,create_list,
-				   handle_duplicates, ignore,
+    error=copy_data_between_tables(table, new_table, create_list, ignore,
 				   order_num, order, &copied, &deleted);
   }
   thd->last_insert_id=next_insert_id;		// Needed for correct log
@@ -3961,7 +3957,6 @@
 static int
 copy_data_between_tables(TABLE *from,TABLE *to,
 			 List<create_field> &create,
-			 enum enum_duplicates handle_duplicates,
                          bool ignore,
 			 uint order_num, ORDER *order,
 			 ha_rows *copied,
@@ -4064,8 +4059,7 @@
   */
   from->file->extra(HA_EXTRA_RETRIEVE_ALL_COLS);
   init_read_record(&info, thd, from, (SQL_SELECT *) 0, 1,1);
-  if (ignore ||
-      handle_duplicates == DUP_REPLACE)
+  if (ignore)
     to->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
   thd->row_count= 0;
   restore_record(to, s->default_values);        // Create empty record
@@ -4092,8 +4086,7 @@
     }
     if ((error=to->file->write_row((byte*) to->record[0])))
     {
-      if ((!ignore &&
-	   handle_duplicates != DUP_REPLACE) ||
+      if (!ignore ||
 	  (error != HA_ERR_FOUND_DUPP_KEY &&
 	   error != HA_ERR_FOUND_DUPP_UNIQUE))
       {
@@ -4171,7 +4164,7 @@
   DBUG_RETURN(mysql_alter_table(thd, NullS, NullS, &create_info,
                                 table_list, lex->create_list,
                                 lex->key_list, 0, (ORDER *) 0,
-                                DUP_ERROR, 0, &lex->alter_info, do_send_ok));
+                                0, &lex->alter_info, do_send_ok));
 }
 
 

--- 1.190/sql/sql_update.cc	2006-06-19 16:50:46 +04:00
+++ 1.191/sql/sql_update.cc	2006-06-30 17:32:53 +04:00
@@ -433,6 +433,9 @@
                                (MODE_STRICT_TRANS_TABLES |
                                 MODE_STRICT_ALL_TABLES)));
 
+  if (table->triggers)
+    table->triggers->mark_fields_used(thd, TRG_EVENT_UPDATE);
+
   while (!(error=info.read_record(&info)) && !thd->killed)
   {
     if (!(select && select->skip_record()))
@@ -754,6 +757,9 @@
         my_error(ER_NON_UPDATABLE_TABLE, MYF(0), tl->alias, "UPDATE");
         DBUG_RETURN(TRUE);
       }
+
+      if (table->triggers)
+        table->triggers->mark_fields_used(thd, TRG_EVENT_UPDATE);
 
       DBUG_PRINT("info",("setting table `%s` for update", tl->alias));
       /*
--- New file ---
+++ mysql-test/r/ndb_trigger.result	06/06/30 17:32:53
drop table if exists t1, t2, t3;
create table t1 (id int primary key, a int not null, b decimal (63,30) default 0) engine=ndb;
create table t2 (op char(1), a int not null, b decimal (63,30));
create table t3 select 1 as i;
create trigger t1_bu before update on t1 for each row
begin
insert into t2 values ("u", old.a, old.b);
set new.b = old.b + 10;
end;//
create trigger t1_bd before delete on t1 for each row
begin
insert into t2 values ("d", old.a, old.b);
end;//
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (3, 3, 3.05), (4, 4, 4.05);
update t1 set a=5 where a != 3;
select * from t1 order by id;
id	a	b
1	5	11.050000000000000000000000000000
2	5	12.050000000000000000000000000000
3	3	3.050000000000000000000000000000
4	5	14.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	1	1.050000000000000000000000000000
u	2	2.050000000000000000000000000000
u	4	4.050000000000000000000000000000
delete from t2;
update t1, t3 set a=6 where a = 5;
select * from t1 order by id;
id	a	b
1	6	21.050000000000000000000000000000
2	6	22.050000000000000000000000000000
3	3	3.050000000000000000000000000000
4	6	24.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	5	11.050000000000000000000000000000
u	5	12.050000000000000000000000000000
u	5	14.050000000000000000000000000000
delete from t2;
delete from t1 where a != 3;
select * from t1 order by id;
id	a	b
3	3	3.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
d	6	21.050000000000000000000000000000
d	6	22.050000000000000000000000000000
d	6	24.050000000000000000000000000000
delete from t2;
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (4, 4, 4.05);
delete t1 from t1, t3 where a != 3;
select * from t1 order by id;
id	a	b
3	3	3.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
d	1	1.050000000000000000000000000000
d	2	2.050000000000000000000000000000
d	4	4.050000000000000000000000000000
delete from t2;
insert into t1 values (4, 4, 4.05);
insert into t1 (id, a) values (4, 1), (3, 1) on duplicate key update a= a + 1;
select * from t1 order by id;
id	a	b
3	4	13.050000000000000000000000000000
4	5	14.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	3	3.050000000000000000000000000000
u	4	4.050000000000000000000000000000
delete from t2;
delete from t3;
insert into t3 values (4), (3);
insert into t1 (id, a) (select i, 1 from t3) on duplicate key update a= a + 1;
select * from t1 order by id;
id	a	b
3	5	23.050000000000000000000000000000
4	6	24.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	4	13.050000000000000000000000000000
u	5	14.050000000000000000000000000000
delete from t2;
replace into t1 (id, a) values (4, 1), (3, 1);
select * from t1 order by id;
id	a	b
3	1	0.000000000000000000000000000000
4	1	0.000000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
d	5	23.050000000000000000000000000000
d	6	24.050000000000000000000000000000
delete from t1;
delete from t2;
insert into t1 values (3, 1, 1.05), (4, 1, 2.05);
replace into t1 (id, a) (select i, 2 from t3);
select * from t1 order by id;
id	a	b
3	2	0.000000000000000000000000000000
4	2	0.000000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
d	1	1.050000000000000000000000000000
d	1	2.050000000000000000000000000000
delete from t1;
delete from t2;
insert into t1 values (3, 1, 1.05), (5, 2, 2.05);
load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (id, a);
select * from t1 order by id;
id	a	b
3	4	0.000000000000000000000000000000
5	6	0.000000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
d	1	1.050000000000000000000000000000
d	2	2.050000000000000000000000000000
drop tables t1, t2, t3;
End of 5.0 tests

--- New file ---
+++ mysql-test/t/ndb_trigger.test	06/06/30 17:32:53
# Tests which involve triggers and NDB storage engine
--source include/have_ndb.inc
--source include/not_embedded.inc

# 
# Test for bug#18437 "Wrong values inserted with a before update
# trigger on NDB table". SQL-layer didn't properly inform handler
# about fields which were read and set in triggers. In some cases 
# this resulted in incorrect (garbage) values of OLD variables and
# lost changes to NEW variables.
# You can find similar tests for ON INSERT triggers in federated.test
# since this engine so far is the only engine in MySQL which cares
# about field mark-up during handler::write_row() operation.
#

--disable_warnings
drop table if exists t1, t2, t3;
--enable_warnings

create table t1 (id int primary key, a int not null, b decimal (63,30) default 0) engine=ndb;
create table t2 (op char(1), a int not null, b decimal (63,30));
create table t3 select 1 as i;
	
delimiter //;
create trigger t1_bu before update on t1 for each row
begin
  insert into t2 values ("u", old.a, old.b);
  set new.b = old.b + 10;
end;//
create trigger t1_bd before delete on t1 for each row
begin
  insert into t2 values ("d", old.a, old.b);
end;//
delimiter ;//
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (3, 3, 3.05), (4, 4, 4.05);

# Check that usual update works as it should
update t1 set a=5 where a != 3;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check that everything works for multi-update
update t1, t3 set a=6 where a = 5;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for delete
delete from t1 where a != 3;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for multi-delete
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (4, 4, 4.05);
delete t1 from t1, t3 where a != 3;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for insert ... on duplicate key update
insert into t1 values (4, 4, 4.05);
insert into t1 (id, a) values (4, 1), (3, 1) on duplicate key update a= a + 1;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for insert ... select ... on duplicate key update
delete from t3;
insert into t3 values (4), (3);
insert into t1 (id, a) (select i, 1 from t3) on duplicate key update a= a + 1;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for replace
replace into t1 (id, a) values (4, 1), (3, 1);
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t1;
delete from t2;
# Check for replace ... select ...
insert into t1 values (3, 1, 1.05), (4, 1, 2.05);
replace into t1 (id, a) (select i, 2 from t3);
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t1;
delete from t2;
# Check for load data replace
insert into t1 values (3, 1, 1.05), (5, 2, 2.05);
load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (id, a);
select * from t1 order by id;
select * from t2 order by op, a, b;

drop tables t1, t2, t3;

--echo End of 5.0 tests


--- 1.50/sql/sql_trigger.cc	2006-06-27 00:47:47 +04:00
+++ 1.51/sql/sql_trigger.cc	2006-06-30 17:32:52 +04:00
@@ -1013,8 +1013,15 @@
         }
 
         /*
-          Let us bind Item_trigger_field objects representing access to fields
-          in old/new versions of row in trigger to Field objects in table being
+          Gather all Item_trigger_field objects representing access to fields
+          in old/new versions of row in trigger into lists containing all such
+          objects for the triggers with same action and timing.
+        */
+        triggers->trigger_fields[lex.trg_chistics.event]
+                                [lex.trg_chistics.action_time]=
+          (Item_trigger_field *)(lex.trg_table_fields.first);
+        /*
+          Also let us bind these objects to Field objects in table being
           opened.
 
           We ignore errors here, because if even something is wrong we still
@@ -1521,6 +1528,39 @@
   }
 
   return err_status;
+}
+
+
+/*
+  Mark fields of subject table which we read/set in its triggers as such.
+
+  SYNOPSIS
+    mark_fields_used()
+      thd    Current thread context
+      event  Type of event triggers for which we are going to inspect
+
+  DESCRIPTION
+    This method marks fields of subject table which are read/set in its
+    triggers as such (by setting Field::query_id equal to THD::query_id)
+    and thus informs handler that values for these fields should be
+    retrieved/stored during execution of statement.
+*/
+
+void Table_triggers_list::mark_fields_used(THD *thd, trg_event_type event)
+{
+  int action_time;
+  Item_trigger_field *trg_field;
+
+  for (action_time= 0; action_time < (int)TRG_ACTION_MAX; action_time++)
+  {
+    for (trg_field= trigger_fields[event][action_time]; trg_field;
+         trg_field= trg_field->next_trg_field)
+    {
+      /* We cannot mark fields which are not present in the table. */
+      if (trg_field->field_idx != (uint)-1)
+        table->field[trg_field->field_idx]->query_id = thd->query_id;
+    }
+  }
 }
 
 

--- 1.19/sql/sql_trigger.h	2006-02-26 16:32:52 +03:00
+++ 1.20/sql/sql_trigger.h	2006-06-30 17:32:53 +04:00
@@ -26,6 +26,11 @@
   /* Triggers as SPs grouped by event, action_time */
   sp_head           *bodies[TRG_EVENT_MAX][TRG_ACTION_MAX];
   /*
+    Heads of the lists linking items for all fields used in triggers
+    grouped by event and action_time.
+  */
+  Item_trigger_field *trigger_fields[TRG_EVENT_MAX][TRG_ACTION_MAX];
+  /*
     Copy of TABLE::Field array with field pointers set to TABLE::record[1]
     buffer instead of TABLE::record[0] (used for OLD values in on UPDATE
     trigger and DELETE trigger when it is called for REPLACE).
@@ -82,6 +87,7 @@
     record1_field(0), table(table_arg)
   {
     bzero((char *)bodies, sizeof(bodies));
+    bzero((char *)trigger_fields, sizeof(trigger_fields));
     bzero((char *)&subject_table_grants, sizeof(subject_table_grants));
   }
   ~Table_triggers_list();
@@ -118,6 +124,8 @@
   }
 
   void set_table(TABLE *new_table);
+
+  void mark_fields_used(THD *thd, trg_event_type event);
 
   friend class Item_trigger_field;
   friend int sp_cache_routines_and_add_tables_for_triggers(THD *thd, LEX *lex,

--- 1.5/mysql-test/r/ndb_replace.result	2006-06-21 11:36:01 +04:00
+++ 1.6/mysql-test/r/ndb_replace.result	2006-06-30 17:32:52 +04:00
@@ -30,7 +30,8 @@
 SELECT * from t1 ORDER BY i;
 i	j	k
 3	1	42
-17	2	24
+17	2	NULL
+DROP TABLE t1;
 CREATE TABLE t2 (a INT(11) NOT NULL,
 b INT(11) NOT NULL,
 c INT(11) NOT NULL,
@@ -52,3 +53,47 @@
 a	b	c	x	y	z	id	i
 1	1	1	b	b	b	5	2
 DROP TABLE t2;
+drop table if exists t1;
+create table t1 (pk int primary key, apk int unique, data int) engine=ndbcluster;
+insert into t1 values (1, 1, 1), (2, 2, 2), (3, 3, 3);
+replace into t1 (pk, apk) values (4, 1), (5, 2);
+select * from t1 order by pk;
+pk	apk	data
+3	3	3
+4	1	NULL
+5	2	NULL
+delete from t1;
+insert into t1 values (1, 1, 1), (2, 2, 2), (3, 3, 3);
+replace into t1 (pk, apk) values (1, 4), (2, 5);
+select * from t1 order by pk;
+pk	apk	data
+1	4	NULL
+2	5	NULL
+3	3	3
+delete from t1;
+insert into t1 values (1, 1, 1), (4, 4, 4), (6, 6, 6);
+load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (pk, apk);
+select * from t1 order by pk;
+pk	apk	data
+1	1	1
+3	4	NULL
+5	6	NULL
+delete from t1;
+insert into t1 values (1, 1, 1), (3, 3, 3), (5, 5, 5);
+load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (pk, apk);
+select * from t1 order by pk;
+pk	apk	data
+1	1	1
+3	4	NULL
+5	6	NULL
+delete from t1;
+insert into t1 values (1, 1, 1), (2, 2, 2), (3, 3, 3);
+replace into t1 (pk, apk) select 4, 1;
+replace into t1 (pk, apk) select 2, 4;
+select * from t1 order by pk;
+pk	apk	data
+2	4	NULL
+3	3	3
+4	1	NULL
+drop table t1;
+End of 5.0 tests.

--- 1.6/mysql-test/t/ndb_replace.test	2006-06-21 11:36:01 +04:00
+++ 1.7/mysql-test/t/ndb_replace.test	2006-06-30 17:32:52 +04:00
@@ -39,6 +39,7 @@
 REPLACE INTO t1 (j,k) VALUES (1,42);
 REPLACE INTO t1 (i,j) VALUES (17,2);
 SELECT * from t1 ORDER BY i;
+DROP TABLE t1;
 
 # bug#19906
 CREATE TABLE t2 (a INT(11) NOT NULL,
@@ -64,4 +65,40 @@
 
 DROP TABLE t2;
 
+#
+# Bug #20728 "REPLACE does not work correctly for NDB table with PK and
+#             unique index"
+#
+--disable_warnings
+drop table if exists t1;
+--enable_warnings
+create table t1 (pk int primary key, apk int unique, data int) engine=ndbcluster;
+# Test for plain replace which updates pk
+insert into t1 values (1, 1, 1), (2, 2, 2), (3, 3, 3);
+replace into t1 (pk, apk) values (4, 1), (5, 2);
+select * from t1 order by pk;
+delete from t1;
+# Another test for plain replace which doesn't touch pk
+insert into t1 values (1, 1, 1), (2, 2, 2), (3, 3, 3);
+replace into t1 (pk, apk) values (1, 4), (2, 5);
+select * from t1 order by pk;
+delete from t1;
+# Test for load data replace which updates pk
+insert into t1 values (1, 1, 1), (4, 4, 4), (6, 6, 6);
+load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (pk, apk);
+select * from t1 order by pk;
+delete from t1;
+# Now test for load data replace which doesn't touch pk
+insert into t1 values (1, 1, 1), (3, 3, 3), (5, 5, 5);
+load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (pk, apk);
+select * from t1 order by pk;
+delete from t1;
+# Finally test for both types of replace ... select
+insert into t1 values (1, 1, 1), (2, 2, 2), (3, 3, 3);
+replace into t1 (pk, apk) select 4, 1;
+replace into t1 (pk, apk) select 2, 4;
+select * from t1 order by pk;
+# Clean-up
+drop table t1;
 
+--echo End of 5.0 tests.

--- 1.262/sql/ha_ndbcluster.cc	2006-06-21 11:50:32 +04:00
+++ 1.263/sql/ha_ndbcluster.cc	2006-06-30 17:32:52 +04:00
@@ -3212,20 +3212,11 @@
     break;
   case HA_EXTRA_IGNORE_DUP_KEY:       /* Dup keys don't rollback everything*/
     DBUG_PRINT("info", ("HA_EXTRA_IGNORE_DUP_KEY"));
-    if (current_thd->lex->sql_command == SQLCOM_REPLACE && !m_has_unique_index)
-    {
-      DBUG_PRINT("info", ("Turning ON use of write instead of insert"));
-      m_use_write= TRUE;
-    } else 
-    {
-      DBUG_PRINT("info", ("Ignoring duplicate key"));
-      m_ignore_dup_key= TRUE;
-    }
+    DBUG_PRINT("info", ("Ignoring duplicate key"));
+    m_ignore_dup_key= TRUE;
     break;
   case HA_EXTRA_NO_IGNORE_DUP_KEY:
     DBUG_PRINT("info", ("HA_EXTRA_NO_IGNORE_DUP_KEY"));
-    DBUG_PRINT("info", ("Turning OFF use of write instead of insert"));
-    m_use_write= FALSE;
     m_ignore_dup_key= FALSE;
     break;
   case HA_EXTRA_RETRIEVE_ALL_COLS:    /* Retrieve all columns, not just those
@@ -3255,7 +3246,19 @@
   case HA_EXTRA_KEYREAD_PRESERVE_FIELDS:
     DBUG_PRINT("info", ("HA_EXTRA_KEYREAD_PRESERVE_FIELDS"));
     break;
-
+  case HA_EXTRA_WRITE_CAN_REPLACE:
+    DBUG_PRINT("info", ("HA_EXTRA_WRITE_CAN_REPLACE"));
+    if (!m_has_unique_index)
+    {
+      DBUG_PRINT("info", ("Turning ON use of write instead of insert"));
+      m_use_write= TRUE;
+    }
+    break;
+  case HA_EXTRA_WRITE_CANNOT_REPLACE:
+    DBUG_PRINT("info", ("HA_EXTRA_WRITE_CANNOT_REPLACE"));
+    DBUG_PRINT("info", ("Turning OFF use of write instead of insert"));
+    m_use_write= FALSE;
+    break;
   }
   
   DBUG_RETURN(0);

--- 1.26/mysql-test/r/federated.result	2006-02-28 13:17:35 +03:00
+++ 1.27/mysql-test/r/federated.result	2006-06-30 17:32:52 +04:00
@@ -1601,6 +1601,34 @@
 5	Torkel	0	0
 DROP TABLE federated.t1;
 DROP TABLE federated.bug_17377_table;
+drop table if exists federated.t1;
+create table federated.t1 (a int, b int, c int);
+drop table if exists federated.t1;
+drop table if exists federated.t2;
+create table federated.t1 (a int,  b int, c int) engine=federated connection='mysql://root@stripped:SLAVE_PORT/federated/t1';
+create trigger federated.t1_bi before insert on federated.t1 for each row set new.c= new.a * new.b;
+create table federated.t2 (a int, b int);
+insert into federated.t2 values (13, 17), (19, 23);
+insert into federated.t1 (a, b) values (1, 2), (3, 5), (7, 11);
+select * from federated.t1;
+a	b	c
+1	2	2
+3	5	15
+7	11	77
+delete from federated.t1;
+insert into federated.t1 (a, b) select * from federated.t2;
+select * from federated.t1;
+a	b	c
+13	17	221
+19	23	437
+delete from federated.t1;
+load data infile '../std_data_ln/loaddata5.dat' into table federated.t1 fields terminated by '' enclosed by '' ignore 1 lines (a, b);
+select * from federated.t1;
+a	b	c
+3	4	12
+5	6	30
+drop tables federated.t1, federated.t2;
+drop table federated.t1;
 DROP TABLE IF EXISTS federated.t1;
 DROP DATABASE IF EXISTS federated;
 DROP TABLE IF EXISTS federated.t1;

--- 1.22/mysql-test/t/federated.test	2006-02-28 13:17:35 +03:00
+++ 1.23/mysql-test/t/federated.test	2006-06-30 17:32:52 +04:00
@@ -1310,4 +1310,46 @@
 DROP TABLE federated.bug_17377_table;
 
 
+# 
+# Additional test for bug#18437 "Wrong values inserted with a before
+# update trigger on NDB table". SQL-layer didn't properly inform
+# handler about fields which were read and set in triggers. In some
+# cases this resulted in incorrect (garbage) values of OLD variables
+# and lost changes to NEW variables.
+# Since for federated engine only operation which is affected by wrong
+# fields mark-up is handler::write_row() this file constains coverage
+# for ON INSERT triggers only. Tests for other types of triggers reside
+# in ndb_trigger.test.
+#
+--disable_warnings
+drop table if exists federated.t1;
+--enable_warnings
+create table federated.t1 (a int, b int, c int);
+connection master;
+--disable_warnings
+drop table if exists federated.t1;
+drop table if exists federated.t2;
+--enable_warnings
+--replace_result $SLAVE_MYPORT SLAVE_PORT
+eval create table federated.t1 (a int,  b int, c int) engine=federated connection='mysql://root@stripped:$SLAVE_MYPORT/federated/t1';
+create trigger federated.t1_bi before insert on federated.t1 for each row set new.c= new.a * new.b;
+create table federated.t2 (a int, b int);
+insert into federated.t2 values (13, 17), (19, 23);
+# Each of three statements should correctly set values for all three fields
+# insert
+insert into federated.t1 (a, b) values (1, 2), (3, 5), (7, 11);
+select * from federated.t1;
+delete from federated.t1;
+# insert ... select
+insert into federated.t1 (a, b) select * from federated.t2;
+select * from federated.t1;
+delete from federated.t1;
+# load
+load data infile '../std_data_ln/loaddata5.dat' into table federated.t1 fields terminated by '' enclosed by '' ignore 1 lines (a, b);
+select * from federated.t1;
+drop tables federated.t1, federated.t2;
+
+connection slave;
+drop table federated.t1;
+
 source include/federated_cleanup.inc;