List: Commits
From: dlenev  Date: June 6 2006 1:16pm
Subject: bk commit into 5.0 tree (dlenev:1.2145) BUG#18437
Below is the list of changes that have just been committed into a local
5.0 repository of dlenev. When dlenev does a push these changes will
be propagated to the main repository and, within 24 hours after the
push, to the public repository.
For information on how to access the public repository
see http://dev.mysql.com/doc/mysql/en/installing-source-tree.html

ChangeSet
  1.2145 06/06/06 17:16:08 dlenev@stripped +15 -0
  Proposed fix for bug#18437 "Wrong values inserted with a before update
  trigger on NDB table".
  
   The SQL layer was not marking fields which were used in triggers as such.
   As a result these fields were not always properly retrieved/stored by the
   handler layer, so one might get wrong values or lose changes in triggers
   for NDB (and possibly InnoDB) tables.
   This fix solves the problem by marking fields used in triggers
   appropriately.
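
   (Illustration, not part of the patch: a minimal standalone C++ sketch of
   the marking idea, using simplified stand-in types rather than the real
   Field/TABLE/handler classes. A column-pruning engine transfers only the
   columns whose query_id matches the running statement, so columns that are
   referenced only from trigger bodies have to be tagged explicitly before
   rows are read or written.)

     // Standalone sketch, NOT server code: simplified stand-ins for the
     // field/table/trigger structures, showing why fields touched by
     // triggers must be marked before the engine reads or writes rows.
     #include <cstdio>
     #include <vector>

     struct Field {
       const char *name;
       long query_id;   // engine touches the column only if this matches
       int value;
     };

     struct Table {
       std::vector<Field> fields;
       std::vector<int> trigger_field_idx;  // fields read/set by triggers

       // Rough counterpart of Table_triggers_list::mark_used_fields():
       // tag every field referenced by triggers with the current query id.
       void mark_used_fields(long query_id) {
         for (int idx : trigger_field_idx)
           fields[idx].query_id = query_id;
       }
     };

     // A column-pruning engine (NDB-like): it only transfers columns whose
     // query_id matches the running statement.
     void engine_fetch_row(const Table &t, long query_id) {
       for (const Field &f : t.fields)
         if (f.query_id == query_id)
           std::printf("engine reads %s\n", f.name);
         else
           std::printf("engine skips %s (trigger sees garbage)\n", f.name);
     }

     int main() {
       Table t;
       t.fields = {{"id", 0, 0}, {"a", 0, 0}, {"b", 0, 0}};
       t.trigger_field_idx = {2};        // BEFORE UPDATE trigger uses OLD.b

       long query_id = 42;
       t.fields[0].query_id = query_id;  // columns named in the UPDATE
       t.fields[1].query_id = query_id;

       std::printf("-- without marking trigger fields --\n");
       engine_fetch_row(t, query_id);    // column b is skipped: bug#18437

       t.mark_used_fields(query_id);     // the fix
       std::printf("-- after mark_used_fields() --\n");
       engine_fetch_row(t, query_id);
       return 0;
     }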
  
  Also this patch contains the following cleanup of ha_ndbcluster code:
  
   We no longer rely on reading the LEX::sql_command value in the handler in
   order to determine if we can enable the optimization which allows us to
   handle REPLACE statements more efficiently by doing replaces directly in
   the write_row() method without reporting an error to the SQL layer.
   Instead we rely on the SQL layer informing us whether this optimization is
   applicable by calling the handler::extra() method with the
   HA_EXTRA_WRITE_CAN_REPLACE flag.
   As a result we no longer apply this optimization in cases when it should
   not be used (e.g. if we have ON DELETE triggers on the table) and do use
   it in some additional cases when it is applicable (e.g. for LOAD DATA
   REPLACE).
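
   (Illustration, not part of the patch: a standalone sketch of the new
   handshake with simplified stand-in types - the real enum values live in
   my_base.h and the real method is handler::extra(). The SQL layer, which
   knows about triggers, decides whether the optimization is safe, signals
   the engine, and always resets the flag when the statement is done.)

     // Standalone sketch, NOT the real handler API: models the
     // HA_EXTRA_WRITE_CAN_REPLACE / HA_EXTRA_WRITE_CANNOT_REPLACE handshake.
     #include <cstdio>

     enum ha_extra_function {
       HA_EXTRA_WRITE_CAN_REPLACE,
       HA_EXTRA_WRITE_CANNOT_REPLACE
     };

     // Simplified NDB-like handler: instead of peeking at LEX::sql_command
     // it now keeps a flag that is switched through extra().
     struct Handler {
       bool m_use_write = false;   // write (overwrite) instead of insert
       void extra(ha_extra_function op) {
         m_use_write = (op == HA_EXTRA_WRITE_CAN_REPLACE);
       }
       void write_row(int key) {
         std::printf("row %d: %s\n", key,
                     m_use_write ? "replaced directly inside the engine"
                                 : "inserted; duplicate key goes to SQL layer");
       }
     };

     // SQL-layer side: it knows about triggers, so it decides whether the
     // optimization is safe (REPLACE-like statement, no ON UPDATE/ON DELETE
     // triggers that must see the old row) and signals the handler.
     void run_replace(Handler &h, bool has_update_or_delete_triggers) {
       if (!has_update_or_delete_triggers)
         h.extra(HA_EXTRA_WRITE_CAN_REPLACE);
       h.write_row(1);
       h.extra(HA_EXTRA_WRITE_CANNOT_REPLACE);  // always reset afterwards
     }

     int main() {
       Handler h;
       run_replace(h, false);  // optimized path
       run_replace(h, true);   // triggers present: keep plain insert semantics
       return 0;
     }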
  
  Finally, this patch includes a fix for bug#19634 "Re-execution of multi-delete
  which involve trigger/stored function crashes server", which was exposed by
  the tests for bug#18437.
    
   The problem was caused by yet another incorrect usage of the
   check_table_access() routine (which assumes that the table list it gets as
   an argument corresponds to the LEX::query_tables_own_last value). We solve
   this problem by juggling the LEX::query_tables_own_last value when we call
   check_table_access() for LEX::auxilliary_table_list (a better solution is
   too intrusive and should be done in 5.1).
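
   (Illustration, not part of the patch: a standalone model of that juggling
   with stand-in LEX/TABLE_LIST types and a crude stand-in for the checker's
   invariant. The saved pointer is cleared before the auxiliary-list check
   and restored on both the success and the failure path.)

     // Standalone sketch, NOT sql_parse.cc: stand-in LEX/TABLE_LIST types
     // showing the save/clear/restore pattern around check_table_access()
     // for a table list that is detached from LEX::query_tables.
     #include <cassert>

     struct TABLE_LIST { TABLE_LIST *next_global = nullptr; };

     struct LEX {
       TABLE_LIST *query_tables = nullptr;            // main statement tables
       TABLE_LIST **query_tables_own_last = nullptr;  // end of "own" tables
       TABLE_LIST *auxiliary_tables = nullptr;        // e.g. DELETE targets
     };

     // The checker assumes the list it receives is consistent with
     // query_tables_own_last (crude stand-in for the real invariant).
     bool check_table_access(LEX &lex, TABLE_LIST *tables) {
       assert(lex.query_tables_own_last == nullptr ||
              tables == lex.query_tables);
       (void)tables;
       return false;   // false == access granted
     }

     bool multi_delete_precheck(LEX &lex) {
       if (check_table_access(lex, lex.query_tables))
         return true;

       // The auxiliary list is not part of query_tables, so temporarily
       // clear query_tables_own_last and restore it on every exit path.
       TABLE_LIST **save = lex.query_tables_own_last;
       lex.query_tables_own_last = nullptr;
       bool denied = check_table_access(lex, lex.auxiliary_tables);
       lex.query_tables_own_last = save;
       return denied;
     }

     int main() {
       LEX lex;
       TABLE_LIST main_tbl, aux_tbl;
       lex.query_tables = &main_tbl;
       lex.query_tables_own_last = &main_tbl.next_global;
       lex.auxiliary_tables = &aux_tbl;
       return multi_delete_precheck(lex) ? 1 : 0;
     }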

  mysql-test/t/ndb_trigger.test
    1.1 06/06/06 17:16:02 dlenev@stripped +89 -0
    Added test for bug#18437 "Wrong values inserted with a before update trigger
    on NDB table".

  mysql-test/r/ndb_trigger.result
    1.1 06/06/06 17:16:02 dlenev@stripped +117 -0
    Added test for bug#18437 "Wrong values inserted with a before update trigger
    on NDB table".

  sql/sql_update.cc
    1.188 06/06/06 17:16:02 dlenev@stripped +12 -1
    Mark fields which are used by ON UPDATE triggers so the handler will
    retrieve and save values for these fields.

  sql/sql_trigger.h
    1.20 06/06/06 17:16:02 dlenev@stripped +16 -0
    Table_triggers_list:
      Added the has_update_or_delete_triggers() method, which is used to catch
      cases when we cannot use the HA_EXTRA_WRITE_CAN_REPLACE optimization.
      Added the mark_used_fields() method, which is used to mark fields
      read/set by triggers as such so that handlers will be able to properly
      retrieve/store values in these fields.

  sql/sql_trigger.cc
    1.50 06/06/06 17:16:02 dlenev@stripped +35 -0
    Added the Table_triggers_list::mark_used_fields() method, which is used to
    mark fields read/set by triggers as such so that handlers will be able to
    properly retrieve/store values in these fields.

  sql/sql_table.cc
    1.304 06/06/06 17:16:02 dlenev@stripped +5 -0
    Added question for reviewer (to be removed in final version).

  sql/sql_parse.cc
    1.538 06/06/06 17:16:02 dlenev@stripped +36 -4
    To safely call check_table_access() for LEX::auxilliary_table_list we have
    to juggle the LEX::query_tables_own_last value.

  sql/sql_load.cc
    1.95 06/06/06 17:16:02 dlenev@stripped +12 -0
    Explicitly inform the handler that we are doing LOAD DATA REPLACE (using
    the handler::extra() method) in cases when it can promote the insert
    operation done by write_row() to a replace.
    Mark fields used by ON UPDATE/ON DELETE triggers as such so the handler
    can properly retrieve/store values in these fields during execution of
    LOAD DATA REPLACE.

  sql/sql_insert.cc
    1.186 06/06/06 17:16:02 dlenev@stripped +51 -11
    Explicitly inform the handler that we are doing REPLACE (using the
    handler::extra() method) in cases when it can promote the insert operation
    done by write_row() to a replace.
    Mark fields used by ON UPDATE/ON DELETE triggers as such so the handler
    can properly retrieve/store values in these fields during execution of
    REPLACE and INSERT ... ON DUPLICATE KEY UPDATE statements.

  sql/sql_delete.cc
    1.174 06/06/06 17:16:02 dlenev@stripped +9 -0
    Mark fields which are used by ON DELETE triggers so the handler will
    retrieve values for these fields.

  sql/item.cc
    1.214 06/06/06 17:16:02 dlenev@stripped +7 -2
    Item_trigger_field::setup_field():
      Added comment explaining why we don't set Field::query_id in this method.

  mysql-test/t/ndb_trigger.test
    1.0 06/06/06 17:16:02 dlenev@stripped +0 -0
    BitKeeper file /home/dlenev/mysql-5.0-bg18437/mysql-test/t/ndb_trigger.test

  mysql-test/r/ndb_trigger.result
    1.0 06/06/06 17:16:02 dlenev@stripped +0 -0
    BitKeeper file /home/dlenev/mysql-5.0-bg18437/mysql-test/r/ndb_trigger.result

  sql/ha_ndbcluster.cc
    1.241 06/06/06 17:16:01 dlenev@stripped +15 -12
    We no longer rely on reading the LEX::sql_command value in the handler in
    order to determine if we can enable the optimization which allows us to
    handle REPLACE statements more efficiently by doing replaces directly in
    the write_row() method without reporting an error to the SQL layer.
    Instead we rely on the SQL layer informing us whether this optimization is
    applicable by calling the handler::extra() method with the
    HA_EXTRA_WRITE_CAN_REPLACE flag.
    As a result we no longer apply this optimization in cases when it should
    not be used (e.g. if we have ON DELETE triggers on the table) and do use
    it in some additional cases when it is applicable (e.g. for LOAD DATA
    REPLACE).

  mysql-test/t/sp-prelocking.test
    1.5 06/06/06 17:16:01 dlenev@stripped +31 -0
    Added test for bug#19634 "Re-execution of multi-delete which involve trigger/
    stored function crashes server".

  mysql-test/r/sp-prelocking.result
    1.5 06/06/06 17:16:01 dlenev@stripped +18 -0
    Added test for bug#19634 "Re-execution of multi-delete which involve trigger/
    stored function crashes server".

  include/my_base.h
    1.78 06/06/06 17:16:01 dlenev@stripped +10 -1
    Added HA_EXTRA_WRITE_CAN_REPLACE and HA_EXTRA_WRITE_CANNOT_REPLACE - new
    parameters for the handler::extra() method. We use them to inform the
    handler that a write_row() call which tries to insert a new row into the
    table and encounters an already existing row with the same primary/unique
    key can replace the old row with the new row instead of reporting an
    error.

# This is a BitKeeper patch.  What follows are the unified diffs for the
# set of deltas contained in the patch.  The rest of the patch, the part
# that BitKeeper cares about, is below these diffs.
# User:	dlenev
# Host:	jabberwock.site
# Root:	/home/dlenev/mysql-5.0-bg18437

--- 1.77/include/my_base.h	2006-03-10 17:04:04 +03:00
+++ 1.78/include/my_base.h	2006-06-06 17:16:01 +04:00
@@ -152,7 +152,16 @@
     other fields intact. When this is off (by default) InnoDB will use memcpy
     to overwrite entire row.
   */
-  HA_EXTRA_KEYREAD_PRESERVE_FIELDS
+  HA_EXTRA_KEYREAD_PRESERVE_FIELDS,
+  /*
+    Informs handler that write_row() which tries to insert new row into the
+    table and encounters some already existing row with same primary/unique
+    key can replace old row with new row instead of reporting error (basically
+    it informs handler that we do REPLACE instead of simple INSERT).
+    Off by default.
+  */
+  HA_EXTRA_WRITE_CAN_REPLACE,
+  HA_EXTRA_WRITE_CANNOT_REPLACE
 };
 
 	/* The following is parameter to ha_panic() */

--- 1.213/sql/item.cc	2006-04-04 08:07:19 +04:00
+++ 1.214/sql/item.cc	2006-06-06 17:16:02 +04:00
@@ -5331,9 +5331,14 @@
 void Item_trigger_field::setup_field(THD *thd, TABLE *table,
                                      GRANT_INFO *table_grant_info)
 {
+  /*
+    There is no sense in marking fields used by trigger with current value
+    of THD::query_id since it is completely unrelated to the THD::query_id
+    value for statements which will invoke trigger. So instead we use
+    Table_triggers_list::mark_used_fields() method which is called during
+    execution of these statements.
+  */
   bool save_set_query_id= thd->set_query_id;
-
-  /* TODO: Think more about consequences of this step. */
   thd->set_query_id= 0;
   /*
     Try to find field by its name and if it will be found

--- 1.173/sql/sql_delete.cc	2006-04-06 14:18:57 +04:00
+++ 1.174/sql/sql_delete.cc	2006-06-06 17:16:02 +04:00
@@ -91,6 +91,9 @@
     /* Handler didn't support fast delete; Delete rows one by one */
   }
 
+  if (table->triggers)
+    table->triggers->mark_used_fields(thd, TRG_EVENT_DELETE);
+
   table->used_keys.clear_all();
   table->quick_keys.clear_all();		// Can't use 'only index'
   select=make_select(table, 0, 0, conds, 0, &error);
@@ -482,8 +485,14 @@
     DBUG_RETURN(1);
 
   table_map tables_to_delete_from=0;
+
   for (walk= delete_tables; walk; walk= walk->next_local)
+  {
     tables_to_delete_from|= walk->table->map;
+    /* QQ: may be this code should be moved to another place ? */
+    if (walk->table->triggers)
+      walk->table->triggers->mark_used_fields(thd, TRG_EVENT_DELETE);
+  }
 
   walk= delete_tables;
   delete_while_scanning= 1;

--- 1.185/sql/sql_insert.cc	2006-03-29 14:54:59 +04:00
+++ 1.186/sql/sql_insert.cc	2006-06-06 17:16:02 +04:00
@@ -401,6 +401,9 @@
   thd->proc_info="update";
   if (duplic != DUP_ERROR || ignore)
     table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+  if (duplic == DUP_REPLACE &&
+      (!table->triggers || !table->triggers->has_update_or_delete_triggers()))
+    table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
   /*
     let's *try* to start bulk inserts. It won't necessary
     start them as values_list.elements should be greater than
@@ -429,6 +432,23 @@
     error= 1;
   }
 
+  if (table->triggers)
+  {
+    /*
+      QQ: Should we mark fields used by on INSERT triggers here ?
+      In 5.0 it is not neccessary but Monty says that it may be
+      neccessary in future.
+    */
+    if (duplic == DUP_UPDATE)
+      table->triggers->mark_used_fields(thd, TRG_EVENT_UPDATE);
+    else if (duplic == DUP_REPLACE)
+    {
+      /* We mark fields used in both optimized and non-optimized cases */
+      table->triggers->mark_used_fields(thd, TRG_EVENT_UPDATE);
+      table->triggers->mark_used_fields(thd, TRG_EVENT_DELETE);
+    }
+  }
+
   if (table_list->prepare_where(thd, 0, TRUE) ||
       table_list->prepare_check_option(thd))
     error= 1;
@@ -597,8 +617,8 @@
   table->next_number_field=0;
   thd->count_cuted_fields= CHECK_FIELD_IGNORE;
   thd->next_insert_id=0;			// Reset this if wrongly used
-  if (duplic != DUP_ERROR || ignore)
-    table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+  table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+  table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
 
   /* Reset value of LAST_INSERT_ID if no rows where inserted */
   if (!info.copied && thd->insert_id_used)
@@ -1901,7 +1921,7 @@
 {
   int error;
   ulong max_rows;
-  bool using_ignore=0, using_bin_log=mysql_bin_log.is_open();
+  bool using_bin_log= mysql_bin_log.is_open();
   delayed_row *row;
   DBUG_ENTER("handle_inserts");
 
@@ -1951,10 +1971,11 @@
     info.handle_duplicates= row->dup;
     if (info.ignore ||
 	info.handle_duplicates != DUP_ERROR)
-    {
       table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
-      using_ignore=1;
-    }
+    if (info.handle_duplicates == DUP_REPLACE &&
+        (!table->triggers ||
+         !table->triggers->has_update_or_delete_triggers()))
+      table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
     thd.clear_error(); // reset error for binlog
     if (write_record(&thd, table, &info))
     {
@@ -1962,11 +1983,8 @@
       thread_safe_increment(delayed_insert_errors,&LOCK_delayed_status);
       row->log_query = 0;
     }
-    if (using_ignore)
-    {
-      using_ignore=0;
-      table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
-    }
+    table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
     if (row->query && row->log_query && using_bin_log)
     {
       Query_log_event qinfo(&thd, row->query, row->query_length, 0, FALSE);
@@ -2212,6 +2230,9 @@
   thd->cuted_fields=0;
   if (info.ignore || info.handle_duplicates != DUP_ERROR)
     table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+  if (info.handle_duplicates == DUP_REPLACE &&
+      (!table->triggers || !table->triggers->has_update_or_delete_triggers()))
+    table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
   thd->no_trans_update= 0;
   thd->abort_on_warning= (!info.ignore &&
                           (thd->variables.sql_mode &
@@ -2221,6 +2242,19 @@
          check_that_all_fields_are_given_values(thd, table, table_list)) ||
         table_list->prepare_where(thd, 0, TRUE) ||
         table_list->prepare_check_option(thd));
+
+  if (!res && table->triggers)
+  {
+    /* QQ: Should we mark fields used by ON INSERT triggers here ? */
+    if (info.handle_duplicates == DUP_UPDATE)
+      table->triggers->mark_used_fields(thd, TRG_EVENT_UPDATE);
+    else if (info.handle_duplicates == DUP_REPLACE)
+    {
+     table->triggers->mark_used_fields(thd, TRG_EVENT_UPDATE);
+     table->triggers->mark_used_fields(thd, TRG_EVENT_DELETE);
+    }
+  }
+
   DBUG_RETURN(res);
 }
 
@@ -2386,6 +2420,7 @@
 
   error= (!thd->prelocked_mode) ? table->file->end_bulk_insert():0;
   table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+  table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
 
   /*
     We must invalidate the table in the query cache before binlog writing
@@ -2467,6 +2502,9 @@
   thd->cuted_fields=0;
   if (info.ignore || info.handle_duplicates != DUP_ERROR)
     table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+  if (info.handle_duplicates == DUP_REPLACE &&
+      (!table->triggers || !table->triggers->has_update_or_delete_triggers()))
+    table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
   if (!thd->prelocked_mode)
     table->file->start_bulk_insert((ha_rows) 0);
   thd->no_trans_update= 0;
@@ -2506,6 +2544,7 @@
   else
   {
     table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
     VOID(pthread_mutex_lock(&LOCK_open));
     mysql_unlock_tables(thd, lock);
     /*
@@ -2539,6 +2578,7 @@
   if (table)
   {
     table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
     enum db_type table_type=table->s->db_type;
     if (!table->s->tmp_table)
     {

--- 1.94/sql/sql_load.cc	2006-03-29 20:17:14 +04:00
+++ 1.95/sql/sql_load.cc	2006-06-06 17:16:02 +04:00
@@ -224,6 +224,13 @@
       DBUG_RETURN(TRUE);
   }
 
+  if (table->triggers && handle_duplicates == DUP_REPLACE)
+  {
+    /* QQ: should we mark fields set by on INSERT triggers here ? */
+    table->triggers->mark_used_fields(thd, TRG_EVENT_UPDATE);
+    table->triggers->mark_used_fields(thd, TRG_EVENT_DELETE);
+  }
+
   uint tot_length=0;
   bool use_blobs= 0, use_vars= 0;
   List_iterator_fast<Item> it(fields_vars);
@@ -356,6 +363,10 @@
     if (ignore ||
 	handle_duplicates == DUP_REPLACE)
       table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+    if (handle_duplicates == DUP_REPLACE &&
+        (!table->triggers ||
+         !table->triggers->has_update_or_delete_triggers()))
+      table->file->extra(HA_EXTRA_WRITE_CAN_REPLACE);
     if (!thd->prelocked_mode)
       table->file->start_bulk_insert((ha_rows) 0);
     table->copy_blobs=1;
@@ -380,6 +391,7 @@
       error= 1;
     }
     table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
+    table->file->extra(HA_EXTRA_WRITE_CANNOT_REPLACE);
     table->next_number_field=0;
   }
   ha_enable_transaction(thd, TRUE);

--- 1.537/sql/sql_parse.cc	2006-04-06 14:18:57 +04:00
+++ 1.538/sql/sql_parse.cc	2006-06-06 17:16:02 +04:00
@@ -5154,8 +5154,26 @@
 
 
 /*
-  Check the privilege for all used tables.  Table privileges are cached
-  in the table list for GRANT checking
+  Check the privilege for all used tables.
+
+  SYNOPSYS
+    check_table_access()
+      thd          Thread context
+      want_access  Privileges requested
+      tables       List of tables to be checked
+      no_errors    FALSE/TRUE - report/don't report error to
+                   the client (using my_error() call).
+
+  NOTES
+    Table privileges are cached in the table list for GRANT checking.
+    This functions assumes that table list used and
+    thd->lex->query_tables_own_last value correspond to each other
+    (the latter should be either 0 or point to next_global member
+    of one of elements of this table list).
+
+  RETURN VALUE
+    FALSE - OK
+    TRUE  - Access denied
 */
 
 bool
@@ -6998,14 +7016,28 @@
   SELECT_LEX *select_lex= &thd->lex->select_lex;
   TABLE_LIST *aux_tables=
     (TABLE_LIST *)thd->lex->auxilliary_table_list.first;
+  TABLE_LIST **save_query_tables_own_last= thd->lex->query_tables_own_last;
   DBUG_ENTER("multi_delete_precheck");
 
   /* sql_yacc guarantees that tables and aux_tables are not zero */
   DBUG_ASSERT(aux_tables != 0);
   if (check_db_used(thd, tables) || check_db_used(thd,aux_tables) ||
-      check_table_access(thd,SELECT_ACL, tables,0) ||
-      check_table_access(thd,DELETE_ACL, aux_tables,0))
+      check_table_access(thd, SELECT_ACL, tables, 0))
+    DBUG_RETURN(TRUE);
+
+  /*
+    Since aux_tables list is not part of LEX::query_tables list we
+    have to juggle with LEX::query_tables_own_last value to be able
+    call check_table_access() safely.
+  */
+  thd->lex->query_tables_own_last= 0;
+  if (check_table_access(thd, DELETE_ACL, aux_tables, 0))
+  {
+    thd->lex->query_tables_own_last= save_query_tables_own_last;
     DBUG_RETURN(TRUE);
+  }
+  thd->lex->query_tables_own_last= save_query_tables_own_last;
+
   if ((thd->options & OPTION_SAFE_UPDATES) && !select_lex->where)
   {
     my_message(ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE,

--- 1.303/sql/sql_table.cc	2006-03-30 17:14:51 +04:00
+++ 1.304/sql/sql_table.cc	2006-06-06 17:16:02 +04:00
@@ -4087,6 +4087,11 @@
   */
   from->file->extra(HA_EXTRA_RETRIEVE_ALL_COLS);
   init_read_record(&info, thd, from, (SQL_SELECT *) 0, 1,1);
+  /*
+    QQ: Should we enable HA_EXTRA_WRITE_CAN_REPLACE optimization here or
+        simply put assert that will catch cases when copy_data_between_tables()
+        is supposed to do REPLACE (it seems that there are no such cases now) ?
+  */
   if (ignore ||
       handle_duplicates == DUP_REPLACE)
     to->file->extra(HA_EXTRA_IGNORE_DUP_KEY);

--- 1.187/sql/sql_update.cc	2006-04-05 13:27:49 +04:00
+++ 1.188/sql/sql_update.cc	2006-06-06 17:16:02 +04:00
@@ -228,6 +228,9 @@
     DBUG_RETURN(1);				/* purecov: inspected */
   }
 
+  if (table->triggers)
+    table->triggers->mark_used_fields(thd, TRG_EVENT_UPDATE);
+
   if (conds)
   {
     Item::cond_result cond_value;
@@ -268,7 +271,12 @@
   }
   init_ftfuncs(thd, select_lex, 1);
   /* Check if we are modifying a key that we are used to search with */
-  
+
+  /*
+    QQ: May be should take into account fields which we set in triggers
+        when we evaluate value of 'used_key_is_modified'?
+  */
+
   if (select && select->quick)
   {
     used_index= select->quick->index;
@@ -978,6 +986,9 @@
       table->no_keyread=1;
       table->used_keys.clear_all();
       table->pos_in_table_list= tl;
+
+      if (table->triggers)
+        table->triggers->mark_used_fields(thd, TRG_EVENT_UPDATE);
     }
   }
 
--- New file ---
+++ mysql-test/r/ndb_trigger.result	06/06/06 17:16:02
drop table if exists t1, t2, t3;
create table t1 (id int primary key, a int not null, b decimal (63,30) default 0) engine=ndb;
create table t2 (op char(1), a int not null, b decimal (63,30));
create table t3 select 1 as i;
create trigger t1_bu before update on t1 for each row
begin
insert into t2 values ("u", old.a, old.b);
set new.b = old.b + 10;
end;//
create trigger t1_bd before delete on t1 for each row
begin
insert into t2 values ("d", old.a, old.b);
end;//
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (3, 3, 3.05), (4, 4, 4.05);
update t1 set a=5 where a != 3;
select * from t1 order by id;
id	a	b
1	5	11.050000000000000000000000000000
2	5	12.050000000000000000000000000000
3	3	3.050000000000000000000000000000
4	5	14.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	1	1.050000000000000000000000000000
u	2	2.050000000000000000000000000000
u	4	4.050000000000000000000000000000
delete from t2;
update t1, t3 set a=6 where a = 5;
select * from t1 order by id;
id	a	b
1	6	21.050000000000000000000000000000
2	6	22.050000000000000000000000000000
3	3	3.050000000000000000000000000000
4	6	24.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	5	11.050000000000000000000000000000
u	5	12.050000000000000000000000000000
u	5	14.050000000000000000000000000000
delete from t2;
delete from t1 where a != 3;
select * from t1 order by id;
id	a	b
3	3	3.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
d	6	21.050000000000000000000000000000
d	6	22.050000000000000000000000000000
d	6	24.050000000000000000000000000000
delete from t2;
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (4, 4, 4.05);
delete t1 from t1, t3 where a != 3;
select * from t1 order by id;
id	a	b
3	3	3.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
d	1	1.050000000000000000000000000000
d	2	2.050000000000000000000000000000
d	4	4.050000000000000000000000000000
delete from t2;
insert into t1 values (4, 4, 4.05);
insert into t1 (id, a) values (4, 1), (3, 1) on duplicate key update a= a + 1;
select * from t1 order by id;
id	a	b
3	4	13.050000000000000000000000000000
4	5	14.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	3	3.050000000000000000000000000000
u	4	4.050000000000000000000000000000
delete from t2;
delete from t3;
insert into t3 values (4), (3);
insert into t1 (id, a) (select i, 1 from t3) on duplicate key update a= a + 1;
select * from t1 order by id;
id	a	b
3	5	23.050000000000000000000000000000
4	6	24.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	4	13.050000000000000000000000000000
u	5	14.050000000000000000000000000000
delete from t2;
replace into t1 (id, a) values (4, 1), (3, 1);
select * from t1 order by id;
id	a	b
3	1	33.050000000000000000000000000000
4	1	34.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	5	23.050000000000000000000000000000
u	6	24.050000000000000000000000000000
delete from t2;
replace into t1 (id, a) (select i, 2 from t3);
select * from t1 order by id;
id	a	b
3	2	43.050000000000000000000000000000
4	2	44.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	1	33.050000000000000000000000000000
u	1	34.050000000000000000000000000000
delete from t1;
delete from t2;
insert into t1 values (3, 1, 1.05), (5, 2, 2.05);
load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (id, a);
select * from t1 order by id;
id	a	b
3	4	11.050000000000000000000000000000
5	6	12.050000000000000000000000000000
select * from t2 order by op, a, b;
op	a	b
u	1	1.050000000000000000000000000000
u	2	2.050000000000000000000000000000
delete from t2;
drop tables t1, t2, t3;

--- New file ---
+++ mysql-test/t/ndb_trigger.test	06/06/06 17:16:02
# Tests which involve triggers and NDB storage engine
--source include/have_ndb.inc
--source include/not_embedded.inc

# 
# Test for bug#18437 "Wrong values inserted with a before update
# trigger on NDB table". SQL-layer didn't properly inform handler
# about fields which were read and set in triggers. In some cases 
# this resulted in incorrect (garbage) values of OLD variables and
# lost changes to NEW variables.
#

--disable_warnings
drop table if exists t1, t2, t3;
--enable_warnings

create table t1 (id int primary key, a int not null, b decimal (63,30) default 0) engine=ndb;
create table t2 (op char(1), a int not null, b decimal (63,30));
create table t3 select 1 as i;
	
delimiter //;
create trigger t1_bu before update on t1 for each row
begin
  insert into t2 values ("u", old.a, old.b);
  set new.b = old.b + 10;
end;//
create trigger t1_bd before delete on t1 for each row
begin
  insert into t2 values ("d", old.a, old.b);
end;//
delimiter ;//
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (3, 3, 3.05), (4, 4, 4.05);

# Check that usual update works as it should
update t1 set a=5 where a != 3;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check that everything works for multi-update
update t1, t3 set a=6 where a = 5;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for delete
delete from t1 where a != 3;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for multi-delete
insert into t1 values (1, 1, 1.05), (2, 2, 2.05), (4, 4, 4.05);
delete t1 from t1, t3 where a != 3;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for insert ... on duplicate key update
insert into t1 values (4, 4, 4.05);
insert into t1 (id, a) values (4, 1), (3, 1) on duplicate key update a= a + 1;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for insert ... select ... on duplicate key update
delete from t3;
insert into t3 values (4), (3);
insert into t1 (id, a) (select i, 1 from t3) on duplicate key update a= a + 1;
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for replace
# The results of this and following tests will change once bug #13479 is fixed.
replace into t1 (id, a) values (4, 1), (3, 1);
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;
# Check for replace ... select ...
replace into t1 (id, a) (select i, 2 from t3);
select * from t1 order by id;
select * from t2 order by op, a, b;
# Check for load data replace
delete from t1;
delete from t2;
insert into t1 values (3, 1, 1.05), (5, 2, 2.05);
load data infile '../std_data_ln/loaddata5.dat' replace into table t1 fields terminated by '' enclosed by '' ignore 1 lines (id, a);
select * from t1 order by id;
select * from t2 order by op, a, b;
delete from t2;

drop tables t1, t2, t3;

# End of 5.0 tests


--- 1.49/sql/sql_trigger.cc	2006-03-28 01:01:48 +04:00
+++ 1.50/sql/sql_trigger.cc	2006-06-06 17:16:02 +04:00
@@ -1013,6 +1013,9 @@
           continue;
         }
 
+        triggers->trigger_fields[lex.trg_chistics.event]
+                                [lex.trg_chistics.action_time]=
+          (Item_trigger_field *)(lex.trg_table_fields.first);
         /*
           Let us bind Item_trigger_field objects representing access to fields
           in old/new versions of row in trigger to Field objects in table being
@@ -1524,6 +1527,38 @@
   }
 
   return err_status;
+}
+
+
+/*
+  Mark fields of subject table which we read/set in its triggers as such.
+
+  SYNOPSIS
+    mark_used_fields()
+      thd    Current thread context
+      event  Type of event triggers for which we are going to inspect
+
+  DESCRIPTION
+    This method marks fields of subject table which are read/set in its
+    triggers as such (by setting Field::query_id equal to THD::query_id)
+    and thus informs handler that values for these fields should be
+    retrieved/stored during execution of statement.
+*/
+
+void Table_triggers_list::mark_used_fields(THD *thd, trg_event_type event)
+{
+  int action_time;
+  Item_trigger_field *trg_field;
+
+  for (action_time= 0; action_time < (int)TRG_ACTION_MAX; action_time++)
+  {
+    for (trg_field= trigger_fields[event][action_time]; trg_field;
+         trg_field= trg_field->next_trg_field)
+    {
+      if (trg_field->field_idx != (uint)-1)
+        table->field[trg_field->field_idx]->query_id = thd->query_id;
+    }
+  }
 }
 
 

--- 1.19/sql/sql_trigger.h	2006-02-26 16:32:52 +03:00
+++ 1.20/sql/sql_trigger.h	2006-06-06 17:16:02 +04:00
@@ -26,6 +26,11 @@
   /* Triggers as SPs grouped by event, action_time */
   sp_head           *bodies[TRG_EVENT_MAX][TRG_ACTION_MAX];
   /*
+    Heads of the lists linking items for all fields used in triggers
+    grouped by event and action_time.
+  */
+  Item_trigger_field *trigger_fields[TRG_EVENT_MAX][TRG_ACTION_MAX];
+  /*
     Copy of TABLE::Field array with field pointers set to TABLE::record[1]
     buffer instead of TABLE::record[0] (used for OLD values in on UPDATE
     trigger and DELETE trigger when it is called for REPLACE).
@@ -82,6 +87,7 @@
     record1_field(0), table(table_arg)
   {
     bzero((char *)bodies, sizeof(bodies));
+    bzero((char *)trigger_fields, sizeof(trigger_fields));
     bzero((char *)&subject_table_grants, sizeof(subject_table_grants));
   }
   ~Table_triggers_list();
@@ -117,7 +123,17 @@
     return test(bodies[TRG_EVENT_UPDATE][TRG_ACTION_BEFORE]);
   }
 
+  bool has_update_or_delete_triggers()
+  {
+    return (bodies[TRG_EVENT_UPDATE][TRG_ACTION_BEFORE] ||
+            bodies[TRG_EVENT_UPDATE][TRG_ACTION_AFTER] ||
+            bodies[TRG_EVENT_DELETE][TRG_ACTION_BEFORE] ||
+            bodies[TRG_EVENT_DELETE][TRG_ACTION_AFTER]);
+  }
+
   void set_table(TABLE *new_table);
+
+  void mark_used_fields(THD *thd, trg_event_type event);
 
   friend class Item_trigger_field;
   friend int sp_cache_routines_and_add_tables_for_triggers(THD *thd, LEX *lex,

--- 1.240/sql/ha_ndbcluster.cc	2006-03-28 17:40:04 +04:00
+++ 1.241/sql/ha_ndbcluster.cc	2006-06-06 17:16:01 +04:00
@@ -3129,20 +3129,11 @@
     break;
   case HA_EXTRA_IGNORE_DUP_KEY:       /* Dup keys don't rollback everything*/
     DBUG_PRINT("info", ("HA_EXTRA_IGNORE_DUP_KEY"));
-    if (current_thd->lex->sql_command == SQLCOM_REPLACE && !m_has_unique_index)
-    {
-      DBUG_PRINT("info", ("Turning ON use of write instead of insert"));
-      m_use_write= TRUE;
-    } else 
-    {
-      DBUG_PRINT("info", ("Ignoring duplicate key"));
-      m_ignore_dup_key= TRUE;
-    }
+    DBUG_PRINT("info", ("Ignoring duplicate key"));
+    m_ignore_dup_key= TRUE;
     break;
   case HA_EXTRA_NO_IGNORE_DUP_KEY:
     DBUG_PRINT("info", ("HA_EXTRA_NO_IGNORE_DUP_KEY"));
-    DBUG_PRINT("info", ("Turning OFF use of write instead of insert"));
-    m_use_write= FALSE;
     m_ignore_dup_key= FALSE;
     break;
   case HA_EXTRA_RETRIEVE_ALL_COLS:    /* Retrieve all columns, not just those
@@ -3172,7 +3163,19 @@
   case HA_EXTRA_KEYREAD_PRESERVE_FIELDS:
     DBUG_PRINT("info", ("HA_EXTRA_KEYREAD_PRESERVE_FIELDS"));
     break;
-
+  case HA_EXTRA_WRITE_CAN_REPLACE:
+    DBUG_PRINT("info", ("HA_EXTRA_WRITE_CAN_REPLACE"));
+    if (!m_has_unique_index)
+    {
+      DBUG_PRINT("info", ("Turning ON use of write instead of insert"));
+      m_use_write= TRUE;
+    }
+    break;
+  case HA_EXTRA_WRITE_CANNOT_REPLACE:
+    DBUG_PRINT("info", ("HA_EXTRA_WRITE_CANNOT_REPLACE"));
+    DBUG_PRINT("info", ("Turning OFF use of write instead of insert"));
+    m_use_write= FALSE;
+    break;
   }
   
   DBUG_RETURN(0);

--- 1.4/mysql-test/r/sp-prelocking.result	2006-03-28 15:06:17 +04:00
+++ 1.5/mysql-test/r/sp-prelocking.result	2006-06-06 17:16:01 +04:00
@@ -237,3 +237,21 @@
 drop table t1;
 drop view v1, v2, v3;
 drop function bug15683;
+drop table if exists t1, t2, t3;
+drop function if exists bug19634;
+create table t1 (id int, data int);
+create table t2 (id int);
+create table t3 (data int);
+create function bug19634() returns int return (select count(*) from t3);
+prepare stmt from "delete t1 from t1, t2 where t1.id = t2.id and bug19634()";
+execute stmt;
+execute stmt;
+deallocate prepare stmt;
+create trigger t1_bi before delete on t1 for each row insert into t3 values (old.data);
+prepare stmt from "delete t1 from t1, t2 where t1.id = t2.id";
+execute stmt;
+execute stmt;
+deallocate prepare stmt;
+drop function bug19634;
+drop table t1, t2, t3;
+End of 5.0 tests

--- 1.4/mysql-test/t/sp-prelocking.test	2006-03-28 15:06:17 +04:00
+++ 1.5/mysql-test/t/sp-prelocking.test	2006-06-06 17:16:01 +04:00
@@ -272,3 +272,34 @@
 drop view v1, v2, v3;
 drop function bug15683;
 
+
+#
+# Bug#19634 "Re-execution of multi-delete which involve trigger/stored 
+#            function crashes server"
+#
+--disable_warnings
+drop table if exists t1, t2, t3;
+drop function if exists bug19634;
+--enable_warnings
+create table t1 (id int, data int);
+create table t2 (id int);
+create table t3 (data int);
+create function bug19634() returns int return (select count(*) from t3);
+prepare stmt from "delete t1 from t1, t2 where t1.id = t2.id and bug19634()";
+# This should not crash server
+execute stmt;
+execute stmt;
+deallocate prepare stmt;
+
+create trigger t1_bi before delete on t1 for each row insert into t3 values (old.data);
+prepare stmt from "delete t1 from t1, t2 where t1.id = t2.id";
+
+execute stmt;
+execute stmt;
+deallocate prepare stmt;
+
+drop function bug19634;
+drop table t1, t2, t3;
+
+
+--echo End of 5.0 tests