List: Commits
From: Guilhem Bichot
Date: March 31 2011 1:41pm
Subject: bzr commit into mysql-trunk branch (guilhem.bichot:3286)
#At file:///home/mysql_src/bzrrepos_new/new_stuf_wl4800/ based on revid:guilhem.bichot@strippedqvh16uin7d7f374

 3286 Guilhem Bichot	2011-03-31
      1) Follow-up to previous patch.
      glob_recursive_disable_I_S should be encapsulated and thread-safe: it should
      belong to a per-thread optimizer trace context (Opt_trace_context).
      But it must also exist at all times, whereas the context (THD::opt_trace) is
      allocated only when needed (when @@optimizer_trace has enabled=on or --debug
      is used).
      Thus we do this refactoring:
      from
      "THD points to Opt_trace_context allocated when needed"
      to
      "THD contains Opt_trace_context which:
        - contains the I_S_disabled counter (replaces glob_recursive_disable_I_S)
        - and points to Opt_trace_context_impl allocated when needed".
      In other words: the old Opt_trace_context moves to Opt_trace_context_impl;
      the new Opt_trace_context exists at all times and is just a container for
      I_S_disabled plus a pointer to Opt_trace_context_impl.
      This changes the usage in the optimizer code:
      Opt_trace_object(thd->opt_trace) becomes
      Opt_trace_object(&thd->opt_trace).
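      A minimal, self-contained sketch of the new layout (the forwarding to
      "impl" and all other members are elided here; THD_sketch is a hypothetical
      stand-in for THD; see the sql/opt_trace.h hunk below for the real
      declarations):

        #include <cstddef>                       // for NULL

        class Opt_trace_context_impl;            // allocated on the heap only
                                                  // when tracing is really used
        class Opt_trace_context
        {
        public:
          Opt_trace_context() : impl(NULL), I_S_disabled(0) {}
          void disable_I_S_for_this_and_children() { ++I_S_disabled; }
          void restore_I_S()                       { --I_S_disabled; }
        private:
          Opt_trace_context_impl *impl;  // the old Opt_trace_context moves here
          int I_S_disabled;              // replaces glob_recursive_disable_I_S
        };

        class THD_sketch                 // hypothetical stand-in for THD
        {
        public:
          Opt_trace_context opt_trace;   // contained, no longer a pointer
        };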
      
      2) Because of the new counter Opt_trace_context::I_S_disabled, we don't use
      NO_FOR_THIS_AND_CHILDREN anymore, so enum_support_I_S now has only two values;
      it is changed back to a boolean:
      YES_FOR_THIS -> true; NO_FOR_THIS -> false.
      
      3) Simplifying code and removing one failure possibility.
      This follows from the fact that enum_support_I_S was eliminated.
      In the life of an Opt_trace_stmt, support for I_S may be temporarily
      disabled (using Opt_trace_disable_I_S or on a per-feature basis).
      Once disabled, it must stay disabled until re-enabled at the same stack
      frame. This:
        Opt_trace_object1 // disables I_S
          Opt_trace_object2 // re-enables I_S
      is impossible (the top object wins).
      So, to keep track of the current state, it is sufficient to have a counter
      which is incremented each time we get a request to disable I_S (and
      decremented when that request is undone).
      This allows:
      - getting rid of one Dynamic_array in Opt_trace_stmt, and of its possible
      memory allocation failure
      - and thus having Opt_trace_stmt::disable_I_S() always succeed
      - and thus having Opt_trace_context::disable_I_S_for_this_and_children()
      always succeed
      - so fewer if()s (no return code to test)
      - and Opt_trace_disable_I_S can now guarantee that it will do its job,
      which will be useful for the upcoming security-related patch.
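      The per-statement state thus reduces to a plain counter; the member
      functions below are quoted from the sql/opt_trace.cc hunk further down:

        // In class Opt_trace_stmt:
        bool support_I_S() const { return I_S_disabled == 0; }

        /// Temporarily disables I_S.
        void disable_I_S() { ++I_S_disabled; }

        /// Restores I_S support to what it was before the previous call
        /// to disable_I_S().
        void restore_I_S() { --I_S_disabled; }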
      
      4) Code simplification: instead of having one parameter 'support_I_S' in the
      constructor of Opt_trace_stmt, let the object be born with I_S enabled, and let
      the caller disable I_S with the existing Opt_trace_stmt::disable_I_S() function.
      This leaves one less place in the Opt_trace_stmt class where
      Opt_trace_stmt::I_S_disabled is changed.
      
      5) Unit test for Opt_trace_context::disable_I_S_for_this_and_children()
      
      6) Unit test for the optimization in the common case where all statements
      and substatements ask neither for I_S nor for DBUG (a hypothetical sketch
      of such a test follows).
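      A hypothetical sketch of such a unit test (the test name, the exact
      arguments passed to start(), and the assertions are illustrative only;
      the real tests live in unittest/gunit/opt_trace-t.cc):

        TEST(OptTraceContext, DisableISForThisAndChildren)
        {
          Opt_trace_context ctx;
          // Simulate a caller which forbids tracing of this statement and
          // of its children:
          ctx.disable_I_S_for_this_and_children();
          // Even though we ask for I_S support, start() must succeed (return
          // false) without creating any Opt_trace_stmt:
          EXPECT_FALSE(ctx.start(true, false, false, false, -1, 1, ULONG_MAX, 0));
          EXPECT_FALSE(ctx.is_started());
          ctx.end();
          ctx.restore_I_S();
        }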
     @ mysql-test/t/optimizer_trace_debug.test
        The name of the debug symbol has changed: due to the refactoring, when
        tracing is off Opt_trace_context::start() will be called but no
        Opt_trace_stmt should be created.
        Also test that when, in a substatement, tracing has been disabled by the
        caller, we don't create an Opt_trace_stmt (this is also covered by a unit
        test).
     @ sql/handler.cc
        thd->opt_trace is not a pointer anymore
     @ sql/item_subselect.cc
        thd->opt_trace is not a pointer anymore
     @ sql/mysqld.cc
        thd->opt_trace is not a pointer anymore
     @ sql/opt_range.cc
        thd->opt_trace is not a pointer anymore
     @ sql/opt_trace.cc
        - Opt_trace_stmt::disable_I_S_for_this_and_children() is renamed
        to disable_I_S() because it can only disable for itself, not for children
        as it cannot access Opt_trace_context::I_S_disabled. So there is now
        a master function, Opt_trace_context::disable_I_S_for_this_and_children(),
        which sets Opt_trace_context::I_S_disabled and calls
        Opt_trace_stmt::disable_I_S(). Same for restore_I_S().
        - Many Opt_trace_context functions access data from "impl", which may
        or may not be NULL depending on the function (so sometimes we have to
        test for NULL, sometimes not).
        - Some logic of opt_trace_start() (a server-specific function) moves
        into Opt_trace_context::start(),
        because certain optimizations done in opt_trace_start() were not
        its province. Opt_trace_context::start() is now informed of whether it
        needs to disable tracing of child statements, and of whether --debug
        output is needed.
        - Opt_trace_context::start() allocates Opt_trace_context_impl if needed.
        - Because disabling of tracing is now done by
        Opt_trace_context::disable_I_S_for_this_and_children(),
        Opt_trace_disable_I_S keeps a pointer to the context instead
        of to the statement.
     @ sql/opt_trace.h
        * Declaration of Opt_trace_context_impl: it steals most of what Opt_trace_context had.
        Opt_trace_context owns only I_S_disabled.
        * New class Opt_trace_start: see comments of opt_trace2server.cc
     @ sql/opt_trace2server.cc
        * opt_trace_start() now only does server-specific checks (is this SQL command
        trace-able?), to decide what kind of tracing it wants:
         ** debug / not debug
         ** I_S / no I_S for this statement / no I_S for this statement and its children,
        and then just forwards the request to Opt_trace_context::start().
        * Callers of opt_trace_start() needed to remember its return value, to
        pass it to opt_trace_end(), so that opt_trace_end() knows whether
        there is something to end at all. Now they would also need to remember
        whether opt_trace_start() requested no tracing for children. To keep
        this clean, opt_trace_start() is replaced with an Opt_trace_start class,
        used in "RAII" style: you instantiate it to call opt_trace_start(),
        it keeps an internal memory of what opt_trace_start() did,
        and its destructor automatically calls opt_trace_end(). In effect,
        opt_trace_start() becomes part of the constructor and opt_trace_end()
        part of the destructor (a minimal sketch of this RAII idea follows
        these per-file notes).
        - Because opt_trace_start() was always followed by opt_trace_set_query(),
        the latter is incorporated into the constructor of Opt_trace_start.
        - In fill_optimizer_trace_info(), indentation changes because thd->opt_trace
        is not a pointer anymore.
        ******
        Consequence of the change to opt_trace.cc: in Opt_trace_start::Opt_trace_start(),
        even though we ask for I_S support, Opt_trace_context::start() may decide
        not to provide it (if tracing is disabled by the caller). In that case we
        must not call set_query().
     @ sql/sql_class.cc
        thd->opt_trace is not a pointer anymore
     @ sql/sql_class.h
        thd->opt_trace is not a pointer anymore
     @ sql/sql_delete.cc
        thd->opt_trace is not a pointer anymore
     @ sql/sql_help.cc
        thd->opt_trace is not a pointer anymore
     @ sql/sql_parse.cc
        New class Opt_trace_start simplifies things:
        - no variables which callers must remember on the stack and pass
        to opt_trace_end()
        - call to opt_trace_end() is implicit, cannot be forgotten
        even if multiple exit paths exist ("ok:", "error:", "error2:" etc).
        - because destructors only run at end of scope, we previously had to call
           trace_command.end(); // must be closed before trace is ended below
           opt_trace_end(thd, started_optimizer_trace);
        explicitly and in that order; now that opt_trace_end() is itself in a
        destructor, the order of destructor calls makes the explicit ".end()"
        calls unneeded.
        - set_query() is now done by Opt_trace_start.
     @ sql/sql_prepare.cc
        see the comment for sql_parse.cc. We don't need the final "end" label
        anymore, so we can revert certain code lines to how they are in trunk.
     @ sql/sql_select.cc
        thd->opt_trace is not a pointer anymore
     @ sql/sql_update.cc
        thd->opt_trace is not a pointer anymore
     @ sql/sql_view.cc
        thd->opt_trace is not a pointer anymore
     @ sql/sys_vars.cc
        thd->opt_trace is not a pointer anymore
     @ unittest/gunit/opt_trace-t.cc
        Update to new prototypes: Opt_trace_context::start() gets "false" for "support_dbug".
        New unit tests.
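      As announced in the notes for sql/opt_trace2server.cc above, here is a
      minimal, self-contained sketch of the RAII idea behind Opt_trace_start
      (names and signatures are simplified and hypothetical; the real class is
      declared in sql/opt_trace.h and implemented in sql/opt_trace2server.cc):

        // Stand-in for the real context, just enough to show the pattern.
        class Trace_context_sketch
        {
        public:
          Trace_context_sketch() : started(false) {}
          bool start() { started= true; return false; }  // false == success
          void end()   { started= false; }
          bool is_started() const { return started; }
        private:
          bool started;
        };

        class Opt_trace_start_sketch
        {
        public:
          // The constructor plays the role of the old opt_trace_start().
          explicit Opt_trace_start_sketch(Trace_context_sketch *ctx_arg)
            : ctx(ctx_arg)
          {
            error= ctx->start();
          }
          // The destructor plays the role of the old opt_trace_end(); it runs
          // on every exit path, so the call can never be forgotten.
          ~Opt_trace_start_sketch()
          {
            if (!error)
              ctx->end();
          }
        private:
          Trace_context_sketch *ctx;
          bool error;
        };

      In sql_parse.cc and sql_prepare.cc the wrapper is simply put on the stack
      where opt_trace_start() used to be called; destructor ordering then makes
      the explicit ".end()" calls and the final "end" labels unnecessary.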

    modified:
      mysql-test/r/optimizer_trace_debug.result
      mysql-test/t/optimizer_trace_debug.test
      sql/handler.cc
      sql/item_subselect.cc
      sql/mysqld.cc
      sql/opt_range.cc
      sql/opt_trace.cc
      sql/opt_trace.h
      sql/opt_trace2server.cc
      sql/sql_class.cc
      sql/sql_class.h
      sql/sql_delete.cc
      sql/sql_help.cc
      sql/sql_parse.cc
      sql/sql_prepare.cc
      sql/sql_select.cc
      sql/sql_update.cc
      sql/sql_view.cc
      sql/sys_vars.cc
      unittest/gunit/opt_trace-t.cc
=== modified file 'mysql-test/r/optimizer_trace_debug.result'
--- a/mysql-test/r/optimizer_trace_debug.result	2011-03-08 07:18:14 +0000
+++ b/mysql-test/r/optimizer_trace_debug.result	2011-03-31 13:41:41 +0000
@@ -5,7 +5,7 @@
 # outside of any tricky stored-routine scenarios tested
 # in optimizer_trace2.test, are optimized, i.e. no trace is
 # created, even a dummy internal one invisible in I_S.
-set debug="d,opt_trace_should_not_start";
+set debug="d,no_new_opt_trace_stmt";
 select 1;
 1
 1
@@ -19,7 +19,7 @@ set debug="default";
 select 2;
 2
 2
-set debug="d,opt_trace_should_not_start";
+set debug="d,no_new_opt_trace_stmt";
 select * from information_schema.OPTIMIZER_TRACE;
 QUERY	TRACE	MISSING_BYTES_BEYOND_MAX_MEM_SIZE
 select 2	{
@@ -85,4 +85,19 @@ select 2	{
     }
   ]
 }	0
-set optimizer_trace=default;
+set optimizer_trace="enabled=on";
+
+# Test that if top command is not trace-able,
+# substatements don't create an Opt_trace_stmt either.
+
+create function f1() returns int
+begin
+declare b int;
+select 48 into b from dual;
+return 36;
+end|
+set optimizer_trace_offset=0, optimizer_trace_limit=100;
+do f1();
+select * from information_schema.OPTIMIZER_TRACE;
+QUERY	TRACE	MISSING_BYTES_BEYOND_MAX_MEM_SIZE
+drop function f1;

=== modified file 'mysql-test/t/optimizer_trace_debug.test'
--- a/mysql-test/t/optimizer_trace_debug.test	2011-03-08 07:18:14 +0000
+++ b/mysql-test/t/optimizer_trace_debug.test	2011-03-31 13:41:41 +0000
@@ -13,7 +13,7 @@
 --echo # created, even a dummy internal one invisible in I_S.
 
 # make server crash if it creates a trace, even a dummy internal one
-set debug="d,opt_trace_should_not_start";
+set debug="d,no_new_opt_trace_stmt";
 
 # tracable but tracing is off
 select 1;
@@ -26,7 +26,7 @@ set optimizer_trace="enabled=on";
 select * from information_schema.OPTIMIZER_TRACE;
 set debug="default";
 select 2;
-set debug="d,opt_trace_should_not_start";
+set debug="d,no_new_opt_trace_stmt";
 select * from information_schema.OPTIMIZER_TRACE;
 set @a=25;
 set optimizer_trace="enabled=off";
@@ -36,4 +36,21 @@ select 3;
 # should see only trace of "select 2"
 select * from information_schema.OPTIMIZER_TRACE;
 
-set optimizer_trace=default;
+set optimizer_trace="enabled=on";
+
+--echo
+--echo # Test that if top command is not trace-able,
+--echo # substatements don't create an Opt_trace_stmt either.
+--echo
+delimiter |;
+create function f1() returns int
+begin
+  declare b int;
+  select 48 into b from dual;
+  return 36;
+end|
+delimiter ;|
+set optimizer_trace_offset=0, optimizer_trace_limit=100;
+do f1(); # DO is never traced
+select * from information_schema.OPTIMIZER_TRACE;
+drop function f1;

=== modified file 'sql/handler.cc'
--- a/sql/handler.cc	2011-03-21 17:55:41 +0000
+++ b/sql/handler.cc	2011-03-31 13:41:41 +0000
@@ -4514,7 +4514,7 @@ handler::multi_range_read_info_const(uin
   *bufsz= 0;
 
   seq_it= seq->init(seq_init_param, n_ranges, *flags);
-  Opt_trace_array ota(thd->opt_trace, "ranges");
+  Opt_trace_array ota(&thd->opt_trace, "ranges");
   while (!seq->next(seq_it, &range, &ota))
   {
     if (unlikely(thd->killed != 0))

=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc	2011-03-21 17:55:41 +0000
+++ b/sql/item_subselect.cc	2011-03-31 13:41:41 +0000
@@ -307,11 +307,10 @@ bool Item_subselect::exec()
     2) REPEATED_SUBSELECT is disabled
   */
 #ifdef OPTIMIZER_TRACE  
-  Opt_trace_context * const trace= thd->opt_trace;
-  const bool repeated_trace_enabled= trace ? 
-    trace->feature_enabled(Opt_trace_context::REPEATED_SUBSELECT) :
-    false;
-  const bool disable_trace= (traced_before && !repeated_trace_enabled);
+  Opt_trace_context * const trace= &thd->opt_trace;
+  const bool disable_trace=
+    traced_before &&
+    !trace->feature_enabled(Opt_trace_context::REPEATED_SUBSELECT);
   Opt_trace_disable_I_S disable_trace_wrapper(trace, disable_trace);
   traced_before= true;
 
@@ -1144,7 +1143,7 @@ Item_in_subselect::single_value_transfor
 	!(select_lex->next_select()) &&
         select_lex->table_list.elements)
     {
-      OPT_TRACE_TRANSFORM(thd->opt_trace, oto0, oto1,
+      OPT_TRACE_TRANSFORM(&thd->opt_trace, oto0, oto1,
                           select_lex->select_number,
                           "> ALL/ANY (SELECT)", "SELECT(MIN)");
       oto1.add("chosen", true);
@@ -1193,7 +1192,7 @@ Item_in_subselect::single_value_transfor
     }
     else
     {
-      OPT_TRACE_TRANSFORM(thd->opt_trace, oto0, oto1,
+      OPT_TRACE_TRANSFORM(&thd->opt_trace, oto0, oto1,
                           select_lex->select_number,
                           "> ALL/ANY (SELECT)", "MIN (SELECT)");
       oto1.add("chosen", true);
@@ -1300,7 +1299,7 @@ Item_in_subselect::single_value_in_to_ex
   SELECT_LEX *select_lex= join->select_lex;
   DBUG_ENTER("Item_in_subselect::single_value_in_to_exists_transformer");
 
-  OPT_TRACE_TRANSFORM(thd->opt_trace, oto0, oto1, select_lex->select_number,
+  OPT_TRACE_TRANSFORM(&thd->opt_trace, oto0, oto1, select_lex->select_number,
                       "IN (SELECT)", "EXISTS (CORRELATED SELECT)");
   oto1.add("chosen", true);
 
@@ -1575,7 +1574,7 @@ Item_in_subselect::row_value_in_to_exist
                         !select_lex->table_list.elements);
 
   DBUG_ENTER("Item_in_subselect::row_value_in_to_exists_transformer");
-  OPT_TRACE_TRANSFORM(thd->opt_trace, oto0, oto1, select_lex->select_number,
+  OPT_TRACE_TRANSFORM(&thd->opt_trace, oto0, oto1, select_lex->select_number,
                       "IN (SELECT)", "EXISTS (CORRELATED SELECT)");
   oto1.add("chosen", true);
 
@@ -1963,7 +1962,7 @@ bool Item_in_subselect::setup_engine()
   subselect_single_select_engine *old_engine_derived=
     static_cast<subselect_single_select_engine*>(old_engine);
 
-  OPT_TRACE_TRANSFORM(thd->opt_trace, oto0, oto1,
+  OPT_TRACE_TRANSFORM(&thd->opt_trace, oto0, oto1,
                       old_engine_derived->join->select_lex->select_number,
                       "IN (SELECT)", "materialization");
   oto1.add("chosen", true);

=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc	2011-03-21 17:55:41 +0000
+++ b/sql/mysqld.cc	2011-03-31 13:41:41 +0000
@@ -2487,10 +2487,10 @@ the thread stack. Please read http://dev
     fprintf(stderr, "Connection ID (thread ID): %lu\n", (ulong) thd->thread_id);
     fprintf(stderr, "Status: %s\n", kreason);
 #ifdef OPTIMIZER_TRACE
-    if ((thd->opt_trace != NULL) && thd->opt_trace->is_started())
+    if (thd->opt_trace.is_started())
     {
       const size_t max_print_len= 4096; // print those final bytes
-      const char *tail= thd->opt_trace->get_tail(max_print_len);
+      const char *tail= thd->opt_trace.get_tail(max_print_len);
       fprintf(stderr, "Tail of Optimizer trace (%p): ", tail);
       my_safe_print_str(tail, max_print_len);
     }

=== modified file 'sql/opt_range.cc'
--- a/sql/opt_range.cc	2011-03-21 20:50:40 +0000
+++ b/sql/opt_range.cc	2011-03-31 13:41:41 +0000
@@ -2062,7 +2062,7 @@ void TRP_RANGE::trace_basic_info(const P
   trace_object->add_alnum("type", "range_scan").
     add_utf8("index", cur_key.name).add("records", records);
 
-  Opt_trace_array trace_range(param->thd->opt_trace, "ranges");
+  Opt_trace_array trace_range(&param->thd->opt_trace, "ranges");
 
   const SEL_ARG *current_range= key;
   // navigate from root to first interval in the interval tree 
@@ -2143,7 +2143,7 @@ void TRP_ROR_INTERSECT::trace_basic_info
     add("covering", is_covering).
     add("clustered_pk_scan", cpk_scan != NULL);
 
-  Opt_trace_context *trace_ctx= param->thd->opt_trace;
+  Opt_trace_context * const trace_ctx= &param->thd->opt_trace;
   Opt_trace_array ota(trace_ctx, "intersect_of");
   for (st_ror_scan_info **cur_scan= first_scan;
        cur_scan != last_scan;
@@ -2202,13 +2202,14 @@ void TRP_ROR_UNION::trace_basic_info(con
                                      Opt_trace_object *trace_object) const
 {
 #ifdef OPTIMIZER_TRACE
+  Opt_trace_context * const trace_ctx= &param->thd->opt_trace;
   trace_object->add_alnum("type", "index_roworder_union");
-  Opt_trace_array ota(param->thd->opt_trace, "union_of");
+  Opt_trace_array ota(trace_ctx, "union_of");
   for (TABLE_READ_PLAN **current= first_ror;
        current != last_ror;
        current++)
   {
-    Opt_trace_object trp_info(param->thd->opt_trace);
+    Opt_trace_object trp_info(trace_ctx);
     (*current)->trace_basic_info(param, &trp_info);
   }
 #endif
@@ -2238,13 +2239,14 @@ void TRP_INDEX_MERGE::trace_basic_info(c
                                        Opt_trace_object *trace_object) const
 {
 #ifdef OPTIMIZER_TRACE
+  Opt_trace_context * const trace_ctx= &param->thd->opt_trace;
   trace_object->add_alnum("type", "index_merge");
-  Opt_trace_array ota(param->thd->opt_trace, "index_merge_of");
+  Opt_trace_array ota(trace_ctx, "index_merge_of");
   for (TRP_RANGE **current= range_scans;
        current != range_scans_end;
        current++)
   {
-    Opt_trace_object trp_info(param->thd->opt_trace);
+    Opt_trace_object trp_info(trace_ctx);
     (*current)->trace_basic_info(param, &trp_info);
   }
 #endif
@@ -2335,16 +2337,16 @@ void TRP_GROUP_MIN_MAX::trace_basic_info
     add("cost", read_cost);
 
   const KEY_PART_INFO *key_part= index_info->key_part;
+  Opt_trace_context * const trace_ctx= &param->thd->opt_trace;
   {
-    Opt_trace_array trace_keyparts(param->thd->opt_trace,
-                                   "key_parts_used_for_access");
+    Opt_trace_array trace_keyparts(trace_ctx, "key_parts_used_for_access");
     for (uint partno= 0; partno < used_key_parts; partno++)
     {
       const KEY_PART_INFO *cur_key_part= key_part + partno;
       trace_keyparts.add_utf8(cur_key_part->field->field_name);
     }
   }
-  Opt_trace_array trace_range(param->thd->opt_trace, "ranges");
+  Opt_trace_array trace_range(trace_ctx, "ranges");
 
 
   const SEL_ARG *current_range= index_tree;
@@ -2510,7 +2512,7 @@ int SQL_SELECT::test_quick_select(THD *t
   else if (read_time <= 2.0 && !force_quick_range)
     DBUG_RETURN(0);				/* No need for quick select */
 
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   Opt_trace_object trace_range(trace, "range_analysis");
   Opt_trace_object(trace, "table_scan").
     add("records", head->file->stats.records).
@@ -4159,7 +4161,7 @@ TABLE_READ_PLAN *get_best_disjunct_quick
   DBUG_ENTER("get_best_disjunct_quick");
   DBUG_PRINT("info", ("Full table scan cost: %g", read_time));
 
-  Opt_trace_context * const trace= param->thd->opt_trace;
+  Opt_trace_context * const trace= &param->thd->opt_trace;
   Opt_trace_object trace_best_disjunct(trace);
   if (!(range_scans= (TRP_RANGE**)alloc_root(param->mem_root,
                                              sizeof(TRP_RANGE*)*
@@ -4957,10 +4959,10 @@ TRP_ROR_INTERSECT *get_best_ror_intersec
 {
   uint idx;
   double min_cost= DBL_MAX;
+  Opt_trace_context * const trace_ctx= &param->thd->opt_trace;
   DBUG_ENTER("get_best_ror_intersect");
 
-  Opt_trace_object trace_ror(param->thd->opt_trace, 
-                             "analyzing_roworder_intersect");
+  Opt_trace_object trace_ror(trace_ctx, "analyzing_roworder_intersect");
 
   if ((tree->n_ror_scans < 2) || !param->table->file->stats.records ||
       !param->thd->optimizer_switch_flag(OPTIMIZER_SWITCH_INDEX_MERGE_INTERSECT))
@@ -5042,11 +5044,10 @@ TRP_ROR_INTERSECT *get_best_ror_intersec
     Note: trace_isect_idx.end() is called to close this object after
     this while-loop.
   */
-  Opt_trace_array trace_isect_idx(param->thd->opt_trace,
-                                  "intersecting_indices");
+  Opt_trace_array trace_isect_idx(trace_ctx, "intersecting_indices");
   while (cur_ror_scan != tree->ror_scans_end && !intersect->is_covering)
   {
-    Opt_trace_object trace_idx(param->thd->opt_trace);
+    Opt_trace_object trace_idx(trace_ctx);
     char *idx_name= param->table->key_info[(*cur_ror_scan)->keynr].name;
     trace_idx.add_utf8("index", idx_name);
     /* S= S + first(R);  R= R - first(R); */
@@ -5099,7 +5100,7 @@ TRP_ROR_INTERSECT *get_best_ror_intersec
     covering, it doesn't make sense to add CPK scan.
   */
   { // Scope for trace object
-    Opt_trace_object trace_cpk(param->thd->opt_trace, "clustered_pk");
+    Opt_trace_object trace_cpk(trace_ctx, "clustered_pk");
     if (cpk_scan && !intersect->is_covering)
     {
       if (ror_intersect_add(intersect, cpk_scan, TRUE) &&
@@ -5202,7 +5203,7 @@ TRP_ROR_INTERSECT *get_best_covering_ror
   DBUG_ENTER("get_best_covering_ror_intersect");
 
   // None of our tests enter this function
-  Opt_trace_object (param->thd->opt_trace).
+  Opt_trace_object (&param->thd->opt_trace).
     add("get_best_covering_roworder_intersect", true).
     add("untested_code", true).
     add("need_tracing",true);
@@ -5359,6 +5360,7 @@ static TRP_RANGE *get_key_scans_params(P
   DBUG_ENTER("get_key_scans_params");
   LINT_INIT(best_mrr_flags); /* protected by key_to_read */
   LINT_INIT(best_buf_size); /* protected by key_to_read */
+  Opt_trace_context * const trace_ctx= &param->thd->opt_trace;
   /*
     Note that there may be trees that have type SEL_TREE::KEY but contain no
     key reads at all, e.g. tree for expression "key1 is not null" where key1
@@ -5366,7 +5368,7 @@ static TRP_RANGE *get_key_scans_params(P
   */
   DBUG_EXECUTE("info", print_sel_tree(param, tree, &tree->keys_map,
                                       "tree scans"););
-  Opt_trace_array ota(param->thd->opt_trace, "range_scan_alternatives");
+  Opt_trace_array ota(trace_ctx, "range_scan_alternatives");
 
   tree->ror_scans_map.clear_all();
   tree->n_ror_scans= 0;
@@ -5386,7 +5388,7 @@ static TRP_RANGE *get_key_scans_params(P
       bool read_index_only= index_read_must_be_used ? TRUE :
                             (bool) param->table->covering_keys.is_set(keynr);
 
-      Opt_trace_object trace_idx(param->thd->opt_trace);
+      Opt_trace_object trace_idx(trace_ctx);
       trace_idx.add_utf8("index", param->table->key_info[keynr].name);
 
       found_records= check_quick_select(param, idx, read_index_only, *key,
@@ -6598,8 +6600,8 @@ get_mm_leaf(RANGE_OPT_PARAM *param, Item
 end:
   if (impossible_cond_cause)
   {
-    Opt_trace_object wrapper (param->thd->opt_trace);
-    Opt_trace_object (param->thd->opt_trace, "impossible_condition",
+    Opt_trace_object wrapper (&param->thd->opt_trace);
+    Opt_trace_object (&param->thd->opt_trace, "impossible_condition",
                       Opt_trace_context::RANGE_OPTIMIZER).
       add_alnum("cause", impossible_cond_cause);
   }
@@ -8126,9 +8128,8 @@ uint sel_arg_range_seq_next(range_seq_t 
   String *active_trace_ptr= NULL; 
 
 #ifdef OPTIMIZER_TRACE
-  if (unlikely(trace_array != NULL) &&
-      unlikely(seq->param->thd->opt_trace != NULL) &&
-      seq->param->thd->opt_trace->is_started())
+  if ((trace_array != NULL) &&
+      unlikely(seq->param->thd->opt_trace.is_started()))
   {
     key_range_trace.set_charset(system_charset_info);
     active_trace_ptr= &key_range_trace;
@@ -10190,10 +10191,11 @@ get_best_group_min_max(PARAM *param, SEL
   ha_rows best_quick_prefix_records= 0;
   uint best_param_idx= 0;
   List_iterator<Item> select_items_it;
+  Opt_trace_context * const trace_ctx= &param->thd->opt_trace;
 
   DBUG_ENTER("get_best_group_min_max");
 
-  Opt_trace_object trace_group(thd->opt_trace, "group_index_range",
+  Opt_trace_object trace_group(trace_ctx, "group_index_range",
                                Opt_trace_context::RANGE_OPTIMIZER);
   const char* cause= NULL;
 
@@ -10298,12 +10300,11 @@ get_best_group_min_max(PARAM *param, SEL
   SEL_ARG *cur_index_tree= NULL;
   ha_rows cur_quick_prefix_records= 0;
   uint cur_param_idx= MAX_KEY;
-  Opt_trace_array trace_indices(thd->opt_trace,
-                                "potential_group_range_indices");
+  Opt_trace_array trace_indices(trace_ctx, "potential_group_range_indices");
   for (uint cur_index= 0 ; cur_index_info != cur_index_info_end ;
        cur_index_info++, cur_index++)
   {
-    Opt_trace_object trace_idx(thd->opt_trace);
+    Opt_trace_object trace_idx(trace_ctx);
     trace_idx.add_utf8("index", cur_index_info->name);
     KEY_PART_INFO *cur_part;
     KEY_PART_INFO *end_part; /* Last part for loops. */

=== modified file 'sql/opt_trace.cc'
--- a/sql/opt_trace.cc	2011-03-24 10:33:38 +0000
+++ b/sql/opt_trace.cc	2011-03-31 13:41:41 +0000
@@ -26,8 +26,6 @@
 
 #ifdef OPTIMIZER_TRACE
 
-int glob_recursive_disable_I_S= 0;
-
 /**
   @class Opt_trace_stmt
 
@@ -40,12 +38,10 @@ class Opt_trace_stmt
 {
 public:
   /**
-     Constructor, starts a trace
+     Constructor, starts a trace for information_schema and dbug.
      @param  ctx_arg          context
-     @param  support_I_S_arg  should trace be in information_schema
   */
-  Opt_trace_stmt(Opt_trace_context *ctx_arg,
-                 enum enum_support_I_S support_I_S_arg);
+  Opt_trace_stmt(Opt_trace_context *ctx_arg);
 
   /**
      Ends a trace; destruction may not be possible immediately as we may have
@@ -150,8 +146,6 @@ public:
   /// @returns 'size' last bytes of the trace buffer
   const char *trace_buffer_tail(size_t size);
 
-  enum enum_support_I_S get_support_I_S() const { return support_I_S; }
-
   /// @returns total memory used by this trace
   size_t alloced_length() const
   { return trace_buffer.alloced_length() + query_buffer.alloced_length(); }
@@ -159,12 +153,34 @@ public:
   void assert_current_struct(const Opt_trace_struct *s) const
   { DBUG_ASSERT(current_struct == s); }
 
+  bool support_I_S() const { return I_S_disabled == 0; }
+
+  /// Temporarily disables I_S.
+  void disable_I_S() { ++I_S_disabled; }
+
+  /**
+     Restores I_S support to what it was before the previous call
+     to disable_I_S().
+  */
+  void restore_I_S() { --I_S_disabled; }
+
 private:
 
   bool ended;           ///< Whether @c end() has been called on this instance
 
-  /// Should this trace be in information_schema
-  enum enum_support_I_S support_I_S;
+  /**
+    0 <=> this trace should be in information_schema.
+    In the life of an Opt_trace_stmt, support for I_S may be temporarily
+    disabled (using Opt_trace_disable_I_S or on a per-feature basis).
+    Once disabled, it must stay disabled until re-enabled at the same stack
+    frame. This:
+    Opt_trace_object1 // disables I_S
+       Opt_trace_object2 // re-enables I_S
+    is impossible (the top object wins).
+    So it is sufficient, to keep track of the current state, to have a counter
+    incremented each time we get a request to disable I_S.
+  */
+  int I_S_disabled;
 
   Opt_trace_context *ctx;                       ///< context
   Opt_trace_struct *current_struct;             ///< current open structure
@@ -192,48 +208,6 @@ private:
   */
   Dynamic_array<Struct_desc> stack_of_current_structs;
 
-  /**
-     When we temporarily disable I_S (because of Opt_trace_disable_I_S, or
-     because we are entering a structure belonging to a not-traced optimizer
-     feature), we need to remember the pre-disabling state, to restore it
-     later.
-  */
-  Dynamic_array<enum enum_support_I_S> stack_of_values_of_support_I_S;
-
-  /**
-     Temporarily disables I_S. This is private because only our friend
-     Opt_trace_disable_I_S is trusted enough to use it.
-     @retval false ok
-     @retval true  error (function had no effect, no disabling was done)
-  */
-  bool disable_I_S_for_this_and_children()
-  {
-    if (unlikely(stack_of_values_of_support_I_S.append(support_I_S)))
-    {
-      /*
-        Note that append() above calls my_error() if it fails, so user is
-        informed.
-      */
-      return true;
-    }
-    support_I_S= NO_FOR_THIS;
-    ++glob_recursive_disable_I_S;
-    return false;
-  }
-  /**
-     Restores I_S support to what it was before the previous successful call
-     to disable_I_S_for_this_and_children().
-     @note we said "_successful_": indeed if we failed to append() to the
-     dynamic array before, a pop() now would be wrong: it would pop a wrong
-     cell, or if no cell, would dereference a NULL pointer (@sa pop_dynamic())
-     and crash.
-  */
-  void restore_I_S()
-  {
-    support_I_S= stack_of_values_of_support_I_S.pop();
-    --glob_recursive_disable_I_S;
-  }
-
   /// A wrapper of class String, for storing query or trace.
   class Buffer
   {
@@ -281,8 +255,6 @@ private:
 
   Buffer trace_buffer;                    ///< Where the trace is accumulated
   Buffer query_buffer;                    ///< Where the original query is put
-
-  friend class Opt_trace_disable_I_S;
 };
 
 
@@ -484,10 +456,8 @@ const char *Opt_trace_struct::check_key(
 
 // Implementation of Opt_trace_stmt class
 
-Opt_trace_stmt::Opt_trace_stmt(Opt_trace_context *ctx_arg,
-                               enum enum_support_I_S support_I_S_arg):
-  ended(false), support_I_S(support_I_S_arg), ctx(ctx_arg),
-  current_struct(NULL)
+Opt_trace_stmt::Opt_trace_stmt(Opt_trace_context *ctx_arg) :
+  ended(false), I_S_disabled(0), ctx(ctx_arg), current_struct(NULL)
 {
   // Trace is always in UTF8, it's "all" that JSON accepts
   trace_buffer.set_charset(system_charset_info);
@@ -498,7 +468,6 @@ Opt_trace_stmt::Opt_trace_stmt(Opt_trace
 void Opt_trace_stmt::end()
 {
   DBUG_ASSERT(stack_of_current_structs.elements() == 0);
-  DBUG_ASSERT(stack_of_values_of_support_I_S.elements() == 0);
   ended= true;
   /*
     Because allocation is done in big chunks, buffer->Ptr[str_length]
@@ -536,7 +505,7 @@ bool Opt_trace_stmt::set_query(const cha
   // Should be called only once per statement.
   DBUG_ASSERT(query_buffer.ptr() == NULL);
   query_buffer.set_charset(charset);
-  if (support_I_S != YES_FOR_THIS)
+  if (!support_I_S())
   {
     /*
       Query won't be read, don't waste resources storing it. Still we have set
@@ -565,7 +534,7 @@ bool Opt_trace_stmt::open_struct(const c
                                  char opening_bracket)
 {
   bool has_disabled_I_S= false;
-  if (support_I_S == YES_FOR_THIS)
+  if (support_I_S())
   {
     if (!ctx->feature_enabled(feature))
     {
@@ -586,10 +555,7 @@ bool Opt_trace_stmt::open_struct(const c
         else
           current_struct->add_alnum("...");
       }
-      if (unlikely(stack_of_values_of_support_I_S.append(support_I_S)))
-        goto err;
-      support_I_S= NO_FOR_THIS;
-      ++glob_recursive_disable_I_S;
+      ctx->disable_I_S_for_this_and_children();
       has_disabled_I_S= true;
     }
     else
@@ -624,13 +590,11 @@ bool Opt_trace_stmt::open_struct(const c
     DBUG_EXECUTE_IF("opt_trace_oom_in_open_struct",
                     DBUG_SET("-d,simulate_out_of_memory"););
     if (unlikely(rc))
-      goto err;
+      return true;
   }
   current_struct= struc;
   current_struct_empty= true;
   return false;
-err:
-  return true;
 }
 
 
@@ -641,7 +605,7 @@ void Opt_trace_stmt::close_struct(const 
   current_struct= d.current_struct;
   current_struct_empty= d.current_struct_empty;
 
-  if (support_I_S == YES_FOR_THIS)
+  if (support_I_S())
   {
     next_line();
     trace_buffer.append(closing_bracket);
@@ -653,16 +617,13 @@ void Opt_trace_stmt::close_struct(const 
     }
   }
   if (d.has_disabled_I_S)
-  {
-    support_I_S= stack_of_values_of_support_I_S.pop();
-    --glob_recursive_disable_I_S;
-  }
+    ctx->restore_I_S();
 }
 
 
 void Opt_trace_stmt::separator()
 {
-  DBUG_ASSERT(support_I_S == YES_FOR_THIS);
+  DBUG_ASSERT(support_I_S());
   // Put a comma first, if we have already written an object at this level.
   if (current_struct != NULL)
   {
@@ -703,7 +664,7 @@ void Opt_trace_stmt::next_line()
 void Opt_trace_stmt::add(const char *key, const char *val, size_t val_length,
                          bool quotes, bool escape)
 {
-  if (support_I_S != YES_FOR_THIS)
+  if (!support_I_S())
     return;
   separator();
   if (key != NULL)
@@ -731,7 +692,7 @@ void Opt_trace_stmt::add(const char *key
 void Opt_trace_stmt::syntax_error(const char *key)
 {
   DBUG_PRINT("opt", ("syntax error key: %s", key));
-  DBUG_ASSERT(support_I_S == YES_FOR_THIS);
+  DBUG_ASSERT(support_I_S());
 #ifndef DBUG_OFF
   bool no_assert_on_syntax_error= false;
   DBUG_EXECUTE_IF("opt_trace_no_assert_on_syntax_error",
@@ -982,53 +943,74 @@ Opt_trace_context::default_features=
                                    Opt_trace_context::REPEATED_SUBSELECT);
 
 
-Opt_trace_context::Opt_trace_context() :
-  current_stmt_in_gen(NULL), since_offset_0(0)
-{}
-
-
 Opt_trace_context::~Opt_trace_context()
 {
-  /* There may well be some few ended traces left: */
-  purge_stmts(true);
-  /* All should have moved to 'del' list: */
-  DBUG_ASSERT(all_stmts_for_I_S.elements() == 0);
-  /* All of 'del' list should have been deleted: */
-  DBUG_ASSERT(all_stmts_to_del.elements() == 0);
+  DBUG_ASSERT(I_S_disabled == 0);
+  if (unlikely(impl != NULL))
+  {
+    /* There may well be some few ended traces left: */
+    purge_stmts(true);
+    /* All should have moved to 'del' list: */
+    DBUG_ASSERT(impl->all_stmts_for_I_S.elements() == 0);
+    /* All of 'del' list should have been deleted: */
+    DBUG_ASSERT(impl->all_stmts_to_del.elements() == 0);
+    delete impl;
+  }
 }
 
 
-bool Opt_trace_context::start(enum enum_support_I_S support_I_S_arg,
+bool Opt_trace_context::start(bool support_I_S_arg,
+                              bool support_dbug,
                               bool end_marker_arg, bool one_line_arg,
                               long offset_arg, long limit_arg,
                               ulong max_mem_size_arg, ulonglong features_arg)
 {
-  /*
-    Decide whether to-be-created trace should support I_S.
-    Sometimes the parent rules, sometimes not. If the parent
-    trace was disabled due to being "before offset" (case of a positive
-    offset), we don't want the new trace to inherit and be disabled (for
-    example it may be 'after offset').
-  */
-  enum enum_support_I_S new_stmt_support_I_S;
-  bool rc;
   DBUG_ENTER("Opt_trace_context::start");
 
-  /*
-    Tracing may already be started when we come here, for example if we are
-    starting execution of a sub-statement of a stored routine (CALL has
-    tracing enabled too).
-  */
-  if (glob_recursive_disable_I_S != 0)
+  if (I_S_disabled != 0)
+  {
+    DBUG_PRINT("opt", ("opt_trace is already disabled"));
+    support_I_S_arg= false;
+  }
+
+  // Decide on optimizations possible to realize the requested support.
+  if (!support_I_S_arg && !support_dbug)
   {
+    // The statement will not do tracing.
+    if (likely(impl == NULL) || impl->current_stmt_in_gen == NULL)
+    {
+      /*
+        This should be the most commonly taken branch in a release binary,
+        when the connection rarely has optimizer tracing runtime-enabled.
+        It's thus important that it's optimized: we can short-cut the creation
+        and starting of Opt_trace_stmt, unlike in the next "else" branch.
+      */
+      DBUG_RETURN(false);
+    }
     /*
-      Tracing is strictly disabled by the caller. Thus don't listen to any
-      request from the user for enabling tracing or changing settings (offset
-      etc). Doing otherwise would surely bring a problem.
+      If we come here, there is a parent statement which has a trace.
+      Imagine that we don't create a trace for the child statement
+      here. Then trace structures of the child will be accidentally attached
+      to the parent's trace (as it is still 'current_stmt_in_gen', which
+      constructors of Opt_trace_struct will use); thus the child's trace
+      will be visible (as a chunk of the parent's trace). That would be
+      incorrect. To avoid this, we create a trace for the child but with I_S
+      output disabled; this changes 'current_stmt_in_gen', thus this child's
+      trace structures will be attached to the child's trace and thus not be
+      visible.
     */
-    new_stmt_support_I_S= NO_FOR_THIS;
   }
-  else
+
+  DBUG_EXECUTE_IF("no_new_opt_trace_stmt", DBUG_ASSERT(0););
+
+  if (impl == NULL)
+    impl= new Opt_trace_context_impl(); // OOM-unsafe new.
+
+  /*
+    If tracing is disabled by some caller, then don't change settings (offset
+    etc). Doing otherwise would surely bring a problem.
+  */
+  if (I_S_disabled == 0)
   {
     /*
       Here we allow a stored routine's sub-statement to enable/disable
@@ -1036,83 +1018,88 @@ bool Opt_trace_context::start(enum enum_
       be some 'SET OPTIMIZER_TRACE="enabled=[on|off]"' to trace only certain
       sub-statements.
     */
-    new_stmt_support_I_S= support_I_S_arg;
-    end_marker= end_marker_arg;
-    one_line= one_line_arg;
-    offset= offset_arg;
-    limit= limit_arg;
-    max_mem_size= max_mem_size_arg;
+    impl->end_marker= end_marker_arg;
+    impl->one_line= one_line_arg;
+    impl->offset= offset_arg;
+    impl->limit= limit_arg;
+    impl->max_mem_size= max_mem_size_arg;
     // MISC always on
-    features= Opt_trace_context::feature_value(features_arg |
-                                               Opt_trace_context::MISC);
+    impl->features= Opt_trace_context::feature_value(features_arg |
+                                                     Opt_trace_context::MISC);
   }
-  if (new_stmt_support_I_S == YES_FOR_THIS && offset >= 0)
+  if (support_I_S_arg && impl->offset >= 0)
   {
     /* If outside the offset/limit window, no need to support I_S */
-    if (since_offset_0 < offset)
+    if (impl->since_offset_0 < impl->offset)
     {
       DBUG_PRINT("opt", ("disabled: since_offset_0(%ld) < offset(%ld)",
-                         since_offset_0, offset));
-      new_stmt_support_I_S= NO_FOR_THIS;
+                         impl->since_offset_0, impl->offset));
+      support_I_S_arg= false;
     }
-    else if (since_offset_0 >= (offset + limit))
+    else if (impl->since_offset_0 >= (impl->offset + impl->limit))
     {
       DBUG_PRINT("opt", ("disabled: since_offset_0(%ld) >="
                          " offset(%ld) + limit(%ld)",
-                         since_offset_0, offset, limit));
-      new_stmt_support_I_S= NO_FOR_THIS;
+                         impl->since_offset_0, impl->offset, impl->limit));
+      support_I_S_arg= false;
     }
-    since_offset_0++;
+    impl->since_offset_0++;
   }
+  {
+    /*
+      OOM-unsafe "new".
+      We don't allocate it in THD's MEM_ROOT as it must survive until a next
+      statement (SELECT) reads the trace.
+    */
+    Opt_trace_stmt *stmt= new Opt_trace_stmt(this);
 
-  // OOM-unsafe "new".
-  Opt_trace_stmt *stmt= new Opt_trace_stmt(this, new_stmt_support_I_S);
-
-  DBUG_PRINT("opt",("new stmt %p support_I_S %d", stmt,
-                    new_stmt_support_I_S));
+    DBUG_PRINT("opt",("new stmt %p support_I_S %d", stmt, support_I_S_arg));
 
-  if (unlikely(stack_of_current_stmts.append(current_stmt_in_gen)))
-  {
-    // append() above called my_error()
-    goto err;
-  }
+    if (unlikely(impl->stack_of_current_stmts
+                 .append(impl->current_stmt_in_gen)))
+      goto err;                            // append() above called my_error()
 
-  if (new_stmt_support_I_S == YES_FOR_THIS)
-    rc= all_stmts_for_I_S.append(stmt);
-  else
-  {
     /*
       If sending only to DBUG, don't show to the user.
       Same if tracing was temporarily disabled at higher layers with
       Opt_trace_disable_I_S.
       So we just link it to the 'del' list for purging when ended.
     */
-    rc= all_stmts_to_del.append(stmt);
-  }
+    Dynamic_array<Opt_trace_stmt *> *list;
+    if (support_I_S_arg)
+      list= &impl->all_stmts_for_I_S;
+    else
+    {
+      stmt->disable_I_S();           // no need to fill a not-shown JSON trace
+      list= &impl->all_stmts_to_del;
+    }
 
-  if (unlikely(rc))
-    goto err;
+    if (unlikely(list->append(stmt)))
+        goto err;
 
-  current_stmt_in_gen= stmt;
+    impl->current_stmt_in_gen= stmt;
 
-  // As we just added one trace, maybe the previous ones are unneeded now
-  purge_stmts(false);
-  // This purge may have freed space, compute max allowed size:
-  stmt->set_allowed_mem_size(allowed_mem_size_for_current_stmt());
-  DBUG_RETURN(false);
+    // As we just added one trace, maybe the previous ones are unneeded now
+    purge_stmts(false);
+    // This purge may have freed space, compute max allowed size:
+    stmt->set_allowed_mem_size(allowed_mem_size_for_current_stmt());
+    DBUG_RETURN(false);
 err:
-  delete stmt;
-  DBUG_RETURN(true);
+    delete stmt;
+    DBUG_RETURN(true);
+  }
 }
 
 
 void Opt_trace_context::end()
 {
-  if (current_stmt_in_gen != NULL)
+  if (likely(impl == NULL))
+    return;
+  if (impl->current_stmt_in_gen != NULL)
   {
-    current_stmt_in_gen->end();
-    Opt_trace_stmt * const parent= stack_of_current_stmts.pop();
-    current_stmt_in_gen= parent;
+    impl->current_stmt_in_gen->end();
+    Opt_trace_stmt * const parent= impl->stack_of_current_stmts.pop();
+    impl->current_stmt_in_gen= parent;
     if (parent != NULL)
     {
       /*
@@ -1121,35 +1108,35 @@ void Opt_trace_context::end()
       */
       parent->set_allowed_mem_size(allowed_mem_size_for_current_stmt());
     }
+    /*
+      Purge again. Indeed when we are here, compared to the previous start()
+      we have one more ended trace, so can potentially free more. Consider
+      offset=-1 and:
+         top_stmt, started
+           sub_stmt, starts: can't free top_stmt as it is not ended yet
+           sub_stmt, ends: won't free sub_stmt (as user will want to see it),
+           can't free top_stmt as not ended yet
+         top_stmt, continued
+         top_stmt, ends: free top_stmt as it's not last and is ended, keep only
+         sub_stmt.
+      Still the purge is done in ::start() too, as an optimization, for this
+      case:
+         sub_stmt, started
+         sub_stmt, ended
+         sub_stmt, starts: can free above sub_stmt, will save memory compared
+         to free-ing it only when the new sub_stmt ends.
+    */
+    purge_stmts(false);
   }
   else
-    DBUG_ASSERT(stack_of_current_stmts.elements() == 0);
-  /*
-    Purge again. Indeed when we are here, compared to the previous start() we
-    have one more ended trace, so can potentially free more. Consider
-    offset=-1 and:
-       top_stmt, started
-         sub_stmt, starts: can't free top_stmt as it is not ended yet
-         sub_stmt, ends: won't free sub_stmt (as user will want to see it),
-         can't free top_stmt as not ended yet
-       top_stmt, continued
-       top_stmt, ends: free top_stmt as it's not last and is ended, keep only
-       sub_stmt.
-    Still the purge is done in ::start() too, as an optimization, for this
-    case:
-       sub_stmt, started
-       sub_stmt, ended
-       sub_stmt, starts: can free above sub_stmt, will save memory compared to
-       free-ing it only when the new sub_stmt ends.
-  */
-  purge_stmts(false);
+    DBUG_ASSERT(impl->stack_of_current_stmts.elements() == 0);
 }
 
 
 void Opt_trace_context::purge_stmts(bool purge_all)
 {
   DBUG_ENTER("Opt_trace_context::purge_stmts");
-  if (!purge_all && offset >= 0)
+  if (!purge_all && impl->offset >= 0)
   {
     /* This case is managed in @c Opt_trace_context::start() */
     DBUG_VOID_RETURN;
@@ -1163,9 +1150,10 @@ void Opt_trace_context::purge_stmts(bool
     incremented to 1, which is past the array's end, so break out of the loop:
     cell 0 (old cell 1) was not deleted, wrong).
   */
-  for (idx= (all_stmts_for_I_S.elements() - 1) ; idx >= 0 ; idx--)
+  for (idx= (impl->all_stmts_for_I_S.elements() - 1) ; idx >= 0 ; idx--)
   {
-    if (!purge_all && ((all_stmts_for_I_S.elements() + offset) <= idx))
+    if (!purge_all &&
+        ((impl->all_stmts_for_I_S.elements() + impl->offset) <= idx))
     {
       /* OFFSET mandates that this trace should be kept; move to previous */
     }
@@ -1177,8 +1165,9 @@ void Opt_trace_context::purge_stmts(bool
       */
       DBUG_EXECUTE_IF("opt_trace_oom_in_purge",
                       DBUG_SET("+d,simulate_out_of_memory"););
-      if (likely(!all_stmts_to_del.append(all_stmts_for_I_S.at(idx))))
-        all_stmts_for_I_S.del(idx);
+      if (likely(!impl->all_stmts_to_del
+                 .append(impl->all_stmts_for_I_S.at(idx))))
+        impl->all_stmts_for_I_S.del(idx);
       else
       {
         /*
@@ -1192,9 +1181,9 @@ void Opt_trace_context::purge_stmts(bool
     }
   }
   /* Examine list of "to be freed" traces and free what can be */
-  for (idx= (all_stmts_to_del.elements() - 1) ; idx >= 0 ; idx--)
+  for (idx= (impl->all_stmts_to_del.elements() - 1) ; idx >= 0 ; idx--)
   {
-    Opt_trace_stmt *stmt= all_stmts_to_del.at(idx);
+    Opt_trace_stmt *stmt= impl->all_stmts_to_del.at(idx);
 #ifndef DBUG_OFF
     bool skip_del= false;
     DBUG_EXECUTE_IF("opt_trace_oom_in_purge", skip_del= true;);
@@ -1229,7 +1218,7 @@ void Opt_trace_context::purge_stmts(bool
     }
     else
     {
-      all_stmts_to_del.del(idx);
+      impl->all_stmts_to_del.del(idx);
       delete stmt;
     }
   }
@@ -1239,46 +1228,64 @@ void Opt_trace_context::purge_stmts(bool
 
 size_t Opt_trace_context::allowed_mem_size_for_current_stmt() const
 {
-  DBUG_ENTER("Opt_trace_context::allowed_mem_size");
   size_t mem_size= 0;
   int idx;
-  for (idx= (all_stmts_for_I_S.elements() - 1) ; idx >= 0 ; idx--)
+  for (idx= (impl->all_stmts_for_I_S.elements() - 1) ; idx >= 0 ; idx--)
   {
-    const Opt_trace_stmt *stmt= all_stmts_for_I_S.at(idx);
+    const Opt_trace_stmt *stmt= impl->all_stmts_for_I_S.at(idx);
     mem_size+= stmt->alloced_length();
   }
   // Even to-be-deleted traces use memory, so consider them in sum
-  for (idx= (all_stmts_to_del.elements() - 1) ; idx >= 0 ; idx--)
+  for (idx= (impl->all_stmts_to_del.elements() - 1) ; idx >= 0 ; idx--)
   {
-    const Opt_trace_stmt *stmt= all_stmts_to_del.at(idx);
+    const Opt_trace_stmt *stmt= impl->all_stmts_to_del.at(idx);
     mem_size+= stmt->alloced_length();
   }
   /* The current statement is in exactly one of the two lists above */
-  mem_size-= current_stmt_in_gen->alloced_length();
-  size_t rc= (mem_size <= max_mem_size) ? (max_mem_size - mem_size) : 0;
+  mem_size-= impl->current_stmt_in_gen->alloced_length();
+  size_t rc= (mem_size <= impl->max_mem_size) ?
+    (impl->max_mem_size - mem_size) : 0;
   DBUG_PRINT("opt", ("rc %llu max_mem_size %llu",
-                     (ulonglong)rc, (ulonglong)max_mem_size));
-  DBUG_RETURN(rc);
+                     (ulonglong)rc, (ulonglong)impl->max_mem_size));
+  return rc;
 }
 
 
 bool Opt_trace_context::set_query(const char *query, size_t length,
                                   const CHARSET_INFO *charset)
 {
-  return current_stmt_in_gen->set_query(query, length, charset);
+  return impl->current_stmt_in_gen->set_query(query, length, charset);
 }
 
 
 const char *Opt_trace_context::get_tail(size_t size)
 {
-  return current_stmt_in_gen->trace_buffer_tail(size);
+  return (impl == NULL) ? "" :
+    impl->current_stmt_in_gen->trace_buffer_tail(size);
 }
 
 
 void Opt_trace_context::reset()
 {
+  if (impl == NULL)
+    return;
   purge_stmts(true);
-  since_offset_0= 0;
+  impl->since_offset_0= 0;
+}
+
+
+void Opt_trace_context::
+Opt_trace_context_impl::disable_I_S_for_this_and_children()
+{
+  if (current_stmt_in_gen != NULL)
+    current_stmt_in_gen->disable_I_S();
+}
+
+
+void Opt_trace_context::Opt_trace_context_impl::restore_I_S()
+{
+  if (current_stmt_in_gen != NULL)
+    current_stmt_in_gen->restore_I_S();
 }
 
 
@@ -1286,19 +1293,20 @@ const Opt_trace_stmt
 *Opt_trace_context::get_next_stmt_for_I_S(long *got_so_far) const
 {
   const Opt_trace_stmt *p;
-  if (*got_so_far >= limit)
-    p= NULL;
-  else if (*got_so_far >= all_stmts_for_I_S.elements())
+  if ((impl == NULL) ||
+      (*got_so_far >= impl->limit) ||
+      (*got_so_far >= impl->all_stmts_for_I_S.elements()))
     p= NULL;
   else
   {
-    p= all_stmts_for_I_S.at(*got_so_far);
+    p= impl->all_stmts_for_I_S.at(*got_so_far);
     DBUG_ASSERT(p != NULL);
     (*got_so_far)++;
   }
   return p;
 }
 
+
 // Implementation of class Opt_trace_iterator
 
 Opt_trace_iterator::Opt_trace_iterator(Opt_trace_context *ctx_arg) :
@@ -1327,23 +1335,16 @@ Opt_trace_disable_I_S::Opt_trace_disable
 {
   if (disable)
   {
-    if (ctx_arg != NULL)
-    {
-      stmt= ctx_arg->get_current_stmt_in_gen();
-      if (stmt != NULL)
-        if (unlikely(stmt->disable_I_S_for_this_and_children()))
-          disable= false; // failed to disable: be a dummy object
-    }
-    else
-      stmt= NULL;
+    ctx= ctx_arg;
+    ctx->disable_I_S_for_this_and_children();
   }
 }
 
 
 Opt_trace_disable_I_S::~Opt_trace_disable_I_S()
 {
-  if (disable && (stmt != NULL))
-    stmt->restore_I_S();
+  if (disable)
+    ctx->restore_I_S();
 }
 
 #endif // OPTIMIZER_TRACE

=== modified file 'sql/opt_trace.h'
--- a/sql/opt_trace.h	2011-03-24 10:33:38 +0000
+++ b/sql/opt_trace.h	2011-03-31 13:41:41 +0000
@@ -26,8 +26,6 @@ struct st_schema_table;
 struct TABLE_LIST;
 struct TABLE;
 
-extern int glob_recursive_disable_I_S;
-
 /**
    @file
    API for the Optimizer trace (WL#5257)
@@ -337,26 +335,14 @@ extern int glob_recursive_disable_I_S;
   As we don't support exceptions, we need new(nothrow) in order to be able to
   handle OOM.
   But "nothrow" is in the standard C++ library, which we don't link with.
-  So we have two calls to "new" (one to create Opt_trace_context, one to
+  So we have two calls to "new" (one to create Opt_trace_context_impl, one to
   create Opt_trace_stmt), which may crash. When we have nothrow we should
-  change them new(nothrow).
+  change them to new(nothrow).
 */
 
 class Opt_trace_struct;
 class Opt_trace_stmt;           // implementation detail local to opt_trace.cc
 
-/**
-   The different ways a trace output can be sent to
-   INFORMATION_SCHEMA.OPTIMIZER_TRACE.
-   Note that a trace may also go to DBUG, independently of the values below.
-*/
-enum enum_support_I_S
-{
-  YES_FOR_THIS= 0,                       ///< sent to I_S
-  NO_FOR_THIS= 1,                        ///< not sent, undefined for children
-  NO_FOR_THIS_AND_CHILDREN= 2            ///< not sent, and children not sent
-};
-
 
 /**
   @class Opt_trace_context
@@ -407,12 +393,16 @@ enum enum_support_I_S
 class Opt_trace_context
 {
 public:
-  Opt_trace_context();
+
+  Opt_trace_context() : impl(NULL), I_S_disabled(0) {}
   ~Opt_trace_context();
 
   /**
      Starts a new trace.
-     @param  support_I_S      Should trace be in information_schema
+     @param  support_I_S      Whether this statement should have its trace in
+                              information_schema
+     @param  need_it_for_dbug Whether this statement should have its trace in
+                              the dbug log (--debug)
      @param  end_marker       For a key/(object|array) pair, should the key be
                               repeated in a comment when the object|array
                               closes? like
@@ -439,15 +429,18 @@ public:
                               destructor is permitted on it; any other
                               member function has undefined effects.
   */
-  bool start(enum enum_support_I_S support_I_S,
+  bool start(bool support_I_S,
+             bool need_it_for_dbug,
              bool end_marker, bool one_line,
              long offset, long limit, ulong max_mem_size,
              ulonglong features);
   /// Ends the current (=open, unfinished, being-generated) trace.
   void end();
 
+
   /// Returns whether there is a current trace
-  bool is_started() const { return current_stmt_in_gen != NULL; }
+  bool is_started() const
+  { return unlikely(impl != NULL) && impl->current_stmt_in_gen != NULL; }
 
   /**
      Set the "original" query (not transformed, as sent by client) for the
@@ -476,9 +469,9 @@ public:
   void reset();
 
   /// @sa parameters of Opt_trace_context::start()
-  bool get_end_marker() const { return end_marker; }
+  bool get_end_marker() const { return impl->end_marker; }
   /// @sa parameters of Opt_trace_context::start()
-  bool get_one_line() const { return one_line; }
+  bool get_one_line() const { return impl->one_line; }
 
   /**
      Names of flags for @@@@optimizer_trace variable of @c sys_vars.cc :
@@ -533,7 +526,7 @@ public:
      @param  f  feature
   */
   bool feature_enabled (Opt_trace_context::feature_value f) const
-  { return features & f; }
+  { return unlikely(impl != NULL) && (impl->features & f); }
 
   /**
      Opt_trace_struct is passed Opt_trace_context*, and needs to know
@@ -544,7 +537,8 @@ public:
      opt_trace.h is ignorant of the layout of the pointed instance so cannot
      use it).
   */
-  Opt_trace_stmt *get_current_stmt_in_gen() { return current_stmt_in_gen; }
+  Opt_trace_stmt *get_current_stmt_in_gen()
+  { return impl->current_stmt_in_gen; }
 
   /**
      @returns the next statement to show in I_S.
@@ -554,78 +548,126 @@ public:
    */
   const Opt_trace_stmt *get_next_stmt_for_I_S(long *got_so_far) const;
 
+  /// Temporarily disables I_S for this trace and its children.
+  void disable_I_S_for_this_and_children()
+  {
+    ++I_S_disabled;
+    if (unlikely(impl != NULL))
+      impl->disable_I_S_for_this_and_children();
+  }
+
+  /**
+     Restores I_S support to what it was before the previous call to
+     disable_I_S_for_this_and_children().
+  */
+  void restore_I_S()
+  {
+    --I_S_disabled;
+    if (unlikely(impl != NULL))
+      impl->restore_I_S();
+  }
+
 private:
 
   /**
-     Trace which is currently being generated, where structures are being
-     added. "in_gen" stands for "in generation", being-generated.
+     To have the smallest impact on THD's size, most of the implementation is
+     moved to a separate class Opt_trace_context_impl which is instantiated on
+     the heap when really needed. So if a connection never sets
+     @@@@optimizer_trace to "enabled=on" and does not use --debug, this heap
+     allocation never happens.
+     This class is declared here so that frequently called functions like
+     Opt_trace_context::is_started() can be inlined.
+  */
+  class Opt_trace_context_impl
+  {
+  public:
+    Opt_trace_context_impl() : current_stmt_in_gen(NULL),
+      features(feature_value(0)), offset(0), limit(0), since_offset_0(0)
+    {}
+
+    void disable_I_S_for_this_and_children();
+    void restore_I_S();
+
+    /**
+       Trace which is currently being generated, where structures are being
+       added. "in_gen" stands for "in generation", being-generated.
 
-     In simple cases it is equal to the last element of array
-     all_stmts_for_I_S. But it can be prior to it, for example when calling a
-     stored routine:
-@verbatim
-     CALL statement starts executing
-       create trace of CALL (call it "trace #1")
-       add structure to trace #1
-       add structure to trace #1
-       First sub-statement executing
-         create trace of sub-statement (call it "trace #2")
-         add structure to trace #2
-         add structure to trace #2
-       First sub-statement ends
-       add structure to trace #1
-@endverbatim
-     In the beginning, the CALL statement's trace is the newest and current;
-     when the first sub-statement is executing, that sub-statement's trace is
-     the newest and current; when the first sub-statement ends, it is still
-     the newest but it's not the current anymore: the current is then again
-     the CALL's one, where structures will be added, until a second
-     sub-statement is executed.
-     Another case is when the current statement sends only to DBUG:
-     all_stmts_for_I_S lists only traces shown in OPTIMIZER_TRACE.
-  */
-  Opt_trace_stmt *current_stmt_in_gen;
-
-  /**
-     To keep track of what is the current statement, as execution goes into a
-     sub-statement, and back to the upper statement, we have a stack of
-     successive values of current_stmt_in_gen:
-     when in a statement we enter a substatement (a new trace), we push the
-     statement's trace on the stack and change current_stmt_in_gen to the
-     substatement's trace; when leaving the substatement we pop from the stack
-     and set current_stmt_in_gen to the popped value.
-  */
-  Dynamic_array<Opt_trace_stmt *> stack_of_current_stmts;
-
-  /**
-     List of remembered traces for putting into the OPTIMIZER_TRACE
-     table. Element 0 is the one created first, will be first row of
-     OPTIMIZER_TRACE table. The array structure fullfills those needs:
-     - to output traces "oldest first" in OPTIMIZER_TRACE
-     - to preserve traces "newest first" when @@@@optimizer_trace_offset<0
-     - to delete a trace in the middle of the list when it is permanently out
+       In simple cases it is equal to the last element of array
+       all_stmts_for_I_S. But it can be prior to it, for example when calling a
+       stored routine:
+@verbatim
+       CALL statement starts executing
+         create trace of CALL (call it "trace #1")
+         add structure to trace #1
+         add structure to trace #1
+         First sub-statement executing
+           create trace of sub-statement (call it "trace #2")
+           add structure to trace #2
+           add structure to trace #2
+         First sub-statement ends
+         add structure to trace #1
+@endverbatim
+       In the beginning, the CALL statement's trace is the newest and current;
+       when the first sub-statement is executing, that sub-statement's trace
+       is the newest and current; when the first sub-statement ends, it is
+       still the newest but it's not the current anymore: the current is then
+       again the CALL's one, where structures will be added, until a second
+       sub-statement is executed.
+       Another case is when the current statement sends only to DBUG:
+       all_stmts_for_I_S lists only traces shown in OPTIMIZER_TRACE.
+    */
+    Opt_trace_stmt *current_stmt_in_gen;
+
+    /**
+       To keep track of which statement is current, as execution goes into a
+       sub-statement and back to the upper statement, we keep a stack of
+       successive values of current_stmt_in_gen:
+       when in a statement we enter a substatement (a new trace), we push the
+       statement's trace on the stack and change current_stmt_in_gen to the
+       substatement's trace; when leaving the substatement we pop from the stack
+       and set current_stmt_in_gen to the popped value.
+    */
+    Dynamic_array<Opt_trace_stmt *> stack_of_current_stmts;
+
+    /**
+       List of remembered traces for putting into the OPTIMIZER_TRACE
+       table. Element 0 is the one created first; it will be the first row of
+       the OPTIMIZER_TRACE table. The array structure fulfills these needs:
+       - to output traces "oldest first" in OPTIMIZER_TRACE
+       - to preserve traces "newest first" when @@@@optimizer_trace_offset<0
+       - to delete a trace in the middle of the list when it is permanently out
        of the offset/limit showable window.
-  */
-  Dynamic_array<Opt_trace_stmt *> all_stmts_for_I_S;
-  /**
-     List of traces which are unneeded because of OFFSET/LIMIT, and scheduled
-     for deletion from memory.
-  */
-  Dynamic_array<Opt_trace_stmt *> all_stmts_to_del;
+    */
+    Dynamic_array<Opt_trace_stmt *> all_stmts_for_I_S;
+    /**
+       List of traces which are unneeded because of OFFSET/LIMIT, and scheduled
+       for deletion from memory.
+    */
+    Dynamic_array<Opt_trace_stmt *> all_stmts_to_del;
 
-  bool end_marker;          ///< copy of parameter of Opt_trace_context::start
-  bool one_line;            ///< copy of parameter of Opt_trace_context::start
-  /// copy of parameter of Opt_trace_context::start
-  Opt_trace_context::feature_value features;
-  long offset;              ///< copy of parameter of Opt_trace_context::start
-  long limit;               ///< copy of parameter of Opt_trace_context::start
-  size_t max_mem_size;      ///< copy of parameter of Opt_trace_context::start
+    bool end_marker;          ///< copy of parameter of Opt_trace_context::start
+    bool one_line;            ///< copy of parameter of Opt_trace_context::start
+    /// copy of parameter of Opt_trace_context::start
+    Opt_trace_context::feature_value features;
+    long offset;              ///< copy of parameter of Opt_trace_context::start
+    long limit;               ///< copy of parameter of Opt_trace_context::start
+    size_t max_mem_size;      ///< copy of parameter of Opt_trace_context::start
+
+    /**
+       Number of statements traced so far since "offset 0", for comparison with
+       OFFSET and LIMIT, when OFFSET >= 0.
+    */
+    long since_offset_0;
+  };
+
+  Opt_trace_context_impl *impl;
 
   /**
-     Number of statements traced so far since "offset 0", for comparison with
-     OFFSET and LIMIT, when OFFSET >= 0.
+    Non-zero means that any to-be-created statement's trace should not be
+    shown in information_schema. This applies to the next statements, their
+    substatements, etc.
   */
-  long since_offset_0;
+  int I_S_disabled;
 
   /**
      Find and delete unneeded traces.
@@ -756,7 +798,7 @@ protected:
     started(false)
   {
     // A first inlined test
-    if (unlikely(ctx_arg != NULL) && ctx_arg->is_started())
+    if (unlikely(ctx_arg->is_started()))
     {
       // Tracing enabled: must fully initialize the structure.
       do_construct(ctx_arg, requires_key_arg, key, feature);
@@ -1149,7 +1191,7 @@ public:
 
 private:
   bool disable;              ///< whether this instance really does disabling.
-  Opt_trace_stmt *stmt;      ///< statement where disabling happens
+  Opt_trace_context *ctx;
   Opt_trace_disable_I_S(const Opt_trace_disable_I_S&); // not defined
   Opt_trace_disable_I_S& operator=(const Opt_trace_disable_I_S&);//not defined
 };
@@ -1163,27 +1205,36 @@ private:
 //@{
 
 /**
-  Start tracing a THD's actions (generally at a statement's start).
-  @param  thd  the THD
-  @param  tbl  list of tables read/written by the statement.
+  Instantiate this class to start tracing a THD's actions (generally at a
+  statement's start), and to set the "original" query (not transformed, as
+  sent by client) for the new trace. The destructor will end the trace.
+
+  @param  thd          the THD
+  @param  tbl          list of tables read/written by the statement.
   @param  sql_command  SQL command being prepared or executed
-  @returns whether this function decided to trace (and thus the corresponding
-  opt_trace_end() should end the trace).
-  @note if tracing was already started by a top statement above the present
-  sub-statement in the call chain, and this function decides to trace
-  (respectively not trace) the sub-statement, it returns "true"
-  (resp. false). Each sub-statement is responsible for ending the trace which it
-  has started.
+  @param  query        query
+  @param  length       query's length
+  @param  charset      charset which was used to encode this query
+
+  @note Each sub-statement is responsible for ending the trace which it
+  has started; this class achieves this by keeping a little state inside
+  (two booleans).
 */
-bool opt_trace_start(THD *thd, const TABLE_LIST *tbl,
-                     enum enum_sql_command sql_command);
+class Opt_trace_start
+{
+public:
+  Opt_trace_start(THD *thd, const TABLE_LIST *tbl,
+                  enum enum_sql_command sql_command,
+                  const char *query, size_t query_length,
+                  const CHARSET_INFO *query_charset);
+  ~Opt_trace_start();
+private:
+  Opt_trace_context * const ctx;
+  bool error; ///< whether trace start() had an error
+  /// whether trace start() disabled tracing for child statements
+  bool has_disabled_I_S_in_ctx;
+};
 
-/**
-  Stop tracing a THD's actions (generally at statement's end).
-  @param  thd  the THD
-  @param  started  whether this frame did tracing
-*/
-void opt_trace_end(THD *thd, bool started);
 
 class st_select_lex;
 /**
@@ -1202,19 +1253,6 @@ void opt_trace_print_expanded_query(THD 
 */
 void opt_trace_add_select_number(Opt_trace_struct *s,
                                  uint select_number);
-/**
-   Set the "original" query (not transformed, as sent by client) for the
-   current trace.
-   @param   trace    trace context
-   @param   query    query
-   @param   length   query's length
-   @param   charset  charset which was used to encode this query
-   @retval  false    ok
-   @retval  true     error
-*/
-bool opt_trace_set_query(Opt_trace_context *trace, const char *query,
-                         size_t query_length,
-                         const CHARSET_INFO *query_charset);
 
 /**
    Fills information_schema.OPTIMIZER_TRACE with rows (one per trace)
@@ -1316,11 +1354,17 @@ public:
   Opt_trace_disable_I_S(Opt_trace_context *ctx_arg, bool disable_arg) {}
 };
 
-#define opt_trace_start(thd, tbl, sql_command) (false)
-#define opt_trace_end(thd, started) do {} while (0)
+class Opt_trace_start
+{
+public:
+  Opt_trace_start(THD *thd, const TABLE_LIST *tbl,
+                  enum enum_sql_command sql_command,
+                  const char *query, size_t query_length,
+                  const CHARSET_INFO *query_charset) {}
+};
+
 #define opt_trace_print_expanded_query(thd, select_lex) do {} while (0)
 #define opt_trace_add_select_number(s, select_number) do {} while (0)
-#define opt_trace_set_query(trace,q,ql,cs) do {} while (0)
 
 #endif /* OPTIMIZER_TRACE */
 

=== modified file 'sql/opt_trace2server.cc'
--- a/sql/opt_trace2server.cc	2011-03-24 10:33:38 +0000
+++ b/sql/opt_trace2server.cc	2011-03-31 13:41:41 +0000
@@ -86,11 +86,13 @@ inline bool sql_command_can_be_traced(en
 
 } // namespace
 
-bool opt_trace_start(THD *thd, const TABLE_LIST *tbl,
-                     enum enum_sql_command sql_command)
+Opt_trace_start::Opt_trace_start(THD *thd, const TABLE_LIST *tbl,
+                                 enum enum_sql_command sql_command,
+                                 const char *query, size_t query_length,
+                                 const CHARSET_INFO *query_charset) :
+  ctx(&thd->opt_trace), has_disabled_I_S_in_ctx(false)
 {
   DBUG_ENTER("opt_trace_start");
-
   /*
     We need an optimizer trace:
     * if the user asked for it or
@@ -102,15 +104,13 @@ bool opt_trace_start(THD *thd, const TAB
       trace while reading it with SELECT).
   */
   const ulonglong var= thd->variables.optimizer_trace;
-  enum enum_support_I_S support_I_S= (var & Opt_trace_context::FLAG_ENABLED) ?
-    YES_FOR_THIS : NO_FOR_THIS;
-  bool need_it_for_dbug= false;
-  bool allocated_here= false;
+  bool support_I_S= var & Opt_trace_context::FLAG_ENABLED;
+  bool support_dbug= false;
 
   /* This will be triggered if --debug or --debug=d:opt_trace is used */
-  DBUG_EXECUTE("opt", need_it_for_dbug= true;);
+  DBUG_EXECUTE("opt", support_dbug= true;);
   // First step, decide on what type of I_S support we want
-  if (unlikely(support_I_S == YES_FOR_THIS) &&
+  if (unlikely(support_I_S) &&
       (!sql_command_can_be_traced(sql_command) ||
        list_has_optimizer_trace_table(tbl)))
   {
@@ -133,78 +133,44 @@ bool opt_trace_start(THD *thd, const TAB
       (scanning the list of all used tables, doing checks on their names) but
       we call it only if @@optimizer_trace has enabled=on.
     */
-    support_I_S= NO_FOR_THIS;
-    ++glob_recursive_disable_I_S;
+    support_I_S= false;
+    has_disabled_I_S_in_ctx= true;
   }
-  /*
-    Second step, decide on optimizations possible to realize this I_S support.
-    DBUG support requires tracing, then we have no choice.
-  */
-  if (support_I_S != YES_FOR_THIS && !need_it_for_dbug)
+  error= ctx->start(support_I_S, support_dbug,
+                    (var & Opt_trace_context::FLAG_END_MARKER),
+                    (var & Opt_trace_context::FLAG_ONE_LINE),
+                    thd->variables.optimizer_trace_offset,
+                    thd->variables.optimizer_trace_limit,
+                    thd->variables.optimizer_trace_max_mem_size,
+                    thd->variables.optimizer_trace_features);
+  if (likely(!error))
   {
-    // The statement will not do tracing.
-    if (thd->opt_trace == NULL || !thd->opt_trace->is_started())
-    {
-      /*
-        This should be the most commonly taken branch in a release binary,
-        when the connection rarely has optimizer tracing runtime-enabled.
-        It's thus important that it's optimized: we can short-cut the creation
-        and starting of Opt_trace_context, unlike in the next "else" branch.
-      */
-      DBUG_RETURN(false);
-    }
-    else
-    {
-      /*
-        If we come here, there is a parent statement which has a trace.
-        Imagine that we don't create a trace for the child statement
-        here. Then trace structures of the child will be accidentally attached
-        to the parent's trace (as it is still 'current_stmt_in_gen', which
-        constructors of Opt_trace_struct will use); thus the child's trace
-        will be visible (as a chunk of the parent's trace). That would be
-        incorrect. To avoid this, we create a trace for the child but with I_S
-        output disabled; this changes 'current_stmt_in_gen', thus this child's
-        trace structures will be attached to the child's trace and thus not be
-        visible.
-      */
-    }
+    if (unlikely(ctx->is_started()))
+      ctx->set_query(query, query_length, query_charset);
   }
-
-  DBUG_EXECUTE_IF("opt_trace_should_not_start", DBUG_ASSERT(0););
-  /*
-    We don't allocate it in THD's MEM_ROOT as it must survive until a next
-    statement (SELECT) reads the trace.
-  */
-  if (thd->opt_trace == NULL)
-  {
-    thd->opt_trace= new Opt_trace_context;      // OOM-unsafe "new".
-    allocated_here= true;
-  }
-
-  if (thd->opt_trace->start(support_I_S,
-                            (var & Opt_trace_context::FLAG_END_MARKER),
-                            (var & Opt_trace_context::FLAG_ONE_LINE),
-                            thd->variables.optimizer_trace_offset,
-                            thd->variables.optimizer_trace_limit,
-                            thd->variables.optimizer_trace_max_mem_size,
-                            thd->variables.optimizer_trace_features))
+  else
+    DBUG_ASSERT(0);
+  if (has_disabled_I_S_in_ctx)
   {
-    if (allocated_here)
-    {
-      delete thd->opt_trace;
-      thd->opt_trace= NULL;
-    }
-    DBUG_RETURN(false);
+    /*
+      Even if there was an error and no Opt_trace_stmt was created, we still
+      have to honour this request; otherwise the statement would actually be
+      traced, because its trace structures would be put into the parent's
+      trace, which is not disabled (it is still current_stmt_in_gen).
+    */
+    ctx->disable_I_S_for_this_and_children();
   }
-  DBUG_RETURN(true); // started all ok
+  DBUG_VOID_RETURN;
 }
 
 
-void opt_trace_end(THD *thd, bool started)
+Opt_trace_start::~Opt_trace_start()
 {
-  DBUG_ENTER("opt_trace_end");
-  if (started)
-    thd->opt_trace->end();
+  DBUG_ENTER("~opt_trace_start");
+  if (has_disabled_I_S_in_ctx)
+    ctx->restore_I_S();
+  if (likely(!error))
+    ctx->end();
   DBUG_VOID_RETURN;
 }
 
@@ -212,8 +178,8 @@ void opt_trace_end(THD *thd, bool starte
 void opt_trace_print_expanded_query(THD *thd, st_select_lex *select_lex)
 
 {
-  Opt_trace_context * const trace= thd->opt_trace;
-  if (likely(trace == NULL || !trace->is_started()))
+  Opt_trace_context * const trace= &thd->opt_trace;
+  if (likely(!trace->is_started()))
     return;
   char buff[1024];
   String str(buff,(uint32) sizeof(buff), system_charset_info);
@@ -243,50 +209,38 @@ void opt_trace_add_select_number(Opt_tra
 }
 
 
-bool opt_trace_set_query(Opt_trace_context *trace, const char *query,
-                         size_t query_length,
-                         const CHARSET_INFO *query_charset)
-{
-  return trace->set_query(query, query_length, query_charset);
-}
-
-
 int fill_optimizer_trace_info(THD *thd, TABLE_LIST *tables, Item *cond)
 {
 #ifdef OPTIMIZER_TRACE
-  DBUG_ENTER("fill_optimizer_trace_info");
-  if (thd->opt_trace != NULL)
+  TABLE *table= tables->table;
+  Opt_trace_info info;
+  /*
+    The list must not change during the iterator's lifetime. This is OK, as
+    the lifetime is only the present block, which cannot change the list.
+  */
+  for (Opt_trace_iterator it(&thd->opt_trace) ; !it.at_end() ; it.next())
   {
-    TABLE *table= tables->table;
-    Opt_trace_info info;
+    it.get_value(&info);
+    restore_record(table, s->default_values);
     /*
-      The list must not change during the iterator's life time. This is ok as
-      the life time is only the present block which cannot change the list.
+      We will put the query, which is in character_set_client, into a column
+      using character_set_client; this is better than UTF8 (see BUG#57306).
+      When literals with introducers are used, see "LiteralsWithIntroducers"
+      in this file.
     */
-    for (Opt_trace_iterator it(thd->opt_trace) ; !it.at_end() ; it.next())
-    {
-      it.get_value(&info);
-      restore_record(table, s->default_values);
-      /*
-        We will put the query, which is in character_set_client, into a column
-        using character_set_client; this is better than UTF8 (see BUG#57306).
-        When literals with introducers are used, see "LiteralsWithIntroducers"
-        in this file.
-      */
-      table->field[0]->store(info.query_ptr, info.query_length,
-                             info.query_charset);
-      table->field[1]->store(info.trace_ptr, info.trace_length,
-                             system_charset_info);
-      table->field[2]->store(info.missing_bytes, true);
-      if (schema_table_store_record(thd, table))
-        DBUG_RETURN(1);
-    }
+    table->field[0]->store(info.query_ptr, info.query_length,
+                           info.query_charset);
+    table->field[1]->store(info.trace_ptr, info.trace_length,
+                           system_charset_info);
+    table->field[2]->store(info.missing_bytes, true);
+    if (schema_table_store_record(thd, table))
+      return 1;
   }
 
-  DBUG_RETURN(0);
+  return 0;
 #else
   my_error(ER_FEATURE_DISABLED, MYF(0), "optimizer trace",
-           "-DOPTIMIZER_TRACE=1 or --with-optimizer-trace");
+           "-DOPTIMIZER_TRACE=1");
   return 1;
 #endif
 }
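
In caller terms, the constructor/destructor pair above gives this contract:
construction may start a trace on the per-THD context, record the original
query, and push one I_S-disable request; destruction undoes what construction
did, in reverse order, even when start() reported an error. Below is a rough,
self-contained sketch of that guard against a stub context (names and bodies
are illustrative; it is not the real Opt_trace_start).

#include <cstdio>

// Minimal stand-in for the per-THD context: just enough interface to show
// what the guard does. Not the real Opt_trace_context.
struct Ctx_stub
{
  bool start() { std::puts("start trace"); return false; }   // false == OK
  void end() { std::puts("end trace"); }
  void disable_I_S_for_this_and_children() { std::puts("disable I_S"); }
  void restore_I_S() { std::puts("restore I_S"); }
};

// Sketch of the guard's contract: the constructor may start the trace and
// push one I_S-disable request; the destructor undoes both in reverse order.
class Trace_start_sketch
{
public:
  Trace_start_sketch(Ctx_stub *ctx_arg, bool statement_may_be_traced)
    : ctx(ctx_arg), error(false), has_disabled_I_S_in_ctx(false)
  {
    if (!statement_may_be_traced)         // e.g. it reads OPTIMIZER_TRACE
      has_disabled_I_S_in_ctx= true;
    error= ctx->start();
    if (has_disabled_I_S_in_ctx)
      ctx->disable_I_S_for_this_and_children();  // honoured even on error
  }
  ~Trace_start_sketch()
  {
    if (has_disabled_I_S_in_ctx)
      ctx->restore_I_S();
    if (!error)
      ctx->end();
  }
private:
  Ctx_stub *const ctx;
  bool error;
  bool has_disabled_I_S_in_ctx;
};

int main()
{
  Ctx_stub ctx;
  {
    Trace_start_sketch guard(&ctx, /* statement_may_be_traced */ false);
    // ... statement executes here, adding structures to the trace ...
  }  // destructor prints: restore I_S, end trace
  return 0;
}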

=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc	2011-03-21 17:55:41 +0000
+++ b/sql/sql_class.cc	2011-03-31 13:41:41 +0000
@@ -56,7 +56,6 @@
 #include "sp_cache.h"
 #include "transaction.h"
 #include "debug_sync.h"
-#include "opt_trace.h"
 #include "sql_parse.h"                          // is_update_query
 #include "sql_callback.h"
 #include "lock.h"
@@ -529,7 +528,6 @@ THD::THD(bool enable_plugins)
    debug_sync_control(0),
 #endif /* defined(ENABLED_DEBUG_SYNC) */
    m_enable_plugins(enable_plugins),
-   opt_trace(NULL),
    main_warning_info(0)
 {
   ulong tmp;
@@ -1126,7 +1124,6 @@ THD::~THD()
   
   mysql_audit_free_thd(this);
 #endif
-  delete opt_trace;
   free_root(&main_mem_root, MYF(0));
   DBUG_VOID_RETURN;
 }

=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h	2011-03-21 17:55:41 +0000
+++ b/sql/sql_class.h	2011-03-31 13:41:41 +0000
@@ -38,7 +38,7 @@
 #include "violite.h"              /* vio_is_connected */
 #include "thr_lock.h"             /* thr_lock_type, THR_LOCK_DATA,
                                      THR_LOCK_INFO */
-
+#include "opt_trace.h"            /* Opt_trace_context */
 
 class Reprepare_observer;
 class Relay_log_info;
@@ -1473,8 +1473,6 @@ private:
 
 extern "C" void my_message_sql(uint error, const char *str, myf MyFlags);
 
-class Opt_trace_context;
-
 /**
   @class THD
   For each client connection we create a separate thread with THD serving as
@@ -2822,7 +2820,7 @@ public:
   */
   Internal_error_handler *pop_internal_handler();
 
-  Opt_trace_context *opt_trace; ///< optimizer trace of current statement
+  Opt_trace_context opt_trace; ///< optimizer trace of current statement
   /**
     Raise an exception condition.
     @param code the MYSQL_ERRNO error code of the error

=== modified file 'sql/sql_delete.cc'
--- a/sql/sql_delete.cc	2011-03-21 17:55:41 +0000
+++ b/sql/sql_delete.cc	2011-03-31 13:41:41 +0000
@@ -193,7 +193,7 @@ bool mysql_delete(THD *thd, TABLE_LIST *
     DBUG_RETURN(TRUE);
 
   { // Enter scope for optimizer trace wrapper
-    Opt_trace_object wrapper(thd->opt_trace);
+    Opt_trace_object wrapper(&thd->opt_trace);
     wrapper.add_utf8_table(table);
 
     if ((select && select->check_quick(thd, safe_update, limit)) || !limit)

=== modified file 'sql/sql_help.cc'
--- a/sql/sql_help.cc	2011-03-21 17:55:41 +0000
+++ b/sql/sql_help.cc	2011-03-31 13:41:41 +0000
@@ -580,7 +580,7 @@ SQL_SELECT *prepare_simple_select(THD *t
   SQL_SELECT *res= make_select(table, 0, 0, cond, 0, error);
 
   // Wrapper for correct JSON in optimizer trace
-  Opt_trace_object wrapper(thd->opt_trace);
+  Opt_trace_object wrapper(&thd->opt_trace);
   if (*error || (res && res->check_quick(thd, 0, HA_POS_ERROR)) ||
       (res && res->quick && res->quick->reset()))
   {

=== modified file 'sql/sql_parse.cc'
--- a/sql/sql_parse.cc	2011-03-24 10:33:38 +0000
+++ b/sql/sql_parse.cc	2011-03-31 13:41:41 +0000
@@ -2019,15 +2019,12 @@ mysql_execute_command(THD *thd)
 
   status_var_increment(thd->status_var.com_stat[lex->sql_command]);
 
-  const int saved_glob_rec= glob_recursive_disable_I_S;
-  const bool started_optimizer_trace= opt_trace_start(thd, all_tables,
-                                                      lex->sql_command);
-  if (started_optimizer_trace)
-    opt_trace_set_query(thd->opt_trace, thd->query(), thd->query_length(),
-                        thd->variables.character_set_client);
+  Opt_trace_start ots(thd, all_tables, lex->sql_command,
+                      thd->query(), thd->query_length(),
+                      thd->variables.character_set_client);
 
-  Opt_trace_object trace_command(thd->opt_trace);
-  Opt_trace_array trace_command_steps(thd->opt_trace, "steps");
+  Opt_trace_object trace_command(&thd->opt_trace);
+  Opt_trace_array trace_command_steps(&thd->opt_trace, "steps");
 
   DBUG_ASSERT(thd->transaction.stmt.modified_non_trans_table == FALSE);
 
@@ -4486,11 +4483,6 @@ finish:
     thd->mdl_context.release_statement_locks();
   }
 
-  trace_command_steps.end();
-  trace_command.end(); // must be closed before trace is ended below
-  opt_trace_end(thd, started_optimizer_trace);
-  glob_recursive_disable_I_S= saved_glob_rec;
-
   DBUG_RETURN(res || thd->is_error());
 }
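
A consequence of the hunk above that is worth spelling out: the explicit
trace_command_steps.end(), trace_command.end() and opt_trace_end() calls could
be dropped because C++ destroys local objects in reverse order of
construction, so the steps array closes before the top object, which closes
before the Opt_trace_start guard ends the trace. A tiny stand-alone
demonstration of that ordering (names are only illustrative):

#include <cstdio>

// Demonstrates the LIFO destruction order the caller now relies on.
struct Scoped
{
  explicit Scoped(const char *n) : name(n) { std::printf("open  %s\n", name); }
  ~Scoped() { std::printf("close %s\n", name); }
  const char *name;
};

int main()
{
  Scoped ots("trace (Opt_trace_start)");
  Scoped trace_command("top object");
  Scoped trace_command_steps("steps array");
  // ... statement executes ...
  // At scope exit, destructors run in reverse order of construction:
  // close steps array, close top object, close trace (Opt_trace_start).
  return 0;
}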
 

=== modified file 'sql/sql_prepare.cc'
--- a/sql/sql_prepare.cc	2011-03-24 10:33:38 +0000
+++ b/sql/sql_prepare.cc	2011-03-31 13:41:41 +0000
@@ -1969,16 +1969,12 @@ static bool check_prepared_statement(Pre
     For the optimizer trace, this is the symmetric, for statement preparation,
     of what is done at statement execution (in mysql_execute_command()).
   */
-  const int saved_glob_rec= glob_recursive_disable_I_S;
-  const bool started_optimizer_trace= opt_trace_start(thd, tables,
-                                                      sql_command);
-  if (started_optimizer_trace)
-    opt_trace_set_query(thd->opt_trace, thd->query(), thd->query_length(),
-                        thd->variables.character_set_client);
+  Opt_trace_start ots(thd, tables, sql_command,
+                      thd->query(), thd->query_length(),
+                      thd->variables.character_set_client);
 
-
-  Opt_trace_object trace_command(thd->opt_trace);
-  Opt_trace_array trace_command_steps(thd->opt_trace, "steps");
+  Opt_trace_object trace_command(&thd->opt_trace);
+  Opt_trace_array trace_command_steps(&thd->opt_trace, "steps");
 
   switch (sql_command) {
   case SQLCOM_REPLACE:
@@ -2022,8 +2018,7 @@ static bool check_prepared_statement(Pre
     if (res == 2)
     {
       /* Statement and field info has already been sent */
-      res= FALSE;
-      goto end;
+      DBUG_RETURN(FALSE);
     }
     break;
   case SQLCOM_CREATE_TABLE:
@@ -2034,8 +2029,7 @@ static bool check_prepared_statement(Pre
     if (lex->create_view_mode == VIEW_ALTER)
     {
       my_message(ER_UNSUPPORTED_PS, ER(ER_UNSUPPORTED_PS), MYF(0));
-      res= true;
-      goto end;
+      goto error;
     }
     res= mysql_test_create_view(stmt);
     break;
@@ -2108,21 +2102,15 @@ static bool check_prepared_statement(Pre
     {
       /* All other statements are not supported yet. */
       my_message(ER_UNSUPPORTED_PS, ER(ER_UNSUPPORTED_PS), MYF(0));
-      res= true;
-      goto end;
+      goto error;
     }
     break;
   }
   if (res == 0)
-    res= stmt->is_sql_prepare() ?
-      FALSE : (send_prep_stmt(stmt, 0) || thd->protocol->flush());
-end:
-  trace_command_steps.end();
-  trace_command.end(); // must be closed before trace is ended below
-
-  glob_recursive_disable_I_S= saved_glob_rec;
-  opt_trace_end(thd, started_optimizer_trace);
-  DBUG_RETURN(res);
+    DBUG_RETURN(stmt->is_sql_prepare() ?
+                FALSE : (send_prep_stmt(stmt, 0) || thd->protocol->flush()));
+error:
+  DBUG_RETURN(TRUE);
 }
 
 /**

=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc	2011-03-21 17:55:41 +0000
+++ b/sql/sql_select.cc	2011-03-31 13:41:41 +0000
@@ -562,7 +562,7 @@ JOIN::prepare(Item ***rref_pointer_array
   if (thd->derived_tables_processing)
     select_lex->exclude_from_table_unique_test= TRUE;
 
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   Opt_trace_object trace_wrapper(trace);
   Opt_trace_object trace_prepare(trace, "join_preparation");
   opt_trace_add_select_number(&trace_prepare, select_lex->select_number);
@@ -997,7 +997,7 @@ bool resolve_subquery(THD *thd, JOIN *jo
     }
   }
 
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
   if (subq_predicate_substype == Item_subselect::IN_SUBS)
   {
     {
@@ -1822,7 +1822,7 @@ JOIN::optimize()
 
   thd_proc_info(thd, "optimizing");
 
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   Opt_trace_object trace_wrapper(trace);
   Opt_trace_object trace_optimize(trace, "join_optimization");
   opt_trace_add_select_number(&trace_optimize, select_lex->select_number);
@@ -2824,7 +2824,7 @@ JOIN::save_join_tab()
 void
 JOIN::exec()
 {
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   Opt_trace_object trace_wrapper(trace);
   Opt_trace_object trace_exec(trace, "join_execution");
   opt_trace_add_select_number(&trace_exec, select_lex->select_number);
@@ -3465,8 +3465,7 @@ JOIN::exec()
   */
   if (items0 && ((thd->lex->describe & DESCRIBE_EXTENDED)
 #ifdef OPTIMIZER_TRACE
-                 || (thd->opt_trace != NULL &&
-                     thd->opt_trace->is_started())
+                 || trace->is_started()
 #endif
                  ) &&
       (select_lex->linkage == DERIVED_TABLE_TYPE ||
@@ -4154,7 +4153,7 @@ bool JOIN::flatten_subqueries()
   Item_exists_subselect **subq;
   Item_exists_subselect **subq_end;
   bool outer_join_objection= FALSE;
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   DBUG_ENTER("JOIN::flatten_subqueries");
 
   if (sj_subselects.elements() == 0)
@@ -4475,7 +4474,7 @@ bool pull_out_semijoin_tables(JOIN *join
     DBUG_RETURN(FALSE);
 
   List_iterator<TABLE_LIST> sj_list_it(join->select_lex->sj_nests);
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
 
   if (join->select_lex->sj_nests.elements == 0)
     DBUG_RETURN(0);
@@ -4722,7 +4721,7 @@ make_join_statistics(JOIN *join, TABLE_L
   table_map outer_join=0;
   SARGABLE_PARAM *sargables= 0;
   JOIN_TAB *stat_vector[MAX_TABLES+1];
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
   DBUG_ENTER("make_join_statistics");
 
   table_count=join->tables;
@@ -5276,7 +5275,7 @@ static bool optimize_semijoin_nests(JOIN
   DBUG_ENTER("optimize_semijoin_nests");
   List_iterator<TABLE_LIST> sj_list_it(join->select_lex->sj_nests);
   TABLE_LIST *sj_nest;
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
 
   while ((sj_nest= sj_list_it++))
   {
@@ -6515,7 +6514,7 @@ update_ref_and_keys(THD *thd, DYNAMIC_AR
     (void) set_dynamic(keyuse, &key_end, i);
     keyuse->elements=i;
   }
-  print_keyuse_array(thd->opt_trace, keyuse);
+  print_keyuse_array(&thd->opt_trace, keyuse);
   return FALSE;
 }
 
@@ -6704,7 +6703,7 @@ add_group_and_distinct_keys(JOIN *join, 
   if (!possible_keys.is_clear_all() && 
       !(possible_keys == join_tab->const_keys))
   {
-    trace_indices_added_group_distinct(join->thd->opt_trace, join_tab,
+    trace_indices_added_group_distinct(&join->thd->opt_trace, join_tab,
                                        possible_keys, cause);
     join_tab->const_keys.merge(possible_keys);
   }
@@ -6729,7 +6728,7 @@ static void trace_indices_added_group_di
                                                const char* cause)
 {
 #ifdef OPTIMIZER_TRACE
-  if (likely(trace == NULL) || !trace->is_started())
+  if (likely(!trace->is_started()))
     return;
 
   KEY *key_info= join_tab->table->key_info;
@@ -7152,7 +7151,7 @@ best_access_path(JOIN      *join,
   table_map best_ref_depends_map= 0;
   double tmp;
   bool best_uses_jbuf= FALSE;
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   Loose_scan_opt loose_scan_opt;
   DBUG_ENTER("best_access_path");
 
@@ -7806,8 +7805,8 @@ choose_plan(JOIN *join, table_map join_t
             jtab_sort_func, (void*)join->emb_sjm_nest);
   join->cur_sj_inner_tables= 0;
 
-  Opt_trace_object wrapper(join->thd->opt_trace);
-  Opt_trace_array trace_plan(join->thd->opt_trace, "considered_execution_plans",
+  Opt_trace_object wrapper(&join->thd->opt_trace);
+  Opt_trace_array trace_plan(&join->thd->opt_trace, "considered_execution_plans",
                              Opt_trace_context::GREEDY_SEARCH);
   if (straight_join)
     optimize_straight_join(join, join_tables);
@@ -8017,7 +8016,7 @@ optimize_straight_join(JOIN *join, table
   double    read_time=    0.0;
   POSITION  loose_scan_pos;
  
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
   for (JOIN_TAB **pos= join->best_ref + idx ; (s= *pos) ; pos++)
   {
     Opt_trace_object trace_table(trace);
@@ -8488,7 +8487,7 @@ best_extension_by_limited_search(JOIN   
   DBUG_ENTER("best_extension_by_limited_search");
 
   THD *thd= join->thd;
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   if (thd->killed)  // Abort
     DBUG_RETURN(TRUE);
 
@@ -8862,7 +8861,7 @@ static bool fix_semijoin_strategies_for_
 {
   table_map remaining_tables= 0;
   table_map handled_tabs= 0;
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
   DBUG_ENTER("fix_semijoin_strategies_for_picked_join_order");
 
   if (join->select_lex->sj_nests.is_empty())
@@ -9956,7 +9955,7 @@ static bool make_join_select(JOIN *join,
     */
     table_map used_tables= 0;
     table_map save_used_tables= 0;
-    Opt_trace_context * const trace= thd->opt_trace;
+    Opt_trace_context * const trace= &thd->opt_trace;
     Opt_trace_object trace_wrapper(trace);
     Opt_trace_object trace_conditions(trace,
                                       "attaching_conditions_to_tables");
@@ -10146,10 +10145,9 @@ static bool make_join_select(JOIN *join,
 	       join->best_positions[i].records_read &&
 	       !(join->select_options & OPTION_FOUND_ROWS)))
 	  {
-            Opt_trace_object trace_one_table(thd->opt_trace);
+            Opt_trace_object trace_one_table(trace);
             trace_one_table.add_utf8_table(tab->table);
-            Opt_trace_object trace_table(join->thd->opt_trace, 
-                                         "rechecking_index_usage");
+            Opt_trace_object trace_table(trace, "rechecking_index_usage");
 
 	    /* Join with outer join condition */
 	    Item *orig_cond=sel->cond;
@@ -10274,7 +10272,7 @@ static bool make_join_select(JOIN *join,
     for (uint i= join->const_tables ; i < join->tables ; i++)
     {
       const JOIN_TAB *tab= join->join_tab+i;
-      Opt_trace_object trace_one_table(thd->opt_trace);
+      Opt_trace_object trace_one_table(trace);
       trace_one_table.add_utf8_table(tab->table);
       trace_one_table.add("attached", tab->select_cond);
     }
@@ -11474,7 +11472,7 @@ make_join_readinfo(JOIN *join, ulonglong
   uint last_sjm_table= MAX_TABLES;
   DBUG_ENTER("make_join_readinfo");
 
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
   Opt_trace_object wrapper(trace);
   Opt_trace_array trace_refine_plan(trace, "refine_plan");
 
@@ -14148,7 +14146,7 @@ void optimize_wo_join_buffering(JOIN *jo
   double cost, outer_fanout, inner_fanout= 1.0;
   table_map reopt_remaining_tables= last_remaining_tables;
   uint i;
-  Opt_trace_context * const trace= join->thd->opt_trace;
+  Opt_trace_context * const trace= &join->thd->opt_trace;
   DBUG_ENTER("optimize_wo_join_buffering");
 
   Opt_trace_object trace_recompute(trace, "recompute_best_access_paths");
@@ -14273,7 +14271,7 @@ void advance_sj_state(JOIN *join, table_
   TABLE_LIST *emb_sj_nest= new_join_tab->emb_sj_nest;
   POSITION *pos= join->positions + idx;
   THD *thd= join->thd;
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
 
   /* Add this table to the join prefix */
   remaining_tables &= ~new_join_tab->table->map;
@@ -14893,7 +14891,7 @@ optimize_cond(JOIN *join, Item *conds, L
               bool build_equalities, Item::cond_result *cond_value)
 {
   THD *thd= join->thd;
-  Opt_trace_context * const trace= thd->opt_trace;
+  Opt_trace_context * const trace= &thd->opt_trace;
   DBUG_ENTER("optimize_cond");
 
   if (conds)
@@ -18789,12 +18787,10 @@ join_init_quick_read_record(JOIN_TAB *ta
   */
 
 #ifdef OPTIMIZER_TRACE  
-  Opt_trace_context * const trace= tab->join->thd->opt_trace;
-  const bool repeated_trace_enabled= trace ? 
-    trace->feature_enabled(Opt_trace_context::DYNAMIC_RANGE) :
-    false;
-  const bool disable_trace= 
-    (tab->select->traced_before && !repeated_trace_enabled);
+  Opt_trace_context * const trace= &tab->join->thd->opt_trace;
+  const bool disable_trace=
+    tab->select->traced_before &&
+    !trace->feature_enabled(Opt_trace_context::DYNAMIC_RANGE);
   Opt_trace_disable_I_S disable_trace_wrapper(trace, disable_trace);
 
   tab->select->traced_before= true;
@@ -20441,6 +20437,8 @@ test_if_skip_sort_order(JOIN_TAB *tab,OR
     ref_key_parts= select->quick->used_key_parts;
   }
 
+  Opt_trace_context * const trace= &tab->join->thd->opt_trace;
+
   if (ref_key >= 0)
   {
     /*
@@ -20507,7 +20505,6 @@ test_if_skip_sort_order(JOIN_TAB *tab,OR
           key_map new_ref_key_map;  // Force the creation of quick select
           new_ref_key_map.set_bit(new_ref_key); // only for new_ref_key.
 
-          Opt_trace_context * const trace= tab->join->thd->opt_trace;
           Opt_trace_object trace_wrapper(trace);
           Opt_trace_object trace_recest(trace, 
                                     "records_estimation_for_index_ordering");
@@ -20562,7 +20559,6 @@ test_if_skip_sort_order(JOIN_TAB *tab,OR
     {
       if (table->quick_keys.is_set(best_key) && best_key != ref_key)
       {
-        Opt_trace_context * const trace= join->thd->opt_trace;
         Opt_trace_object trace_wrapper(trace);
         Opt_trace_object trace_recest(trace, 
                                      "records_estimation_for_index_ordering");

=== modified file 'sql/sql_update.cc'
--- a/sql/sql_update.cc	2011-03-21 17:55:41 +0000
+++ b/sql/sql_update.cc	2011-03-31 13:41:41 +0000
@@ -405,7 +405,7 @@ int mysql_update(THD *thd,
   select= make_select(table, 0, 0, conds, 0, &error);
 
   { // Enter scope for optimizer trace wrapper
-    Opt_trace_object wrapper(thd->opt_trace);
+    Opt_trace_object wrapper(&thd->opt_trace);
     wrapper.add_utf8_table(table);
 
     if (error || !limit ||

=== modified file 'sql/sql_view.cc'
--- a/sql/sql_view.cc	2011-03-21 17:55:41 +0000
+++ b/sql/sql_view.cc	2011-03-31 13:41:41 +0000
@@ -1133,8 +1133,8 @@ bool mysql_make_view(THD *thd, File_pars
   table->definer.user.str= table->definer.host.str= 0;
   table->definer.user.length= table->definer.host.length= 0;
 
-  Opt_trace_object trace_wrapper(thd->opt_trace);
-  Opt_trace_object trace_view(thd->opt_trace, "view");
+  Opt_trace_object trace_wrapper(&thd->opt_trace);
+  Opt_trace_object trace_view(&thd->opt_trace, "view");
   // When reading I_S.VIEWS, table->alias may be NULL
   trace_view.add_utf8("database", table->db, table->db_length).
     add_utf8("view", table->alias ? table->alias : table->table_name).

=== modified file 'sql/sys_vars.cc'
--- a/sql/sys_vars.cc	2011-03-21 17:55:41 +0000
+++ b/sql/sys_vars.cc	2011-03-31 13:41:41 +0000
@@ -1530,8 +1530,7 @@ static Sys_var_flagset Sys_optimizer_tra
 static bool optimizer_trace_update(sys_var *self, THD *thd,
                                    enum_var_type type)
 {
-  if (thd->opt_trace != NULL)
-    thd->opt_trace->reset();
+  thd->opt_trace.reset();
   return false;
 }
 

=== modified file 'unittest/gunit/opt_trace-t.cc'
--- a/unittest/gunit/opt_trace-t.cc	2011-03-21 17:55:41 +0000
+++ b/unittest/gunit/opt_trace-t.cc	2011-03-31 13:41:41 +0000
@@ -109,8 +109,8 @@ TEST_F(TraceContentTest, ConstructAndDes
 /** Test empty trace */
 TEST_F(TraceContentTest, Empty)
 {
-  ASSERT_FALSE(trace.start(YES_FOR_THIS, false, false, -1, 1, ULONG_MAX,
-                           all_features));
+  ASSERT_FALSE(trace.start(true, false, false, false, -1, 1,
+                           ULONG_MAX, all_features));
   /*
     Add at least an object to it. A really empty trace ("") is not
     JSON-compliant, at least Python's JSON module raises an exception.
@@ -144,8 +144,8 @@ TEST_F(TraceContentTest, Empty)
 /** Test normal usage */
 TEST_F(TraceContentTest, NormalUsage)
 {
-  ASSERT_FALSE(trace.start(YES_FOR_THIS, true, false, -1, 1, ULONG_MAX,
-                           all_features));
+  ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                           ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     {
@@ -205,8 +205,8 @@ TEST_F(TraceContentTest, NormalUsage)
 */
 TEST_F(TraceContentTest, Tail)
 {
-  ASSERT_FALSE(trace.start(YES_FOR_THIS, true, false, -1, 1, ULONG_MAX,
-                           all_features));
+  ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                           ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     {
@@ -266,8 +266,8 @@ TEST_F(TraceContentTest, Tail)
 /** Test reaction to malformed JSON (object with value without key) */
 TEST_F(TraceContentTest, BuggyObject)
 {
-  ASSERT_FALSE(trace.start(YES_FOR_THIS, true, false, -1, 1, ULONG_MAX,
-                           all_features));
+  ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                           ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     {
@@ -323,8 +323,8 @@ TEST_F(TraceContentTest, BuggyObject)
 /** Test reaction to malformed JSON (array with value with key) */
 TEST_F(TraceContentTest, BuggyArray)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, true, false, -1, 1, ULONG_MAX,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, true, false, -1, 1,
+                               ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     {
@@ -367,10 +367,10 @@ TEST_F(TraceContentTest, BuggyArray)
 
 
 /** Test Opt_trace_disable_I_S */
-TEST_F(TraceContentTest, DisableIS)
+TEST_F(TraceContentTest, DisableISWithObject)
 {
-  ASSERT_FALSE(trace.start(YES_FOR_THIS, true, false, -1, 1, ULONG_MAX,
-                           all_features));
+  ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                           ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     {
@@ -387,6 +387,13 @@ TEST_F(TraceContentTest, DisableIS)
         /* don't disable... but above layer is stronger */
         Opt_trace_disable_I_S otd2(&trace, false);
         oto2.add("another key inside", 5LL);
+        // disabling should apply to substatements too:
+        ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                                 ULONG_MAX, all_features));
+        {
+          Opt_trace_object oto3(&trace);
+        }
+        trace.end();
       }
       ota.add_alnum("one string element");
       ota.add(true);
@@ -431,12 +438,66 @@ TEST_F(TraceContentTest, DisableIS)
   ASSERT_TRUE(it.at_end());
 }
 
+
+/** Test Opt_trace_context::disable_I_S_for_this_and_children */
+TEST_F(TraceContentTest, DisableISWithCall)
+{
+  // Test that it disables even before any start()
+  trace.disable_I_S_for_this_and_children();
+  ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                           ULONG_MAX, all_features));
+  {
+    Opt_trace_object oto(&trace);
+    {
+      Opt_trace_array ota(&trace, "one array");
+      ota.add(200.4);
+      {
+        Opt_trace_object oto1(&trace);
+        oto1.add_alnum("one key", "one value").
+          add("another key", 100LL);
+        oto1.add("a third key", false);
+        Opt_trace_object oto2(&trace, "a fourth key");
+        oto2.add("key inside", 1LL);
+        // disabling should apply to substatements too:
+        ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                                 ULONG_MAX, all_features));
+        {
+          Opt_trace_object oto3(&trace);
+        }
+        trace.end();
+        /* don't disable... but above layer is stronger */
+        Opt_trace_disable_I_S otd2(&trace, false);
+        oto2.add("another key inside", 5LL);
+        // disabling should apply to substatements too:
+        ASSERT_FALSE(trace.start(true, false, true, false, -1, 1,
+                                 ULONG_MAX, all_features));
+        {
+          Opt_trace_object oto4(&trace);
+        }
+        trace.end();
+      }
+      ota.add_alnum("one string element");
+      ota.add(true);
+    }
+    oto.add("yet another key", -1000LL);
+    {
+      Opt_trace_array ota(&trace, "another array");
+      ota.add(1LL).add(2LL).add(3LL).add(4LL);
+    }
+  }
+  trace.end();
+  trace.restore_I_S();
+  Opt_trace_iterator it(&trace);
+  ASSERT_TRUE(it.at_end());
+}
+
+
 /** Helper for Trace_settings_test.offset */
 void make_one_trace(Opt_trace_context *trace, const char *name,
                     long offset, long limit)
 {
-  ASSERT_FALSE(trace->start(YES_FOR_THIS, true, false, offset, limit,
-                            ULONG_MAX, all_features));
+  ASSERT_FALSE(trace->start(true, false, true, false, offset,
+                            limit, ULONG_MAX, all_features));
   {
     Opt_trace_object oto(trace);
     oto.add(name, 0LL);
@@ -536,8 +597,8 @@ TEST_F(TraceContentTest, Offset)
 /** Test truncation by max_mem_size */
 TEST_F(TraceContentTest, MaxMemSize)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -1, 1,
-                               1000 /* max_mem_size */, all_features));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -1,
+                               1, 1000 /* max_mem_size */, all_features));
   /* make a "long" trace */
   {
     Opt_trace_object oto(&trace);
@@ -576,8 +637,8 @@ TEST_F(TraceContentTest, MaxMemSize)
 TEST_F(TraceContentTest, MaxMemSize2)
 {
   Opt_trace_context trace;
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -2, 2,
-                               21 /* max_mem_size */, all_features));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -2,
+                               2, 21 /* max_mem_size */, all_features));
   /* make a "long" trace */
   {
     Opt_trace_object oto(&trace);
@@ -585,8 +646,8 @@ TEST_F(TraceContentTest, MaxMemSize2)
   }
   trace.end();
   /* A second similar trace */
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -2, 2, 21,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -2,
+                               2, 21, all_features));
   {
     Opt_trace_object oto(&trace);
     oto.add_alnum("some key2", "make it long");
@@ -611,8 +672,8 @@ TEST_F(TraceContentTest, MaxMemSize2)
     3rd trace; the first one should automatically be purged, thus the 3rd
     should have a bit of room.
   */
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -2, 2, 21,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -2,
+                               2, 21, all_features));
   {
     Opt_trace_object oto(&trace);
     oto.add_alnum("some key3", "make it long");
@@ -671,8 +732,8 @@ void open_object(uint count, Opt_trace_c
 /// Test reaction to out-of-memory condition in trace buffer
 TEST_F(TraceContentTest, OOMinBuffer)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -1, 1, ULONG_MAX,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -1,
+                               1, ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     {
@@ -698,8 +759,8 @@ TEST_F(TraceContentTest, OOMinBuffer)
 /// Test reaction to out-of-memory condition in book-keeping data structures
 TEST_F(TraceContentTest, OOMinBookKeeping)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -1, 1, ULONG_MAX,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -1,
+                               1, ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     open_object(100, &trace, true);
@@ -762,8 +823,8 @@ TEST_F(TraceContentTest, OOMinPurge)
 /** Test filtering by feature */
 TEST_F(TraceContentTest, FilteringByFeature)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -1, 1, ULONG_MAX,
-                               Opt_trace_context::MISC));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -1,
+                               1, ULONG_MAX, Opt_trace_context::MISC));
   {
     Opt_trace_object oto(&trace);
     {
@@ -824,8 +885,8 @@ TEST_F(TraceContentTest, FilteringByFeat
 /** Test escaping of characters */
 TEST_F(TraceContentTest, Escaping)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, true, false, -1, 1, ULONG_MAX,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, true, false, -1, 1,
+                               ULONG_MAX, all_features));
   // All ASCII 0-127 chars are valid UTF8 encodings
   char all_chars[130];
   for (uint c= 0; c < sizeof(all_chars) - 2 ; c++)
@@ -867,8 +928,8 @@ TEST_F(TraceContentTest, Escaping)
 /** Test how the system handles non-UTF8 characters, a violation of its API */
 TEST_F(TraceContentTest, NonUtf8)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, true, false, -1, 1, ULONG_MAX,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, true, false, -1, 1,
+                               ULONG_MAX, all_features));
   /*
     A string which starts with invalid utf8 (the four first bytes are éèÄà in
     latin1).
@@ -921,8 +982,8 @@ TEST_F(TraceContentTest, NonUtf8)
 */
 TEST_F(TraceContentTest, Indent)
 {
-  ASSERT_EQ(false, trace.start(YES_FOR_THIS, false, false, -1, 1, ULONG_MAX,
-                               all_features));
+  ASSERT_EQ(false, trace.start(true, false, false, false, -1,
+                               1, ULONG_MAX, all_features));
   {
     Opt_trace_object oto(&trace);
     open_object(100, &trace, false);
@@ -968,6 +1029,29 @@ TEST_F(TraceContentTest, Indent)
   ASSERT_TRUE(it.at_end());
 }
 
+
+/**
+   Test an optimization: that no Opt_trace_stmt is created in common case
+   where all statements and substatements ask neither for I_S nor for DBUG.
+*/
+TEST_F(TraceContentTest, NoOptTraceStmt)
+{
+  ASSERT_FALSE(trace.start(false, false, false, false, -1, 1,
+                           ULONG_MAX, all_features));
+  EXPECT_FALSE(trace.is_started());
+  // one substatement:
+  ASSERT_FALSE(trace.start(false, false, false, false, -1, 1,
+                           ULONG_MAX, all_features));
+  EXPECT_FALSE(trace.is_started());
+  // another one deeper nested:
+  ASSERT_FALSE(trace.start(false, false, false, false, -1, 1,
+                           ULONG_MAX, all_features));
+  EXPECT_FALSE(trace.is_started());
+  trace.end();
+  trace.end();
+  trace.end();
+}
+
 }  // namespace
 
 #endif // OPTIMIZER_TRACE

Attachment: [text/bzr-bundle] bzr/guilhem.bichot@oracle.com-20110331134141-z6ptd4msc2jyz72z.bundle