List: Commits
From: vasil.dimov  Date: November 9 2011 11:44am
Subject: bzr push into mysql-trunk branch (vasil.dimov:3564 to 3568) Bug#11764622
 3568 Vasil Dimov	2011-11-09
      Partial fix for Bug#11764622 57480: MEMORY LEAK WHEN HAVING 256+ TABLES
      
      In dict_load_foreigns() use on-stack storage for the dtuple and the
      foreign key id, both of which are only used inside that function.
      
      Introduce a new function dtuple_create_from_mem() that initializes a
      dtuple from an already allocated memory chunk.
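
      A minimal usage sketch, based only on the declarations added in the
      data0data.h/data0data.ic diffs below:

        char            tuple_buf[DTUPLE_EST_ALLOC(1)];
        dtuple_t*       tuple;

        /* the tuple is laid out inside tuple_buf, so no mem_heap_t is
        created and nothing has to be freed when the function returns */
        tuple = dtuple_create_from_mem(tuple_buf, sizeof(tuple_buf), 1);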
      
      This change removes two malloc() calls per open table (even for a table
      that does not have foreign keys!). It removes the following codepath
      (executed twice per open table):
      
      #0 malloc (size=376)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_create_func
      #4 dict_load_foreigns
      #5 dict_load_table
      #6 dict_table_open_on_name_low
      #7 dict_table_open_on_name
      #8 ha_innobase::open
      #9 handler::ha_open
      #10 ha_partition::open
      #11 handler::ha_open
      #12 open_table_from_share
      #13 open_table
      #14 open_and_process_table
      #15 open_tables
      
      and it causes the stack memory footprint of dict_load_foreigns() to grow
      by 504 bytes (DTUPLE_EST_ALLOC(1)==72 plus 2*NAME_LEN+64==448, minus the
      two removed pointers, 8 bytes each).

    modified:
      storage/innobase/dict/dict0load.c
      storage/innobase/include/data0data.h
      storage/innobase/include/data0data.ic
 3567 Vasil Dimov	2011-11-09
      Partial fix for Bug#11764622 57480: MEMORY LEAK WHEN HAVING 256+ TABLES
      
      Make row_prebuilt_t::pcur and row_prebuilt_t::clust_pcur btr_pcur_t
      instead of btr_pcur_t* in the definition of row_prebuilt_t.
      
      It makes no sense to have pointers to objects that are always
      (unconditionally) allocated separately from the main object. It is better
      to allocate them along with the main object (row_prebuilt_t).
      
      This is possible because we do not do either of the following in the code:
      1. Assign ::pcur or ::clust_pcur to some object allocated elsewhere
      2. Free and reallocate ::pcur or ::clust_pcur
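
      The diffs below also introduce btr_pcur_reset() (btr0pcur.h/btr0pcur.c),
      which frees ::old_rec_buf and reinitializes the cursor members without
      freeing the cursor object itself; row_create_prebuilt() and
      row_prebuilt_free() call it on the embedded cursors. Roughly:

        /* before: separately allocated cursors */
        prebuilt->pcur = btr_pcur_create_for_mysql();
        btr_pcur_free_for_mysql(prebuilt->pcur);

        /* after: cursors embedded in row_prebuilt_t */
        btr_pcur_reset(&prebuilt->pcur);
        btr_pcur_reset(&prebuilt->pcur);

      Other call sites simply pass &prebuilt->pcur (or &prebuilt->clust_pcur)
      where they used to pass the pointer member.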
      
      This change removes two malloc() calls per open table. It removes the
      following codepath (executed twice per open table):
      
      #0 malloc (size=352)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_create_func
      #4 mem_alloc_func
      #5 btr_pcur_create_for_mysql
      #6 row_create_prebuilt
      #7 ha_innobase::open
      #8 handler::ha_open
      #9 ha_partition::open
      #10 handler::ha_open
      #11 open_table_from_share
      #12 open_table
      #13 open_and_process_table
      #14 open_tables
      #15 open_and_lock_tables
      
      and it changes this codepath to allocate 3552 instead of 3104 bytes:
      
      #0 malloc
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_create_func
      #4 row_create_prebuilt
      #5 ha_innobase::open
      #6 handler::ha_open
      #7 ha_partition::open
      #8 handler::ha_open
      #9 open_table_from_share
      #10 open_table
      #11 open_and_process_table
      #12 open_tables
      #13 open_and_lock_tables
      #14 open_and_lock_tables
      #15 execute_sqlcom_select

    modified:
      storage/innobase/btr/btr0pcur.c
      storage/innobase/include/btr0pcur.h
      storage/innobase/include/row0mysql.h
      storage/innobase/row/row0mysql.c
      storage/innobase/row/row0sel.c
 3566 Vasil Dimov	2011-11-08
      Partial fix for Bug#11764622 57480: MEMORY LEAK WHEN HAVING 256+ TABLES
      
      In row_create_prebuilt() try to estimate how many bytes will be allocated
      from the mem_heap_t object and create it with the appropriate size so that
      no further allocations are done by the code.
      
      The only exception is that we do not pre-allocate mysql_row_len bytes if
      it is bigger than 256. That chunk is needed only in INSERTs, and since we
      do not know what the prebuilt is going to be used for, and mysql_row_len
      could be huge (e.g. as much as 60KB), we do not allocate space for it in
      advance.
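
      Schematically, the heap is created with a size estimated from the
      objects that will be placed in it later (the full expression is in the
      row0mysql.c diff below; the list here is abbreviated):

        heap = mem_heap_create(sizeof(*prebuilt)
                               + DTUPLE_EST_ALLOC(2 * dict_table_get_n_cols(table))
                               + DTUPLE_EST_ALLOC(ref_len)
                               /* + graph and node structs, see the diff */
                               + (mysql_row_len < 256 ? mysql_row_len : 0));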
      
      This change removes at least this codepath:
      
      #0 malloc (size=1384)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_add_block
      #4 mem_heap_alloc
      #5 dtuple_create
      #6 row_create_prebuilt
      #7 ha_innobase::open
      #8 handler::ha_open
      #9 ha_partition::open
      #10 handler::ha_open
      #11 open_table_from_share
      #12 open_table
      #13 open_and_process_table
      #14 open_tables
      #15 open_and_lock_tables
      
      saving at least one malloc() call during table open.
      
      In exchange this codepath:
      
      #0 malloc
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_create_func
      #4 row_create_prebuilt
      #5 ha_innobase::open
      #6 handler::ha_open
      #7 ha_partition::open
      #8 handler::ha_open
      #9 open_table_from_share
      #10 open_table
      #11 open_and_process_table
      #12 open_tables
      #13 open_and_lock_tables
      #14 open_and_lock_tables
      #15 execute_sqlcom_select
      
      now allocates 3104 bytes instead of 632 (on 64 bit machine).

    modified:
      storage/innobase/handler/ha_innodb.cc
      storage/innobase/handler/handler0alter.cc
      storage/innobase/include/data0data.h
      storage/innobase/include/data0data.ic
      storage/innobase/include/row0mysql.h
      storage/innobase/row/row0mysql.c
 3565 Vasil Dimov	2011-11-07
      Partial fix for Bug#11764622 57480: MEMORY LEAK WHEN HAVING 256+ TABLES
      
      Increase the size of the heap used by pars_sql() in order to minimize the
      number of additional allocations. It is not clear exactly how many bytes
      will be allocated from this heap because it depends on the input data.
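
      The change itself is the one-liner in the pars0pars.c diff below:

        heap = mem_heap_create(16000);  /* was mem_heap_create(256) */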
      
      In this test case:
      
        CREATE TABLE `ptest1` (
          `a` int(11) NOT NULL AUTO_INCREMENT,
          `b` varchar(64000) DEFAULT NULL,
          PRIMARY KEY (`a`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        /*!50100 PARTITION BY KEY (a)
        PARTITIONS 200 */
      
        SELECT * FROM ptest1;
      
      the patch removes the following allocations done during the SELECT:
      
      (allocated size=1040, executed 200 times during SELECT)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_add_block
      #4 mem_heap_alloc
      #5 mem_heap_dup
      #6 pars_sql
      #7 que_eval_sql
      #8 dict_stats_fetch_from_ps
      #9 dict_stats_update
      #10 dict_table_open_on_name
      #11 ha_innobase::open
      #12 handler::ha_open
      #13 ha_partition::open
      #14 handler::ha_open
      #15 open_table_from_share
      
      (allocated size=2200, executed 200 times during SELECT)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_add_block
      #4 mem_heap_alloc
      #5 mem_heap_zalloc
      #6 sym_tab_add_id
      #7 yylex
      #8 yyparse
      #9 pars_sql
      #10 que_eval_sql
      #11 dict_stats_fetch_from_ps
      #12 dict_stats_update
      #13 dict_table_open_on_name
      #14 ha_innobase::open
      #15 handler::ha_open
      
      (allocated size=4520, executed 200 times during SELECT)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_add_block
      #4 mem_heap_alloc
      #5 mem_heap_zalloc
      #6 sym_tab_add_id
      #7 yylex
      #8 yyparse
      #9 pars_sql
      #10 que_eval_sql
      #11 dict_stats_fetch_from_ps
      #12 dict_stats_update
      #13 dict_table_open_on_name
      #14 ha_innobase::open
      #15 handler::ha_open
      
      (allocated size=8120, executed 200 times during SELECT)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_add_block
      #4 mem_heap_alloc
      #5 mem_heap_zalloc
      #6 sym_tab_add_id
      #7 yylex
      #8 yyparse
      #9 pars_sql
      #10 que_eval_sql
      #11 dict_stats_fetch_from_ps
      #12 dict_stats_update
      #13 dict_table_open_on_name
      #14 ha_innobase::open
      #15 handler::ha_open
      
      and in this call:
      
      (allocated size=376, executed 200 times during SELECT)
      #1 mem_area_alloc
      #2 mem_heap_create_block
      #3 mem_heap_create_func
      #4 pars_sql
      #5 que_eval_sql
      #6 dict_stats_fetch_from_ps
      #7 dict_stats_update
      #8 dict_table_open_on_name
      #9 ha_innobase::open
      #10 handler::ha_open
      #11 ha_partition::open
      #12 handler::ha_open
      #13 open_table_from_share
      #14 open_table
      #15 open_and_process_table
      
      it allocates 16120 instead of 376 bytes.

    modified:
      storage/innobase/pars/pars0pars.c
 3564 Jorgen Loland	2011-11-04
      BUG#12997905 Followup patch
      
      Hopefully PB stops barking about two VALGRIND errors after push
      of this bug.
     @ mysql-test/valgrind.supp
         Removed two valgrind suppression patterns.

    modified:
      mysql-test/valgrind.supp
=== modified file 'storage/innobase/btr/btr0pcur.c'
--- a/storage/innobase/btr/btr0pcur.c	revid:jorgen.loland@stripped
+++ b/storage/innobase/btr/btr0pcur.c	revid:vasil.dimov@stripped
@@ -52,12 +52,13 @@ btr_pcur_create_for_mysql(void)
 }
 
 /**************************************************************//**
-Frees the memory for a persistent cursor object. */
+Resets a persistent cursor object, freeing ::old_rec_buf if it is
+allocated and resetting the other members to their initial values. */
 UNIV_INTERN
 void
-btr_pcur_free_for_mysql(
-/*====================*/
-	btr_pcur_t*	cursor)	/*!< in, own: persistent cursor */
+btr_pcur_reset(
+/*===========*/
+	btr_pcur_t*	cursor)	/*!< in, out: persistent cursor */
 {
 	if (cursor->old_rec_buf != NULL) {
 
@@ -66,6 +67,7 @@ btr_pcur_free_for_mysql(
 		cursor->old_rec_buf = NULL;
 	}
 
+	cursor->btr_cur.index = NULL;
 	cursor->btr_cur.page_cur.rec = NULL;
 	cursor->old_rec = NULL;
 	cursor->old_n_fields = 0;
@@ -73,7 +75,17 @@ btr_pcur_free_for_mysql(
 
 	cursor->latch_mode = BTR_NO_LATCHES;
 	cursor->pos_state = BTR_PCUR_NOT_POSITIONED;
+}
 
+/**************************************************************//**
+Frees the memory for a persistent cursor object. */
+UNIV_INTERN
+void
+btr_pcur_free_for_mysql(
+/*====================*/
+	btr_pcur_t*	cursor)	/*!< in, own: persistent cursor */
+{
+	btr_pcur_reset(cursor);
 	mem_free(cursor);
 }
 

=== modified file 'storage/innobase/dict/dict0load.c'
--- a/storage/innobase/dict/dict0load.c	revid:jorgen.loland@stripped
+++ b/storage/innobase/dict/dict0load.c	revid:vasil.dimov@stripped
@@ -44,6 +44,7 @@ Created 4/24/1996 Heikki Tuuri
 #include "dict0priv.h"
 #include "ha_prototypes.h" /* innobase_casedn_str() */
 
+#include "mysql_com.h" /* NAME_LEN */
 
 /** Following are six InnoDB system tables */
 static const char* SYSTEM_TABLE_NAME[] = {
@@ -2247,8 +2248,11 @@ dict_load_foreigns(
 	ibool		check_charsets)	/*!< in: TRUE=check charset
 					compatibility */
 {
+	char		tuple_buf[DTUPLE_EST_ALLOC(1)];
+	/* database name + table name + '/' + '\0' + space for prefixes
+	like #mysql50# */
+	char		fk_id[2 * NAME_LEN + 64];
 	btr_pcur_t	pcur;
-	mem_heap_t*	heap;
 	dtuple_t*	tuple;
 	dfield_t*	dfield;
 	dict_index_t*	sec_index;
@@ -2256,7 +2260,6 @@ dict_load_foreigns(
 	const rec_t*	rec;
 	const byte*	field;
 	ulint		len;
-	char*		id ;
 	ulint		err;
 	mtr_t		mtr;
 
@@ -2283,9 +2286,8 @@ dict_load_foreigns(
 	sec_index = dict_table_get_next_index(
 		dict_table_get_first_index(sys_foreign));
 start_load:
-	heap = mem_heap_create(256);
 
-	tuple  = dtuple_create(heap, 1);
+	tuple = dtuple_create_from_mem(tuple_buf, sizeof(tuple_buf), 1);
 	dfield = dtuple_get_nth_field(tuple, 0);
 
 	dfield_set_data(dfield, table_name, ut_strlen(table_name));
@@ -2339,7 +2341,9 @@ loop:
 
 	/* Now we get a foreign key constraint id */
 	field = rec_get_nth_field_old(rec, 1, &len);
-	id = mem_heap_strdupl(heap, (char*) field, len);
+	ut_a(len < sizeof(fk_id));
+	memcpy(fk_id, field, len);
+	fk_id[len] = '\0';
 
 	btr_pcur_store_position(&pcur, &mtr);
 
@@ -2347,11 +2351,10 @@ loop:
 
 	/* Load the foreign constraint definition to the dictionary cache */
 
-	err = dict_load_foreign(id, check_charsets, check_recursive);
+	err = dict_load_foreign(fk_id, check_charsets, check_recursive);
 
 	if (err != DB_SUCCESS) {
 		btr_pcur_close(&pcur);
-		mem_heap_free(heap);
 
 		return(err);
 	}
@@ -2367,7 +2370,6 @@ next_rec:
 load_next_index:
 	btr_pcur_close(&pcur);
 	mtr_commit(&mtr);
-	mem_heap_free(heap);
 
 	sec_index = dict_table_get_next_index(sec_index);
 

=== modified file 'storage/innobase/handler/ha_innodb.cc'
--- a/storage/innobase/handler/ha_innodb.cc	revid:jorgen.loland@stripped
+++ b/storage/innobase/handler/ha_innodb.cc	revid:vasil.dimov@stripped
@@ -4204,9 +4204,8 @@ table_opened:
 		DBUG_RETURN(HA_ERR_NO_SUCH_TABLE);
 	}
 
-	prebuilt = row_create_prebuilt(ib_table);
+	prebuilt = row_create_prebuilt(ib_table, table->s->reclength);
 
-	prebuilt->mysql_row_len = table->s->reclength;
 	prebuilt->default_rec = table->s->default_values;
 	ut_ad(prebuilt->default_rec);
 

=== modified file 'storage/innobase/handler/handler0alter.cc'
--- a/storage/innobase/handler/handler0alter.cc	revid:jorgen.loland@stripped
+++ b/storage/innobase/handler/handler0alter.cc	revid:vasil.dimov@stripped
@@ -1083,7 +1083,12 @@ ha_innobase::final_add_index(
 			trx_commit_for_mysql(prebuilt->trx);
 			row_prebuilt_free(prebuilt, TRUE);
 			error = row_merge_drop_table(trx, old_table);
-			prebuilt = row_create_prebuilt(add->indexed_table);
+			prebuilt = row_create_prebuilt(add->indexed_table,
+				0 /* XXX Do we know the mysql_row_len here?
+				Before the addition of this parameter to
+				row_create_prebuilt() the mysql_row_len
+				member was left 0 (from zalloc) in the
+				prebuilt object. */);
 		}
 
 		err = convert_error_code_to_mysql(

=== modified file 'storage/innobase/include/btr0pcur.h'
--- a/storage/innobase/include/btr0pcur.h	revid:jorgen.loland@stripped
+++ b/storage/innobase/include/btr0pcur.h	revid:vasil.dimov@stripped
@@ -53,6 +53,16 @@ UNIV_INTERN
 btr_pcur_t*
 btr_pcur_create_for_mysql(void);
 /*============================*/
+
+/**************************************************************//**
+Resets a persistent cursor object, freeing ::old_rec_buf if it is
+allocated and resetting the other members to their initial values. */
+UNIV_INTERN
+void
+btr_pcur_reset(
+/*===========*/
+	btr_pcur_t*	cursor);/*!< in, out: persistent cursor */
+
 /**************************************************************//**
 Frees the memory for a persistent cursor object. */
 UNIV_INTERN

=== modified file 'storage/innobase/include/data0data.h'
--- a/storage/innobase/include/data0data.h	revid:jorgen.loland@stripped
+++ b/storage/innobase/include/data0data.h	revid:vasil.dimov@stripped
@@ -231,6 +231,26 @@ dtuple_set_n_fields_cmp(
 	dtuple_t*	tuple,		/*!< in: tuple */
 	ulint		n_fields_cmp);	/*!< in: number of fields used in
 					comparisons in rem0cmp.* */
+
+/* Estimate the number of bytes that are going to be allocated when
+creating a new dtuple_t object */
+#define DTUPLE_EST_ALLOC(n_fields)	\
+	(sizeof(dtuple_t) + (n_fields) * sizeof(dfield_t))
+
+/**********************************************************//**
+Creates a data tuple from an already allocated chunk of memory.
+The size of the chunk must be at least DTUPLE_EST_ALLOC(n_fields).
+The default value for number of fields used in record comparisons
+for this tuple is n_fields.
+@return	created tuple (inside buf) */
+UNIV_INLINE
+dtuple_t*
+dtuple_create_from_mem(
+/*===================*/
+	void*	buf,		/*!< in, out: buffer to use */
+	ulint	buf_size,	/*!< in: buffer size */
+	ulint	n_fields);	/*!< in: number of fields */
+
 /**********************************************************//**
 Creates a data tuple to a memory heap. The default value for number
 of fields used in record comparisons for this tuple is n_fields.
@@ -240,7 +260,8 @@ dtuple_t*
 dtuple_create(
 /*==========*/
 	mem_heap_t*	heap,	/*!< in: memory heap where the tuple
-				is created */
+				is created, DTUPLE_EST_ALLOC(n_fields)
+				bytes will be allocated from this heap */
 	ulint		n_fields); /*!< in: number of fields */
 
 /**********************************************************//**

=== modified file 'storage/innobase/include/data0data.ic'
--- a/storage/innobase/include/data0data.ic	revid:jorgen.loland@stripped
+++ b/storage/innobase/include/data0data.ic	revid:vasil.dimov@stripped
@@ -348,23 +348,25 @@ dtuple_get_nth_field(
 #endif /* UNIV_DEBUG */
 
 /**********************************************************//**
-Creates a data tuple to a memory heap. The default value for number
-of fields used in record comparisons for this tuple is n_fields.
-@return	own: created tuple */
+Creates a data tuple from an already allocated chunk of memory.
+The size of the chunk must be at least DTUPLE_EST_ALLOC(n_fields).
+The default value for number of fields used in record comparisons
+for this tuple is n_fields.
+@return	created tuple (inside buf) */
 UNIV_INLINE
 dtuple_t*
-dtuple_create(
-/*==========*/
-	mem_heap_t*	heap,	/*!< in: memory heap where the tuple
-				is created */
-	ulint		n_fields) /*!< in: number of fields */
+dtuple_create_from_mem(
+/*===================*/
+	void*	buf,		/*!< in, out: buffer to use */
+	ulint	buf_size,	/*!< in: buffer size */
+	ulint	n_fields)	/*!< in: number of fields */
 {
 	dtuple_t*	tuple;
 
-	ut_ad(heap);
+	ut_ad(buf != NULL);
+	ut_a(buf_size >= DTUPLE_EST_ALLOC(n_fields));
 
-	tuple = (dtuple_t*) mem_heap_alloc(heap, sizeof(dtuple_t)
-					   + n_fields * sizeof(dfield_t));
+	tuple = (dtuple_t*) buf;
 	tuple->info_bits = 0;
 	tuple->n_fields = n_fields;
 	tuple->n_fields_cmp = n_fields;
@@ -386,9 +388,38 @@ dtuple_create(
 			dfield_get_type(field)->mtype = DATA_ERROR;
 		}
 	}
+#endif
+	return(tuple);
+}
+
+/**********************************************************//**
+Creates a data tuple to a memory heap. The default value for number
+of fields used in record comparisons for this tuple is n_fields.
+@return	own: created tuple */
+UNIV_INLINE
+dtuple_t*
+dtuple_create(
+/*==========*/
+	mem_heap_t*	heap,	/*!< in: memory heap where the tuple
+				is created, DTUPLE_EST_ALLOC(n_fields)
+				bytes will be allocated from this heap */
+	ulint		n_fields) /*!< in: number of fields */
+{
+	void*		buf;
+	ulint		buf_size;
+	dtuple_t*	tuple;
+
+	ut_ad(heap);
+
+	buf_size = DTUPLE_EST_ALLOC(n_fields);
+	buf = mem_heap_alloc(heap, buf_size);
 
+	tuple = dtuple_create_from_mem(buf, buf_size, n_fields);
+
+#ifdef UNIV_DEBUG
 	UNIV_MEM_INVALID(tuple->fields, n_fields * sizeof *tuple->fields);
 #endif
+
 	return(tuple);
 }
 

=== modified file 'storage/innobase/include/row0mysql.h'
--- a/storage/innobase/include/row0mysql.h	revid:jorgen.loland@stripped
+++ b/storage/innobase/include/row0mysql.h	revid:vasil.dimov@stripped
@@ -168,7 +168,9 @@ UNIV_INTERN
 row_prebuilt_t*
 row_create_prebuilt(
 /*================*/
-	dict_table_t*	table);	/*!< in: Innobase table handle */
+	dict_table_t*	table,		/*!< in: Innobase table handle */
+	ulint		mysql_row_len);	/*!< in: length in bytes of a row in
+					the MySQL format */
 /********************************************************************//**
 Free a prebuilt struct for a MySQL table handle. */
 UNIV_INTERN
@@ -681,9 +683,9 @@ struct row_prebuilt_struct {
 					in inserts */
 	que_fork_t*	upd_graph;	/*!< Innobase SQL query graph used
 					in updates or deletes */
-	btr_pcur_t*	pcur;		/*!< persistent cursor used in selects
+	btr_pcur_t	pcur;		/*!< persistent cursor used in selects
 					and updates */
-	btr_pcur_t*	clust_pcur;	/*!< persistent cursor used in
+	btr_pcur_t	clust_pcur;	/*!< persistent cursor used in
 					some selects and updates */
 	que_fork_t*	sel_graph;	/*!< dummy query graph used in
 					selects */

=== modified file 'storage/innobase/pars/pars0pars.c'
--- a/storage/innobase/pars/pars0pars.c	revid:jorgen.loland@stripped
+++ b/storage/innobase/pars/pars0pars.c	revid:vasil.dimov@stripped
@@ -1859,7 +1859,7 @@ pars_sql(
 
 	ut_ad(str);
 
-	heap = mem_heap_create(256);
+	heap = mem_heap_create(16000);
 
 	/* Currently, the parser is not reentrant: */
 	ut_ad(mutex_own(&(dict_sys->mutex)));

=== modified file 'storage/innobase/row/row0mysql.c'
--- a/storage/innobase/row/row0mysql.c	revid:jorgen.loland@stripped
+++ b/storage/innobase/row/row0mysql.c	revid:vasil.dimov@stripped
@@ -677,17 +677,52 @@ UNIV_INTERN
 row_prebuilt_t*
 row_create_prebuilt(
 /*================*/
-	dict_table_t*	table)	/*!< in: Innobase table handle */
+	dict_table_t*	table,		/*!< in: Innobase table handle */
+	ulint		mysql_row_len)	/*!< in: length in bytes of a row in
+					the MySQL format */
 {
 	row_prebuilt_t*	prebuilt;
 	mem_heap_t*	heap;
 	dict_index_t*	clust_index;
 	dtuple_t*	ref;
+	ulint		search_tuple_n_fields;
 	ulint		ref_len;
 
-	heap = mem_heap_create(sizeof *prebuilt + 128);
+	search_tuple_n_fields = 2 * dict_table_get_n_cols(table);
+
+	clust_index = dict_table_get_first_index(table);
+
+	ref_len = dict_index_get_n_unique(clust_index);
+
+	/* We allocate enough space for the objects that are likely to
+	be created later in order to minimize the number of malloc()
+	calls */
+	heap = mem_heap_create(sizeof(*prebuilt)
+			       /* allocd in this function */
+			       + DTUPLE_EST_ALLOC(search_tuple_n_fields)
+			       + DTUPLE_EST_ALLOC(ref_len)
+			       /* allocd in row_prebuild_sel_graph() */
+			       + sizeof(sel_node_t)
+			       + sizeof(que_fork_t)
+			       + sizeof(que_thr_t)
+			       /* allocd in row_get_prebuilt_update_vector() */
+			       + sizeof(upd_node_t)
+			       + sizeof(upd_t)
+			       + sizeof(upd_field_t)
+			         * dict_table_get_n_cols(table)
+			       + sizeof(que_fork_t)
+			       + sizeof(que_thr_t)
+			       /* allocd in row_get_prebuilt_insert_row() */
+			       + sizeof(ins_node_t)
+			       /* mysql_row_len could be huge and we are not
+			       sure if this prebuilt instance is going to be
+			       used in inserts */
+			       + (mysql_row_len < 256 ? mysql_row_len : 0)
+			       + DTUPLE_EST_ALLOC(dict_table_get_n_cols(table))
+			       + sizeof(que_fork_t)
+			       + sizeof(que_thr_t));
 
-	prebuilt = mem_heap_zalloc(heap, sizeof *prebuilt);
+	prebuilt = mem_heap_zalloc(heap, sizeof(*prebuilt));
 
 	prebuilt->magic_n = ROW_PREBUILT_ALLOCATED;
 	prebuilt->magic_n2 = ROW_PREBUILT_ALLOCATED;
@@ -697,24 +732,19 @@ row_create_prebuilt(
 	prebuilt->sql_stat_start = TRUE;
 	prebuilt->heap = heap;
 
-	prebuilt->pcur = btr_pcur_create_for_mysql();
-	prebuilt->clust_pcur = btr_pcur_create_for_mysql();
+	btr_pcur_reset(&prebuilt->pcur);
+	btr_pcur_reset(&prebuilt->clust_pcur);
 
 	prebuilt->select_lock_type = LOCK_NONE;
 	prebuilt->stored_select_lock_type = 99999999;
 	UNIV_MEM_INVALID(&prebuilt->stored_select_lock_type,
-			 sizeof prebuilt->stored_select_lock_type);
+			 sizeof(prebuilt->stored_select_lock_type));
 
-	prebuilt->search_tuple = dtuple_create(
-		heap, 2 * dict_table_get_n_cols(table));
-
-	clust_index = dict_table_get_first_index(table);
+	prebuilt->search_tuple = dtuple_create(heap, search_tuple_n_fields);
 
 	/* Make sure that search_tuple is long enough for clustered index */
 	ut_a(2 * dict_table_get_n_cols(table) >= clust_index->n_fields);
 
-	ref_len = dict_index_get_n_unique(clust_index);
-
 	ref = dtuple_create(heap, ref_len);
 
 	dict_index_copy_types(ref, clust_index, ref_len);
@@ -730,6 +760,8 @@ row_create_prebuilt(
 
 	prebuilt->autoinc_last_value = 0;
 
+	prebuilt->mysql_row_len = mysql_row_len;
+
 	return(prebuilt);
 }
 
@@ -765,8 +797,8 @@ row_prebuilt_free(
 	prebuilt->magic_n = ROW_PREBUILT_FREED;
 	prebuilt->magic_n2 = ROW_PREBUILT_FREED;
 
-	btr_pcur_free_for_mysql(prebuilt->pcur);
-	btr_pcur_free_for_mysql(prebuilt->clust_pcur);
+	btr_pcur_reset(&prebuilt->pcur);
+	btr_pcur_reset(&prebuilt->clust_pcur);
 
 	if (prebuilt->mysql_template) {
 		mem_free(prebuilt->mysql_template);
@@ -1421,11 +1453,11 @@ row_update_for_mysql(
 
 	clust_index = dict_table_get_first_index(table);
 
-	if (prebuilt->pcur->btr_cur.index == clust_index) {
-		btr_pcur_copy_stored_position(node->pcur, prebuilt->pcur);
+	if (prebuilt->pcur.btr_cur.index == clust_index) {
+		btr_pcur_copy_stored_position(node->pcur, &prebuilt->pcur);
 	} else {
 		btr_pcur_copy_stored_position(node->pcur,
-					      prebuilt->clust_pcur);
+					      &prebuilt->clust_pcur);
 	}
 
 	ut_a(node->pcur->rel_pos == BTR_PCUR_ON);
@@ -1529,8 +1561,8 @@ row_unlock_for_mysql(
 					clust_pcur, and we do not need
 					to reposition the cursors. */
 {
-	btr_pcur_t*	pcur		= prebuilt->pcur;
-	btr_pcur_t*	clust_pcur	= prebuilt->clust_pcur;
+	btr_pcur_t*	pcur		= &prebuilt->pcur;
+	btr_pcur_t*	clust_pcur	= &prebuilt->clust_pcur;
 	trx_t*		trx		= prebuilt->trx;
 
 	ut_ad(prebuilt && trx);

=== modified file 'storage/innobase/row/row0sel.c'
--- a/storage/innobase/row/row0sel.c	revid:jorgen.loland@stripped
+++ b/storage/innobase/row/row0sel.c	revid:vasil.dimov@stripped
@@ -3028,17 +3028,17 @@ row_sel_get_clust_rec_for_mysql(
 
 	btr_pcur_open_with_no_init(clust_index, prebuilt->clust_ref,
 				   PAGE_CUR_LE, BTR_SEARCH_LEAF,
-				   prebuilt->clust_pcur, 0, mtr);
+				   &prebuilt->clust_pcur, 0, mtr);
 
-	clust_rec = btr_pcur_get_rec(prebuilt->clust_pcur);
+	clust_rec = btr_pcur_get_rec(&prebuilt->clust_pcur);
 
-	prebuilt->clust_pcur->trx_if_known = trx;
+	prebuilt->clust_pcur.trx_if_known = trx;
 
 	/* Note: only if the search ends up on a non-infimum record is the
 	low_match value the real match to the search tuple */
 
 	if (!page_rec_is_user_rec(clust_rec)
-	    || btr_pcur_get_low_match(prebuilt->clust_pcur)
+	    || btr_pcur_get_low_match(&prebuilt->clust_pcur)
 	    < dict_index_get_n_unique(clust_index)) {
 
 		/* In a rare case it is possible that no clust rec is found
@@ -3086,7 +3086,7 @@ row_sel_get_clust_rec_for_mysql(
 		we set a LOCK_REC_NOT_GAP type lock */
 
 		err = lock_clust_rec_read_check_and_lock(
-			0, btr_pcur_get_block(prebuilt->clust_pcur),
+			0, btr_pcur_get_block(&prebuilt->clust_pcur),
 			clust_rec, clust_index, *offsets,
 			prebuilt->select_lock_type, LOCK_REC_NOT_GAP, thr);
 		switch (err) {
@@ -3164,7 +3164,7 @@ func_exit:
 		/* We may use the cursor in update or in unlock_row():
 		store its position */
 
-		btr_pcur_store_position(prebuilt->clust_pcur, mtr);
+		btr_pcur_store_position(&prebuilt->clust_pcur, mtr);
 	}
 
 err_exit:
@@ -3433,7 +3433,7 @@ row_sel_try_search_shortcut_for_mysql(
 {
 	dict_index_t*	index		= prebuilt->index;
 	const dtuple_t*	search_tuple	= prebuilt->search_tuple;
-	btr_pcur_t*	pcur		= prebuilt->pcur;
+	btr_pcur_t*	pcur		= &prebuilt->pcur;
 	trx_t*		trx		= prebuilt->trx;
 	const rec_t*	rec;
 
@@ -3604,7 +3604,7 @@ row_search_for_mysql(
 	dict_index_t*	index		= prebuilt->index;
 	ibool		comp		= dict_table_is_comp(index->table);
 	const dtuple_t*	search_tuple	= prebuilt->search_tuple;
-	btr_pcur_t*	pcur		= prebuilt->pcur;
+	btr_pcur_t*	pcur		= &prebuilt->pcur;
 	trx_t*		trx		= prebuilt->trx;
 	dict_index_t*	clust_index;
 	que_thr_t*	thr;

No bundle (reason: useless for push emails).