List: Commits
From: vasil.dimov
Date: June 8 2012 7:20am
Subject: bzr push into mysql-trunk branch (vasil.dimov:3968 to 3971)
 3971 Vasil Dimov	2012-06-08
      Non-functional change: remove the prototype of the private function
      dict_stats_snapshot_free() from the public dict0stats.h, make the function
      static in dict0stats.cc, and move it and dict_stats_snapshot_create()
      before their first use in dict0stats.cc.
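
      A minimal sketch of the resulting layout (hypothetical names, not the
      actual InnoDB code): a file-local helper is made static and defined
      before its first caller, so the public header needs no prototype and
      the .cc file needs no forward declaration.

      /* foo.cc -- illustration only */
      static void helper()      /* file-local; not declared in foo.h */
      {
              /* ... */
      }

      void public_api()         /* declared in foo.h */
      {
              helper();         /* already defined above, no prototype needed */
      }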

    modified:
      storage/innobase/dict/dict0stats.cc
      storage/innobase/include/dict0stats.h
 3970 Vasil Dimov	2012-06-08
      Remove the macro PREPARE_PINFO_FOR_INDEX_SAVE() and expand its code inline
      at the two places where it was used, so that the Valgrind checks produce
      more precise line number reports.
      
      Also deploy Valgrind checks when creating the table stats snapshot.
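
      To illustrate why the inlining helps (hypothetical names, using Valgrind's
      public memcheck.h client request rather than the InnoDB wrapper): a
      multi-statement macro expands on the single source line where it is
      invoked, so every check inside it is reported against that one line,
      while written-out statements each keep their own line number.

      #include <valgrind/memcheck.h>  /* VALGRIND_CHECK_MEM_IS_DEFINED */

      static int a, b;

      #define CHECK_BOTH() \
      do { \
              VALGRIND_CHECK_MEM_IS_DEFINED(&a, sizeof(a)); \
              VALGRIND_CHECK_MEM_IS_DEFINED(&b, sizeof(b)); \
      } while (0)

      static void with_macro()
      {
              CHECK_BOTH();     /* any report points at this single line */
      }

      static void written_out()
      {
              VALGRIND_CHECK_MEM_IS_DEFINED(&a, sizeof(a));  /* reported here */
              VALGRIND_CHECK_MEM_IS_DEFINED(&b, sizeof(b));  /* ...or here */
      }

      int main()
      {
              with_macro();
              written_out();
              return 0;
      }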

    modified:
      storage/innobase/dict/dict0stats.cc
 3969 Vasil Dimov	2012-06-08
      Followup to WL#6189 Turn InnoDB persistent statistics ON by default
      
      Adjust mtr tests, part 29.
      
      Persistent stats use a different sampling algorithm, so the resulting
      numbers may differ from those produced by transient stats.
      
      Also, persistent stats are updated less frequently or with a delay, so
      they may not be as up to date as transient stats would have been, even
      if both algorithms would return the same results.
      
      The order of the rows returned by a SELECT query depends on the stats;
      thus, where possible, we fix tests by prepending "-- sorted_result" to
      SELECTs that happen to return rows in a different order.
      
      Where possible, each failing test was fixed by manually running
      ANALYZE TABLE. This works when both the transient and the persistent
      sampling algorithms end up with the same numbers for the given table
      and its data.
      
      If persistent stats produce different numbers, then the test failures
      were fixed by forcing transient stats for the table with
      CREATE TABLE ... STATS_PERSISTENT=0.
      
      We intentionally do not fix the tests by using persistent stats and
      adjusting the EXPLAIN output in the .result files, because a different
      execution plan may cause a different code path to be executed than the
      one originally intended by the test.
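
      As a rough illustration of the three kinds of fixes named above, in mtr
      syntax (hypothetical table t1, not taken from any specific test):

      # 1) Make the row order deterministic:
      -- sorted_result
      SELECT a, b FROM t1;

      # 2) Recompute the stats explicitly before the order-sensitive queries:
      ANALYZE TABLE t1;

      # 3) Fall back to transient stats for this table only:
      CREATE TABLE t1 (a INT PRIMARY KEY, b INT)
             ENGINE=InnoDB STATS_PERSISTENT=0;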

    modified:
      mysql-test/include/gis_generic.inc
 3968 Nisha Gopalakrishnan	2012-06-08
      Bug#13840553:64617: MYSQL MUST IMPROVE ERROR MESSAGES
      
      NOTE: This is a follow-up patch to fix a test failure in:
            innodb.innodb_8k

    modified:
      mysql-test/suite/innodb/r/innodb_8k.result
      mysql-test/suite/innodb/t/innodb_8k.test
=== modified file 'mysql-test/include/gis_generic.inc'
--- a/mysql-test/include/gis_generic.inc	revid:nisha.gopalakrishnan@stripped
+++ b/mysql-test/include/gis_generic.inc	revid:vasil.dimov@stripped
@@ -72,6 +72,19 @@ INSERT into gis_geometry SELECT * FROM g
 INSERT into gis_geometry SELECT * FROM gis_multi_polygon;
 INSERT into gis_geometry SELECT * FROM gis_geometrycollection;
 
+-- disable_query_log
+-- disable_result_log
+ANALYZE TABLE gis_point;
+ANALYZE TABLE gis_line;
+ANALYZE TABLE gis_polygon;
+ANALYZE TABLE gis_multi_point;
+ANALYZE TABLE gis_multi_line;
+ANALYZE TABLE gis_multi_polygon;
+ANALYZE TABLE gis_geometrycollection;
+ANALYZE TABLE gis_geometry;
+-- enable_result_log
+-- enable_query_log
+
 SELECT fid, AsText(g) FROM gis_point ORDER by fid;
 SELECT fid, AsText(g) FROM gis_line ORDER by fid;
 SELECT fid, AsText(g) FROM gis_polygon ORDER by fid;

=== modified file 'storage/innobase/dict/dict0stats.cc'
--- a/storage/innobase/dict/dict0stats.cc	revid:nisha.gopalakrishnan@stripped
+++ b/storage/innobase/dict/dict0stats.cc	revid:vasil.dimov@stripped
@@ -142,48 +142,6 @@ typedef struct dict_stats_struct {
 } dict_stats_t;
 
 /*********************************************************************//**
-Duplicate the stats of a table and its indexes.
-This function creates a dummy dict_table_t object and copies the input
-table's stats into it. The returned table object is not in the dictionary
-cache, cannot be accessed by any other threads and has only the following
-members initialized:
-dict_table_t::id
-dict_table_t::heap
-dict_table_t::name
-dict_table_t::indexes<>
-dict_table_t::stat_initialized
-dict_table_t::stat_persistent
-dict_table_t::stat_n_rows
-dict_table_t::stat_clustered_index_size
-dict_table_t::stat_sum_of_other_index_sizes
-dict_table_t::stat_modified_counter
-dict_table_t::magic_n
-for each entry in dict_table_t::indexes, the following are initialized:
-dict_index_t::id
-dict_index_t::name
-dict_index_t::table_name
-dict_index_t::table (points to the above semi-initialized object)
-dict_index_t::type
-dict_index_t::n_uniq
-dict_index_t::fields[] (only first n_uniq and only fields[i].name)
-dict_index_t::indexes<>
-dict_index_t::stat_n_diff_key_vals[]
-dict_index_t::stat_n_sample_sizes[]
-dict_index_t::stat_n_non_null_key_vals[]
-dict_index_t::stat_index_size
-dict_index_t::stat_n_leaf_pages
-dict_index_t::magic_n
-The returned object should be freed with dict_stats_snapshot_free()
-when no longer needed.
-@return incomplete table object */
-static
-dict_table_t*
-dict_stats_snapshot_create(
-/*=======================*/
-	const dict_table_t*	table);		/*!< in: table whose stats
-						to copy */
-
-/*********************************************************************//**
 Checks whether the persistent statistics storage exists and that all
 tables have the proper structure.
 dict_stats_persistent_storage_check() @{
@@ -332,6 +290,214 @@ dict_stats_exec_sql(
 }
 
 /*********************************************************************//**
+Duplicate the stats of a table and its indexes.
+This function creates a dummy dict_table_t object and copies the input
+table's stats into it. The returned table object is not in the dictionary
+cache, cannot be accessed by any other threads and has only the following
+members initialized:
+dict_table_t::id
+dict_table_t::heap
+dict_table_t::name
+dict_table_t::indexes<>
+dict_table_t::stat_initialized
+dict_table_t::stat_persistent
+dict_table_t::stat_n_rows
+dict_table_t::stat_clustered_index_size
+dict_table_t::stat_sum_of_other_index_sizes
+dict_table_t::stat_modified_counter
+dict_table_t::magic_n
+for each entry in dict_table_t::indexes, the following are initialized:
+dict_index_t::id
+dict_index_t::name
+dict_index_t::table_name
+dict_index_t::table (points to the above semi-initialized object)
+dict_index_t::type
+dict_index_t::n_uniq
+dict_index_t::fields[] (only first n_uniq and only fields[i].name)
+dict_index_t::indexes<>
+dict_index_t::stat_n_diff_key_vals[]
+dict_index_t::stat_n_sample_sizes[]
+dict_index_t::stat_n_non_null_key_vals[]
+dict_index_t::stat_index_size
+dict_index_t::stat_n_leaf_pages
+dict_index_t::magic_n
+The returned object should be freed with dict_stats_snapshot_free()
+when no longer needed.
+@return incomplete table object */
+static
+dict_table_t*
+dict_stats_snapshot_create(
+/*=======================*/
+	const dict_table_t*	table)		/*!< in: table whose stats
+						to copy */
+{
+	size_t		heap_size;
+	dict_index_t*	index;
+
+	mutex_enter(&dict_sys->mutex);
+
+	dict_table_stats_lock(table, RW_X_LATCH);
+
+	/* Estimate the size needed for the table and all of its indexes */
+
+	heap_size = 0;
+	heap_size += sizeof(dict_table_t);
+	heap_size += strlen(table->name) + 1;
+
+	for (index = dict_table_get_first_index(table);
+	     index != NULL;
+	     index = dict_table_get_next_index(index)) {
+
+		ulint	n_uniq = dict_index_get_n_unique(index);
+
+		heap_size += sizeof(dict_index_t);
+		heap_size += strlen(index->name) + 1;
+		heap_size += n_uniq * sizeof(index->fields[0]);
+		for (ulint i = 0; i < n_uniq; i++) {
+			heap_size += strlen(index->fields[i].name) + 1;
+		}
+		heap_size += (n_uniq + 1)
+			* sizeof(index->stat_n_diff_key_vals[0]);
+		heap_size += (n_uniq + 1)
+			* sizeof(index->stat_n_sample_sizes[0]);
+		heap_size += (n_uniq + 1)
+			* sizeof(index->stat_n_non_null_key_vals[0]);
+	}
+
+	/* Allocate the memory and copy the members */
+
+	mem_heap_t*	heap;
+
+	heap = mem_heap_create(heap_size);
+
+	dict_table_t*	t;
+
+	t = (dict_table_t*) mem_heap_alloc(heap, sizeof(*t));
+
+	UNIV_MEM_ASSERT_RW(&table->id, sizeof(table->id));
+	t->id = table->id;
+
+	t->heap = heap;
+
+	UNIV_MEM_ASSERT_RW(table->name, strlen(table->name) + 1);
+	t->name = (char*) mem_heap_dup(
+		heap, table->name, strlen(table->name) + 1);
+
+	UT_LIST_INIT(t->indexes);
+
+	for (index = dict_table_get_first_index(table);
+	     index != NULL;
+	     index = dict_table_get_next_index(index)) {
+
+		dict_index_t*	idx;
+
+		idx = (dict_index_t*) mem_heap_alloc(heap, sizeof(*idx));
+
+		UNIV_MEM_ASSERT_RW(&index->id, sizeof(index->id));
+		idx->id = index->id;
+
+		UNIV_MEM_ASSERT_RW(index->name, strlen(index->name) + 1);
+		idx->name = (char*) mem_heap_dup(
+			heap, index->name, strlen(index->name) + 1);
+
+		idx->table_name = t->name;
+
+		idx->table = t;
+
+		UNIV_MEM_ASSERT_RW(&index->type, sizeof(index->type));
+		idx->type = index->type;
+
+		UNIV_MEM_ASSERT_RW(&index->n_uniq, sizeof(index->n_uniq));
+		idx->n_uniq = index->n_uniq;
+
+		idx->fields = (dict_field_t*) mem_heap_alloc(
+			heap, idx->n_uniq * sizeof(idx->fields[0]));
+
+		for (ulint i = 0; i < idx->n_uniq; i++) {
+			UNIV_MEM_ASSERT_RW(index->fields[i].name, strlen(index->fields[i].name) + 1);
+			idx->fields[i].name = (char*) mem_heap_dup(
+				heap, index->fields[i].name,
+				strlen(index->fields[i].name) + 1);
+		}
+
+		/* hook idx into t->indexes */
+		UT_LIST_ADD_LAST(indexes, t->indexes, idx);
+
+		UNIV_MEM_ASSERT_RW(index->stat_n_diff_key_vals, (idx->n_uniq + 1) * sizeof(idx->stat_n_diff_key_vals[0]));
+		idx->stat_n_diff_key_vals = (ib_uint64_t*) mem_heap_dup(
+			heap, index->stat_n_diff_key_vals,
+			(idx->n_uniq + 1)
+			* sizeof(idx->stat_n_diff_key_vals[0]));
+
+		UNIV_MEM_ASSERT_RW(index->stat_n_sample_sizes, (idx->n_uniq + 1) * sizeof(idx->stat_n_sample_sizes[0]));
+		idx->stat_n_sample_sizes = (ib_uint64_t*) mem_heap_dup(
+			heap, index->stat_n_sample_sizes,
+			(idx->n_uniq + 1)
+			* sizeof(idx->stat_n_sample_sizes[0]));
+
+		UNIV_MEM_ASSERT_RW(index->stat_n_non_null_key_vals, (idx->n_uniq + 1) * sizeof(idx->stat_n_non_null_key_vals[0]));
+		idx->stat_n_non_null_key_vals = (ib_uint64_t*) mem_heap_dup(
+			heap, index->stat_n_non_null_key_vals,
+			(idx->n_uniq + 1)
+			* sizeof(idx->stat_n_non_null_key_vals[0]));
+
+		UNIV_MEM_ASSERT_RW(&index->stat_index_size, sizeof(index->stat_index_size));
+		idx->stat_index_size = index->stat_index_size;
+
+		UNIV_MEM_ASSERT_RW(&index->stat_n_leaf_pages, sizeof(index->stat_n_leaf_pages));
+		idx->stat_n_leaf_pages = index->stat_n_leaf_pages;
+
+#ifdef UNIV_DEBUG
+		idx->magic_n = DICT_INDEX_MAGIC_N;
+#endif /* UNIV_DEBUG */
+	}
+
+	UNIV_MEM_ASSERT_RW(&table->stat_initialized, sizeof(table->stat_initialized));
+	t->stat_initialized = table->stat_initialized;
+	UNIV_MEM_ASSERT_RW(&table->stats_last_recalc, sizeof(table->stats_last_recalc));
+	t->stats_last_recalc = table->stats_last_recalc;
+	UNIV_MEM_ASSERT_RW(&table->stat_persistent, sizeof(table->stat_persistent));
+	t->stat_persistent = table->stat_persistent;
+	UNIV_MEM_ASSERT_RW(&table->stats_auto_recalc, sizeof(table->stats_auto_recalc));
+	t->stats_auto_recalc = table->stats_auto_recalc;
+	UNIV_MEM_ASSERT_RW(&table->stats_sample_pages, sizeof(table->stats_sample_pages));
+	t->stats_sample_pages = table->stats_sample_pages;
+	UNIV_MEM_ASSERT_RW(&table->stat_n_rows, sizeof(table->stat_n_rows));
+	t->stat_n_rows = table->stat_n_rows;
+	UNIV_MEM_ASSERT_RW(&table->stat_clustered_index_size, sizeof(table->stat_clustered_index_size));
+	t->stat_clustered_index_size = table->stat_clustered_index_size;
+	UNIV_MEM_ASSERT_RW(&table->stat_sum_of_other_index_sizes, sizeof(table->stat_sum_of_other_index_sizes));
+	t->stat_sum_of_other_index_sizes = table->stat_sum_of_other_index_sizes;
+	UNIV_MEM_ASSERT_RW(&table->stat_modified_counter, sizeof(table->stat_modified_counter));
+	t->stat_modified_counter = table->stat_modified_counter;
+	UNIV_MEM_ASSERT_RW(&table->stats_bg_flag, sizeof(table->stats_bg_flag));
+	t->stats_bg_flag = table->stats_bg_flag;
+#ifdef UNIV_DEBUG
+	t->magic_n = DICT_TABLE_MAGIC_N;
+#endif /* UNIV_DEBUG */
+
+	dict_table_stats_unlock(table, RW_X_LATCH);
+
+	mutex_exit(&dict_sys->mutex);
+
+	return(t);
+}
+
+/*********************************************************************//**
+Free the resources occupied by an object returned by
+dict_stats_snapshot_create().
+dict_stats_snapshot_free() @{ */
+static
+void
+dict_stats_snapshot_free(
+/*=====================*/
+	dict_table_t*	t)	/*!< in: dummy table object to free */
+{
+	mem_heap_free(t->heap);
+}
+/* @} */
+
+/*********************************************************************//**
 Write all zeros (or 1 where it makes sense) into a table and its indexes'
 statistics members. The resulting stats correspond to an empty table.
 dict_stats_empty_table() @{ */
@@ -1718,37 +1884,33 @@ dict_stats_save_index_stat(
 	ut_ad(rw_lock_own(&dict_operation_lock, RW_LOCK_EX));
 	ut_ad(mutex_own(&dict_sys->mutex));
 
-#define PREPARE_PINFO_FOR_INDEX_SAVE() \
-do { \
-	pinfo = pars_info_create(); \
-	UNIV_MEM_ASSERT_RW(index->table->name, dict_get_db_name_len(index->table->name)); \
-	pars_info_add_literal(pinfo, "database_name", index->table->name, \
-		dict_get_db_name_len(index->table->name), \
-		DATA_VARCHAR, 0); \
-	UNIV_MEM_ASSERT_RW(dict_remove_db_name(index->table->name), strlen(dict_remove_db_name(index->table->name))); \
-	pars_info_add_str_literal(pinfo, "table_name", \
-		dict_remove_db_name(index->table->name)); \
-	UNIV_MEM_ASSERT_RW(index->name, strlen(index->name)); \
-	pars_info_add_str_literal(pinfo, "index_name", index->name); \
-	UNIV_MEM_ASSERT_RW(&last_update, 4); \
-	pars_info_add_int4_literal(pinfo, "last_update", last_update); \
-	UNIV_MEM_ASSERT_RW(stat_name, strlen(stat_name)); \
-	pars_info_add_str_literal(pinfo, "stat_name", stat_name); \
-	UNIV_MEM_ASSERT_RW(&stat_value, 8); \
-	pars_info_add_ull_literal(pinfo, "stat_value", stat_value); \
-	if (sample_size != NULL) { \
-		UNIV_MEM_ASSERT_RW(sample_size, 8); \
-		pars_info_add_ull_literal(pinfo, "sample_size", *sample_size); \
-	} else { \
-		pars_info_add_literal(pinfo, "sample_size", NULL, \
-				      UNIV_SQL_NULL, DATA_FIXBINARY, 0); \
-	} \
-	UNIV_MEM_ASSERT_RW(stat_description, strlen(stat_description)); \
-	pars_info_add_str_literal(pinfo, "stat_description", \
-				  stat_description); \
-} while (0);
+	pinfo = pars_info_create();
+	UNIV_MEM_ASSERT_RW(index->table->name, dict_get_db_name_len(index->table->name));
+	pars_info_add_literal(pinfo, "database_name", index->table->name,
+		dict_get_db_name_len(index->table->name),
+		DATA_VARCHAR, 0);
+	UNIV_MEM_ASSERT_RW(dict_remove_db_name(index->table->name), strlen(dict_remove_db_name(index->table->name)));
+	pars_info_add_str_literal(pinfo, "table_name",
+		dict_remove_db_name(index->table->name));
+	UNIV_MEM_ASSERT_RW(index->name, strlen(index->name));
+	pars_info_add_str_literal(pinfo, "index_name", index->name);
+	UNIV_MEM_ASSERT_RW(&last_update, 4);
+	pars_info_add_int4_literal(pinfo, "last_update", last_update);
+	UNIV_MEM_ASSERT_RW(stat_name, strlen(stat_name));
+	pars_info_add_str_literal(pinfo, "stat_name", stat_name);
+	UNIV_MEM_ASSERT_RW(&stat_value, 8);
+	pars_info_add_ull_literal(pinfo, "stat_value", stat_value);
+	if (sample_size != NULL) {
+		UNIV_MEM_ASSERT_RW(sample_size, 8);
+		pars_info_add_ull_literal(pinfo, "sample_size", *sample_size);
+	} else {
+		pars_info_add_literal(pinfo, "sample_size", NULL,
+				      UNIV_SQL_NULL, DATA_FIXBINARY, 0);
+	}
+	UNIV_MEM_ASSERT_RW(stat_description, strlen(stat_description));
+	pars_info_add_str_literal(pinfo, "stat_description",
+				  stat_description);
 
-	PREPARE_PINFO_FOR_INDEX_SAVE();
 	ret = dict_stats_exec_sql(
 		pinfo,
 		"PROCEDURE INDEX_STATS_SAVE_INSERT () IS\n"
@@ -1768,7 +1930,34 @@ do { \
 		"END;");
 
 	if (ret == DB_DUPLICATE_KEY) {
-		PREPARE_PINFO_FOR_INDEX_SAVE();
+
+		pinfo = pars_info_create();
+		UNIV_MEM_ASSERT_RW(index->table->name, dict_get_db_name_len(index->table->name));
+		pars_info_add_literal(pinfo, "database_name", index->table->name,
+			dict_get_db_name_len(index->table->name),
+			DATA_VARCHAR, 0);
+		UNIV_MEM_ASSERT_RW(dict_remove_db_name(index->table->name), strlen(dict_remove_db_name(index->table->name)));
+		pars_info_add_str_literal(pinfo, "table_name",
+			dict_remove_db_name(index->table->name));
+		UNIV_MEM_ASSERT_RW(index->name, strlen(index->name));
+		pars_info_add_str_literal(pinfo, "index_name", index->name);
+		UNIV_MEM_ASSERT_RW(&last_update, 4);
+		pars_info_add_int4_literal(pinfo, "last_update", last_update);
+		UNIV_MEM_ASSERT_RW(stat_name, strlen(stat_name));
+		pars_info_add_str_literal(pinfo, "stat_name", stat_name);
+		UNIV_MEM_ASSERT_RW(&stat_value, 8);
+		pars_info_add_ull_literal(pinfo, "stat_value", stat_value);
+		if (sample_size != NULL) {
+			UNIV_MEM_ASSERT_RW(sample_size, 8);
+			pars_info_add_ull_literal(pinfo, "sample_size", *sample_size);
+		} else {
+			pars_info_add_literal(pinfo, "sample_size", NULL,
+					      UNIV_SQL_NULL, DATA_FIXBINARY, 0);
+		}
+		UNIV_MEM_ASSERT_RW(stat_description, strlen(stat_description));
+		pars_info_add_str_literal(pinfo, "stat_description",
+					  stat_description);
+
 		ret = dict_stats_exec_sql(
 			pinfo,
 			"PROCEDURE INDEX_STATS_SAVE_UPDATE () IS\n"
@@ -3193,192 +3382,6 @@ dict_stats_rename_table(
 }
 /* @} */
 
-/*********************************************************************//**
-Duplicate the stats of a table and its indexes.
-This function creates a dummy dict_table_t object and copies the input
-table's stats into it. The returned table object is not in the dictionary
-cache, cannot be accessed by any other threads and has only the following
-members initialized:
-dict_table_t::id
-dict_table_t::heap
-dict_table_t::name
-dict_table_t::indexes<>
-dict_table_t::stat_initialized
-dict_table_t::stat_persistent
-dict_table_t::stat_n_rows
-dict_table_t::stat_clustered_index_size
-dict_table_t::stat_sum_of_other_index_sizes
-dict_table_t::stat_modified_counter
-dict_table_t::magic_n
-for each entry in dict_table_t::indexes, the following are initialized:
-dict_index_t::id
-dict_index_t::name
-dict_index_t::table_name
-dict_index_t::table (points to the above semi-initialized object)
-dict_index_t::type
-dict_index_t::n_uniq
-dict_index_t::fields[] (only first n_uniq and only fields[i].name)
-dict_index_t::indexes<>
-dict_index_t::stat_n_diff_key_vals[]
-dict_index_t::stat_n_sample_sizes[]
-dict_index_t::stat_n_non_null_key_vals[]
-dict_index_t::stat_index_size
-dict_index_t::stat_n_leaf_pages
-dict_index_t::magic_n
-The returned object should be freed with dict_stats_snapshot_free()
-when no longer needed.
-@return incomplete table object */
-static
-dict_table_t*
-dict_stats_snapshot_create(
-/*=======================*/
-	const dict_table_t*	table)		/*!< in: table whose stats
-						to copy */
-{
-	size_t		heap_size;
-	dict_index_t*	index;
-
-	mutex_enter(&dict_sys->mutex);
-
-	dict_table_stats_lock(table, RW_X_LATCH);
-
-	/* Estimate the size needed for the table and all of its indexes */
-
-	heap_size = 0;
-	heap_size += sizeof(dict_table_t);
-	heap_size += strlen(table->name) + 1;
-
-	for (index = dict_table_get_first_index(table);
-	     index != NULL;
-	     index = dict_table_get_next_index(index)) {
-
-		ulint	n_uniq = dict_index_get_n_unique(index);
-
-		heap_size += sizeof(dict_index_t);
-		heap_size += strlen(index->name) + 1;
-		heap_size += n_uniq * sizeof(index->fields[0]);
-		for (ulint i = 0; i < n_uniq; i++) {
-			heap_size += strlen(index->fields[i].name) + 1;
-		}
-		heap_size += (n_uniq + 1)
-			* sizeof(index->stat_n_diff_key_vals[0]);
-		heap_size += (n_uniq + 1)
-			* sizeof(index->stat_n_sample_sizes[0]);
-		heap_size += (n_uniq + 1)
-			* sizeof(index->stat_n_non_null_key_vals[0]);
-	}
-
-	/* Allocate the memory and copy the members */
-
-	mem_heap_t*	heap;
-
-	heap = mem_heap_create(heap_size);
-
-	dict_table_t*	t;
-
-	t = (dict_table_t*) mem_heap_alloc(heap, sizeof(*t));
-
-	t->id = table->id;
-
-	t->heap = heap;
-
-	t->name = (char*) mem_heap_dup(
-		heap, table->name, strlen(table->name) + 1);
-
-	UT_LIST_INIT(t->indexes);
-
-	for (index = dict_table_get_first_index(table);
-	     index != NULL;
-	     index = dict_table_get_next_index(index)) {
-
-		dict_index_t*	idx;
-
-		idx = (dict_index_t*) mem_heap_alloc(heap, sizeof(*idx));
-
-		idx->id = index->id;
-
-		idx->name = (char*) mem_heap_dup(
-			heap, index->name, strlen(index->name) + 1);
-
-		idx->table_name = t->name;
-
-		idx->table = t;
-
-		idx->type = index->type;
-
-		idx->n_uniq = index->n_uniq;
-
-		idx->fields = (dict_field_t*) mem_heap_alloc(
-			heap, idx->n_uniq * sizeof(idx->fields[0]));
-
-		for (ulint i = 0; i < idx->n_uniq; i++) {
-			idx->fields[i].name = (char*) mem_heap_dup(
-				heap, index->fields[i].name,
-				strlen(index->fields[i].name) + 1);
-		}
-
-		/* hook idx into t->indexes */
-		UT_LIST_ADD_LAST(indexes, t->indexes, idx);
-
-		idx->stat_n_diff_key_vals = (ib_uint64_t*) mem_heap_dup(
-			heap, index->stat_n_diff_key_vals,
-			(idx->n_uniq + 1)
-			* sizeof(idx->stat_n_diff_key_vals[0]));
-
-		idx->stat_n_sample_sizes = (ib_uint64_t*) mem_heap_dup(
-			heap, index->stat_n_sample_sizes,
-			(idx->n_uniq + 1)
-			* sizeof(idx->stat_n_sample_sizes[0]));
-
-		idx->stat_n_non_null_key_vals = (ib_uint64_t*) mem_heap_dup(
-			heap, index->stat_n_non_null_key_vals,
-			(idx->n_uniq + 1)
-			* sizeof(idx->stat_n_non_null_key_vals[0]));
-
-		idx->stat_index_size = index->stat_index_size;
-
-		idx->stat_n_leaf_pages = index->stat_n_leaf_pages;
-
-#ifdef UNIV_DEBUG
-		idx->magic_n = DICT_INDEX_MAGIC_N;
-#endif /* UNIV_DEBUG */
-	}
-
-	t->stat_initialized = table->stat_initialized;
-	t->stats_last_recalc = table->stats_last_recalc;
-	t->stat_persistent = table->stat_persistent;
-	t->stats_auto_recalc = table->stats_auto_recalc;
-	t->stats_sample_pages = table->stats_sample_pages;
-	t->stat_n_rows = table->stat_n_rows;
-	t->stat_clustered_index_size = table->stat_clustered_index_size;
-	t->stat_sum_of_other_index_sizes = table->stat_sum_of_other_index_sizes;
-	t->stat_modified_counter = table->stat_modified_counter;
-	t->stats_bg_flag = table->stats_bg_flag;
-#ifdef UNIV_DEBUG
-	t->magic_n = DICT_TABLE_MAGIC_N;
-#endif /* UNIV_DEBUG */
-
-	dict_table_stats_unlock(table, RW_X_LATCH);
-
-	mutex_exit(&dict_sys->mutex);
-
-	return(t);
-}
-
-/*********************************************************************//**
-Free the resources occupied by an object returned by
-dict_stats_snapshot_create().
-dict_stats_snapshot_free() @{ */
-UNIV_INTERN
-void
-dict_stats_snapshot_free(
-/*=====================*/
-	dict_table_t*	t)	/*!< in: dummy table object to free */
-{
-	mem_heap_free(t->heap);
-}
-/* @} */
-
 /* tests @{ */
 #ifdef UNIV_COMPILE_TEST_FUNCS
 

=== modified file 'storage/innobase/include/dict0stats.h'
--- a/storage/innobase/include/dict0stats.h	revid:nisha.gopalakrishnan@stripped
+++ b/storage/innobase/include/dict0stats.h	revid:vasil.dimov@stripped
@@ -197,15 +197,6 @@ dict_stats_rename_table(
 					is returned */
 	size_t		errstr_sz);	/*!< in: errstr size */
 
-/*********************************************************************//**
-Free the resources occupied by an object returned by
-dict_stats_snapshot_create(). */
-UNIV_INTERN
-void
-dict_stats_snapshot_free(
-/*=====================*/
-	dict_table_t*	t);	/*!< in: dummy table object to free */
-
 #ifndef UNIV_NONINL
 #include "dict0stats.ic"
 #endif

No bundle (reason: useless for push emails).