List: Commits
From:marko.makela Date:March 29 2012 8:03am
Subject:bzr push into mysql-trunk-wl5854 branch (marko.makela:3867 to 3868) WL#5854
 3868 Marko Mäkelä	2012-03-29
      WL#5854 cleanup: Explain why insert undo logging will work as is.

    modified:
      storage/innobase/row/row0uins.cc
      storage/innobase/trx/trx0rec.cc
 3867 Marko Mäkelä	2012-03-27
      WL#5854 InnoDB: Online row format changes
      
      This is preparation for data dictionary changes. We will allow
      multiple versions of the clustered index to exist in the data
      dictionary. The head of table->indexes will be the newest clustered
      index. It may be followed by older clustered index versions (in
      descending order of index->id, or age), and finally by any secondary
      index definitions.
      
      dict_table_get_clustered_index(): An alias for dict_table_get_first_index()
      when we only want the clustered index.
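      
      As a rough sketch of the list layout described above, using
      simplified stand-in types rather than the actual InnoDB
      definitions: because the newest clustered index sits at the head
      of table->indexes, dict_table_get_clustered_index() can simply
      return the first element.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>

// Simplified stand-in for the InnoDB index type.
struct dict_index_t {
    uint64_t id;        // older clustered versions have smaller ids
    bool     is_clust;  // true for any clustered index version
};

// Simplified stand-in for the InnoDB table type. Head of the list:
// newest clustered index, followed by older clustered index versions
// in descending id order, then any secondary indexes.
struct dict_table_t {
    std::list<const dict_index_t*> indexes;
};

inline const dict_index_t*
dict_table_get_first_index(const dict_table_t* table) {
    return table->indexes.empty() ? nullptr : table->indexes.front();
}

// Alias for the first index, used when the caller specifically
// wants the (newest) clustered index.
inline const dict_index_t*
dict_table_get_clustered_index(const dict_table_t* table) {
    return dict_table_get_first_index(table);
}
```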
      
      Review all loops that loop over table->indexes and try to ensure that
      they will skip old clustered index versions when necessary.
      
      Use dict_index_is_clust() instead of pointer comparisons to determine
      if an index is clustered, because there can be multiple clustered
      index versions in a table.
      
      dict_index_is_old_clust(): New predicate, for determining if an index
      is an old version of a clustered index.
      
      dict_index_is_newest_clust(): New predicate, for determining if an index
      is the newest version of a clustered index.
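      
      A hypothetical sketch of the two predicates, assuming the list
      layout from this commit (newest clustered index at the head of
      table->indexes); the types here are simplified stand-ins, not
      the actual InnoDB definitions.

```cpp
#include <list>

struct dict_table_t;  // forward declaration

// Simplified stand-in for the InnoDB index type.
struct dict_index_t {
    bool                is_clust;  // true for any clustered index version
    const dict_table_t* table;     // owning table
};

// Simplified stand-in: newest clustered index at the head, older
// clustered versions next, secondary indexes last.
struct dict_table_t {
    std::list<const dict_index_t*> indexes;
};

// True if this is a superseded (old) clustered index version.
inline bool dict_index_is_old_clust(const dict_index_t* index) {
    return index->is_clust && index != index->table->indexes.front();
}

// True if this is the current (newest) clustered index version.
inline bool dict_index_is_newest_clust(const dict_index_t* index) {
    return index->is_clust && index == index->table->indexes.front();
}
```

      Loops over table->indexes can then skip superseded clustered
      index versions by continuing whenever dict_index_is_old_clust()
      holds.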
      
      api0api.cc: If the client handle is using a too old clustered index
      version, issue a DB_DATA_MISMATCH error.
      
      TODO: In undo log records, identify the table by the clustered index
      identifier instead of the table identifier.
      
      TODO: In code that fetches records from the clustered index or builds
      previous versions of records, choose the appropriate clustered index
      version.
      
      TODO: Implement the following HA_ALTER_FLAGS:
      
      ADD_COLUMN
      DROP_COLUMN, ALTER_COLUMN_ORDER (for columns outside the PRIMARY KEY)
      ALTER_COLUMN_TYPE (for non-indexed columns)
      
      TODO: Implement garbage collection of old clustered index versions.
      That is, rewrite the index_id on node pointer pages, and rewrite all
      leaf pages to use the newest clustered index version, and finally drop
      all old indexes from both the dictionary cache and SYS_INDEXES. This
      could be sped up by some data structure that keeps track of the page
      numbers that have not yet been updated, or page numbers that have
      already been updated.

    modified:
      .bzr-mysql/default.conf
      storage/innobase/api/api0api.cc
      storage/innobase/btr/btr0cur.cc
      storage/innobase/dict/dict0dict.cc
      storage/innobase/dict/dict0load.cc
      storage/innobase/dict/dict0stats.cc
      storage/innobase/fts/fts0fts.cc
      storage/innobase/handler/ha_innodb.cc
      storage/innobase/include/dict0dict.h
      storage/innobase/include/dict0dict.ic
      storage/innobase/include/row0mysql.h
      storage/innobase/include/row0mysql.ic
      storage/innobase/pars/pars0opt.cc
      storage/innobase/pars/pars0pars.cc
      storage/innobase/row/row0ins.cc
      storage/innobase/row/row0merge.cc
      storage/innobase/row/row0mysql.cc
      storage/innobase/row/row0purge.cc
      storage/innobase/row/row0row.cc
      storage/innobase/row/row0sel.cc
      storage/innobase/row/row0uins.cc
      storage/innobase/row/row0umod.cc
      storage/innobase/row/row0undo.cc
      storage/innobase/row/row0upd.cc
      storage/innobase/row/row0vers.cc
=== modified file 'storage/innobase/row/row0uins.cc'
--- a/storage/innobase/row/row0uins.cc	revid:marko.makela@stripped
+++ b/storage/innobase/row/row0uins.cc	revid:marko.makela@stripped-20120329080154-jfg0arya5gtrm48g
@@ -274,37 +274,31 @@ row_undo_ins_parse_undo_rec(
 	node->table = dict_table_open_on_id(table_id, dict_locked, FALSE);
 
 	/* Skip the UNDO if we can't find the table or the .ibd file. */
-	if (UNIV_UNLIKELY(node->table == NULL)) {
-	} else if (UNIV_UNLIKELY(node->table->ibd_file_missing)) {
-		dict_table_close(node->table, dict_locked, FALSE);
-		node->table = NULL;
-	} else {
-		clust_index = dict_table_get_clustered_index(node->table);
-
-		if (clust_index != NULL) {
-			trx_undo_rec_get_row_ref(
-				ptr, clust_index, &node->ref, node->heap);
-
-			if (!row_undo_search_clust_to_pcur(node)) {
-
-				dict_table_close(
-					node->table, dict_locked, FALSE);
-
-				node->table = NULL;
-			}
+	if (node->table == NULL) {
+		return;
+	} else if (node->table->ibd_file_missing) {
+		goto err_exit;
+	}
 
-		} else {
-			ut_print_timestamp(stderr);
-			fprintf(stderr, "  InnoDB: table ");
-			ut_print_name(stderr, node->trx, TRUE,
-				      node->table->name);
-			fprintf(stderr, " has no indexes, "
-				"ignoring the table\n");
+	clust_index = dict_table_get_clustered_index(node->table);
 
-			dict_table_close(node->table, dict_locked, FALSE);
+	if (clust_index != NULL) {
+		trx_undo_rec_get_row_ref(
+			ptr, clust_index, &node->ref, node->heap);
 
-			node->table = NULL;
+		if (!row_undo_search_clust_to_pcur(node)) {
+			goto err_exit;
 		}
+	} else {
+		ut_print_timestamp(stderr);
+		fprintf(stderr, "  InnoDB: table ");
+		ut_print_name(stderr, node->trx, TRUE,
+			      node->table->name);
+		fprintf(stderr, " has no indexes, "
+			"ignoring the table\n");
+err_exit:
+		dict_table_close(node->table, dict_locked, FALSE);
+		node->table = NULL;
 	}
 }
 

=== modified file 'storage/innobase/trx/trx0rec.cc'
--- a/storage/innobase/trx/trx0rec.cc	revid:marko.makela@oracle.com-20120327083203-disx90hs467fhegr
+++ b/storage/innobase/trx/trx0rec.cc	revid:marko.makela@stripped0329080154-jfg0arya5gtrm48g
@@ -246,8 +246,10 @@ trx_undo_page_report_insert(
 	ptr += mach_ull_write_much_compressed(ptr, trx->undo_no);
 	ptr += mach_ull_write_much_compressed(ptr, index->table->id);
 	/*----------------------------------------*/
-	/* Store then the fields required to uniquely determine the record
-	to be inserted in the clustered index */
+	/* Store then the fields required to uniquely determine the
+	record to be inserted in the clustered index. This will work
+	even with online row format changes, because we do not allow
+	any changes to PRIMARY KEY columns. */
 
 	for (i = 0; i < dict_index_get_n_unique(index); i++) {
 
@@ -302,7 +304,7 @@ trx_undo_rec_get_pars(
 
 	if (type_cmpl & TRX_UNDO_UPD_EXTERN) {
 		*updated_extern = TRUE;
-		type_cmpl -= TRX_UNDO_UPD_EXTERN;
+		type_cmpl &= ~TRX_UNDO_UPD_EXTERN;
 	} else {
 		*updated_extern = FALSE;
 	}

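The last hunk replaces `type_cmpl -= TRX_UNDO_UPD_EXTERN` with `type_cmpl &= ~TRX_UNDO_UPD_EXTERN`. Subtraction clears the flag only when the bit is known to be set (as the guarding `if` ensures here), while masking with `&= ~` is idiomatic for clearing a flag bit and stays correct even without the guard. A minimal sketch; the flag value below is illustrative, not InnoDB's actual constant:

```cpp
#include <cstdint>

// Illustrative flag value; not the actual InnoDB constant.
static const uint32_t TRX_UNDO_UPD_EXTERN = 0x80;

// Clearing a bit with & ~mask is a no-op when the bit is not set,
// whereas subtraction would corrupt the other bits in that case.
inline uint32_t clear_extern_flag(uint32_t type_cmpl) {
    return type_cmpl & ~TRX_UNDO_UPD_EXTERN;
}
```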
No bundle (reason: useless for push emails).