List: Commits
From: marko.makela  Date: January 24 2011 9:17am
Subject:bzr push into mysql-trunk-innodb branch (marko.makela:3451 to 3452) WL#5458
 3452 Marko Mäkelä	2011-01-24
      Clean up trx_is_active() and row_vers_impl_x_locked().
      This is part of my clean-ups made during the WL#5458 review.
      
      trx_is_active_low(): Allow corrupt to be NULL and make the caller
      responsible for initializing *corrupt=FALSE.
      
      trx_is_active(): Add the parameter ibool* corrupt.
      
      lock_rec_queue_validate(): Pass NULL to trx_is_active_low(). Add
      comments regarding lock_sys->mutex. Remove an obsolete section of
      unused code.
      
      lock_rec_convert_impl_to_expl(): Adjust comments and pass corrupt=NULL
      to trx_is_active().
      
      row_vers_impl_x_locked_low(): New function, split from
      row_vers_impl_x_locked(). Contains most of the logic.
      Clarify that the caller will have to hold lock_sys->mutex while
      invoking trx_is_active().
      Postpone the acquisition of purge_sys->latch until after the first
      trx_is_active() check.
      Pass the correct record version to lock_report_trx_id_insanity().
      Remove the local variable trx_t* trx. It is unsafe to keep the pointer
      around, as it must not be dereferenced without holding
      lock_sys->mutex.
      Remove the local variable active_trx_id. We already have trx_id.
      Clarify comments.
      
      row_vers_impl_x_locked(): Do mtr_start(), latch the page of
      clust_rec, invoke row_vers_impl_x_locked_low(), and finally
      mtr_commit().
      
      rb://572 approved by Sunny Bains

    modified:
      storage/innobase/include/row0vers.h
      storage/innobase/include/trx0sys.h
      storage/innobase/include/trx0sys.ic
      storage/innobase/lock/lock0lock.c
      storage/innobase/row/row0vers.c
 3451 Vasil Dimov	2011-01-24 [merge]
      Merge mysql-5.5-innodb -> mysql-trunk-innodb

=== modified file 'storage/innobase/include/row0vers.h'
--- a/storage/innobase/include/row0vers.h	revid:vasil.dimov@stripped3iltp3p
+++ b/storage/innobase/include/row0vers.h	revid:marko.makela@stripped
@@ -41,7 +41,7 @@ index record.
 @return 0 if committed, else the active transaction id;
 NOTE that this function can return false positives but never false
 negatives. The caller must confirm all positive results by calling
-trx_is_active(). */
+trx_is_active() while holding lock_sys->mutex. */
 UNIV_INTERN
 trx_id_t
 row_vers_impl_x_locked(

=== modified file 'storage/innobase/include/trx0sys.h'
--- a/storage/innobase/include/trx0sys.h	revid:vasil.dimov@stripped
+++ b/storage/innobase/include/trx0sys.h	revid:marko.makela@strippedm-20110124082539-jxezr26rvg36hev7
@@ -297,22 +297,32 @@ trx_list_get_min_trx_id(void);
 /*=========================*/
 /****************************************************************//**
 Checks if a transaction with the given id is active. Caller must hold
-the trx_sys_t::lock in shared mode.
-@return	transaction instance if active or NULL */
+trx_sys->lock in shared mode. If the caller is not holding
+lock_sys->mutex, the transaction may already have been committed.
+@return	transaction instance if active, or NULL;
+the pointer must not be dereferenced unless lock_sys->mutex was
+acquired before calling this function and is still being held */
 UNIV_INLINE
 trx_t*
 trx_is_active_low(
 /*==============*/
 	trx_id_t	trx_id,		/*!< in: trx id of the transaction */
-	ibool*		corrupt);	/*!< out: TRUE if corrupt */
+	ibool*		corrupt);	/*!< in: NULL or pointer to a flag
+					that will be set if corrupt */
 /****************************************************************//**
-Checks if a transaction with the given id is active.
-@return	transaction instance if active */
+Checks if a transaction with the given id is active. If the caller is
+not holding lock_sys->mutex, the transaction may already have been
+committed.
+@return	transaction instance if active, or NULL;
+the pointer must not be dereferenced unless lock_sys->mutex was
+acquired before calling this function and is still being held */
 UNIV_INLINE
 trx_t*
 trx_is_active(
 /*==========*/
-	trx_id_t	trx_id);/*!< in: trx id of the transaction */
+	trx_id_t	trx_id,		/*!< in: trx id of the transaction */
+	ibool*		corrupt);	/*!< in: NULL or pointer to a flag
+					that will be set if corrupt */
 #ifdef UNIV_DEBUG
 /****************************************************************//**
 Checks that trx is in the trx list.

=== modified file 'storage/innobase/include/trx0sys.ic'
--- a/storage/innobase/include/trx0sys.ic	revid:vasil.dimov@stripped4080124-qt318pe7c3iltp3p
+++ b/storage/innobase/include/trx0sys.ic	revid:marko.makela@strippedezr26rvg36hev7
@@ -380,14 +380,18 @@ trx_list_get_min_trx_id(void)
 
 /****************************************************************//**
 Checks if a transaction with the given id is active. Caller must hold
-the trx_sys_t::lock in shared mode.
-@return	transaction instance if active */
+trx_sys->lock in shared mode. If the caller is not holding
+lock_sys->mutex, the transaction may already have been committed.
+@return	transaction instance if active, or NULL;
+the pointer must not be dereferenced unless lock_sys->mutex was
+acquired before calling this function and is still being held */
 UNIV_INLINE
 trx_t*
 trx_is_active_low(
 /*==============*/
 	trx_id_t	trx_id,		/*!< in: trx id of the transaction */
-	ibool*		corrupt)	/*!< out: TRUE if corrupt */
+	ibool*		corrupt)	/*!< in: NULL or pointer to a flag
+					that will be set if corrupt */
 {
 	trx_t*		trx;
 
@@ -395,18 +399,18 @@ trx_is_active_low(
 	ut_ad(rw_lock_own(&trx_sys->lock, RW_LOCK_SHARED));
 #endif /* UNIV_SYNC_DEBUG */
 
-	*corrupt = FALSE;
-
 	if (trx_id < trx_list_get_min_trx_id_low()) {
 
 		trx = NULL;
-	} else if (trx_id >= trx_sys->max_trx_id) {
+	} else if (UNIV_UNLIKELY(trx_id >= trx_sys->max_trx_id)) {
 
 		/* There must be corruption: we let the caller handle the
 	       	diagnostic prints in this case. */
 
 		trx = NULL;
-		*corrupt = TRUE;
+		if (corrupt != NULL) {
+			*corrupt = TRUE;
+		}
 	} else {
 		trx = trx_get_on_id_low(trx_id);
 
@@ -422,20 +426,25 @@ trx_is_active_low(
 }
 
 /****************************************************************//**
-Checks if a transaction with the given id is active.
-@return	TRUE if active */
+Checks if a transaction with the given id is active. If the caller is
+not holding lock_sys->mutex, the transaction may already have been
+committed.
+@return	transaction instance if active, or NULL;
+the pointer must not be dereferenced unless lock_sys->mutex was
+acquired before calling this function and is still being held */
 UNIV_INLINE
 trx_t*
 trx_is_active(
 /*==========*/
-	trx_id_t	trx_id)	/*!< in: trx id of the transaction */
+	trx_id_t	trx_id,		/*!< in: trx id of the transaction */
+	ibool*		corrupt)	/*!< in: NULL or pointer to a flag
+					that will be set if corrupt */
 {
 	trx_t*		trx;
-	ibool		corrupt;
 
 	rw_lock_s_lock(&trx_sys->lock);
 
-	trx = trx_is_active_low(trx_id, &corrupt);
+	trx = trx_is_active_low(trx_id, corrupt);
 
 	rw_lock_s_unlock(&trx_sys->lock);
 

=== modified file 'storage/innobase/lock/lock0lock.c'
--- a/storage/innobase/lock/lock0lock.c	revid:vasil.dimov@stripped124080124-qt318pe7c3iltp3p
+++ b/storage/innobase/lock/lock0lock.c	revid:marko.makela@strippedezr26rvg36hev7
@@ -5033,15 +5033,16 @@ lock_rec_queue_validate(
 	if (!index);
 	else if (dict_index_is_clust(index)) {
 		trx_id_t	trx_id;
-		ibool		corrupt;
 
 		/* Unlike the non-debug code, this invariant can only succeed
 		if the check and assertion are covered by the lock mutex. */
 
-		trx_id  = lock_clust_rec_some_has_impl(rec, index, offsets);
-
+		trx_id = lock_clust_rec_some_has_impl(rec, index, offsets);
+		impl_trx = trx_is_active_low(trx_id, NULL);
 
-		impl_trx = trx_is_active_low(trx_id, &corrupt);
+		ut_ad(lock_mutex_own());
+		/* impl_trx cannot be committed until lock_mutex_exit()
+		because lock_trx_release_locks() acquires lock_sys->mutex */
 
 		if (impl_trx != NULL
 		    && lock_rec_other_has_expl_req(LOCK_S, 0, LOCK_WAIT,
@@ -5050,45 +5051,6 @@ lock_rec_queue_validate(
 			ut_a(lock_rec_has_expl(LOCK_X | LOCK_REC_NOT_GAP,
 					       block, heap_no, impl_trx));
 		}
-#if 0
-	} else {
-
-		/* If this thread is holding the file space latch
-		(fil_space_t::latch), the following check WILL break
-		latching order and may cause a deadlock of threads. */
-
-		/* NOTE: This is a bogus check that would fail in the
-		following case: Our transaction is updating a
-		row. After it has updated the clustered index record,
-		it goes to a secondary index record and finds someone
-		else holding an explicit S- or X-lock on that
-		secondary index record, presumably from a locking
-		read. Our transaction cannot update the secondary
-		index immediately, but places a waiting X-lock request
-		on the secondary index record. There is nothing
-		illegal in this. The assertion is simply too strong. */
-
-		/* From the locking point of view, each secondary
-		index is a separate table. A lock that is held on
-		secondary index rec does not give any rights to modify
-		or read the clustered index rec. Therefore, we can
-		think of the sec index as a separate 'table' from the
-		clust index 'table'. Conversely, a transaction that
-		has acquired a lock on and modified a clustered index
-		record may need to wait for a lock on the
-		corresponding record in a secondary index. */
-
-		impl_trx = lock_sec_rec_some_has_impl(
-			rec, index, offsets);
-
-		if (impl_trx
-		    && lock_rec_other_has_expl_req(LOCK_S, 0, LOCK_WAIT,
-						   block, heap_no, impl_trx)) {
-
-			ut_a(lock_rec_has_expl(LOCK_X | LOCK_REC_NOT_GAP,
-					       block, heap_no, impl_trx));
-		}
-#endif
 	}
 
 	for (lock = lock_rec_get_first(block, heap_no);
@@ -5509,22 +5471,29 @@ lock_rec_convert_impl_to_expl(
 
 	if (dict_index_is_clust(index)) {
 		trx_id = lock_clust_rec_some_has_impl(rec, index, offsets);
+		/* The clustered index record was last modified by
+		this transaction. The transaction may have been
+		committed a long time ago. */
 	} else {
 		trx_id = lock_sec_rec_some_has_impl(rec, index, offsets);
+		/* The transaction can be committed before the
+		trx_is_active(trx_id, NULL) check below, because we are not
+		holding lock_mutex. */
 	}
 
-	/* The state of the transaction can change after the check above. */
-
 	if (trx_id != 0) {
 		trx_t*	impl_trx;
 		ulint	heap_no = page_rec_get_heap_no(rec);
 
 		lock_mutex_enter();
 
-		/* If the transaction has no explicit x-lock set on the
-		record, set one for it */
+		/* If the transaction is still active and has no
+		explicit x-lock set on the record, set one for it */
+
+		impl_trx = trx_is_active(trx_id, NULL);
 
-		impl_trx = trx_is_active(trx_id);
+		/* impl_trx cannot be committed until lock_mutex_exit()
+		because lock_trx_release_locks() acquires lock_sys->mutex */
 
 		if (impl_trx != NULL
 		    && !lock_rec_has_expl(LOCK_X | LOCK_REC_NOT_GAP, block,

=== modified file 'storage/innobase/row/row0vers.c'
--- a/storage/innobase/row/row0vers.c	revid:vasil.dimov@stripped
+++ b/storage/innobase/row/row0vers.c	revid:marko.makela@oracle.com-20110124082539-jxezr26rvg36hev7
@@ -52,62 +52,28 @@ index record.
 @return 0 if committed, else the active transaction id;
 NOTE that this function can return false positives but never false
 negatives. The caller must confirm all positive results by calling
-trx_is_active(). */
-UNIV_INTERN
+trx_is_active() while holding lock_sys->mutex. */
+UNIV_INLINE
 trx_id_t
-row_vers_impl_x_locked(
-/*===================*/
-	const rec_t*	rec,	/*!< in: record in a secondary index */
-	dict_index_t*	index,	/*!< in: the secondary index */
-	const ulint*	offsets)/*!< in: rec_get_offsets(rec, index) */
+row_vers_impl_x_locked_low(
+/*=======================*/
+	const rec_t*	clust_rec,	/*!< in: clustered index record */
+	dict_index_t*	clust_index,	/*!< in: the clustered index */
+	const rec_t*	rec,		/*!< in: secondary index record */
+	dict_index_t*	index,		/*!< in: the secondary index */
+	const ulint*	offsets,	/*!< in: rec_get_offsets(rec, index) */
+	mtr_t*		mtr)		/*!< in/out: mini-transaction */
 {
-	dict_index_t*	clust_index;
-	rec_t*		clust_rec;
-	ulint*		clust_offsets;
-	rec_t*		version;
 	trx_id_t	trx_id;
-	trx_id_t	active_trx_id = 0;
-	mem_heap_t*	heap;
-	dtuple_t*	row;
-	trx_t*		trx;
-	ulint		rec_del;
-	mtr_t		mtr;
-	ulint		comp;
 	ibool		corrupt;
+	ulint		comp;
+	ulint		rec_del;
+	const rec_t*	version;
 	rec_t*		prev_version = NULL;
+	ulint*		clust_offsets;
+	mem_heap_t*	heap;
 
-#ifdef UNIV_SYNC_DEBUG
-	ut_ad(!rw_lock_own(&(purge_sys->latch), RW_LOCK_SHARED));
-	ut_ad(!rw_lock_own(&trx_sys->lock, RW_LOCK_SHARED));
-#endif /* UNIV_SYNC_DEBUG */
-	ut_ad(!lock_mutex_own());
-
-	mtr_start(&mtr);
-
-	/* Search for the clustered index record The latch on the clustered
-       	index locks the top of the stack of versions. We also reserve
-       	purge_latch to lock the bottom of the version stack. */
-
-	clust_rec = row_get_clust_rec(
-		BTR_SEARCH_LEAF, rec, index, &clust_index, &mtr);
-
-	if (!clust_rec) {
-		/* In a rare case it is possible that no clust rec is found
-		for a secondary index record: if in row0umod.c
-		row_undo_mod_remove_clust_low() we have already removed the
-		clust rec, while purge is still cleaning and removing
-		secondary index records associated with earlier versions of
-		the clustered index record. In that case there cannot be
-		any implicit lock on the secondary index record, because
-		an active transaction which has modified the secondary index
-		record has also modified the clustered index record. And in
-		a rollback we always undo the modifications to secondary index
-		records before the clustered index record. */
-
-		mtr_commit(&mtr);
-
-		return(active_trx_id);
-	}
+	ut_ad(rec_offs_validate(rec, index, offsets));
 
 	heap = mem_heap_create(1024);
 
@@ -115,27 +81,18 @@ row_vers_impl_x_locked(
 		clust_rec, clust_index, NULL, ULINT_UNDEFINED, &heap);
 
 	trx_id = row_get_rec_trx_id(clust_rec, clust_index, clust_offsets);
+	corrupt = FALSE;
 
-	mtr_s_lock(&purge_sys->latch, &mtr);
-
-	rw_lock_s_lock(&trx_sys->lock);
-
-	trx = trx_is_active_low(trx_id, &corrupt);
-
-	rw_lock_s_unlock(&trx_sys->lock);
-
-	if (trx == NULL) {
-
+	if (!trx_is_active(trx_id, &corrupt)) {
 		/* The transaction that modified or inserted clust_rec is no
-		longer active or corrupt: no implicit lock on rec */
-
+		longer active, or it is corrupt: no implicit lock on rec */
 		if (corrupt) {
-		   	lock_report_trx_id_insanity(
+			lock_report_trx_id_insanity(
 				trx_id, clust_rec, clust_index, clust_offsets,
 				trx_sys_get_max_trx_id());
 		}
-
-		goto exit_func;
+		mem_heap_free(heap);
+		return(0);
 	}
 
 	comp = page_rec_is_comp(rec);
@@ -143,35 +100,38 @@ row_vers_impl_x_locked(
 	ut_ad(!!comp == dict_table_is_comp(index->table));
 	ut_ad(!comp == !page_rec_is_comp(clust_rec));
 
-	/* We look up if some earlier version, which was modified by the trx_id
-	transaction, of the clustered index record would require rec to be in
-	a different state (delete marked or unmarked, or have different field
-	values, or not existing). If there is such a version, then rec was
-	modified by the trx_id transaction, and it has an implicit x-lock on
-	rec. Note that if clust_rec itself would require rec to be in a
-	different state, then the trx_id transaction has not yet had time to
-	modify rec, and does not necessarily have an implicit x-lock on rec. */
-
 	rec_del = rec_get_deleted_flag(rec, comp);
 
-	for (version = clust_rec; version != NULL; version = prev_version) {
+	/* We look up if some earlier version, which was modified by
+	the trx_id transaction, of the clustered index record would
+	require rec to be in a different state (delete marked or
+	unmarked, or have different field values, or not existing). If
+	there is such a version, then rec was modified by the trx_id
+	transaction, and it has an implicit x-lock on rec. Note that
+	if clust_rec itself would require rec to be in a different
+	state, then the trx_id transaction has not yet had time to
+	modify rec, and does not necessarily have an implicit x-lock
+	on rec. */
+
+	mtr_s_lock(&purge_sys->latch, mtr);
 
+	for (version = clust_rec;; version = prev_version) {
 		ulint		err;
 		row_ext_t*	ext;
+		const dtuple_t*	row;
 		dtuple_t*	entry;
 		ulint		vers_del;
 		trx_id_t	prev_trx_id;
 		mem_heap_t*	old_heap = heap;
 
-		/* We have to check if the trx_id transaction is still active.
-	       	We keep the semaphore in mtr on the clust_rec page, so that no
-	       	other transaction can update it and get an implicit x-lock on
-	       	rec. */
+		/* We keep the semaphore in mtr on the clust_rec page, so
+		that no other transaction can update it and get an
+		implicit x-lock on rec until mtr_commit(mtr). */
 
 		heap = mem_heap_create(1024);
 
 		err = trx_undo_prev_version_build(
-			clust_rec, &mtr, version, clust_index, clust_offsets,
+			clust_rec, mtr, version, clust_index, clust_offsets,
 			heap, &prev_version);
 
  		/* Free version and clust_offsets. */
@@ -180,35 +140,31 @@ row_vers_impl_x_locked(
 
 		if (prev_version == NULL) {
 
-			/* If the transaction is still active, clust_rec must
-		       	be a fresh insert, because no previous version was
-		       	found. */
+			/* clust_rec must be a fresh insert, because
+			no previous version was found. */
 
 			ut_a(err == DB_SUCCESS);
-
-			/* The caller should check the tranasction status. */
-			active_trx_id = trx_id;
-
 			break;
 		}
 
 		clust_offsets = rec_get_offsets(
 			prev_version, clust_index, NULL, ULINT_UNDEFINED,
-		       	&heap);
+			&heap);
 
 		vers_del = rec_get_deleted_flag(prev_version, comp);
 
 		prev_trx_id = row_get_rec_trx_id(
 			prev_version, clust_index, clust_offsets);
 
-		/* If the trx_id and prev_trx_id are different and if
-		the prev_version is marked deleted then the
-		prev_trx_id must have already committed for the trx_id
-		to be able to modify the row. Therefore, prev_trx_id
-		cannot hold any implicit lock. */
+		/* If trx_id differs from prev_trx_id and if the
+		prev_version is marked deleted then the prev_trx_id
+		must have already committed for the trx_id to be able
+		to modify the row. Therefore, prev_trx_id cannot hold
+		any implicit lock. */
 
 		if (vers_del && trx_id != prev_trx_id) {
 
+			trx_id = 0;
 			break;
 		}
 
@@ -228,26 +184,34 @@ row_vers_impl_x_locked(
 
 		ut_a(entry != NULL);
 
-		/* If we get here, we know that the trx_id transaction is
-		still active and it has modified prev_version. Let us check
-		if prev_version would require rec to be in a different
-		state. */
+		/* If we get here, we know that the trx_id transaction
+		modified prev_version. Let us check if prev_version
+		would require rec to be in a different state. */
 
 		/* The previous version of clust_rec must be
-		accessible, because the transaction is still active
-		and clust_rec was not a fresh insert. */
+		accessible, because clust_rec was not a fresh insert
+		and we are holding purge_sys->latch. There is no
+		guarantee that the transaction is still active. */
 
-		ut_a(err == DB_SUCCESS);
+		ut_ad(err == DB_SUCCESS);
 
 		/* We check if entry and rec are identified in the alphabetical
 		ordering */
 
-		if (trx_is_active(trx_id) == NULL) {
-
-			/* Transaction no longer active: no implicit x-lock */
-
+		if (!trx_is_active(trx_id, &corrupt)) {
+			/* Transaction no longer active: no implicit
+			x-lock. This situation should only be possible
+			because we are not holding lock_sys->mutex. */
+			ut_ad(!lock_mutex_own());
+			if (corrupt) {
+				lock_report_trx_id_insanity(
+					trx_id,
+					prev_version, clust_index,
+					clust_offsets,
+					trx_sys_get_max_trx_id());
+			}
+			trx_id = 0;
 			break;
-
 		} else if (0 == cmp_dtuple_rec(entry, rec, offsets)) {
 			/* The delete marks of rec and prev_version should be
 			equal for rec to be in the state required by
@@ -255,7 +219,6 @@ row_vers_impl_x_locked(
 
 			if (rec_del != vers_del) {
 
-				active_trx_id = trx_id;
 				break;
 			}
 
@@ -276,24 +239,80 @@ row_vers_impl_x_locked(
 			/* The delete mark should be set in rec for it to be
 			in the state required by prev_version */
 
-			active_trx_id = trx_id;
 			break;
 		}
-		
+
 		if (trx_id != prev_trx_id) {
-			/* The versions modified by the trx_id transaction end
-			to prev_version: no implicit x-lock */
+			/* prev_version was the first version modified by
+			the trx_id transaction: no implicit x-lock */
 
+			trx_id = 0;
 			break;
 		}
 	}
 
-exit_func:
+	mem_heap_free(heap);
+	return(trx_id);
+}
+
+/*****************************************************************//**
+Finds out if an active transaction has inserted or modified a secondary
+index record.
+@return 0 if committed, else the active transaction id;
+NOTE that this function can return false positives but never false
+negatives. The caller must confirm all positive results by calling
+trx_is_active() while holding lock_sys->mutex. */
+UNIV_INTERN
+trx_id_t
+row_vers_impl_x_locked(
+/*===================*/
+	const rec_t*	rec,	/*!< in: record in a secondary index */
+	dict_index_t*	index,	/*!< in: the secondary index */
+	const ulint*	offsets)/*!< in: rec_get_offsets(rec, index) */
+{
+	dict_index_t*	clust_index;
+	const rec_t*	clust_rec;
+	trx_id_t	trx_id;
+	mtr_t		mtr;
+
+#ifdef UNIV_SYNC_DEBUG
+	ut_ad(!rw_lock_own(&purge_sys->latch, RW_LOCK_SHARED));
+	ut_ad(!rw_lock_own(&trx_sys->lock, RW_LOCK_SHARED));
+#endif /* UNIV_SYNC_DEBUG */
+	ut_ad(!lock_mutex_own());
+
+	mtr_start(&mtr);
+
+	/* Search for the clustered index record. The latch on the
+	page of clust_rec locks the top of the stack of versions. The
+	bottom of the version stack will be locked by
+	purge_sys->latch. */
+
+	clust_rec = row_get_clust_rec(
+		BTR_SEARCH_LEAF, rec, index, &clust_index, &mtr);
+
+	if (UNIV_UNLIKELY(!clust_rec)) {
+		/* In a rare case it is possible that no clust rec is found
+		for a secondary index record: if in row0umod.c
+		row_undo_mod_remove_clust_low() we have already removed the
+		clust rec, while purge is still cleaning and removing
+		secondary index records associated with earlier versions of
+		the clustered index record. In that case there cannot be
+		any implicit lock on the secondary index record, because
+		an active transaction which has modified the secondary index
+		record has also modified the clustered index record. And in
+		a rollback we always undo the modifications to secondary index
+		records before the clustered index record. */
+
+		trx_id = 0;
+	} else {
+		trx_id = row_vers_impl_x_locked_low(
+			clust_rec, clust_index, rec, index, offsets, &mtr);
+	}
 
 	mtr_commit(&mtr);
-	mem_heap_free(heap);
 
-	return(active_trx_id);
+	return(trx_id);
 }
 
 /*****************************************************************//**

Attachment: [text/bzr-bundle] bzr/marko.makela@oracle.com-20110124082539-jxezr26rvg36hev7.bundle