List: Commits
From: Pekka Nousiainen  Date: November 20 2011 12:39pm
Subject: bzr push into mysql-5.1-telco-7.0 branch (pekka.nousiainen:4646 to 4677)
 4677 jonas oreland	2011-11-19
      ndb - this patch renames c_tableRecordPool and c_triggerRecordPool by adding a _ at the end. This is only to avoid "merge" errors with code that assumes that tableId == ptr.i (which is no longer true)

    modified:
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp
 4676 Pekka Nousiainen	2011-11-19
      bug#13407848 a01_patch.diff
      signed char in computing length bytes caused an ndbrequire

    modified:
      mysql-test/suite/ndb/r/ndb_index_stat.result
      mysql-test/suite/ndb/t/ndb_index_stat.test
      storage/ndb/src/kernel/blocks/trix/Trix.cpp
 4675 Mauritz Sundell	2011-11-17
      ndb - separating table ptr-i-value from schema file id in dict
      
      including removal of pre-allocated array of table-records.

    modified:
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp
 4674 Mauritz Sundell	2011-11-17
      ndb - separating trigger ptr-i-value from id in dict.
      
      including removal of pre-allocated array of trigger-records.

    modified:
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp
 4673 Jonas Oreland	2011-11-17
      ndb - add counter for reads being sent to local node, LOCAL_READS

    modified:
      storage/ndb/src/common/debugger/EventLogger.cpp
      storage/ndb/src/kernel/blocks/dbtc/Dbtc.hpp
      storage/ndb/src/kernel/blocks/dbtc/DbtcMain.cpp
      storage/ndb/src/kernel/vm/Ndbinfo.hpp
      storage/ndb/tools/ndbinfo_sql.cpp
 4672 Jonas Oreland	2011-11-17
      ndb - set correct maxFragments also for ndbd

    modified:
      storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp
 4671 jonas oreland	2011-11-16
      ndb - yet another redo-parts fix. Since page 0 (file 0) is rewritten every time log wraps...also set ZPOS_NO_LOG_PARTS in initLogpage...

    modified:
      storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp
 4670 Jonas Oreland	2011-11-16
      ndb - revert trpman...red in CluB...need to investigate why...(don't get red on local machine :-(

    modified:
      mysql-test/suite/ndb/r/ndbinfo.result
      mysql-test/suite/ndb/r/ndbinfo_dump.result
      storage/ndb/include/kernel/BlockNumbers.h
      storage/ndb/include/kernel/GlobalSignalNumbers.h
      storage/ndb/include/kernel/signaldata/CloseComReqConf.hpp
      storage/ndb/include/kernel/signaldata/DisconnectRep.hpp
      storage/ndb/include/kernel/signaldata/EnableCom.hpp
      storage/ndb/include/kernel/signaldata/RouteOrd.hpp
      storage/ndb/src/common/debugger/BlockNames.cpp
      storage/ndb/src/kernel/SimBlockList.cpp
      storage/ndb/src/kernel/blocks/CMakeLists.txt
      storage/ndb/src/kernel/blocks/Makefile.am
      storage/ndb/src/kernel/blocks/cmvmi/Cmvmi.cpp
      storage/ndb/src/kernel/blocks/cmvmi/Cmvmi.hpp
      storage/ndb/src/kernel/blocks/dbinfo/Dbinfo.cpp
      storage/ndb/src/kernel/blocks/qmgr/QmgrMain.cpp
      storage/ndb/src/kernel/vm/Configuration.cpp
      storage/ndb/src/kernel/vm/TransporterCallback.cpp
      storage/ndb/src/kernel/vm/mt.cpp
      storage/ndb/src/kernel/vm/mt_thr_config.cpp
 4669 Jonas Oreland	2011-11-16
      ndb - this patch moves transporter handling logic out from cmvmi into trpman (transporter manager)

    added:
      storage/ndb/src/kernel/blocks/trpman.cpp
      storage/ndb/src/kernel/blocks/trpman.hpp
    modified:
      mysql-test/suite/ndb/r/ndbinfo.result
      mysql-test/suite/ndb/r/ndbinfo_dump.result
      storage/ndb/include/kernel/BlockNumbers.h
      storage/ndb/include/kernel/GlobalSignalNumbers.h
      storage/ndb/include/kernel/signaldata/CloseComReqConf.hpp
      storage/ndb/include/kernel/signaldata/DisconnectRep.hpp
      storage/ndb/include/kernel/signaldata/EnableCom.hpp
      storage/ndb/include/kernel/signaldata/RouteOrd.hpp
      storage/ndb/src/common/debugger/BlockNames.cpp
      storage/ndb/src/kernel/SimBlockList.cpp
      storage/ndb/src/kernel/blocks/CMakeLists.txt
      storage/ndb/src/kernel/blocks/Makefile.am
      storage/ndb/src/kernel/blocks/cmvmi/Cmvmi.cpp
      storage/ndb/src/kernel/blocks/cmvmi/Cmvmi.hpp
      storage/ndb/src/kernel/blocks/dbinfo/Dbinfo.cpp
      storage/ndb/src/kernel/blocks/qmgr/QmgrMain.cpp
      storage/ndb/src/kernel/vm/Configuration.cpp
      storage/ndb/src/kernel/vm/TransporterCallback.cpp
      storage/ndb/src/kernel/vm/mt.cpp
      storage/ndb/src/kernel/vm/mt_thr_config.cpp
 4668 Jonas Oreland	2011-11-16
      ndb - further cleanups wrt CFG_DB_NO_REDOLOG_PARTS

    modified:
      storage/ndb/include/kernel/ndb_limits.h
      storage/ndb/src/kernel/blocks/dblqh/DblqhInit.cpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp
      storage/ndb/src/kernel/blocks/ndbfs/Ndbfs.cpp
      storage/ndb/src/kernel/ndbd.cpp
 4667 Jonas Oreland	2011-11-14
      ndb - fix bug in previous commit. Clearly separate the log-parts which a specific lqh-instance is responsible for from the node-global log-part-count

    modified:
      storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp
 4666 Ole John Aske	2011-11-14
      Fix: SPJ may leak RowMaps
      
      There is a minor memory leak of 'struct RowMap' objects.
      As the RowMap was only ::init()'ed when Dbspj::releaseNodeRows() had
      released all the mapped rows, its reference was effectively lost
      at that point and another RowMap would be allocated if needed.
      
      ... NOTE: 'leak' in this context is not a true memory leak
      as all memory is managed within the request and released
      when ::cleanup() removes all objects related to this request. 

    modified:
      storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp
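The RowMap "leak" described in 4666 can be sketched as the familiar pattern of re-initializing a handle without first returning its backing allocation to the pool. The sketch below is illustrative only; `Pool`, `RowMap::init()` and the release functions are hypothetical names, not the actual Dbspj code.

```cpp
#include <cassert>

// A toy pool that just counts outstanding allocations.
struct Pool {
  int allocated;
  Pool() : allocated(0) {}
  int alloc() { return allocated++; }   // hands out slot ids
  void release() { allocated--; }
};

struct RowMap {
  int slot;                             // -1 = no backing allocation
  RowMap() : slot(-1) {}
  void init() { slot = -1; }            // forgets the slot, does NOT free it
};

// Buggy release path: init() drops the reference, so the pool slot is
// never returned and stays reserved until request-level cleanup().
void release_rows_buggy(Pool&, RowMap& m) { m.init(); }

// Fixed path: hand the slot back to the pool before clearing the map.
void release_rows_fixed(Pool& pool, RowMap& m) {
  if (m.slot != -1) pool.release();
  m.init();
}
```

As the commit notes, this is not a true leak: the request-scoped cleanup still reclaims everything, but in the meantime each buggy release strands one allocation.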
 4665 Jonas Oreland	2011-11-14
      ndb - lqh++ - this patch takes parts of mikaels patch to allow more than 4 lqh threads. Namely, it adds infrastructure to make redo-log-parts configurable

    modified:
      storage/ndb/include/kernel/ndb_limits.h
      storage/ndb/include/mgmapi/mgmapi_config_parameters.h
      storage/ndb/include/ndb_version.h.in
      storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp
      storage/ndb/src/kernel/blocks/dblqh/Dblqh.hpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.cpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.hpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhInit.cpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp
      storage/ndb/src/kernel/vm/GlobalData.hpp
 4664 Jonas Oreland	2011-11-14
      ndb - rename MAX_FRAG_PER_NODE to MAX_FRAG_PER_LQH...and remove the hard-coded usage of it

    modified:
      storage/ndb/src/kernel/blocks/dbacc/Dbacc.hpp
      storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp
      storage/ndb/src/kernel/blocks/dbdih/Dbdih.hpp
      storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp
      storage/ndb/src/kernel/blocks/dblqh/Dblqh.hpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp
      storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp
      storage/ndb/src/kernel/blocks/dbtup/DbtupDiskAlloc.cpp
      storage/ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp
      storage/ndb/src/kernel/blocks/dbtup/DbtupGen.cpp
      storage/ndb/src/kernel/blocks/dbtup/DbtupIndex.cpp
      storage/ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp
      storage/ndb/src/kernel/blocks/dbtux/Dbtux.hpp
      storage/ndb/src/kernel/vm/pc.hpp
 4663 jonas oreland	2011-11-11
      ndb - fix incorrect #define, causing assert before first signal, when starting ndbmtd with 2 dbtc threads

    modified:
      storage/ndb/src/kernel/vm/mt.cpp
 4662 jonas oreland	2011-11-11
      ndb - fix some more compiler warnings (unused variables)

    modified:
      storage/ndb/src/kernel/blocks/dbtux/DbtuxScan.cpp
      storage/ndb/src/kernel/blocks/dbtux/DbtuxStat.cpp
 4661 jonas oreland	2011-11-11
      ndb - fix some compiler warnings in NdbPack

    modified:
      storage/ndb/src/common/util/NdbPack.cpp
 4660 jonas oreland	2011-11-11
      ndb - fix compiler warning that somehow only shows in 5.5

    modified:
      storage/ndb/test/ndbapi/flexAsynch.cpp
 4659 Mauritz Sundell	2011-11-11
      ndb - remove unused parameter minId to getFreeObjId()

    modified:
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp
 4658 Mauritz Sundell	2011-11-11
      ndb - Now at patch dyn-mem-dict--noofmetatables
      
      introduce c_noOfMetaTables instead of c_tableRecordPool.getSize() in dict

    modified:
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp
 4657 Mauritz Sundell	2011-11-11
      ndb - adding a schema file id hash table for DictObject
      
      And removing the current separate hash tables for File,
      Filegroup, and HashMapRecord.
      
      The id hash for DictObject replaces the current hash tables
      for File, Filegroup, HashMap and will also be used by Table
      and Index.  Note that temporary table objects created by
      alter table will be added to the id hash table but not to the
      name hash table, since the way to reach table records
      will be through the dict object and schema file id, rather than
      assuming, as now, that the table id is the ptr-i-value in the
      table record pool.

    modified:
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp
 4656 Mauritz Sundell	2011-11-11
      ndb - use DLMHashTable instead of DLHashTable in Dbdict

    modified:
      storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp
      storage/ndb/src/kernel/vm/DLFifoList.hpp
 4655 Mauritz Sundell	2011-11-11
      ndb - new template for intrusive hash tables, intended to be the one
      
      There were several variants of DLHashTable, such as DLHashTable2, KeyTable,
      KeyTable2 and KeyTable2c.  In time this new template should replace them
      all.
      
      The objective in introducing it now is to simplify the use of two hash tables
      on the same object, now needed for DictObject.
      
      The M in DLMHashTable can mean Meta or Methods, since the template takes
      a Meta/Methods class as a parameter that tells the hash table how to get the
      next/prev link, calculate the hash value and determine equality of objects.
      
      The default Meta/Methods class assumes the same interface that is already
      used in classes using DLHashTable.
      
      The DLMHashTable logic is copied from DLHashTable.  Direct references like
      obj->nextHash are replaced with M::nextHash(*obj), and likewise for
      prevHash, hashValue() and equal().

    modified:
      storage/ndb/src/kernel/vm/DLHashTable.hpp
      storage/ndb/src/kernel/vm/SimulatedBlock.hpp
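The design described in 4655 can be sketched roughly as follows: the table holds no knowledge of the object's layout, and a Methods (M) policy class tells it how to reach the intrusive link, compute the hash, and compare keys. All names below (`Node`, `DefaultMethods`, `MHashTable`) are illustrative stand-ins, not the actual DLMHashTable code.

```cpp
#include <cassert>
#include <cstddef>

// An object that lives in the hash table: the next-link is intrusive.
struct Node {
  unsigned key;
  Node* nextHash;
  bool equal(const Node& other) const { return key == other.key; }
  unsigned hashValue() const { return key; }
};

// Default policy: assumes the same member interface the existing
// DLHashTable users already provide (nextHash, hashValue, equal).
template <typename T>
struct DefaultMethods {
  static T*& nextHash(T& obj) { return obj.nextHash; }
  static unsigned hashValue(const T& obj) { return obj.hashValue(); }
  static bool equal(const T& a, const T& b) { return a.equal(b); }
};

template <typename T, typename M = DefaultMethods<T> >
class MHashTable {
  static const std::size_t NBUCKETS = 64;
  T* buckets[NBUCKETS];
public:
  MHashTable() { for (std::size_t i = 0; i < NBUCKETS; i++) buckets[i] = 0; }
  void add(T& obj) {
    std::size_t b = M::hashValue(obj) % NBUCKETS;
    M::nextHash(obj) = buckets[b];        // link stored inside the object
    buckets[b] = &obj;
  }
  T* find(const T& key) const {
    std::size_t b = M::hashValue(key) % NBUCKETS;
    for (T* p = buckets[b]; p != 0; p = M::nextHash(*p))
      if (M::equal(*p, key)) return p;
    return 0;
  }
};
```

The point of the policy parameter is that two different M classes can name two different sets of link fields, letting the same object sit in two hash tables at once (e.g. one keyed by id and one by name), which is exactly the DictObject use case cited above.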
 4654 Jonas Oreland	2011-11-10 [merge]
      ndb - merge 70-tip

    modified:
      mysql-test/suite/ndb/r/ndb_condition_pushdown.result
      mysql-test/suite/ndb/t/ndb_condition_pushdown.test
      sql/ha_ndbcluster_cond.cc
      sql/ha_ndbcluster_cond.h
 4653 Jonas Oreland	2011-11-10
      ndb - add missing deinit of ndb_index_stat_thread + fix a valgrind leak (once per startup/shutdown)

    modified:
      sql/ha_ndbcluster.cc
      storage/ndb/src/common/portlib/NdbThread.c
 4652 Jonas Oreland	2011-11-10
      ndb - start introducing components in ndb handler
        step 2 - Make index stat thread use initial Ndb_component base class
          This step does not in itself simplify that much
          It basically only removes a bunch of pthread_*-calls

    modified:
      sql/ha_ndb_index_stat.cc
      sql/ha_ndb_index_stat.h
      sql/ha_ndbcluster.cc
      sql/ha_ndbcluster.h
      sql/ha_ndbcluster_binlog.cc
      sql/ha_ndbcluster_binlog.h
 4651 Jonas Oreland	2011-11-10
      ndb - forgot to add new include files to dist :-(

    modified:
      sql/Makefile.am
 4650 Jonas Oreland	2011-11-10
      ndb - start introducing components in ndb handler
        step 1- Make util thread use initial Ndb_component base class
          This step does not in itself simplify that much
          It basically only removes a bunch of pthread_*-calls
          But it's a start

    added:
      sql/ndb_component.cc
      sql/ndb_component.h
      sql/ndb_util_thread.h
    modified:
      libmysqld/Makefile.am
      sql/Makefile.am
      sql/ha_ndbcluster.cc
      sql/ha_ndbcluster.h
      sql/ha_ndbcluster_binlog.cc
      sql/ha_ndbcluster_binlog.h
      storage/ndb/CMakeLists.txt
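The "component" refactoring in 4650 and 4652 amounts to wrapping the recurring pthread create/stop/join boilerplate in a small base class so that each thread (util thread, index stat thread, ...) only implements its body. The sketch below shows the general shape under that assumption; the class and method names are hypothetical, not the actual sql/ndb_component.h API.

```cpp
#include <cassert>
#include <pthread.h>

// Base class owning the thread lifecycle; subclasses supply do_run().
class Component {
  pthread_t m_thread;
  pthread_mutex_t m_lock;
  bool m_stop_requested;
  static void* run_trampoline(void* arg) {
    static_cast<Component*>(arg)->do_run();
    return 0;
  }
protected:
  virtual void do_run() = 0;            // the component's thread body
  bool is_stop_requested() {            // body polls this to exit its loop
    pthread_mutex_lock(&m_lock);
    bool stop = m_stop_requested;
    pthread_mutex_unlock(&m_lock);
    return stop;
  }
public:
  Component() : m_stop_requested(false) { pthread_mutex_init(&m_lock, 0); }
  virtual ~Component() { pthread_mutex_destroy(&m_lock); }
  int start() { return pthread_create(&m_thread, 0, run_trampoline, this); }
  void stop_and_join() {
    pthread_mutex_lock(&m_lock);
    m_stop_requested = true;
    pthread_mutex_unlock(&m_lock);
    pthread_join(m_thread, 0);
  }
};

// Minimal example component: its thread body sets a flag and returns.
class ExampleComponent : public Component {
public:
  volatile int did_run;
  ExampleComponent() : did_run(0) {}
  void do_run() { did_run = 1; }
};
```

This matches the commit's stated effect: each component loses its ad-hoc pthread_* calls, and start/stop ordering lives in one place instead of being repeated per thread.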
 4649 Jonas Oreland	2011-11-09
      ndb - include mikaels extensions for flexAsynch...

    modified:
      storage/ndb/test/ndbapi/flexAsynch.cpp
 4648 Ole John Aske	2011-11-09
      Fix for bug#13355055: CLUSTER INTERNALS FAILS TO TERMINATE BATCH AT MAX 'BATCHBYTESIZE'
      
        We have observed SCANREQs with a surprisingly small 'BatchSize' argument as part
      of debugging and tuning SPJ. Where we expected 'BatchSize=64' (Default) we
      have observed values around ~10. This directly translated into suboptimal performance.
      
      When debugging this, we found the root cause in NdbRecord::calculate_batch_size(), which
      returns the batchsize (#rows) and  arguments for the SCANREQ signal.
      It contained the following questionable logic:
      
       1) Calculate the worst case record length, based on the assumption that *all
          columns* are selected and all varchar() columns are filled to their *max limit*.
      
       2) If that record length is such that 'batchsize * recLength' > batchbytesize,
          reduce batchsize such that batchbytesize would never be exceeded.
      
      This effectively put ::calculate_batch_size() in control of the batchbytesize
      logic. The negative impact of that logic was that 'batchsize' could be severely
      restricted in cases where we could have delivered a lot more rows in that batch.
      
      However, there is logic in LQH+TUP which is intended to keep the delivered batches
      within the batchsize limits. This is a much better place to control this, as
      LQH & TUP know the exact size of the TRANSID_AI payload being delivered, taking
      the actual varchar lengths and only the selected columns into account.
      
      Debugging that logic, it turned out that it contained bugs in how the produced
      batchsize was counted: a mixup between whether the 'length' was specified in
      number of bytes or in Uint32 words. So the above questionable
      ::calculate_batch_size() logic seems to have been invented only to
      circumvent this bug...
      
      Fixing that bug allowed us to now leave the entire batch control to
      the LQH block.
      
      - ::calculate_batch_size could then be significantly simplified.
      - The specified BatchSize & BatchByteSize arguments could be used as
        specified directly as args in SCANREQ signals.
      - Will likely give better performance (larger effective batches) when
        scanning a table with 'max record length > BatchByteSize / BatchSize'
        (~500 bytes with default config)
      
      
      Fix number of bytes/Uint32 mixup in how m_curr_batch_size_bytes is counted
      ******
      Fix number of bytes/Uint32 mixup in how the SPJ adaptive parallelism counts m_totalBytes
      ******
      Simplify ::calculate_batch_size() as LQH now correctly will stay within the specified batch_size rows/bytes limits
      ******
      Remove NdbRecord::m_max_transid_ai_bytes which is now obsolete
      ******
      Remove unused args from NdbRecord::calculate_batch_size()
      ******
      Fix SPJ's adaptive parallelism logic to also handle batchsize termination due to BatchByteSize being exhausted

    modified:
      storage/ndb/include/kernel/signaldata/ScanFrag.hpp
      storage/ndb/include/kernel/signaldata/TupKey.hpp
      storage/ndb/include/ndbapi/NdbReceiver.hpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp
      storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp
      storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp
      storage/ndb/src/ndbapi/NdbQueryOperation.cpp
      storage/ndb/src/ndbapi/NdbReceiver.cpp
      storage/ndb/src/ndbapi/NdbRecord.hpp
      storage/ndb/src/ndbapi/NdbScanOperation.cpp
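The bytes/Uint32 mixup described in 4648 is easy to reproduce in miniature: the payload length arrives counted in 32-bit words, but the running batch counter is in bytes, so adding words to a byte counter under-counts by a factor of four and the byte limit fires far too late. The sketch below illustrates only the unit bug; `BatchState` and the function names are invented for illustration and are not the LQH code.

```cpp
#include <cassert>

// Running state for one scan batch: bytes delivered so far, and the limit.
struct BatchState {
  unsigned curr_batch_size_bytes;
  unsigned max_batch_bytes;     // the BatchByteSize limit
};

// Buggy accounting: adds the Uint32 word count straight into a byte counter,
// so the batch appears 4x smaller than it really is.
bool batch_full_buggy(BatchState& s, unsigned payload_words) {
  s.curr_batch_size_bytes += payload_words;            // unit mixup
  return s.curr_batch_size_bytes >= s.max_batch_bytes;
}

// Fixed accounting: convert words to bytes before accumulating.
bool batch_full_fixed(BatchState& s, unsigned payload_words) {
  s.curr_batch_size_bytes += payload_words * 4;        // Uint32 -> bytes
  return s.curr_batch_size_bytes >= s.max_batch_bytes;
}
```

With the counter fixed, LQH can be trusted to terminate batches at BatchByteSize on its own, which is what allowed ::calculate_batch_size() to stop second-guessing it and simply pass the configured limits through.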
 4647 Pekka Nousiainen	2011-11-09 [merge]
      merge 7.0 to wl4124-new5

    modified:
      storage/ndb/src/kernel/blocks/dbspj/Dbspj.hpp
      storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp
 4646 Pekka Nousiainen	2011-11-09
      wl#4124 x28_fix.diff
      handler: forgot MTR

    modified:
      mysql-test/suite/ndb/r/ndb_index_stat.result
=== modified file 'libmysqld/Makefile.am'
--- a/libmysqld/Makefile.am	2011-09-07 22:50:01 +0000
+++ b/libmysqld/Makefile.am	2011-11-10 08:16:52 +0000
@@ -49,7 +49,8 @@ sqlsources = derror.cc field.cc field_co
 	     ha_ndbcluster.cc ha_ndbcluster_cond.cc \
 	ha_ndbcluster_connection.cc ha_ndbinfo.cc \
 	ha_ndb_index_stat.cc \
-	ha_ndbcluster_binlog.cc ndb_conflict_trans.cc ha_partition.cc \
+	ha_ndbcluster_binlog.cc ndb_conflict_trans.cc ndb_component.cc \
+        ha_partition.cc \
 	handler.cc sql_handler.cc \
 	hostname.cc init.cc password.c \
 	item.cc item_buff.cc item_cmpfunc.cc item_create.cc \

=== modified file 'mysql-test/suite/ndb/r/ndb_condition_pushdown.result'
--- a/mysql-test/suite/ndb/r/ndb_condition_pushdown.result	2011-11-04 08:33:56 +0000
+++ b/mysql-test/suite/ndb/r/ndb_condition_pushdown.result	2011-11-10 15:08:40 +0000
@@ -2148,6 +2148,24 @@ id	select_type	table	type	possible_keys
 select * from t where x not like "aa?";
 pk	x
 0	a
+select * from t where x like "%a%";
+pk	x
+0	a
+select * from t where x not like "%b%";
+pk	x
+0	a
+select * from t where x like replace(concat("%", "b%"),"b","a");
+pk	x
+0	a
+select * from t where x not like replace(concat("%", "a%"),"a","b");
+pk	x
+0	a
+select * from t where x like concat("%", replace("b%","b","a"));
+pk	x
+0	a
+select * from t where x not like concat("%", replace("a%","a","b"));
+pk	x
+0	a
 drop table t;
 create table t (pk int primary key, x int) engine = ndb;
 insert into t values (0,0),(1,1),(2,2),(3,3),(4,4),(5,5);
@@ -2345,6 +2363,22 @@ select * from mytable where s like conca
 i	s
 0	Text Hej
 1	xText aaja
+select * from mytable where s like replace(concat("%Xext","%"),"X", "T") order by i;
+i	s
+0	Text Hej
+1	xText aaja
+select * from mytable where s not like replace(concat("%Text","%"),"T", "X") order by i;
+i	s
+0	Text Hej
+1	xText aaja
+select * from mytable where s like concat(replace("%Xext","X", "T"),"%") order by i;
+i	s
+0	Text Hej
+1	xText aaja
+select * from mytable where s not like concat(replace("%Text","T", "X"),"%") order by i;
+i	s
+0	Text Hej
+1	xText aaja
 drop table mytable;
 set engine_condition_pushdown = @old_ecpd;
 DROP TABLE t1,t2,t3,t4,t5;

=== modified file 'mysql-test/suite/ndb/r/ndb_index_stat.result'
--- a/mysql-test/suite/ndb/r/ndb_index_stat.result	2011-11-09 08:27:32 +0000
+++ b/mysql-test/suite/ndb/r/ndb_index_stat.result	2011-11-19 07:56:25 +0000
@@ -548,6 +548,23 @@ SELECT count(*) as Count FROM t1 WHERE L
 Count
 256
 drop table t1;
+create table t1 (
+a int unsigned not null,
+b char(180) not null,
+primary key using hash (a),
+index (b)
+) engine=ndb charset=binary;
+insert into t1 values (1,'a'),(2,'b'),(3,'c');
+analyze table t1;
+Table	Op	Msg_type	Msg_text
+test.t1	analyze	status	OK
+analyze table t1;
+Table	Op	Msg_type	Msg_text
+test.t1	analyze	status	OK
+analyze table t1;
+Table	Op	Msg_type	Msg_text
+test.t1	analyze	status	OK
+drop table t1;
 set @is_enable = @is_enable_default;
 set @is_enable = NULL;
 # is_enable_on=0 is_enable_off=1

=== modified file 'mysql-test/suite/ndb/t/ndb_condition_pushdown.test'
--- a/mysql-test/suite/ndb/t/ndb_condition_pushdown.test	2011-11-04 08:33:56 +0000
+++ b/mysql-test/suite/ndb/t/ndb_condition_pushdown.test	2011-11-10 15:08:40 +0000
@@ -2257,6 +2257,12 @@ explain select * from t where x like "aa
 select * from t where x like "aa?";
 explain select * from t where x not like "aa?";
 select * from t where x not like "aa?";
+select * from t where x like "%a%";
+select * from t where x not like "%b%";
+select * from t where x like replace(concat("%", "b%"),"b","a");
+select * from t where x not like replace(concat("%", "a%"),"a","b");
+select * from t where x like concat("%", replace("b%","b","a"));
+select * from t where x not like concat("%", replace("a%","a","b"));
 drop table t;
 
 # Bug#57735 BETWEEN in pushed condition cause garbage to be read in ::unpack_record() 
@@ -2392,6 +2398,12 @@ set engine_condition_pushdown=1;
  select * from mytable where s like concat("%Text","%") and s not like "%Text%" order by i;
  select * from mytable where s like concat("%Text","%") and s not like "%Text1%" order by i;
 
+ select * from mytable where s like replace(concat("%Xext","%"),"X", "T") order by i;
+ select * from mytable where s not like replace(concat("%Text","%"),"T", "X") order by i;
+ select * from mytable where s like concat(replace("%Xext","X", "T"),"%") order by i;
+ select * from mytable where s not like concat(replace("%Text","T", "X"),"%") order by i;
+
+
 drop table mytable;
 
 set engine_condition_pushdown = @old_ecpd;

=== modified file 'mysql-test/suite/ndb/t/ndb_index_stat.test'
--- a/mysql-test/suite/ndb/t/ndb_index_stat.test	2011-09-02 06:43:38 +0000
+++ b/mysql-test/suite/ndb/t/ndb_index_stat.test	2011-11-19 07:56:25 +0000
@@ -374,5 +374,20 @@ SELECT count(*) as Count FROM t1 WHERE L
 
 drop table t1;
 
+# bug#13407848
+# signed char in compute length bytes caused ndbrequire in Trix.cpp
+
+create table t1 (
+  a int unsigned not null,
+  b char(180) not null,
+  primary key using hash (a),
+  index (b)
+) engine=ndb charset=binary;
+insert into t1 values (1,'a'),(2,'b'),(3,'c');
+analyze table t1;
+analyze table t1;
+analyze table t1;
+drop table t1;
+
 set @is_enable = @is_enable_default;
 source ndb_index_stat_enable.inc;

=== modified file 'sql/Makefile.am'
--- a/sql/Makefile.am	2011-09-07 22:50:01 +0000
+++ b/sql/Makefile.am	2011-11-10 09:01:54 +0000
@@ -64,6 +64,8 @@ noinst_HEADERS =	item.h item_func.h item
 			ha_ndb_index_stat.h \
                         ndb_mi.h \
 			ndb_conflict_trans.h \
+                        ndb_component.h \
+                        ndb_util_thread.h \
 			ha_partition.h rpl_constants.h \
 			debug_sync.h \
 			opt_range.h protocol.h rpl_tblmap.h rpl_utility.h \
@@ -141,7 +143,8 @@ libndb_la_SOURCES=	ha_ndbcluster.cc \
 			ha_ndb_index_stat.cc \
 			ha_ndbinfo.cc \
 			ndb_mi.cc \
-			ndb_conflict_trans.cc
+			ndb_conflict_trans.cc \
+                        ndb_component.cc
 
 gen_lex_hash_SOURCES =	gen_lex_hash.cc
 gen_lex_hash_LDFLAGS =  @NOINST_LDFLAGS@

=== modified file 'sql/ha_ndb_index_stat.cc'
--- a/sql/ha_ndb_index_stat.cc	2011-11-08 21:43:36 +0000
+++ b/sql/ha_ndb_index_stat.cc	2011-11-10 10:35:09 +0000
@@ -24,6 +24,16 @@
 #include <mysql/plugin.h>
 #include <ctype.h>
 
+/* from other files */
+extern struct st_ndb_status g_ndb_status;
+extern pthread_mutex_t ndbcluster_mutex;
+
+/* these have to live in ha_ndbcluster.cc */
+extern bool ndb_index_stat_get_enable(THD *thd);
+extern const char* g_ndb_status_index_stat_status;
+extern long g_ndb_status_index_stat_cache_query;
+extern long g_ndb_status_index_stat_cache_clean;
+
 // Do we have waiter...
 static bool ndb_index_stat_waiter= false;
 
@@ -40,6 +50,28 @@ set_thd_ndb(THD *thd, Thd_ndb *thd_ndb)
 typedef NdbDictionary::Table NDBTAB;
 typedef NdbDictionary::Index NDBINDEX;
 
+/** ndb_index_stat_thread */
+Ndb_index_stat_thread::Ndb_index_stat_thread()
+  : running(-1)
+{
+  pthread_mutex_init(&LOCK, MY_MUTEX_INIT_FAST);
+  pthread_cond_init(&COND, NULL);
+  pthread_cond_init(&COND_ready, NULL);
+  pthread_mutex_init(&list_mutex, MY_MUTEX_INIT_FAST);
+  pthread_mutex_init(&stat_mutex, MY_MUTEX_INIT_FAST);
+  pthread_cond_init(&stat_cond, NULL);
+}
+
+Ndb_index_stat_thread::~Ndb_index_stat_thread()
+{
+  pthread_mutex_destroy(&LOCK);
+  pthread_cond_destroy(&COND);
+  pthread_cond_destroy(&COND_ready);
+  pthread_mutex_destroy(&list_mutex);
+  pthread_mutex_destroy(&stat_mutex);
+  pthread_cond_destroy(&stat_cond);
+}
+
 struct Ndb_index_stat {
   enum {
     LT_Undef= 0,
@@ -912,8 +944,8 @@ ndb_index_stat_get_share(NDB_SHARE *shar
   Ndb_index_stat_glob &glob= ndb_index_stat_glob;
 
   pthread_mutex_lock(&share->mutex);
-  pthread_mutex_lock(&ndb_index_stat_list_mutex);
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.list_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   time_t now= ndb_index_stat_time();
   err_out= 0;
 
@@ -950,8 +982,8 @@ ndb_index_stat_get_share(NDB_SHARE *shar
   }
   while (0);
 
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
-  pthread_mutex_unlock(&ndb_index_stat_list_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.list_mutex);
   pthread_mutex_unlock(&share->mutex);
   return st;
 }
@@ -966,7 +998,7 @@ ndb_index_stat_free(Ndb_index_stat *st)
 {
   DBUG_ENTER("ndb_index_stat_free");
   Ndb_index_stat_glob &glob= ndb_index_stat_glob;
-  pthread_mutex_lock(&ndb_index_stat_list_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.list_mutex);
   NDB_SHARE *share= st->share;
   assert(share != 0);
 
@@ -1003,10 +1035,10 @@ ndb_index_stat_free(Ndb_index_stat *st)
   assert(found);
   share->index_stat_list= st_head;
 
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
-  pthread_mutex_unlock(&ndb_index_stat_list_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.list_mutex);
   DBUG_VOID_RETURN;
 }
 
@@ -1015,7 +1047,7 @@ ndb_index_stat_free(NDB_SHARE *share)
 {
   DBUG_ENTER("ndb_index_stat_free");
   Ndb_index_stat_glob &glob= ndb_index_stat_glob;
-  pthread_mutex_lock(&ndb_index_stat_list_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.list_mutex);
   Ndb_index_stat *st;
   while ((st= share->index_stat_list) != 0)
   {
@@ -1028,10 +1060,10 @@ ndb_index_stat_free(NDB_SHARE *share)
     assert(!st->to_delete);
     st->to_delete= true;
   }
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
-  pthread_mutex_unlock(&ndb_index_stat_list_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.list_mutex);
   DBUG_VOID_RETURN;
 }
 
@@ -1042,7 +1074,7 @@ ndb_index_stat_find_entry(int index_id,
 {
   DBUG_ENTER("ndb_index_stat_find_entry");
   pthread_mutex_lock(&ndbcluster_mutex);
-  pthread_mutex_lock(&ndb_index_stat_list_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.list_mutex);
   DBUG_PRINT("index_stat", ("find index:%d version:%d table:%d",
                             index_id, index_version, table_id));
 
@@ -1055,7 +1087,7 @@ ndb_index_stat_find_entry(int index_id,
       if (st->index_id == index_id &&
           st->index_version == index_version)
       {
-        pthread_mutex_unlock(&ndb_index_stat_list_mutex);
+        pthread_mutex_unlock(&ndb_index_stat_thread.list_mutex);
         pthread_mutex_unlock(&ndbcluster_mutex);
         DBUG_RETURN(st);
       }
@@ -1063,7 +1095,7 @@ ndb_index_stat_find_entry(int index_id,
     }
   }
 
-  pthread_mutex_unlock(&ndb_index_stat_list_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.list_mutex);
   pthread_mutex_unlock(&ndbcluster_mutex);
   DBUG_RETURN(0);
 }
@@ -1137,7 +1169,7 @@ void
 ndb_index_stat_proc_new(Ndb_index_stat_proc &pr)
 {
   Ndb_index_stat_glob &glob= ndb_index_stat_glob;
-  pthread_mutex_lock(&ndb_index_stat_list_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.list_mutex);
   const int lt= Ndb_index_stat::LT_New;
   Ndb_index_stat_list &list= ndb_index_stat_list[lt];
 
@@ -1151,10 +1183,10 @@ ndb_index_stat_proc_new(Ndb_index_stat_p
     assert(pr.lt != lt);
     ndb_index_stat_list_move(st, pr.lt);
   }
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
-  pthread_mutex_unlock(&ndb_index_stat_list_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.list_mutex);
 }
 
 void
@@ -1189,9 +1221,9 @@ ndb_index_stat_proc_update(Ndb_index_sta
     assert(pr.lt != lt);
     ndb_index_stat_list_move(st, pr.lt);
     // db op so update status after each
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     glob.set_status();
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
     cnt++;
   }
   if (cnt == batch)
@@ -1204,7 +1236,7 @@ ndb_index_stat_proc_read(Ndb_index_stat_
   NdbIndexStat::Head head;
   if (st->is->read_stat(pr.ndb) == -1)
   {
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     ndb_index_stat_error(st, "read_stat", __LINE__);
     const bool force_update= st->force_update;
     ndb_index_stat_force_update(st, false);
@@ -1221,12 +1253,12 @@ ndb_index_stat_proc_read(Ndb_index_stat_
       pr.lt= Ndb_index_stat::LT_Error;
     }
 
-    pthread_cond_broadcast(&ndb_index_stat_stat_cond);
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_cond_broadcast(&ndb_index_stat_thread.stat_cond);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
     return;
   }
 
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   pr.now= ndb_index_stat_time();
   st->is->get_head(head);
   st->load_time= head.m_loadTime;
@@ -1239,8 +1271,8 @@ ndb_index_stat_proc_read(Ndb_index_stat_
   ndb_index_stat_cache_move(st);
   st->cache_clean= false;
   pr.lt= Ndb_index_stat::LT_Idle;
-  pthread_cond_broadcast(&ndb_index_stat_stat_cond);
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_cond_broadcast(&ndb_index_stat_thread.stat_cond);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 }
 
 void
@@ -1263,9 +1295,9 @@ ndb_index_stat_proc_read(Ndb_index_stat_
     assert(pr.lt != lt);
     ndb_index_stat_list_move(st, pr.lt);
     // db op so update status after each
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     glob.set_status();
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
     cnt++;
   }
   if (cnt == batch)
@@ -1329,7 +1361,7 @@ ndb_index_stat_proc_idle(Ndb_index_stat_
   const Ndb_index_stat_opt &opt= ndb_index_stat_opt;
   uint batch= opt.get(Ndb_index_stat_opt::Iidle_batch);
   {
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     const Ndb_index_stat_glob &glob= ndb_index_stat_glob;
     const int lt_update= Ndb_index_stat::LT_Update;
     const Ndb_index_stat_list &list_update= ndb_index_stat_list[lt_update];
@@ -1338,7 +1370,7 @@ ndb_index_stat_proc_idle(Ndb_index_stat_
       // probably there is a force update waiting on Idle list
       batch= ~0;
     }
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
   }
   // entry may be moved to end of this list
   if (batch > list.count)
@@ -1358,9 +1390,9 @@ ndb_index_stat_proc_idle(Ndb_index_stat_
     cnt++;
   }
   // full batch does not set pr.busy
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 }
 
 void
@@ -1417,9 +1449,9 @@ ndb_index_stat_proc_check(Ndb_index_stat
     assert(pr.lt != lt);
     ndb_index_stat_list_move(st, pr.lt);
     // db op so update status after each
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     glob.set_status();
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
     cnt++;
   }
   if (cnt == batch)
@@ -1453,9 +1485,9 @@ ndb_index_stat_proc_evict(Ndb_index_stat
   ndb_index_stat_cache_move(st);
   ndb_index_stat_cache_clean(st);
 
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 }
 
 bool
@@ -1564,10 +1596,10 @@ ndb_index_stat_proc_delete(Ndb_index_sta
     DBUG_PRINT("index_stat", ("st %s proc %s", st->id, list.name));
 
     // adjust global counters at drop
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     ndb_index_stat_force_update(st, false);
     ndb_index_stat_no_stats(st, false);
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 
     ndb_index_stat_proc_evict(pr, st);
     ndb_index_stat_list_remove(st);
@@ -1578,9 +1610,9 @@ ndb_index_stat_proc_delete(Ndb_index_sta
   if (cnt == batch)
     pr.busy= true;
 
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 }
 
 void
@@ -1640,9 +1672,9 @@ ndb_index_stat_proc_error(Ndb_index_stat
     cnt++;
   }
   // full batch does not set pr.busy
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 }
 
 void
@@ -1715,9 +1747,9 @@ ndb_index_stat_proc_event(Ndb_index_stat
       glob.event_miss++;
     }
   }
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 }
 
 /* Control options */
@@ -1731,11 +1763,11 @@ ndb_index_stat_proc_control(Ndb_index_st
   /* Request to zero accumulating counters */
   if (opt.get(Ndb_index_stat_opt::Izero_total) == true)
   {
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     glob.zero_total();
     glob.set_status();
     opt.set(Ndb_index_stat_opt::Izero_total, false);
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
   }
 }
 
@@ -1829,10 +1861,10 @@ ndb_index_stat_list_verify(int lt)
 void
 ndb_index_stat_list_verify()
 {
-  pthread_mutex_lock(&ndb_index_stat_list_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.list_mutex);
   for (int lt= 1; lt < Ndb_index_stat::LT_Count; lt++)
     ndb_index_stat_list_verify(lt);
-  pthread_mutex_unlock(&ndb_index_stat_list_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.list_mutex);
 }
 
 void
@@ -1891,9 +1923,9 @@ ndb_index_stat_end()
    * in LT_Delete.  The first two steps here should be unnecessary.
    */
 
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   ndb_index_stat_allow(0);
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 
   int lt;
   for (lt= 1; lt < Ndb_index_stat::LT_Count; lt++)
@@ -2029,14 +2061,13 @@ ndb_index_stat_stop_listener(Ndb_index_s
   DBUG_RETURN(0);
 }
 
-pthread_handler_t
-ndb_index_stat_thread_func(void *arg __attribute__((unused)))
+void
+Ndb_index_stat_thread::do_run()
 {
   THD *thd; /* needs to be first for thread_stack */
   struct timespec abstime;
   Thd_ndb *thd_ndb= NULL;
 
-  my_thread_init();
   DBUG_ENTER("ndb_index_stat_thread_func");
 
   Ndb_index_stat_glob &glob= ndb_index_stat_glob;
@@ -2046,18 +2077,16 @@ ndb_index_stat_thread_func(void *arg __a
   have_listener= false;
 
   // wl4124_todo remove useless stuff copied from utility thread
- 
-  pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+
+  pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
 
   thd= new THD; /* note that contructor of THD uses DBUG_ */
   if (thd == NULL)
   {
     my_errno= HA_ERR_OUT_OF_MEM;
-    DBUG_RETURN(NULL);
+    DBUG_VOID_RETURN;
   }
   THD_CHECK_SENTRY(thd);
-  pthread_detach_this_thread();
-  ndb_index_stat_thread= pthread_self();
 
   thd->thread_stack= (char*)&thd; /* remember where our stack is */
   if (thd->store_globals())
@@ -2080,9 +2109,9 @@ ndb_index_stat_thread_func(void *arg __a
   thd->update_charset();
 
   /* Signal successful initialization */
-  ndb_index_stat_thread_running= 1;
-  pthread_cond_signal(&COND_ndb_index_stat_ready);
-  pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+  ndb_index_stat_thread.running= 1;
+  pthread_cond_signal(&ndb_index_stat_thread.COND_ready);
+  pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
 
   /*
     wait for mysql server to start
@@ -2096,7 +2125,7 @@ ndb_index_stat_thread_func(void *arg __a
     if (ndbcluster_terminating)
     {
       mysql_mutex_unlock(&LOCK_server_started);
-      pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+      pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
       goto ndb_index_stat_thread_end;
     }
   }
@@ -2105,21 +2134,21 @@ ndb_index_stat_thread_func(void *arg __a
   /*
     Wait for cluster to start
   */
-  pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+  pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
   while (!g_ndb_status.cluster_node_id && (ndbcluster_hton->slot != ~(uint)0))
   {
     /* ndb not connected yet */
-    pthread_cond_wait(&COND_ndb_index_stat_thread, &LOCK_ndb_index_stat_thread);
+    pthread_cond_wait(&ndb_index_stat_thread.COND, &ndb_index_stat_thread.LOCK);
     if (ndbcluster_terminating)
       goto ndb_index_stat_thread_end;
   }
-  pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+  pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
 
   /* Get instance used for sys objects check and create */
   if (!(pr.is_util= new NdbIndexStat))
   {
     sql_print_error("Could not allocate NdbIndexStat is_util object");
-    pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
     goto ndb_index_stat_thread_end;
   }
 
@@ -2127,7 +2156,7 @@ ndb_index_stat_thread_func(void *arg __a
   if (!(thd_ndb= ha_ndbcluster::seize_thd_ndb(thd)))
   {
     sql_print_error("Could not allocate Thd_ndb object");
-    pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
     goto ndb_index_stat_thread_end;
   }
   set_thd_ndb(thd, thd_ndb);
@@ -2136,19 +2165,19 @@ ndb_index_stat_thread_func(void *arg __a
   {
     sql_print_error("Could not change index stats thd_ndb database to %s",
                     NDB_INDEX_STAT_DB);
-    pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
     goto ndb_index_stat_thread_end;
   }
   pr.ndb= thd_ndb->ndb;
 
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   ndb_index_stat_allow(1);
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 
   /* Fill in initial status variable */
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   glob.set_status();
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
 
   bool enable_ok;
   enable_ok= false;
@@ -2156,10 +2185,10 @@ ndb_index_stat_thread_func(void *arg __a
   set_timespec(abstime, 0);
   for (;;)
   {
-    pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
     if (!ndbcluster_terminating && ndb_index_stat_waiter == false) {
-      int ret= pthread_cond_timedwait(&COND_ndb_index_stat_thread,
-                                      &LOCK_ndb_index_stat_thread,
+      int ret= pthread_cond_timedwait(&ndb_index_stat_thread.COND,
+                                      &ndb_index_stat_thread.LOCK,
                                       &abstime);
       const char* reason= ret == ETIMEDOUT ? "timed out" : "wake up";
       (void)reason; // USED
@@ -2168,7 +2197,7 @@ ndb_index_stat_thread_func(void *arg __a
     if (ndbcluster_terminating) /* Shutting down server */
       goto ndb_index_stat_thread_end;
     ndb_index_stat_waiter= false;
-    pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
 
     /* const bool enable_ok_new= THDVAR(NULL, index_stat_enable); */
     const bool enable_ok_new= ndb_index_stat_get_enable(NULL);
@@ -2229,9 +2258,9 @@ ndb_index_stat_thread_func(void *arg __a
     glob.th_enable= enable_ok;
     glob.th_busy= pr.busy;
     glob.th_loop= msecs;
-    pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
     glob.set_status();
-    pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+    pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
   }
 
 ndb_index_stat_thread_end:
@@ -2257,15 +2286,12 @@ ndb_index_stat_thread_fail:
   delete thd;
   
   /* signal termination */
-  ndb_index_stat_thread_running= 0;
-  pthread_cond_signal(&COND_ndb_index_stat_ready);
-  pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+  ndb_index_stat_thread.running= 0;
+  pthread_cond_signal(&ndb_index_stat_thread.COND_ready);
+  pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
   DBUG_PRINT("exit", ("ndb_index_stat_thread"));
 
   DBUG_LEAVE;
-  my_thread_end();
-  pthread_exit(0);
-  return NULL;
 }
 
 /* Optimizer queries */
@@ -2291,7 +2317,7 @@ ndb_index_stat_wait(Ndb_index_stat *st,
   DBUG_ENTER("ndb_index_stat_wait");
 
   Ndb_index_stat_glob &glob= ndb_index_stat_glob;
-  pthread_mutex_lock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_lock(&ndb_index_stat_thread.stat_mutex);
   int err= 0;
   uint count= 0;
   struct timespec abstime;
@@ -2338,13 +2364,13 @@ ndb_index_stat_wait(Ndb_index_stat *st,
       break;
     DBUG_PRINT("index_stat", ("st %s wait count:%u",
                               st->id, ++count));
-    pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
     ndb_index_stat_waiter= true;
-    pthread_cond_signal(&COND_ndb_index_stat_thread);
-    pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+    pthread_cond_signal(&ndb_index_stat_thread.COND);
+    pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
     set_timespec(abstime, 1);
-    ret= pthread_cond_timedwait(&ndb_index_stat_stat_cond,
-                                &ndb_index_stat_stat_mutex,
+    ret= pthread_cond_timedwait(&ndb_index_stat_thread.stat_cond,
+                                &ndb_index_stat_thread.stat_mutex,
                                 &abstime);
     if (ret != 0 && ret != ETIMEDOUT)
     {
@@ -2362,7 +2388,7 @@ ndb_index_stat_wait(Ndb_index_stat *st,
     assert(glob.wait_update != 0);
     glob.wait_update--;
   }
-  pthread_mutex_unlock(&ndb_index_stat_stat_mutex);
+  pthread_mutex_unlock(&ndb_index_stat_thread.stat_mutex);
   if (err != 0)
   {
     DBUG_PRINT("index_stat", ("st %s wait error: %d",
@@ -2408,9 +2434,9 @@ ha_ndbcluster::ndb_index_stat_query(uint
   if (st->read_time == 0)
   {
     DBUG_PRINT("index_stat", ("no index stats"));
-    pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
-    pthread_cond_signal(&COND_ndb_index_stat_thread);
-    pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
+    pthread_cond_signal(&ndb_index_stat_thread.COND);
+    pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
     DBUG_RETURN(NdbIndexStat::NoIndexStats);
   }
 

=== modified file 'sql/ha_ndb_index_stat.h'
--- a/sql/ha_ndb_index_stat.h	2011-10-08 16:54:19 +0000
+++ b/sql/ha_ndb_index_stat.h	2011-11-10 10:35:09 +0000
@@ -15,22 +15,43 @@
    Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA
 */
 
-/* provides declarations only to index_stat.cc */
+#ifndef HA_NDB_INDEX_STAT_H
+#define HA_NDB_INDEX_STAT_H
 
-extern struct st_ndb_status g_ndb_status;
+#include "ndb_component.h"
 
-extern pthread_mutex_t ndbcluster_mutex;
+/* for NdbIndexScanOperation::IndexBound */
+#include <ndbapi/NdbIndexScanOperation.hpp>
 
-extern pthread_t ndb_index_stat_thread;
-extern pthread_cond_t COND_ndb_index_stat_thread;
-extern pthread_mutex_t LOCK_ndb_index_stat_thread;
-
-/* protect entry lists where needed */
-extern pthread_mutex_t ndb_index_stat_list_mutex;
-
-/* protect and signal changes in stats entries */
-extern pthread_mutex_t ndb_index_stat_stat_mutex;
-extern pthread_cond_t ndb_index_stat_stat_cond;
+/* forward declarations */
+struct st_key_range;
+typedef struct st_key_range key_range;
+struct st_key;
+typedef struct st_key KEY;
+
+class Ndb_index_stat_thread : public Ndb_component
+{
+public:
+  Ndb_index_stat_thread();
+  virtual ~Ndb_index_stat_thread();
+
+  int running;
+  pthread_mutex_t LOCK;
+  pthread_cond_t COND;
+  pthread_cond_t COND_ready;
+
+  /* protect entry lists where needed */
+  pthread_mutex_t list_mutex;
+
+  /* protect and signal changes in stats entries */
+  pthread_mutex_t stat_mutex;
+  pthread_cond_t stat_cond;
+
+private:
+  virtual int do_init() { return 0;}
+  virtual void do_run();
+  virtual int do_deinit() { return 0;}
+};
 
 /* these have to live in ha_ndbcluster.cc */
 extern bool ndb_index_stat_get_enable(THD *thd);
@@ -38,7 +59,7 @@ extern const char* g_ndb_status_index_st
 extern long g_ndb_status_index_stat_cache_query;
 extern long g_ndb_status_index_stat_cache_clean;
 
-void 
+void
 compute_index_bounds(NdbIndexScanOperation::IndexBound & bound,
                      const KEY *key_info,
                      const key_range *start_key, const key_range *end_key,
@@ -54,3 +75,5 @@ compute_index_bounds(NdbIndexScanOperati
 
 /* request on stats entry with recent error was ignored */
 #define Ndb_index_stat_error_HAS_ERROR          9003
+
+#endif

=== modified file 'sql/ha_ndbcluster.cc'
--- a/sql/ha_ndbcluster.cc	2011-10-22 09:38:48 +0000
+++ b/sql/ha_ndbcluster.cc	2011-11-10 15:06:03 +0000
@@ -46,6 +46,8 @@
 #include <ndb_version.h>
 #include "ndb_mi.h"
 #include "ndb_conflict_trans.h"
+#include "ndb_component.h"
+#include "ndb_util_thread.h"
 
 #ifdef ndb_dynamite
 #undef assert
@@ -420,25 +422,7 @@ static int ndb_get_table_statistics(THD
 
 THD *injector_thd= 0;
 
-// Util thread variables
-pthread_t ndb_util_thread;
-int ndb_util_thread_running= 0;
-pthread_mutex_t LOCK_ndb_util_thread;
-pthread_cond_t COND_ndb_util_thread;
-pthread_cond_t COND_ndb_util_ready;
-pthread_handler_t ndb_util_thread_func(void *arg);
-
 // Index stats thread variables
-pthread_t ndb_index_stat_thread;
-int ndb_index_stat_thread_running= 0;
-pthread_mutex_t LOCK_ndb_index_stat_thread;
-pthread_cond_t COND_ndb_index_stat_thread;
-pthread_cond_t COND_ndb_index_stat_ready;
-pthread_mutex_t ndb_index_stat_list_mutex;
-pthread_mutex_t ndb_index_stat_stat_mutex;
-pthread_cond_t ndb_index_stat_stat_cond;
-pthread_handler_t ndb_index_stat_thread_func(void *arg);
-
 extern void ndb_index_stat_free(NDB_SHARE *share);
 extern void ndb_index_stat_end();
 
@@ -12251,7 +12235,7 @@ ndbcluster_find_files(handlerton *hton,
 /* Call back after cluster connect */
 static int connect_callback()
 {
-  pthread_mutex_lock(&LOCK_ndb_util_thread);
+  pthread_mutex_lock(&ndb_util_thread.LOCK);
   update_status_variables(NULL, &g_ndb_status,
                           g_ndb_cluster_connection);
 
@@ -12261,8 +12245,8 @@ static int connect_callback()
   while ((node_id= g_ndb_cluster_connection->get_next_node(node_iter)))
     g_node_id_map[node_id]= i++;
 
-  pthread_cond_signal(&COND_ndb_util_thread);
-  pthread_mutex_unlock(&LOCK_ndb_util_thread);
+  pthread_cond_signal(&ndb_util_thread.COND);
+  pthread_mutex_unlock(&ndb_util_thread.LOCK);
   return 0;
 }
 
@@ -12308,6 +12292,12 @@ int(*ndb_wait_setup_func)(ulong) = 0;
 #endif
 extern int ndb_dictionary_is_mysqld;
 
+/**
+ * Components
+ */
+Ndb_util_thread ndb_util_thread;
+Ndb_index_stat_thread ndb_index_stat_thread;
+
 static int ndbcluster_init(void *p)
 {
   DBUG_ENTER("ndbcluster_init");
@@ -12320,20 +12310,11 @@ static int ndbcluster_init(void *p)
   assert(DependencyTracker::InvalidTransactionId ==
          Ndb_binlog_extra_row_info::InvalidTransactionId);
 #endif
+  ndb_util_thread.init();
+  ndb_index_stat_thread.init();
 
   pthread_mutex_init(&ndbcluster_mutex,MY_MUTEX_INIT_FAST);
-  pthread_mutex_init(&LOCK_ndb_util_thread, MY_MUTEX_INIT_FAST);
-  pthread_cond_init(&COND_ndb_util_thread, NULL);
-  pthread_cond_init(&COND_ndb_util_ready, NULL);
   pthread_cond_init(&COND_ndb_setup_complete, NULL);
-  ndb_util_thread_running= -1;
-  pthread_mutex_init(&LOCK_ndb_index_stat_thread, MY_MUTEX_INIT_FAST);
-  pthread_cond_init(&COND_ndb_index_stat_thread, NULL);
-  pthread_cond_init(&COND_ndb_index_stat_ready, NULL);
-  pthread_mutex_init(&ndb_index_stat_list_mutex, MY_MUTEX_INIT_FAST);
-  pthread_mutex_init(&ndb_index_stat_stat_mutex, MY_MUTEX_INIT_FAST);
-  pthread_cond_init(&ndb_index_stat_stat_cond, NULL);
-  ndb_index_stat_thread_running= -1;
   ndbcluster_terminating= 0;
   ndb_dictionary_is_mysqld= 1;
   ndb_setup_complete= 0;
@@ -12397,72 +12378,53 @@ static int ndbcluster_init(void *p)
   }
 
   // Create utility thread
-  pthread_t tmp;
-  if (pthread_create(&tmp, &connection_attrib, ndb_util_thread_func, 0))
+  if (ndb_util_thread.start())
   {
     DBUG_PRINT("error", ("Could not create ndb utility thread"));
     my_hash_free(&ndbcluster_open_tables);
     pthread_mutex_destroy(&ndbcluster_mutex);
-    pthread_mutex_destroy(&LOCK_ndb_util_thread);
-    pthread_cond_destroy(&COND_ndb_util_thread);
-    pthread_cond_destroy(&COND_ndb_util_ready);
     pthread_cond_destroy(&COND_ndb_setup_complete);
     ndbcluster_global_schema_lock_deinit();
     goto ndbcluster_init_error;
   }
 
   /* Wait for the util thread to start */
-  pthread_mutex_lock(&LOCK_ndb_util_thread);
-  while (ndb_util_thread_running < 0)
-    pthread_cond_wait(&COND_ndb_util_ready, &LOCK_ndb_util_thread);
-  pthread_mutex_unlock(&LOCK_ndb_util_thread);
+  pthread_mutex_lock(&ndb_util_thread.LOCK);
+  while (ndb_util_thread.running < 0)
+    pthread_cond_wait(&ndb_util_thread.COND_ready, &ndb_util_thread.LOCK);
+  pthread_mutex_unlock(&ndb_util_thread.LOCK);
   
-  if (!ndb_util_thread_running)
+  if (!ndb_util_thread.running)
   {
     DBUG_PRINT("error", ("ndb utility thread exited prematurely"));
     my_hash_free(&ndbcluster_open_tables);
     pthread_mutex_destroy(&ndbcluster_mutex);
-    pthread_mutex_destroy(&LOCK_ndb_util_thread);
-    pthread_cond_destroy(&COND_ndb_util_thread);
-    pthread_cond_destroy(&COND_ndb_util_ready);
     pthread_cond_destroy(&COND_ndb_setup_complete);
     ndbcluster_global_schema_lock_deinit();
     goto ndbcluster_init_error;
   }
 
   // Create index statistics thread
-  pthread_t tmp2;
-  if (pthread_create(&tmp2, &connection_attrib, ndb_index_stat_thread_func, 0))
+  if (ndb_index_stat_thread.start())
   {
     DBUG_PRINT("error", ("Could not create ndb index statistics thread"));
     my_hash_free(&ndbcluster_open_tables);
     pthread_mutex_destroy(&ndbcluster_mutex);
-    pthread_mutex_destroy(&LOCK_ndb_index_stat_thread);
-    pthread_cond_destroy(&COND_ndb_index_stat_thread);
-    pthread_cond_destroy(&COND_ndb_index_stat_ready);
-    pthread_mutex_destroy(&ndb_index_stat_list_mutex);
-    pthread_mutex_destroy(&ndb_index_stat_stat_mutex);
-    pthread_cond_destroy(&ndb_index_stat_stat_cond);
     goto ndbcluster_init_error;
   }
 
   /* Wait for the index statistics thread to start */
-  pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
-  while (ndb_index_stat_thread_running < 0)
-    pthread_cond_wait(&COND_ndb_index_stat_ready, &LOCK_ndb_index_stat_thread);
-  pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
-  
-  if (!ndb_index_stat_thread_running)
+  pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
+  while (ndb_index_stat_thread.running < 0)
+    pthread_cond_wait(&ndb_index_stat_thread.COND_ready,
+                      &ndb_index_stat_thread.LOCK);
+  pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
+
+  if (!ndb_index_stat_thread.running)
   {
     DBUG_PRINT("error", ("ndb index statistics thread exited prematurely"));
     my_hash_free(&ndbcluster_open_tables);
     pthread_mutex_destroy(&ndbcluster_mutex);
-    pthread_mutex_destroy(&LOCK_ndb_index_stat_thread);
-    pthread_cond_destroy(&COND_ndb_index_stat_thread);
-    pthread_cond_destroy(&COND_ndb_index_stat_ready);
-    pthread_mutex_destroy(&ndb_index_stat_list_mutex);
-    pthread_mutex_destroy(&ndb_index_stat_stat_mutex);
-    pthread_cond_destroy(&ndb_index_stat_stat_cond);
     goto ndbcluster_init_error;
   }
 
@@ -12476,6 +12438,8 @@ static int ndbcluster_init(void *p)
   DBUG_RETURN(FALSE);
 
 ndbcluster_init_error:
+  ndb_util_thread.deinit();
+  ndb_index_stat_thread.deinit();
   /* disconnect from cluster and free connection resources */
   ndbcluster_disconnect();
   ndbcluster_hton->state= SHOW_OPTION_DISABLED;               // If we couldn't use handler
@@ -12513,12 +12477,13 @@ static int ndbcluster_end(handlerton *ht
 
   /* wait for index stat thread to finish */
   sql_print_information("Stopping Cluster Index Statistics thread");
-  pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+  pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
   ndbcluster_terminating= 1;
-  pthread_cond_signal(&COND_ndb_index_stat_thread);
-  while (ndb_index_stat_thread_running > 0)
-    pthread_cond_wait(&COND_ndb_index_stat_ready, &LOCK_ndb_index_stat_thread);
-  pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+  pthread_cond_signal(&ndb_index_stat_thread.COND);
+  while (ndb_index_stat_thread.running > 0)
+    pthread_cond_wait(&ndb_index_stat_thread.COND_ready,
+                      &ndb_index_stat_thread.LOCK);
+  pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
 
   /* wait for util and binlog thread to finish */
   ndbcluster_binlog_end(NULL);
@@ -12547,17 +12512,14 @@ static int ndbcluster_end(handlerton *ht
   ndb_index_stat_end();
   ndbcluster_disconnect();
 
+  ndb_util_thread.deinit();
+  ndb_index_stat_thread.deinit();
+
   // cleanup ndb interface
   ndb_end_internal();
 
   pthread_mutex_destroy(&ndbcluster_mutex);
-  pthread_mutex_destroy(&LOCK_ndb_util_thread);
-  pthread_cond_destroy(&COND_ndb_util_thread);
-  pthread_cond_destroy(&COND_ndb_util_ready);
   pthread_cond_destroy(&COND_ndb_setup_complete);
-  pthread_mutex_destroy(&LOCK_ndb_index_stat_thread);
-  pthread_cond_destroy(&COND_ndb_index_stat_thread);
-  pthread_cond_destroy(&COND_ndb_index_stat_ready);
   ndbcluster_global_schema_lock_deinit();
   DBUG_RETURN(0);
 }
@@ -14776,7 +14738,24 @@ ha_ndbcluster::update_table_comment(
 /**
   Utility thread main loop.
 */
-pthread_handler_t ndb_util_thread_func(void *arg __attribute__((unused)))
+Ndb_util_thread::Ndb_util_thread()
+  : running(-1)
+{
+  pthread_mutex_init(&LOCK, MY_MUTEX_INIT_FAST);
+  pthread_cond_init(&COND, NULL);
+  pthread_cond_init(&COND_ready, NULL);
+}
+
+Ndb_util_thread::~Ndb_util_thread()
+{
+  assert(running <= 0);
+  pthread_mutex_destroy(&LOCK);
+  pthread_cond_destroy(&COND);
+  pthread_cond_destroy(&COND_ready);
+}
+
+void
+Ndb_util_thread::do_run()
 {
   THD *thd; /* needs to be first for thread_stack */
   struct timespec abstime;
@@ -14784,21 +14763,18 @@ pthread_handler_t ndb_util_thread_func(v
   uint share_list_size= 0;
   NDB_SHARE **share_list= NULL;
 
-  my_thread_init();
   DBUG_ENTER("ndb_util_thread");
   DBUG_PRINT("enter", ("cache_check_time: %lu", opt_ndb_cache_check_time));
- 
-   pthread_mutex_lock(&LOCK_ndb_util_thread);
+
+  pthread_mutex_lock(&ndb_util_thread.LOCK);
 
   thd= new THD; /* note that contructor of THD uses DBUG_ */
   if (thd == NULL)
   {
     my_errno= HA_ERR_OUT_OF_MEM;
-    DBUG_RETURN(NULL);
+    DBUG_VOID_RETURN;
   }
   THD_CHECK_SENTRY(thd);
-  pthread_detach_this_thread();
-  ndb_util_thread= pthread_self();
 
   thd->thread_stack= (char*)&thd; /* remember where our stack is */
   if (thd->store_globals())
@@ -14821,9 +14797,9 @@ pthread_handler_t ndb_util_thread_func(v
   thd->update_charset();
 
   /* Signal successful initialization */
-  ndb_util_thread_running= 1;
-  pthread_cond_signal(&COND_ndb_util_ready);
-  pthread_mutex_unlock(&LOCK_ndb_util_thread);
+  ndb_util_thread.running= 1;
+  pthread_cond_signal(&ndb_util_thread.COND_ready);
+  pthread_mutex_unlock(&ndb_util_thread.LOCK);
 
   /*
     wait for mysql server to start
@@ -14837,7 +14813,7 @@ pthread_handler_t ndb_util_thread_func(v
     if (ndbcluster_terminating)
     {
       mysql_mutex_unlock(&LOCK_server_started);
-      pthread_mutex_lock(&LOCK_ndb_util_thread);
+      pthread_mutex_lock(&ndb_util_thread.LOCK);
       goto ndb_util_thread_end;
     }
   }
@@ -14846,21 +14822,21 @@ pthread_handler_t ndb_util_thread_func(v
   /*
     Wait for cluster to start
   */
-  pthread_mutex_lock(&LOCK_ndb_util_thread);
+  pthread_mutex_lock(&ndb_util_thread.LOCK);
   while (!g_ndb_status.cluster_node_id && (ndbcluster_hton->slot != ~(uint)0))
   {
     /* ndb not connected yet */
-    pthread_cond_wait(&COND_ndb_util_thread, &LOCK_ndb_util_thread);
+    pthread_cond_wait(&ndb_util_thread.COND, &ndb_util_thread.LOCK);
     if (ndbcluster_terminating)
       goto ndb_util_thread_end;
   }
-  pthread_mutex_unlock(&LOCK_ndb_util_thread);
+  pthread_mutex_unlock(&ndb_util_thread.LOCK);
 
   /* Get thd_ndb for this thread */
   if (!(thd_ndb= ha_ndbcluster::seize_thd_ndb(thd)))
   {
     sql_print_error("Could not allocate Thd_ndb object");
-    pthread_mutex_lock(&LOCK_ndb_util_thread);
+    pthread_mutex_lock(&ndb_util_thread.LOCK);
     goto ndb_util_thread_end;
   }
   set_thd_ndb(thd, thd_ndb);
@@ -14872,14 +14848,14 @@ pthread_handler_t ndb_util_thread_func(v
   set_timespec(abstime, 0);
   for (;;)
   {
-    pthread_mutex_lock(&LOCK_ndb_util_thread);
+    pthread_mutex_lock(&ndb_util_thread.LOCK);
     if (!ndbcluster_terminating)
-      pthread_cond_timedwait(&COND_ndb_util_thread,
-                             &LOCK_ndb_util_thread,
+      pthread_cond_timedwait(&ndb_util_thread.COND,
+                             &ndb_util_thread.LOCK,
                              &abstime);
     if (ndbcluster_terminating) /* Shutting down server */
       goto ndb_util_thread_end;
-    pthread_mutex_unlock(&LOCK_ndb_util_thread);
+    pthread_mutex_unlock(&ndb_util_thread.LOCK);
 #ifdef NDB_EXTRA_DEBUG_UTIL_THREAD
     DBUG_PRINT("ndb_util_thread", ("Started, cache_check_time: %lu",
                                    opt_ndb_cache_check_time));
@@ -15032,7 +15008,7 @@ next:
     set_timespec_nsec(abstime, opt_ndb_cache_check_time * 1000000ULL);
   }
 
-  pthread_mutex_lock(&LOCK_ndb_util_thread);
+  pthread_mutex_lock(&ndb_util_thread.LOCK);
 
 ndb_util_thread_end:
   net_end(&thd->net);
@@ -15048,15 +15024,12 @@ ndb_util_thread_fail:
   delete thd;
   
   /* signal termination */
-  ndb_util_thread_running= 0;
-  pthread_cond_signal(&COND_ndb_util_ready);
-  pthread_mutex_unlock(&LOCK_ndb_util_thread);
+  ndb_util_thread.running= 0;
+  pthread_cond_signal(&ndb_util_thread.COND_ready);
+  pthread_mutex_unlock(&ndb_util_thread.LOCK);
   DBUG_PRINT("exit", ("ndb_util_thread"));
 
   DBUG_LEAVE;                               // Must match DBUG_ENTER()
-  my_thread_end();
-  pthread_exit(0);
-  return NULL;                              // Avoid compiler warnings
 }
 
 /*

=== modified file 'sql/ha_ndbcluster.h'
--- a/sql/ha_ndbcluster.h	2011-10-17 12:43:31 +0000
+++ b/sql/ha_ndbcluster.h	2011-11-10 10:35:09 +0000
@@ -1090,7 +1090,9 @@ void ndbcluster_print_error(int error, c
 static const char ndbcluster_hton_name[]= "ndbcluster";
 static const int ndbcluster_hton_name_length=sizeof(ndbcluster_hton_name)-1;
 extern int ndbcluster_terminating;
-extern int ndb_util_thread_running;
-extern pthread_cond_t COND_ndb_util_ready;
-extern int ndb_index_stat_thread_running;
-extern pthread_cond_t COND_ndb_index_stat_ready;
+
+#include "ndb_util_thread.h"
+extern Ndb_util_thread ndb_util_thread;
+
+#include "ha_ndb_index_stat.h"
+extern Ndb_index_stat_thread ndb_index_stat_thread;

=== modified file 'sql/ha_ndbcluster_binlog.cc'
--- a/sql/ha_ndbcluster_binlog.cc	2011-10-20 16:18:28 +0000
+++ b/sql/ha_ndbcluster_binlog.cc	2011-11-10 10:35:09 +0000
@@ -812,7 +812,7 @@ int ndbcluster_binlog_end(THD *thd)
 {
   DBUG_ENTER("ndbcluster_binlog_end");
 
-  if (ndb_util_thread_running > 0)
+  if (ndb_util_thread.running > 0)
   {
     /*
       Wait for util thread to die (as this uses the injector mutex)
@@ -822,33 +822,34 @@ int ndbcluster_binlog_end(THD *thd)
       be called before ndb_cluster_end().
     */
     sql_print_information("Stopping Cluster Utility thread");
-    pthread_mutex_lock(&LOCK_ndb_util_thread);
+    pthread_mutex_lock(&ndb_util_thread.LOCK);
     /* Ensure mutex are not freed if ndb_cluster_end is running at same time */
-    ndb_util_thread_running++;
+    ndb_util_thread.running++;
     ndbcluster_terminating= 1;
-    pthread_cond_signal(&COND_ndb_util_thread);
-    while (ndb_util_thread_running > 1)
-      pthread_cond_wait(&COND_ndb_util_ready, &LOCK_ndb_util_thread);
-    ndb_util_thread_running--;
-    pthread_mutex_unlock(&LOCK_ndb_util_thread);
+    pthread_cond_signal(&ndb_util_thread.COND);
+    while (ndb_util_thread.running > 1)
+      pthread_cond_wait(&ndb_util_thread.COND_ready, &ndb_util_thread.LOCK);
+    ndb_util_thread.running--;
+    pthread_mutex_unlock(&ndb_util_thread.LOCK);
   }
 
-  if (ndb_index_stat_thread_running > 0)
+  if (ndb_index_stat_thread.running > 0)
   {
     /*
       Index stats thread blindly imitates util thread.  Following actually
       fixes some "[Warning] Plugin 'ndbcluster' will be forced to shutdown".
     */
     sql_print_information("Stopping Cluster Index Stats thread");
-    pthread_mutex_lock(&LOCK_ndb_index_stat_thread);
+    pthread_mutex_lock(&ndb_index_stat_thread.LOCK);
     /* Ensure mutex are not freed if ndb_cluster_end is running at same time */
-    ndb_index_stat_thread_running++;
+    ndb_index_stat_thread.running++;
     ndbcluster_terminating= 1;
-    pthread_cond_signal(&COND_ndb_index_stat_thread);
-    while (ndb_index_stat_thread_running > 1)
-      pthread_cond_wait(&COND_ndb_index_stat_ready, &LOCK_ndb_index_stat_thread);
-    ndb_index_stat_thread_running--;
-    pthread_mutex_unlock(&LOCK_ndb_index_stat_thread);
+    pthread_cond_signal(&ndb_index_stat_thread.COND);
+    while (ndb_index_stat_thread.running > 1)
+      pthread_cond_wait(&ndb_index_stat_thread.COND_ready,
+                        &ndb_index_stat_thread.LOCK);
+    ndb_index_stat_thread.running--;
+    pthread_mutex_unlock(&ndb_index_stat_thread.LOCK);
   }
 
   if (ndbcluster_binlog_inited)

=== modified file 'sql/ha_ndbcluster_binlog.h'
--- a/sql/ha_ndbcluster_binlog.h	2011-10-20 16:18:28 +0000
+++ b/sql/ha_ndbcluster_binlog.h	2011-11-10 10:35:09 +0000
@@ -191,10 +191,6 @@ void ndbcluster_global_schema_lock_init(
 void ndbcluster_global_schema_lock_deinit();
 
 extern unsigned char g_node_id_map[max_ndb_nodes];
-extern pthread_mutex_t LOCK_ndb_util_thread;
-extern pthread_cond_t COND_ndb_util_thread;
-extern pthread_mutex_t LOCK_ndb_index_stat_thread;
-extern pthread_cond_t COND_ndb_index_stat_thread;
 extern pthread_mutex_t ndbcluster_mutex;
 extern HASH ndbcluster_open_tables;
 

=== modified file 'sql/ha_ndbcluster_cond.cc'
--- a/sql/ha_ndbcluster_cond.cc	2011-11-07 11:09:09 +0000
+++ b/sql/ha_ndbcluster_cond.cc	2011-11-10 15:08:40 +0000
@@ -1217,7 +1217,10 @@ ha_ndbcluster_cond::build_scan_filter_pr
       uint32 len= value->save_in_field(field);
       char buff[MAX_FIELD_WIDTH];
       String str(buff,sizeof(buff),field->get_field_charset());
-      field->get_field_val_str(&str);
+      if (len > field->get_field()->field_length)
+        str.set(value->get_val(), len, field->get_field_charset());
+      else
+        field->get_field_val_str(&str);
       const char *val=
         (value->is_const_func() && is_string)?
         str.ptr()
@@ -1245,7 +1248,10 @@ ha_ndbcluster_cond::build_scan_filter_pr
       uint32 len= value->save_in_field(field);
       char buff[MAX_FIELD_WIDTH];
       String str(buff,sizeof(buff),field->get_field_charset());
-      field->get_field_val_str(&str);
+      if (len > field->get_field()->field_length)
+        str.set(value->get_val(), len, field->get_field_charset());
+      else
+        field->get_field_val_str(&str);
       const char *val=
         (value->is_const_func() && is_string)?
         str.ptr()

=== modified file 'sql/ha_ndbcluster_cond.h'
--- a/sql/ha_ndbcluster_cond.h	2011-11-04 08:33:56 +0000
+++ b/sql/ha_ndbcluster_cond.h	2011-11-10 15:08:40 +0000
@@ -250,16 +250,7 @@ public:
     const Item *item= value.item;
     if (item && field)
     {
-      DBUG_PRINT("info", ("item length %u, field length %u",
-                          item->max_length, field->field_length));
-      if (item->max_length > field->field_length)
-      {
-        DBUG_PRINT("info", ("Comparing field with longer value"));
-        DBUG_PRINT("info", ("Field can store %u", field->field_length));
-        length= field->field_length;
-      }
-      else
-        length= item->max_length;
+      length= item->max_length;
       my_bitmap_map *old_map=
         dbug_tmp_use_all_columns(field->table, field->table->write_set);
       ((Item *)item)->save_in_field(field, FALSE);

=== added file 'sql/ndb_component.cc'
--- a/sql/ndb_component.cc	1970-01-01 00:00:00 +0000
+++ b/sql/ndb_component.cc	2011-11-10 08:16:52 +0000
@@ -0,0 +1,143 @@
+/*
+   Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; version 2 of the License.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, write to the Free Software
+   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA
+*/
+
+#include "ndb_component.h"
+
+Ndb_component::Ndb_component()
+  : m_thread_state(TS_UNINIT)
+{
+}
+
+Ndb_component::~Ndb_component()
+{
+
+}
+
+int
+Ndb_component::init()
+{
+  assert(m_thread_state == TS_UNINIT);
+
+  pthread_mutex_init(&m_start_stop_mutex, MY_MUTEX_INIT_FAST);
+  pthread_cond_init(&m_start_stop_cond, NULL);
+
+  int res= do_init();
+  if (res == 0)
+  {
+    m_thread_state= TS_INIT;
+  }
+  return res;
+}
+
+void *
+Ndb_component_run_C(void * arg)
+{
+  my_thread_init();
+  Ndb_component * self = reinterpret_cast<Ndb_component*>(arg);
+  self->run_impl();
+  my_thread_end();
+  pthread_exit(0);
+  return NULL;                              // Avoid compiler warnings
+}
+
+extern pthread_attr_t connection_attrib; // mysql global pthread attr
+
+int
+Ndb_component::start()
+{
+  assert(m_thread_state == TS_INIT);
+  pthread_mutex_lock(&m_start_stop_mutex);
+  m_thread_state= TS_STARTING;
+  int res= pthread_create(&m_thread, &connection_attrib, Ndb_component_run_C,
+                          this);
+
+  if (res == 0)
+  {
+    while (m_thread_state == TS_STARTING)
+    {
+      pthread_cond_wait(&m_start_stop_cond, &m_start_stop_mutex);
+    }
+    pthread_mutex_unlock(&m_start_stop_mutex);
+    return m_thread_state == TS_RUNNING ? 0 : 1;
+  }
+
+  pthread_mutex_unlock(&m_start_stop_mutex);
+  return res;
+}
+
+void
+Ndb_component::run_impl()
+{
+  pthread_detach_this_thread();
+  pthread_mutex_lock(&m_start_stop_mutex);
+  if (m_thread_state == TS_STARTING)
+  {
+    m_thread_state= TS_RUNNING;
+    pthread_cond_signal(&m_start_stop_cond);
+    pthread_mutex_unlock(&m_start_stop_mutex);
+    do_run();
+    pthread_mutex_lock(&m_start_stop_mutex);
+  }
+  m_thread_state = TS_STOPPED;
+  pthread_cond_signal(&m_start_stop_cond);
+  pthread_mutex_unlock(&m_start_stop_mutex);
+}
+
+bool
+Ndb_component::is_stop_requested()
+{
+  bool res = false;
+  pthread_mutex_lock(&m_start_stop_mutex);
+  res = m_thread_state != TS_RUNNING;
+  pthread_mutex_unlock(&m_start_stop_mutex);
+  return res;
+}
+
+int
+Ndb_component::stop()
+{
+  pthread_mutex_lock(&m_start_stop_mutex);
+  assert(m_thread_state == TS_RUNNING ||
+         m_thread_state == TS_STOPPING ||
+         m_thread_state == TS_STOPPED);
+
+  if (m_thread_state == TS_RUNNING)
+  {
+    m_thread_state= TS_STOPPING;
+  }
+
+  if (m_thread_state == TS_STOPPING)
+  {
+    while (m_thread_state != TS_STOPPED)
+    {
+      pthread_cond_signal(&m_start_stop_cond);
+      pthread_cond_wait(&m_start_stop_cond, &m_start_stop_mutex);
+    }
+  }
+  pthread_mutex_unlock(&m_start_stop_mutex);
+
+  return 0;
+}
+
+int
+Ndb_component::deinit()
+{
+  assert(m_thread_state == TS_STOPPED);
+  pthread_mutex_destroy(&m_start_stop_mutex);
+  pthread_cond_destroy(&m_start_stop_cond);
+  return do_deinit();
+}

=== added file 'sql/ndb_component.h'
--- a/sql/ndb_component.h	1970-01-01 00:00:00 +0000
+++ b/sql/ndb_component.h	2011-11-10 08:16:52 +0000
@@ -0,0 +1,82 @@
+/*
+   Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; version 2 of the License.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, write to the Free Software
+   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA
+*/
+
+#ifndef HA_NDBCLUSTER_COMPONENT_H
+#define HA_NDBCLUSTER_COMPONENT_H
+
+#include <my_global.h>
+#include <my_pthread.h>
+
+extern "C" void * Ndb_component_run_C(void *);
+
+class Ndb_component
+{
+public:
+  virtual int init();
+  virtual int start();
+  virtual int stop();
+  virtual int deinit();
+
+protected:
+  /**
+   * Constructor and destructor are protected so that a subclass must
+   * provide its own.
+   */
+  Ndb_component();
+  virtual ~Ndb_component();
+
+  /**
+   * Component init function
+   */
+  virtual int do_init() = 0;
+
+  /**
+   * Component run function
+   */
+  virtual void do_run() = 0;
+
+  /**
+   * Component deinit function
+   */
+  virtual int do_deinit() = 0;
+
+  /**
+   * For use in the thread's main loop
+   */
+  bool is_stop_requested();
+
+private:
+
+  enum ThreadState
+  {
+    TS_UNINIT   = 0,
+    TS_INIT     = 1,
+    TS_STARTING = 2,
+    TS_RUNNING  = 3,
+    TS_STOPPING = 4,
+    TS_STOPPED  = 5
+  };
+
+  ThreadState m_thread_state;
+  pthread_t m_thread;
+  pthread_mutex_t m_start_stop_mutex;
+  pthread_cond_t m_start_stop_cond;
+
+  void run_impl();
+  friend void * Ndb_component_run_C(void *);
+};
+
+#endif

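The ThreadState enum above encodes a one-way lifecycle (UNINIT → INIT → STARTING → RUNNING → STOPPING → STOPPED) driven by init()/start()/stop()/deinit(). A small sketch of the transitions implied by ndb_component.cc; the transition table is an inferred invariant, not shipped code:

```cpp
#include <cassert>

// Mirror of the ThreadState values from ndb_component.h.
enum ThreadState {
  TS_UNINIT   = 0,
  TS_INIT     = 1,
  TS_STARTING = 2,
  TS_RUNNING  = 3,
  TS_STOPPING = 4,
  TS_STOPPED  = 5
};

// Inferred legal transitions: init() moves UNINIT->INIT, start() moves
// INIT->STARTING, run_impl() moves STARTING->RUNNING (or straight to
// STOPPED on a failed start), do_run() returning or stop() ends in STOPPED.
static bool valid_transition(ThreadState from, ThreadState to)
{
  switch (from) {
  case TS_UNINIT:   return to == TS_INIT;
  case TS_INIT:     return to == TS_STARTING;
  case TS_STARTING: return to == TS_RUNNING || to == TS_STOPPED;
  case TS_RUNNING:  return to == TS_STOPPING || to == TS_STOPPED;
  case TS_STOPPING: return to == TS_STOPPED;
  case TS_STOPPED:  return false;   // only deinit() remains
  }
  return false;
}
```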
=== added file 'sql/ndb_util_thread.h'
--- a/sql/ndb_util_thread.h	1970-01-01 00:00:00 +0000
+++ b/sql/ndb_util_thread.h	2011-11-10 08:16:52 +0000
@@ -0,0 +1,40 @@
+/*
+   Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; version 2 of the License.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, write to the Free Software
+   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA
+*/
+
+#ifndef NDB_UTIL_THREAD_H
+#define NDB_UTIL_THREAD_H
+
+#include "ndb_component.h"
+
+class Ndb_util_thread : public Ndb_component
+{
+public:
+  Ndb_util_thread();
+  virtual ~Ndb_util_thread();
+
+  int running;
+  pthread_mutex_t LOCK;
+  pthread_cond_t COND;
+  pthread_cond_t COND_ready;
+
+private:
+  virtual int do_init() { return 0;}
+  virtual void do_run();
+  virtual int do_deinit() { return 0;}
+};
+
+#endif

=== modified file 'storage/ndb/CMakeLists.txt'
--- a/storage/ndb/CMakeLists.txt	2011-09-07 22:50:01 +0000
+++ b/storage/ndb/CMakeLists.txt	2011-11-10 08:16:52 +0000
@@ -149,7 +149,8 @@ SET(NDBCLUSTER_SOURCES
   ../../sql/ha_ndb_index_stat.cc
   ../../sql/ha_ndbinfo.cc
   ../../sql/ndb_mi.cc
-  ../../sql/ndb_conflict_trans.cc)
+  ../../sql/ndb_conflict_trans.cc
+  ../../sql/ndb_component.cc)
 INCLUDE_DIRECTORIES(${CMAKE_SOURCE_DIR}/storage/ndb/include)
 
 IF(EXISTS ${CMAKE_SOURCE_DIR}/storage/mysql_storage_engine.cmake)

=== modified file 'storage/ndb/include/kernel/ndb_limits.h'
--- a/storage/ndb/include/kernel/ndb_limits.h	2011-10-07 13:15:08 +0000
+++ b/storage/ndb/include/kernel/ndb_limits.h	2011-11-16 11:05:46 +0000
@@ -194,8 +194,10 @@
 #define NDBMT_BLOCK_MASK ((1 << NDBMT_BLOCK_BITS) - 1)
 #define NDBMT_BLOCK_INSTANCE_BITS 7
 
-#define MAX_NDBMT_LQH_WORKERS 4
-#define MAX_NDBMT_LQH_THREADS 4
+#define NDB_DEFAULT_LOG_PARTS 4
+#define NDB_MAX_LOG_PARTS     4
+#define MAX_NDBMT_LQH_WORKERS NDB_MAX_LOG_PARTS
+#define MAX_NDBMT_LQH_THREADS NDB_MAX_LOG_PARTS
 #define MAX_NDBMT_TC_THREADS  2
 
 #define NDB_FILE_BUFFER_SIZE (256*1024)

=== modified file 'storage/ndb/include/kernel/signaldata/ScanFrag.hpp'
--- a/storage/ndb/include/kernel/signaldata/ScanFrag.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/include/kernel/signaldata/ScanFrag.hpp	2011-11-09 13:10:53 +0000
@@ -177,7 +177,7 @@ public:
   Uint32 fragmentCompleted;
   Uint32 transId1;
   Uint32 transId2;
-  Uint32 total_len;
+  Uint32 total_len;  // Total #Uint32 returned as TRANSID_AI
 };
 
 class ScanFragRef {

=== modified file 'storage/ndb/include/kernel/signaldata/TupKey.hpp'
--- a/storage/ndb/include/kernel/signaldata/TupKey.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/include/kernel/signaldata/TupKey.hpp	2011-11-09 13:10:53 +0000
@@ -91,7 +91,7 @@ private:
    * DATA VARIABLES
    */
   Uint32 userPtr;
-  Uint32 readLength;
+  Uint32 readLength;  // Length in Uint32 words
   Uint32 writeLength;
   Uint32 noFiredTriggers;
   Uint32 lastRow;

=== modified file 'storage/ndb/include/mgmapi/mgmapi_config_parameters.h'
--- a/storage/ndb/include/mgmapi/mgmapi_config_parameters.h	2011-10-07 16:12:13 +0000
+++ b/storage/ndb/include/mgmapi/mgmapi_config_parameters.h	2011-11-14 12:02:56 +0000
@@ -68,6 +68,7 @@
 
 #define CFG_DB_FILESYSTEM_PATH        125
 #define CFG_DB_NO_REDOLOG_FILES       126
+#define CFG_DB_NO_REDOLOG_PARTS       632
 #define CFG_DB_REDOLOG_FILE_SIZE      140
 
 #define CFG_DB_LCP_DISC_PAGES_TUP     127
@@ -198,6 +199,7 @@
 #define CFG_DB_MT_THREAD_CONFIG          628
 
 #define CFG_DB_CRASH_ON_CORRUPTED_TUPLE  629
+/* 632 used for CFG_DB_NO_REDOLOG_PARTS */
 
 #define CFG_NODE_ARBIT_RANK           200
 #define CFG_NODE_ARBIT_DELAY          201

=== modified file 'storage/ndb/include/ndb_version.h.in'
--- a/storage/ndb/include/ndb_version.h.in	2011-07-04 13:37:56 +0000
+++ b/storage/ndb/include/ndb_version.h.in	2011-11-14 12:02:56 +0000
@@ -693,4 +693,25 @@ ndbd_get_config_supported(Uint32 x)
   return x >= NDBD_GET_CONFIG_SUPPORT_71;
 }
 
+#define NDBD_CONFIGURABLE_LOG_PARTS_70 NDB_MAKE_VERSION(7,0,29)
+#define NDBD_CONFIGURABLE_LOG_PARTS_71 NDB_MAKE_VERSION(7,1,18)
+#define NDBD_CONFIGURABLE_LOG_PARTS_72 NDB_MAKE_VERSION(7,2,3)
+
+static
+inline
+int
+ndb_configurable_log_parts(Uint32 x)
+{
+  const Uint32 major = (x >> 16) & 0xFF;
+  const Uint32 minor = (x >>  8) & 0xFF;
+
+  if (major == 7 && minor < 2)
+  {
+    if (minor == 0)
+      return x >= NDBD_CONFIGURABLE_LOG_PARTS_70;
+    else if (minor == 1)
+      return x >= NDBD_CONFIGURABLE_LOG_PARTS_71;
+  }
+  return x >= NDBD_CONFIGURABLE_LOG_PARTS_72;
+}
 #endif

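The new check is branch-aware because the feature was backported: configurable log parts first appear in 7.0.29, 7.1.18, and 7.2.3, so a plain `>=` against one version constant would misclassify the other branches. A standalone sketch of the same logic (assuming the usual NDB_MAKE_VERSION packing of major/minor/build into one word):

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t Uint32;

// Assumed version packing: major in bits 16-23, minor in 8-15, build in 0-7.
#define NDB_MAKE_VERSION(A, B, C) (((A) << 16) | ((B) << 8) | (C))

#define NDBD_CONFIGURABLE_LOG_PARTS_70 NDB_MAKE_VERSION(7, 0, 29)
#define NDBD_CONFIGURABLE_LOG_PARTS_71 NDB_MAKE_VERSION(7, 1, 18)
#define NDBD_CONFIGURABLE_LOG_PARTS_72 NDB_MAKE_VERSION(7, 2, 3)

// Pick the threshold that matches the peer's release branch, so that
// e.g. 7.1.17 is rejected even though it is numerically above 7.0.29.
static int ndb_configurable_log_parts_sketch(Uint32 x)
{
  const Uint32 major = (x >> 16) & 0xFF;
  const Uint32 minor = (x >> 8) & 0xFF;

  if (major == 7 && minor < 2)
  {
    if (minor == 0)
      return x >= NDBD_CONFIGURABLE_LOG_PARTS_70;
    else if (minor == 1)
      return x >= NDBD_CONFIGURABLE_LOG_PARTS_71;
  }
  return x >= NDBD_CONFIGURABLE_LOG_PARTS_72;
}
```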
=== modified file 'storage/ndb/include/ndbapi/NdbReceiver.hpp'
--- a/storage/ndb/include/ndbapi/NdbReceiver.hpp	2011-08-17 12:36:56 +0000
+++ b/storage/ndb/include/ndbapi/NdbReceiver.hpp	2011-11-09 13:10:53 +0000
@@ -105,16 +105,13 @@ private:
 
   static
   void calculate_batch_size(const NdbImpl&,
-                            const NdbRecord *,
-                            const NdbRecAttr *first_rec_attr,
-                            Uint32, Uint32, Uint32&, Uint32&, Uint32&);
-
-  void calculate_batch_size(Uint32 key_size,
                             Uint32 parallelism,
                             Uint32& batch_size,
-                            Uint32& batch_byte_size,
-                            Uint32& first_batch_size,
-                            const NdbRecord *rec) const;
+                            Uint32& batch_byte_size);
+
+  void calculate_batch_size(Uint32 parallelism,
+                            Uint32& batch_size,
+                            Uint32& batch_byte_size) const;
 
   /*
     Set up buffers for receiving TRANSID_AI and KEYINFO20 signals

=== modified file 'storage/ndb/src/common/debugger/EventLogger.cpp'
--- a/storage/ndb/src/common/debugger/EventLogger.cpp	2011-10-21 08:59:23 +0000
+++ b/storage/ndb/src/common/debugger/EventLogger.cpp	2011-11-17 08:49:40 +0000
@@ -527,23 +527,47 @@ void getTextTransReportCounters(QQQQ) {
   // -------------------------------------------------------------------  
   // Report information about transaction activity once per 10 seconds.
   // ------------------------------------------------------------------- 
-  BaseString::snprintf(m_text, m_text_len, 
-		       "Trans. Count = %u, Commit Count = %u, "
-		       "Read Count = %u, Simple Read Count = %u, "
-		       "Write Count = %u, AttrInfo Count = %u, "
-		       "Concurrent Operations = %u, Abort Count = %u"
-		       " Scans = %u Range scans = %u", 
-		       theData[1], 
-		       theData[2], 
-		       theData[3], 
-		       theData[4],
-		       theData[5], 
-		       theData[6], 
-		       theData[7], 
-		       theData[8],
-		       theData[9],
-		       theData[10]);
+  if (len <= 11)
+  {
+    BaseString::snprintf(m_text, m_text_len,
+                         "Trans. Count = %u, Commit Count = %u, "
+                         "Read Count = %u, Simple Read Count = %u, "
+                         "Write Count = %u, AttrInfo Count = %u, "
+                         "Concurrent Operations = %u, Abort Count = %u"
+                         " Scans = %u Range scans = %u",
+                         theData[1],
+                         theData[2],
+                         theData[3],
+                         theData[4],
+                         theData[5],
+                         theData[6],
+                         theData[7],
+                         theData[8],
+                         theData[9],
+                         theData[10]);
+  }
+  else
+  {
+    BaseString::snprintf(m_text, m_text_len,
+                         "Trans. Count = %u, Commit Count = %u, "
+                         "Read Count = %u, Simple Read Count = %u, "
+                         "Write Count = %u, AttrInfo Count = %u, "
+                         "Concurrent Operations = %u, Abort Count = %u"
+                         " Scans = %u Range scans = %u, Local Read Count = %u",
+                         theData[1],
+                         theData[2],
+                         theData[3],
+                         theData[4],
+                         theData[5],
+                         theData[6],
+                         theData[7],
+                         theData[8],
+                         theData[9],
+                         theData[10],
+                         theData[11]);
+  }
 }
+
 void getTextOperationReportCounters(QQQQ) {
   BaseString::snprintf(m_text, m_text_len,
 		       "Operations=%u",

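The report above is length-gated for compatibility with older nodes: pre-LOCAL_READS nodes send 11 signal words, newer ones append the local-read counter as word 11, and the formatter chooses the message by `len`. A minimal sketch of the pattern (field names and the shortened format string are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// Sketch of length-gated event formatting: signals from older nodes carry
// 11 words, newer ones carry a 12th word with the local-read counter.
static void format_counters(char *out, size_t outsz,
                            const unsigned *theData, unsigned len)
{
  if (len <= 11)
    snprintf(out, outsz, "Read Count = %u, Scans = %u",
             theData[3], theData[9]);
  else
    snprintf(out, outsz,
             "Read Count = %u, Scans = %u, Local Read Count = %u",
             theData[3], theData[9], theData[11]);
}
```

Gating on signal length rather than on the sender's version keeps the formatter correct during rolling upgrades, when both signal shapes arrive at the same node.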
=== modified file 'storage/ndb/src/common/portlib/NdbThread.c'
--- a/storage/ndb/src/common/portlib/NdbThread.c	2011-10-21 08:59:23 +0000
+++ b/storage/ndb/src/common/portlib/NdbThread.c	2011-11-10 15:06:03 +0000
@@ -540,11 +540,17 @@ NdbThread_End()
   {
     NdbMutex_Destroy(g_ndb_thread_mutex);
   }
-  
+
   if (g_ndb_thread_condition)
   {
     NdbCondition_Destroy(g_ndb_thread_condition);
   }
+
+  if (g_main_thread)
+  {
+    NdbMem_Free((char *)g_main_thread);
+    g_main_thread = 0;
+  }
 }
 
 int

=== modified file 'storage/ndb/src/common/util/NdbPack.cpp'
--- a/storage/ndb/src/common/util/NdbPack.cpp	2011-08-09 15:37:45 +0000
+++ b/storage/ndb/src/common/util/NdbPack.cpp	2011-11-11 08:38:00 +0000
@@ -930,7 +930,6 @@ const char*
 NdbPack::Data::print(char* buf, Uint32 bufsz) const
 {
   Print p(buf, bufsz);
-  char* ptr = buf;
   if (m_varBytes != 0)
   {
     p.print("varBytes:");
@@ -1291,6 +1290,7 @@ Tdata::create()
     Uint8 xbuf[Tspec::MaxBuf];
     Uint64 xbuf_align;
   };
+  (void)xbuf_align; // suppress unused-variable compiler warning
   memset(xbuf, 0x3f, sizeof(xbuf));
   m_xsize = 0;
   m_xnulls = 0;
@@ -1830,7 +1830,7 @@ testdesc(const Tdata& tdata)
   const NdbPack::Data& data = tdata.m_data;
   const Uint8* buf_old = (const Uint8*)data.get_full_buf();
   const Uint32 varBytes = data.get_var_bytes();
-  const Uint32 nullMaskLen = tspec.m_spec.get_nullmask_len(false);
+  // const Uint32 nullMaskLen = tspec.m_spec.get_nullmask_len(false);
   const Uint32 dataLen = data.get_data_len();
   const Uint32 fullLen = data.get_full_len();
   const Uint32 cnt = data.get_cnt();

=== modified file 'storage/ndb/src/kernel/blocks/dbacc/Dbacc.hpp'
--- a/storage/ndb/src/kernel/blocks/dbacc/Dbacc.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dbacc/Dbacc.hpp	2011-11-14 09:18:48 +0000
@@ -624,8 +624,8 @@ struct ScanRec {
 /* TABREC                                                                            */
 /* --------------------------------------------------------------------------------- */
 struct Tabrec {
-  Uint32 fragholder[MAX_FRAG_PER_NODE];
-  Uint32 fragptrholder[MAX_FRAG_PER_NODE];
+  Uint32 fragholder[MAX_FRAG_PER_LQH];
+  Uint32 fragptrholder[MAX_FRAG_PER_LQH];
   Uint32 tabUserPtr;
   BlockReference tabUserRef;
   Uint32 tabUserGsn;

=== modified file 'storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp'
--- a/storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dbacc/DbaccMain.cpp	2011-11-14 09:18:48 +0000
@@ -481,7 +481,7 @@ void Dbacc::initialiseTableRec(Signal* s
   for (tabptr.i = 0; tabptr.i < ctablesize; tabptr.i++) {
     refresh_watch_dog();
     ptrAss(tabptr, tabrec);
-    for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+    for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragholder); i++) {
       tabptr.p->fragholder[i] = RNIL;
       tabptr.p->fragptrholder[i] = RNIL;
     }//for
@@ -653,7 +653,7 @@ Dbacc::execDROP_FRAG_REQ(Signal* signal)
   tabPtr.p->tabUserPtr = req->senderData;
   tabPtr.p->tabUserGsn = GSN_DROP_FRAG_REQ;
 
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabPtr.p->fragholder); i++)
   {
     jam();
     if (tabPtr.p->fragholder[i] == req->fragId)
@@ -677,7 +677,7 @@ void Dbacc::releaseRootFragResources(Sig
   if (tabPtr.p->tabUserGsn == GSN_DROP_TAB_REQ)
   {
     jam();
-    for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+    for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabPtr.p->fragholder); i++)
     {
       jam();
       if (tabPtr.p->fragholder[i] != RNIL)
@@ -857,7 +857,7 @@ void Dbacc::releaseFragRecord(Signal* si
 /* -------------------------------------------------------------------------- */
 bool Dbacc::addfragtotab(Signal* signal, Uint32 rootIndex, Uint32 fid) 
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragholder); i++) {
     jam();
     if (tabptr.p->fragholder[i] == RNIL) {
       jam();
@@ -2493,7 +2493,7 @@ void Dbacc::execACC_LOCKREQ(Signal* sign
     ptrCheckGuard(tabptr, ctablesize, tabrec);
     // find fragment (TUX will know it)
     if (req->fragPtrI == RNIL) {
-      for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+      for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragholder); i++) {
         jam();
         if (tabptr.p->fragholder[i] == req->fragId){
 	  jam();
@@ -7590,7 +7590,7 @@ void Dbacc::takeOutReadyScanQueue(Signal
 
 bool Dbacc::getfragmentrec(Signal* signal, FragmentrecPtr& rootPtr, Uint32 fid) 
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragholder); i++) {
     jam();
     if (tabptr.p->fragholder[i] == fid) {
       jam();

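The Dbacc loops above replace the fixed MAX_FRAG_PER_NODE bound with `NDB_ARRAY_SIZE(tabptr.p->fragholder)`, so the loop bound can never drift from the array declaration. A minimal sketch of the idiom (assuming NDB_ARRAY_SIZE is the usual sizeof-division element-count macro):

```cpp
#include <cassert>
#include <cstdint>

// Assumed definition of NDB_ARRAY_SIZE: the classic element-count macro.
#define NDB_ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

struct TabrecSketch {
  uint32_t fragholder[8];    // 8 stands in for MAX_FRAG_PER_LQH
  uint32_t fragptrholder[8];
};

// Initialise every slot, iterating over however many slots the struct
// actually declares rather than a separately maintained constant.
static void init_tabrec(TabrecSketch *tab, uint32_t empty)
{
  for (uint32_t i = 0; i < NDB_ARRAY_SIZE(tab->fragholder); i++) {
    tab->fragholder[i] = empty;
    tab->fragptrholder[i] = empty;
  }
}
```

If the array size later changes (as it does here, from MAX_FRAG_PER_NODE to MAX_FRAG_PER_LQH), every such loop picks up the new bound automatically.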
=== modified file 'storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp'
--- a/storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp	2011-11-03 08:40:19 +0000
+++ b/storage/ndb/src/kernel/blocks/dbdict/Dbdict.cpp	2011-11-19 14:55:47 +0000
@@ -207,7 +207,8 @@ Dbdict::execDUMP_STATE_ORD(Signal* signa
     const Uint32 tab = signal->theData[1];
     const Uint32 ver = signal->theData[2];
     TableRecordPtr tabRecPtr;
-    c_tableRecordPool.getPtr(tabRecPtr, tab);
+    bool ok = find_object(tabRecPtr, tab);
+    ndbrequire(ok);
     DropTableReq * req = (DropTableReq*)signal->getDataPtr();
     req->senderData = 1225;
     req->senderRef = numberToRef(1,1);
@@ -226,9 +227,9 @@ Dbdict::execDUMP_STATE_ORD(Signal* signa
 
   if (signal->theData[0] == 1227)
   {
-    DictObject_hash::Iterator iter;
-    bool ok = c_obj_hash.first(iter);
-    for(; ok; ok = c_obj_hash.next(iter))
+    DictObjectName_hash::Iterator iter;
+    bool ok = c_obj_name_hash.first(iter);
+    for (; ok; ok = c_obj_name_hash.next(iter))
     {
       LocalRope name(c_rope_pool, iter.curr.p->m_name);
       char buf[1024];
@@ -252,8 +253,8 @@ Dbdict::execDUMP_STATE_ORD(Signal* signa
   {
     RSS_AP_SNAPSHOT_SAVE(c_rope_pool);
     RSS_AP_SNAPSHOT_SAVE(c_attributeRecordPool);
-    RSS_AP_SNAPSHOT_SAVE(c_tableRecordPool);
-    RSS_AP_SNAPSHOT_SAVE(c_triggerRecordPool);
+    RSS_AP_SNAPSHOT_SAVE(c_tableRecordPool_);
+    RSS_AP_SNAPSHOT_SAVE(c_triggerRecordPool_);
     RSS_AP_SNAPSHOT_SAVE(c_obj_pool);
     RSS_AP_SNAPSHOT_SAVE(c_hash_map_pool);
     RSS_AP_SNAPSHOT_SAVE(g_hash_map);
@@ -263,8 +264,8 @@ Dbdict::execDUMP_STATE_ORD(Signal* signa
   {
     RSS_AP_SNAPSHOT_CHECK(c_rope_pool);
     RSS_AP_SNAPSHOT_CHECK(c_attributeRecordPool);
-    RSS_AP_SNAPSHOT_CHECK(c_tableRecordPool);
-    RSS_AP_SNAPSHOT_CHECK(c_triggerRecordPool);
+    RSS_AP_SNAPSHOT_CHECK(c_tableRecordPool_);
+    RSS_AP_SNAPSHOT_CHECK(c_triggerRecordPool_);
     RSS_AP_SNAPSHOT_CHECK(c_obj_pool);
     RSS_AP_SNAPSHOT_CHECK(c_hash_map_pool);
     RSS_AP_SNAPSHOT_CHECK(g_hash_map);
@@ -296,16 +297,16 @@ void Dbdict::execDBINFO_SCANREQ(Signal *
         c_attributeRecordPool.getUsedHi(),
         { CFG_DB_NO_ATTRIBUTES,0,0,0 }},
       { "Table Record",
-        c_tableRecordPool.getUsed(),
-        c_tableRecordPool.getSize(),
-        c_tableRecordPool.getEntrySize(),
-        c_tableRecordPool.getUsedHi(),
+        c_tableRecordPool_.getUsed(),
+        c_noOfMetaTables,
+        c_tableRecordPool_.getEntrySize(),
+        c_tableRecordPool_.getUsedHi(),
         { CFG_DB_NO_TABLES,0,0,0 }},
       { "Trigger Record",
-        c_triggerRecordPool.getUsed(),
-        c_triggerRecordPool.getSize(),
-        c_triggerRecordPool.getEntrySize(),
-        c_triggerRecordPool.getUsedHi(),
+        c_triggerRecordPool_.getUsed(),
+        c_triggerRecordPool_.getSize(),
+        c_triggerRecordPool_.getEntrySize(),
+        c_triggerRecordPool_.getUsedHi(),
         { CFG_DB_NO_TRIGGERS,0,0,0 }},
       { "FS Connect Record",
         c_fsConnectRecordPool.getUsed(),
@@ -512,8 +513,8 @@ void Dbdict::packTableIntoPages(Signal*
   case DictTabInfo::OrderedIndex:{
     jam();
     TableRecordPtr tablePtr;
-    c_tableRecordPool.getPtr(tablePtr, tableId);
-    if (tablePtr.p->m_obj_ptr_i == RNIL)
+    bool ok = find_object(tablePtr, tableId);
+    if (!ok)
     {
       jam();
       sendGET_TABINFOREF(signal, &req_copy,
@@ -543,7 +544,7 @@ void Dbdict::packTableIntoPages(Signal*
   case DictTabInfo::Tablespace:
   case DictTabInfo::LogfileGroup:{
     FilegroupPtr fg_ptr;
-    ndbrequire(c_filegroup_hash.find(fg_ptr, tableId));
+    ndbrequire(find_object(fg_ptr, tableId));
     const Uint32 free_hi= signal->theData[4];
     const Uint32 free_lo= signal->theData[5];
     packFilegroupIntoPages(w, fg_ptr, free_hi, free_lo);
@@ -551,20 +552,20 @@ void Dbdict::packTableIntoPages(Signal*
   }
   case DictTabInfo::Datafile:{
     FilePtr fg_ptr;
-    ndbrequire(c_file_hash.find(fg_ptr, tableId));
+    ndbrequire(find_object(fg_ptr, tableId));
     const Uint32 free_extents= signal->theData[4];
     packFileIntoPages(w, fg_ptr, free_extents);
     break;
   }
   case DictTabInfo::Undofile:{
     FilePtr fg_ptr;
-    ndbrequire(c_file_hash.find(fg_ptr, tableId));
+    ndbrequire(find_object(fg_ptr, tableId));
     packFileIntoPages(w, fg_ptr, 0);
     break;
   }
   case DictTabInfo::HashMap:{
     HashMapRecordPtr hm_ptr;
-    ndbrequire(c_hash_map_hash.find(hm_ptr, tableId));
+    ndbrequire(find_object(hm_ptr, tableId));
     packHashMapIntoPages(w, hm_ptr);
     break;
   }
@@ -656,7 +657,7 @@ Dbdict::packTableIntoPages(SimplePropert
   if (tablePtr.p->hashMapObjectId != RNIL)
   {
     HashMapRecordPtr hm_ptr;
-    ndbrequire(c_hash_map_hash.find(hm_ptr, tablePtr.p->hashMapObjectId));
+    ndbrequire(find_object(hm_ptr, tablePtr.p->hashMapObjectId));
     w.add(DictTabInfo::HashMapVersion, hm_ptr.p->m_object_version);
   }
 
@@ -698,7 +699,15 @@ Dbdict::packTableIntoPages(SimplePropert
   {
     jam();
     TableRecordPtr primTab;
-    c_tableRecordPool.getPtr(primTab, tablePtr.p->primaryTableId);
+    bool ok = find_object(primTab, tablePtr.p->primaryTableId);
+    if (!ok)
+    {
+      jam();
+      ndbrequire(signal != NULL);
+      Uint32 err = CreateFragmentationRef::InvalidPrimaryTable;
+      signal->theData[0] = err;
+      return;
+    }
     ConstRope r2(c_rope_pool, primTab.p->tableName);
     r2.copy(tableName);
     w.add(DictTabInfo::PrimaryTable, tableName);
@@ -731,7 +740,7 @@ Dbdict::packTableIntoPages(SimplePropert
   {
     w.add(DictTabInfo::TablespaceId, tablePtr.p->m_tablespace_id);
     FilegroupPtr tsPtr;
-    ndbrequire(c_filegroup_hash.find(tsPtr, tablePtr.p->m_tablespace_id));
+    ndbrequire(find_object(tsPtr, tablePtr.p->m_tablespace_id));
     w.add(DictTabInfo::TablespaceVersion, tsPtr.p->m_version);
   }
 
@@ -830,7 +839,7 @@ Dbdict::packFilegroupIntoPages(SimplePro
     fg.TS_ExtentSize = fg_ptr.p->m_tablespace.m_extent_size;
     fg.TS_LogfileGroupId = fg_ptr.p->m_tablespace.m_default_logfile_group_id;
     FilegroupPtr lfg_ptr;
-    ndbrequire(c_filegroup_hash.find(lfg_ptr, fg.TS_LogfileGroupId));
+    ndbrequire(find_object(lfg_ptr, fg.TS_LogfileGroupId));
     fg.TS_LogfileGroupVersion = lfg_ptr.p->m_version;
     break;
   case DictTabInfo::LogfileGroup:
@@ -869,7 +878,7 @@ Dbdict::packFileIntoPages(SimpleProperti
   f.FileVersion = f_ptr.p->m_version;
 
   FilegroupPtr lfg_ptr;
-  ndbrequire(c_filegroup_hash.find(lfg_ptr, f.FilegroupId));
+  ndbrequire(find_object(lfg_ptr, f.FilegroupId));
   f.FilegroupVersion = lfg_ptr.p->m_version;
 
   SimpleProperties::UnpackStatus s;
@@ -893,8 +902,6 @@ Dbdict::execCREATE_FRAGMENTATION_REQ(Sig
     return;
   }
 
-  TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, req->primaryTableId);
   XSchemaFile * xsf = &c_schemaFile[SchemaRecord::NEW_SCHEMA_FILE];
   SchemaFile::TableEntry * te = getTableEntry(xsf, req->primaryTableId);
   if (te->m_tableState != SchemaFile::SF_CREATE)
@@ -911,8 +918,9 @@ Dbdict::execCREATE_FRAGMENTATION_REQ(Sig
   }
 
   DictObjectPtr obj_ptr;
-  c_obj_pool.getPtr(obj_ptr, tablePtr.p->m_obj_ptr_i);
-
+  TableRecordPtr tablePtr;
+  bool ok = find_object(obj_ptr, tablePtr, req->primaryTableId);
+  ndbrequire(ok);
   SchemaOpPtr op_ptr;
   findDictObjectOp(op_ptr, obj_ptr);
   ndbrequire(!op_ptr.isNull());
@@ -1816,7 +1824,7 @@ void Dbdict::closeReadSchemaConf(Signal*
       ndbrequire(c_writeSchemaRecord.inUse == false);
       XSchemaFile * xsf = &c_schemaFile[c_schemaRecord.oldSchemaPage != 0 ];
       Uint32 noOfPages =
-        (c_tableRecordPool.getSize() + NDB_SF_PAGE_ENTRIES - 1) /
+        (c_noOfMetaTables + NDB_SF_PAGE_ENTRIES - 1) /
         NDB_SF_PAGE_ENTRIES;
       resizeSchemaFile(xsf, noOfPages);
 
@@ -1946,15 +1954,13 @@ Dbdict::convertSchemaFileTo_6_4(XSchemaF
 Dbdict::Dbdict(Block_context& ctx):
   SimulatedBlock(DBDICT, ctx),
   c_attributeRecordHash(c_attributeRecordPool),
-  c_file_hash(c_file_pool),
-  c_filegroup_hash(c_filegroup_pool),
-  c_obj_hash(c_obj_pool),
+  c_obj_name_hash(c_obj_pool),
+  c_obj_id_hash(c_obj_pool),
   c_schemaOpHash(c_schemaOpPool),
   c_schemaTransHash(c_schemaTransPool),
   c_schemaTransList(c_schemaTransPool),
   c_schemaTransCount(0),
   c_txHandleHash(c_txHandlePool),
-  c_hash_map_hash(c_hash_map_pool),
   c_opCreateEvent(c_opRecordPool),
   c_opSubEvent(c_opRecordPool),
   c_opDropEvent(c_opRecordPool),
@@ -2235,8 +2241,6 @@ void Dbdict::initRecords()
 {
   initNodeRecords();
   initPageRecords();
-  initTableRecords();
-  initTriggerRecords();
 }//Dbdict::initRecords()
 
 void Dbdict::initSendSchemaRecord()
@@ -2317,27 +2321,12 @@ void Dbdict::initPageRecords()
   c_schemaRecord.oldSchemaPage = NDB_SF_MAX_PAGES;
 }//Dbdict::initPageRecords()
 
-void Dbdict::initTableRecords()
-{
-  TableRecordPtr tablePtr;
-  while (1) {
-    jam();
-    refresh_watch_dog();
-    c_tableRecordPool.seize(tablePtr);
-    if (tablePtr.i == RNIL) {
-      jam();
-      break;
-    }//if
-    initialiseTableRecord(tablePtr);
-  }//while
-}//Dbdict::initTableRecords()
-
-void Dbdict::initialiseTableRecord(TableRecordPtr tablePtr)
+void Dbdict::initialiseTableRecord(TableRecordPtr tablePtr, Uint32 tableId)
 {
   new (tablePtr.p) TableRecord();
   tablePtr.p->filePtr[0] = RNIL;
   tablePtr.p->filePtr[1] = RNIL;
-  tablePtr.p->tableId = tablePtr.i;
+  tablePtr.p->tableId = tableId;
   tablePtr.p->tableVersion = (Uint32)-1;
   tablePtr.p->fragmentType = DictTabInfo::AllNodesSmallTable;
   tablePtr.p->gciTableCreated = 0;
@@ -2374,31 +2363,18 @@ void Dbdict::initialiseTableRecord(Table
   tablePtr.p->indexStatFragId = ZNIL;
   tablePtr.p->indexStatNodeId = ZNIL;
   tablePtr.p->indexStatBgRequest = 0;
+  tablePtr.p->m_obj_ptr_i = RNIL;
 }//Dbdict::initialiseTableRecord()
 
-void Dbdict::initTriggerRecords()
-{
-  TriggerRecordPtr triggerPtr;
-  while (1) {
-    jam();
-    refresh_watch_dog();
-    c_triggerRecordPool.seize(triggerPtr);
-    if (triggerPtr.i == RNIL) {
-      jam();
-      break;
-    }//if
-    initialiseTriggerRecord(triggerPtr);
-  }//while
-}
-
-void Dbdict::initialiseTriggerRecord(TriggerRecordPtr triggerPtr)
+void Dbdict::initialiseTriggerRecord(TriggerRecordPtr triggerPtr, Uint32 triggerId)
 {
   new (triggerPtr.p) TriggerRecord();
   triggerPtr.p->triggerState = TriggerRecord::TS_NOT_DEFINED;
-  triggerPtr.p->triggerId = RNIL;
+  triggerPtr.p->triggerId = triggerId;
   triggerPtr.p->tableId = RNIL;
   triggerPtr.p->attributeMask.clear();
   triggerPtr.p->indexId = RNIL;
+  triggerPtr.p->m_obj_ptr_i = RNIL;
 }
 
 Uint32 Dbdict::getFsConnRecord()
@@ -2416,12 +2392,12 @@ Uint32 Dbdict::getFsConnRecord()
  * Search schemafile for free entry.  Its index is used as 'logical id'
  * of new disk-stored object.
  */
-Uint32 Dbdict::getFreeObjId(Uint32 minId, bool both)
+Uint32 Dbdict::getFreeObjId(bool both)
 {
   const XSchemaFile * newxsf = &c_schemaFile[SchemaRecord::NEW_SCHEMA_FILE];
   const XSchemaFile * oldxsf = &c_schemaFile[SchemaRecord::OLD_SCHEMA_FILE];
   const Uint32 noOfEntries = newxsf->noOfPages * NDB_SF_PAGE_ENTRIES;
-  for (Uint32 i = minId; i<noOfEntries; i++)
+  for (Uint32 i = 0; i<noOfEntries; i++)
   {
     const SchemaFile::TableEntry * oldentry = getTableEntry(oldxsf, i);
     const SchemaFile::TableEntry * newentry = getTableEntry(newxsf, i);
@@ -2439,40 +2415,81 @@ Uint32 Dbdict::getFreeObjId(Uint32 minId
   return RNIL;
 }
 
-Uint32 Dbdict::getFreeTableRecord()
+bool Dbdict::seizeTableRecord(TableRecordPtr& tablePtr, Uint32& schemaFileId)
 {
-  Uint32 i = getFreeObjId(0);
-  if (i == RNIL) {
+  if (schemaFileId == RNIL)
+  {
     jam();
-    return RNIL;
+    schemaFileId = getFreeObjId();
   }
-  if (i >= c_tableRecordPool.getSize()) {
+  if (schemaFileId == RNIL)
+  {
     jam();
-    return RNIL;
+    return false;
+  }
+  if (schemaFileId >= c_noOfMetaTables)
+  {
+    jam();
+    return false;
   }
 
-  TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, i);
-  initialiseTableRecord(tablePtr);
-  return i;
+  c_tableRecordPool_.seize(tablePtr);
+  if (tablePtr.isNull())
+  {
+    jam();
+    return false;
+  }
+  initialiseTableRecord(tablePtr, schemaFileId);
+  return true;
 }
 
 Uint32 Dbdict::getFreeTriggerRecord()
 {
-  const Uint32 size = c_triggerRecordPool.getSize();
+  const Uint32 size = c_triggerRecordPool_.getSize();
   TriggerRecordPtr triggerPtr;
-  for (triggerPtr.i = 0; triggerPtr.i < size; triggerPtr.i++) {
+  for (Uint32 id = 0; id < size; id++) {
     jam();
-    c_triggerRecordPool.getPtr(triggerPtr);
-    if (triggerPtr.p->triggerState == TriggerRecord::TS_NOT_DEFINED) {
+    bool ok = find_object(triggerPtr, id);
+    if (!ok)
+    {
       jam();
-      initialiseTriggerRecord(triggerPtr);
-      return triggerPtr.i;
+      return id;
     }
   }
   return RNIL;
 }
 
+bool Dbdict::seizeTriggerRecord(TriggerRecordPtr& triggerPtr, Uint32 triggerId)
+{
+  if (triggerId == RNIL)
+  {
+    triggerId = getFreeTriggerRecord();
+  }
+  else
+  {
+    TriggerRecordPtr ptr;
+    bool ok = find_object(ptr, triggerId);
+    if (ok)
+    { // triggerId already in use
+      jam();
+      return false;
+    }
+  }
+  if (triggerId == RNIL)
+  {
+    jam();
+    return false;
+  }
+  c_triggerRecordPool_.seize(triggerPtr);
+  if (triggerPtr.isNull())
+  {
+    jam();
+    return false;
+  }
+  initialiseTriggerRecord(triggerPtr, triggerId);
+  return true;
+}
+
 Uint32
 Dbdict::check_read_obj(Uint32 objId, Uint32 transId)
 {
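[Editor's note: the seize/find changes above all follow one pattern: a record's pool slot (ptr.i) is no longer its logical id (schema-file id or trigger id); lookups now go through an id hash (find_object) and seize takes the id as a parameter. A minimal standalone sketch of that pattern, with invented names (RecordPool, TableRecord) that are not NDB source:]

```cpp
// Sketch only: a toy pool where the logical id is decoupled from the slot
// index, mirroring seizeTableRecord()/find_object() in this patch.
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct TableRecord {
    uint32_t tableId = UINT32_MAX;  // logical id; no longer equal to the slot
};

class RecordPool {
    std::vector<TableRecord> slots_;
    std::vector<uint32_t> free_;
    std::unordered_map<uint32_t, uint32_t> id_to_slot_;  // plays c_obj_id_hash
public:
    explicit RecordPool(uint32_t n) : slots_(n) {
        for (uint32_t i = 0; i < n; i++) free_.push_back(i);
    }
    // Like seizeTableRecord(): caller supplies the logical id; the slot the
    // record lands in is whatever the free list happens to yield.
    bool seize(uint32_t id, uint32_t& slot) {
        if (free_.empty() || id_to_slot_.count(id)) return false;  // in use
        slot = free_.back();
        free_.pop_back();
        slots_[slot].tableId = id;
        id_to_slot_[id] = slot;
        return true;
    }
    // Like find_object(): id -> record via the hash, never by direct indexing.
    TableRecord* find(uint32_t id) {
        auto it = id_to_slot_.find(id);
        return it == id_to_slot_.end() ? nullptr : &slots_[it->second];
    }
};
```

[As in seizeTriggerRecord(), seizing an id that is already registered fails, which is what the early find_object() check in the patch enforces.]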
@@ -2632,11 +2649,11 @@ void Dbdict::execREAD_CONFIG_REQ(Signal*
     m_ctx.m_config.getOwnConfigIterator();
   ndbrequire(p != 0);
 
-  Uint32 attributesize, tablerecSize;
+  Uint32 attributesize;
   ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_DB_NO_TRIGGERS,
 					&c_maxNoOfTriggers));
   ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_DICT_ATTRIBUTE,&attributesize));
-  ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_DICT_TABLE, &tablerecSize));
+  ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_DICT_TABLE, &c_noOfMetaTables));
   c_indexStatAutoCreate = 0;
   ndb_mgm_get_int_parameter(p, CFG_DB_INDEX_STAT_AUTO_CREATE,
                             &c_indexStatAutoCreate);
@@ -2655,9 +2672,9 @@ void Dbdict::execREAD_CONFIG_REQ(Signal*
   c_nodes.setSize(MAX_NDB_NODES);
   c_pageRecordArray.setSize(ZNUMBER_OF_PAGES);
   c_schemaPageRecordArray.setSize(2 * NDB_SF_MAX_PAGES);
-  c_tableRecordPool.setSize(tablerecSize);
-  g_key_descriptor_pool.setSize(tablerecSize);
-  c_triggerRecordPool.setSize(c_maxNoOfTriggers);
+  c_tableRecordPool_.setSize(c_noOfMetaTables);
+  g_key_descriptor_pool.setSize(c_noOfMetaTables);
+  c_triggerRecordPool_.setSize(c_maxNoOfTriggers);
 
   Record_info ri;
   OpSectionBuffer::createRecordInfo(ri, RT_DBDICT_OP_SECTION_BUFFER);
@@ -2669,13 +2686,11 @@ void Dbdict::execREAD_CONFIG_REQ(Signal*
   c_txHandlePool.setSize(2);
   c_txHandleHash.setSize(2);
 
-  c_obj_pool.setSize(tablerecSize+c_maxNoOfTriggers);
-  c_obj_hash.setSize((tablerecSize+c_maxNoOfTriggers+1)/2);
+  c_obj_pool.setSize(c_noOfMetaTables+c_maxNoOfTriggers);
+  c_obj_name_hash.setSize((c_noOfMetaTables+c_maxNoOfTriggers+1)/2);
+  c_obj_id_hash.setSize((c_noOfMetaTables+c_maxNoOfTriggers+1)/2);
   m_dict_lock_pool.setSize(MAX_NDB_NODES);
 
-  c_file_hash.setSize(16);
-  c_filegroup_hash.setSize(16);
-
   c_file_pool.init(RT_DBDICT_FILE, pc);
   c_filegroup_pool.init(RT_DBDICT_FILEGROUP, pc);
 
@@ -2704,7 +2719,6 @@ void Dbdict::execREAD_CONFIG_REQ(Signal*
   c_copyDataRecPool.arena_pool_init(&c_arenaAllocator, RT_DBDICT_COPY_DATA, pc);
   c_schemaOpPool.arena_pool_init(&c_arenaAllocator, RT_DBDICT_SCHEMA_OPERATION, pc);
 
-  c_hash_map_hash.setSize(4);
   c_hash_map_pool.setSize(32);
   g_hash_map.setSize(32);
 
@@ -2726,7 +2740,7 @@ void Dbdict::execREAD_CONFIG_REQ(Signal*
   c_schemaFile[1].noOfPages = 0;
 
   Uint32 rps = 0;
-  rps += tablerecSize * (MAX_TAB_NAME_SIZE + MAX_FRM_DATA_SIZE);
+  rps += c_noOfMetaTables * (MAX_TAB_NAME_SIZE + MAX_FRM_DATA_SIZE);
   rps += attributesize * (MAX_ATTR_NAME_SIZE + MAX_ATTR_DEFAULT_VALUE_SIZE);
   rps += c_maxNoOfTriggers * MAX_TAB_NAME_SIZE;
   rps += (10 + 10) * MAX_TAB_NAME_SIZE;
@@ -2896,7 +2910,7 @@ void Dbdict::execREAD_NODESCONF(Signal*
 void Dbdict::initSchemaFile(Signal* signal)
 {
   XSchemaFile * xsf = &c_schemaFile[SchemaRecord::NEW_SCHEMA_FILE];
-  xsf->noOfPages = (c_tableRecordPool.getSize() + NDB_SF_PAGE_ENTRIES - 1)
+  xsf->noOfPages = (c_noOfMetaTables + NDB_SF_PAGE_ENTRIES - 1)
                    / NDB_SF_PAGE_ENTRIES;
   initSchemaFile(xsf, 0, xsf->noOfPages, true);
   // init alt copy too for INR
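[Editor's note: the noOfPages computation above is the usual (n + d - 1) / d ceiling-division idiom, now driven by c_noOfMetaTables instead of the table pool size. A one-function sketch (pagesFor is an invented name):]

```cpp
// Sketch: integer ceiling division as used to size the schema file,
// pages = ceil(entries / NDB_SF_PAGE_ENTRIES).
#include <cassert>
#include <cstdint>

static uint32_t pagesFor(uint32_t entries, uint32_t perPage) {
    return (entries + perPage - 1) / perPage;
}
```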
@@ -2937,9 +2951,9 @@ Dbdict::initSchemaFile_conf(Signal* sign
 }
 
 void
-Dbdict::activateIndexes(Signal* signal, Uint32 i)
+Dbdict::activateIndexes(Signal* signal, Uint32 id)
 {
-  if (i == 0)
+  if (id == 0)
     D("activateIndexes start");
 
   Uint32 requestFlags = 0;
@@ -2965,12 +2979,16 @@ Dbdict::activateIndexes(Signal* signal,
   }
 
   TableRecordPtr indexPtr;
-  indexPtr.i = i;
-  for (; indexPtr.i < c_tableRecordPool.getSize(); indexPtr.i++)
+  for (; id < c_noOfMetaTables; id++)
   {
-    c_tableRecordPool.getPtr(indexPtr);
+    bool ok = find_object(indexPtr, id);
+    if (!ok)
+    {
+      jam();
+      continue;
+    }
 
-    if (check_read_obj(indexPtr.i))
+    if (check_read_obj(id))
     {
       continue;
     }
@@ -2988,7 +3006,7 @@ Dbdict::activateIndexes(Signal* signal,
     }
 
     // wl3600_todo use simple schema trans when implemented
-    D("activateIndexes i=" << indexPtr.i);
+    D("activateIndexes id=" << id);
 
     TxHandlePtr tx_ptr;
     seizeTxHandle(tx_ptr);
@@ -3030,8 +3048,10 @@ Dbdict::activateIndex_fromBeginTrans(Sig
   ndbrequire(!tx_ptr.isNull());
 
   TableRecordPtr indexPtr;
-  indexPtr.i = tx_ptr.p->m_userData;
-  c_tableRecordPool.getPtr(indexPtr);
+  c_tableRecordPool_.getPtr(indexPtr, tx_ptr.p->m_userData);
+  ndbrequire(!indexPtr.isNull());
+  DictObjectPtr index_obj_ptr;
+  c_obj_pool.getPtr(index_obj_ptr, indexPtr.p->m_obj_ptr_i);
 
   AlterIndxReq* req = (AlterIndxReq*)signal->getDataPtrSend();
 
@@ -3044,7 +3064,7 @@ Dbdict::activateIndex_fromBeginTrans(Sig
   req->transId = tx_ptr.p->m_transId;
   req->transKey = tx_ptr.p->m_transKey;
   req->requestInfo = requestInfo;
-  req->indexId = indexPtr.i;
+  req->indexId = index_obj_ptr.p->m_id;
   req->indexVersion = indexPtr.p->tableVersion;
 
   Callback c = {
@@ -3095,13 +3115,13 @@ Dbdict::activateIndex_fromEndTrans(Signa
   ndbrequire(!tx_ptr.isNull());
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, tx_ptr.p->m_userData);
+  c_tableRecordPool_.getPtr(indexPtr, tx_ptr.p->m_userData);
+  DictObjectPtr index_obj_ptr;
+  c_obj_pool.getPtr(index_obj_ptr, indexPtr.p->m_obj_ptr_i);
 
   char indexName[MAX_TAB_NAME_SIZE];
   {
-    DictObjectPtr obj_ptr;
-    c_obj_pool.getPtr(obj_ptr, indexPtr.p->m_obj_ptr_i);
-    LocalRope name(c_rope_pool, obj_ptr.p->m_name);
+    LocalRope name(c_rope_pool, index_obj_ptr.p->m_name);
     name.copy(indexName);
   }
 
@@ -3110,38 +3130,43 @@ Dbdict::activateIndex_fromEndTrans(Signa
   {
     jam();
     infoEvent("DICT: activate index %u done (%s)",
-	      indexPtr.i, indexName);
+             index_obj_ptr.p->m_id, indexName);
   }
   else
   {
     jam();
     warningEvent("DICT: activate index %u error: code=%u line=%u node=%u (%s)",
-		 indexPtr.i,
+                index_obj_ptr.p->m_id,
 		 error.errorCode, error.errorLine, error.errorNodeId,
 		 indexName);
   }
 
+  Uint32 id = index_obj_ptr.p->m_id;
   releaseTxHandle(tx_ptr);
-  activateIndexes(signal, indexPtr.i + 1);
+  activateIndexes(signal, id + 1);
 }
 
 void
-Dbdict::rebuildIndexes(Signal* signal, Uint32 i)
+Dbdict::rebuildIndexes(Signal* signal, Uint32 id)
 {
-  if (i == 0)
+  if (id == 0)
     D("rebuildIndexes start");
 
   TableRecordPtr indexPtr;
-  indexPtr.i = i;
-  for (; indexPtr.i < c_tableRecordPool.getSize(); indexPtr.i++) {
-    c_tableRecordPool.getPtr(indexPtr);
-    if (check_read_obj(indexPtr.i))
+  for (; id < c_noOfMetaTables; id++) {
+    bool ok = find_object(indexPtr, id);
+    if (!ok)
+    {
+      jam();
+      continue;
+    }
+    if (check_read_obj(id))
       continue;
     if (!indexPtr.p->isIndex())
       continue;
 
     // wl3600_todo use simple schema trans when implemented
-    D("rebuildIndexes i=" << indexPtr.i);
+    D("rebuildIndexes id=" << id);
 
     TxHandlePtr tx_ptr;
     seizeTxHandle(tx_ptr);
@@ -3180,8 +3205,9 @@ Dbdict::rebuildIndex_fromBeginTrans(Sign
   ndbrequire(!tx_ptr.isNull());
 
   TableRecordPtr indexPtr;
-  indexPtr.i = tx_ptr.p->m_userData;
-  c_tableRecordPool.getPtr(indexPtr);
+  c_tableRecordPool_.getPtr(indexPtr, tx_ptr.p->m_userData);
+  DictObjectPtr index_obj_ptr;
+  c_obj_pool.getPtr(index_obj_ptr, indexPtr.p->m_obj_ptr_i);
 
   BuildIndxReq* req = (BuildIndxReq*)signal->getDataPtrSend();
 
@@ -3197,7 +3223,7 @@ Dbdict::rebuildIndex_fromBeginTrans(Sign
   req->buildId = 0;
   req->buildKey = 0;
   req->tableId = indexPtr.p->primaryTableId;
-  req->indexId = indexPtr.i;
+  req->indexId = index_obj_ptr.p->m_id;
   req->indexType = indexPtr.p->tableType;
   req->parallelism = 16;
 
@@ -3249,7 +3275,7 @@ Dbdict::rebuildIndex_fromEndTrans(Signal
   ndbrequire(!tx_ptr.isNull());
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, tx_ptr.p->m_userData);
+  c_tableRecordPool_.getPtr(indexPtr, tx_ptr.p->m_userData);
 
   const char* actionName;
   {
@@ -3258,10 +3284,11 @@ Dbdict::rebuildIndex_fromEndTrans(Signal
     actionName = !noBuild ? "rebuild" : "online";
   }
 
+  DictObjectPtr obj_ptr;
+  c_obj_pool.getPtr(obj_ptr, indexPtr.p->m_obj_ptr_i);
+
   char indexName[MAX_TAB_NAME_SIZE];
   {
-    DictObjectPtr obj_ptr;
-    c_obj_pool.getPtr(obj_ptr, indexPtr.p->m_obj_ptr_i);
     LocalRope name(c_rope_pool, obj_ptr.p->m_name);
     name.copy(indexName);
   }
@@ -3271,20 +3298,20 @@ Dbdict::rebuildIndex_fromEndTrans(Signal
     jam();
     infoEvent(
         "DICT: %s index %u done (%s)",
-        actionName, indexPtr.i, indexName);
+        actionName, obj_ptr.p->m_id, indexName);
   } else {
     jam();
     warningEvent(
         "DICT: %s index %u error: code=%u line=%u node=%u (%s)",
         actionName,
-        indexPtr.i, error.errorCode, error.errorLine, error.errorNodeId,
+        obj_ptr.p->m_id, error.errorCode, error.errorLine, error.errorNodeId,
         indexName);
   }
 
-  Uint32 i = tx_ptr.p->m_userData;
+  Uint32 id = obj_ptr.p->m_id;
   releaseTxHandle(tx_ptr);
 
-  rebuildIndexes(signal, i + 1);
+  rebuildIndexes(signal, id + 1);
 }
 
 /* **************************************************************** */
@@ -3656,7 +3683,7 @@ void Dbdict::checkSchemaStatus(Signal* s
     SchemaFile::EntryState ownState =
       (SchemaFile::EntryState)ownEntry->m_tableState;
 
-    if (c_restartRecord.activeTable >= c_tableRecordPool.getSize())
+    if (c_restartRecord.activeTable >= c_noOfMetaTables)
     {
       jam();
       ndbrequire(masterState == SchemaFile::SF_UNUSED);
@@ -4150,7 +4177,7 @@ Dbdict::execGET_TABINFO_CONF(Signal* sig
     {
       jam();
       FilePtr fg_ptr;
-      ndbrequire(c_file_hash.find(fg_ptr, conf->tableId));
+      ndbrequire(find_object(fg_ptr, conf->tableId));
       const Uint32 free_extents= conf->freeExtents;
       const Uint32 id= conf->tableId;
       const Uint32 type= conf->tableType;
@@ -4168,7 +4195,7 @@ Dbdict::execGET_TABINFO_CONF(Signal* sig
     {
       jam();
       FilegroupPtr fg_ptr;
-      ndbrequire(c_filegroup_hash.find(fg_ptr, conf->tableId));
+      ndbrequire(find_object(fg_ptr, conf->tableId));
       const Uint32 free_hi= conf->freeWordsHi;
       const Uint32 free_lo= conf->freeWordsLo;
       const Uint32 id= conf->tableId;
@@ -4643,8 +4670,8 @@ void Dbdict::execINCL_NODEREQ(Signal* si
 inline
 void Dbdict::printTables()
 {
-  DictObject_hash::Iterator iter;
-  bool moreTables = c_obj_hash.first(iter);
+  DictObjectName_hash::Iterator iter;
+  bool moreTables = c_obj_name_hash.first(iter);
   printf("OBJECTS IN DICT:\n");
   char name[PATH_MAX];
   while (moreTables) {
@@ -4652,7 +4679,7 @@ void Dbdict::printTables()
     ConstRope r(c_rope_pool, tablePtr.p->m_name);
     r.copy(name);
     printf("%s ", name);
-    moreTables = c_obj_hash.next(iter);
+    moreTables = c_obj_name_hash.next(iter);
   }
   printf("\n");
 }
@@ -4685,16 +4712,25 @@ Dbdict::get_object(DictObjectPtr& obj_pt
   key.m_key.m_name_len = len;
   key.m_key.m_pool = &c_rope_pool;
   key.m_name.m_hash = hash;
-  return c_obj_hash.find(obj_ptr, key);
+  return c_obj_name_hash.find(obj_ptr, key);
 }
 
 void
 Dbdict::release_object(Uint32 obj_ptr_i, DictObject* obj_ptr_p){
-  LocalRope name(c_rope_pool, obj_ptr_p->m_name);
+  jam();
+  RopeHandle obj_name = obj_ptr_p->m_name;
+  DictObjectPtr ptr = { obj_ptr_p, obj_ptr_i };
+
+  LocalRope name(c_rope_pool, obj_name);
   name.erase();
 
-  DictObjectPtr ptr = { obj_ptr_p, obj_ptr_i };
-  c_obj_hash.release(ptr);
+  jam();
+  c_obj_name_hash.remove(ptr);
+  jam();
+  c_obj_id_hash.remove(ptr);
+  jam();
+  c_obj_pool.release(ptr);
+  jam();
 }
 
 void
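[Editor's note: with c_obj_hash split into c_obj_name_hash and c_obj_id_hash, release_object() above must unlink the object from BOTH hashes before returning the slot to c_obj_pool; forgetting either leaves a dangling entry. A toy registry illustrating the invariant, with invented names that are not NDB source:]

```cpp
// Sketch: an object registered under two keys must be removed from both
// maps on release, mirroring the paired remove() calls in release_object().
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

struct Registry {
    std::unordered_map<std::string, uint32_t> by_name;  // plays c_obj_name_hash
    std::unordered_map<uint32_t, uint32_t>    by_id;    // plays c_obj_id_hash

    void add(const std::string& name, uint32_t id, uint32_t slot) {
        by_name[name] = slot;
        by_id[id] = slot;
    }
    void release(const std::string& name, uint32_t id) {
        by_name.erase(name);  // both removals, or a lookup path goes stale
        by_id.erase(id);
    }
};
```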
@@ -4769,19 +4805,16 @@ void Dbdict::handleTabInfoInit(Signal *
   }
 
   TableRecordPtr tablePtr;
+  Uint32 schemaFileId;
   switch (parseP->requestType) {
   case DictTabInfo::CreateTableFromAPI: {
     jam();
   }
   case DictTabInfo::AlterTableFromAPI:{
     jam();
-    tablePtr.i = getFreeTableRecord();
-    /* ---------------------------------------------------------------- */
-    // Check if no free tables existed.
-    /* ---------------------------------------------------------------- */
-    tabRequire(tablePtr.i != RNIL, CreateTableRef::NoMoreTableRecords);
-
-    c_tableRecordPool.getPtr(tablePtr);
+    schemaFileId = RNIL;
+    bool ok = seizeTableRecord(tablePtr, schemaFileId);
+    tabRequire(ok, CreateTableRef::NoMoreTableRecords);
     break;
   }
   case DictTabInfo::AddTableFromDict:
@@ -4791,20 +4824,16 @@ void Dbdict::handleTabInfoInit(Signal *
 /* ---------------------------------------------------------------- */
 // Get table id and check that table doesn't already exist
 /* ---------------------------------------------------------------- */
-    tablePtr.i = c_tableDesc.TableId;
-
     if (parseP->requestType == DictTabInfo::ReadTableFromDiskSR) {
-      ndbrequire(tablePtr.i == c_restartRecord.activeTable);
+      ndbrequire(c_tableDesc.TableId == c_restartRecord.activeTable);
     }//if
     if (parseP->requestType == DictTabInfo::GetTabInfoConf) {
-      ndbrequire(tablePtr.i == c_restartRecord.activeTable);
+      ndbrequire(c_tableDesc.TableId == c_restartRecord.activeTable);
     }//if
 
-    c_tableRecordPool.getPtr(tablePtr);
-
-    //Uint32 oldTableVersion = tablePtr.p->tableVersion;
-    initialiseTableRecord(tablePtr);
-
+    schemaFileId = c_tableDesc.TableId;
+    bool ok = seizeTableRecord(tablePtr, schemaFileId);
+    ndbrequire(ok); // Already exists or out of memory
 /* ---------------------------------------------------------------- */
 // Set table version
 /* ---------------------------------------------------------------- */
@@ -4824,27 +4853,29 @@ void Dbdict::handleTabInfoInit(Signal *
 	       CreateTableRef::OutOfStringBuffer);
   }
 
-  DictObjectPtr obj_ptr;
   if (parseP->requestType != DictTabInfo::AlterTableFromAPI) {
     jam();
-    ndbrequire(c_obj_hash.seize(obj_ptr));
+
+    DictObjectPtr obj_ptr;
+    ndbrequire(c_obj_pool.seize(obj_ptr));
     new (obj_ptr.p) DictObject;
-    obj_ptr.p->m_id = tablePtr.i;
+    obj_ptr.p->m_id = schemaFileId;
     obj_ptr.p->m_type = c_tableDesc.TableType;
     obj_ptr.p->m_name = tablePtr.p->tableName;
     obj_ptr.p->m_ref_count = 0;
-    c_obj_hash.add(obj_ptr);
-    tablePtr.p->m_obj_ptr_i = obj_ptr.i;
+    ndbrequire(link_object(obj_ptr, tablePtr));
+    c_obj_id_hash.add(obj_ptr);
+    c_obj_name_hash.add(obj_ptr);
 
     if (g_trace)
     {
-      g_eventLogger->info("Dbdict: create name=%s,id=%u,obj_ptr_i=%d",
+      g_eventLogger->info("Dbdict: %u: create name=%s,id=%u,obj_ptr_i=%d", __LINE__,
                           c_tableDesc.TableName,
-                          tablePtr.i, tablePtr.p->m_obj_ptr_i);
+                          schemaFileId, tablePtr.p->m_obj_ptr_i);
     }
     send_event(signal, trans_ptr,
                NDB_LE_CreateSchemaObject,
-               tablePtr.i,
+               schemaFileId,
                tablePtr.p->tableVersion,
                c_tableDesc.TableType);
   }
@@ -4909,7 +4940,7 @@ void Dbdict::handleTabInfoInit(Signal *
     {
       jam();
       HashMapRecordPtr hm_ptr;
-      ndbrequire(c_hash_map_hash.find(hm_ptr, dictObj->m_id));
+      ndbrequire(find_object(hm_ptr, dictObj->m_id));
       tablePtr.p->hashMapObjectId = hm_ptr.p->m_object_id;
       tablePtr.p->hashMapVersion = hm_ptr.p->m_object_version;
     }
@@ -4919,7 +4950,7 @@ void Dbdict::handleTabInfoInit(Signal *
   {
     jam();
     HashMapRecordPtr hm_ptr;
-    tabRequire(c_hash_map_hash.find(hm_ptr, tablePtr.p->hashMapObjectId),
+    tabRequire(find_object(hm_ptr, tablePtr.p->hashMapObjectId),
                CreateTableRef::InvalidHashMap);
 
     tabRequire(hm_ptr.p->m_object_version ==  tablePtr.p->hashMapVersion,
@@ -4988,12 +5019,12 @@ void Dbdict::handleTabInfoInit(Signal *
         ndbrequire(c_tableDesc.UpdateTriggerId != RNIL);
         ndbrequire(c_tableDesc.DeleteTriggerId != RNIL);
         ndbout_c("table: %u UPGRADE saving (%u/%u/%u)",
-                 tablePtr.i,
+                 schemaFileId,
                  c_tableDesc.InsertTriggerId,
                  c_tableDesc.UpdateTriggerId,
                  c_tableDesc.DeleteTriggerId);
         infoEvent("table: %u UPGRADE saving (%u/%u/%u)",
-                  tablePtr.i,
+                  schemaFileId,
                   c_tableDesc.InsertTriggerId,
                   c_tableDesc.UpdateTriggerId,
                   c_tableDesc.DeleteTriggerId);
@@ -5037,7 +5068,7 @@ void Dbdict::handleTabInfoInit(Signal *
      * Increase ref count
      */
     FilegroupPtr ptr;
-    ndbrequire(c_filegroup_hash.find(ptr, tablePtr.p->m_tablespace_id));
+    ndbrequire(find_object(ptr, tablePtr.p->m_tablespace_id));
     increase_ref_count(ptr.p->m_obj_ptr_i);
   }
 }//handleTabInfoInit()
@@ -5052,66 +5083,78 @@ Dbdict::upgrade_seizeTrigger(TableRecord
    * The insert trigger will be "main" trigger so
    *   it does not need any special treatment
    */
-  const Uint32 size = c_triggerRecordPool.getSize();
+  const Uint32 size = c_triggerRecordPool_.getSize();
   ndbrequire(updateTriggerId == RNIL || updateTriggerId < size);
   ndbrequire(deleteTriggerId == RNIL || deleteTriggerId < size);
 
+  DictObjectPtr tab_obj_ptr;
+  c_obj_pool.getPtr(tab_obj_ptr, tabPtr.p->m_obj_ptr_i);
+
   TriggerRecordPtr triggerPtr;
   if (updateTriggerId != RNIL)
   {
     jam();
-    c_triggerRecordPool.getPtr(triggerPtr, updateTriggerId);
-    if (triggerPtr.p->triggerState == TriggerRecord::TS_NOT_DEFINED)
+    bool ok = find_object(triggerPtr, updateTriggerId);
+    if (!ok)
     {
       jam();
-      initialiseTriggerRecord(triggerPtr);
+      ok = seizeTriggerRecord(triggerPtr, updateTriggerId);
+      if (!ok)
+      {
+        jam();
+        ndbrequire(ok);
+      }
       triggerPtr.p->triggerState = TriggerRecord::TS_FAKE_UPGRADE;
-      triggerPtr.p->triggerId = triggerPtr.i;
       triggerPtr.p->tableId = tabPtr.p->primaryTableId;
-      triggerPtr.p->indexId = tabPtr.i;
+      triggerPtr.p->indexId = tab_obj_ptr.p->m_id;
       TriggerInfo::packTriggerInfo(triggerPtr.p->triggerInfo,
                                    g_hashIndexTriggerTmpl[0].triggerInfo);
 
       char buf[256];
       BaseString::snprintf(buf, sizeof(buf),
-                           "UPG_UPD_NDB$INDEX_%u_UI", tabPtr.i);
+                           "UPG_UPD_NDB$INDEX_%u_UI", tab_obj_ptr.p->m_id);
       {
         LocalRope name(c_rope_pool, triggerPtr.p->triggerName);
         name.assign(buf);
       }
 
       DictObjectPtr obj_ptr;
-      bool ok = c_obj_hash.seize(obj_ptr);
+      ok = c_obj_pool.seize(obj_ptr);
       ndbrequire(ok);
       new (obj_ptr.p) DictObject();
 
       obj_ptr.p->m_name = triggerPtr.p->triggerName;
-      c_obj_hash.add(obj_ptr);
       obj_ptr.p->m_ref_count = 0;
 
-      triggerPtr.p->m_obj_ptr_i = obj_ptr.i;
       obj_ptr.p->m_id = triggerPtr.p->triggerId;
       obj_ptr.p->m_type =TriggerInfo::getTriggerType(triggerPtr.p->triggerInfo);
+      link_object(obj_ptr, triggerPtr);
+      c_obj_name_hash.add(obj_ptr);
+      c_obj_id_hash.add(obj_ptr);
     }
   }
 
   if (deleteTriggerId != RNIL)
   {
     jam();
-    c_triggerRecordPool.getPtr(triggerPtr, deleteTriggerId);
-    if (triggerPtr.p->triggerState == TriggerRecord::TS_NOT_DEFINED)
+    bool ok = find_object(triggerPtr, deleteTriggerId); // TODO: msundell seizeTriggerRecord
+    if (!ok)
     {
       jam();
-      initialiseTriggerRecord(triggerPtr);
+      ok = seizeTriggerRecord(triggerPtr, deleteTriggerId);
+      if (!ok)
+      {
+        jam();
+        ndbrequire(ok);
+      }
       triggerPtr.p->triggerState = TriggerRecord::TS_FAKE_UPGRADE;
-      triggerPtr.p->triggerId = triggerPtr.i;
       triggerPtr.p->tableId = tabPtr.p->primaryTableId;
-      triggerPtr.p->indexId = tabPtr.i;
+      triggerPtr.p->indexId = tab_obj_ptr.p->m_id;
       TriggerInfo::packTriggerInfo(triggerPtr.p->triggerInfo,
                                    g_hashIndexTriggerTmpl[0].triggerInfo);
       char buf[256];
       BaseString::snprintf(buf, sizeof(buf),
-                           "UPG_DEL_NDB$INDEX_%u_UI", tabPtr.i);
+                           "UPG_DEL_NDB$INDEX_%u_UI", tab_obj_ptr.p->m_id);
 
       {
         LocalRope name(c_rope_pool, triggerPtr.p->triggerName);
@@ -5119,17 +5162,18 @@ Dbdict::upgrade_seizeTrigger(TableRecord
       }
 
       DictObjectPtr obj_ptr;
-      bool ok = c_obj_hash.seize(obj_ptr);
+      ok = c_obj_pool.seize(obj_ptr);
       ndbrequire(ok);
       new (obj_ptr.p) DictObject();
 
       obj_ptr.p->m_name = triggerPtr.p->triggerName;
-      c_obj_hash.add(obj_ptr);
       obj_ptr.p->m_ref_count = 0;
 
-      triggerPtr.p->m_obj_ptr_i = obj_ptr.i;
       obj_ptr.p->m_id = triggerPtr.p->triggerId;
       obj_ptr.p->m_type =TriggerInfo::getTriggerType(triggerPtr.p->triggerInfo);
+      link_object(obj_ptr, triggerPtr);
+      c_obj_name_hash.add(obj_ptr);
+      c_obj_id_hash.add(obj_ptr);
     }
   }
 }
@@ -5421,7 +5465,7 @@ void Dbdict::handleTabInfo(SimplePropert
   if(tablePtr.p->m_tablespace_id != RNIL || counts[3] || counts[4])
   {
     FilegroupPtr tablespacePtr;
-    if(!c_filegroup_hash.find(tablespacePtr, tablePtr.p->m_tablespace_id))
+    if (!find_object(tablespacePtr, tablePtr.p->m_tablespace_id))
     {
       tabRequire(false, CreateTableRef::InvalidTablespace);
     }
@@ -5657,7 +5701,7 @@ Dbdict::create_fragmentation(Signal* sig
   {
     jam();
     HashMapRecordPtr hm_ptr;
-    ndbrequire(c_hash_map_hash.find(hm_ptr, tabPtr.p->hashMapObjectId));
+    ndbrequire(find_object(hm_ptr, tabPtr.p->hashMapObjectId));
     frag_req->map_ptr_i = hm_ptr.p->m_map_ptr_i;
   }
   else
@@ -5780,20 +5824,20 @@ Dbdict::createTable_parse(Signal* signal
     TableRecordPtr tabPtr = parseRecord.tablePtr;
 
     // link operation to object seized in handleTabInfoInit
+    DictObjectPtr obj_ptr;
     {
-      DictObjectPtr obj_ptr;
       Uint32 obj_ptr_i = tabPtr.p->m_obj_ptr_i;
       bool ok = findDictObject(op_ptr, obj_ptr, obj_ptr_i);
       ndbrequire(ok);
     }
 
     {
-      Uint32 version = getTableEntry(tabPtr.i)->m_tableVersion;
+      Uint32 version = getTableEntry(obj_ptr.p->m_id)->m_tableVersion;
       tabPtr.p->tableVersion = create_obj_inc_schema_version(version);
     }
 
     // fill in table id and version
-    impl_req->tableId = tabPtr.i;
+    impl_req->tableId = obj_ptr.p->m_id;
     impl_req->tableVersion = tabPtr.p->tableVersion;
 
     if (ERROR_INSERTED(6202) ||
@@ -5932,7 +5976,13 @@ Dbdict::createTable_parse(Signal* signal
   }
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, tableId);
+  bool ok = find_object(tabPtr, tableId);
+  if (!ok)
+  {
+    jam();
+    setError(error, GetTabInfoRef::TableNotDefined, __LINE__);
+    return;
+  }
   tabPtr.p->packedSize = tabInfoPtr.sz;
   // wl3600_todo verify version on slave
   tabPtr.p->tableVersion = tableVersion;
@@ -6025,7 +6075,7 @@ Dbdict::createTable_prepare(Signal* sign
 
   Uint32 tableId = createTabPtr.p->m_request.tableId;
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, tableId);
+  bool ok = find_object(tabPtr, tableId);
 
   Callback cb;
   cb.m_callbackData = op_ptr.p->op_key;
@@ -6041,6 +6091,7 @@ Dbdict::createTable_prepare(Signal* sign
     return;
   }
 
+  ndbrequire(ok);
   bool savetodisk = !(tabPtr.p->m_bits & TableRecord::TR_Temporary);
   if (savetodisk)
   {
@@ -6104,7 +6155,8 @@ Dbdict::createTab_local(Signal* signal,
   createTabPtr.p->m_callback = * c;
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, createTabPtr.p->m_request.tableId);
+  bool ok = find_object(tabPtr, createTabPtr.p->m_request.tableId);
+  ndbrequire(ok);
 
   /**
    * Start by createing table in LQH
@@ -6134,7 +6186,7 @@ Dbdict::createTab_local(Signal* signal,
    * Create KeyDescriptor
    */
   {
-    KeyDescriptor* desc= g_key_descriptor_pool.getPtr(tabPtr.i);
+    KeyDescriptor* desc= g_key_descriptor_pool.getPtr(createTabPtr.p->m_request.tableId);
     new (desc) KeyDescriptor();
 
     if (tabPtr.p->primaryTableId == RNIL)
@@ -6218,7 +6270,8 @@ Dbdict::execCREATE_TAB_CONF(Signal* sign
   createTabPtr.p->m_lqhFragPtr = conf->lqhConnectPtr;
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, createTabPtr.p->m_request.tableId);
+  bool ok = find_object(tabPtr, createTabPtr.p->m_request.tableId);
+  ndbrequire(ok);
   sendLQHADDATTRREQ(signal, op_ptr, tabPtr.p->m_attributes.firstItem);
 }
 
@@ -6233,7 +6286,8 @@ Dbdict::sendLQHADDATTRREQ(Signal* signal
   getOpRec(op_ptr, createTabPtr);
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, createTabPtr.p->m_request.tableId);
+  bool ok = find_object(tabPtr, createTabPtr.p->m_request.tableId);
+  ndbrequire(ok);
 
   const bool isHashIndex = tabPtr.p->isHashIndex();
 
@@ -6361,8 +6415,8 @@ Dbdict::createTab_dih(Signal* signal, Sc
   D("createTab_dih" << *op_ptr.p);
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, createTabPtr.p->m_request.tableId);
-
+  bool ok = find_object(tabPtr, createTabPtr.p->m_request.tableId);
+  ndbrequire(ok);
 
   /**
    * NOTE: use array access here...
@@ -6373,7 +6427,7 @@ Dbdict::createTab_dih(Signal* signal, Sc
 
   DiAddTabReq * req = (DiAddTabReq*)signal->getDataPtrSend();
   req->connectPtr = op_ptr.p->op_key;
-  req->tableId = tabPtr.i;
+  req->tableId = createTabPtr.p->m_request.tableId;
   req->fragType = tabPtr.p->fragmentType;
   req->kValue = tabPtr.p->kValue;
   req->noOfReplicas = 0;
@@ -6388,7 +6442,7 @@ Dbdict::createTab_dih(Signal* signal, Sc
   if (tabPtr.p->hashMapObjectId != RNIL)
   {
     HashMapRecordPtr hm_ptr;
-    ndbrequire(c_hash_map_hash.find(hm_ptr, tabPtr.p->hashMapObjectId));
+    ndbrequire(find_object(hm_ptr, tabPtr.p->hashMapObjectId));
     req->hashMapPtrI = hm_ptr.p->m_map_ptr_i;
   }
   else
@@ -6473,7 +6527,8 @@ Dbdict::execADD_FRAGREQ(Signal* signal)
   TableRecordPtr tabPtr;
   if (AlterTableReq::getAddFragFlag(changeMask))
   {
-    c_tableRecordPool.getPtr(tabPtr, tableId);
+    bool ok = find_object(tabPtr, tableId);
+    ndbrequire(ok);
     if (DictTabInfo::isTable(tabPtr.p->tableType))
     {
       jam();
@@ -6500,7 +6555,8 @@ Dbdict::execADD_FRAGREQ(Signal* signal)
     findSchemaOp(op_ptr, createTabPtr, senderData);
     ndbrequire(!op_ptr.isNull());
     createTabPtr.p->m_dihAddFragPtr = dihPtr;
-    c_tableRecordPool.getPtr(tabPtr, tableId);
+    bool ok = find_object(tabPtr, tableId);
+    ndbrequire(ok);
   }
 
 #if 0
@@ -6568,7 +6624,8 @@ Dbdict::execLQHFRAGCONF(Signal * signal)
     jam();
     SchemaOpPtr op_ptr;
     TableRecordPtr tabPtr;
-    c_tableRecordPool.getPtr(tabPtr, tableId);
+    bool ok = find_object(tabPtr, tableId);
+    ndbrequire(ok);
     if (DictTabInfo::isTable(tabPtr.p->tableType))
     {
       AlterTableRecPtr alterTabPtr;
@@ -6621,7 +6678,8 @@ Dbdict::execLQHFRAGREF(Signal * signal)
     jam();
     SchemaOpPtr op_ptr;
     TableRecordPtr tabPtr;
-    c_tableRecordPool.getPtr(tabPtr, tableId);
+    bool ok = find_object(tabPtr, tableId);
+    ndbrequire(ok);
     if (DictTabInfo::isTable(tabPtr.p->tableType))
     {
       jam();
@@ -6709,13 +6767,14 @@ Dbdict::execTAB_COMMITCONF(Signal* signa
   //const CreateTabReq* impl_req = &createTabPtr.p->m_request;
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, createTabPtr.p->m_request.tableId);
+  bool ok = find_object(tabPtr, createTabPtr.p->m_request.tableId);
+  ndbrequire(ok);
 
   if (refToBlock(signal->getSendersBlockRef()) == DBLQH) {
     jam();
     // prepare table in DBTC
     TcSchVerReq * req = (TcSchVerReq*)signal->getDataPtr();
-    req->tableId = tabPtr.i;
+    req->tableId = createTabPtr.p->m_request.tableId;
     req->tableVersion = tabPtr.p->tableVersion;
     req->tableLogged = (Uint32)!!(tabPtr.p->m_bits & TableRecord::TR_Logged);
     req->senderRef = reference();
@@ -6729,7 +6788,8 @@ Dbdict::execTAB_COMMITCONF(Signal* signa
     {
       jam();
       TableRecordPtr basePtr;
-      c_tableRecordPool.getPtr(basePtr, tabPtr.p->primaryTableId);
+      bool ok = find_object(basePtr, tabPtr.p->primaryTableId);
+      ndbrequire(ok);
       req->userDefinedPartition = (basePtr.p->fragmentType == DictTabInfo::UserDefined);
     }
 
@@ -6743,7 +6803,7 @@ Dbdict::execTAB_COMMITCONF(Signal* signa
     // commit table in DBTC
     signal->theData[0] = op_ptr.p->op_key;
     signal->theData[1] = reference();
-    signal->theData[2] = tabPtr.i;
+    signal->theData[2] = createTabPtr.p->m_request.tableId;
 
     sendSignal(DBTC_REF, GSN_TAB_COMMITREQ, signal, 3, JBB);
     return;
@@ -6808,7 +6868,8 @@ Dbdict::createTable_commit(Signal* signa
 
   Uint32 tableId = createTabPtr.p->m_request.tableId;
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, tableId);
+  bool ok = find_object(tabPtr, tableId);
+  ndbrequire(ok);
 
   D("createTable_commit" << *op_ptr.p);
 
@@ -6820,9 +6881,10 @@ Dbdict::createTable_commit(Signal* signa
   if (DictTabInfo::isIndex(tabPtr.p->tableType))
   {
     TableRecordPtr basePtr;
-    c_tableRecordPool.getPtr(basePtr, tabPtr.p->primaryTableId);
+    bool ok = find_object(basePtr, tabPtr.p->primaryTableId);
+    ndbrequire(ok);
 
-    LocalTableRecord_list list(c_tableRecordPool, basePtr.p->m_indexes);
+    LocalTableRecord_list list(c_tableRecordPool_, basePtr.p->m_indexes);
     list.add(tabPtr);
   }
 }
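[Editorial note: the recurring change in these hunks replaces direct pool access keyed by table id (`c_tableRecordPool.getPtr(ptr, tableId)`) with a hash lookup plus `ndbrequire(ok)`, since after this series `tableId == ptr.i` no longer holds. A minimal standalone sketch of that pattern, with simplified stand-in types (the real `find_object`/`DictObject` machinery is more involved):]

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Simplified sketch, not the real Dbdict types: records live in a pool
// addressed by ptr.i, while schema objects are located by their external
// id through a hash, so id == ptr.i no longer holds.
struct TableRecord { uint32_t tableVersion; };

struct Pool {
  std::vector<TableRecord> records;
  TableRecord* getPtr(uint32_t i) { return &records.at(i); }
};

struct Dict {
  Pool pool;
  std::unordered_map<uint32_t, uint32_t> id_to_ptr_i;  // object id -> pool index

  // find_object-style lookup: true on success, fills ptr_i and rec.
  bool find_object(uint32_t id, uint32_t& ptr_i, TableRecord*& rec) {
    auto it = id_to_ptr_i.find(id);
    if (it == id_to_ptr_i.end())
      return false;
    ptr_i = it->second;
    rec = pool.getPtr(ptr_i);
    return true;
  }
};
```

[The `ndbrequire(ok)` calls in the patch correspond to asserting the lookup succeeded on paths where the object must exist; paths where it may legitimately be gone (abort handlers) branch on `ok` instead.]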
@@ -6868,7 +6930,8 @@ Dbdict::createTab_alterComplete(Signal*
   const CreateTabReq* impl_req = &createTabPtr.p->m_request;
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, impl_req->tableId);
+  bool ok = find_object(tabPtr, impl_req->tableId);
+  ndbrequire(ok);
 
   D("createTab_alterComplete" << *op_ptr.p);
 
@@ -6930,13 +6993,17 @@ Dbdict::createTable_abortParse(Signal* s
     }
 
     TableRecordPtr tabPtr;
-    c_tableRecordPool.getPtr(tabPtr, tableId);
+    bool ok = find_object(tabPtr, tableId);
 
     // any link was to a new object
     if (hasDictObject(op_ptr)) {
       jam();
       unlinkDictObject(op_ptr);
-      releaseTableObject(tableId, true);
+      if (ok)
+      {
+        jam();
+        releaseTableObject(tabPtr.i, true);
+      }
     }
   } while (0);
 
@@ -6955,13 +7022,14 @@ Dbdict::createTable_abortPrepare(Signal*
   D("createTable_abortPrepare" << *op_ptr.p);
 
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, impl_req->tableId);
+  bool ok = find_object(tabPtr, impl_req->tableId);
+  ndbrequire(ok);
 
   // create drop table operation  wl3600_todo must pre-allocate
 
   SchemaOpPtr oplnk_ptr;
   DropTableRecPtr dropTabPtr;
-  bool ok = seizeLinkedSchemaOp(op_ptr, oplnk_ptr, dropTabPtr);
+  ok = seizeLinkedSchemaOp(op_ptr, oplnk_ptr, dropTabPtr);
   ndbrequire(ok);
 
   DropTabReq* aux_impl_req = &dropTabPtr.p->m_request;
@@ -6992,7 +7060,7 @@ Dbdict::createTable_abortPrepare(Signal*
 
   if (tabPtr.p->m_tablespace_id != RNIL) {
     FilegroupPtr ptr;
-    ndbrequire(c_filegroup_hash.find(ptr, tabPtr.p->m_tablespace_id));
+    ndbrequire(find_object(ptr, tabPtr.p->m_tablespace_id));
     decrease_ref_count(ptr.p->m_obj_ptr_i);
   }
 }
@@ -7017,10 +7085,12 @@ Dbdict::createTable_abortLocalConf(Signa
   Uint32 tableId = impl_req->tableId;
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, tableId);
-
-  releaseTableObject(tableId);
-
+  bool ok = find_object(tablePtr, tableId);
+  if (ok)
+  {
+    jam();
+    releaseTableObject(tablePtr.i);
+  }
   createTabPtr.p->m_abortPrepareDone = true;
   sendTransConf(signal, op_ptr);
 }
@@ -7062,18 +7132,20 @@ void Dbdict::execCREATE_TABLE_REF(Signal
   handleDictRef(signal, ref);
 }
 
-void Dbdict::releaseTableObject(Uint32 tableId, bool removeFromHash)
+void Dbdict::releaseTableObject(Uint32 table_ptr_i, bool removeFromHash)
 {
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, tableId);
+  c_tableRecordPool_.getPtr(tablePtr, table_ptr_i);
   if (removeFromHash)
   {
     jam();
+    ndbrequire(tablePtr.p->m_obj_ptr_i != RNIL);
     release_object(tablePtr.p->m_obj_ptr_i);
     tablePtr.p->m_obj_ptr_i = RNIL;
   }
   else
   {
+    ndbrequire(tablePtr.p->m_obj_ptr_i == RNIL);
     LocalRope tmp(c_rope_pool, tablePtr.p->tableName);
     tmp.erase();
   }
@@ -7115,9 +7187,12 @@ void Dbdict::releaseTableObject(Uint32 t
       {
         jam();
         TriggerRecordPtr triggerPtr;
-        c_triggerRecordPool.getPtr(triggerPtr, triggerId);
-        triggerPtr.p->triggerState = TriggerRecord::TS_NOT_DEFINED;
-        release_object(triggerPtr.p->m_obj_ptr_i);
+        bool ok = find_object(triggerPtr, triggerId);
+        if (ok)
+        {
+          release_object(triggerPtr.p->m_obj_ptr_i);
+          c_triggerRecordPool_.release(triggerPtr);
+        }
       }
 
       triggerId = tablePtr.p->m_upgrade_trigger_handling.deleteTriggerId;
@@ -7125,13 +7200,16 @@ void Dbdict::releaseTableObject(Uint32 t
       {
         jam();
         TriggerRecordPtr triggerPtr;
-        c_triggerRecordPool.getPtr(triggerPtr, triggerId);
-        triggerPtr.p->triggerState = TriggerRecord::TS_NOT_DEFINED;
-        release_object(triggerPtr.p->m_obj_ptr_i);
+        bool ok = find_object(triggerPtr, triggerId);
+        if (ok)
+        {
+          release_object(triggerPtr.p->m_obj_ptr_i);
+          c_triggerRecordPool_.release(triggerPtr);
+        }
       }
     }
   }
-
+  c_tableRecordPool_.release(tablePtr);
 }//releaseTableObject()
 
 // CreateTable: END
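[Editorial note: `releaseTableObject` now takes the pool index rather than the table id, and the two new `ndbrequire` lines encode an invariant: a record still linked to its DictObject has `m_obj_ptr_i` set, and one never linked (or already unlinked) has it equal to `RNIL`. A hedged sketch of that invariant with simplified names and `assert` standing in for `ndbrequire`:]

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: RNIL value and record layout are stand-ins.
const uint32_t RNIL = 0xffffff00;

struct TableRecord { uint32_t m_obj_ptr_i = RNIL; };

void releaseTableObject(TableRecord& rec, bool removeFromHash) {
  if (removeFromHash) {
    assert(rec.m_obj_ptr_i != RNIL);  // must still be linked to a DictObject
    rec.m_obj_ptr_i = RNIL;           // release_object(...) equivalent
  } else {
    assert(rec.m_obj_ptr_i == RNIL);  // must already be unlinked
  }
  // in the real code the record is then returned to the pool
  // (c_tableRecordPool_.release), which the old version never did
}
```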
@@ -7243,14 +7321,31 @@ Dbdict::dropTable_parse(Signal* signal,
   getOpRec(op_ptr, dropTabPtr);
   DropTabReq* impl_req = &dropTabPtr.p->m_request;
   Uint32 tableId = impl_req->tableId;
+  Uint32 err;
 
   TableRecordPtr tablePtr;
-  if (!(tableId < c_tableRecordPool.getSize())) {
+  if (!(tableId < c_noOfMetaTables)) {
     jam();
     setError(error, DropTableRef::NoSuchTable, __LINE__);
     return;
   }
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+
+  err = check_read_obj(impl_req->tableId, trans_ptr.p->m_transId);
+  if (err)
+  {
+    jam();
+    setError(error, err, __LINE__);
+    return;
+  }
+
+  bool ok = find_object(tablePtr, impl_req->tableId);
+  if (!ok)
+  {
+    jam();
+    setError(error, GetTabInfoRef::TableNotDefined, __LINE__);
+    return;
+  }
+
 
   // check version first (api will retry)
   if (tablePtr.p->tableVersion != impl_req->tableVersion) {
@@ -7266,7 +7361,7 @@ Dbdict::dropTable_parse(Signal* signal,
     return;
   }
 
-  if (check_write_obj(tablePtr.i,
+  if (check_write_obj(impl_req->tableId,
                       trans_ptr.p->m_transId,
                       SchemaFile::SF_DROP, error))
   {
@@ -7292,7 +7387,7 @@ Dbdict::dropTable_parse(Signal* signal,
   SchemaFile::TableEntry te; te.init();
   te.m_tableState = SchemaFile::SF_DROP;
   te.m_transId = trans_ptr.p->m_transId;
-  Uint32 err = trans_log_schema_op(op_ptr, tableId, &te);
+  err = trans_log_schema_op(op_ptr, tableId, &te);
   if (err)
   {
     jam();
@@ -7389,7 +7484,8 @@ Dbdict::dropTable_backup_mutex_locked(Si
   const DropTabReq* impl_req = &dropTabPtr.p->m_request;
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+  bool ok = find_object(tablePtr, impl_req->tableId);
+  ndbrequire(ok);
 
   Mutex mutex(signal, c_mutexMgr, dropTabPtr.p->m_define_backup_mutex);
   mutex.unlock(); // ignore response
@@ -7417,12 +7513,13 @@ Dbdict::dropTable_commit(Signal* signal,
   D("dropTable_commit" << *op_ptr.p);
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, dropTabPtr.p->m_request.tableId);
+  bool ok = find_object(tablePtr, dropTabPtr.p->m_request.tableId);
+  ndbrequire(ok);
 
   if (tablePtr.p->m_tablespace_id != RNIL)
   {
     FilegroupPtr ptr;
-    ndbrequire(c_filegroup_hash.find(ptr, tablePtr.p->m_tablespace_id));
+    ndbrequire(find_object(ptr, tablePtr.p->m_tablespace_id));
     decrease_ref_count(ptr.p->m_obj_ptr_i);
   }
 
@@ -7432,22 +7529,22 @@ Dbdict::dropTable_commit(Signal* signal,
     char buf[1024];
     LocalRope name(c_rope_pool, tablePtr.p->tableName);
     name.copy(buf);
-    g_eventLogger->info("Dbdict: drop name=%s,id=%u,obj_id=%u", buf, tablePtr.i,
+    g_eventLogger->info("Dbdict: drop name=%s,id=%u,obj_id=%u", buf, dropTabPtr.p->m_request.tableId,
                         tablePtr.p->m_obj_ptr_i);
   }
 
   send_event(signal, trans_ptr,
              NDB_LE_DropSchemaObject,
-             tablePtr.i,
+             dropTabPtr.p->m_request.tableId,
              tablePtr.p->tableVersion,
              tablePtr.p->tableType);
 
   if (DictTabInfo::isIndex(tablePtr.p->tableType))
   {
     TableRecordPtr basePtr;
-    c_tableRecordPool.getPtr(basePtr, tablePtr.p->primaryTableId);
-
-    LocalTableRecord_list list(c_tableRecordPool, basePtr.p->m_indexes);
+    bool ok = find_object(basePtr, tablePtr.p->primaryTableId);
+    ndbrequire(ok);
+    LocalTableRecord_list list(c_tableRecordPool_, basePtr.p->m_indexes);
     list.remove(tablePtr);
   }
   dropTabPtr.p->m_block = 0;
@@ -7593,9 +7690,6 @@ Dbdict::dropTable_complete(Signal* signa
   DropTableRecPtr dropTabPtr;
   getOpRec(op_ptr, dropTabPtr);
 
-  TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, dropTabPtr.p->m_request.tableId);
-
   dropTabPtr.p->m_block = 0;
   dropTabPtr.p->m_blockNo[0] = DBTC;
   dropTabPtr.p->m_blockNo[1] = DBLQH; // wait usage + LCP
@@ -7622,9 +7716,6 @@ Dbdict::dropTable_complete_nextStep(Sign
    */
   ndbrequire(!hasError(op_ptr.p->m_error));
 
-  TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
-
   Uint32 block = dropTabPtr.p->m_block;
   Uint32 blockNo = dropTabPtr.p->m_blockNo[block];
   D("dropTable_complete_nextStep" << hex << V(blockNo) << *op_ptr.p);
@@ -7707,7 +7798,12 @@ Dbdict::dropTable_complete_done(Signal*
   Uint32 tableId = dropTabPtr.p->m_request.tableId;
 
   unlinkDictObject(op_ptr);
-  releaseTableObject(tableId);
+  TableRecordPtr tablePtr;
+  bool ok = find_object(tablePtr, tableId);
+  if (ok)
+  {
+    releaseTableObject(tablePtr.i);
+  }
 
   // inform SUMA
   {
@@ -7916,6 +8012,7 @@ Dbdict::alterTable_parse(Signal* signal,
   AlterTableRecPtr alterTabPtr;
   getOpRec(op_ptr, alterTabPtr);
   AlterTabReq* impl_req = &alterTabPtr.p->m_request;
+  Uint32 err;
 
   if (AlterTableReq::getReorgSubOp(impl_req->changeMask))
   {
@@ -7934,12 +8031,19 @@ Dbdict::alterTable_parse(Signal* signal,
 
   // get table definition
   TableRecordPtr tablePtr;
-  if (!(impl_req->tableId < c_tableRecordPool.getSize())) {
+  if (!(impl_req->tableId < c_noOfMetaTables)) {
     jam();
     setError(error, AlterTableRef::NoSuchTable, __LINE__);
     return;
   }
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+
+  bool ok = find_object(tablePtr, impl_req->tableId);
+  if (!ok)
+  {
+    jam();
+    setError(error, GetTabInfoRef::TableNotDefined, __LINE__);
+    return;
+  }
 
   if (tablePtr.p->m_read_locked)
   {
@@ -7948,7 +8052,7 @@ Dbdict::alterTable_parse(Signal* signal,
     return;
   }
 
-  if (check_write_obj(tablePtr.i, trans_ptr.p->m_transId,
+  if (check_write_obj(impl_req->tableId, trans_ptr.p->m_transId,
                       SchemaFile::SF_ALTER, error))
   {
     jam();
@@ -8310,7 +8414,7 @@ Dbdict::alterTable_parse(Signal* signal,
   te.m_gcp = 0;
   te.m_transId = trans_ptr.p->m_transId;
 
-  Uint32 err = trans_log_schema_op(op_ptr, impl_req->tableId, &te);
+  err = trans_log_schema_op(op_ptr, impl_req->tableId, &te);
   if (err)
   {
     jam();
@@ -8329,10 +8433,10 @@ Dbdict::check_supported_reorg(Uint32 org
   }
 
   HashMapRecordPtr orgmap_ptr;
-  ndbrequire(c_hash_map_hash.find(orgmap_ptr, org_map_id));
+  ndbrequire(find_object(orgmap_ptr, org_map_id));
 
   HashMapRecordPtr newmap_ptr;
-  ndbrequire(c_hash_map_hash.find(newmap_ptr, new_map_id));
+  ndbrequire(find_object(newmap_ptr, new_map_id));
 
   Ptr<Hash2FragmentMap> orgptr;
   g_hash_map.getPtr(orgptr, orgmap_ptr.p->m_map_ptr_i);
@@ -8429,8 +8533,9 @@ Dbdict::alterTable_subOps(Signal* signal
       jam();
       TableRecordPtr tabPtr;
       TableRecordPtr indexPtr;
-      c_tableRecordPool.getPtr(tabPtr, impl_req->tableId);
-      LocalTableRecord_list list(c_tableRecordPool, tabPtr.p->m_indexes);
+      bool ok = find_object(tabPtr, impl_req->tableId);
+      ndbrequire(ok);
+      LocalTableRecord_list list(c_tableRecordPool_, tabPtr.p->m_indexes);
       Uint32 ptrI = alterTabPtr.p->m_sub_add_frag_index_ptr;
 
       if (ptrI == RNIL)
@@ -8577,7 +8682,8 @@ Dbdict::alterTable_toAlterIndex(Signal*
   SchemaTransPtr trans_ptr = op_ptr.p->m_trans_ptr;
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, alterTabPtr.p->m_sub_add_frag_index_ptr);
+  c_tableRecordPool_.getPtr(indexPtr, alterTabPtr.p->m_sub_add_frag_index_ptr);
+  ndbrequire(!indexPtr.isNull());
 
   AlterIndxReq* req = (AlterIndxReq*)signal->getDataPtrSend();
   req->clientRef = reference();
@@ -8704,8 +8810,6 @@ Dbdict::alterTable_toCreateTrigger(Signa
   AlterTableRecPtr alterTablePtr;
   getOpRec(op_ptr, alterTablePtr);
   const AlterTabReq* impl_req = &alterTablePtr.p->m_request;
-  TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
 
   const TriggerTmpl& triggerTmpl = g_reorgTriggerTmpl[0];
 
@@ -8767,7 +8871,8 @@ Dbdict::alterTable_toCopyData(Signal* si
   getOpRec(op_ptr, alterTablePtr);
   const AlterTabReq* impl_req = &alterTablePtr.p->m_request;
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+  bool ok = find_object(tablePtr, impl_req->tableId);
+  ndbrequire(ok);
 
   CopyDataReq* req = (CopyDataReq*)signal->getDataPtrSend();
 
@@ -8942,7 +9047,8 @@ Dbdict::alterTable_backup_mutex_locked(S
   const AlterTabReq* impl_req = &alterTabPtr.p->m_request;
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+  bool ok = find_object(tablePtr, impl_req->tableId);
+  ndbrequire(ok);
 
   Mutex mutex(signal, c_mutexMgr, alterTabPtr.p->m_define_backup_mutex);
   mutex.unlock(); // ignore response
@@ -9072,7 +9178,7 @@ Dbdict::alterTable_toLocal(Signal* signa
     {
       jam();
       HashMapRecordPtr hm_ptr;
-      ndbrequire(c_hash_map_hash.find(hm_ptr,
+      ndbrequire(find_object(hm_ptr,
                                       alterTabPtr.p->m_newTablePtr.p->hashMapObjectId));
       req->new_map_ptr_i = hm_ptr.p->m_map_ptr_i;
     }
@@ -9143,7 +9249,8 @@ Dbdict::alterTable_commit(Signal* signal
   D("alterTable_commit" << *op_ptr.p);
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+  bool ok = find_object(tablePtr, impl_req->tableId);
+  ndbrequire(ok);
 
   if (op_ptr.p->m_sections)
   {
@@ -9177,7 +9284,7 @@ Dbdict::alterTable_commit(Signal* signal
       c_obj_pool.getPtr(obj_ptr, tablePtr.p->m_obj_ptr_i);
 
       // remove old name from hash
-      c_obj_hash.remove(obj_ptr);
+      c_obj_name_hash.remove(obj_ptr);
 
       // save old name and replace it by new
       bool ok =
@@ -9187,7 +9294,7 @@ Dbdict::alterTable_commit(Signal* signal
 
       // add new name to object hash
       obj_ptr.p->m_name = tablePtr.p->tableName;
-      c_obj_hash.add(obj_ptr);
+      c_obj_name_hash.add(obj_ptr);
     }
 
     if (AlterTableReq::getFrmFlag(changeMask))
@@ -9389,7 +9496,8 @@ Dbdict::alterTable_fromCommitComplete(Si
 
   const Uint32 tableId = impl_req->tableId;
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, tableId);
+  bool ok = find_object(tablePtr, tableId);
+  ndbrequire(ok);
 
   // inform Suma so it can send events to any subscribers of the table
   {
@@ -9774,7 +9882,7 @@ void Dbdict::execGET_TABLEDID_REQ(Signal
   }
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, obj_ptr_p->m_id);
+  c_tableRecordPool_.getPtr(tablePtr, obj_ptr_p->m_object_ptr_i);
 
   GetTableIdConf * conf = (GetTableIdConf *)req;
   conf->tableId = tablePtr.p->tableId;
@@ -10091,9 +10199,9 @@ void Dbdict::sendOLD_LIST_TABLES_CONF(Si
   conf->counter = 0;
   Uint32 pos = 0;
 
-  DictObject_hash::Iterator iter;
-  bool ok = c_obj_hash.first(iter);
-  for(; ok; ok = c_obj_hash.next(iter)){
+  DictObjectName_hash::Iterator iter;
+  bool ok = c_obj_name_hash.first(iter);
+  for (; ok; ok = c_obj_name_hash.next(iter)){
     Uint32 type = iter.curr.p->m_type;
     if ((reqTableType != (Uint32)0) && (reqTableType != type))
       continue;
@@ -10103,19 +10211,19 @@ void Dbdict::sendOLD_LIST_TABLES_CONF(Si
 
     TableRecordPtr tablePtr;
     if (DictTabInfo::isTable(type) || DictTabInfo::isIndex(type)){
-      c_tableRecordPool.getPtr(tablePtr, iter.curr.p->m_id);
+      c_tableRecordPool_.getPtr(tablePtr, iter.curr.p->m_object_ptr_i);
 
       if(reqListIndexes && (reqTableId != tablePtr.p->primaryTableId))
 	continue;
 
       conf->tableData[pos] = 0;
-      conf->setTableId(pos, tablePtr.i); // id
+      conf->setTableId(pos, iter.curr.p->m_id); // id
       conf->setTableType(pos, type); // type
       // state
 
       if(DictTabInfo::isTable(type))
       {
-        SchemaFile::TableEntry * te = getTableEntry(xsf, tablePtr.i);
+        SchemaFile::TableEntry * te = getTableEntry(xsf, iter.curr.p->m_id);
         switch(te->m_tableState){
         case SchemaFile::SF_CREATE:
           jam();
@@ -10183,24 +10291,30 @@ void Dbdict::sendOLD_LIST_TABLES_CONF(Si
     }
     if(DictTabInfo::isTrigger(type)){
       TriggerRecordPtr triggerPtr;
-      c_triggerRecordPool.getPtr(triggerPtr, iter.curr.p->m_id);
-
+      bool ok = find_object(triggerPtr, iter.curr.p->m_id);
       conf->tableData[pos] = 0;
-      conf->setTableId(pos, triggerPtr.i);
+      conf->setTableId(pos, iter.curr.p->m_id);
       conf->setTableType(pos, type);
-      switch (triggerPtr.p->triggerState) {
-      case TriggerRecord::TS_DEFINING:
-	conf->setTableState(pos, DictTabInfo::StateBuilding);
-	break;
-      case TriggerRecord::TS_OFFLINE:
-	conf->setTableState(pos, DictTabInfo::StateOffline);
-	break;
-      case TriggerRecord::TS_ONLINE:
-	conf->setTableState(pos, DictTabInfo::StateOnline);
-	break;
-      default:
-	conf->setTableState(pos, DictTabInfo::StateBroken);
-	break;
+      if (!ok)
+      {
+        conf->setTableState(pos, DictTabInfo::StateBroken);
+      }
+      else
+      {
+        switch (triggerPtr.p->triggerState) {
+        case TriggerRecord::TS_DEFINING:
+          conf->setTableState(pos, DictTabInfo::StateBuilding);
+          break;
+        case TriggerRecord::TS_OFFLINE:
+          conf->setTableState(pos, DictTabInfo::StateOffline);
+          break;
+        case TriggerRecord::TS_ONLINE:
+          conf->setTableState(pos, DictTabInfo::StateOnline);
+          break;
+        default:
+          conf->setTableState(pos, DictTabInfo::StateBroken);
+          break;
+        }
       }
       conf->setTableStore(pos, DictTabInfo::StoreNotLogged);
       pos++;
@@ -10279,8 +10393,8 @@ void Dbdict::sendLIST_TABLES_CONF(Signal
   XSchemaFile * xsf = &c_schemaFile[SchemaRecord::NEW_SCHEMA_FILE];
   NodeReceiverGroup rg(senderRef);
 
-  DictObject_hash::Iterator iter;
-  bool done = !c_obj_hash.first(iter);
+  DictObjectName_hash::Iterator iter;
+  bool done = !c_obj_name_hash.first(iter);
 
   if (done)
   {
@@ -10326,18 +10440,18 @@ void Dbdict::sendLIST_TABLES_CONF(Signal
 
     TableRecordPtr tablePtr;
     if (DictTabInfo::isTable(type) || DictTabInfo::isIndex(type)){
-      c_tableRecordPool.getPtr(tablePtr, iter.curr.p->m_id);
+      c_tableRecordPool_.getPtr(tablePtr, iter.curr.p->m_object_ptr_i);
 
       if(reqListIndexes && (reqTableId != tablePtr.p->primaryTableId))
 	goto flush;
 
       ltd.requestData = 0; // clear
-      ltd.setTableId(tablePtr.i); // id
+      ltd.setTableId(iter.curr.p->m_id); // id
       ltd.setTableType(type); // type
       // state
 
       if(DictTabInfo::isTable(type)){
-        SchemaFile::TableEntry * te = getTableEntry(xsf, tablePtr.i);
+        SchemaFile::TableEntry * te = getTableEntry(xsf, iter.curr.p->m_id);
         switch(te->m_tableState){
         case SchemaFile::SF_CREATE:
           jam();
@@ -10404,24 +10518,31 @@ void Dbdict::sendLIST_TABLES_CONF(Signal
     }
     if(DictTabInfo::isTrigger(type)){
       TriggerRecordPtr triggerPtr;
-      c_triggerRecordPool.getPtr(triggerPtr, iter.curr.p->m_id);
+      bool ok = find_object(triggerPtr, iter.curr.p->m_id);
 
       ltd.requestData = 0;
-      ltd.setTableId(triggerPtr.i);
+      ltd.setTableId(iter.curr.p->m_id);
       ltd.setTableType(type);
-      switch (triggerPtr.p->triggerState) {
-      case TriggerRecord::TS_DEFINING:
-	ltd.setTableState(DictTabInfo::StateBuilding);
-	break;
-      case TriggerRecord::TS_OFFLINE:
-	ltd.setTableState(DictTabInfo::StateOffline);
-	break;
-      case TriggerRecord::TS_ONLINE:
-	ltd.setTableState(DictTabInfo::StateOnline);
-	break;
-      default:
-	ltd.setTableState(DictTabInfo::StateBroken);
-	break;
+      if (!ok)
+      {
+        ltd.setTableState(DictTabInfo::StateBroken);
+      }
+      else
+      {
+        switch (triggerPtr.p->triggerState) {
+        case TriggerRecord::TS_DEFINING:
+          ltd.setTableState(DictTabInfo::StateBuilding);
+          break;
+        case TriggerRecord::TS_OFFLINE:
+          ltd.setTableState(DictTabInfo::StateOffline);
+          break;
+        case TriggerRecord::TS_ONLINE:
+          ltd.setTableState(DictTabInfo::StateOnline);
+          break;
+        default:
+          ltd.setTableState(DictTabInfo::StateBroken);
+          break;
+        }
       }
       ltd.setTableStore(DictTabInfo::StoreNotLogged);
     }
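[Editorial note: both LIST_TABLES paths now wrap the trigger-state switch in an existence check, so a trigger id with no record is reported as `StateBroken` instead of dereferencing a stale pool slot. A compact sketch of the mapping, with simplified stand-in enums:]

```cpp
#include <cassert>

// Stand-in enums; the real values come from TriggerRecord and DictTabInfo.
enum TriggerState { TS_DEFINING, TS_OFFLINE, TS_ONLINE, TS_OTHER };
enum TableState { StateBuilding, StateOffline, StateOnline, StateBroken };

TableState listState(bool found, TriggerState s) {
  if (!found)
    return StateBroken;  // lookup failed: report broken, don't crash
  switch (s) {
  case TS_DEFINING: return StateBuilding;
  case TS_OFFLINE:  return StateOffline;
  case TS_ONLINE:   return StateOnline;
  default:          return StateBroken;
  }
}
```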
@@ -10464,7 +10585,7 @@ flush:
     Uint32 tableDataWords = tableDataWriter.getWordsUsed();
     Uint32 tableNameWords = tableNamesWriter.getWordsUsed();
 
-    done = !c_obj_hash.next(iter);
+    done = !c_obj_name_hash.next(iter);
     if ((tableDataWords + tableNameWords) > fragSize || done)
     {
       jam();
@@ -10746,21 +10867,21 @@ Dbdict::createIndex_parse(Signal* signal
   // check primary table
   TableRecordPtr tablePtr;
   {
-    if (!(impl_req->tableId < c_tableRecordPool.getSize())) {
+    if (!(impl_req->tableId < c_noOfMetaTables)) {
       jam();
       setError(error, CreateIndxRef::InvalidPrimaryTable, __LINE__);
       return;
     }
-    c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+    bool ok = find_object(tablePtr, impl_req->tableId);
+    if (!ok || !tablePtr.p->isTable()) {
 
-    if (!tablePtr.p->isTable()) {
       jam();
       setError(error, CreateIndxRef::InvalidPrimaryTable, __LINE__);
       return;
     }
 
     Uint32 err;
-    if ((err = check_read_obj(tablePtr.i, trans_ptr.p->m_transId)))
+    if ((err = check_read_obj(impl_req->tableId, trans_ptr.p->m_transId)))
     {
       jam();
       setError(error, err, __LINE__);
@@ -10898,7 +11019,7 @@ Dbdict::createIndex_parse(Signal* signal
   if (master)
   {
     jam();
-    impl_req->indexId = getFreeObjId(0);
+    impl_req->indexId = getFreeObjId();
   }
 
   if (impl_req->indexId == RNIL)
@@ -10908,7 +11029,7 @@ Dbdict::createIndex_parse(Signal* signal
     return;
   }
 
-  if (impl_req->indexId >= c_tableRecordPool.getSize())
+  if (impl_req->indexId >= c_noOfMetaTables)
   {
     jam();
     setError(error, CreateTableRef::NoMoreTableRecords, __LINE__);
@@ -10966,8 +11087,8 @@ Dbdict::createIndex_toCreateTable(Signal
   getOpRec(op_ptr, createIndexPtr);
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, createIndexPtr.p->m_request.tableId);
-  ndbrequire(tablePtr.i == tablePtr.p->tableId);
+  bool ok = find_object(tablePtr, createIndexPtr.p->m_request.tableId);
+  ndbrequire(ok);
 
   // signal data writer
   Uint32* wbuffer = &c_indexPage.word[0];
@@ -11439,21 +11560,30 @@ Dbdict::dropIndex_parse(Signal* signal,
                         SectionHandle& handle, ErrorInfo& error)
 {
   D("dropIndex_parse" << V(op_ptr.i) << *op_ptr.p);
+  jam();
 
   SchemaTransPtr trans_ptr = op_ptr.p->m_trans_ptr;
   DropIndexRecPtr dropIndexPtr;
   getOpRec(op_ptr, dropIndexPtr);
   DropIndxImplReq* impl_req = &dropIndexPtr.p->m_request;
 
-  TableRecordPtr indexPtr;
-  if (!(impl_req->indexId < c_tableRecordPool.getSize())) {
+  if (!(impl_req->indexId < c_noOfMetaTables)) {
     jam();
     setError(error, DropIndxRef::IndexNotFound, __LINE__);
     return;
   }
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
 
-  if (!indexPtr.p->isIndex())
+  Uint32 err = check_read_obj(impl_req->indexId, trans_ptr.p->m_transId);
+  if (err)
+  {
+    jam();
+    setError(error, err, __LINE__);
+    return;
+  }
+
+  TableRecordPtr indexPtr;
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  if (!ok || !indexPtr.p->isIndex())
   {
     jam();
     setError(error, DropIndxRef::NotAnIndex, __LINE__);
@@ -11467,16 +11597,21 @@ Dbdict::dropIndex_parse(Signal* signal,
     return;
   }
 
-  if (check_write_obj(indexPtr.i, trans_ptr.p->m_transId,
+  if (check_write_obj(impl_req->indexId, trans_ptr.p->m_transId,
                       SchemaFile::SF_DROP, error))
   {
     jam();
     return;
   }
 
-  ndbrequire(indexPtr.p->primaryTableId != RNIL);
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, indexPtr.p->primaryTableId);
+  ok = find_object(tablePtr, indexPtr.p->primaryTableId);
+  if (!ok)
+  {
+    jam();
+    setError(error, CreateIndxRef::InvalidPrimaryTable, __LINE__);
+    return;
+  }
 
   // master sets primary table, slave verifies it agrees
   if (master)
@@ -11928,12 +12063,26 @@ Dbdict::alterIndex_parse(Signal* signal,
   AlterIndxImplReq* impl_req = &alterIndexPtr.p->m_request;
 
   TableRecordPtr indexPtr;
-  if (!(impl_req->indexId < c_tableRecordPool.getSize())) {
+  if (!(impl_req->indexId < c_noOfMetaTables)) {
     jam();
     setError(error, AlterIndxRef::IndexNotFound, __LINE__);
     return;
   }
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  if (check_read_obj(impl_req->indexId, trans_ptr.p->m_transId) == GetTabInfoRef::TableNotDefined)
+  {
+    jam();
+    setError(error, GetTabInfoRef::TableNotDefined, __LINE__);
+    return;
+  }
+  jam();
+
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  if (!ok)
+  {
+    jam();
+    setError(error, GetTabInfoRef::TableNotDefined, __LINE__);
+    return;
+  }
 
   // get name for system index check later
   char indexName[MAX_TAB_NAME_SIZE];
@@ -11950,7 +12099,7 @@ Dbdict::alterIndex_parse(Signal* signal,
     return;
   }
 
-  if (check_write_obj(indexPtr.i, trans_ptr.p->m_transId,
+  if (check_write_obj(impl_req->indexId, trans_ptr.p->m_transId,
                       SchemaFile::SF_ALTER, error))
   {
     jam();
@@ -11986,7 +12135,8 @@ Dbdict::alterIndex_parse(Signal* signal,
 
   ndbrequire(indexPtr.p->primaryTableId != RNIL);
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, indexPtr.p->primaryTableId);
+  ok = find_object(tablePtr, indexPtr.p->primaryTableId);
+  ndbrequire(ok); // TODO:msundell set error
 
   // master sets primary table, participant verifies it agrees
   if (master)
@@ -12113,7 +12263,9 @@ void
 Dbdict::set_index_stat_frag(Signal* signal, TableRecordPtr indexPtr)
 {
   jam();
-  const Uint32 indexId = indexPtr.i;
+  DictObjectPtr index_obj_ptr;
+  c_obj_pool.getPtr(index_obj_ptr, indexPtr.p->m_obj_ptr_i);
+  const Uint32 indexId = index_obj_ptr.p->m_id;
   Uint32 err = get_fragmentation(signal, indexId);
   ndbrequire(err == 0);
   // format: R F { fragId node1 .. nodeR } x { F }
@@ -12124,7 +12276,7 @@ Dbdict::set_index_stat_frag(Signal* sign
   ndbrequire(noOfFragments != 0 && noOfReplicas != 0);
 
   // distribute by table and index id
-  const Uint32 value = indexPtr.p->primaryTableId + indexPtr.i;
+  const Uint32 value = indexPtr.p->primaryTableId + indexId;
   const Uint32 fragId = value % noOfFragments;
   const Uint32 fragIndex = 2 + (1 + noOfReplicas) * fragId;
   const Uint32 nodeIndex = value % noOfReplicas;
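[Editorial note: the stat-fragment choice distributes by table and index id; after this change `indexId` is the schema object id taken from the DictObject, not the pool index. The two derived values follow directly from the lines above and can be recomputed in isolation:]

```cpp
#include <cassert>
#include <cstdint>

// Recomputes fragId and nodeIndex exactly as in set_index_stat_frag:
// value = primaryTableId + indexId, then modulo fragment/replica counts.
struct StatFragChoice { uint32_t fragId; uint32_t nodeIndex; };

StatFragChoice chooseStatFrag(uint32_t primaryTableId, uint32_t indexId,
                              uint32_t noOfFragments, uint32_t noOfReplicas) {
  const uint32_t value = primaryTableId + indexId;
  return { value % noOfFragments, value % noOfReplicas };
}
```

[Using the object id here matters: a pool index can be reused across drops and re-creates, while the schema id keeps the distribution stable for a given index.]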
@@ -12144,8 +12296,6 @@ Dbdict::alterIndex_subOps(Signal* signal
   getOpRec(op_ptr, alterIndexPtr);
   const AlterIndxImplReq* impl_req = &alterIndexPtr.p->m_request;
   Uint32 requestType = impl_req->requestType;
-  TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
 
   // ops to create or drop triggers
   if (alterIndexPtr.p->m_sub_trigger == false)
@@ -12197,7 +12347,10 @@ Dbdict::alterIndex_subOps(Signal* signal
     return true;
   }
 
-  if (indexPtr.p->isOrderedIndex() &&
+  TableRecordPtr indexPtr;
+  bool ok = find_object(indexPtr, impl_req->indexId);
+
+  if (ok && indexPtr.p->isOrderedIndex() &&
       (!alterIndexPtr.p->m_sub_index_stat_dml ||
        !alterIndexPtr.p->m_sub_index_stat_mon)) {
     jam();
@@ -12223,7 +12376,8 @@ Dbdict::alterIndex_toCreateTrigger(Signa
   getOpRec(op_ptr, alterIndexPtr);
   const AlterIndxImplReq* impl_req = &alterIndexPtr.p->m_request;
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   const TriggerTmpl& triggerTmpl = alterIndexPtr.p->m_triggerTmpl[0];
 
@@ -12317,7 +12471,8 @@ Dbdict::alterIndex_toDropTrigger(Signal*
   const AlterIndxImplReq* impl_req = &alterIndexPtr.p->m_request;
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   //const TriggerTmpl& triggerTmpl = alterIndexPtr.p->m_triggerTmpl[0];
 
@@ -12480,7 +12635,8 @@ Dbdict::alterIndex_toIndexStat(Signal* s
   DictSignal::addRequestFlagsGlobal(requestInfo, op_ptr.p->m_requestInfo);
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   req->clientRef = reference();
   req->clientData = op_ptr.p->op_key;
@@ -12584,7 +12740,8 @@ Dbdict::alterIndex_prepare(Signal* signa
   Uint32 requestType = impl_req->requestType;
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   D("alterIndex_prepare" << *op_ptr.p);
 
@@ -12644,7 +12801,8 @@ Dbdict::alterIndex_toCreateLocal(Signal*
   const AlterIndxImplReq* impl_req = &alterIndexPtr.p->m_request;
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   D("alterIndex_toCreateLocal" << *op_ptr.p);
 
@@ -12677,9 +12835,6 @@ Dbdict::alterIndex_toDropLocal(Signal* s
   getOpRec(op_ptr, alterIndexPtr);
   const AlterIndxImplReq* impl_req = &alterIndexPtr.p->m_request;
 
-  TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
-
   D("alterIndex_toDropLocal" << *op_ptr.p);
 
   DropIndxImplReq* req = (DropIndxImplReq*)signal->getDataPtrSend();
@@ -12849,14 +13004,19 @@ Dbdict::alterIndex_abortParse(Signal* si
   D("alterIndex_abortParse" << *op_ptr.p);
 
   do {
-    if (!(impl_req->indexId < c_tableRecordPool.getSize())) {
+    if (!(impl_req->indexId < c_noOfMetaTables)) {
       jam();
       D("invalid index id" << V(indexId));
       break;
     }
 
     TableRecordPtr indexPtr;
-    c_tableRecordPool.getPtr(indexPtr, indexId);
+    bool ok = find_object(indexPtr, indexId);
+    if (!ok)
+    {
+      jam();
+      break;
+    }
 
     switch (requestType) {
     case AlterIndxImplReq::AlterIndexOnline:
@@ -13096,31 +13256,47 @@ Dbdict::buildIndex_parse(Signal* signal,
                          SchemaOpPtr op_ptr,
                          SectionHandle& handle, ErrorInfo& error)
 {
-  D("buildIndex_parse");
+  D("buildIndex_parse");
 
+  SchemaTransPtr trans_ptr = op_ptr.p->m_trans_ptr;
   BuildIndexRecPtr buildIndexPtr;
   getOpRec(op_ptr, buildIndexPtr);
   BuildIndxImplReq* impl_req = &buildIndexPtr.p->m_request;
+  Uint32 err;
 
   // get index
   TableRecordPtr indexPtr;
-  if (!(impl_req->indexId < c_tableRecordPool.getSize())) {
+  err = check_read_obj(impl_req->indexId, trans_ptr.p->m_transId);
+  if (err)
+  {
     jam();
-    setError(error, BuildIndxRef::IndexNotFound, __LINE__);
+    setError(error, err, __LINE__);
+    return;
+  }
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  if (!ok)
+  {
+    jam();
+    setError(error, GetTabInfoRef::TableNotDefined, __LINE__);
     return;
   }
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
 
   ndbrequire(indexPtr.p->primaryTableId == impl_req->tableId);
 
   // get primary table
   TableRecordPtr tablePtr;
-  if (!(impl_req->tableId < c_tableRecordPool.getSize())) {
+  if (!(impl_req->tableId < c_noOfMetaTables)) {
     jam();
     setError(error, BuildIndxRef::IndexNotFound, __LINE__);
     return;
   }
-  c_tableRecordPool.getPtr(tablePtr, impl_req->tableId);
+  ok = find_object(tablePtr, impl_req->tableId);
+  if (!ok)
+  {
+    jam();
+    setError(error, GetTabInfoRef::TableNotDefined, __LINE__);
+    return;
+  }
 
   // set attribute lists
   getIndexAttrList(indexPtr, buildIndexPtr.p->m_indexKeyList);
@@ -13392,9 +13568,6 @@ Dbdict::buildIndex_toDropConstraint(Sign
   getOpRec(op_ptr, buildIndexPtr);
   const BuildIndxImplReq* impl_req = &buildIndexPtr.p->m_request;
 
-  TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
-
   const TriggerTmpl& triggerTmpl = buildIndexPtr.p->m_triggerTmpl[0];
 
   DropTrigReq* req = (DropTrigReq*)signal->getDataPtrSend();
@@ -13479,9 +13652,6 @@ Dbdict::buildIndex_reply(Signal* signal,
 
   D("buildIndex_reply" << V(impl_req->indexId));
 
-  TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
-
   if (!hasError(error)) {
     BuildIndxConf* conf = (BuildIndxConf*)signal->getDataPtrSend();
     conf->senderRef = reference();
@@ -13544,7 +13714,8 @@ Dbdict:: buildIndex_toLocalBuild(Signal*
   SchemaTransPtr trans_ptr = op_ptr.p->m_trans_ptr;
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   D("buildIndex_toLocalBuild");
 
@@ -13645,7 +13816,8 @@ Dbdict::buildIndex_toLocalOnline(Signal*
   const BuildIndxImplReq* impl_req = &buildIndexPtr.p->m_request;
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   D("buildIndex_toLocalOnline");
 
@@ -13696,7 +13868,8 @@ Dbdict::buildIndex_fromLocalOnline(Signa
   const BuildIndxImplReq* impl_req = &buildIndexPtr.p->m_request;
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   D("buildIndex_fromLocalOnline");
 
@@ -13871,20 +14044,23 @@ Dbdict::indexStat_parse(Signal* signal,
 {
   D("indexStat_parse");
 
+  SchemaTransPtr trans_ptr = op_ptr.p->m_trans_ptr;
   IndexStatRecPtr indexStatPtr;
   getOpRec(op_ptr, indexStatPtr);
   IndexStatImplReq* impl_req = &indexStatPtr.p->m_request;
+  Uint32 err;
 
   // get index
   TableRecordPtr indexPtr;
-  if (!(impl_req->indexId < c_tableRecordPool.getSize())) {
+  err = check_read_obj(impl_req->indexId, trans_ptr.p->m_transId);
+  if (err)
+  {
     jam();
-    setError(error, IndexStatRef::InvalidIndex, __LINE__);
+    setError(error, err, __LINE__);
     return;
   }
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
-
-  if (!indexPtr.p->isOrderedIndex()) {
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  if (!ok || !indexPtr.p->isOrderedIndex()) {
     jam();
     setError(error, IndexStatRef::InvalidIndex, __LINE__);
     return;
@@ -14014,7 +14190,8 @@ Dbdict::indexStat_toIndexStat(Signal* si
   DictSignal::addRequestFlagsGlobal(requestInfo, op_ptr.p->m_requestInfo);
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok);
 
   req->clientRef = reference();
   req->clientData = op_ptr.p->op_key;
@@ -14075,9 +14252,6 @@ Dbdict::indexStat_reply(Signal* signal,
 
   D("indexStat_reply" << V(impl_req->indexId));
 
-  TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
-
   if (!hasError(error)) {
     IndexStatConf* conf = (IndexStatConf*)signal->getDataPtrSend();
     conf->senderRef = reference();
@@ -14134,8 +14308,8 @@ Dbdict::indexStat_toLocalStat(Signal* si
   D("indexStat_toLocalStat");
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
-  ndbrequire(indexPtr.p->isOrderedIndex());
+  bool ok = find_object(indexPtr, impl_req->indexId);
+  ndbrequire(ok && indexPtr.p->isOrderedIndex());
 
   Callback c = {
     safe_cast(&Dbdict::indexStat_fromLocalStat),
@@ -14333,7 +14507,7 @@ Dbdict::execINDEX_STAT_REP(Signal* signa
 
   // check
   TableRecordPtr indexPtr;
-  if (rep->indexId >= c_tableRecordPool.getSize()) {
+  if (rep->indexId >= c_noOfMetaTables) {
     jam();
     return;
   }
@@ -14343,7 +14517,12 @@ Dbdict::execINDEX_STAT_REP(Signal* signa
     jam();
     return;
   }
-  c_tableRecordPool.getPtr(indexPtr, rep->indexId);
+  bool ok = find_object(indexPtr, rep->indexId);
+  if (!ok)
+  {
+    jam();
+    return;
+  }
   if (rep->indexVersion != 0 &&
       rep->indexVersion != indexPtr.p->tableVersion) {
     jam();
@@ -14361,7 +14540,7 @@ Dbdict::execINDEX_STAT_REP(Signal* signa
   D("index stat: " << copyRope<MAX_TAB_NAME_SIZE>(indexPtr.p->tableName)
     << " request type:" << rep->requestType);
 
-  infoEvent("DICT: index %u stats auto-update requested", indexPtr.i);
+  infoEvent("DICT: index %u stats auto-update requested", rep->indexId);
   indexPtr.p->indexStatBgRequest = rep->requestType;
 }
 
@@ -14381,7 +14560,7 @@ Dbdict::indexStatBg_process(Signal* sign
   uint loop;
   for (loop = 0; loop < maxloop; loop++, c_indexStatBgId++) {
     jam();
-    c_indexStatBgId %= c_tableRecordPool.getSize();
+    c_indexStatBgId %= c_noOfMetaTables;
 
     // check
     TableRecordPtr indexPtr;
@@ -14391,8 +14570,8 @@ Dbdict::indexStatBg_process(Signal* sign
       jam();
       continue;
     }
-    c_tableRecordPool.getPtr(indexPtr, c_indexStatBgId);
-    if (!indexPtr.p->isOrderedIndex()) {
+    bool ok = find_object(indexPtr, c_indexStatBgId);
+    if (!ok || !indexPtr.p->isOrderedIndex()) {
       jam();
       continue;
     }
@@ -14428,15 +14607,16 @@ Dbdict::indexStatBg_fromBeginTrans(Signa
   findTxHandle(tx_ptr, tx_key);
   ndbrequire(!tx_ptr.isNull());
 
-  TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, c_indexStatBgId);
-
   if (ret != 0) {
     jam();
     indexStatBg_sendContinueB(signal);
     return;
   }
 
+  TableRecordPtr indexPtr;
+  bool ok = find_object(indexPtr, c_indexStatBgId);
+  ndbrequire(ok);
+
   Callback c = {
     safe_cast(&Dbdict::indexStatBg_fromIndexStat),
     tx_ptr.p->tx_key
@@ -14466,13 +14646,10 @@ Dbdict::indexStatBg_fromIndexStat(Signal
   findTxHandle(tx_ptr, tx_key);
   ndbrequire(!tx_ptr.isNull());
 
-  TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, c_indexStatBgId);
-
   if (ret != 0) {
     jam();
     setError(tx_ptr.p->m_error, ret, __LINE__);
-    warningEvent("DICT: index %u stats auto-update error: %d", indexPtr.i, ret);
+    warningEvent("DICT: index %u stats auto-update error: %d", c_indexStatBgId, ret);
   }
 
   Callback c = {
@@ -14497,17 +14674,18 @@ Dbdict::indexStatBg_fromEndTrans(Signal*
   ndbrequire(!tx_ptr.isNull());
 
   TableRecordPtr indexPtr;
-  c_tableRecordPool.getPtr(indexPtr, c_indexStatBgId);
+  bool ok = find_object(indexPtr, c_indexStatBgId);
 
   if (ret != 0) {
     jam();
     // skip over but leave the request on
-    warningEvent("DICT: index %u stats auto-update error: %d", indexPtr.i, ret);
+    warningEvent("DICT: index %u stats auto-update error: %d", c_indexStatBgId, ret);
   } else {
     jam();
+    ndbrequire(ok);
     // mark request done
     indexPtr.p->indexStatBgRequest = 0;
-    infoEvent("DICT: index %u stats auto-update done", indexPtr.i);
+    infoEvent("DICT: index %u stats auto-update done", c_indexStatBgId);
   }
 
   releaseTxHandle(tx_ptr);
@@ -14692,7 +14870,8 @@ Dbdict::copyData_prepare(Signal* signal,
   Uint32 tmp[MAX_ATTRIBUTES_IN_TABLE];
   bool tabHasDiskCols = false;
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, impl_req->srcTableId);
+  bool ok = find_object(tabPtr, impl_req->srcTableId);
+  ndbrequire(ok);
   {
     LocalAttributeRecord_list alist(c_attributeRecordPool,
                                            tabPtr.p->m_attributes);
@@ -14793,7 +14972,8 @@ Dbdict::copyData_complete(Signal* signal
   Uint32 tmp[MAX_ATTRIBUTES_IN_TABLE];
   bool tabHasDiskCols = false;
   TableRecordPtr tabPtr;
-  c_tableRecordPool.getPtr(tabPtr, impl_req->srcTableId);
+  bool ok = find_object(tabPtr, impl_req->srcTableId);
+  ndbrequire(ok);
   {
     LocalAttributeRecord_list alist(c_attributeRecordPool,
                                            tabPtr.p->m_attributes);
@@ -15064,7 +15244,7 @@ Dbdict::prepareTransactionEventSysTable
 
   ndbrequire(opj_ptr_p != 0);
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, opj_ptr_p->m_id);
+  c_tableRecordPool_.getPtr(tablePtr, opj_ptr_p->m_object_ptr_i);
   ndbrequire(tablePtr.i != RNIL); // system table must exist
 
   Uint32 tableId = tablePtr.p->tableId; /* System table */
@@ -15682,7 +15862,7 @@ void Dbdict::executeTransEventSysTable(C
 
   ndbrequire(opj_ptr_p != 0);
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, opj_ptr_p->m_id);
+  c_tableRecordPool_.getPtr(tablePtr, opj_ptr_p->m_object_ptr_i);
   ndbrequire(tablePtr.i != RNIL); // system table must exist
 
   Uint32 noAttr = tablePtr.p->noOfAttributes;
@@ -15862,7 +16042,7 @@ void Dbdict::parseReadEventSys(Signal* s
 
   ndbrequire(opj_ptr_p != 0);
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, opj_ptr_p->m_id);
+  c_tableRecordPool_.getPtr(tablePtr, opj_ptr_p->m_object_ptr_i);
   ndbrequire(tablePtr.i != RNIL); // system table must exist
 
   Uint32 noAttr = tablePtr.p->noOfAttributes;
@@ -15952,7 +16132,7 @@ void Dbdict::createEventUTIL_EXECUTE(Sig
       }
 
       TableRecordPtr tablePtr;
-      c_tableRecordPool.getPtr(tablePtr, obj_ptr_p->m_id);
+      c_tableRecordPool_.getPtr(tablePtr, obj_ptr_p->m_object_ptr_i);
       evntRec->m_request.setTableId(tablePtr.p->tableId);
       evntRec->m_request.setTableVersion(tablePtr.p->tableVersion);
 
@@ -17648,7 +17828,7 @@ Dbdict::createTrigger_parse(Signal* sign
   // check the table
   {
     const Uint32 tableId = impl_req->tableId;
-    if (! (tableId < c_tableRecordPool.getSize()))
+    if (! (tableId < c_noOfMetaTables))
     {
       jam();
       setError(error, CreateTrigRef::InvalidTable, __LINE__);
@@ -17726,28 +17906,33 @@ Dbdict::createTrigger_parse(Signal* sign
       impl_req->triggerId = getFreeTriggerRecord();
       if (impl_req->triggerId == RNIL)
       {
-	jam();
-	setError(error, CreateTrigRef::TooManyTriggers, __LINE__);
-	return;
+        jam();
+        setError(error, CreateTrigRef::TooManyTriggers, __LINE__);
+        return;
+      }
+      bool ok = find_object(triggerPtr, impl_req->triggerId);
+      if (ok)
+      {
+        jam();
+        setError(error, CreateTrigRef::TriggerExists, __LINE__);
+        return;
       }
-      c_triggerRecordPool.getPtr(triggerPtr, impl_req->triggerId);
-      ndbrequire(triggerPtr.p->triggerState == TriggerRecord::TS_NOT_DEFINED);
       D("master allocated triggerId " << impl_req->triggerId);
     }
     else
     {
-      if (!(impl_req->triggerId < c_triggerRecordPool.getSize()))
+      if (!(impl_req->triggerId < c_triggerRecordPool_.getSize()))
       {
 	jam();
 	setError(error, CreateTrigRef::TooManyTriggers, __LINE__);
 	return;
       }
-      c_triggerRecordPool.getPtr(triggerPtr, impl_req->triggerId);
-      if (triggerPtr.p->triggerState != TriggerRecord::TS_NOT_DEFINED)
+      bool ok = find_object(triggerPtr, impl_req->triggerId);
+      if (ok)
       {
-	jam();
-	setError(error, CreateTrigRef::TriggerExists, __LINE__);
-	return;
+        jam();
+        setError(error, CreateTrigRef::TriggerExists, __LINE__);
+        return;
       }
       D("master forced triggerId " << impl_req->triggerId);
     }
@@ -17756,14 +17941,14 @@ Dbdict::createTrigger_parse(Signal* sign
   {
     jam();
     // slave receives trigger id from master
-    if (! (impl_req->triggerId < c_triggerRecordPool.getSize()))
+    if (! (impl_req->triggerId < c_triggerRecordPool_.getSize()))
     {
       jam();
       setError(error, CreateTrigRef::TooManyTriggers, __LINE__);
       return;
     }
-    c_triggerRecordPool.getPtr(triggerPtr, impl_req->triggerId);
-    if (triggerPtr.p->triggerState != TriggerRecord::TS_NOT_DEFINED)
+    bool ok = find_object(triggerPtr, impl_req->triggerId);
+    if (ok)
     {
       jam();
       setError(error, CreateTrigRef::TriggerExists, __LINE__);
@@ -17772,15 +17957,20 @@ Dbdict::createTrigger_parse(Signal* sign
     D("slave allocated triggerId " << hex << impl_req->triggerId);
   }
 
-  initialiseTriggerRecord(triggerPtr);
-
-  triggerPtr.p->triggerId = impl_req->triggerId;
+  bool ok = seizeTriggerRecord(triggerPtr, impl_req->triggerId);
+  if (!ok)
+  {
+    jam();
+    setError(error, CreateTrigRef::TooManyTriggers, __LINE__);
+    return;
+  }
   triggerPtr.p->tableId = impl_req->tableId;
   triggerPtr.p->indexId = RNIL; // feedback method connects to index
   triggerPtr.p->triggerInfo = impl_req->triggerInfo;
   triggerPtr.p->receiverRef = impl_req->receiverRef;
   triggerPtr.p->triggerState = TriggerRecord::TS_DEFINING;
 
+  // TODO:msundell on failure below, leak of TriggerRecord
   if (handle.m_cnt >= 2)
   {
     jam();
@@ -17820,12 +18010,13 @@ Dbdict::createTrigger_parse(Signal* sign
   // connect to new DictObject
   {
     DictObjectPtr obj_ptr;
-    seizeDictObject(op_ptr, obj_ptr, triggerPtr.p->triggerName);
+    seizeDictObject(op_ptr, obj_ptr, triggerPtr.p->triggerName); // added to c_obj_name_hash
 
     obj_ptr.p->m_id = impl_req->triggerId; // wl3600_todo id
     obj_ptr.p->m_type =
       TriggerInfo::getTriggerType(triggerPtr.p->triggerInfo);
-    triggerPtr.p->m_obj_ptr_i = obj_ptr.i;
+    link_object(obj_ptr, triggerPtr);
+    c_obj_id_hash.add(obj_ptr);
   }
 
   {
@@ -17866,7 +18057,8 @@ Dbdict::createTrigger_parse(Signal* sign
   if (impl_req->indexId != RNIL)
   {
     TableRecordPtr indexPtr;
-    c_tableRecordPool.getPtr(indexPtr, impl_req->indexId);
+    bool ok = find_object(indexPtr, impl_req->indexId);
+    ndbrequire(ok);
     triggerPtr.p->indexId = impl_req->indexId;
     indexPtr.p->triggerId = impl_req->triggerId;
   }
@@ -17902,7 +18094,12 @@ Dbdict::createTrigger_parse_endpoint(Sig
   }
 
   TriggerRecordPtr triggerPtr;
-  c_triggerRecordPool.getPtr(triggerPtr, impl_req->triggerId);
+  bool ok = find_object(triggerPtr, impl_req->triggerId);
+  if (!ok)
+  {
+    jam();
+    return;
+  }
   switch(TriggerInfo::getTriggerType(triggerPtr.p->triggerInfo)){
   case TriggerType::REORG_TRIGGER:
     jam();
@@ -18217,8 +18414,8 @@ Dbdict::createTrigger_commit(Signal* sig
 
     Uint32 triggerId = impl_req->triggerId;
     TriggerRecordPtr triggerPtr;
-    c_triggerRecordPool.getPtr(triggerPtr, triggerId);
-
+    bool ok = find_object(triggerPtr, triggerId);
+    ndbrequire(ok);
     triggerPtr.p->triggerState = TriggerRecord::TS_ONLINE;
     unlinkDictObject(op_ptr);
   }
@@ -18276,26 +18473,30 @@ Dbdict::createTrigger_abortParse(Signal*
     jam();
 
     TriggerRecordPtr triggerPtr;
-    if (! (triggerId < c_triggerRecordPool.getSize()))
+    if (! (triggerId < c_triggerRecordPool_.getSize()))
     {
       jam();
       goto done;
     }
 
-    c_triggerRecordPool.getPtr(triggerPtr, triggerId);
-
-    if (triggerPtr.p->triggerState == TriggerRecord::TS_DEFINING)
+    bool ok = find_object(triggerPtr, triggerId);
+    if (ok)
     {
       jam();
-      triggerPtr.p->triggerState = TriggerRecord::TS_NOT_DEFINED;
-    }
 
-    if (triggerPtr.p->indexId != RNIL)
-    {
-      TableRecordPtr indexPtr;
-      c_tableRecordPool.getPtr(indexPtr, triggerPtr.p->indexId);
-      triggerPtr.p->indexId = RNIL;
-      indexPtr.p->triggerId = RNIL;
+      if (triggerPtr.p->indexId != RNIL)
+      {
+        TableRecordPtr indexPtr;
+        bool ok = find_object(indexPtr, triggerPtr.p->indexId);
+        if (ok)
+        {
+          jam();
+          indexPtr.p->triggerId = RNIL;
+        }
+        triggerPtr.p->indexId = RNIL;
+      }
+
+      c_triggerRecordPool_.release(triggerPtr);
     }
 
     // ignore Feedback for now (referencing object will be dropped too)
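The abortParse hunk above reflects the second half of the change: trigger records are no longer a preallocated array whose free entries are flagged `TS_NOT_DEFINED`; they are seized on create (`seizeTriggerRecord`) and released on drop or abort (`c_triggerRecordPool_.release`). A compact model of that lifecycle, with illustrative names loosely following the patch rather than the real pool implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

struct TriggerRecord { uint32_t triggerId; };

struct TriggerPool {
  std::unordered_map<uint32_t, TriggerRecord> live;  // id -> seized record

  // Analogue of seizeTriggerRecord(): fails if the id is already in use,
  // replacing the old "state == TS_NOT_DEFINED" check on a static slot.
  bool seizeTriggerRecord(uint32_t id) {
    return live.emplace(id, TriggerRecord{id}).second;
  }
  bool find_object(uint32_t id) const { return live.count(id) != 0; }
  // Abort/drop path: the record is returned to the pool outright
  // instead of being flagged TS_NOT_DEFINED.
  void release(uint32_t id) { live.erase(id); }
};
```

This is why `createTrigger_parse` can now detect a duplicate id simply by a successful `find_object()` (the `TriggerExists` branches above), and why the new `// TODO:msundell` comment flags a potential record leak on the later error paths.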
@@ -18381,8 +18582,8 @@ Dbdict::send_create_trig_req(Signal* sig
   const CreateTrigImplReq* impl_req = &createTriggerPtr.p->m_request;
 
   TriggerRecordPtr triggerPtr;
-  c_triggerRecordPool.getPtr(triggerPtr, impl_req->triggerId);
-
+  bool ok = find_object(triggerPtr, impl_req->triggerId);
+  ndbrequire(ok);
   D("send_create_trig_req");
 
   CreateTrigImplReq* req = (CreateTrigImplReq*)signal->getDataPtrSend();
@@ -18412,7 +18613,8 @@ Dbdict::send_create_trig_req(Signal* sig
     if (triggerPtr.p->indexId != RNIL)
     {
       jam();
-      c_tableRecordPool.getPtr(indexPtr, triggerPtr.p->indexId);
+      bool ok = find_object(indexPtr, triggerPtr.p->indexId);
+      ndbrequire(ok);
       if (indexPtr.p->m_upgrade_trigger_handling.m_upgrade)
       {
         jam();
@@ -18627,12 +18829,18 @@ Dbdict::dropTrigger_parse(Signal* signal
   // check trigger id from user or via name
   TriggerRecordPtr triggerPtr;
   {
-    if (!(impl_req->triggerId < c_triggerRecordPool.getSize())) {
+    if (!(impl_req->triggerId < c_triggerRecordPool_.getSize())) {
+      jam();
+      setError(error, DropTrigImplRef::TriggerNotFound, __LINE__);
+      return;
+    }
+    bool ok = find_object(triggerPtr, impl_req->triggerId);
+    if (!ok)
+    {
       jam();
       setError(error, DropTrigImplRef::TriggerNotFound, __LINE__);
       return;
     }
-    c_triggerRecordPool.getPtr(triggerPtr, impl_req->triggerId);
     // wl3600_todo state check
   }
 
@@ -18943,19 +19151,24 @@ Dbdict::dropTrigger_commit(Signal* signa
     Uint32 triggerId = dropTriggerPtr.p->m_request.triggerId;
 
     TriggerRecordPtr triggerPtr;
-    c_triggerRecordPool.getPtr(triggerPtr, triggerId);
-
+    bool ok = find_object(triggerPtr, triggerId);
+    ndbrequire(ok);
     if (triggerPtr.p->indexId != RNIL)
     {
+      jam();
       TableRecordPtr indexPtr;
-      c_tableRecordPool.getPtr(indexPtr, triggerPtr.p->indexId);
+      bool ok = find_object(indexPtr, triggerPtr.p->indexId);
+      if (ok)
+      {
+        jam();
+        indexPtr.p->triggerId = RNIL;
+      }
       triggerPtr.p->indexId = RNIL;
-      indexPtr.p->triggerId = RNIL;
     }
 
     // remove trigger
+    c_triggerRecordPool_.release(triggerPtr);
     releaseDictObject(op_ptr);
-    triggerPtr.p->triggerState = TriggerRecord::TS_NOT_DEFINED;
 
     sendTransConf(signal, op_ptr);
     return;
@@ -19130,7 +19343,8 @@ Dbdict::getIndexAttr(TableRecordPtr inde
   TableRecordPtr tablePtr;
   AttributeRecordPtr attrPtr;
 
-  c_tableRecordPool.getPtr(tablePtr, indexPtr.p->primaryTableId);
+  bool ok = find_object(tablePtr, indexPtr.p->primaryTableId);
+  ndbrequire(ok);
   AttributeRecord* iaRec = c_attributeRecordPool.getPtr(itAttr);
   {
     ConstRope tmp(c_rope_pool, iaRec->attributeName);
@@ -20394,10 +20608,14 @@ Dbdict::execBACKUP_LOCK_TAB_REQ(Signal*
   Uint32 lock = req->m_lock_unlock;
 
   TableRecordPtr tablePtr;
-  c_tableRecordPool.getPtr(tablePtr, tableId, true);
-
+  bool ok = find_object(tablePtr, tableId);
   Uint32 err = 0;
-  if(lock == BackupLockTab::LOCK_TABLE)
+  if (!ok)
+  {
+    jam();
+    err = GetTabInfoRef::InvalidTableId;
+  }
+  else if(lock == BackupLockTab::LOCK_TABLE)
   {
     jam();
     if ((err = check_write_obj(tableId)) == 0)
@@ -20669,7 +20887,7 @@ Dbdict::createFile_parse(Signal* signal,
 
   // Get Filegroup
   FilegroupPtr fg_ptr;
-  if(!c_filegroup_hash.find(fg_ptr, f.FilegroupId))
+  if (!find_object(fg_ptr, f.FilegroupId))
   {
     jam();
     setError(error, CreateFileRef::NoSuchFilegroup, __LINE__, f.FileName);
@@ -20773,7 +20991,7 @@ Dbdict::createFile_parse(Signal* signal,
   {
     jam();
 
-    Uint32 objId = getFreeObjId(0);
+    Uint32 objId = getFreeObjId();
     if (objId == RNIL)
     {
       jam();
@@ -20853,6 +21071,8 @@ Dbdict::createFile_parse(Signal* signal,
   obj_ptr.p->m_type = f.FileType;
   obj_ptr.p->m_ref_count = 0;
 
+  ndbrequire(link_object(obj_ptr, filePtr));
+
   {
     SchemaFile::TableEntry te; te.init();
     te.m_tableState = SchemaFile::SF_CREATE;
@@ -20871,8 +21091,8 @@ Dbdict::createFile_parse(Signal* signal,
     }
   }
 
-  c_obj_hash.add(obj_ptr);
-  c_file_hash.add(filePtr);
+  c_obj_name_hash.add(obj_ptr);
+  c_obj_id_hash.add(obj_ptr);
 
   // save sections to DICT memory
   saveOpSection(op_ptr, handle, 0);
@@ -20900,8 +21120,8 @@ Dbdict::createFile_parse(Signal* signal,
 
   if (g_trace)
   {
-    g_eventLogger->info("Dbdict: create name=%s,id=%u,obj_ptr_i=%d,"
-                        "type=%s,bytes=%llu,warn=0x%x",
+    g_eventLogger->info("Dbdict: %u: create name=%s,id=%u,obj_ptr_i=%d,"
+                        "type=%s,bytes=%llu,warn=0x%x",__LINE__,
                         f.FileName,
                         impl_req->file_id,
                         filePtr.p->m_obj_ptr_i,
@@ -20944,8 +21164,8 @@ Dbdict::createFile_abortParse(Signal* si
   {
     FilePtr f_ptr;
     FilegroupPtr fg_ptr;
-    ndbrequire(c_file_hash.find(f_ptr, impl_req->file_id));
-    ndbrequire(c_filegroup_hash.find(fg_ptr, f_ptr.p->m_filegroup_id));
+    ndbrequire(find_object(f_ptr, impl_req->file_id));
+    ndbrequire(find_object(fg_ptr, f_ptr.p->m_filegroup_id));
     if (f_ptr.p->m_type == DictTabInfo::Datafile)
     {
       jam();
@@ -20959,7 +21179,7 @@ Dbdict::createFile_abortParse(Signal* si
     }
 
     release_object(f_ptr.p->m_obj_ptr_i);
-    c_file_hash.release(f_ptr);
+    c_file_pool.release(f_ptr);
   }
 
   sendTransConf(signal, op_ptr);
@@ -21062,8 +21282,8 @@ Dbdict::createFile_fromWriteObjInfo(Sign
   FilePtr f_ptr;
   FilegroupPtr fg_ptr;
 
-  ndbrequire(c_file_hash.find(f_ptr, impl_req->file_id));
-  ndbrequire(c_filegroup_hash.find(fg_ptr, f_ptr.p->m_filegroup_id));
+  ndbrequire(find_object(f_ptr, impl_req->file_id));
+  ndbrequire(find_object(fg_ptr, f_ptr.p->m_filegroup_id));
 
   req->senderData = op_ptr.p->op_key;
   req->senderRef = reference();
@@ -21122,8 +21342,8 @@ Dbdict::createFile_abortPrepare(Signal*
   getOpRec(op_ptr, createFileRecPtr);
   CreateFileImplReq* impl_req = &createFileRecPtr.p->m_request;
 
-  ndbrequire(c_file_hash.find(f_ptr, impl_req->file_id));
-  ndbrequire(c_filegroup_hash.find(fg_ptr, f_ptr.p->m_filegroup_id));
+  ndbrequire(find_object(f_ptr, impl_req->file_id));
+  ndbrequire(find_object(fg_ptr, f_ptr.p->m_filegroup_id));
 
   req->senderData = op_ptr.p->op_key;
   req->senderRef = reference();
@@ -21178,8 +21398,8 @@ Dbdict::createFile_commit(Signal* signal
   FilegroupPtr fg_ptr;
 
   jam();
-  ndbrequire(c_file_hash.find(f_ptr, impl_req->file_id));
-  ndbrequire(c_filegroup_hash.find(fg_ptr, f_ptr.p->m_filegroup_id));
+  ndbrequire(find_object(f_ptr, impl_req->file_id));
+  ndbrequire(find_object(fg_ptr, f_ptr.p->m_filegroup_id));
 
   req->senderData = op_ptr.p->op_key;
   req->senderRef = reference();
@@ -21467,7 +21687,7 @@ Dbdict::createFilegroup_parse(Signal* si
     fg_ptr.p->m_tablespace.m_default_logfile_group_id = fg.TS_LogfileGroupId;
 
     FilegroupPtr lg_ptr;
-    if (!c_filegroup_hash.find(lg_ptr, fg.TS_LogfileGroupId))
+    if (!find_object(lg_ptr, fg.TS_LogfileGroupId))
     {
       jam();
       setError(error, CreateFilegroupRef::NoSuchLogfileGroup, __LINE__);
@@ -21512,7 +21732,7 @@ Dbdict::createFilegroup_parse(Signal* si
   {
     jam();
 
-    Uint32 objId = getFreeObjId(0);
+    Uint32 objId = getFreeObjId();
     if (objId == RNIL)
     {
       jam();
@@ -21532,7 +21752,6 @@ Dbdict::createFilegroup_parse(Signal* si
   }
 
   fg_ptr.p->key = impl_req->filegroup_id;
-  fg_ptr.p->m_obj_ptr_i = obj_ptr.i;
   fg_ptr.p->m_type = fg.FilegroupType;
   fg_ptr.p->m_version = impl_req->filegroup_version;
   fg_ptr.p->m_name = obj_ptr.p->m_name;
@@ -21541,6 +21760,8 @@ Dbdict::createFilegroup_parse(Signal* si
   obj_ptr.p->m_type = fg.FilegroupType;
   obj_ptr.p->m_ref_count = 0;
 
+  ndbrequire(link_object(obj_ptr, fg_ptr));
+
   if (master)
   {
     jam();
@@ -21570,8 +21791,8 @@ Dbdict::createFilegroup_parse(Signal* si
     }
   }
 
-  c_obj_hash.add(obj_ptr);
-  c_filegroup_hash.add(fg_ptr);
+  c_obj_name_hash.add(obj_ptr);
+  c_obj_id_hash.add(obj_ptr);
 
   // save sections to DICT memory
   saveOpSection(op_ptr, handle, 0);
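The file/filegroup hunks replace the per-type hashes (`c_file_hash`, `c_filegroup_hash`, `c_hash_map_hash`) with one `DictObject` per schema object, indexed by both `c_obj_name_hash` and the new `c_obj_id_hash`, with `link_object()` recording the typed record's pool index. A simplified sketch of that bookkeeping, using plain STL maps where the real code uses intrusive hash tables:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// One DictObject entry per schema object (file, filegroup, hash map, ...).
// Names follow the patch; the structure is illustrative only.
struct DictObject {
  uint32_t m_id;
  std::string m_name;
  uint32_t m_object_ptr_i = 0xffffffff;  // typed record's pool index
};

struct Dict {
  std::unordered_map<std::string, DictObject*> c_obj_name_hash;
  std::unordered_map<uint32_t, DictObject*> c_obj_id_hash;

  void add(DictObject* obj) {               // replaces c_obj_hash.add + c_X_hash.add
    c_obj_name_hash[obj->m_name] = obj;
    c_obj_id_hash[obj->m_id] = obj;
  }
  bool link_object(DictObject* obj, uint32_t record_slot) {
    obj->m_object_ptr_i = record_slot;      // ties typed record to its DictObject
    return true;
  }
  DictObject* find_by_id(uint32_t id) {     // backs the generic find_object()
    auto it = c_obj_id_hash.find(id);
    return it == c_obj_id_hash.end() ? nullptr : it->second;
  }
};
```

With lookup by id going through `c_obj_id_hash`, releasing an object becomes a plain `c_X_pool.release(ptr)` instead of `c_X_hash.release(ptr)`, as the later dropFile/dropFilegroup hunks show.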
@@ -21584,7 +21805,7 @@ Dbdict::createFilegroup_parse(Signal* si
   createFilegroupPtr.p->m_parsed = true;
 
 #if defined VM_TRACE || defined ERROR_INSERT
-  ndbout_c("Dbdict: create name=%s,id=%u,obj_ptr_i=%d",
+  ndbout_c("Dbdict: %u: create name=%s,id=%u,obj_ptr_i=%d",__LINE__,
            fg.FilegroupName, impl_req->filegroup_id, fg_ptr.p->m_obj_ptr_i);
 #endif
 
@@ -21617,19 +21838,19 @@ Dbdict::createFilegroup_abortParse(Signa
     CreateFilegroupImplReq* impl_req = &createFilegroupPtr.p->m_request;
 
     FilegroupPtr fg_ptr;
-    ndbrequire(c_filegroup_hash.find(fg_ptr, impl_req->filegroup_id));
+    ndbrequire(find_object(fg_ptr, impl_req->filegroup_id));
 
     if (fg_ptr.p->m_type == DictTabInfo::Tablespace)
     {
       jam();
       FilegroupPtr lg_ptr;
-      ndbrequire(c_filegroup_hash.find
+      ndbrequire(find_object
                  (lg_ptr, fg_ptr.p->m_tablespace.m_default_logfile_group_id));
       decrease_ref_count(lg_ptr.p->m_obj_ptr_i);
     }
 
     release_object(fg_ptr.p->m_obj_ptr_i);
-    c_filegroup_hash.release(fg_ptr);
+    c_filegroup_pool.release(fg_ptr);
   }
 
   sendTransConf(signal, op_ptr);
@@ -21740,7 +21961,8 @@ Dbdict::createFilegroup_fromWriteObjInfo
   req->filegroup_version = impl_req->filegroup_version;
 
   FilegroupPtr fg_ptr;
-  ndbrequire(c_filegroup_hash.find(fg_ptr, impl_req->filegroup_id));
+
+  ndbrequire(find_object(fg_ptr, impl_req->filegroup_id));
 
   Uint32 ref= 0;
   Uint32 len= 0;
@@ -21956,7 +22178,7 @@ Dbdict::dropFile_parse(Signal* signal, b
   DropFileImplReq* impl_req = &dropFileRecPtr.p->m_request;
 
   FilePtr f_ptr;
-  if (!c_file_hash.find(f_ptr, impl_req->file_id))
+  if (!find_object(f_ptr, impl_req->file_id))
   {
     jam();
     setError(error, DropFileRef::NoSuchFile, __LINE__);
@@ -22131,11 +22353,11 @@ Dbdict::dropFile_complete(Signal* signal
   FilegroupPtr fg_ptr;
 
   jam();
-  ndbrequire(c_file_hash.find(f_ptr, impl_req->file_id));
-  ndbrequire(c_filegroup_hash.find(fg_ptr, f_ptr.p->m_filegroup_id));
+  ndbrequire(find_object(f_ptr, impl_req->file_id));
+  ndbrequire(find_object(fg_ptr, f_ptr.p->m_filegroup_id));
   decrease_ref_count(fg_ptr.p->m_obj_ptr_i);
   release_object(f_ptr.p->m_obj_ptr_i);
-  c_file_hash.release(f_ptr);
+  c_file_pool.release(f_ptr);
 
   sendTransConf(signal, op_ptr);
 }
@@ -22186,8 +22408,8 @@ Dbdict::send_drop_file(Signal* signal, U
   FilegroupPtr fg_ptr;
 
   jam();
-  ndbrequire(c_file_hash.find(f_ptr, fileId));
-  ndbrequire(c_filegroup_hash.find(fg_ptr, f_ptr.p->m_filegroup_id));
+  ndbrequire(find_object(f_ptr, fileId));
+  ndbrequire(find_object(fg_ptr, f_ptr.p->m_filegroup_id));
 
   req->senderData = op_key;
   req->senderRef = reference();
@@ -22314,7 +22536,7 @@ Dbdict::dropFilegroup_parse(Signal* sign
   DropFilegroupImplReq* impl_req = &dropFilegroupRecPtr.p->m_request;
 
   FilegroupPtr fg_ptr;
-  if (!c_filegroup_hash.find(fg_ptr, impl_req->filegroup_id))
+  if (!find_object(fg_ptr, impl_req->filegroup_id))
   {
     jam();
     setError(error, DropFilegroupRef::NoSuchFilegroup, __LINE__);
@@ -22437,7 +22659,7 @@ Dbdict::dropFilegroup_prepare(Signal* si
                DropFilegroupImplReq::Prepare);
 
   FilegroupPtr fg_ptr;
-  ndbrequire(c_filegroup_hash.find(fg_ptr, impl_req->filegroup_id));
+  ndbrequire(find_object(fg_ptr, impl_req->filegroup_id));
 
   if (fg_ptr.p->m_type == DictTabInfo::LogfileGroup)
   {
@@ -22475,7 +22697,7 @@ Dbdict::dropFilegroup_abortPrepare(Signa
                DropFilegroupImplReq::Abort);
 
   FilegroupPtr fg_ptr;
-  ndbrequire(c_filegroup_hash.find(fg_ptr, impl_req->filegroup_id));
+  ndbrequire(find_object(fg_ptr, impl_req->filegroup_id));
 
   if (fg_ptr.p->m_type == DictTabInfo::LogfileGroup)
   {
@@ -22516,7 +22738,7 @@ Dbdict::dropFilegroup_commit(Signal* sig
                DropFilegroupImplReq::Commit);
 
   FilegroupPtr fg_ptr;
-  ndbrequire(c_filegroup_hash.find(fg_ptr, impl_req->filegroup_id));
+  ndbrequire(find_object(fg_ptr, impl_req->filegroup_id));
 
   if (fg_ptr.p->m_type == DictTabInfo::LogfileGroup)
   {
@@ -22539,7 +22761,6 @@ Dbdict::dropFilegroup_commit(Signal* sig
       entry->m_transId = 0;
 
       release_object(objPtr.i, objPtr.p);
-      c_file_hash.remove(filePtr);
     }
     list.release();
   }
@@ -22547,8 +22768,7 @@ Dbdict::dropFilegroup_commit(Signal* sig
   {
     jam();
     FilegroupPtr lg_ptr;
-    ndbrequire(c_filegroup_hash.
-	       find(lg_ptr,
+    ndbrequire(find_object(lg_ptr,
 		    fg_ptr.p->m_tablespace.m_default_logfile_group_id));
 
     decrease_ref_count(lg_ptr.p->m_obj_ptr_i);
@@ -22568,10 +22788,10 @@ Dbdict::dropFilegroup_complete(Signal* s
   DropFilegroupImplReq* impl_req = &dropFilegroupRecPtr.p->m_request;
 
   FilegroupPtr fg_ptr;
-  ndbrequire(c_filegroup_hash.find(fg_ptr, impl_req->filegroup_id));
+  ndbrequire(find_object(fg_ptr, impl_req->filegroup_id));
 
   release_object(fg_ptr.p->m_obj_ptr_i);
-  c_filegroup_hash.release(fg_ptr);
+  c_filegroup_pool.release(fg_ptr);
 
   sendTransConf(signal, op_ptr);
 }
@@ -22621,7 +22841,7 @@ Dbdict::send_drop_fg(Signal* signal, Uin
   DropFilegroupImplReq* req = (DropFilegroupImplReq*)signal->getDataPtrSend();
 
   FilegroupPtr fg_ptr;
-  ndbrequire(c_filegroup_hash.find(fg_ptr, filegroupId));
+  ndbrequire(find_object(fg_ptr, filegroupId));
 
   req->senderData = op_key;
   req->senderRef = reference();
@@ -24318,12 +24538,12 @@ Dbdict::seizeDictObject(SchemaOpPtr op_p
 {
   D("seizeDictObject" << *op_ptr.p);
 
-  bool ok = c_obj_hash.seize(obj_ptr);
+  bool ok = c_obj_pool.seize(obj_ptr);
   ndbrequire(ok);
   new (obj_ptr.p) DictObject();
 
   obj_ptr.p->m_name = name;
-  c_obj_hash.add(obj_ptr);
+  c_obj_name_hash.add(obj_ptr);
   obj_ptr.p->m_ref_count = 0;
 
   linkDictObject(op_ptr, obj_ptr);
@@ -24576,7 +24796,7 @@ Dbdict::execSCHEMA_TRANS_BEGIN_REQ(Signa
     trans_ptr.p->m_clientRef = clientRef;
     trans_ptr.p->m_transId = transId;
     trans_ptr.p->m_requestInfo = requestInfo;
-    trans_ptr.p->m_obj_id = getFreeObjId(0);
+    trans_ptr.p->m_obj_id = getFreeObjId();
     if (localTrans)
     {
       /**
@@ -24586,7 +24806,7 @@ Dbdict::execSCHEMA_TRANS_BEGIN_REQ(Signa
        *   schema file so that we don't accidently allocate
        *   an objectId that should be used to recreate an object
        */
-      trans_ptr.p->m_obj_id = getFreeObjId(0, true);
+      trans_ptr.p->m_obj_id = getFreeObjId(true);
     }
 
     if (!localTrans)
@@ -26895,7 +27115,7 @@ Dbdict::execSCHEMA_TRANS_IMPL_REQ(Signal
     if (signal->getLength() < SchemaTransImplReq::SignalLengthStart)
     {
       jam();
-      reqCopy.start.objectId = getFreeObjId(0);
+      reqCopy.start.objectId = getFreeObjId();
     }
     slave_run_start(signal, req);
     return;
@@ -27044,9 +27264,9 @@ Dbdict::slave_run_start(Signal *signal,
   SchemaTransPtr trans_ptr;
   const Uint32 trans_key = req->transKey;
 
-  Uint32 objId = getFreeObjId(req->start.objectId);
-  if (objId != req->start.objectId)
-  {
+  Uint32 objId = req->start.objectId;
+  if (check_read_obj(objId,0) == 0)
+  { /* schema file id already in use */
     jam();
     setError(error, CreateTableRef::NoMoreTableRecords, __LINE__);
     goto err;
@@ -28442,7 +28662,7 @@ Dbdict::createHashMap_parse(Signal* sign
     }
 
     HashMapRecordPtr hm_ptr;
-    ndbrequire(c_hash_map_hash.find(hm_ptr, objptr->m_id));
+    ndbrequire(find_object(hm_ptr, objptr->m_id));
 
     impl_req->objectId = objptr->m_id;
     impl_req->objectVersion = hm_ptr.p->m_object_version;
@@ -28496,7 +28716,7 @@ Dbdict::createHashMap_parse(Signal* sign
       goto error;
     }
 
-    objId = impl_req->objectId = getFreeObjId(0);
+    objId = impl_req->objectId = getFreeObjId();
     if (objId == RNIL)
     {
       jam();
@@ -28538,7 +28758,8 @@ Dbdict::createHashMap_parse(Signal* sign
   obj_ptr.p->m_type = DictTabInfo::HashMap;
   obj_ptr.p->m_ref_count = 0;
   obj_ptr.p->m_name = name;
-  c_obj_hash.add(obj_ptr);
+  c_obj_name_hash.add(obj_ptr);
+  c_obj_id_hash.add(obj_ptr);
 
   if (ERROR_INSERTED(6209))
   {
@@ -28577,9 +28798,8 @@ Dbdict::createHashMap_parse(Signal* sign
   hm_ptr.p->m_object_id = objId;
   hm_ptr.p->m_object_version = objVersion;
   hm_ptr.p->m_name = name;
-  hm_ptr.p->m_obj_ptr_i = obj_ptr.i;
   hm_ptr.p->m_map_ptr_i = map_ptr.i;
-  c_hash_map_hash.add(hm_ptr);
+  link_object(obj_ptr, hm_ptr);
 
   /**
    * pack is stupid...and requires bytes!
@@ -28627,7 +28847,7 @@ Dbdict::createHashMap_parse(Signal* sign
   handle.m_cnt = 1;
 
 #if defined VM_TRACE || defined ERROR_INSERT
-  ndbout_c("Dbdict: create name=%s,id=%u,obj_ptr_i=%d",
+  ndbout_c("Dbdict: %u: create name=%s,id=%u,obj_ptr_i=%d",__LINE__,
            hm.HashMapName, objId, hm_ptr.p->m_obj_ptr_i);
 #endif
 
@@ -28639,7 +28859,7 @@ error:
   if (!hm_ptr.isNull())
   {
     jam();
-    c_hash_map_hash.release(hm_ptr);
+    c_hash_map_pool.release(hm_ptr);
   }
 
   if (!map_ptr.isNull())
@@ -28681,11 +28901,11 @@ Dbdict::createHashMap_abortParse(Signal*
     jam();
 
     HashMapRecordPtr hm_ptr;
-    ndbrequire(c_hash_map_hash.find(hm_ptr, impl_req->objectId));
+    ndbrequire(find_object(hm_ptr, impl_req->objectId));
 
     release_object(hm_ptr.p->m_obj_ptr_i);
     g_hash_map.release(hm_ptr.p->m_map_ptr_i);
-    c_hash_map_hash.release(hm_ptr);
+    c_hash_map_pool.release(hm_ptr);
   }
 
   // wl3600_todo probably nothing..
@@ -28990,11 +29210,11 @@ Dbdict::check_consistency()
   // schema file entries // mis-named "tables"
   TableRecordPtr tablePtr;
   for (tablePtr.i = 0;
-      tablePtr.i < c_tableRecordPool.getSize();
+      tablePtr.i < c_noOfMetaTables;
       tablePtr.i++) {
     if (check_read_obj(tablePtr.i,
 
-    c_tableRecordPool.getPtr(tablePtr);
+    c_tableRecordPool_.getPtr(tablePtr);
 
     switch (tablePtr.p->tabState) {
     case TableRecord::NOT_DEFINED:
@@ -29008,10 +29228,11 @@ Dbdict::check_consistency()
 
   // triggers // should be in schema file
   TriggerRecordPtr triggerPtr;
-  for (triggerPtr.i = 0;
-      triggerPtr.i < c_triggerRecordPool.getSize();
-      triggerPtr.i++) {
-    c_triggerRecordPool.getPtr(triggerPtr);
+  for (Uint32 id = 0;
+      id < c_triggerRecordPool_.getSize();
+      id++) {
+    bool ok = find_object(triggerPtr, id);
+    if (!ok) continue;
     switch (triggerPtr.p->triggerState) {
     case TriggerRecord::TS_NOT_DEFINED:
       continue;
@@ -29058,7 +29279,6 @@ void
 Dbdict::check_consistency_table(TableRecordPtr tablePtr)
 {
   D("table " << copyRope<SZ>(tablePtr.p->tableName));
-  ndbrequire(tablePtr.p->tableId == tablePtr.i);
 
   switch (tablePtr.p->tableType) {
   case DictTabInfo::SystemTable: // should just be "Table"
@@ -29100,9 +29320,8 @@ Dbdict::check_consistency_index(TableRec
   }
 
   TableRecordPtr tablePtr;
-  tablePtr.i = indexPtr.p->primaryTableId;
-  ndbrequire(tablePtr.i != RNIL);
-  c_tableRecordPool.getPtr(tablePtr);
+  bool ok = find_object(tablePtr, indexPtr.p->primaryTableId);
+  ndbrequire(ok);
   check_consistency_table(tablePtr);
 
   bool is_unique_index = false;
@@ -29120,13 +29339,10 @@ Dbdict::check_consistency_index(TableRec
   }
 
   TriggerRecordPtr triggerPtr;
-  triggerPtr.i = indexPtr.p->triggerId;
-  ndbrequire(triggerPtr.i != RNIL);
-  c_triggerRecordPool.getPtr(triggerPtr);
-
+  ok = find_object(triggerPtr, indexPtr.p->triggerId);
+  ndbrequire(ok);
   ndbrequire(triggerPtr.p->tableId == tablePtr.p->tableId);
   ndbrequire(triggerPtr.p->indexId == indexPtr.p->tableId);
-  ndbrequire(triggerPtr.p->triggerId == triggerPtr.i);
 
   check_consistency_trigger(triggerPtr);
 
@@ -29152,21 +29368,19 @@ Dbdict::check_consistency_trigger(Trigge
   {
     ndbrequire(triggerPtr.p->triggerState == TriggerRecord::TS_ONLINE);
   }
-  ndbrequire(triggerPtr.p->triggerId == triggerPtr.i);
 
   TableRecordPtr tablePtr;
-  tablePtr.i = triggerPtr.p->tableId;
-  ndbrequire(tablePtr.i != RNIL);
-  c_tableRecordPool.getPtr(tablePtr);
+  bool ok = find_object(tablePtr, triggerPtr.p->tableId);
+  ndbrequire(ok);
   check_consistency_table(tablePtr);
 
   if (triggerPtr.p->indexId != RNIL)
   {
     jam();
     TableRecordPtr indexPtr;
-    indexPtr.i = triggerPtr.p->indexId;
-    c_tableRecordPool.getPtr(indexPtr);
-    ndbrequire(check_read_obj(indexPtr.i) == 0);
+    ndbrequire(check_read_obj(triggerPtr.p->indexId) == 0);
+    bool ok = find_object(indexPtr, triggerPtr.p->indexId);
+    ndbrequire(ok);
     ndbrequire(indexPtr.p->indexState == TableRecord::IS_ONLINE);
     TriggerInfo ti;
     TriggerInfo::unpackTriggerInfo(triggerPtr.p->triggerInfo, ti);
@@ -29174,7 +29388,7 @@ Dbdict::check_consistency_trigger(Trigge
     case TriggerEvent::TE_CUSTOM:
       if (! (triggerPtr.p->triggerState == TriggerRecord::TS_FAKE_UPGRADE))
       {
-        ndbrequire(triggerPtr.i == indexPtr.p->triggerId);
+        ndbrequire(triggerPtr.p->triggerId == indexPtr.p->triggerId);
       }
       break;
     default:

=== modified file 'storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp'
--- a/storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp	2011-11-03 08:40:19 +0000
+++ b/storage/ndb/src/kernel/blocks/dbdict/Dbdict.hpp	2011-11-19 14:55:47 +0000
@@ -167,6 +167,7 @@ struct sysTab_NDBEVENTS_0 {
  */
 class Dbdict: public SimulatedBlock {
 public:
+
   /*
    *   2.3 RECORD AND FILESIZES
    */
@@ -229,7 +230,7 @@ public:
   };
   typedef Ptr<AttributeRecord> AttributeRecordPtr;
   typedef ArrayPool<AttributeRecord> AttributeRecord_pool;
-  typedef DLHashTable<AttributeRecord,AttributeRecord,AttributeRecord_pool> AttributeRecord_hash;
+  typedef DLMHashTable<AttributeRecord_pool, AttributeRecord> AttributeRecord_hash;
   typedef DLFifoList<AttributeRecord,AttributeRecord,AttributeRecord_pool> AttributeRecord_list;
   typedef LocalDLFifoList<AttributeRecord,AttributeRecord,AttributeRecord_pool> LocalAttributeRecord_list;
 
@@ -249,6 +250,8 @@ public:
 
   struct TableRecord {
     TableRecord(){ m_upgrade_trigger_handling.m_upgrade = false;}
+    static bool isCompatible(Uint32 type) { return DictTabInfo::isTable(type) || DictTabInfo::isIndex(type); }
+
     Uint32 maxRowsLow;
     Uint32 maxRowsHigh;
     Uint32 minRowsLow;
@@ -434,8 +437,9 @@ public:
     Uint32 indexStatBgRequest;
   };
 
-  TableRecord_pool c_tableRecordPool;
-  RSS_AP_SNAPSHOT(c_tableRecordPool);
+  TableRecord_pool c_tableRecordPool_;
+  RSS_AP_SNAPSHOT(c_tableRecordPool_);
+  TableRecord_pool& get_pool(TableRecordPtr) { return c_tableRecordPool_; }
 
   /**  Node Group and Tablespace id+version + range or list data.
     *  This is only stored temporarily in DBDICT during an ongoing
@@ -455,6 +459,7 @@ public:
    */
   struct TriggerRecord {
     TriggerRecord() {}
+    static bool isCompatible(Uint32 type) { return DictTabInfo::isTrigger(type); }
 
     /** Trigger state */
     enum TriggerState {
@@ -501,8 +506,9 @@ public:
   typedef ArrayPool<TriggerRecord> TriggerRecord_pool;
 
   Uint32 c_maxNoOfTriggers;
-  TriggerRecord_pool c_triggerRecordPool;
-  RSS_AP_SNAPSHOT(c_triggerRecordPool);
+  TriggerRecord_pool c_triggerRecordPool_;
+  TriggerRecord_pool& get_pool(TriggerRecordPtr) { return c_triggerRecordPool_;}
+  RSS_AP_SNAPSHOT(c_triggerRecordPool_);
 
   /**
    * Information for each FS connection.
@@ -604,6 +610,7 @@ public:
 
   struct File {
     File() {}
+    static bool isCompatible(Uint32 type) { return DictTabInfo::isFile(type); }
 
     Uint32 key;
     Uint32 m_magic;
@@ -620,19 +627,15 @@ public:
       Uint32 prevList;
       Uint32 nextPool;
     };
-    Uint32 nextHash, prevHash;
-
-    Uint32 hashValue() const { return key;}
-    bool equal(const File& obj) const { return key == obj.key;}
   };
   typedef Ptr<File> FilePtr;
   typedef RecordPool<File, RWPool> File_pool;
   typedef DLListImpl<File_pool, File> File_list;
   typedef LocalDLListImpl<File_pool, File> Local_file_list;
-  typedef KeyTableImpl<File_pool, File> File_hash;
 
   struct Filegroup {
     Filegroup(){}
+    static bool isCompatible(Uint32 type) { return DictTabInfo::isFilegroup(type); }
 
     Uint32 key;
     Uint32 m_obj_ptr_i;
@@ -657,25 +660,34 @@ public:
     union {
       Uint32 nextPool;
       Uint32 nextList;
-      Uint32 nextHash;
     };
-    Uint32 prevHash;
-
-    Uint32 hashValue() const { return key;}
-    bool equal(const Filegroup& obj) const { return key == obj.key;}
   };
   typedef Ptr<Filegroup> FilegroupPtr;
   typedef RecordPool<Filegroup, RWPool> Filegroup_pool;
-  typedef KeyTableImpl<Filegroup_pool, Filegroup> Filegroup_hash;
 
   File_pool c_file_pool;
   Filegroup_pool c_filegroup_pool;
-  File_hash c_file_hash;
-  Filegroup_hash c_filegroup_hash;
+
+  File_pool& get_pool(FilePtr) { return c_file_pool; }
+  Filegroup_pool& get_pool(FilegroupPtr) { return c_filegroup_pool; }
 
   RopePool c_rope_pool;
   RSS_AP_SNAPSHOT(c_rope_pool);
 
+  template <typename T, typename U = T> struct HashedById {
+    static Uint32& nextHash(U& t) { return t.nextHash_by_id; }
+    static Uint32& prevHash(U& t) { return t.prevHash_by_id; }
+    static Uint32 hashValue(T const& t) { return t.hashValue_by_id(); }
+    static bool equal(T const& lhs, T const& rhs) { return lhs.equal_by_id(rhs); }
+  };
+
+  template <typename T, typename U = T> struct HashedByName {
+    static Uint32& nextHash(U& t) { return t.nextHash_by_name; }
+    static Uint32& prevHash(U& t) { return t.prevHash_by_name; }
+    static Uint32 hashValue(T const& t) { return t.hashValue_by_name(); }
+    static bool equal(T const& lhs, T const& rhs) { return lhs.equal_by_name(rhs); }
+  };
+
   struct DictObject {
     DictObject() {
       m_trans_key = 0;
@@ -683,6 +695,7 @@ public:
     };
     Uint32 m_id;
     Uint32 m_type;
+    Uint32 m_object_ptr_i;
     Uint32 m_ref_count;
     RopeHandle m_name;
     union {
@@ -694,21 +707,34 @@ public:
       Uint32 nextPool;
       Uint32 nextList;
     };
-    Uint32 nextHash;
-    Uint32 prevHash;
 
-    Uint32 hashValue() const { return m_name.hashValue();}
-    bool equal(const DictObject& obj) const {
-      if(obj.hashValue() == hashValue()){
+    // SchemaOp -> DictObject -> SchemaTrans
+    Uint32 m_trans_key;
+    Uint32 m_op_ref_count;
+
+    // HashedById
+    Uint32 nextHash_by_id;
+    Uint32 prevHash_by_id;
+    Uint32 hashValue_by_id() const { return m_id; }
+    bool equal_by_id(DictObject const& obj) const {
+      bool isTrigger = DictTabInfo::isTrigger(m_type);
+      bool objIsTrigger = DictTabInfo::isTrigger(obj.m_type);
+      return (isTrigger == objIsTrigger) &&
+             (obj.m_id == m_id);
+    }
+
+    // HashedByName
+    Uint32 nextHash_by_name;
+    Uint32 prevHash_by_name;
+    Uint32 hashValue_by_name() const { return m_name.hashValue(); }
+    bool equal_by_name(DictObject const& obj) const {
+      if(obj.hashValue_by_name() == hashValue_by_name()){
 	ConstRope r(* m_key.m_pool, obj.m_name);
 	return r.compare(m_key.m_name_ptr, m_key.m_name_len) == 0;
       }
       return false;
     }
 
-    // SchemaOp -> DictObject -> SchemaTrans
-    Uint32 m_trans_key;
-    Uint32 m_op_ref_count;
 #ifdef VM_TRACE
     void print(NdbOut&) const;
 #endif
@@ -716,13 +742,77 @@ public:
 
   typedef Ptr<DictObject> DictObjectPtr;
   typedef ArrayPool<DictObject> DictObject_pool;
-  typedef DLHashTable<DictObject,DictObject,DictObject_pool> DictObject_hash;
+  typedef DLMHashTable<DictObject_pool, DictObject, HashedByName<DictObject> > DictObjectName_hash;
+  typedef DLMHashTable<DictObject_pool, DictObject, HashedById<DictObject> > DictObjectId_hash;
   typedef SLList<DictObject> DictObject_list;
 
-  DictObject_hash c_obj_hash; // Name
+  DictObjectName_hash c_obj_name_hash; // Name (not temporary TableRecords)
+  DictObjectId_hash c_obj_id_hash; // Schema file id / Trigger id
   DictObject_pool c_obj_pool;
   RSS_AP_SNAPSHOT(c_obj_pool);
 
+  template<typename T> bool find_object(DictObjectPtr& obj, Ptr<T>& object, Uint32 id)
+  {
+    if (!find_object(obj, id))
+    {
+      object.setNull();
+      return false;
+    }
+    if (!T::isCompatible(obj.p->m_type))
+    {
+      object.setNull();
+      return false;
+    }
+    get_pool(object).getPtr(object, obj.p->m_object_ptr_i);
+    return !object.isNull();
+  }
+
+  template<typename T> bool find_object(Ptr<T>& object, Uint32 id)
+  {
+    DictObjectPtr obj;
+    return find_object(obj, object, id);
+  }
+
+  bool find_object(DictObjectPtr& obj, Ptr<TriggerRecord>& object, Uint32 id)
+  {
+    if (!find_trigger_object(obj, id))
+    {
+      object.setNull();
+      return false;
+    }
+    get_pool(object).getPtr(object, obj.p->m_object_ptr_i);
+    return !object.isNull();
+  }
+
+  bool find_object(DictObjectPtr& object, Uint32 id)
+  {
+    DictObject key;
+    key.m_id = id;
+    key.m_type = 0; // Not a trigger, at least
+    bool ok = c_obj_id_hash.find(object, key);
+    return ok;
+  }
+
+  bool find_trigger_object(DictObjectPtr& object, Uint32 id)
+  {
+    DictObject key;
+    key.m_id = id;
+    key.m_type = DictTabInfo::HashIndexTrigger; // A trigger type
+    bool ok = c_obj_id_hash.find(object, key);
+    return ok;
+  }
+
+  template<typename T> bool link_object(DictObjectPtr obj, Ptr<T> object)
+  {
+    if (!T::isCompatible(obj.p->m_type))
+    {
+      return false;
+    }
+    obj.p->m_object_ptr_i = object.i;
+    object.p->m_obj_ptr_i = obj.i;
+    return true;
+  }
+
   // 1
   DictObject * get_object(const char * name){
     return get_object(name, Uint32(strlen(name) + 1));
@@ -1612,7 +1702,7 @@ private:
   };
 
   typedef RecordPool<SchemaOp,ArenaPool> SchemaOp_pool;
-  typedef DLHashTable<SchemaOp,SchemaOp,SchemaOp_pool> SchemaOp_hash;
+  typedef DLMHashTable<SchemaOp_pool, SchemaOp> SchemaOp_hash;
   typedef DLFifoList<SchemaOp,SchemaOp,SchemaOp_pool>::Head  SchemaOp_head;
   typedef LocalDLFifoList<SchemaOp,SchemaOp,SchemaOp_pool> LocalSchemaOp_list;
 
@@ -1857,7 +1947,7 @@ private:
       assert(false);
       return -1;
     }
-    // DLHashTable
+    // DLMHashTable
     Uint32 trans_key;
     Uint32 nextHash;
     Uint32 prevHash;
@@ -1975,7 +2065,7 @@ private:
   Uint32 check_write_obj(Uint32, Uint32, SchemaFile::EntryState, ErrorInfo&);
 
   typedef RecordPool<SchemaTrans,ArenaPool> SchemaTrans_pool;
-  typedef DLHashTable<SchemaTrans,SchemaTrans,SchemaTrans_pool> SchemaTrans_hash;
+  typedef DLMHashTable<SchemaTrans_pool, SchemaTrans> SchemaTrans_hash;
   typedef DLFifoList<SchemaTrans,SchemaTrans,SchemaTrans_pool> SchemaTrans_list;
 
   SchemaTrans_pool c_schemaTransPool;
@@ -2194,7 +2284,7 @@ private:
     // ArrayPool
     Uint32 nextPool;
 
-    // DLHashTable
+    // DLMHashTable
     Uint32 tx_key;
     Uint32 nextHash;
     Uint32 prevHash;
@@ -2246,7 +2336,7 @@ private:
   };
 
   typedef ArrayPool<TxHandle> TxHandle_pool;
-  typedef DLHashTable<TxHandle,TxHandle,TxHandle_pool> TxHandle_hash;
+  typedef DLMHashTable<TxHandle_pool, TxHandle> TxHandle_hash;
 
   TxHandle_pool c_txHandlePool;
   TxHandle_hash c_txHandleHash;
@@ -2929,6 +3019,7 @@ private:
 
   struct HashMapRecord {
     HashMapRecord(){}
+    static bool isCompatible(Uint32 type) { return DictTabInfo::isHashMap(type); }
 
     /* Table id (array index in DICT and other blocks) */
     union {
@@ -2944,24 +3035,15 @@ private:
      * ptr.i, in g_hash_map
      */
     Uint32 m_map_ptr_i;
-    union {
-      Uint32 nextPool;
-      Uint32 nextHash;
-    };
-    Uint32 prevHash;
-
-    Uint32 hashValue() const { return key;}
-    bool equal(const HashMapRecord& obj) const { return key == obj.key;}
-
+    Uint32 nextPool;
   };
   typedef Ptr<HashMapRecord> HashMapRecordPtr;
   typedef ArrayPool<HashMapRecord> HashMapRecord_pool;
-  typedef KeyTableImpl<HashMapRecord_pool, HashMapRecord> HashMapRecord_hash;
 
   HashMapRecord_pool c_hash_map_pool;
-  HashMapRecord_hash c_hash_map_hash;
   RSS_AP_SNAPSHOT(c_hash_map_pool);
   RSS_AP_SNAPSHOT(g_hash_map);
+  HashMapRecord_pool& get_pool(HashMapRecordPtr) { return c_hash_map_pool; }
 
   struct CreateHashMapRec;
   typedef RecordPool<CreateHashMapRec,ArenaPool> CreateHashMapRec_pool;
@@ -3668,14 +3750,17 @@ private:
   /* ------------------------------------------------------------ */
   // Drop Table Handling
   /* ------------------------------------------------------------ */
-  void releaseTableObject(Uint32 tableId, bool removeFromHash = true);
+  void releaseTableObject(Uint32 table_ptr_i, bool removeFromHash = true);
 
   /* ------------------------------------------------------------ */
   // General Stuff
   /* ------------------------------------------------------------ */
-  Uint32 getFreeObjId(Uint32 minId, bool both = false);
+  Uint32 getFreeObjId(bool both = false);
   Uint32 getFreeTableRecord();
+  bool seizeTableRecord(TableRecordPtr& tableRecord, Uint32& schemaFileId);
   Uint32 getFreeTriggerRecord();
+  bool seizeTriggerRecord(TriggerRecordPtr& triggerRecord, Uint32 triggerId);
+  void releaseTriggerObject(Uint32 trigger_ptr_i);
   bool getNewAttributeRecord(TableRecordPtr tablePtr,
 			     AttributeRecordPtr & attrPtr);
   void packTableIntoPages(Signal* signal);
@@ -3928,10 +4013,8 @@ private:
   void initWriteSchemaRecord();
 
   void initNodeRecords();
-  void initTableRecords();
-  void initialiseTableRecord(TableRecordPtr tablePtr);
-  void initTriggerRecords();
-  void initialiseTriggerRecord(TriggerRecordPtr triggerPtr);
+  void initialiseTableRecord(TableRecordPtr tablePtr, Uint32 tableId);
+  void initialiseTriggerRecord(TriggerRecordPtr triggerPtr, Uint32 triggerId);
   void initPageRecords();
 
   Uint32 getFsConnRecord();
@@ -4037,6 +4120,7 @@ protected:
   virtual bool getParam(const char * param, Uint32 * retVal);
 private:
   ArenaAllocator c_arenaAllocator;
+  Uint32 c_noOfMetaTables;
 };
 
 inline bool

=== modified file 'storage/ndb/src/kernel/blocks/dbdih/Dbdih.hpp'
--- a/storage/ndb/src/kernel/blocks/dbdih/Dbdih.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dbdih/Dbdih.hpp	2011-11-14 09:18:48 +0000
@@ -520,7 +520,7 @@ public:
 // Each entry in this array contains a reference to 16 fragment records in a
 // row. Thus finding the correct record is very quick provided the fragment id.
 //-----------------------------------------------------------------------------
-    Uint32 startFid[MAX_NDB_NODES * MAX_FRAG_PER_NODE / NO_OF_FRAGS_PER_CHUNK];
+    Uint32 startFid[MAX_NDB_NODES * MAX_FRAG_PER_LQH / NO_OF_FRAGS_PER_CHUNK];
 
     Uint32 tabFile[2];
     Uint32 connectrec;                                    

=== modified file 'storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp'
--- a/storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp	2011-10-28 14:17:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dbdih/DbdihMain.cpp	2011-11-17 08:33:23 +0000
@@ -7485,6 +7485,9 @@ void Dbdih::execCREATE_FRAGMENTATION_REQ
   Uint32 err = 0;
   const Uint32 defaultFragments = 
     c_fragments_per_node * cnoOfNodeGroups * cnoReplicas;
+  const Uint32 maxFragments =
+    MAX_FRAG_PER_LQH * (getLqhWorkers() ? getLqhWorkers() : 1) *
+    cnoOfNodeGroups * cnoReplicas;
 
   do {
     NodeGroupRecordPtr NGPtr;
@@ -7506,11 +7509,15 @@ void Dbdih::execCREATE_FRAGMENTATION_REQ
       case DictTabInfo::AllNodesMediumTable:
         jam();
         noOfFragments = 2 * defaultFragments;
+        if (noOfFragments > maxFragments)
+          noOfFragments = maxFragments;
         set_default_node_groups(signal, noOfFragments);
         break;
       case DictTabInfo::AllNodesLargeTable:
         jam();
         noOfFragments = 4 * defaultFragments;
+        if (noOfFragments > maxFragments)
+          noOfFragments = maxFragments;
         set_default_node_groups(signal, noOfFragments);
         break;
       case DictTabInfo::SingleFragment:
@@ -7863,7 +7870,7 @@ void Dbdih::execDIADDTABREQ(Signal* sign
   }
 
   union {
-    Uint16 fragments[2 + MAX_FRAG_PER_NODE*MAX_REPLICAS*MAX_NDB_NODES];
+    Uint16 fragments[2 + MAX_FRAG_PER_LQH*MAX_REPLICAS*MAX_NDB_NODES];
     Uint32 align;
   };
   (void)align; // kill warning

=== modified file 'storage/ndb/src/kernel/blocks/dblqh/Dblqh.hpp'
--- a/storage/ndb/src/kernel/blocks/dblqh/Dblqh.hpp	2011-10-28 09:56:57 +0000
+++ b/storage/ndb/src/kernel/blocks/dblqh/Dblqh.hpp	2011-11-14 12:02:56 +0000
@@ -111,6 +111,9 @@ class Lgman;
 #define ZPOS_PREV_PAGE_NO 19
 #define ZPOS_IN_FREE_LIST 20
 
+/* Specify number of log parts used to enable use of more LQH threads */
+#define ZPOS_NO_LOG_PARTS 21
+
 /* ------------------------------------------------------------------------- */
 /*       CONSTANTS FOR THE VARIOUS REPLICA AND NODE TYPES.                   */
 /* ------------------------------------------------------------------------- */
@@ -1929,8 +1932,8 @@ public:
       ,TABLE_READ_ONLY = 9
     };
     
-    UintR fragrec[MAX_FRAG_PER_NODE];
-    Uint16 fragid[MAX_FRAG_PER_NODE];
+    UintR fragrec[MAX_FRAG_PER_LQH];
+    Uint16 fragid[MAX_FRAG_PER_LQH];
     /**
      * Status of the table 
      */
@@ -2834,7 +2837,6 @@ private:
   UintR cfirstfreeLcpLoc;
   UintR clcpFileSize;
 
-#define ZLOG_PART_FILE_SIZE 4
   LogPartRecord *logPartRecord;
   LogPartRecordPtr logPartPtr;
   UintR clogPartFileSize;

=== modified file 'storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.cpp'
--- a/storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.cpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.cpp	2011-11-14 12:02:56 +0000
@@ -20,6 +20,7 @@
 
 NdbLogPartInfo::NdbLogPartInfo(Uint32 instanceNo)
 {
+  LogParts = globalData.ndbLogParts;
   lqhWorkers = globalData.ndbMtLqhWorkers;
   partCount = 0;
   partMask.clear();

=== modified file 'storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.hpp'
--- a/storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dblqh/DblqhCommon.hpp	2011-11-14 12:02:56 +0000
@@ -22,7 +22,10 @@
 #include <Bitmask.hpp>
 
 /*
- * Log part id is from DBDIH.  Number of log parts is fixed as 4.
+ * Log part id is from DBDIH.  The number of log parts is configurable,
+ * with a minimum of 4 parts up to a fixed maximum. The description
+ * below assumes 4 parts.
+ *
  * A log part is identified by log part number (0-3)
  *
  *   log part number = log part id % 4
@@ -38,12 +41,12 @@
  * instance (main instance 0 or worker instances 1-4).
  */
 struct NdbLogPartInfo {
-  enum { LogParts = 4 };
+  Uint32 LogParts;
   NdbLogPartInfo(Uint32 instanceNo);
   Uint32 lqhWorkers;
   Uint32 partCount;
-  Uint16 partNo[LogParts];
-  Bitmask<(LogParts+31)/32> partMask;
+  Uint16 partNo[NDB_MAX_LOG_PARTS];
+  Bitmask<(NDB_MAX_LOG_PARTS+31)/32> partMask;
   Uint32 partNoFromId(Uint32 lpid) const;
   bool partNoOwner(Uint32 lpno) const;
   bool partNoOwner(Uint32 tabId, Uint32 fragId);

=== modified file 'storage/ndb/src/kernel/blocks/dblqh/DblqhInit.cpp'
--- a/storage/ndb/src/kernel/blocks/dblqh/DblqhInit.cpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dblqh/DblqhInit.cpp	2011-11-16 11:05:46 +0000
@@ -117,7 +117,7 @@ void Dblqh::initRecords()
 
   logPartRecord = (LogPartRecord*)allocRecord("LogPartRecord",
 					      sizeof(LogPartRecord), 
-					      clogPartFileSize);
+					      NDB_MAX_LOG_PARTS);
 
   logFileRecord = (LogFileRecord*)allocRecord("LogFileRecord",
 					      sizeof(LogFileRecord),

=== modified file 'storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp'
--- a/storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp	2011-10-28 14:17:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp	2011-11-16 21:58:12 +0000
@@ -1219,7 +1219,35 @@ void Dblqh::execREAD_CONFIG_REQ(Signal*
   const ndb_mgm_configuration_iterator * p = 
     m_ctx.m_config.getOwnConfigIterator();
   ndbrequire(p != 0);
-  
+
+
+  /**
+   * TODO move check of log-parts vs. ndbMtLqhWorkers to better place
+   * (Configuration.cpp ??)
+   */
+  ndbrequire(globalData.ndbLogParts <= NDB_MAX_LOG_PARTS);
+  if (globalData.ndbMtLqhWorkers > globalData.ndbLogParts)
+  {
+    char buf[255];
+    BaseString::snprintf(buf, sizeof(buf),
+      "Trying to start %d LQH workers with only %d log parts, try initial"
+      " node restart to be able to use more LQH workers.",
+      globalData.ndbMtLqhWorkers, globalData.ndbLogParts);
+    progError(__LINE__, NDBD_EXIT_INVALID_CONFIG, buf);
+  }
+
+  if (globalData.ndbLogParts != 4 &&
+      globalData.ndbLogParts != 8 &&
+      globalData.ndbLogParts != 16)
+  {
+    char buf[255];
+    BaseString::snprintf(buf, sizeof(buf),
+      "Trying to start with %d log parts, number of log parts can"
+      " only be set to 4, 8 or 16.",
+      globalData.ndbLogParts);
+    progError(__LINE__, NDBD_EXIT_INVALID_CONFIG, buf);
+  }
+
   cnoLogFiles = 8;
   ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_DB_NO_REDOLOG_FILES, 
 					&cnoLogFiles));
@@ -1247,7 +1275,7 @@ void Dblqh::execREAD_CONFIG_REQ(Signal*
   ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_LQH_TABLE, &ctabrecFileSize));
   ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_LQH_TC_CONNECT, 
 					&ctcConnectrecFileSize));
-  clogFileFileSize       = 4 * cnoLogFiles;
+  clogFileFileSize = clogPartFileSize * cnoLogFiles;
   ndbrequire(!ndb_mgm_get_int_parameter(p, CFG_LQH_SCAN, &cscanrecFileSize));
   cmaxAccOps = cscanrecFileSize * MAX_PARALLEL_OP_PER_SCAN;
 
@@ -1889,7 +1917,7 @@ void Dblqh::execLQHFRAGREQ(Signal* signa
     ptrCheckGuard(tTablePtr, ctabrecFileSize, tablerec);
     FragrecordPtr tFragPtr;
     tFragPtr.i = RNIL;
-    for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+    for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tTablePtr.p->fragid); i++) {
       if (tTablePtr.p->fragid[i] == fragptr.p->fragId) {
         jam();
         tFragPtr.i = tTablePtr.p->fragrec[i];
@@ -2633,7 +2661,7 @@ void Dblqh::removeTable(Uint32 tableId)
   tabptr.i = tableId;
   ptrCheckGuard(tabptr, ctabrecFileSize, tablerec);
   
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragid); i++) {
     jam();
     if (tabptr.p->fragid[i] != ZNIL) {
       jam();
@@ -2778,7 +2806,7 @@ Dblqh::wait_reorg_suma_filter_enabled(Si
 void
 Dblqh::commit_reorg(TablerecPtr tablePtr)
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tablePtr.p->fragrec); i++)
   {
     jam();
     Ptr<Fragrecord> fragPtr;
@@ -11205,7 +11233,7 @@ void Dblqh::scanTupkeyConfLab(Signal* si
     tdata4 += sendKeyinfo20(signal, scanptr.p, tcConnectptr.p);
   }//if
   ndbrequire(scanptr.p->m_curr_batch_size_rows < MAX_PARALLEL_OP_PER_SCAN);
-  scanptr.p->m_curr_batch_size_bytes+= tdata4;
+  scanptr.p->m_curr_batch_size_bytes+= tdata4 * sizeof(Uint32);
   scanptr.p->m_curr_batch_size_rows = rows + 1;
   scanptr.p->m_last_row = tdata5;
   if (scanptr.p->check_scan_batch_completed() | tdata5){
@@ -11832,6 +11860,7 @@ void Dblqh::releaseScanrec(Signal* signa
 /* ------------------------------------------------------------------------
  * -------              SEND KEYINFO20 TO API                       ------- 
  *
+ * Return: Length in number of Uint32 words
  * ------------------------------------------------------------------------  */
 Uint32 Dblqh::sendKeyinfo20(Signal* signal, 
 			    ScanRecord * scanP, 
@@ -11968,7 +11997,9 @@ Uint32 Dblqh::sendKeyinfo20(Signal* sign
 void Dblqh::sendScanFragConf(Signal* signal, Uint32 scanCompleted) 
 {
   Uint32 completed_ops= scanptr.p->m_curr_batch_size_rows;
-  Uint32 total_len= scanptr.p->m_curr_batch_size_bytes;
+  Uint32 total_len= scanptr.p->m_curr_batch_size_bytes / sizeof(Uint32);
+  ndbassert((scanptr.p->m_curr_batch_size_bytes % sizeof(Uint32)) == 0);
+
   scanptr.p->scanTcWaiting = 0;
 
   if(ERROR_INSERTED(5037)){
@@ -14641,7 +14672,7 @@ void Dblqh::initGcpRecLab(Signal* signal
   }//for
   // initialize un-used part
   Uint32 Ti;
-  for (Ti = clogPartFileSize; Ti < ZLOG_PART_FILE_SIZE; Ti++) {
+  for (Ti = clogPartFileSize; Ti < NDB_MAX_LOG_PARTS; Ti++) {
     gcpPtr.p->gcpFilePtr[Ti] = ZNIL;
     gcpPtr.p->gcpPageNo[Ti] = ZNIL;
     gcpPtr.p->gcpSyncReady[Ti] = FALSE;
@@ -15695,7 +15726,10 @@ void Dblqh::initWriteEndLab(Signal* sign
 /*---------------------------------------------------------------------------*/
 /* PAGE ZERO IN FILE ZERO MUST SET LOG LAP TO ONE SINCE IT HAS STARTED       */
 /* WRITING TO THE LOG, ALSO GLOBAL CHECKPOINTS ARE SET TO ZERO.              */
+/* Set number of log parts used to ensure we use correct number of log parts */
+/* at system restart. Was previously hardcoded to 4.                         */
 /*---------------------------------------------------------------------------*/
+    logPagePtr.p->logPageWord[ZPOS_NO_LOG_PARTS]= globalData.ndbLogParts;
     logPagePtr.p->logPageWord[ZPOS_LOG_LAP] = 1;
     logPagePtr.p->logPageWord[ZPOS_MAX_GCI_STARTED] = 0;
     logPagePtr.p->logPageWord[ZPOS_MAX_GCI_COMPLETED] = 0;
@@ -15878,6 +15912,8 @@ void Dblqh::initLogpage(Signal* signal)
 {
   TcConnectionrecPtr ilpTcConnectptr;
 
+  /* Ensure all non-used header words are zero */
+  bzero(logPagePtr.p, sizeof(Uint32) * ZPAGE_HEADER_SIZE);
   logPagePtr.p->logPageWord[ZPOS_LOG_LAP] = logPartPtr.p->logLap;
   logPagePtr.p->logPageWord[ZPOS_MAX_GCI_COMPLETED] = 
         logPartPtr.p->logPartNewestCompletedGCI;
@@ -15885,6 +15921,7 @@ void Dblqh::initLogpage(Signal* signal)
   logPagePtr.p->logPageWord[ZPOS_VERSION] = NDB_VERSION;
   logPagePtr.p->logPageWord[ZPOS_NO_LOG_FILES] = logPartPtr.p->noLogFiles;
   logPagePtr.p->logPageWord[ZCURR_PAGE_INDEX] = ZPAGE_HEADER_SIZE;
+  logPagePtr.p->logPageWord[ZPOS_NO_LOG_PARTS]= globalData.ndbLogParts;
   ilpTcConnectptr.i = logPartPtr.p->firstLogTcrec;
   if (ilpTcConnectptr.i != RNIL) {
     jam();
@@ -16420,6 +16457,35 @@ void Dblqh::openSrFrontpageLab(Signal* s
  * -------------------------------------------------------------------------- */
 void Dblqh::readSrFrontpageLab(Signal* signal) 
 {
+  Uint32 num_parts_used;
+  if (!ndb_configurable_log_parts(logPagePtr.p->logPageWord[ZPOS_VERSION])) {
+    jam();
+    num_parts_used= 4;
+  }
+  else
+  {
+    jam();
+    num_parts_used = logPagePtr.p->logPageWord[ZPOS_NO_LOG_PARTS];
+  }
+  /* Verify that number of log parts >= number of LQH workers */
+  if (globalData.ndbMtLqhWorkers > num_parts_used) {
+    char buf[255];
+    BaseString::snprintf(buf, sizeof(buf),
+      "Trying to start %d LQH workers with only %d log parts, try initial"
+      " node restart to be able to use more LQH workers.",
+      globalData.ndbMtLqhWorkers, num_parts_used);
+    progError(__LINE__, NDBD_EXIT_INVALID_CONFIG, buf);
+  }
+  if (num_parts_used != globalData.ndbLogParts)
+  {
+    char buf[255];
+    BaseString::snprintf(buf, sizeof(buf),
+      "Can only change NoOfLogParts through initial node restart, old"
+      " value of NoOfLogParts = %d, tried using %d",
+      num_parts_used, globalData.ndbLogParts);
+    progError(__LINE__, NDBD_EXIT_INVALID_CONFIG, buf);
+  }
+
   Uint32 fileNo = logPagePtr.p->logPageWord[ZPAGE_HEADER_SIZE + ZPOS_FILE_NO];
   if (fileNo == 0) {
     jam();
@@ -20047,7 +20113,7 @@ void Dblqh::deleteFragrec(Uint32 fragId)
 {
   Uint32 indexFound= RNIL;
   fragptr.i = RNIL;
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragid); i++) {
     jam();
     if (tabptr.p->fragid[i] == fragId) {
       fragptr.i = tabptr.p->fragrec[i];
@@ -20262,7 +20328,7 @@ Dblqh::getFirstInLogQueue(Signal* signal
 /* ---------------------------------------------------------------- */
 bool Dblqh::getFragmentrec(Signal* signal, Uint32 fragId) 
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragid); i++) {
     jam();
     if (tabptr.p->fragid[i] == fragId) {
       fragptr.i = tabptr.p->fragrec[i];
@@ -20325,7 +20391,7 @@ void Dblqh::initialiseGcprec(Signal* sig
   if (cgcprecFileSize != 0) {
     for (gcpPtr.i = 0; gcpPtr.i < cgcprecFileSize; gcpPtr.i++) {
       ptrAss(gcpPtr, gcpRecord);
-      for (tigpIndex = 0; tigpIndex < ZLOG_PART_FILE_SIZE; tigpIndex++) {
+      for (tigpIndex = 0; tigpIndex < NDB_MAX_LOG_PARTS; tigpIndex++) {
         gcpPtr.p->gcpLogPartState[tigpIndex] = ZIDLE;
         gcpPtr.p->gcpSyncReady[tigpIndex] = ZFALSE;
       }//for
@@ -20613,7 +20679,7 @@ void Dblqh::initialiseTabrec(Signal* sig
       tabptr.p->tableStatus = Tablerec::NOT_DEFINED;
       tabptr.p->usageCountR = 0;
       tabptr.p->usageCountW = 0;
-      for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+      for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragid); i++) {
         tabptr.p->fragid[i] = ZNIL;
         tabptr.p->fragrec[i] = RNIL;
       }//for
@@ -20883,7 +20949,7 @@ bool Dblqh::insertFragrec(Signal* signal
     terrorCode = ZNO_FREE_FRAGMENTREC;
     return false;
   }
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabptr.p->fragid); i++) {
     jam();
     if (tabptr.p->fragid[i] == ZNIL) {
       jam();
@@ -22576,7 +22642,7 @@ Dblqh::execDUMP_STATE_ORD(Signal* signal
 		  i, tabPtr.p->tableStatus,
                   tabPtr.p->usageCountR, tabPtr.p->usageCountW);
 
-	for (Uint32 j = 0; j<MAX_FRAG_PER_NODE; j++)
+	for (Uint32 j = 0; j<NDB_ARRAY_SIZE(tabPtr.p->fragrec); j++)
 	{
 	  FragrecordPtr fragPtr;
 	  if ((fragPtr.i = tabPtr.p->fragrec[j]) != RNIL)

=== modified file 'storage/ndb/src/kernel/blocks/dbspj/Dbspj.hpp'
--- a/storage/ndb/src/kernel/blocks/dbspj/Dbspj.hpp	2011-09-29 11:43:27 +0000
+++ b/storage/ndb/src/kernel/blocks/dbspj/Dbspj.hpp	2011-11-09 08:54:55 +0000
@@ -871,6 +871,7 @@ public:
     Uint32 m_senderRef;
     Uint32 m_senderData;
     Uint32 m_rootResultData;
+    Uint32 m_rootFragId;
     Uint32 m_transId[2];
     TreeNode_list::Head m_nodes;
     TreeNodeCursor_list::Head m_cursor_nodes;

=== modified file 'storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp'
--- a/storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp	2011-10-31 09:49:29 +0000
+++ b/storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp	2011-11-14 14:18:01 +0000
@@ -482,6 +482,7 @@ Dbspj::do_init(Request* requestP, const
   requestP->m_outstanding = 0;
   requestP->m_transId[0] = req->transId1;
   requestP->m_transId[1] = req->transId2;
+  requestP->m_rootFragId = LqhKeyReq::getFragmentId(req->fragmentData);
   bzero(requestP->m_lookup_node_data, sizeof(requestP->m_lookup_node_data));
 #ifdef SPJ_TRACE_TIME
   requestP->m_cnt_batches = 0;
@@ -777,6 +778,7 @@ Dbspj::do_init(Request* requestP, const
   requestP->m_transId[0] = req->transId1;
   requestP->m_transId[1] = req->transId2;
   requestP->m_rootResultData = req->resultData;
+  requestP->m_rootFragId = req->fragmentNoKeyLen;
   bzero(requestP->m_lookup_node_data, sizeof(requestP->m_lookup_node_data));
 #ifdef SPJ_TRACE_TIME
   requestP->m_cnt_batches = 0;
@@ -1530,7 +1532,17 @@ Dbspj::releaseNodeRows(Ptr<Request> requ
       releaseRow(requestPtr, pos);
       cnt++;
     }
-    treeNodePtr.p->m_row_map.init();
+
+    // Release the (now empty) RowMap
+    RowMap& map = treeNodePtr.p->m_row_map;
+    if (!map.isNull())
+    {
+      jam();
+      RowRef ref;
+      map.copyto(ref);
+      releaseRow(requestPtr, ref);  // Map was allocated in row memory
+      map.init();
+    }
     DEBUG("RowMapIterator: released " << cnt << " rows!");
   }
 }
@@ -2187,12 +2199,6 @@ Dbspj::storeRow(Ptr<Request> requestPtr,
     jam();
     return DbspjErr::OutOfRowMemory;
   }
-
-  row.m_type = RowPtr::RT_LINEAR;
-  row.m_row_data.m_linear.m_row_ref = ref;
-  row.m_row_data.m_linear.m_header = (RowPtr::Header*)(dstptr + linklen);
-  row.m_row_data.m_linear.m_data = dstptr + linklen + headlen;
-
   memcpy(dstptr + linklen, headptr, 4 * headlen);
   copy(dstptr + linklen + headlen, dataPtr);
 
@@ -2205,9 +2211,30 @@ Dbspj::storeRow(Ptr<Request> requestPtr,
   else
   {
     jam();
-    return add_to_map(requestPtr, treeNodePtr, row.m_src_correlation, ref);
+    Uint32 error = add_to_map(requestPtr, treeNodePtr, row.m_src_correlation, ref);
+    if (unlikely(error))
+      return error;
+  }
+
+  /**
+   * Refetch the pointer to the alloc'ed row memory before creating the
+   * RowPtr, as add_to_xxx above may have reorganized memory, causing
+   * the alloc'ed row to be moved.
+   */
+  Uint32 * rowptr = 0;
+  if (ref.m_allocator == 0)
+  {
+    jam();
+    rowptr = get_row_ptr_stack(ref);
+  }
+  else
+  {
+    jam();
+    rowptr = get_row_ptr_var(ref);
   }
 
+//ndbrequire(rowptr==dstptr);  // The row may have moved; that is now handled
+  setupRowPtr(treeNodePtr, row, ref, rowptr);
   return 0;
 }
 
@@ -4615,12 +4642,17 @@ Dbspj::execDIH_SCAN_TAB_CONF(Signal* sig
   Ptr<Request> requestPtr;
   m_request_pool.getPtr(requestPtr, treeNodePtr.p->m_requestPtrI);
 
+  // Add a skew to the fragment lists so that we don't scan the
+  // same subset of frags from all SPJ requests in case the scan
+  // is not 'T_SCAN_PARALLEL'.
+  Uint16 fragNoOffs = requestPtr.p->m_rootFragId % fragCount;
+
   Ptr<ScanFragHandle> fragPtr;
   Local_ScanFragHandle_list list(m_scanfraghandle_pool, data.m_fragments);
   if (likely(m_scanfraghandle_pool.seize(requestPtr.p->m_arena, fragPtr)))
   {
     jam();
-    fragPtr.p->init(0);
+    fragPtr.p->init(fragNoOffs);
     fragPtr.p->m_treeNodePtrI = treeNodePtr.i;
     list.addLast(fragPtr);
   }
@@ -4686,10 +4718,11 @@ Dbspj::execDIH_SCAN_TAB_CONF(Signal* sig
     {
       jam();
       Ptr<ScanFragHandle> fragPtr;
+      Uint16 fragNo = (fragNoOffs+i) % fragCount;
       if (likely(m_scanfraghandle_pool.seize(requestPtr.p->m_arena, fragPtr)))
       {
         jam();
-        fragPtr.p->init(i);
+        fragPtr.p->init(fragNo);
         fragPtr.p->m_treeNodePtrI = treeNodePtr.i;
         list.addLast(fragPtr);
       }
@@ -5192,6 +5225,7 @@ Dbspj::scanIndex_send(Signal* signal,
   jam();
   ndbassert(bs_bytes > 0);
   ndbassert(bs_rows > 0);
+  ndbassert(bs_rows <= bs_bytes);
   /**
    * if (m_bits & prunemask):
    * - Range keys sliced out to each ScanFragHandle
@@ -5420,6 +5454,7 @@ Dbspj::scanIndex_execSCAN_FRAGCONF(Signa
 
   Uint32 rows = conf->completedOps;
   Uint32 done = conf->fragmentCompleted;
+  Uint32 bytes = conf->total_len * sizeof(Uint32);
 
   Uint32 state = fragPtr.p->m_state;
   ScanIndexData& data = treeNodePtr.p->m_scanindex_data;
@@ -5435,9 +5470,9 @@ Dbspj::scanIndex_execSCAN_FRAGCONF(Signa
 
   requestPtr.p->m_rows += rows;
   data.m_totalRows += rows;
-  data.m_totalBytes += conf->total_len;
+  data.m_totalBytes += bytes;
   data.m_largestBatchRows = MAX(data.m_largestBatchRows, rows);
-  data.m_largestBatchBytes = MAX(data.m_largestBatchBytes, conf->total_len);
+  data.m_largestBatchBytes = MAX(data.m_largestBatchBytes, bytes);
 
   if (!treeNodePtr.p->isLeaf())
   {
@@ -5532,37 +5567,43 @@ Dbspj::scanIndex_execSCAN_FRAGCONF(Signa
         org->batch_size_rows / data.m_parallelism * (data.m_parallelism - 1)
         + data.m_totalRows;
       
-      // Number of rows that we can still fetch in this batch.
+      // Number of rows & bytes that we can still fetch in this batch.
       const Int32 remainingRows 
         = static_cast<Int32>(org->batch_size_rows - maxCorrVal);
-      
+      const Int32 remainingBytes 
+        = static_cast<Int32>(org->batch_size_bytes - data.m_totalBytes);
+
       if (remainingRows >= data.m_frags_not_started &&
+          remainingBytes >= data.m_frags_not_started &&
           /**
            * Check that (remaning row capacity)/(remaining fragments) is 
            * greater or equal to (rows read so far)/(finished fragments).
            */
           remainingRows * static_cast<Int32>(data.m_parallelism) >=
-          static_cast<Int32>(data.m_totalRows * data.m_frags_not_started) &&
-          (org->batch_size_bytes - data.m_totalBytes) * data.m_parallelism >=
-          data.m_totalBytes * data.m_frags_not_started)
+            static_cast<Int32>(data.m_totalRows * data.m_frags_not_started) &&
+          remainingBytes * static_cast<Int32>(data.m_parallelism) >=
+            static_cast<Int32>(data.m_totalBytes * data.m_frags_not_started))
       {
         jam();
         Uint32 batchRange = maxCorrVal;
+        Uint32 bs_rows  = remainingRows / data.m_frags_not_started;
+        Uint32 bs_bytes = remainingBytes / data.m_frags_not_started;
+
         DEBUG("::scanIndex_execSCAN_FRAGCONF() first batch was not full."
               " Asking for new batches from " << data.m_frags_not_started <<
               " fragments with " << 
-              remainingRows / data.m_frags_not_started 
-              <<" rows and " << 
-              (org->batch_size_bytes - data.m_totalBytes)
-              / data.m_frags_not_started 
-              << " bytes.");
+              bs_rows  <<" rows and " << 
+              bs_bytes << " bytes.");
+
+        if (unlikely(bs_rows > bs_bytes))
+          bs_rows = bs_bytes;
+
         scanIndex_send(signal,
                        requestPtr,
                        treeNodePtr,
                        data.m_frags_not_started,
-                       (org->batch_size_bytes - data.m_totalBytes)
-                       / data.m_frags_not_started,
-                       remainingRows / data.m_frags_not_started,
+                       bs_bytes,
+                       bs_rows,
                        batchRange);
         return;
       }

=== modified file 'storage/ndb/src/kernel/blocks/dbtc/Dbtc.hpp'
--- a/storage/ndb/src/kernel/blocks/dbtc/Dbtc.hpp	2011-10-28 09:56:57 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtc/Dbtc.hpp	2011-11-17 08:49:40 +0000
@@ -1769,6 +1769,7 @@ private:
     Uint64 cabortCount;
     Uint64 c_scan_count;
     Uint64 c_range_scan_count;
+    Uint64 clocalReadCount;
 
     // Resource usage counter(not monotonic)
     Uint32 cconcurrentOp;
@@ -1783,6 +1784,7 @@ private:
      cabortCount(0),
      c_scan_count(0),
      c_range_scan_count(0),
+     clocalReadCount(0),
      cconcurrentOp(0) {}
 
     Uint32 build_event_rep(Signal* signal)
@@ -1800,6 +1802,7 @@ private:
       const Uint32 abortCount =       diff(signal, 13, cabortCount);
       const Uint32 scan_count =       diff(signal, 15, c_scan_count);
       const Uint32 range_scan_count = diff(signal, 17, c_range_scan_count);
+      const Uint32 localread_count = diff(signal, 19, clocalReadCount);
 
       signal->theData[0] = NDB_LE_TransReportCounters;
       signal->theData[1] = transCount;
@@ -1812,7 +1815,8 @@ private:
       signal->theData[8] = abortCount;
       signal->theData[9] = scan_count;
       signal->theData[10] = range_scan_count;
-      return 11;
+      signal->theData[11] = localread_count;
+      return 12;
     }
 
     Uint32 build_continueB(Signal* signal) const
@@ -1821,7 +1825,9 @@ private:
       const Uint64* vars[] = {
         &cattrinfoCount, &ctransCount, &ccommitCount,
         &creadCount, &csimpleReadCount, &cwriteCount,
-        &cabortCount, &c_scan_count, &c_range_scan_count };
+        &cabortCount, &c_scan_count, &c_range_scan_count,
+        &clocalReadCount
+      };
       const size_t num = sizeof(vars)/sizeof(vars[0]);
 
       for (size_t i = 0; i < num; i++)

=== modified file 'storage/ndb/src/kernel/blocks/dbtc/DbtcMain.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtc/DbtcMain.cpp	2011-10-23 08:34:49 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtc/DbtcMain.cpp	2011-11-17 08:49:40 +0000
@@ -3356,7 +3356,10 @@ void Dbtc::tckeyreq050Lab(Signal* signal
     jam();
     regTcPtr->lastReplicaNo = 0;
     regTcPtr->noOfNodes = 1;
-  } 
+
+    if (regTcPtr->tcNodedata[0] == getOwnNodeId())
+      c_counters.clocalReadCount++;
+  }
   else if (Toperation == ZUNLOCK)
   {
     regTcPtr->m_special_op_flags &= ~TcConnectRecord::SOF_REORG_MOVING;
@@ -13260,7 +13263,8 @@ void Dbtc::execDBINFO_SCANREQ(Signal *si
       { Ndbinfo::WRITES_COUNTER, c_counters.cwriteCount },
       { Ndbinfo::ABORTS_COUNTER, c_counters.cabortCount },
       { Ndbinfo::TABLE_SCANS_COUNTER, c_counters.c_scan_count },
-      { Ndbinfo::RANGE_SCANS_COUNTER, c_counters.c_range_scan_count }
+      { Ndbinfo::RANGE_SCANS_COUNTER, c_counters.c_range_scan_count },
+      { Ndbinfo::LOCAL_READ_COUNTER, c_counters.clocalReadCount }
     };
     const size_t num_counters = sizeof(counters) / sizeof(counters[0]);
 

=== modified file 'storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp'
--- a/storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp	2011-10-07 16:12:13 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtup/Dbtup.hpp	2011-11-14 09:18:48 +0000
@@ -1135,8 +1135,8 @@ ArrayPool<TupTriggerData> c_triggerPool;
     // List of ordered indexes
     DLList<TupTriggerData> tuxCustomTriggers;
     
-    Uint32 fragid[MAX_FRAG_PER_NODE];
-    Uint32 fragrec[MAX_FRAG_PER_NODE];
+    Uint32 fragid[MAX_FRAG_PER_LQH];
+    Uint32 fragrec[MAX_FRAG_PER_LQH];
 
     union {
       struct {

=== modified file 'storage/ndb/src/kernel/blocks/dbtup/DbtupDiskAlloc.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtup/DbtupDiskAlloc.cpp	2011-02-01 23:27:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtup/DbtupDiskAlloc.cpp	2011-11-14 09:18:48 +0000
@@ -1546,7 +1546,7 @@ Dbtup::disk_restart_undo(Signal* signal,
     Ptr<Tablerec> tabPtr;
     tabPtr.i= rec->m_table;
     ptrCheckGuard(tabPtr, cnoOfTablerec, tablerec);
-    for(Uint32 i = 0; i<MAX_FRAG_PER_NODE; i++)
+    for(Uint32 i = 0; i<NDB_ARRAY_SIZE(tabPtr.p->fragrec); i++)
       if (tabPtr.p->fragrec[i] != RNIL)
 	disk_restart_undo_lcp(tabPtr.i, tabPtr.p->fragid[i], 
 			      Fragrecord::UC_CREATE, 0);
@@ -1566,7 +1566,7 @@ Dbtup::disk_restart_undo(Signal* signal,
     Ptr<Tablerec> tabPtr;
     tabPtr.i= rec->m_table;
     ptrCheckGuard(tabPtr, cnoOfTablerec, tablerec);
-    for(Uint32 i = 0; i<MAX_FRAG_PER_NODE; i++)
+    for(Uint32 i = 0; i<NDB_ARRAY_SIZE(tabPtr.p->fragrec); i++)
       if (tabPtr.p->fragrec[i] != RNIL)
 	disk_restart_undo_lcp(tabPtr.i, tabPtr.p->fragid[i], 
 			      Fragrecord::UC_CREATE, 0);

=== modified file 'storage/ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp	2011-10-07 16:12:13 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtup/DbtupExecQuery.cpp	2011-11-14 09:18:48 +0000
@@ -3920,7 +3920,7 @@ Dbtup::validate_page(Tablerec* regTabPtr
   if(mm_vars == 0)
     return;
   
-  for(Uint32 F= 0; F<MAX_FRAG_PER_NODE; F++)
+  for(Uint32 F= 0; F<NDB_ARRAY_SIZE(regTabPtr->fragrec); F++)
   {
     FragrecordPtr fragPtr;
 

=== modified file 'storage/ndb/src/kernel/blocks/dbtup/DbtupGen.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtup/DbtupGen.cpp	2011-10-07 16:12:13 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtup/DbtupGen.cpp	2011-11-14 09:18:48 +0000
@@ -43,9 +43,10 @@ extern EventLogger * g_eventLogger;
 
 void Dbtup::initData() 
 {
-  cnoOfFragrec = MAX_FRAG_PER_NODE;
-  cnoOfFragoprec = MAX_FRAG_PER_NODE;
-  cnoOfAlterTabOps = MAX_FRAG_PER_NODE;
+  TablerecPtr tablePtr;
+  cnoOfFragrec = NDB_ARRAY_SIZE(tablePtr.p->fragrec);
+  cnoOfFragoprec = NDB_ARRAY_SIZE(tablePtr.p->fragrec);
+  cnoOfAlterTabOps = NDB_ARRAY_SIZE(tablePtr.p->fragrec);
   c_maxTriggersPerTable = ZDEFAULT_MAX_NO_TRIGGERS_PER_TABLE;
   c_noOfBuildIndexRec = 32;
 
@@ -772,7 +773,7 @@ void Dbtup::initializeTablerec()
 void
 Dbtup::initTab(Tablerec* const regTabPtr)
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr->fragid); i++) {
     regTabPtr->fragid[i] = RNIL;
     regTabPtr->fragrec[i] = RNIL;
   }//for
@@ -870,7 +871,7 @@ void Dbtup::execTUPSEIZEREQ(Signal* sign
   return;
 }//Dbtup::execTUPSEIZEREQ()
 
-#define printFragment(t){ for(Uint32 i = 0; i < MAX_FRAG_PER_NODE;i++){\
+#define printFragment(t){ for(Uint32 i = 0; i < NDB_ARRAY_SIZE(t.p->fragid);i++){ \
   ndbout_c("table = %d fragid[%d] = %d fragrec[%d] = %d", \
            t.i, t.p->fragid[i], i, t.p->fragrec[i]); }}
 

=== modified file 'storage/ndb/src/kernel/blocks/dbtup/DbtupIndex.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtup/DbtupIndex.cpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtup/DbtupIndex.cpp	2011-11-14 09:18:48 +0000
@@ -552,14 +552,14 @@ Dbtup::buildIndex(Signal* signal, Uint32
   do {
     // get fragment
     FragrecordPtr fragPtr;
-    if (buildPtr.p->m_fragNo == MAX_FRAG_PER_NODE) {
+    if (buildPtr.p->m_fragNo == NDB_ARRAY_SIZE(tablePtr.p->fragrec)) {
       jam();
       // build ready
       buildIndexReply(signal, buildPtr.p);
       c_buildIndexList.release(buildPtr);
       return;
     }
-    ndbrequire(buildPtr.p->m_fragNo < MAX_FRAG_PER_NODE);
+    ndbrequire(buildPtr.p->m_fragNo < NDB_ARRAY_SIZE(tablePtr.p->fragrec));
     fragPtr.i= tablePtr.p->fragrec[buildPtr.p->m_fragNo];
     if (fragPtr.i == RNIL) {
       jam();
@@ -809,7 +809,8 @@ Dbtup::execALTER_TAB_CONF(Signal* signal
   else
   {
     jam();
-    ndbrequire(buildPtr.p->m_fragNo >= MAX_FRAG_PER_NODE);
+    TablerecPtr tablePtr;
+    ndbrequire(buildPtr.p->m_fragNo >= NDB_ARRAY_SIZE(tablePtr.p->fragid));
     buildIndexReply(signal, buildPtr.p);
     c_buildIndexList.release(buildPtr);
     return;
@@ -830,7 +831,7 @@ Dbtup::buildIndexOffline_table_readonly(
   tablePtr.i= buildReq->tableId;
   ptrCheckGuard(tablePtr, cnoOfTablerec, tablerec);
 
-  for (;buildPtr.p->m_fragNo < MAX_FRAG_PER_NODE;
+  for (;buildPtr.p->m_fragNo < NDB_ARRAY_SIZE(tablePtr.p->fragrec);
        buildPtr.p->m_fragNo++)
   {
     jam();
@@ -906,7 +907,7 @@ Dbtup::mt_scan_init(Uint32 tableId, Uint
 
   FragrecordPtr fragPtr;
   fragPtr.i = RNIL;
-  for (Uint32 i = 0; i<MAX_FRAG_PER_NODE; i++)
+  for (Uint32 i = 0; i<NDB_ARRAY_SIZE(tablePtr.p->fragid); i++)
   {
     if (tablePtr.p->fragid[i] == fragId)
     {
@@ -1011,8 +1012,10 @@ Dbtup::execBUILD_INDX_IMPL_REF(Signal* s
   ndbrequire(buildPtr.p->m_outstanding);
   buildPtr.p->m_outstanding--;
 
+  TablerecPtr tablePtr;
   buildPtr.p->m_errorCode = (BuildIndxImplRef::ErrorCode)err;
-  buildPtr.p->m_fragNo = MAX_FRAG_PER_NODE; // No point in starting any more
+  // No point in starting any more
+  buildPtr.p->m_fragNo = NDB_ARRAY_SIZE(tablePtr.p->fragrec);
   buildIndexOffline_table_readonly(signal, ptr);
 }
 

=== modified file 'storage/ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp	2011-09-01 18:42:31 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp	2011-11-14 09:18:48 +0000
@@ -910,7 +910,7 @@ bool Dbtup::addfragtotab(Tablerec* const
                          Uint32 fragId,
                          Uint32 fragIndex)
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr->fragid); i++) {
     jam();
     if (regTabPtr->fragid[i] == RNIL) {
       jam();
@@ -926,7 +926,7 @@ void Dbtup::getFragmentrec(FragrecordPtr
                            Uint32 fragId,
                            Tablerec* const regTabPtr)
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr->fragid); i++) {
     jam();
     if (regTabPtr->fragid[i] == fragId) {
       jam();
@@ -1015,7 +1015,7 @@ Dbtup::execALTER_TAB_REQ(Signal *signal)
   case AlterTabReq::AlterTableSumaEnable:
   {
     FragrecordPtr regFragPtr;
-    for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+    for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr.p->fragrec); i++)
     {
       jam();
       if ((regFragPtr.i = regTabPtr.p->fragrec[i]) != RNIL)
@@ -1044,7 +1044,7 @@ Dbtup::execALTER_TAB_REQ(Signal *signal)
     Uint32 gci = signal->theData[signal->getLength() - 1];
     regTabPtr.p->m_reorg_suma_filter.m_gci_hi = gci;
     FragrecordPtr regFragPtr;
-    for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+    for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr.p->fragrec); i++)
     {
       jam();
       if ((regFragPtr.i = regTabPtr.p->fragrec[i]) != RNIL)
@@ -1320,7 +1320,7 @@ Dbtup::handleAlterTableCommit(Signal *si
   if (AlterTableReq::getReorgFragFlag(req->changeMask))
   {
     FragrecordPtr regFragPtr;
-    for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+    for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr->fragrec); i++)
     {
       jam();
       if ((regFragPtr.i = regTabPtr->fragrec[i]) != RNIL)
@@ -1363,7 +1363,7 @@ Dbtup::handleAlterTableComplete(Signal *
   if (AlterTableReq::getReorgCompleteFlag(req->changeMask))
   {
     FragrecordPtr regFragPtr;
-    for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+    for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr->fragrec); i++)
     {
       jam();
       if ((regFragPtr.i = regTabPtr->fragrec[i]) != RNIL)
@@ -1892,7 +1892,7 @@ void Dbtup::releaseAlterTabOpRec(AlterTa
 
 void Dbtup::deleteFragTab(Tablerec* const regTabPtr, Uint32 fragId) 
 {
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(regTabPtr->fragid); i++) {
     jam();
     if (regTabPtr->fragid[i] == fragId) {
       jam();
@@ -1991,7 +1991,7 @@ void Dbtup::releaseFragment(Signal* sign
   Uint32 fragIndex = RNIL;
   Uint32 fragId = RNIL;
   Uint32 i = 0;
-  for (i = 0; i < MAX_FRAG_PER_NODE; i++) {
+  for (i = 0; i < NDB_ARRAY_SIZE(tabPtr.p->fragid); i++) {
     jam();
     if (tabPtr.p->fragid[i] != RNIL) {
       jam();
@@ -2464,11 +2464,11 @@ Dbtup::drop_fragment_fsremove_done(Signa
   Uint32 logfile_group_id = fragPtr.p->m_logfile_group_id ;
 
   Uint32 i;
-  for(i= 0; i<MAX_FRAG_PER_NODE; i++)
+  for(i= 0; i<NDB_ARRAY_SIZE(tabPtr.p->fragrec); i++)
     if(tabPtr.p->fragrec[i] == fragPtr.i)
       break;
 
-  ndbrequire(i != MAX_FRAG_PER_NODE);
+  ndbrequire(i != NDB_ARRAY_SIZE(tabPtr.p->fragrec));
   tabPtr.p->fragid[i]= RNIL;
   tabPtr.p->fragrec[i]= RNIL;
   releaseFragrec(fragPtr);
@@ -2694,7 +2694,7 @@ Dbtup::execDROP_FRAG_REQ(Signal* signal)
   tabPtr.p->m_dropTable.tabUserPtr = req->senderData;
 
   Uint32 fragIndex = RNIL;
-  for (Uint32 i = 0; i < MAX_FRAG_PER_NODE; i++)
+  for (Uint32 i = 0; i < NDB_ARRAY_SIZE(tabPtr.p->fragid); i++)
   {
     jam();
     if (tabPtr.p->fragid[i] == req->fragId)

=== modified file 'storage/ndb/src/kernel/blocks/dbtux/Dbtux.hpp'
--- a/storage/ndb/src/kernel/blocks/dbtux/Dbtux.hpp	2011-10-13 09:02:21 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtux/Dbtux.hpp	2011-11-14 09:18:48 +0000
@@ -120,7 +120,7 @@ public:
 
 private:
   // sizes are in words (Uint32)
-  STATIC_CONST( MaxIndexFragments = MAX_FRAG_PER_NODE );
+  STATIC_CONST( MaxIndexFragments = MAX_FRAG_PER_LQH );
   STATIC_CONST( MaxIndexAttributes = MAX_ATTRIBUTES_IN_INDEX );
   STATIC_CONST( MaxAttrDataSize = 2 * MAX_ATTRIBUTES_IN_INDEX + MAX_KEY_SIZE_IN_WORDS );
   STATIC_CONST( MaxXfrmDataSize = MaxAttrDataSize * MAX_XFRM_MULTIPLY);

=== modified file 'storage/ndb/src/kernel/blocks/dbtux/DbtuxScan.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtux/DbtuxScan.cpp	2011-10-13 09:02:21 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtux/DbtuxScan.cpp	2011-11-11 08:42:31 +0000
@@ -173,8 +173,8 @@ Dbtux::execTUX_BOUND_INFO(Signal* signal
   c_scanOpPool.getPtr(scanPtr);
   ScanOp& scan = *scanPtr.p;
   const Index& index = *c_indexPool.getPtr(scan.m_indexId);
-  const DescHead& descHead = getDescHead(index);
-  const KeyType* keyTypes = getKeyTypes(descHead);
+  // compiler warning unused: const DescHead& descHead = getDescHead(index);
+  // compiler warning unused: const KeyType* keyTypes = getKeyTypes(descHead);
   // data passed in Signal
   const Uint32* const boundData = &req->data[0];
   Uint32 boundLen = req->boundAiLength;

=== modified file 'storage/ndb/src/kernel/blocks/dbtux/DbtuxStat.cpp'
--- a/storage/ndb/src/kernel/blocks/dbtux/DbtuxStat.cpp	2011-07-04 13:37:56 +0000
+++ b/storage/ndb/src/kernel/blocks/dbtux/DbtuxStat.cpp	2011-11-11 08:42:31 +0000
@@ -123,7 +123,7 @@ Dbtux::getEntriesBeforeOrAfter(Frag& fra
   Uint16 path[MaxTreeDepth + 1];
   unsigned depth = getPathToNode(node, path);
   ndbrequire(depth != 0 && depth <= MaxTreeDepth);
-  TreeHead& tree = frag.m_tree;
+  // compiler warning unused: TreeHead& tree = frag.m_tree;
   Uint32 cnt = 0;
   Uint32 tot = (Uint32)frag.m_entryCount;
   unsigned i = 0;

=== modified file 'storage/ndb/src/kernel/blocks/ndbfs/Ndbfs.cpp'
--- a/storage/ndb/src/kernel/blocks/ndbfs/Ndbfs.cpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/blocks/ndbfs/Ndbfs.cpp	2011-11-16 11:05:46 +0000
@@ -275,6 +275,22 @@ Ndbfs::execREAD_CONFIG_REQ(Signal* signa
   Uint32 noIdleFiles = 27;
 
   ndb_mgm_get_int_parameter(p, CFG_DB_INITIAL_OPEN_FILES, &noIdleFiles);
+
+  {
+    /**
+     * Each log part keeps up to 3 log files open at any given time
+     * (bound). Make sure noIdleFiles is at least 4 times #logparts.
+     */
+    Uint32 logParts = NDB_DEFAULT_LOG_PARTS;
+    ndb_mgm_get_int_parameter(p, CFG_DB_NO_REDOLOG_PARTS, &logParts);
+    Uint32 logfiles = 4 * logParts;
+    if (noIdleFiles < logfiles)
+    {
+      noIdleFiles = logfiles;
+    }
+  }
+
   // Make sure at least "noIdleFiles" files can be created
   if (noIdleFiles > m_maxFiles && m_maxFiles != 0)
     m_maxFiles = noIdleFiles;

=== modified file 'storage/ndb/src/kernel/blocks/trix/Trix.cpp'
--- a/storage/ndb/src/kernel/blocks/trix/Trix.cpp	2011-07-04 13:37:56 +0000
+++ b/storage/ndb/src/kernel/blocks/trix/Trix.cpp	2011-11-19 07:56:25 +0000
@@ -2462,7 +2462,7 @@ Trix::statCleanExecute(Signal* signal, S
   ndbrequire(data.m_indexVersion == av[1]);
   data.m_sampleVersion = av[2];
   data.m_statKey = &av[3];
-  const char* kp = (const char*)data.m_statKey;
+  const unsigned char* kp = (const unsigned char*)data.m_statKey;
   const Uint32 kb = kp[0] + (kp[1] << 8);
   // key is not empty
   ndbrequire(kb != 0);
@@ -2633,8 +2633,8 @@ Trix::statScanExecute(Signal* signal, St
   ::copy(av, ptr1);
   data.m_statKey = &av[0];
   data.m_statValue = &av[kz];
-  const char* kp = (const char*)data.m_statKey;
-  const char* vp = (const char*)data.m_statValue;
+  const unsigned char* kp = (const unsigned char*)data.m_statKey;
+  const unsigned char* vp = (const unsigned char*)data.m_statValue;
   const Uint32 kb = kp[0] + (kp[1] << 8);
   const Uint32 vb = vp[0] + (vp[1] << 8);
   // key and value are not empty

=== added file 'storage/ndb/src/kernel/blocks/trpman.cpp'
--- a/storage/ndb/src/kernel/blocks/trpman.cpp	1970-01-01 00:00:00 +0000
+++ b/storage/ndb/src/kernel/blocks/trpman.cpp	2011-11-16 15:38:25 +0000
@@ -0,0 +1,648 @@
+/*
+  Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; version 2 of the License.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; if not, write to the Free Software
+  Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA
+*/
+
+#include "trpman.hpp"
+#include <TransporterRegistry.hpp>
+#include <signaldata/CloseComReqConf.hpp>
+#include <signaldata/DisconnectRep.hpp>
+#include <signaldata/EnableCom.hpp>
+#include <signaldata/RouteOrd.hpp>
+#include <signaldata/DumpStateOrd.hpp>
+
+Trpman::Trpman(Block_context & ctx, Uint32 instanceno) :
+  SimulatedBlock(TRPMAN, ctx, instanceno)
+{
+  BLOCK_CONSTRUCTOR(Trpman);
+
+  addRecSignal(GSN_CLOSE_COMREQ, &Trpman::execCLOSE_COMREQ);
+  addRecSignal(GSN_OPEN_COMREQ, &Trpman::execOPEN_COMREQ);
+  addRecSignal(GSN_ENABLE_COMREQ, &Trpman::execENABLE_COMREQ);
+  addRecSignal(GSN_DISCONNECT_REP, &Trpman::execDISCONNECT_REP);
+  addRecSignal(GSN_CONNECT_REP, &Trpman::execCONNECT_REP);
+  addRecSignal(GSN_ROUTE_ORD, &Trpman::execROUTE_ORD);
+
+  addRecSignal(GSN_NDB_TAMPER, &Trpman::execNDB_TAMPER, true);
+  addRecSignal(GSN_DUMP_STATE_ORD, &Trpman::execDUMP_STATE_ORD);
+  addRecSignal(GSN_DBINFO_SCANREQ, &Trpman::execDBINFO_SCANREQ);
+}
+
+Trpman::~Trpman()
+{
+}
+
+BLOCK_FUNCTIONS(Trpman)
+
+#ifdef ERROR_INSERT
+NodeBitmask c_error_9000_nodes_mask;
+extern Uint32 MAX_RECEIVED_SIGNALS;
+#endif
+
+void
+Trpman::execOPEN_COMREQ(Signal* signal)
+{
+  // Connect to the specified NDB node; so far only QMGR is allowed
+  // to communicate with the node.
+
+  const BlockReference userRef = signal->theData[0];
+  Uint32 tStartingNode = signal->theData[1];
+  Uint32 tData2 = signal->theData[2];
+  jamEntry();
+
+  const Uint32 len = signal->getLength();
+  if (len == 2)
+  {
+#ifdef ERROR_INSERT
+    if (! ((ERROR_INSERTED(9000) || ERROR_INSERTED(9002))
+	   && c_error_9000_nodes_mask.get(tStartingNode)))
+#endif
+    {
+      globalTransporterRegistry.do_connect(tStartingNode);
+      globalTransporterRegistry.setIOState(tStartingNode, HaltIO);
+
+      //-----------------------------------------------------
+      // Report that the connection to the node is opened
+      //-----------------------------------------------------
+      signal->theData[0] = NDB_LE_CommunicationOpened;
+      signal->theData[1] = tStartingNode;
+      sendSignal(CMVMI_REF, GSN_EVENT_REP, signal, 2, JBB);
+      //-----------------------------------------------------
+    }
+  }
+  else
+  {
+    for(unsigned int i = 1; i < MAX_NODES; i++ )
+    {
+      jam();
+      if (i != getOwnNodeId() && getNodeInfo(i).m_type == tData2)
+      {
+	jam();
+
+#ifdef ERROR_INSERT
+	if ((ERROR_INSERTED(9000) || ERROR_INSERTED(9002))
+	    && c_error_9000_nodes_mask.get(i))
+	  continue;
+#endif
+	globalTransporterRegistry.do_connect(i);
+	globalTransporterRegistry.setIOState(i, HaltIO);
+
+	signal->theData[0] = NDB_LE_CommunicationOpened;
+	signal->theData[1] = i;
+	sendSignal(CMVMI_REF, GSN_EVENT_REP, signal, 2, JBB);
+      }
+    }
+  }
+
+  if (userRef != 0)
+  {
+    jam();
+    signal->theData[0] = tStartingNode;
+    signal->theData[1] = tData2;
+    sendSignal(userRef, GSN_OPEN_COMCONF, signal, len - 1,JBA);
+  }
+}
+
+void
+Trpman::execCONNECT_REP(Signal *signal)
+{
+  const Uint32 hostId = signal->theData[0];
+  jamEntry();
+
+  const NodeInfo::NodeType type = (NodeInfo::NodeType)getNodeInfo(hostId).m_type;
+  ndbrequire(type != NodeInfo::INVALID);
+  globalData.m_nodeInfo[hostId].m_version = 0;
+  globalData.m_nodeInfo[hostId].m_mysql_version = 0;
+
+  /**
+   * Inform QMGR that client has connected
+   */
+  signal->theData[0] = hostId;
+  if (ERROR_INSERTED(9005))
+  {
+    sendSignalWithDelay(QMGR_REF, GSN_CONNECT_REP, signal, 50, 1);
+  }
+  else
+  {
+    sendSignal(QMGR_REF, GSN_CONNECT_REP, signal, 1, JBA);
+  }
+
+  /* Automatically subscribe events for MGM nodes.
+   */
+  if (type == NodeInfo::MGM)
+  {
+    jam();
+    globalTransporterRegistry.setIOState(hostId, NoHalt);
+  }
+
+  //------------------------------------------
+  // Also report this event to the Event handler
+  //------------------------------------------
+  signal->theData[0] = NDB_LE_Connected;
+  signal->theData[1] = hostId;
+  sendSignal(CMVMI_REF, GSN_EVENT_REP, signal, 2, JBB);
+}
+
+void
+Trpman::execCLOSE_COMREQ(Signal* signal)
+{
+  // Close communication with the node and halt input/output from
+  // other blocks than QMGR
+
+  CloseComReqConf * const closeCom = (CloseComReqConf *)&signal->theData[0];
+
+  const BlockReference userRef = closeCom->xxxBlockRef;
+  Uint32 requestType = closeCom->requestType;
+  Uint32 failNo = closeCom->failNo;
+//  Uint32 noOfNodes = closeCom->noOfNodes;
+
+  jamEntry();
+  for (unsigned i = 0; i < MAX_NODES; i++)
+  {
+    if (NodeBitmask::get(closeCom->theNodes, i))
+    {
+      jam();
+
+      //-----------------------------------------------------
+      // Report that the connection to the node is closed
+      //-----------------------------------------------------
+      signal->theData[0] = NDB_LE_CommunicationClosed;
+      signal->theData[1] = i;
+      sendSignal(CMVMI_REF, GSN_EVENT_REP, signal, 2, JBB);
+
+      globalTransporterRegistry.setIOState(i, HaltIO);
+      globalTransporterRegistry.do_disconnect(i);
+    }
+  }
+
+  if (requestType != CloseComReqConf::RT_NO_REPLY)
+  {
+    ndbassert((requestType == CloseComReqConf::RT_API_FAILURE) ||
+              ((requestType == CloseComReqConf::RT_NODE_FAILURE) &&
+               (failNo != 0)));
+    jam();
+    CloseComReqConf* closeComConf = (CloseComReqConf *)signal->getDataPtrSend();
+    closeComConf->xxxBlockRef = userRef;
+    closeComConf->requestType = requestType;
+    closeComConf->failNo = failNo;
+
+    /* Note the assumption that the noOfNodes and theNodes
+     * bitmap fields have not been trampled above.
+     */
+    sendSignal(QMGR_REF, GSN_CLOSE_COMCONF, signal, 19, JBA);
+  }
+}
+
+void
+Trpman::execENABLE_COMREQ(Signal* signal)
+{
+  jamEntry();
+  const EnableComReq *enableComReq = (const EnableComReq *)signal->getDataPtr();
+
+  /* Need to copy out signal data to not clobber it with sendSignal(). */
+  Uint32 senderRef = enableComReq->m_senderRef;
+  Uint32 senderData = enableComReq->m_senderData;
+  Uint32 nodes[NodeBitmask::Size];
+  MEMCOPY_NO_WORDS(nodes, enableComReq->m_nodeIds, NodeBitmask::Size);
+
+  /* Enable communication with all our NDB blocks to these nodes. */
+  Uint32 search_from = 0;
+  for (;;)
+  {
+    Uint32 tStartingNode = NodeBitmask::find(nodes, search_from);
+    if (tStartingNode == NodeBitmask::NotFound)
+      break;
+    search_from = tStartingNode + 1;
+
+    globalTransporterRegistry.setIOState(tStartingNode, NoHalt);
+    setNodeInfo(tStartingNode).m_connected = true;
+
+    //-----------------------------------------------------
+    // Report the version of the connected node
+    //-----------------------------------------------------
+    signal->theData[0] = NDB_LE_ConnectedApiVersion;
+    signal->theData[1] = tStartingNode;
+    signal->theData[2] = getNodeInfo(tStartingNode).m_version;
+    signal->theData[3] = getNodeInfo(tStartingNode).m_mysql_version;
+
+    sendSignal(CMVMI_REF, GSN_EVENT_REP, signal, 4, JBB);
+    //-----------------------------------------------------
+  }
+
+  EnableComConf *enableComConf = (EnableComConf *)signal->getDataPtrSend();
+  enableComConf->m_senderRef = reference();
+  enableComConf->m_senderData = senderData;
+  MEMCOPY_NO_WORDS(enableComConf->m_nodeIds, nodes, NodeBitmask::Size);
+  sendSignal(senderRef, GSN_ENABLE_COMCONF, signal,
+             EnableComConf::SignalLength, JBA);
+}
+
+void
+Trpman::execDISCONNECT_REP(Signal *signal)
+{
+  const DisconnectRep * const rep = (DisconnectRep *)&signal->theData[0];
+  const Uint32 hostId = rep->nodeId;
+  jamEntry();
+
+  setNodeInfo(hostId).m_connected = false;
+  setNodeInfo(hostId).m_connectCount++;
+  const NodeInfo::NodeType type = getNodeInfo(hostId).getType();
+  ndbrequire(type != NodeInfo::INVALID);
+
+  sendSignal(QMGR_REF, GSN_DISCONNECT_REP, signal,
+             DisconnectRep::SignalLength, JBA);
+
+  signal->theData[0] = hostId;
+  sendSignal(CMVMI_REF, GSN_CANCEL_SUBSCRIPTION_REQ, signal, 1, JBB);
+
+  signal->theData[0] = NDB_LE_Disconnected;
+  signal->theData[1] = hostId;
+  sendSignal(CMVMI_REF, GSN_EVENT_REP, signal, 2, JBB);
+}
+
+/**
+ * execROUTE_ORD
+ * Allows other blocks to route signals as if they
+ * came from TRPMAN.
+ * Useful in ndbmtd for synchronising signals w.r.t.
+ * external signals received from other nodes, which
+ * arrive on the same thread that runs TRPMAN.
+ */
+void
+Trpman::execROUTE_ORD(Signal* signal)
+{
+  jamEntry();
+  if (!assembleFragments(signal))
+  {
+    jam();
+    return;
+  }
+
+  SectionHandle handle(this, signal);
+
+  RouteOrd* ord = (RouteOrd*)signal->getDataPtr();
+  Uint32 dstRef = ord->dstRef;
+  Uint32 srcRef = ord->srcRef;
+  Uint32 gsn = ord->gsn;
+  /* ord->cnt ignored */
+
+  Uint32 nodeId = refToNode(dstRef);
+
+  if (likely((nodeId == 0) ||
+             getNodeInfo(nodeId).m_connected))
+  {
+    jam();
+    Uint32 secCount = handle.m_cnt;
+    ndbrequire(secCount >= 1 && secCount <= 3);
+
+    jamLine(secCount);
+
+    /**
+     * Put section 0 in signal->theData
+     */
+    Uint32 sigLen = handle.m_ptr[0].sz;
+    ndbrequire(sigLen <= 25);
+    copy(signal->theData, handle.m_ptr[0]);
+
+    SegmentedSectionPtr save = handle.m_ptr[0];
+    for (Uint32 i = 0; i < secCount - 1; i++)
+      handle.m_ptr[i] = handle.m_ptr[i+1];
+    handle.m_cnt--;
+
+    sendSignal(dstRef, gsn, signal, sigLen, JBB, &handle);
+
+    handle.m_cnt = 1;
+    handle.m_ptr[0] = save;
+    releaseSections(handle);
+    return ;
+  }
+
+  releaseSections(handle);
+  warningEvent("Unable to route GSN: %d from %x to %x",
+	       gsn, srcRef, dstRef);
+}
+
+void
+Trpman::execDBINFO_SCANREQ(Signal *signal)
+{
+  DbinfoScanReq req= *(DbinfoScanReq*)signal->theData;
+  const Ndbinfo::ScanCursor* cursor =
+    CAST_CONSTPTR(Ndbinfo::ScanCursor, DbinfoScan::getCursorPtr(&req));
+  Ndbinfo::Ratelimit rl;
+
+  jamEntry();
+
+  switch(req.tableId){
+  case Ndbinfo::TRANSPORTERS_TABLEID:
+  {
+    jam();
+    Uint32 rnode = cursor->data[0];
+    if (rnode == 0)
+      rnode++; // Skip node 0
+
+    while (rnode < MAX_NODES)
+    {
+      switch(getNodeInfo(rnode).m_type)
+      {
+      default:
+      {
+        jam();
+        Ndbinfo::Row row(signal, req);
+        row.write_uint32(getOwnNodeId()); // Node id
+        row.write_uint32(rnode); // Remote node id
+        row.write_uint32(globalTransporterRegistry.getPerformState(rnode)); // State
+        ndbinfo_send_row(signal, req, row, rl);
+        break;
+      }
+
+      case NodeInfo::INVALID:
+        jam();
+        break;
+      }
+
+      rnode++;
+      if (rl.need_break(req))
+      {
+        jam();
+        ndbinfo_send_scan_break(signal, req, rl, rnode);
+        return;
+      }
+    }
+    break;
+  }
+
+  default:
+    break;
+  }
+
+  ndbinfo_send_scan_conf(signal, req, rl);
+}
+
+void
+Trpman::execNDB_TAMPER(Signal* signal)
+{
+  jamEntry();
+#ifdef ERROR_INSERT
+  if (signal->theData[0] == 9003)
+  {
+    if (MAX_RECEIVED_SIGNALS < 1024)
+    {
+      MAX_RECEIVED_SIGNALS = 1024;
+    }
+    else
+    {
+      MAX_RECEIVED_SIGNALS = 1 + (rand() % 128);
+    }
+    ndbout_c("MAX_RECEIVED_SIGNALS: %d", MAX_RECEIVED_SIGNALS);
+    CLEAR_ERROR_INSERT_VALUE;
+  }
+#endif
+}//execNDB_TAMPER()
+
+void
+Trpman::execDUMP_STATE_ORD(Signal* signal)
+{
+  DumpStateOrd * const & dumpState = (DumpStateOrd *)&signal->theData[0];
+  Uint32 arg = dumpState->args[0]; (void)arg;
+
+#ifdef ERROR_INSERT
+  if (arg == 9000 || arg == 9002)
+  {
+    SET_ERROR_INSERT_VALUE(arg);
+    for (Uint32 i = 1; i<signal->getLength(); i++)
+      c_error_9000_nodes_mask.set(signal->theData[i]);
+  }
+
+  if (arg == 9001)
+  {
+    CLEAR_ERROR_INSERT_VALUE;
+    if (signal->getLength() == 1 || signal->theData[1])
+    {
+      for (Uint32 i = 0; i<MAX_NODES; i++)
+      {
+	if (c_error_9000_nodes_mask.get(i))
+	{
+	  signal->theData[0] = 0;
+	  signal->theData[1] = i;
+          execOPEN_COMREQ(signal);
+	}
+      }
+    }
+    c_error_9000_nodes_mask.clear();
+  }
+
+  if (arg == 9004 && signal->getLength() == 2)
+  {
+    SET_ERROR_INSERT_VALUE(9004);
+    c_error_9000_nodes_mask.clear();
+    c_error_9000_nodes_mask.set(signal->theData[1]);
+  }
+
+  if (arg == 9005 && signal->getLength() == 2 && ERROR_INSERTED(9004))
+  {
+    Uint32 db = signal->theData[1];
+    Uint32 i = c_error_9000_nodes_mask.find(0);
+    signal->theData[0] = i;
+    sendSignal(calcQmgrBlockRef(db),GSN_API_FAILREQ, signal, 1, JBA);
+    ndbout_c("stopping %u using %u", i, db);
+    CLEAR_ERROR_INSERT_VALUE;
+  }
+#endif
+
+#ifdef ERROR_INSERT
+  /* <Target NodeId> dump 9992 <NodeId list>
+   * On Target NodeId, block receiving signals from NodeId list
+   *
+   * <Target NodeId> dump 9993 <NodeId list>
+   * On Target NodeId, resume receiving signals from NodeId list
+   *
+   * <Target NodeId> dump 9991
+   * On Target NodeId, resume receiving signals from any blocked node
+   *
+   * See also code in QMGR for blocking receive from nodes based
+   * on HB roles.
+   */
+  if((arg == 9993) ||  /* Unblock recv from nodeid */
+     (arg == 9992))    /* Block recv from nodeid */
+  {
+    bool block = (arg == 9992);
+    for (Uint32 n = 1; n < signal->getLength(); n++)
+    {
+      Uint32 nodeId = signal->theData[n];
+
+      if ((nodeId > 0) &&
+          (nodeId < MAX_NODES))
+      {
+        if (block)
+        {
+          ndbout_c("CMVMI : Blocking receive from node %u", nodeId);
+
+          globalTransporterRegistry.blockReceive(nodeId);
+        }
+        else
+        {
+          ndbout_c("CMVMI : Unblocking receive from node %u", nodeId);
+
+          globalTransporterRegistry.unblockReceive(nodeId);
+        }
+      }
+      else
+      {
+        ndbout_c("CMVMI : Ignoring dump %u for node %u",
+                 arg, nodeId);
+      }
+    }
+  }
+  if (arg == 9990) /* Block recv from all ndbd matching pattern */
+  {
+    Uint32 pattern = 0;
+    if (signal->getLength() > 1)
+    {
+      pattern = signal->theData[1];
+      ndbout_c("CMVMI : Blocking receive from all ndbds matching pattern -%s-",
+               ((pattern == 1)? "Other side":"Unknown"));
+    }
+
+    for (Uint32 node = 1; node < MAX_NDB_NODES; node++)
+    {
+      if (globalTransporterRegistry.is_connected(node))
+      {
+        if (getNodeInfo(node).m_type == NodeInfo::DB)
+        {
+          if (!globalTransporterRegistry.isBlocked(node))
+          {
+            switch (pattern)
+            {
+            case 1:
+            {
+              /* Match if given node is on 'other side' of
+               * 2-replica cluster
+               */
+              if ((getOwnNodeId() & 1) != (node & 1))
+              {
+                /* Node is on the 'other side', match */
+                break;
+              }
+              /* Node is on 'my side', don't match */
+              continue;
+            }
+            default:
+              break;
+            }
+            ndbout_c("CMVMI : Blocking receive from node %u", node);
+            globalTransporterRegistry.blockReceive(node);
+          }
+        }
+      }
+    }
+  }
+  if (arg == 9991) /* Unblock recv from all blocked */
+  {
+    for (Uint32 node = 0; node < MAX_NODES; node++)
+    {
+      if (globalTransporterRegistry.isBlocked(node))
+      {
+        ndbout_c("CMVMI : Unblocking receive from node %u", node);
+        globalTransporterRegistry.unblockReceive(node);
+      }
+    }
+  }
+#endif
+}
+
+TrpmanProxy::TrpmanProxy(Block_context & ctx) :
+  LocalProxy(TRPMAN, ctx)
+{
+  addRecSignal(GSN_CLOSE_COMREQ, &TrpmanProxy::execCLOSE_COMREQ);
+  addRecSignal(GSN_OPEN_COMREQ, &TrpmanProxy::execOPEN_COMREQ);
+  addRecSignal(GSN_ENABLE_COMREQ, &TrpmanProxy::execENABLE_COMREQ);
+  addRecSignal(GSN_DISCONNECT_REP, &TrpmanProxy::execDISCONNECT_REP);
+  addRecSignal(GSN_CONNECT_REP, &TrpmanProxy::execCONNECT_REP);
+  addRecSignal(GSN_ROUTE_ORD, &TrpmanProxy::execROUTE_ORD);
+}
+
+TrpmanProxy::~TrpmanProxy()
+{
+}
+
+SimulatedBlock*
+TrpmanProxy::newWorker(Uint32 instanceNo)
+{
+  return new Trpman(m_ctx, instanceNo);
+}
+
+BLOCK_FUNCTIONS(TrpmanProxy);
+
+/**
+ * TODO: TrpmanProxy needs operation records
+ *       to support splicing a request across several Trpman instances
+ *       according to how receive threads are assigned to instances.
+ */
+void
+TrpmanProxy::execOPEN_COMREQ(Signal* signal)
+{
+  jamEntry();
+  SectionHandle handle(this, signal);
+  sendSignal(workerRef(0), GSN_OPEN_COMREQ, signal,
+             signal->getLength(), JBB, &handle);
+}
+
+void
+TrpmanProxy::execCONNECT_REP(Signal *signal)
+{
+  jamEntry();
+  SectionHandle handle(this, signal);
+  sendSignal(workerRef(0), GSN_CONNECT_REP, signal,
+             signal->getLength(), JBB, &handle);
+}
+
+void
+TrpmanProxy::execCLOSE_COMREQ(Signal* signal)
+{
+  jamEntry();
+  SectionHandle handle(this, signal);
+  sendSignal(workerRef(0), GSN_CLOSE_COMREQ, signal,
+             signal->getLength(), JBB, &handle);
+}
+
+void
+TrpmanProxy::execENABLE_COMREQ(Signal* signal)
+{
+  jamEntry();
+  SectionHandle handle(this, signal);
+  sendSignal(workerRef(0), GSN_ENABLE_COMREQ, signal,
+             signal->getLength(), JBB, &handle);
+}
+
+void
+TrpmanProxy::execDISCONNECT_REP(Signal *signal)
+{
+  jamEntry();
+  SectionHandle handle(this, signal);
+  sendSignal(workerRef(0), GSN_DISCONNECT_REP, signal,
+             signal->getLength(), JBB, &handle);
+}
+
+void
+TrpmanProxy::execROUTE_ORD(Signal* signal)
+{
+  jamEntry();
+  SectionHandle handle(this, signal);
+  sendSignal(workerRef(0), GSN_ROUTE_ORD, signal,
+             signal->getLength(), JBB, &handle);
+}

=== added file 'storage/ndb/src/kernel/blocks/trpman.hpp'
--- a/storage/ndb/src/kernel/blocks/trpman.hpp	1970-01-01 00:00:00 +0000
+++ b/storage/ndb/src/kernel/blocks/trpman.hpp	2011-11-16 15:38:25 +0000
@@ -0,0 +1,66 @@
+/*
+   Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; version 2 of the License.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, write to the Free Software
+   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA
+*/
+
+#ifndef TRPMAN_H
+#define TRPMAN_H
+
+#include <pc.hpp>
+#include <SimulatedBlock.hpp>
+#include <LocalProxy.hpp>
+
+class Trpman : public SimulatedBlock
+{
+public:
+  Trpman(Block_context& ctx, Uint32 instanceNumber = 0);
+  virtual ~Trpman();
+  BLOCK_DEFINES(Trpman);
+
+  void execCLOSE_COMREQ(Signal *signal);
+  void execOPEN_COMREQ(Signal *signal);
+  void execENABLE_COMREQ(Signal *signal);
+  void execDISCONNECT_REP(Signal *signal);
+  void execCONNECT_REP(Signal *signal);
+  void execROUTE_ORD(Signal* signal);
+
+  void execDBINFO_SCANREQ(Signal*);
+
+  void execNDB_TAMPER(Signal*);
+  void execDUMP_STATE_ORD(Signal*);
+protected:
+
+};
+
+class TrpmanProxy : public LocalProxy
+{
+public:
+  TrpmanProxy(Block_context& ctx);
+  virtual ~TrpmanProxy();
+  BLOCK_DEFINES(TrpmanProxy);
+
+  void execCLOSE_COMREQ(Signal *signal);
+  void execOPEN_COMREQ(Signal *signal);
+  void execENABLE_COMREQ(Signal *signal);
+  void execDISCONNECT_REP(Signal *signal);
+  void execCONNECT_REP(Signal *signal);
+  void execROUTE_ORD(Signal* signal);
+
+  void execNDB_TAMPER(Signal*);
+  void execDUMP_STATE_ORD(Signal*);
+protected:
+  virtual SimulatedBlock* newWorker(Uint32 instanceNo);
+};
+#endif

=== modified file 'storage/ndb/src/kernel/ndbd.cpp'
--- a/storage/ndb/src/kernel/ndbd.cpp	2011-10-07 13:15:08 +0000
+++ b/storage/ndb/src/kernel/ndbd.cpp	2011-11-16 11:05:46 +0000
@@ -161,9 +161,13 @@ init_global_memory_manager(EmulatorData
     ed.m_mem_manager->set_resource_limit(rl);
   }
 
-  Uint32 maxopen = 4 * 4; // 4 redo parts, max 4 files per part
+  Uint32 logParts = NDB_DEFAULT_LOG_PARTS;
+  ndb_mgm_get_int_parameter(p, CFG_DB_NO_REDOLOG_PARTS, &logParts);
+
+  Uint32 maxopen = logParts * 4; // max 4 files per redo log part
   Uint32 filebuffer = NDB_FILE_BUFFER_SIZE;
   Uint32 filepages = (filebuffer / GLOBAL_PAGE_SIZE) * maxopen;
+  globalData.ndbLogParts = logParts;
 
   {
     /**

=== modified file 'storage/ndb/src/kernel/vm/DLFifoList.hpp'
--- a/storage/ndb/src/kernel/vm/DLFifoList.hpp	2011-10-07 11:46:40 +0000
+++ b/storage/ndb/src/kernel/vm/DLFifoList.hpp	2011-11-11 07:47:19 +0000
@@ -21,6 +21,7 @@
 
 #include <ndb_global.h>
 #include <kernel_types.h>
+#include "ArrayPool.hpp"
 #include "Pool.hpp"
 
 /**

=== modified file 'storage/ndb/src/kernel/vm/DLHashTable.hpp'
--- a/storage/ndb/src/kernel/vm/DLHashTable.hpp	2011-10-13 09:25:13 +0000
+++ b/storage/ndb/src/kernel/vm/DLHashTable.hpp	2011-11-11 07:46:17 +0000
@@ -1,6 +1,5 @@
 /*
-   Copyright (c) 2003-2006, 2008 MySQL AB, 2009, 2010 Sun Microsystems, Inc.
-   Use is subject to license terms.
+   Copyright (c) 2003-2006, 2008, 2011, Oracle and/or its affiliates. All rights reserved.
 
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
@@ -20,30 +19,38 @@
 #define DL_HASHTABLE_HPP
 
 #include <ndb_global.h>
-#include "ArrayPool.hpp"
 
 /**
- * DLHashTable implements a hashtable using chaining
+ * DLMHashTable implements a hashtable using chaining
  *   (with a double linked list)
  *
- * The entries in the hashtable must have the following methods:
- *  -# bool U::equal(const class U &) const;
- *     Which should return equal if the to objects have the same key
- *  -# Uint32 U::hashValue() const;
- *     Which should return a 32 bit hashvalue
+ * The (uninstantiated) meta class passed to the
+ * hashtable must provide the following static methods:
  *
- * and the following members:
- *  -# Uint32 U::nextHash;
- *  -# Uint32 U::prevHash;
+ *  -# nextHash(T&) returning a reference to the next link
+ *  -# prevHash(T&) returning a reference to the prev link
+ *  -# bool equal(T const&,T const&) returning equality of the objects keys
+ *  -# hashValue(T) calculating the hash value
  */
 
-template <typename P, typename T, typename U = T>
-class DLHashTableImpl 
+template <typename T, typename U = T> struct DLHashTableDefaultMethods {
+static Uint32& nextHash(U& t) { return t.nextHash; }
+static Uint32& prevHash(U& t) { return t.prevHash; }
+static Uint32 hashValue(T const& t) { return t.hashValue(); }
+static bool equal(T const& lhs, T const& rhs) { return lhs.equal(rhs); }
+};
+
+template <typename P, typename T, typename M = DLHashTableDefaultMethods<T> >
+class DLMHashTable
 {
 public:
-  DLHashTableImpl(P & thePool);
-  ~DLHashTableImpl();
-  
+  explicit DLMHashTable(P & thePool);
+  ~DLMHashTable();
+private:
+  DLMHashTable(const DLMHashTable&);
+  DLMHashTable&  operator=(const DLMHashTable&);
+
+public:
   /**
    * Set the no of bucket in the hashtable
    *
@@ -63,9 +70,9 @@ public:
    * Add an object to the hashtable
    */
   void add(Ptr<T> &);
-  
+
   /**
-   * Find element key in hashtable update Ptr (i & p) 
+   * Find element with key in hashtable, updating Ptr (i & p)
    *   (using key.equal(...))
    * @return true if found and false otherwise
    */
@@ -108,7 +115,7 @@ public:
    * Remove all elements, but dont return them to pool
    */
   void removeAll();
-  
+
   /**
    * Remove element and return to pool
    */
@@ -118,7 +125,7 @@ public:
    * Remove element and return to pool
    */
   void release(Ptr<T> &);
-  
+
   class Iterator {
   public:
     Ptr<T> curr;
@@ -136,7 +143,7 @@ public:
    * First element in bucket
    */
   bool first(Iterator & iter) const;
-  
+
   /**
    * Next Element
    *
@@ -151,16 +158,16 @@ public:
    * @param iter - An "uninitialized" iterator
    */
   bool next(Uint32 bucket, Iterator & iter) const;
-  
+
 private:
   Uint32 mask;
   Uint32 * hashValues;
   P & thePool;
 };
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
-DLHashTableImpl<P, T, U>::DLHashTableImpl(P & _pool)
+DLMHashTable<P, T, M>::DLMHashTable(P & _pool)
   : thePool(_pool)
 {
   // Require user defined constructor on T since we fiddle
@@ -171,23 +178,23 @@ DLHashTableImpl<P, T, U>::DLHashTableImp
   hashValues = 0;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
-DLHashTableImpl<P, T, U>::~DLHashTableImpl()
+DLMHashTable<P, T, M>::~DLMHashTable()
 {
-  if(hashValues != 0)
+  if (hashValues != 0)
     delete [] hashValues;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 bool
-DLHashTableImpl<P, T, U>::setSize(Uint32 size)
+DLMHashTable<P, T, M>::setSize(Uint32 size)
 {
   Uint32 i = 1;
-  while(i < size) i *= 2;
+  while (i < size) i *= 2;
 
-  if(mask == (i - 1))
+  if (mask == (i - 1))
   {
     /**
      * The size is already set to <b>size</b>
@@ -195,43 +202,43 @@ DLHashTableImpl<P, T, U>::setSize(Uint32
     return true;
   }
 
-  if(mask != 0)
+  if (mask != 0)
   {
     /**
      * The mask is already set
      */
     return false;
   }
-  
+
   mask = (i - 1);
   hashValues = new Uint32[i];
-  for(Uint32 j = 0; j<i; j++)
+  for (Uint32 j = 0; j<i; j++)
     hashValues[j] = RNIL;
-  
+
   return true;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 void
-DLHashTableImpl<P, T, U>::add(Ptr<T> & obj)
+DLMHashTable<P, T, M>::add(Ptr<T> & obj)
 {
-  const Uint32 hv = obj.p->U::hashValue() & mask;
+  const Uint32 hv = M::hashValue(*obj.p) & mask;
   const Uint32 i  = hashValues[hv];
-  
-  if(i == RNIL)
+
+  if (i == RNIL)
   {
     hashValues[hv] = obj.i;
-    obj.p->U::nextHash = RNIL;
-    obj.p->U::prevHash = RNIL;
-  } 
-  else 
+    M::nextHash(*obj.p) = RNIL;
+    M::prevHash(*obj.p) = RNIL;
+  }
+  else
   {
     T * tmp = thePool.getPtr(i);
-    tmp->U::prevHash = obj.i;
-    obj.p->U::nextHash = i;
-    obj.p->U::prevHash = RNIL;
-    
+    M::prevHash(*tmp) = obj.i;
+    M::nextHash(*obj.p) = i;
+    M::prevHash(*obj.p) = RNIL;
+
     hashValues[hv] = obj.i;
   }
 }
@@ -239,62 +246,62 @@ DLHashTableImpl<P, T, U>::add(Ptr<T> & o
 /**
  * First element
  */
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 bool
-DLHashTableImpl<P, T, U>::first(Iterator & iter) const 
+DLMHashTable<P, T, M>::first(Iterator & iter) const
 {
   Uint32 i = 0;
-  while(i <= mask && hashValues[i] == RNIL) i++;
-  if(i <= mask)
+  while (i <= mask && hashValues[i] == RNIL) i++;
+  if (i <= mask)
   {
     iter.bucket = i;
     iter.curr.i = hashValues[i];
     iter.curr.p = thePool.getPtr(iter.curr.i);
     return true;
   }
-  else 
+  else
   {
     iter.curr.i = RNIL;
   }
   return false;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 bool
-DLHashTableImpl<P, T, U>::next(Iterator & iter) const 
+DLMHashTable<P, T, M>::next(Iterator & iter) const
 {
-  if(iter.curr.p->U::nextHash == RNIL)
+  if (M::nextHash(*iter.curr.p) == RNIL)
   {
     Uint32 i = iter.bucket + 1;
-    while(i <= mask && hashValues[i] == RNIL) i++;
-    if(i <= mask)
+    while (i <= mask && hashValues[i] == RNIL) i++;
+    if (i <= mask)
     {
       iter.bucket = i;
       iter.curr.i = hashValues[i];
       iter.curr.p = thePool.getPtr(iter.curr.i);
       return true;
     }
-    else 
+    else
     {
       iter.curr.i = RNIL;
       return false;
     }
   }
-  
-  iter.curr.i = iter.curr.p->U::nextHash;
+
+  iter.curr.i = M::nextHash(*iter.curr.p);
   iter.curr.p = thePool.getPtr(iter.curr.i);
   return true;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 void
-DLHashTableImpl<P, T, U>::remove(Ptr<T> & ptr, const T & key)
+DLMHashTable<P, T, M>::remove(Ptr<T> & ptr, const T & key)
 {
-  const Uint32 hv = key.U::hashValue() & mask;  
-  
+  const Uint32 hv = M::hashValue(key) & mask;
+
   Uint32 i;
   T * p;
   Ptr<T> prev;
@@ -302,42 +309,42 @@ DLHashTableImpl<P, T, U>::remove(Ptr<T>
   prev.i = RNIL;
 
   i = hashValues[hv];
-  while(i != RNIL)
+  while (i != RNIL)
   {
     p = thePool.getPtr(i);
-    if(key.U::equal(* p))
+    if (M::equal(key, * p))
     {
-      const Uint32 next = p->U::nextHash;
-      if(prev.i == RNIL)
+      const Uint32 next = M::nextHash(*p);
+      if (prev.i == RNIL)
       {
-	hashValues[hv] = next;
-      } 
-      else 
+        hashValues[hv] = next;
+      }
+      else
       {
-	prev.p->U::nextHash = next;
+        M::nextHash(*prev.p) = next;
       }
-      
-      if(next != RNIL)
+
+      if (next != RNIL)
       {
-	T * nextP = thePool.getPtr(next);
-	nextP->U::prevHash = prev.i;
+        T * nextP = thePool.getPtr(next);
+        M::prevHash(*nextP) = prev.i;
       }
-      
+
       ptr.i = i;
       ptr.p = p;
       return;
     }
     prev.p = p;
     prev.i = i;
-    i = p->U::nextHash;
+    i = M::nextHash(*p);
   }
   ptr.i = RNIL;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 void
-DLHashTableImpl<P, T, U>::remove(Uint32 i)
+DLMHashTable<P, T, M>::remove(Uint32 i)
 {
   Ptr<T> tmp;
   tmp.i = i;
@@ -345,10 +352,10 @@ DLHashTableImpl<P, T, U>::remove(Uint32
   remove(tmp);
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 void
-DLHashTableImpl<P, T, U>::release(Uint32 i)
+DLMHashTable<P, T, M>::release(Uint32 i)
 {
   Ptr<T> tmp;
   tmp.i = i;
@@ -356,22 +363,22 @@ DLHashTableImpl<P, T, U>::release(Uint32
   release(tmp);
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
-void 
-DLHashTableImpl<P, T, U>::remove(Ptr<T> & ptr)
+void
+DLMHashTable<P, T, M>::remove(Ptr<T> & ptr)
 {
-  const Uint32 next = ptr.p->U::nextHash;
-  const Uint32 prev = ptr.p->U::prevHash;
+  const Uint32 next = M::nextHash(*ptr.p);
+  const Uint32 prev = M::prevHash(*ptr.p);
 
-  if(prev != RNIL)
+  if (prev != RNIL)
   {
     T * prevP = thePool.getPtr(prev);
-    prevP->U::nextHash = next;
-  } 
-  else 
+    M::nextHash(*prevP) = next;
+  }
+  else
   {
-    const Uint32 hv = ptr.p->U::hashValue() & mask;  
+    const Uint32 hv = M::hashValue(*ptr.p) & mask;
     if (hashValues[hv] == ptr.i)
     {
       hashValues[hv] = next;
@@ -382,30 +389,30 @@ DLHashTableImpl<P, T, U>::remove(Ptr<T>
       assert(false);
     }
   }
-  
-  if(next != RNIL)
+
+  if (next != RNIL)
   {
     T * nextP = thePool.getPtr(next);
-    nextP->U::prevHash = prev;
+    M::prevHash(*nextP) = prev;
   }
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
-void 
-DLHashTableImpl<P, T, U>::release(Ptr<T> & ptr)
+void
+DLMHashTable<P, T, M>::release(Ptr<T> & ptr)
 {
-  const Uint32 next = ptr.p->U::nextHash;
-  const Uint32 prev = ptr.p->U::prevHash;
+  const Uint32 next = M::nextHash(*ptr.p);
+  const Uint32 prev = M::prevHash(*ptr.p);
 
-  if(prev != RNIL)
+  if (prev != RNIL)
   {
     T * prevP = thePool.getPtr(prev);
-    prevP->U::nextHash = next;
-  } 
-  else 
+    M::nextHash(*prevP) = next;
+  }
+  else
   {
-    const Uint32 hv = ptr.p->U::hashValue() & mask;  
+    const Uint32 hv = M::hashValue(*ptr.p) & mask;
     if (hashValues[hv] == ptr.i)
     {
       hashValues[hv] = next;
@@ -416,104 +423,104 @@ DLHashTableImpl<P, T, U>::release(Ptr<T>
       // Will add assert in 5.1
     }
   }
-  
-  if(next != RNIL)
+
+  if (next != RNIL)
   {
     T * nextP = thePool.getPtr(next);
-    nextP->U::prevHash = prev;
+    M::prevHash(*nextP) = prev;
   }
-  
+
   thePool.release(ptr);
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
-void 
-DLHashTableImpl<P, T, U>::removeAll()
+void
+DLMHashTable<P, T, M>::removeAll()
 {
-  for(Uint32 i = 0; i<=mask; i++)
+  for (Uint32 i = 0; i<=mask; i++)
     hashValues[i] = RNIL;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 bool
-DLHashTableImpl<P, T, U>::next(Uint32 bucket, Iterator & iter) const 
+DLMHashTable<P, T, M>::next(Uint32 bucket, Iterator & iter) const
 {
-  while (bucket <= mask && hashValues[bucket] == RNIL) 
-    bucket++; 
-  
-  if (bucket > mask) 
+  while (bucket <= mask && hashValues[bucket] == RNIL)
+    bucket++;
+
+  if (bucket > mask)
   {
     iter.bucket = bucket;
     iter.curr.i = RNIL;
     return false;
   }
-  
+
   iter.bucket = bucket;
   iter.curr.i = hashValues[bucket];
   iter.curr.p = thePool.getPtr(iter.curr.i);
   return true;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 bool
-DLHashTableImpl<P, T, U>::seize(Ptr<T> & ptr)
+DLMHashTable<P, T, M>::seize(Ptr<T> & ptr)
 {
-  if(thePool.seize(ptr)){
-    ptr.p->U::nextHash = ptr.p->U::prevHash = RNIL;
+  if (thePool.seize(ptr)){
+    M::nextHash(*ptr.p) = M::prevHash(*ptr.p) = RNIL;
     return true;
   }
   return false;
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 void
-DLHashTableImpl<P, T, U>::getPtr(Ptr<T> & ptr, Uint32 i) const 
+DLMHashTable<P, T, M>::getPtr(Ptr<T> & ptr, Uint32 i) const
 {
   ptr.i = i;
   ptr.p = thePool.getPtr(i);
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 void
-DLHashTableImpl<P, T, U>::getPtr(Ptr<T> & ptr) const 
+DLMHashTable<P, T, M>::getPtr(Ptr<T> & ptr) const
 {
   thePool.getPtr(ptr);
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
-T * 
-DLHashTableImpl<P, T, U>::getPtr(Uint32 i) const 
+T *
+DLMHashTable<P, T, M>::getPtr(Uint32 i) const
 {
   return thePool.getPtr(i);
 }
 
-template <typename P, typename T, typename U>
+template <typename P, typename T, typename M>
 inline
 bool
-DLHashTableImpl<P, T, U>::find(Ptr<T> & ptr, const T & key) const 
+DLMHashTable<P, T, M>::find(Ptr<T> & ptr, const T & key) const
 {
-  const Uint32 hv = key.U::hashValue() & mask;  
-  
+  const Uint32 hv = M::hashValue(key) & mask;
+
   Uint32 i;
   T * p;
 
   i = hashValues[hv];
-  while(i != RNIL)
+  while (i != RNIL)
   {
     p = thePool.getPtr(i);
-    if(key.U::equal(* p))
+    if (M::equal(key, * p))
     {
       ptr.i = i;
       ptr.p = p;
       return true;
     }
-    i = p->U::nextHash;
+    i = M::nextHash(*p);
   }
   ptr.i = RNIL;
   ptr.p = NULL;
@@ -522,11 +529,26 @@ DLHashTableImpl<P, T, U>::find(Ptr<T> &
 
 // Specializations
 
+#include "ArrayPool.hpp"
+
+template <typename P, typename T, typename U = T >
+class DLHashTableImpl: public DLMHashTable<P, T, DLHashTableDefaultMethods<T, U> >
+{
+public:
+  explicit DLHashTableImpl(P & p): DLMHashTable<P, T, DLHashTableDefaultMethods<T, U> >(p) { }
+private:
+  DLHashTableImpl(const DLHashTableImpl&);
+  DLHashTableImpl&  operator=(const DLHashTableImpl&);
+};
+
 template <typename T, typename U = T, typename P = ArrayPool<T> >
-class DLHashTable : public DLHashTableImpl<P, T, U>
+class DLHashTable: public DLMHashTable<P, T, DLHashTableDefaultMethods<T, U> >
 {
 public:
-  DLHashTable(P & p) : DLHashTableImpl<P, T, U>(p) {}
+  explicit DLHashTable(P & p): DLMHashTable<P, T, DLHashTableDefaultMethods<T, U> >(p) { }
+private:
+  DLHashTable(const DLHashTable&);
+  DLHashTable&  operator=(const DLHashTable&);
 };
 
 #endif

=== modified file 'storage/ndb/src/kernel/vm/GlobalData.hpp'
--- a/storage/ndb/src/kernel/vm/GlobalData.hpp	2011-09-15 20:21:59 +0000
+++ b/storage/ndb/src/kernel/vm/GlobalData.hpp	2011-11-14 12:02:56 +0000
@@ -75,6 +75,7 @@ struct GlobalData {
   Uint32     ndbMtLqhWorkers;
   Uint32     ndbMtLqhThreads;
   Uint32     ndbMtTcThreads;
+  Uint32     ndbLogParts;
   
   GlobalData(){ 
     theSignalId = 0; 
@@ -85,6 +86,7 @@ struct GlobalData {
     ndbMtLqhWorkers = 0;
     ndbMtLqhThreads = 0;
     ndbMtTcThreads = 0;
+    ndbLogParts = 0;
 #ifdef GCP_TIMER_HACK
     gcp_timer_limit = 0;
 #endif

=== modified file 'storage/ndb/src/kernel/vm/Ndbinfo.hpp'
--- a/storage/ndb/src/kernel/vm/Ndbinfo.hpp	2011-10-11 08:11:15 +0000
+++ b/storage/ndb/src/kernel/vm/Ndbinfo.hpp	2011-11-17 08:49:40 +0000
@@ -194,7 +194,8 @@ public:
     SPJ_SCAN_BATCHES_RETURNED_COUNTER = 20,
     SPJ_SCAN_ROWS_RETURNED_COUNTER = 21,
     SPJ_PRUNED_RANGE_SCANS_RECEIVED_COUNTER = 22,
-    SPJ_CONST_PRUNED_RANGE_SCANS_RECEIVED_COUNTER = 23
+    SPJ_CONST_PRUNED_RANGE_SCANS_RECEIVED_COUNTER = 23,
+    LOCAL_READ_COUNTER = 24
   };
 
   struct counter_entry {

=== modified file 'storage/ndb/src/kernel/vm/SimulatedBlock.hpp'
--- a/storage/ndb/src/kernel/vm/SimulatedBlock.hpp	2011-10-07 08:07:21 +0000
+++ b/storage/ndb/src/kernel/vm/SimulatedBlock.hpp	2011-11-11 07:46:17 +0000
@@ -507,7 +507,7 @@ protected:
     };
     Uint32 prevHash;
     
-    inline bool equal(FragmentInfo & p) const {
+    inline bool equal(FragmentInfo const & p) const {
       return m_senderRef == p.m_senderRef && m_fragmentId == p.m_fragmentId;
     }
     

=== modified file 'storage/ndb/src/kernel/vm/mt.cpp'
--- a/storage/ndb/src/kernel/vm/mt.cpp	2011-10-07 08:07:21 +0000
+++ b/storage/ndb/src/kernel/vm/mt.cpp	2011-11-16 16:23:37 +0000
@@ -76,7 +76,7 @@ static const Uint32 MAX_SIGNALS_BEFORE_W
 #define MAX_THREADS (NUM_MAIN_THREADS +       \
                      MAX_NDBMT_LQH_THREADS +  \
                      MAX_NDBMT_TC_THREADS + 1)
-#define MAX_BLOCK_INSTANCES (MAX_THREADS)
+#define MAX_BLOCK_INSTANCES (MAX_THREADS+1)
 
 /* If this is too small it crashes before first signal. */
 #define MAX_INSTANCES_PER_THREAD (16 + 8 * MAX_NDBMT_LQH_THREADS)

=== modified file 'storage/ndb/src/kernel/vm/pc.hpp'
--- a/storage/ndb/src/kernel/vm/pc.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/kernel/vm/pc.hpp	2011-11-14 09:18:48 +0000
@@ -165,7 +165,7 @@
 // need large value.
 /* ------------------------------------------------------------------------- */
 #define NO_OF_FRAG_PER_NODE 1
-#define MAX_FRAG_PER_NODE 8
+#define MAX_FRAG_PER_LQH 8
 
 /**
 * DIH allocates fragments in chunk for fast find of fragment record.

=== modified file 'storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp'
--- a/storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp	2011-10-21 08:59:23 +0000
+++ b/storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp	2011-11-09 13:10:53 +0000
@@ -6795,8 +6795,6 @@ NdbDictionaryImpl::initialiseColumnData(
   recCol->orgAttrSize= col->m_orgAttrSize;
   if (recCol->offset+recCol->maxSize > rec->m_row_size)
     rec->m_row_size= recCol->offset+recCol->maxSize;
-  /* Round data size to whole words + 4 bytes of AttributeHeader. */
-  rec->m_max_transid_ai_bytes+= (recCol->maxSize+7) & ~3;
   recCol->charset_info= col->m_cs;
   recCol->compare_function= NdbSqlUtil::getType(col->m_type).m_cmp;
   recCol->flags= 0;
@@ -6985,7 +6983,6 @@ NdbDictionaryImpl::createRecord(const Nd
   }
 
   rec->m_row_size= 0;
-  rec->m_max_transid_ai_bytes= 0;
   for (i= 0; i<length; i++)
   {
     const NdbDictionary::RecordSpecification *rs= &recSpec[i];

=== modified file 'storage/ndb/src/ndbapi/NdbQueryOperation.cpp'
--- a/storage/ndb/src/ndbapi/NdbQueryOperation.cpp	2011-10-28 13:38:36 +0000
+++ b/storage/ndb/src/ndbapi/NdbQueryOperation.cpp	2011-11-09 13:10:53 +0000
@@ -3058,20 +3058,16 @@ NdbQueryImpl::doSend(int nodeId, bool la
     scanTabReq->transId2 = (Uint32) (transId >> 32);
 
     Uint32 batchRows = root.getMaxBatchRows();
-    Uint32 batchByteSize, firstBatchRows;
+    Uint32 batchByteSize;
     NdbReceiver::calculate_batch_size(* ndb.theImpl,
-                                      root.m_ndbRecord,
-                                      root.m_firstRecAttr,
-                                      0, // Key size.
                                       getRootFragCount(),
                                       batchRows,
-                                      batchByteSize,
-                                      firstBatchRows);
+                                      batchByteSize);
     assert(batchRows==root.getMaxBatchRows());
-    assert(batchRows==firstBatchRows);
+    assert(batchRows<=batchByteSize);
     ScanTabReq::setScanBatch(reqInfo, batchRows);
     scanTabReq->batch_byte_size = batchByteSize;
-    scanTabReq->first_batch_size = firstBatchRows;
+    scanTabReq->first_batch_size = batchRows;
 
     ScanTabReq::setViaSPJFlag(reqInfo, 1);
     ScanTabReq::setPassAllConfsFlag(reqInfo, 1);
@@ -4361,11 +4357,11 @@ NdbQueryOperationImpl
      * We must thus make sure that we do not set a batch size for the scan 
      * that exceeds what any of its scan descendants can use.
      *
-     * Ignore calculated 'batchByteSize' and 'firstBatchRows' 
+     * Ignore calculated 'batchByteSize' 
      * here - Recalculated when building signal after max-batchRows has been 
      * determined.
      */
-    Uint32 batchByteSize, firstBatchRows;
+    Uint32 batchByteSize;
     /**
      * myClosestScan->m_maxBatchRows may be zero to indicate that we
      * should use default values, or non-zero if the application had an 
@@ -4373,18 +4369,14 @@ NdbQueryOperationImpl
      */
     maxBatchRows = myClosestScan->m_maxBatchRows;
     NdbReceiver::calculate_batch_size(* ndb.theImpl,
-                                      m_ndbRecord,
-                                      m_firstRecAttr,
-                                      0, // Key size.
                                       getRoot().m_parallelism
-                                      == Parallelism_max ?
-                                      m_queryImpl.getRootFragCount() :
-                                      getRoot().m_parallelism,
+                                      == Parallelism_max
+                                      ? m_queryImpl.getRootFragCount()
+                                      : getRoot().m_parallelism,
                                       maxBatchRows,
-                                      batchByteSize,
-                                      firstBatchRows);
+                                      batchByteSize);
     assert(maxBatchRows > 0);
-    assert(firstBatchRows == maxBatchRows);
+    assert(maxBatchRows <= batchByteSize);
   }
 
   // Find the largest value that is acceptable to all lookup descendants.
@@ -4554,17 +4546,13 @@ NdbQueryOperationImpl::prepareAttrInfo(U
     Ndb& ndb = *m_queryImpl.getNdbTransaction().getNdb();
 
     Uint32 batchRows = getMaxBatchRows();
-    Uint32 batchByteSize, firstBatchRows;
+    Uint32 batchByteSize;
     NdbReceiver::calculate_batch_size(* ndb.theImpl,
-                                      m_ndbRecord,
-                                      m_firstRecAttr,
-                                      0, // Key size.
                                       m_queryImpl.getRootFragCount(),
                                       batchRows,
-                                      batchByteSize,
-                                      firstBatchRows);
-    assert(batchRows == firstBatchRows);
+                                      batchByteSize);
     assert(batchRows == getMaxBatchRows());
+    assert(batchRows <= batchByteSize);
     assert(m_parallelism == Parallelism_max ||
            m_parallelism == Parallelism_adaptive);
     if (m_parallelism == Parallelism_max)

=== modified file 'storage/ndb/src/ndbapi/NdbReceiver.cpp'
--- a/storage/ndb/src/ndbapi/NdbReceiver.cpp	2011-08-17 12:36:56 +0000
+++ b/storage/ndb/src/ndbapi/NdbReceiver.cpp	2011-11-09 13:10:53 +0000
@@ -155,88 +155,57 @@ NdbReceiver::prepareRead(char *buf, Uint
   Compute the batch size (rows between each NEXT_TABREQ / SCAN_TABCONF) to
   use, taking into account limits in the transporter, user preference, etc.
 
-  Hm, there are some magic overhead numbers (4 bytes/attr, 32 bytes/row) here,
-  would be nice with some explanation on how these numbers were derived.
+  It is the responsibility of the batch producer (LQH+TUP) to
+  stay within these 'batch_size' and 'batch_byte_size' limits:

-  TODO : Check whether these numbers need to be revised w.r.t. read packed
+  - It should stay strictly within the 'batch_size' (#rows) limit.
+  - It is allowed to overallocate the 'batch_byte_size' (slightly)
+    in order to complete the current row when it hits the limit.
+
+  The client should be prepared to receive, and buffer, up to
+  'batch_size' rows from each fragment.
+  ::ndbrecord_rowsize() might be useful for calculating the
+  buffer size to allocate for this result set.
 */
 //static
 void
 NdbReceiver::calculate_batch_size(const NdbImpl& theImpl,
-                                  const NdbRecord *record,
-                                  const NdbRecAttr *first_rec_attr,
-                                  Uint32 key_size,
                                   Uint32 parallelism,
                                   Uint32& batch_size,
-                                  Uint32& batch_byte_size,
-                                  Uint32& first_batch_size)
+                                  Uint32& batch_byte_size)
 {
   const NdbApiConfig & cfg = theImpl.get_ndbapi_config_parameters();
   const Uint32 max_scan_batch_size= cfg.m_scan_batch_size;
   const Uint32 max_batch_byte_size= cfg.m_batch_byte_size;
   const Uint32 max_batch_size= cfg.m_batch_size;
 
-  Uint32 tot_size= (key_size ? (key_size + 32) : 0); //key + signal overhead
-  if (record)
-  {
-    tot_size+= record->m_max_transid_ai_bytes;
-  }
-
-  const NdbRecAttr *rec_attr= first_rec_attr;
-  while (rec_attr != NULL) {
-    Uint32 attr_size= rec_attr->getColumn()->getSizeInBytes();
-    attr_size= ((attr_size + 4 + 3) >> 2) << 2; //Even to word + overhead
-    tot_size+= attr_size;
-    rec_attr= rec_attr->next();
+  batch_byte_size= max_batch_byte_size;
+  if (batch_byte_size * parallelism > max_scan_batch_size) {
+    batch_byte_size= max_scan_batch_size / parallelism;
   }
 
-  tot_size+= 32; //include signal overhead
-
-  /**
-   * Now we calculate the batch size by trying to get upto SCAN_BATCH_SIZE
-   * bytes sent for each batch from each node. We do however ensure that
-   * no more than MAX_SCAN_BATCH_SIZE is sent from all nodes in total per
-   * batch.
-   */
-  if (batch_size == 0)
-  {
-    batch_byte_size= max_batch_byte_size;
+  if (batch_size == 0 || batch_size > max_batch_size) {
+    batch_size= max_batch_size;
   }
-  else
-  {
-    batch_byte_size= batch_size * tot_size;
+  if (unlikely(batch_size > MAX_PARALLEL_OP_PER_SCAN)) {
+    batch_size= MAX_PARALLEL_OP_PER_SCAN;
   }
-  
-  if (batch_byte_size * parallelism > max_scan_batch_size) {
-    batch_byte_size= max_scan_batch_size / parallelism;
-  }
-  batch_size= batch_byte_size / tot_size;
-  if (batch_size == 0) {
-    batch_size= 1;
-  } else {
-    if (batch_size > max_batch_size) {
-      batch_size= max_batch_size;
-    } else if (batch_size > MAX_PARALLEL_OP_PER_SCAN) {
-      batch_size= MAX_PARALLEL_OP_PER_SCAN;
-    }
+  if (unlikely(batch_size > batch_byte_size)) {
+    batch_size= batch_byte_size;
   }
-  first_batch_size= batch_size;
+
   return;
 }
 
 void
-NdbReceiver::calculate_batch_size(Uint32 key_size,
-                                  Uint32 parallelism,
+NdbReceiver::calculate_batch_size(Uint32 parallelism,
                                   Uint32& batch_size,
-                                  Uint32& batch_byte_size,
-                                  Uint32& first_batch_size,
-                                  const NdbRecord *record) const
+                                  Uint32& batch_byte_size) const
 {
   calculate_batch_size(* m_ndb->theImpl,
-                       record,
-                       theFirstRecAttr,
-                       key_size, parallelism, batch_size, batch_byte_size,
-                       first_batch_size);
+                       parallelism,
+                       batch_size,
+                       batch_byte_size);
 }
 
 void

=== modified file 'storage/ndb/src/ndbapi/NdbRecord.hpp'
--- a/storage/ndb/src/ndbapi/NdbRecord.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/ndbapi/NdbRecord.hpp	2011-11-09 13:10:53 +0000
@@ -189,8 +189,6 @@ public:
   Uint32 tableVersion;
   /* Copy of table->m_keyLenInWords. */
   Uint32 m_keyLenInWords;
-  /* Total maximum size of TRANSID_AI data (for computing batch size). */
-  Uint32 m_max_transid_ai_bytes;
   /**
    * Number of distribution keys (usually == number of primary keys).
    *

=== modified file 'storage/ndb/src/ndbapi/NdbScanOperation.cpp'
--- a/storage/ndb/src/ndbapi/NdbScanOperation.cpp	2011-05-17 12:47:21 +0000
+++ b/storage/ndb/src/ndbapi/NdbScanOperation.cpp	2011-11-09 13:10:53 +0000
@@ -2284,16 +2284,13 @@ int NdbScanOperation::prepareSendScan(Ui
    */
   ScanTabReq * req = CAST_PTR(ScanTabReq, theSCAN_TABREQ->getDataPtrSend());
   Uint32 batch_size = req->first_batch_size; // User specified
-  Uint32 batch_byte_size, first_batch_size;
-  theReceiver.calculate_batch_size(key_size,
-                                   theParallelism,
+  Uint32 batch_byte_size;
+  theReceiver.calculate_batch_size(theParallelism,
                                    batch_size,
-                                   batch_byte_size,
-                                   first_batch_size,
-                                   m_attribute_record);
+                                   batch_byte_size);
   ScanTabReq::setScanBatch(req->requestInfo, batch_size);
   req->batch_byte_size= batch_byte_size;
-  req->first_batch_size= first_batch_size;
+  req->first_batch_size= batch_size;
 
   /**
    * Set keyinfo, nodisk and distribution key flags in 

=== modified file 'storage/ndb/test/ndbapi/flexAsynch.cpp'
--- a/storage/ndb/test/ndbapi/flexAsynch.cpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/test/ndbapi/flexAsynch.cpp	2011-11-11 08:35:14 +0000
@@ -37,7 +37,7 @@
 #define MAX_SEEK 16 
 #define MAXSTRLEN 16 
 #define MAXATTR 64
-#define MAXTABLES 64
+#define MAXTABLES 1
 #define NDB_MAXTHREADS 128
 /*
   NDB_MAXTHREADS used to be just MAXTHREADS, which collides with a
@@ -59,6 +59,16 @@ enum StartType {
   stStop 
 } ;
 
+enum RunType {
+  RunInsert,
+  RunRead,
+  RunUpdate,
+  RunDelete,
+  RunCreateTable,
+  RunDropTable,
+  RunAll
+};
+
 struct ThreadNdb
 {
   int NoOfOps;
@@ -70,6 +80,7 @@ extern "C" { static void* threadLoop(voi
 static void setAttrNames(void);
 static void setTableNames(void);
 static int readArguments(int argc, const char** argv);
+static void dropTables(Ndb* pMyNdb);
 static int createTables(Ndb*);
 static void defineOperation(NdbConnection* aTransObject, StartType aType, 
                             Uint32 base, Uint32 aIndex);
@@ -77,6 +88,8 @@ static void defineNdbRecordOperation(Thr
                             Uint32 base, Uint32 aIndex);
 static void execute(StartType aType);
 static bool executeThread(ThreadNdb*, StartType aType, Ndb* aNdbObject, unsigned int);
+static bool executeTransLoop(ThreadNdb* pThread, StartType aType, Ndb* aNdbObject,
+                             unsigned int threadBase, int threadNo);
 static void executeCallback(int result, NdbConnection* NdbObject,
                             void* aObject);
 static bool error_handler(const NdbError & err);
@@ -92,9 +105,15 @@ ErrorData * flexAsynchErrorData;
 static NdbThread*               threadLife[NDB_MAXTHREADS];
 static int                              tNodeId;
 static int                              ThreadReady[NDB_MAXTHREADS];
+static longlong                 ThreadExecutions[NDB_MAXTHREADS];
 static StartType                ThreadStart[NDB_MAXTHREADS];
 static char                             tableName[MAXTABLES][MAXSTRLEN+1];
 static char                             attrName[MAXATTR][MAXSTRLEN+1];
+static RunType                          tRunType = RunAll;
+static int                              tStdTableNum = 0;
+static int                              tWarmupTime = 10; //Seconds
+static int                              tExecutionTime = 30; //Seconds
+static int                              tCooldownTime = 10; //Seconds
 
 // Program Parameters
 static NdbRecord * g_record[MAXTABLES];
@@ -126,9 +145,10 @@ static int
 
 #define START_REAL_TIME
 #define STOP_REAL_TIME
-#define START_TIMER { NdbTimer timer; timer.doStart();
+#define DEFINE_TIMER NdbTimer timer
+#define START_TIMER timer.doStart();
 #define STOP_TIMER timer.doStop();
-#define PRINT_TIMER(text, trans, opertrans) timer.printTransactionStatistics(text, trans, opertrans); }; 
+#define PRINT_TIMER(text, trans, opertrans) timer.printTransactionStatistics(text, trans, opertrans)
 
 NDBT_Stats a_i, a_u, a_d, a_r;
 
@@ -183,6 +203,7 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
   ThreadNdb*            pThreadData;
   int                   tLoops=0;
   int                   returnValue = NDBT_OK;
+  DEFINE_TIMER;
 
   flexAsynchErrorData = new ErrorData;
   flexAsynchErrorData->resetErrorCounters();
@@ -201,7 +222,13 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
   ndbout << "  " << tNoOfParallelTrans;
   ndbout << " number of parallel operation per thread " << endl;
   ndbout << "  " << tNoOfTransactions << " transaction(s) per round " << endl;
-  ndbout << "  " << tNoOfLoops << " iterations " << endl;
+  if (tRunType == RunAll){
+    ndbout << "  " << tNoOfLoops << " iterations " << endl;
+  } else if (tRunType == RunRead || tRunType == RunUpdate){
+    ndbout << "  Warmup time is " << tWarmupTime << endl;
+    ndbout << "  Execution time is " << tExecutionTime << endl;
+    ndbout << "  Cooldown time is " << tCooldownTime << endl;
+  }
   ndbout << "  " << "Load Factor is " << tLoadFactor << "%" << endl;
   ndbout << "  " << tNoOfAttributes << " attributes per table " << endl;
   ndbout << "  " << tAttributeSize;
@@ -262,10 +289,20 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
   if (pNdb->waitUntilReady(10000) != 0){
     ndbout << "NDB is not ready" << endl;
     ndbout << "Benchmark failed!" << endl;
-    returnValue = NDBT_FAILED;
+    return NDBT_ProgramExit(NDBT_FAILED);
   }
 
-  if(returnValue == NDBT_OK){
+  if (tRunType == RunCreateTable)
+  {
+    if (createTables(pNdb) != 0){
+      returnValue = NDBT_FAILED;
+    }
+  }
+  else if (tRunType == RunDropTable)
+  {
+    dropTables(pNdb);
+  }
+  else if(returnValue == NDBT_OK){
     if (createTables(pNdb) != 0){
       returnValue = NDBT_FAILED;
     }
@@ -282,14 +319,15 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
     }
   }
 
-  if(returnValue == NDBT_OK){
+  if(returnValue == NDBT_OK &&
+     tRunType != RunCreateTable &&
+     tRunType != RunDropTable){
     /****************************************************************
      *  Create NDB objects.                                   *
      ****************************************************************/
     resetThreads();
     for (Uint32 i = 0; i < tNoOfThreads ; i++) {
-      pThreadData[i].ThreadNo = i
-;
+      pThreadData[i].ThreadNo = i;
       threadLife[i] = NdbThread_Create(threadLoop,
                                        (void**)&pThreadData[i],
                                        32768,
@@ -312,76 +350,86 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
        ****************************************************************/
           
       failed = 0 ;
-
-      START_TIMER;
-      execute(stInsert);
-      STOP_TIMER;
-      a_i.addObservation((1000*noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
-      PRINT_TIMER("insert", noOfTransacts, tNoOfOpsPerTrans);
-
-      if (0 < failed) {
-        int i = retry_opt ;
-        int ci = 1 ;
-        while (0 < failed && 0 < i){
-          ndbout << failed << " of the transactions returned errors!" 
-                 << endl << endl;
-          ndbout << "Attempting to redo the failed transactions now..." 
-                 << endl ;
-          ndbout << "Redo attempt " << ci <<" out of " << retry_opt 
-                 << endl << endl;
-          failed = 0 ;
-          START_TIMER;
-          execute(stInsert);
-          STOP_TIMER;
-          PRINT_TIMER("insert", noOfTransacts, tNoOfOpsPerTrans);
-          i-- ;
-          ci++;
-        }
-        if(0 == failed ){
-          ndbout << endl <<"Redo attempt succeeded" << endl << endl;
-        }else{
-          ndbout << endl <<"Redo attempt failed, moving on now..." << endl 
-                 << endl;
+      if (tRunType == RunAll || tRunType == RunInsert){
+        ndbout << "Executing inserts" << endl;
+        START_TIMER;
+        execute(stInsert);
+        STOP_TIMER;
+      }
+      if (tRunType == RunAll){
+        a_i.addObservation((1000*noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
+        PRINT_TIMER("insert", noOfTransacts, tNoOfOpsPerTrans);
+
+        if (0 < failed) {
+          int i = retry_opt ;
+          int ci = 1 ;
+          while (0 < failed && 0 < i){
+            ndbout << failed << " of the transactions returned errors!" 
+                   << endl << endl;
+            ndbout << "Attempting to redo the failed transactions now..." 
+                   << endl ;
+            ndbout << "Redo attempt " << ci <<" out of " << retry_opt 
+                   << endl << endl;
+            failed = 0 ;
+            START_TIMER;
+            execute(stInsert);
+            STOP_TIMER;
+            PRINT_TIMER("insert", noOfTransacts, tNoOfOpsPerTrans);
+            i-- ;
+            ci++;
+          }
+          if(0 == failed ){
+            ndbout << endl <<"Redo attempt succeeded" << endl << endl;
+          }else{
+            ndbout << endl <<"Redo attempt failed, moving on now..." << endl 
+                   << endl;
+          }//if
         }//if
-      }//if
-          
+      }//if  
       /****************************************************************
        * Perform read.                                                *
        ****************************************************************/
       
       failed = 0 ;
 
-      for (int ll = 0; ll < 1 + tExtraReadLoop; ll++)
-      {
-        START_TIMER;
-        execute(stRead);
-        STOP_TIMER;
-        a_r.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
-        PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
-      }
-
-      if (0 < failed) {
-        int i = retry_opt ;
-        int cr = 1;
-        while (0 < failed && 0 < i){
-          ndbout << failed << " of the transactions returned errors!"<<endl ;
-          ndbout << endl;
-          ndbout <<"Attempting to redo the failed transactions now..." << endl;
-          ndbout << endl;
-          ndbout <<"Redo attempt " << cr <<" out of ";
-          ndbout << retry_opt << endl << endl;
-          failed = 0 ;
+      if (tRunType == RunAll || tRunType == RunRead){
+        for (int ll = 0; ll < 1 + tExtraReadLoop; ll++)
+        {
+          ndbout << "Executing reads" << endl;
           START_TIMER;
           execute(stRead);
           STOP_TIMER;
-          PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
-          i-- ;
-          cr++ ;
-        }//while
-        if(0 == failed ) {
-          ndbout << endl <<"Redo attempt succeeded" << endl << endl ;
-        }else{
-          ndbout << endl <<"Redo attempt failed, moving on now..." << endl << endl ;
+          if (tRunType == RunAll){
+            a_r.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
+            PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
+          }//if
+        }//for
+      }//if
+
+      if (tRunType == RunAll){
+        if (0 < failed) {
+          int i = retry_opt ;
+          int cr = 1;
+          while (0 < failed && 0 < i){
+            ndbout << failed << " of the transactions returned errors!"<<endl ;
+            ndbout << endl;
+            ndbout <<"Attempting to redo the failed transactions now..." << endl;
+            ndbout << endl;
+            ndbout <<"Redo attempt " << cr <<" out of ";
+            ndbout << retry_opt << endl << endl;
+            failed = 0 ;
+            START_TIMER;
+            execute(stRead);
+            STOP_TIMER;
+            PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
+            i-- ;
+            cr++ ;
+          }//while
+          if(0 == failed ) {
+            ndbout << endl <<"Redo attempt succeeded" << endl << endl ;
+          }else{
+            ndbout << endl <<"Redo attempt failed, moving on now..." << endl << endl ;
+          }//if
         }//if
       }//if
           
@@ -391,35 +439,40 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
        ****************************************************************/
       
       failed = 0 ;
-          
-      START_TIMER;
-      execute(stUpdate);
-      STOP_TIMER;
-      a_u.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
-      PRINT_TIMER("update", noOfTransacts, tNoOfOpsPerTrans) ;
-
-      if (0 < failed) {
-        int i = retry_opt ;
-        int cu = 1 ;
-        while (0 < failed && 0 < i){
-          ndbout << failed << " of the transactions returned errors!"<<endl ;
-          ndbout << endl;
-          ndbout <<"Attempting to redo the failed transactions now..." << endl;
-          ndbout << endl <<"Redo attempt " << cu <<" out of ";
-          ndbout << retry_opt << endl << endl;
-          failed = 0 ;
-          START_TIMER;
-          execute(stUpdate);
-          STOP_TIMER;
-          PRINT_TIMER("update", noOfTransacts, tNoOfOpsPerTrans);
-          i-- ;
-          cu++ ;
-        }//while
-        if(0 == failed ){
-          ndbout << endl <<"Redo attempt succeeded" << endl << endl;
-        } else {
-          ndbout << endl;
-          ndbout <<"Redo attempt failed, moving on now..." << endl << endl;
+
+      if (tRunType == RunAll || tRunType == RunUpdate){
+        ndbout << "Executing updates" << endl;
+        START_TIMER;
+        execute(stUpdate);
+        STOP_TIMER;
+      }//if
+      if (tRunType == RunAll){
+        a_u.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
+        PRINT_TIMER("update", noOfTransacts, tNoOfOpsPerTrans) ;
+
+        if (0 < failed) {
+          int i = retry_opt ;
+          int cu = 1 ;
+          while (0 < failed && 0 < i){
+            ndbout << failed << " of the transactions returned errors!"<<endl ;
+            ndbout << endl;
+            ndbout <<"Attempting to redo the failed transactions now..." << endl;
+            ndbout << endl <<"Redo attempt " << cu <<" out of ";
+            ndbout << retry_opt << endl << endl;
+            failed = 0 ;
+            START_TIMER;
+            execute(stUpdate);
+            STOP_TIMER;
+            PRINT_TIMER("update", noOfTransacts, tNoOfOpsPerTrans);
+            i-- ;
+            cu++ ;
+          }//while
+          if(0 == failed ){
+            ndbout << endl <<"Redo attempt succeeded" << endl << endl;
+          } else {
+            ndbout << endl;
+            ndbout <<"Redo attempt failed, moving on now..." << endl << endl;
+          }//if
         }//if
       }//if
           
@@ -428,38 +481,41 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
        ****************************************************************/
       
       failed = 0 ;
-          
-      for (int ll = 0; ll < 1 + tExtraReadLoop; ll++)
-      {
-        START_TIMER;
-        execute(stRead);
-        STOP_TIMER;
-        a_r.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
-        PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
-      }        
-
-      if (0 < failed) {
-        int i = retry_opt ;
-        int cr2 = 1 ;
-        while (0 < failed && 0 < i){
-          ndbout << failed << " of the transactions returned errors!"<<endl ;
-          ndbout << endl;
-          ndbout <<"Attempting to redo the failed transactions now..." << endl;
-          ndbout << endl <<"Redo attempt " << cr2 <<" out of ";
-          ndbout << retry_opt << endl << endl;
-          failed = 0 ;
+
+      if (tRunType == RunAll){
+        for (int ll = 0; ll < 1 + tExtraReadLoop; ll++)
+        {
+          ndbout << "Executing reads" << endl;
           START_TIMER;
           execute(stRead);
           STOP_TIMER;
+          a_r.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
           PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
-          i-- ;
-          cr2++ ;
-        }//while
-        if(0 == failed ){
-          ndbout << endl <<"Redo attempt succeeded" << endl << endl;
-        }else{
-          ndbout << endl;
-          ndbout << "Redo attempt failed, moving on now..." << endl << endl;
+        }        
+
+        if (0 < failed) {
+          int i = retry_opt ;
+          int cr2 = 1 ;
+          while (0 < failed && 0 < i){
+            ndbout << failed << " of the transactions returned errors!"<<endl ;
+            ndbout << endl;
+            ndbout <<"Attempting to redo the failed transactions now..." << endl;
+            ndbout << endl <<"Redo attempt " << cr2 <<" out of ";
+            ndbout << retry_opt << endl << endl;
+            failed = 0 ;
+            START_TIMER;
+            execute(stRead);
+            STOP_TIMER;
+            PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
+            i-- ;
+            cr2++ ;
+          }//while
+          if(0 == failed ){
+            ndbout << endl <<"Redo attempt succeeded" << endl << endl;
+          }else{
+            ndbout << endl;
+            ndbout << "Redo attempt failed, moving on now..." << endl << endl;
+          }//if
         }//if
       }//if
           
@@ -470,34 +526,39 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
       
       failed = 0 ;
           
-      START_TIMER;
-      execute(stDelete);
-      STOP_TIMER;
-      a_d.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
-      PRINT_TIMER("delete", noOfTransacts, tNoOfOpsPerTrans);
-
-      if (0 < failed) {
-        int i = retry_opt ;
-        int cd = 1 ;
-        while (0 < failed && 0 < i){
-          ndbout << failed << " of the transactions returned errors!"<< endl ;
-          ndbout << endl;
-          ndbout <<"Attempting to redo the failed transactions now:" << endl ;
-          ndbout << endl <<"Redo attempt " << cd <<" out of ";
-          ndbout << retry_opt << endl << endl;
-          failed = 0 ;
-          START_TIMER;
-          execute(stDelete);
-          STOP_TIMER;
-          PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
-          i-- ;
-          cd++ ;
-        }//while
-        if(0 == failed ){
-          ndbout << endl <<"Redo attempt succeeded" << endl << endl ;
-        }else{
-          ndbout << endl;
-          ndbout << "Redo attempt failed, moving on now..." << endl << endl;
+      if (tRunType == RunAll || tRunType == RunDelete){
+        ndbout << "Executing deletes" << endl;
+        START_TIMER;
+        execute(stDelete);
+        STOP_TIMER;
+      }//if
+      if (tRunType == RunAll){
+        a_d.addObservation((1000 * noOfTransacts * tNoOfOpsPerTrans) / timer.elapsedTime());
+        PRINT_TIMER("delete", noOfTransacts, tNoOfOpsPerTrans);
+
+        if (0 < failed) {
+          int i = retry_opt ;
+          int cd = 1 ;
+          while (0 < failed && 0 < i){
+            ndbout << failed << " of the transactions returned errors!"<< endl ;
+            ndbout << endl;
+            ndbout <<"Attempting to redo the failed transactions now:" << endl ;
+            ndbout << endl <<"Redo attempt " << cd <<" out of ";
+            ndbout << retry_opt << endl << endl;
+            failed = 0 ;
+            START_TIMER;
+            execute(stDelete);
+            STOP_TIMER;
+            PRINT_TIMER("read", noOfTransacts, tNoOfOpsPerTrans);
+            i-- ;
+            cd++ ;
+          }//while
+          if(0 == failed ){
+            ndbout << endl <<"Redo attempt succeeded" << endl << endl ;
+          }else{
+            ndbout << endl;
+            ndbout << "Redo attempt failed, moving on now..." << endl << endl;
+          }//if
         }//if
       }//if
           
@@ -516,17 +577,61 @@ NDB_COMMAND(flexAsynch, "flexAsynch", "f
       NdbThread_WaitFor(threadLife[i], &tmp);
       NdbThread_Destroy(&threadLife[i]);
     }
-  } 
+  }
+
+  if (tRunType == RunAll)
+  {
+    dropTables(pNdb);
+  }
   delete [] pThreadData;
   delete pNdb;
 
-  //printing errorCounters
-  flexAsynchErrorData->printErrorCounters(ndbout);
-
-  print("insert", a_i);
-  print("update", a_u);
-  print("delete", a_d);
-  print("read  ", a_r);
+  if (tRunType == RunAll ||
+      tRunType == RunInsert ||
+      tRunType == RunDelete ||
+      tRunType == RunUpdate ||
+      tRunType == RunRead)
+  {
+    //printing errorCounters
+    flexAsynchErrorData->printErrorCounters(ndbout);
+    if (tRunType == RunAll) {
+      print("insert", a_i);
+      print("update", a_u);
+      print("delete", a_d);
+      print("read  ", a_r);
+    }
+  }
+  if (tRunType == RunInsert ||
+      tRunType == RunRead ||
+      tRunType == RunUpdate ||
+      tRunType == RunDelete)
+  {
+    longlong total_executions = 0;
+    longlong total_transactions;
+    longlong exec_time;
+
+    if (tRunType == RunInsert || tRunType == RunDelete) {
+      total_executions = (longlong)tNoOfTransactions;
+      total_executions *= (longlong)tNoOfThreads;
+    } else {
+      for (Uint32 i = 0; i < tNoOfThreads; i++){
+        total_executions += ThreadExecutions[i];
+      }
+    }
+    total_transactions = total_executions * tNoOfParallelTrans;
+    if (tRunType == RunInsert || tRunType == RunDelete) {
+      exec_time = (longlong)timer.elapsedTime();
+    } else {
+      exec_time = (longlong)tExecutionTime * 1000;
+    }
+    ndbout << "Total number of transactions is " << total_transactions;
+    ndbout << endl;
+    ndbout << "Execution time is " << exec_time << " milliseconds" << endl;
+
+    total_transactions = (total_transactions * 1000) / exec_time;
+    int trans_per_sec = (int)total_transactions;
+    ndbout << "Total transactions per second " << trans_per_sec << endl;
+  }
 
   delete [] g_cluster_connection;
 
@@ -551,7 +656,7 @@ threadLoop(void* ThreadData)
   localNdb = new Ndb(g_cluster_connection+(threadNo % tConnections), "TEST_DB");
   localNdb->init(1024);
   localNdb->waitUntilReady(10000);
-  unsigned int threadBase = (threadNo << 16) + tNodeId ;
+  unsigned int threadBase = (threadNo << 16);
   
   for (;;){
     while (ThreadStart[threadNo] == stIdle) {
@@ -565,8 +670,14 @@ threadLoop(void* ThreadData)
 
     tType = ThreadStart[threadNo];
     ThreadStart[threadNo] = stIdle;
-    if(!executeThread(tabThread, tType, localNdb, threadBase)){
-      break;
+    if (tRunType == RunAll || tRunType == RunInsert || tRunType == RunDelete){
+      if(!executeThread(tabThread, tType, localNdb, threadBase)){
+        break;
+      }
+    } else {
+      if(!executeTransLoop(tabThread, tType, localNdb, threadBase, threadNo)){
+        break;
+      }
     }
     ThreadReady[threadNo] = 1;
   }//for
@@ -577,80 +688,131 @@ threadLoop(void* ThreadData)
   return NULL;
 }//threadLoop()
 
-static 
-bool
-executeThread(ThreadNdb* pThread, 
-	      StartType aType, Ndb* aNdbObject, unsigned int threadBase) {
+static int error_count = 0;
 
+static bool
+executeTrans(ThreadNdb* pThread,
+             StartType aType,
+             Ndb* aNdbObject,
+             unsigned int threadBase,
+             unsigned int i)
+{
   NdbConnection* tConArray[1024];
   unsigned int tBase;
   unsigned int tBase2;
 
-  unsigned int extraLoops= 0; // (aType == stRead) ? 100000 : 0;
-
-  for (unsigned int ex= 0; ex < (1 + extraLoops); ex++)
-  {
-    for (unsigned int i = 0; i < tNoOfTransactions; i++) {
-      if (tLocal == false) {
-        tBase = i * tNoOfParallelTrans * tNoOfOpsPerTrans;
-      } else {
-        tBase = i * tNoOfParallelTrans * MAX_SEEK;
-      }//if
-      START_REAL_TIME;
-      for (unsigned int j = 0; j < tNoOfParallelTrans; j++) {
-        if (tLocal == false) {
-          tBase2 = tBase + (j * tNoOfOpsPerTrans);
-        } else {
-          tBase2 = tBase + (j * MAX_SEEK);
-          tBase2 = getKey(threadBase, tBase2);
-        }//if
-        if (startTransGuess == true) {
-	  union {
-            Uint64 Tkey64;
-            Uint32 Tkey32[2];
-	  };
-          Tkey32[0] = threadBase;
-          Tkey32[1] = tBase2;
-          tConArray[j] = aNdbObject->startTransaction((Uint32)0, //Priority
-                                                      (const char*)&Tkey64, //Main PKey
-                                                      (Uint32)4);           //Key Length
-        } else {
-          tConArray[j] = aNdbObject->startTransaction();
-        }//if
-        if (tConArray[j] == NULL && 
-            !error_handler(aNdbObject->getNdbError()) ){
-          ndbout << endl << "Unable to recover! Quiting now" << endl ;
-          return false;
-        }//if
-        
-        for (unsigned int k = 0; k < tNoOfOpsPerTrans; k++) {
-          //-------------------------------------------------------
-          // Define the operation, but do not execute it yet.
-          //-------------------------------------------------------
-          if (tNdbRecord)
-            defineNdbRecordOperation(pThread, 
-                                     tConArray[j], aType, threadBase,(tBase2+k));
-          else
-            defineOperation(tConArray[j], aType, threadBase, (tBase2 + k));
-        }//for
-        
-        tConArray[j]->executeAsynchPrepare(Commit, &executeCallback, NULL);
-      }//for
-      STOP_REAL_TIME;
+  if (tLocal == false) {
+    tBase = i * tNoOfParallelTrans * tNoOfOpsPerTrans;
+  } else {
+    tBase = i * tNoOfParallelTrans * MAX_SEEK;
+  }//if
+  START_REAL_TIME;
+  for (unsigned int j = 0; j < tNoOfParallelTrans; j++) {
+    if (tLocal == false) {
+      tBase2 = tBase + (j * tNoOfOpsPerTrans);
+    } else {
+      tBase2 = tBase + (j * MAX_SEEK);
+      tBase2 = getKey(threadBase, tBase2);
+    }//if
+    if (startTransGuess == true) {
+      union {
+        Uint64 Tkey64;
+        Uint32 Tkey32[2];
+      };
+      Tkey32[0] = threadBase;
+      Tkey32[1] = tBase2;
+      tConArray[j] = aNdbObject->startTransaction((Uint32)0, //Priority
+                                                  (const char*)&Tkey64, //Main PKey
+                                                  (Uint32)4);           //Key Length
+    } else {
+      tConArray[j] = aNdbObject->startTransaction();
+    }//if
+    if (tConArray[j] == NULL){
+      error_handler(aNdbObject->getNdbError());
+      ndbout << endl << "Unable to recover! Quitting now" << endl ;
+      return false;
+    }//if
+    
+    for (unsigned int k = 0; k < tNoOfOpsPerTrans; k++) {
       //-------------------------------------------------------
-      // Now we have defined a set of operations, it is now time
-      // to execute all of them.
+      // Define the operation, but do not execute it yet.
       //-------------------------------------------------------
-      int Tcomp = aNdbObject->sendPollNdb(3000, 0, 0);
-      while (unsigned(Tcomp) < tNoOfParallelTrans) {
-        int TlocalComp = aNdbObject->pollNdb(3000, 0);
-        Tcomp += TlocalComp;
-      }//while
-      for (unsigned int j = 0 ; j < tNoOfParallelTrans ; j++) {
-        aNdbObject->closeTransaction(tConArray[j]);
-      }//for
+      if (tNdbRecord)
+        defineNdbRecordOperation(pThread, 
+                                 tConArray[j], aType, threadBase,(tBase2+k));
+      else
+        defineOperation(tConArray[j], aType, threadBase, (tBase2 + k));
     }//for
-  } // for
+    
+    tConArray[j]->executeAsynchPrepare(Commit, &executeCallback, NULL);
+  }//for
+  STOP_REAL_TIME;
+  //-------------------------------------------------------
+  // Now we have defined a set of operations, it is now time
+  // to execute all of them.
+  //-------------------------------------------------------
+  int Tcomp = aNdbObject->sendPollNdb(3000, 0, 0);
+  while (unsigned(Tcomp) < tNoOfParallelTrans) {
+    int TlocalComp = aNdbObject->pollNdb(3000, 0);
+    Tcomp += TlocalComp;
+  }//while
+  for (unsigned int j = 0 ; j < tNoOfParallelTrans ; j++) {
+    if (aNdbObject->getNdbError().code != 0 && error_count < 10000){
+      error_count++;
+      ndbout << "i = " << i << ", j = " << j << ", error = ";
+      ndbout << aNdbObject->getNdbError().code << ", threadBase = ";
+      ndbout << hex << threadBase << dec << endl;
+    }
+    aNdbObject->closeTransaction(tConArray[j]);
+  }//for
+  return true;
+}
+
+static 
+bool
+executeTransLoop(ThreadNdb* pThread, 
+                 StartType aType,
+                 Ndb* aNdbObject,
+                 unsigned int threadBase,
+                 int threadNo) {
+  bool continue_flag = true;
+  int time_expired;
+  longlong executions = 0;
+  unsigned int i = 0;
+  DEFINE_TIMER;
+
+  ThreadExecutions[threadNo] = 0;
+  START_TIMER;
+  do
+  {
+    if (!executeTrans(pThread, aType, aNdbObject, threadBase, i++))
+      return false;
+    STOP_TIMER;
+    time_expired = (int)(timer.elapsedTime() / 1000);
+    if (time_expired < tWarmupTime)
+      ; //Do nothing
+    else if (time_expired < (tWarmupTime + tExecutionTime)){
+      executions++; //Count measurement
+    }
+    else if (time_expired < (tWarmupTime + tExecutionTime + tCooldownTime))
+      ; //Do nothing
+    else
+      continue_flag = false; //Time expired
+    if (i == tNoOfTransactions) /* Make sure the record exists */
+      i = 0;
+  } while (continue_flag);
+  ThreadExecutions[threadNo] = executions;
+  return true;
+}//executeTransLoop()
+
+static 
+bool
+executeThread(ThreadNdb* pThread, 
+	      StartType aType, Ndb* aNdbObject, unsigned int threadBase) {
+  for (unsigned int i = 0; i < tNoOfTransactions; i++) {
+    if (!executeTrans(pThread, aType, aNdbObject, threadBase, i))
+      return false;
+  }//for
   return true;
 }//executeThread()
 
@@ -880,8 +1042,20 @@ static void setTableNames()
       BaseString::snprintf(tableName[i], MAXSTRLEN, "TAB%d_%u", i, 
                (unsigned)(NdbTick_CurrentMillisecond()+rand()));
     } else {
-      BaseString::snprintf(tableName[i], MAXSTRLEN, "TAB%d", i);
+      BaseString::snprintf(tableName[i], MAXSTRLEN, "TAB%d", tStdTableNum);
     }
+    ndbout << "Using table name " << tableName[0] << endl;
+  }
+}
+
+static void
+dropTables(Ndb* pMyNdb)
+{
+  int i;
+  for (i = 0; i < MAXTABLES; i++)
+  {
+    ndbout << "Dropping table " << tableName[i] << "..." << endl;
+    pMyNdb->getDictionary()->dropTable(tableName[i]);
   }
 }
 
@@ -893,8 +1067,8 @@ createTables(Ndb* pMyNdb){
   NdbSchemaOp           *MySchemaOp;
   int                   check;
 
-  if (theTableCreateFlag == 0) {
-    for(int i=0; i < 1 ;i++) {
+  if (theTableCreateFlag == 0 || tRunType == RunCreateTable) {
+    for(int i=0; i < MAXTABLES ;i++) {
       ndbout << "Creating " << tableName[i] << "..." << endl;
       MySchemaTransaction = NdbSchemaCon::startSchemaTrans(pMyNdb);
       
@@ -953,31 +1127,35 @@ createTables(Ndb* pMyNdb){
         return -1;
       
       NdbSchemaCon::closeSchemaTrans(MySchemaTransaction);
+    }
+  }
+  if (tNdbRecord)
+  {
+    for(int i=0; i < MAXTABLES ;i++) {
+      NdbDictionary::Dictionary* pDict = pMyNdb->getDictionary();
+      const NdbDictionary::Table * pTab = pDict->getTable(tableName[i]);
 
-      if (tNdbRecord)
+      if (pTab == NULL){
+        error_handler(pDict->getNdbError());
+        return -1;
+      }
+      int off = 0;
+      Vector<NdbDictionary::RecordSpecification> spec;
+      for (Uint32 j = 0; j<unsigned(pTab->getNoOfColumns()); j++)
       {
-	NdbDictionary::Dictionary* pDict = pMyNdb->getDictionary();
-	const NdbDictionary::Table * pTab = pDict->getTable(tableName[i]);
-	
-	int off = 0;
-	Vector<NdbDictionary::RecordSpecification> spec;
-	for (Uint32 j = 0; j<unsigned(pTab->getNoOfColumns()); j++)
-	{
-	  NdbDictionary::RecordSpecification r0;
-	  r0.column = pTab->getColumn(j);
-	  r0.offset = off;
-	  off += (r0.column->getSizeInBytes() + 3) & ~(Uint32)3;
-	  spec.push_back(r0);
-	}
-	g_record[i] = 
+        NdbDictionary::RecordSpecification r0;
+        r0.column = pTab->getColumn(j);
+        r0.offset = off;
+        off += (r0.column->getSizeInBytes() + 3) & ~(Uint32)3;
+        spec.push_back(r0);
+      }
+      g_record[i] = 
 	  pDict->createRecord(pTab, spec.getBase(), 
 			      spec.size(),
 			      sizeof(NdbDictionary::RecordSpecification));
-	assert(g_record[i]);
-      }
+      assert(g_record[i]);
     }
   }
-  
   return 0;
 }
 
@@ -996,6 +1174,14 @@ bool error_handler(const NdbError & err)
   return false ; // return false to abort
 }
 
+static void
+setAggregateRun(void)
+{
+  tNoOfLoops = 1;
+  tExtraReadLoop = 0;
+  theTableCreateFlag = 1;
+}
+
 static
 int 
 readArguments(int argc, const char** argv){
@@ -1110,6 +1296,43 @@ readArguments(int argc, const char** arg
       tExtraReadLoop = atoi(argv[i+1]);
     } else if (strcmp(argv[i], "-con") == 0){
       tConnections = atoi(argv[i+1]);
+    } else if (strcmp(argv[i], "-insert") == 0){
+      setAggregateRun();
+      tRunType = RunInsert;
+      argc++;
+      i--;
+    } else if (strcmp(argv[i], "-read") == 0){
+      setAggregateRun();
+      tRunType = RunRead;
+      argc++;
+      i--;
+    } else if (strcmp(argv[i], "-update") == 0){
+      setAggregateRun();
+      tRunType = RunUpdate;
+      argc++;
+      i--;
+    } else if (strcmp(argv[i], "-delete") == 0){
+      setAggregateRun();
+      tRunType = RunDelete;
+      argc++;
+      i--;
+    } else if (strcmp(argv[i], "-create_table") == 0){
+      tRunType = RunCreateTable;
+      argc++;
+      i--;
+    } else if (strcmp(argv[i], "-drop_table") == 0){
+      tRunType = RunDropTable;
+      argc++;
+      i--;
+    } else if (strcmp(argv[i], "-warmup_time") == 0){
+      tWarmupTime = atoi(argv[i+1]);
+    } else if (strcmp(argv[i], "-execution_time") == 0){
+      tExecutionTime = atoi(argv[i+1]);
+    } else if (strcmp(argv[i], "-cooldown_time") == 0){
+      tCooldownTime = atoi(argv[i+1]);
+    } else if (strcmp(argv[i], "-table") == 0){
+      tStdTableNum = atoi(argv[i+1]);
+      theStdTableNameFlag = 1;
     } else {
       return -1;
     }
@@ -1131,7 +1354,6 @@ readArguments(int argc, const char** arg
 static
 void
 input_error(){
-  
   ndbout_c("FLEXASYNCH");
   ndbout_c("   Perform benchmark of insert, update and delete transactions");
   ndbout_c(" ");
@@ -1156,7 +1378,18 @@ input_error(){
   ndbout_c("   -force Force send when communicating");
   ndbout_c("   -non_adaptive Send at a 10 millisecond interval");
   ndbout_c("   -local Number of part, only use keys in one part out of 16");
-  ndbout_c("   -ndbrecord");
+  ndbout_c("   -ndbrecord Use NDB Record");
+  ndbout_c("   -r Number of extra loops");
+  ndbout_c("   -insert Only run inserts on standard table");
+  ndbout_c("   -read Only run reads on standard table");
+  ndbout_c("   -update Only run updates on standard table");
+  ndbout_c("   -delete Only run deletes on standard table");
+  ndbout_c("   -create_table Only run Create Table of standard table");
+  ndbout_c("   -drop_table Only run Drop Table on standard table");
+  ndbout_c("   -warmup_time Warmup time before measurement starts");
+  ndbout_c("   -execution_time Execution time during which measurement is done");
+  ndbout_c("   -cooldown_time Cooldown time after measurement completes");
+  ndbout_c("   -table Number of standard table, default 0");
 }
   
 template class Vector<NdbDictionary::RecordSpecification>;

=== modified file 'storage/ndb/tools/ndbinfo_sql.cpp'
--- a/storage/ndb/tools/ndbinfo_sql.cpp	2011-10-28 09:56:57 +0000
+++ b/storage/ndb/tools/ndbinfo_sql.cpp	2011-11-17 08:49:40 +0000
@@ -134,6 +134,7 @@ struct view {
     "  WHEN 21 THEN \"SCAN_ROWS_RETURNED\""
     "  WHEN 22 THEN \"PRUNED_RANGE_SCANS_RECEIVED\""
     "  WHEN 23 THEN \"CONST_PRUNED_RANGE_SCANS_RECEIVED\""
+    "  WHEN 24 THEN \"LOCAL_READS\""
     "  ELSE \"<unknown>\" "
     " END AS counter_name, "
     "val "

No bundle (reason: useless for push emails).