Dear MySQL users,
MySQL Cluster 7.2.10 has been released and can be downloaded from
http://www.mysql.com/downloads/cluster/, where you will also find "Quick Start" guides
to help you get your first MySQL Cluster database up and running. To configure more
sophisticated Clusters, try out the prototype, browser-based MySQL Cluster Auto-Installer.
The release notes are available from
MySQL Cluster enables users to meet the database challenges of next generation web, cloud,
and communications services with uncompromising scalability, uptime and agility. An
introductory demo video can be viewed at
http://www.oracle.com/pls/ebn/swf_viewer.load?p_shows_id=11464419 and more details can be
found at http://www.mysql.com/products/cluster/
Best Regards, Andrew Morgan.
MySQL High Availability Principal Product Manager
Functionality Added or Changed
Added several new columns to the transporters table, and new counters to the counters
table, of the ndbinfo information database. The information provided may help in
troubleshooting transporter overloads and problems with send buffer memory allocation.
For more information, see the descriptions of these tables. (Bug #15935206)
To provide information that can help in assessing the current state of arbitration in a
MySQL Cluster, as well as in diagnosing and correcting arbitration problems, three new
tables (membership, arbitrator_validity_detail, and arbitrator_validity_summary) have
been added to the ndbinfo information database. (Bug #13336549)
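For reference, these tables can be queried like any other ndbinfo view; a minimal sketch (SELECT * is used here because the column sets are release-specific):

```sql
-- Each node's view of cluster membership and its chosen arbitrator.
SELECT * FROM ndbinfo.membership;

-- Whether the nodes agree on a valid arbitrator.
SELECT * FROM ndbinfo.arbitrator_validity_summary;
```

Disagreement between nodes in arbitrator_validity_summary is the signal that arbitration needs attention.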
The multithreaded job scheduler could be suspended prematurely when there were
insufficient free job buffers to allow the threads to continue. The general rule in the
job thread is that any queued messages should be sent before the thread is allowed to
suspend itself, which guarantees that no other threads or API clients are kept waiting
for operations which have already completed. However, the number of messages in the queue
was specified incorrectly, leading to increased latency in delivering signals, sluggish
response, or otherwise suboptimal performance. (Bug #15908684)
The management client command ALL REPORT BackupStatus failed with an error when used with
data nodes having multiple LQH worker threads (ndbmtd data nodes). The issue did not
affect the node_id REPORT BackupStatus form of this command. (Bug #15908907)
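To illustrate the two forms of the command in an ndb_mgm management client session (node id 5 below is only an example):

```
ndb_mgm> ALL REPORT BackupStatus
ndb_mgm> 5 REPORT BackupStatus
```

The first form queries every data node; the second queries only the node with the given node id.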
The setting for the DefaultOperationRedoProblemAction API node configuration parameter was
ignored, and the default value used instead. (Bug #15855588)
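As a sketch of where this parameter lives, it is set in an API node ([mysqld]) section of the cluster's config.ini; the node id and value below are illustrative:

```ini
# config.ini fragment -- API node section (values illustrative)
[mysqld]
NodeId=50
# What operations do when redo logging is overloaded: queue or abort
DefaultOperationRedoProblemAction=queue
```

With this fix, the configured value is honored rather than silently falling back to the default.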
Node failure during the dropping of a table could lead to the node hanging when attempting
to restart. (Bug #14787522)
The recently added LCP fragment scan watchdog occasionally reported problems with LCP
fragment scans having very high table id, fragment id, and row count values.
This was due to the watchdog not accounting for the time spent draining the backup buffer
used to buffer rows before writing to the fragment checkpoint file.
Now, in the final stage of an LCP fragment scan, the watchdog switches from monitoring
rows scanned to monitoring the buffer size in bytes. The buffer size should decrease as
data is written to the file, after which the file should be promptly closed. (Bug
Job buffers act as the internal queues for work requests (signals) between block threads
in ndbmtd and could be exhausted if too many signals are sent to a block thread.
When performing pushed joins, the DBSPJ kernel block can execute multiple branches of the
query tree in parallel, which means that the number of signals being sent can increase as
more branches are executed. If DBSPJ execution cannot be completed before the job buffers
are filled, the data node can fail.
This problem could be identified by multiple instances of the message sleeploop 10!! in
the cluster out log, possibly followed by job buffer full. If the job buffers overflowed
more gradually, there could also be failures due to error 1205 (Lock wait timeout
exceeded), shutdowns initiated by the watchdog timer, or other timeout related errors.
These were due to the slowdown caused by the 'sleeploop'.
Normally up to a 1:4 fanout ratio between consumed and produced signals is permitted.
However, since there can be a potentially unlimited number of rows returned from the scan
(and multiple scans of this type executing in parallel), any ratio greater than 1:1 in
such cases makes it possible to overflow the job buffers.
The fix for this issue defers any lookup child that would otherwise have been executed in
parallel with another, resuming it when the parallel child completes one of its own
requests. This restricts the fanout ratio for bushy scan-lookup joins to 1:1. (Bug
References: See also Bug #14648712.
Under certain rare circumstances, MySQL Cluster data nodes could crash in conjunction with
a configuration change on the data nodes from a single-threaded to a multi-threaded
transaction coordinator (using the ThreadConfig configuration parameter for ndbmtd). The
problem occurred when a mysqld that had been started prior to the change was shut down
following the rolling restart of the data nodes required to effect the configuration
change. (Bug #14609774)
Cluster Replication: Setting slave_allow_batching had no effect. (Bug #15953730)
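For reference, slave_allow_batching is a global system variable set on the replica mysqld; with this fix the setting now takes effect:

```sql
-- On the replica, enable batched application of replicated changes.
SET GLOBAL slave_allow_batching = ON;

-- Confirm the current value.
SHOW VARIABLES LIKE 'slave_allow_batching';
```

Batching can significantly improve replication throughput for NDB tables by applying row changes in batches rather than one at a time.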
MySQL Cluster 7.2.10 has been released (posted by Andrew Morgan, 7 Jan)