From: Lars Tangvald
Date: January 23 2019 6:56am
Subject: MySQL Cluster 7.6.9 has been released
Dear MySQL Users,

MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:

   - In-Memory storage - Real-time performance (with optional
     checkpointing to disk)
   - Transparent Auto-Sharding - Read & write scalability
   - Active-Active/Multi-Master geographic replication

   - 99.999% High Availability with no single point of failure
     and on-line maintenance
   - NoSQL and SQL APIs (including C++, Java, http, Memcached
     and JavaScript/Node.js)

MySQL Cluster 7.6.9 has been released and can be downloaded from

   http://www.mysql.com/downloads/cluster/

where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.

MySQL Cluster 7.6 is also available from our repository for Linux
platforms; go here for details:

   http://dev.mysql.com/downloads/repo/

The release notes are available from

   http://dev.mysql.com/doc/relnotes/mysql-cluster/7.6/en/index.html

MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.

More details can be found at

   http://www.mysql.com/products/cluster/

Enjoy !


==============================================================================
Changes in MySQL NDB Cluster 7.6.9 (5.7.25-ndb-7.6.9) (2019-01-22, 
General Availability)

    MySQL NDB Cluster 7.6.9 is a new release of NDB 7.6, based on
    MySQL Server 5.7 and including features in version 7.6 of the
    NDB storage engine, as well as fixing recently discovered
    bugs in previous NDB Cluster releases.

    Obtaining NDB Cluster 7.6.  NDB Cluster 7.6 source code and
    binaries can be obtained from
    https://dev.mysql.com/downloads/cluster/.

    For an overview of changes made in NDB Cluster 7.6, see What
    is New in NDB Cluster 7.6
(http://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-what-is-new-7-6.html).

    This release also incorporates all bug fixes and changes made
    in previous NDB Cluster releases, as well as all bug fixes
    and feature changes which were added in mainline MySQL 5.7
    through MySQL 5.7.25 (see Changes in MySQL 5.7.25
    (2019-01-21, General Availability)
(http://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-25.html)).

Bugs Fixed


      * Important Change: When restoring to a cluster using data
        node IDs different from those in the original cluster,
        ndb_restore tried to open files corresponding to node ID
        0. To keep this from happening, the --nodeid and
        --backupid options---neither of which has a default
        value---are both now explicitly required when invoking
        ndb_restore. (Bug #28813708)
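        As a minimal sketch of the new requirement (the backup ID,
        node ID, backup path, and the additional --restore_data and
        --backup_path options are illustrative additions, not taken
        from this note), an invocation now always names both
        options explicitly:

           ndb_restore --nodeid=2 --backupid=1 \
               --restore_data --backup_path=/backups/BACKUP/BACKUP-1

        Omitting either --nodeid or --backupid now causes
        ndb_restore to report an error instead of attempting to
        open files for node ID 0.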

      * Packaging; MySQL NDB ClusterJ: libndbclient was missing
        from builds on some platforms. (Bug #28997603)

      * NDB Replication: A DROP DATABASE operation involving
        certain very large tables could lead to an unplanned
        shutdown of the cluster. (Bug #28855062)

      * NDB Replication: When writes on the master---done in such
        a way that multiple changes affecting BLOB column values
        belonging to the same primary key were part of the same
        epoch---were replicated to the slave, Error 1022 occurred
        due to constraint violations in the NDB$BLOB_id_part
        table. (Bug #28746560)
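        One pattern that can produce writes of this kind (the
        table and statements below are hypothetical, used only to
        illustrate several BLOB changes against the same primary
        key landing in one epoch) is:

           CREATE TABLE t (pk INT PRIMARY KEY, b BLOB) ENGINE=NDB;
           BEGIN;
           INSERT INTO t VALUES (1, REPEAT('x', 100000));
           UPDATE t SET b = REPEAT('y', 100000) WHERE pk = 1;
           COMMIT;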

      * NDB Cluster APIs: When the NDB kernel's SUMA block sends
        a TE_ALTER event, it does not keep track of when all
        fragments of the event are sent. When NDB receives the
        event, it buffers the fragments, and processes the event
        when all fragments have arrived. An issue could possibly
        arise for very large table definitions, when the time
        between transmission and reception could span multiple
        epochs; during this time, SUMA could send a
        SUB_GCP_COMPLETE_REP signal to indicate that it has sent
        all data for an epoch, even though in this case that is
        not entirely true since there may be fragments of a
        TE_ALTER event still waiting on the data node to be sent.
        Reception of the SUB_GCP_COMPLETE_REP leads to closing
        the buffers for that epoch. Thus, when TE_ALTER finally
        arrives, NDB assumes that it is a duplicate from an
        earlier epoch, and silently discards it.
        We fix the problem by making sure that the SUMA kernel
        block never sends a SUB_GCP_COMPLETE_REP for any epoch in
        which there are unsent fragments for a SUB_TABLE_DATA
        signal.
        This issue could have an impact on NDB API applications
        making use of TE_ALTER events. (SQL nodes do not make any
        use of TE_ALTER events and so they and applications using
        them were not affected.) (Bug #28836474)

      * Where a data node was restarted after a configuration
        change whose result was a decrease in the sum of
        MaxNoOfTables, MaxNoOfOrderedIndexes, and
        MaxNoOfUniqueHashIndexes, it sometimes failed with a
        misleading error message which suggested both a temporary
        error and a bug, neither of which was the case.
        The failure itself is expected, being due to the fact
        that there is at least one table object with an ID
        greater than the (new) sum of the parameters just
        mentioned, and that this table cannot be restored, since
        the maximum ID allowed is limited by that sum. The error
        message has been changed to reflect this, and now
        indicates that this is a permanent error due to a
        configuration problem. (Bug #28884880)
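        For reference, all three parameters are set in the data
        node section of config.ini; the values below are purely
        illustrative and are not taken from this note:

           [ndbd default]
           MaxNoOfTables=128
           MaxNoOfOrderedIndexes=128
           MaxNoOfUniqueHashIndexes=64

        Lowering these values so that their sum falls below the
        highest table object ID already in use is the situation
        that produces the (now clearly reported) permanent error.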

      * When only the management server but no data nodes were
        started, RESTART ALL timed out and eventually failed.
        This was because, as part of a restart, ndb_mgmd starts a
        timer, sends a STOP_REQ signal to all the data nodes, and
        waits for all of them to reach node state SL_CMVMI. The
        issue arose because, with no data nodes running, no
        STOP_REQ signals were ever sent, and thus no data nodes
        reached SL_CMVMI. This meant that the timer always
        expired, causing the restart to fail.
        (Bug #28728485, Bug #28698831)
        References: See also: Bug #11757421.

      * Running ANALYZE TABLE on an NDB table with an index
        longer than the supported maximum length caused data
        nodes to fail. (Bug #28714864)

      * It was possible in certain cases for nodes to hang during
        an initial restart. (Bug #28698831)
        References: See also: Bug #27622643.

      * The output of ndb_config --configinfo --xml --query-all
        now shows that configuration changes for the ThreadConfig
        and MaxNoOfExecutionThreads data node parameters require
        system initial restarts (restart="system"
        initial="true"). (Bug #28494286)

      * API nodes should observe that a node is moving through
        SL_STOPPING phases (graceful stop) and stop using the
        node for new transactions, which minimizes potential
        disruption in the later phases of the node shutdown
        process. API nodes were previously informed of node state
        changes only via periodic heartbeat signals, and so might
        not be able to avoid interacting with the node shutting
        down. This generated unnecessary failures when the
        heartbeat interval was long. Now when a data node is
        being gracefully stopped, all API nodes are notified
        directly, allowing them to experience minimal disruption.
        (Bug #28380808)

      * Executing SELECT * FROM INFORMATION_SCHEMA.TABLES caused
        SQL nodes to restart in some cases. (Bug #27613173)

      * When scanning a row using a TUP scan or ACC scan, or when
        performing a read using the primary key, it is possible
        to start a read of the row and hit a real-time break
        during which it is necessary to wait for the page to
        become available in memory. If the row is deleted while
        this wait is in progress, its checksum is invalidated, so
        that when the page request returns later, the attempt to
        read the row fails due to an invalid checksum.
        This problem is solved by introducing a new tuple header
        DELETE_WAIT flag, which is checked before starting any
        row scan or PK read operations on a row whose disk data
        pages are not yet available, and cleared when the row is
        finally committed. (Bug #27584165)
        References: See also: Bug #28868412.

      * When running a cluster with 4 or more data nodes under
        very high loads, data nodes could sometimes fail with
        Error 899 Rowid already allocated. (Bug #25960230)

      * mysqld shut down unexpectedly when a purge of the binary
        log was requested before the server had completely
        started, and it was thus not yet ready to delete rows
        from the ndb_binlog_index table. Now when this occurs,
        requests for any needed purges of the ndb_binlog_index
        table are saved in a queue and held for execution when
        the server has completely started. (Bug #25817834)
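        A purge may be requested explicitly, for example with a
        statement like the following (the log file name is
        illustrative), or triggered automatically at startup by
        settings such as expire_logs_days; either kind of request
        arriving before the server is ready to touch
        ndb_binlog_index is now queued as described above:

           PURGE BINARY LOGS TO 'binlog.000010';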

      * When starting, a data node copies metadata, while a local
        checkpoint updates metadata. To avoid any conflict, any
        ongoing LCP activity is paused while metadata is being
        copied. An issue arose when a local checkpoint was paused
        on a given node, and another node that was also
        restarting checked for a complete LCP on this node; the
        check actually caused the LCP to be completed before
        copying of metadata was complete and so ended the pause
        prematurely. Now in such cases, the LCP completion check
        waits to complete a paused LCP until copying of metadata
        is finished and the pause ends as expected, within the
        LCP in which it began. (Bug #24827685)

      * Asynchronous disconnection of mysqld from the cluster
        caused any subsequent attempt to start an NDB API
        transaction to fail. If this occurred during a bulk
        delete operation, the SQL layer called
        HA::end_bulk_delete(), whose implementation by
        ha_ndbcluster assumed that a transaction had been
        started, and could fail if this was not the case. This
        problem is fixed by checking that the transaction pointer
        used by this method is set before referencing it.
        (Bug #20116393)

      * NdbScanFilter did not always handle NULL according to the
        SQL standard, which could result in non-qualifying rows
        being sent to the MySQL server to be filtered there, a
        step that would otherwise not have been necessary.
        (Bug #92407, Bug #28643463)
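        The SQL-standard behavior at issue is that a comparison
        involving NULL evaluates to UNKNOWN rather than TRUE, so
        rows with NULL in the compared column must not qualify.
        A hypothetical illustration (table and data invented for
        this example):

           CREATE TABLE t (pk INT PRIMARY KEY, x INT) ENGINE=NDB;
           INSERT INTO t VALUES (1, 10), (2, NULL);
           -- Only the pk=1 row qualifies; the pk=2 row (x IS NULL)
           -- should be excluded by the scan filter rather than
           -- returned for the SQL node to discard.
           SELECT * FROM t WHERE x < 20;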

      * NDB attempted to use condition pushdown on greater-than
        (>) and less-than (<) comparisons with ENUM column
        values, but this could cause rows to be omitted from the
        result. Now such comparisons are no longer pushed down.
        Comparisons for equality (=) and inequality (<> / !=)
        with ENUM values are not affected by this change, and
        conditions including these comparisons can still be
        pushed down. (Bug #92321, Bug #28610217)
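        A sketch of the distinction (the table, column, and
        values are hypothetical; with engine condition pushdown
        enabled, EXPLAIN typically notes a pushed condition in
        its Extra column):

           CREATE TABLE t (
             pk INT PRIMARY KEY,
             c  ENUM('small','medium','large')
           ) ENGINE=NDB;

           -- Still eligible for pushdown after this fix:
           EXPLAIN SELECT * FROM t WHERE c = 'medium';
           EXPLAIN SELECT * FROM t WHERE c <> 'small';

           -- No longer pushed down; evaluated on the SQL node:
           EXPLAIN SELECT * FROM t WHERE c > 'small';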

