List: Announcements
From: Balasubramanian Kandasamy
Date: October 18 2016 4:32pm
Subject: MySQL Cluster 7.5.4 has been released (part 2/2)
[This is part 2 of the announcement]

      * When performing online reorganization of tables, unique
        indexes were not included in the reorganization. (Bug #13714258)

      * Local reads of unique index and blob tables did not work
        correctly for fully replicated tables using more than one node
        group. (Bug #83016, Bug #24675602)

      * The effects of an ALTER TABLE
        statement changing a table to use READ_BACKUP were not preserved
        after a restart of the cluster. (Bug #82812, Bug #24570439)

      * Using FOR_RP_BY_NODE or FOR_RP_BY_LDM for
        PARTITION_BALANCE did not work with fully replicated tables. (Bug
        #82801, Bug #24565265)

      * Changes to READ_BACKUP settings were not propagated to
        internal blob tables. (Bug #82788, Bug #24558232)

      * The count displayed by the c_exec column in the
        ndbinfo.threadstat table was incomplete. (Bug #82635, Bug
        #24482218)

      * The default PARTITION_BALANCE setting for NDB
        tables created with READ_BACKUP=1 (see Setting NDB_TABLE
        options in table comments) has been changed from FOR_RA_BY_LDM
        to FOR_RP_BY_LDM. (Bug #82634, Bug #24482114)
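
        As an illustration, the sketch below creates a table that now
        receives the FOR_RP_BY_LDM default (the explicit
        PARTITION_BALANCE merely makes the new default visible). It is
        a minimal example using the MySQL C API (libmysqlclient); the
        connection parameters are placeholders.

          #include <mysql.h>
          #include <cstdio>

          int main() {
              MYSQL *conn = mysql_init(nullptr);
              // Placeholder credentials; adjust for your deployment.
              if (!mysql_real_connect(conn, "127.0.0.1", "user", "pass",
                                      "test", 3306, nullptr, 0)) {
                  std::fprintf(stderr, "connect: %s\n", mysql_error(conn));
                  return 1;
              }
              // NDB_TABLE options are passed in the table comment.
              const char *ddl =
                  "CREATE TABLE t1 (pk INT PRIMARY KEY, val INT) ENGINE=NDB "
                  "COMMENT='NDB_TABLE=READ_BACKUP=1,"
                  "PARTITION_BALANCE=FOR_RP_BY_LDM'";
              if (mysql_query(conn, ddl))
                  std::fprintf(stderr, "DDL: %s\n", mysql_error(conn));
              mysql_close(conn);
              return 0;
          }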

      * The internal function ndbcluster_binlog_wait(), which
        provides a way to make sure that all events originating from a
        given thread arrive in the binary log, is used by SHOW BINLOG
        EVENTS as well as when resetting the binary log. This function
        waits on an injector condition while the latest global epoch
        handled by NDB is more recent than the epoch last committed in
        this session, which means that this condition must be signalled
        whenever the binary log thread finishes handling a new latest
        global epoch. Inspection of the code revealed that this
        signalling was missing, so that, instead of being awakened
        whenever a new latest global epoch completed (roughly every
        100 ms), client threads waited for the maximum timeout (1
        second). This fix adds the missing injector condition
        signalling, and also changes it to a condition broadcast to
        make sure that all client threads are alerted. (Bug #82630,
        Bug #24481551)
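
        The following is a generic sketch (not the actual NDB source)
        of the signalling pattern this fix restores: the producer
        broadcasts after publishing a new epoch, so waiters wake at
        once instead of sleeping out the 1-second maximum timeout.
        All names here are illustrative.

          #include <chrono>
          #include <condition_variable>
          #include <cstdint>
          #include <mutex>

          std::mutex m;
          std::condition_variable cv;
          uint64_t latest_epoch = 0;

          // Binary log thread: publish a newly completed global epoch.
          void publish_epoch(uint64_t e) {
              { std::lock_guard<std::mutex> lk(m); latest_epoch = e; }
              cv.notify_all();  // broadcast so every client thread wakes
          }

          // Client thread: wait until the wanted epoch is in the binlog.
          bool wait_for_epoch(uint64_t wanted) {
              std::unique_lock<std::mutex> lk(m);
              // Without the notify above, this predicate is re-checked
              // only when the timeout expires -- the bug being fixed.
              return cv.wait_for(lk, std::chrono::seconds(1),
                                 [&] { return latest_epoch >= wanted; });
          }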

      * Fully replicated internal foreign key or unique index
        triggers could fire multiple times, which led to aborted
        transactions for an insert or a delete operation. This happened
        due to redundant deferred constraint triggers firing during
        pre-commit. Now in such cases, we ensure that only triggers
        specific to unique indexes are fired in this stage. (Bug #82570,
        Bug #24454378)

      * Backups potentially could fail when using fully
        replicated tables due to their high usage (and subsequent
        exhaustion) of internal trigger resources. To compensate for
        this, the amount of memory reserved in the NDB kernel for
        internal triggers has been increased, and is now based in part on
        the maximum number of tables. (Bug #82569, Bug #24454262)
        References: See also: Bug #23539733.

      * In the DBTC function executeFullyReplicatedTrigger() in
        the NDB kernel, an incorrect state check sometimes led to
        failure handling when no failure had actually occurred. (Bug
        #82568, Bug #24454093) References: See also: Bug #23539733.

      * When an internal trigger operation returned from LQHKEYREQ
        with a failure in LQHKEYREF, no check was made as to whether
        the trigger was fully replicated, so fully replicated triggers
        were never handled. (Bug #82566, Bug #24453949) References:
        See also: Bug #23539733.

      * When READ_BACKUP had not previously been set, then was
        set to 1 as part of an ALTER TABLE ... ALGORITHM=INPLACE
        statement, the change was not propagated to internal unique
        index tables or BLOB tables. (Bug #82491, Bug #24424459)

      * Distribution of MySQL privileges was incomplete due to
        the failure of the mysql_cluster_move_privileges() procedure to
        convert the mysql.proxies_priv table to NDB. The root cause of
        this was an ALTER TABLE ... ENGINE=NDB statement which
        sometimes failed when this table contained illegal TIMESTAMP
        values. (Bug #82464, Bug #24430209)

      * The internal variable m_max_warning_level was not
        initialized in storage/ndb/src/kernel/blocks/thrman.cpp.  This
        sometimes led to node failures during a restart when the
        uninitialized value was treated as 0. (Bug #82053, Bug #23717703)

      * Usually, when performing a system restart, all nodes are
        restored from redo logs and local checkpoints (LCPs), but in some
        cases some node might require a copy phase before it is finished
        with the system restart. When this happens, the node in question
        waits for all other nodes to start up completely before
        performing the copy phase.  Notwithstanding the fact that it is
        thus possible to begin a local checkpoint before reaching start
        phase 4 in the DBDIH block, LCP status was initialized to IDLE in
        all cases, which could lead to a node failure. Now, when
        performing this variant of a system restart, the LCP status is no
        longer initialized. (Bug #82050, Bug #23717479)

      * After adding a new node group online and executing ALTER
        TABLE ... REORGANIZE PARTITION, partition IDs were not set
        correctly for new fragments. In a related change made as part
        of fixing this issue, ndb_desc -p now displays rows relating
        to partitions in order of partition ID. (Bug #82037, Bug
        #23710999)

      * When executing STOP BACKUP, it is sometimes possible that
        a few bytes are written to the backup data file before the
        backup process actually terminates. When using O_DIRECT, this
        resulted in the wrong error code being returned. Now in such
        cases, nothing is written to O_DIRECT files unless the
        alignment is correct. (Bug #82017, Bug #23701911)
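
        The alignment rule behind this fix can be seen in a minimal
        Linux-only sketch (file name and block size are illustrative):
        with O_DIRECT, both the buffer address and the write length
        must be multiples of the block size, or write() fails with
        EINVAL.

          #include <fcntl.h>
          #include <unistd.h>
          #include <cstdlib>
          #include <cstring>

          int main() {
              const size_t kBlock = 4096;  // typical device block size
              int fd = open("backup.dat",
                            O_CREAT | O_WRONLY | O_DIRECT, 0644);
              if (fd < 0) return 1;

              void *buf = nullptr;
              // The buffer itself must be block-aligned for O_DIRECT.
              if (posix_memalign(&buf, kBlock, kBlock) != 0) return 1;
              std::memset(buf, 0, kBlock);

              // A full, aligned block is written; an unaligned tail
              // (a few bytes at STOP BACKUP) would fail with EINVAL,
              // so the fix skips such writes entirely.
              ssize_t n = write(fd, buf, kBlock);

              free(buf);
              close(fd);
              return n == (ssize_t)kBlock ? 0 : 1;
          }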

      * When transaction coordinator (TC) connection records were
        used up, it was possible to handle scans only for local
        checkpoints and backups, so that operations coming from the
        DBUTIL block and other operations that reorganize metadata
        were unnecessarily blocked. In addition, such operations were
        not always retried when TC records were exhausted. To fix this
        issue, a number of operation records are now earmarked for
        DBUTIL usage, as well as for LCP and backup usage, so that the
        latter operations are not negatively impacted by operations
        coming from DBUTIL. For more information, see The DBUTIL
        Block. (Bug #81992, Bug #23642198)

      * Operations performing multiple updates of the same row
        within the same transaction could sometimes lead to corruption of
        lengths of page entries. (Bug #81938, Bug #23619031)

      * During a node restart, a fragment can be restored using
        information obtained from local checkpoints (LCPs); up to 2
        restorable LCPs are retained at any given time. When an LCP is
        reported to the DIH kernel block as completed, but the node fails
        before the last global checkpoint index written into this LCP has
        actually completed, the latest LCP is not restorable. Although it
        should be possible to use the older LCP, it was instead assumed
        that no LCP existed for the fragment, which slowed the restart
        process. Now in such cases, the older, restorable LCP is used,
        which should help decrease long node restart times.  (Bug #81894,
        Bug #23602217)

      * Optimized node selection (ndb_optimized_node_selection)
        was not respected by ndb_data_node_neighbour when the latter
        was enabled. (Bug #81778, Bug #23555834)

      * NDB no longer retries a global schema lock if this has
        failed due to a timeout (default 3000ms) and there is the
        potential for this lock request to participate in a metadata
        lock-global schema lock deadlock. Now in such cases it selects
        itself as a "victim", and returns the decision to the requestor
        of the metadata lock, which then handles the request as a failed
        lock request (preferable to remaining deadlocked indefinitely),
        or, where a deadlock handler exists, retries the metadata
        lock-global schema lock. (Bug #81775, Bug #23553267)

      * Two issues were found in the implementation of hash
        maps---used by NDB for mapping a table row's hash value to a
        partition---for fully replicated tables:

          1. Hash maps were selected based on the number of fragments
          rather than the number of partitions. This was previously
          undetected due to the fact that, for other kinds of tables,
          these values are always the same.

          2. The hash map was employed as a partition-to-partition map,
          using the table row's hash value modulo the partition count
          as input.

        This fix addresses both of the problems just described.
        (Bug #81757, Bug #23544220) References: See also: Bug #81761,
        Bug #23547525, Bug #23553996.
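
        The corrected selection can be illustrated with a short sketch
        (not NDB's actual implementation): the hash map is sized from
        the partition count and maps a row's hash value to a partition
        ID.

          #include <cstdint>
          #include <vector>

          uint32_t partition_for_row(
                  uint64_t row_hash,
                  const std::vector<uint32_t> &hash_map) {
              // hash_map.size() derives from the number of partitions,
              // not fragments; for fully replicated tables the two
              // counts differ, which is what exposed both bugs.
              return hash_map[row_hash % hash_map.size()];
          }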

      * Using mysqld together with --initialize
        and --ndbcluster
        led to problems
        later when attempting to use mysql_upgrade. When running with
        --initialize, the server does not require NDB
        support, and having it enabled can lead to issues with ndbinfo
        tables. To prevent this from happening, using the
        --initialize option now causes mysqld to ignore the --ndbcluster
        option if the latter is also specified.  This issue affects
        upgrades from MySQL Cluster NDB 7.5.2 or 7.5.3 only. In cases
        where such upgrades fail for the reasons outlined previously, you
        can work around the issue by deleting all .frm files in the
        data/ndbinfo directory following a rolling restart of the entire
        cluster, then running mysql_upgrade. (Bug #81689, Bug #23518923)
        References: See also: Bug #82724, Bug #24521927.

      * While a mysqld was waiting to connect to the management
        server during initialization of the NDB handler, it was not
        possible to shut down the mysqld. If the mysqld was not able to
        make the connection, it could become stuck at this point. This
        was due to an internal wait condition in the utility and index
        statistics threads that could go unmet indefinitely. This
        condition has been augmented with a maximum timeout of 1 second,
        which makes it more likely that these threads terminate
        themselves properly in such cases.  In addition, the connection
        thread waiting for the management server connection performed 2
        sleeps in the case just described, instead of 1 sleep, as
        intended.  (Bug #81585, Bug #23343673)

      * A table-copying operation on a fully replicated table did
        not copy the associated trigger ID, leading to a failure in
        the DBDICT kernel block. (Bug #81544, Bug #23330359)

      * The list of deferred tree node lookup requests created
        when preparing to abort a DBSPJ request was not cleared when
        the abort was complete, which could lead to deferred operations
        being started even after the DBSPJ request aborted. (Bug
        #81355, Bug #23251423) References: See also: Bug #23048816.

      * Error and abort handling in Dbspj::execTRANSID_AI() was
        implemented such that its abort() method was called before
        processing of the incoming signal was complete.  Since this
        method sends signals to the LDM, this partly overwrote the
        contents of the signal which was later required by
        execTRANSID_AI(). This could result in aborted DBSPJ requests
        cleaning up their allocated resources too early, or not at all.
        (Bug #81353, Bug #23251145) References: See also: Bug #23048816.

      * The read backup feature added in MySQL Cluster NDB 7.5.2,
        which makes it possible to read from backup replicas, was not
        used for reads with lock, or for reads of BLOB tables or
        unique key tables where locks were upgraded to reads with
        lock. Now the TCKEYREQ and SCAN_TABREQ signals use a flag to
        convey information about such locks, making it possible to
        read from a backup replica when a read lock was upgraded due
        to being the read of the base table for a BLOB table, or due
        to being the read for a unique key. (Bug #80861, Bug
        #23001841)

      * Primary replicas of partitioned tables were not
        distributed evenly among node groups and local data managers.  As
        part of the fix for this issue, the maximum number of node groups
        supported for a single MySQL Cluster, which was previously not
        determined, is now set at 48 (MAX_NDB_NODE_GROUPS). (Bug #80845,
        Bug #22996305)

      * Several object constructors and similar functions in the
        NDB codebase did not always perform sanity checks when
        creating new instances. These checks are now performed under
        such circumstances. (Bug #77408, Bug #21286722)

      * Cluster API: Reuse of transaction IDs could occur when
        Ndb objects were created and deleted concurrently. As part of
        this fix, the NDB API methods lock_ndb_objects() and
        unlock_ndb_objects() are now declared as const. (Bug
        #23709232)
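
        A minimal sketch of the locking methods in use, assuming the
        documented Ndb_cluster_connection iteration API; the names of
        the surrounding function and variables are illustrative.

          #include <NdbApi.hpp>

          unsigned count_ndb_objects(Ndb_cluster_connection &conn) {
              unsigned count = 0;
              conn.lock_ndb_objects();    // const as of this fix
              for (const Ndb *ndb = conn.get_next_ndb_object(nullptr);
                   ndb != nullptr;
                   ndb = conn.get_next_ndb_object(ndb))
                  ++count;                // inspect each Ndb object
              conn.unlock_ndb_objects();  // const as of this fix
              return count;
          }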

      * Cluster API: When the management server was restarted
        while running an MGM API application that continuously monitored
        events, subsequent events were not reported to the application,
        with timeouts being returned indefinitely instead of an error.
        This occurred because sockets for event listeners were not closed
        when restarting mgmd. This is fixed by ensuring that event
        listener sockets are closed when the management server shuts
        down, causing applications using functions such as
        ndb_logevent_get_next() to receive a read error following the
        restart. (Bug #19474782)
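
        A minimal MGM API sketch of the kind of monitoring loop this
        fix affects; the connect string and event filter are
        placeholders. After the fix, a management server restart
        surfaces as a read error (return value below 0) rather than an
        endless stream of timeouts (return value 0).

          #include <mgmapi.h>
          #include <ndb_logevent.h>
          #include <cstdio>

          int main() {
              NdbMgmHandle h = ndb_mgm_create_handle();
              ndb_mgm_set_connectstring(h, "mgmhost:1186");
              if (ndb_mgm_connect(h, 0, 0, 0) < 0) return 1;

              int filter[] = { 15, NDB_MGM_EVENT_CATEGORY_BACKUP, 0 };
              NdbLogEventHandle le =
                  ndb_mgm_create_logevent_handle(h, filter);

              struct ndb_logevent event;
              for (;;) {
                  int r = ndb_logevent_get_next(le, &event, 1000 /*ms*/);
                  if (r > 0)
                      std::printf("event type %d\n", event.type);
                  else if (r == 0)
                      continue;   // timeout: keep polling
                  else
                      break;      // read error: mgmd has gone away
              }
              ndb_mgm_destroy_logevent_handle(&le);
              ndb_mgm_destroy_handle(&h);
              return 0;
          }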

      * Cluster API: To process incoming signals, a thread which
        wants to act as a receiver must acquire polling rights from the
        transporter layer. This can be requested and assigned to a
        separate receiver thread, or each client thread can take the
        receiver role when it is waiting for a result.  When the thread
        acting as poll owner receives a sufficient amount of data, it
        releases locks on any other clients taken while delivering
        signals to them. This could make them runnable again, and the
        operating system scheduler could decide that it was time to wake
        them up, which happened at the expense of the poll owner threads,
        which were in turn excluded from the CPU while still holding
        polling rights on it. After this fix, polling rights are released
        by a thread before unlocking and signalling other threads. This
        makes polling rights available for other threads that are
        actively executing on this CPU.  This change increases
        concurrency when polling receiver data, which should also reduce
        latency for clients waiting to be woken up. (Bug #83129)
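
        A generic sketch (not the transporter code itself) of the
        ordering this fix adopts: release the contended resource
        before waking other threads, so a woken thread can actually
        acquire it and run.

          #include <condition_variable>
          #include <mutex>

          std::mutex poll_right;   // stands in for the polling rights
          std::mutex m;
          std::condition_variable cv;
          bool data_ready = false;

          void receiver_cycle() {
              poll_right.lock();   // act as poll owner
              // ... receive data, deliver signals to client buffers ...
              { std::lock_guard<std::mutex> lk(m); data_ready = true; }
              poll_right.unlock(); // release polling rights first...
              cv.notify_all();     // ...then wake the client threads,
                                   // one of which can now become the
                                   // poll owner without blocking
          }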

      * Cluster API: libndbclient and libmysqlclient exported
        conflicting symbols, resulting in a segmentation fault in
        debug builds on Linux. To fix this issue, the conflicting
        symbols in libndbclient are no longer publicly visible. Due to
        this change, the version number for libndbclient has been
        raised from 6.0.0 to 6.1.0. (Bug #83093, Bug #24707521)
        References: See also: Bug #80352, Bug #22722555.

      * Cluster API: When NDB schema object ownership checks are
        enabled, the schema objects used by a given NdbTransaction are
        checked to make sure that they belong to the NdbDictionary
        owned by the same connection; an attempt to create an
        NdbOperation or NdbIndexScanOperation on a table or index not
        belonging to that connection fails. This fix corrects a
        resource leak which occurred when the operation object to be
        created was allocated before checking schema object ownership
        and subsequently not released when the object creation failed.
        (Bug #81949, Bug #23623978) References: See also: Bug #81945,
        Bug #23623251.

      * Cluster API: NDB API objects are allocated in the context
        of an Ndb (
        object, or of an NdbTransaction
        object which is itself owned by an Ndb object. When a given Ndb
        object is destroyed, all remaining NdbTransaction objects are
        terminated, and all NDB API objects related to this Ndb object
        should be released at this time as well. It was found that,
        when unclosed NdbTransaction objects remained at the time their
        parent Ndb object was destroyed, objects allocated from those
        NdbTransaction objects could leak. (However, the NdbTransaction
        objects themselves did not leak.) While it is advisable (and,
        indeed, recommended) to close an NdbTransaction explicitly as
        soon as its lifetime ends, the destruction of the parent Ndb
        object should be sufficient to release whatever objects are
        dependent on it. Now in cases such as described previously, the
        Ndb destructor checks to ensure that all objects derived from a
        given Ndb instance are truly released. (Bug #81945, Bug
        #23623251)
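
        A minimal sketch of the recommended lifetime discipline: close
        each NdbTransaction explicitly instead of relying on the Ndb
        destructor (which, since this fix, releases dependent objects
        in any case). The function name is illustrative.

          #include <NdbApi.hpp>

          int run_work(Ndb *ndb) {
              NdbTransaction *trans = ndb->startTransaction();
              if (trans == nullptr) return -1;
              // ... define operations and execute the transaction ...
              ndb->closeTransaction(trans);  // release before ~Ndb()
              return 0;
          }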

      * Cluster API: In some of the NDB API example programs
        included with the MySQL Cluster distribution, ndb_end() was
        called prior to calling the Ndb_cluster_connection
        destructor. This caused a segmentation fault in debug
        builds on all platforms. The example programs affected have also
        been extensively revised and refactored. See NDB API Examples
        for more information. (Bug #80352, Bug #22722555) References:
        See also: Bug #83093, Bug #24707521.
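
        A minimal sketch of the correct ordering: ndb_end() is called
        only after the Ndb_cluster_connection (and every Ndb object)
        has been destroyed. The connect string is a placeholder.

          #include <NdbApi.hpp>

          int main() {
              ndb_init();
              {
                  Ndb_cluster_connection conn("mgmhost:1186");
                  if (conn.connect(4, 5, 1) == 0) {
                      // ... create and use Ndb objects here ...
                  }
              }   // destructor runs while the API is still initialized
              ndb_end(0);  // only after all NDB API objects are gone
              return 0;
          }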

On behalf of the Oracle MySQL RE Team
Balasubramanian Kandasamy
