List: Announcements
From: Balasubramanian Kandasamy
Date: February 5 2016 8:03am
Subject: MySQL Cluster 7.5.0 has been released (part 2/3)
[This is part 2 of the announcement]

      * Important Change; Cluster API: The following NDB API
        methods were not actually implemented and have been removed from
        the sources:

           + Datafile methods: getNode() and getFileNo()

           + Undofile methods: getNode() and getFileNo()

           + Table methods: getObjectType() and setObjectType()

      * A serious regression was inadvertently introduced in
        MySQL Cluster NDB 7.4.8 whereby local checkpoints, and thus
        restarts, often took much longer than expected. This occurred
        because the setting for MaxDiskWriteSpeedOwnRestart was ignored
        during restarts, and the value of a different disk write speed
        parameter, whose default is much lower than the default for
        MaxDiskWriteSpeedOwnRestart, was used instead. This issue
        affected restart times and performance only and did not have any
        impact on normal operations. (Bug #22582233)

      * The epoch for the latest restorable checkpoint provided
        in the cluster log as part of its reporting for EventBufferStatus
        events (see MySQL Cluster: Messages in the Cluster Log) was
        not well defined and thus unreliable; depending on various
        factors, the reported epoch could be the one currently being
        consumed, the one most recently consumed, or the next one queued
        for consumption. This fix ensures that the latest restorable
        global checkpoint is always regarded as the one that was most
        recently completely consumed by the user, and thus that it was
        the latest restorable global checkpoint that existed at the time
        the report was generated. (Bug #22378288)

      * Added the --ndb-allow-copying-alter-table option for
        mysqld. Setting this option (or the equivalent system variable
        ndb_allow_copying_alter_table) to OFF keeps ALTER TABLE
        statements from performing copying operations. The default value
        is ON. (Bug #22187649) References: See also Bug #17400320.
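        As an illustration of the entry above, the new behavior can be
        enabled at runtime or in the server configuration file (a
        sketch; only the option and variable names come from the entry
        itself):

        ```sql
        -- Reject copying ALTER TABLE operations on NDB tables at runtime;
        -- the equivalent my.cnf line is: ndb-allow-copying-alter-table=OFF
        SET GLOBAL ndb_allow_copying_alter_table = OFF;
        ```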

      * Attempting to create an NDB
        table having greater than the maximum supported combined width
        for all BIT columns (4096) caused data node failure when these
        columns were defined with COLUMN_FORMAT DYNAMIC. (Bug #21889267)

      * Creating a table with the maximum supported number of
        columns (512) all using COLUMN_FORMAT DYNAMIC led to data node
        failures. (Bug #21863798)

      * In a MySQL Cluster with multiple LDM instances, all
        instances wrote to the node log, even inactive instances on other
        nodes. During restarts, this caused the log to be filled with
        messages from other nodes, such as the messages shown here:

        2015-06-24 00:20:16 [ndbd] INFO     -- We are adjusting Max Disk
        Write Speed, a restart is ongoing now ...
        2015-06-24 01:08:02 [ndbd] INFO     -- We are adjusting Max Disk
        Write Speed, no restarts ongoing anymore

        Now this logging is performed only by the active LDM instance.
        (Bug #21362380)

      * Backup block states were reported incorrectly during
        backups. (Bug #21360188) References: See also Bug #20204854, Bug

      * For a timeout in GET_TABINFOREQ while executing a CREATE
        INDEX statement, mysqld returned Error 4243 (Index not found)
        instead of the expected Error 4008 (Receive from NDB failed).
        The fix for this bug also fixes similar timeout issues for a
        number of other signals that are sent to the DBDICT kernel block
        as part of DDL operations, including ALTER_TAB_REQ,
        CREATE_INDX_REQ, WAIT_GCP_REQ, DROP_TAB_REQ, and LIST_TABLES_REQ,
        as well as several internal functions used in handling NDB
        schema operations. (Bug #21277472) References: See also Bug

      * Previously, multiple send threads could be invoked for
        handling sends to the same node; these threads then competed for
        the same send lock. While the send lock blocked the additional
        send threads, work threads could be passed to other nodes.  This
        issue is fixed by ensuring that new send threads are not
        activated while there is already an active send thread assigned
        to the same node. In addition, a node already having an active
        send thread assigned to it is no longer visible to other, already
        active, send threads; that is, such a node is no longer added to the
        node list when a send thread is currently assigned to it. (Bug
        #20954804, Bug #76821)

      * Queueing of pending operations when the redo log was
        overloaded (DefaultOperationRedoProblemAction
        API node configuration parameter) could lead to timeouts when
        data nodes ran out of redo log space (P_TAIL_PROBLEM errors). Now
        when the redo log is full, the node aborts requests instead of
        queuing them. (Bug #20782580) References: See also Bug #20481140.
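        For reference, the queuing behavior mentioned above is set per
        API node; a config.ini sketch (the QUEUE and ABORT values are
        assumed from the parameter's documented form, and are not
        stated in this entry):

        ```ini
        # config.ini, API node section: abort operations immediately when
        # the redo log is overloaded, rather than queuing them
        [mysqld]
        DefaultOperationRedoProblemAction = ABORT
        ```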

      * An NDB event buffer can be used with an Ndb object to
        subscribe to table-level row change event streams. Users
        subscribe to an existing event; this causes the data nodes to
        start sending event data signals (SUB_TABLE_DATA) and epoch
        completion signals (SUB_GCP_COMPLETE) to the Ndb object.
        SUB_GCP_COMPLETE_REP signals can arrive for execution in a
        concurrent receiver thread before completion of the internal
        method call used to start a subscription. Execution of
        SUB_GCP_COMPLETE_REP signals depends on the total number of SUMA
        buckets (sub data streams), but this may not yet have been set,
        leading to the present issue, when the counter used for tracking
        the SUB_GCP_COMPLETE_REP signals (TOTAL_BUCKETS_INIT) was found
        to be set to erroneous values. Now TOTAL_BUCKETS_INIT is tested
        to be sure it has been set correctly before it is used. (Bug
        #20575424) References: See also Bug #20561446, Bug #21616263.

      * NDB
        statistics queries could be delayed by the index statistics
        error delay (default 60 seconds) when the index that was queried
        had been marked with an internal error. The same underlying
        issue could also cause such queries to hang when executed
        against an NDB table having multiple indexes where an internal
        error occurred on one or more but not all indexes. Now in such
        cases, any existing statistics are returned immediately, without
        waiting for any additional statistics to be discovered. (Bug
        #20553313, Bug #20707694, Bug #76325)

      * Memory allocated when obtaining a list of tables or
        databases was not freed afterward. (Bug #20234681, Bug #74510)
        References: See also Bug #18592390, Bug #72322.

      * Added the BackupDiskWriteSpeedPct data
        node parameter. Setting this parameter causes the data node to
        reserve a percentage of its maximum write speed (as determined
        by the value of MaxDiskWriteSpeed) for use in local checkpoints
        while performing a backup. BackupDiskWriteSpeedPct is
        interpreted as a percentage which can be set between 0 and 90
        inclusive, with a default value of 50. (Bug #20204854)
        References: See also Bug #21372136.
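        A config.ini sketch of the new parameter (the value 60 is
        illustrative; only the 0-90 range and the default of 50 come
        from the entry above):

        ```ini
        # config.ini, [ndbd default]: during a backup, reserve 60% of the
        # configured maximum disk write speed for local checkpoints
        [ndbd default]
        BackupDiskWriteSpeedPct = 60
        ```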

      * After restoring the database schema from backup using
        ndb_restore, auto-discovery of restored tables in transactions
        having multiple statements did not work correctly, resulting in
        Deadlock found when trying to get lock; try restarting
        transaction errors. This issue was encountered both in the mysql
        client, as well as when such transactions were executed by
        application programs using Connector/J and possibly other MySQL
        APIs. Prior to upgrading, this issue can be worked around by
        restarting SQL nodes following the restore operation, before
        executing any other statements. (Bug #18075170)

      * When using ndb_mgm STOP -f to force a node shutdown, even
        when this triggered a complete shutdown of the cluster, it was
        possible to lose data if a sufficient number of nodes were shut
        down, triggering a cluster shutdown, and the timing was such
        that SUMA handovers had been made to nodes already in the
        process of shutting down. (Bug #17772138)

      * When using a sufficiently large batch size together with
        the default value for sort_buffer_size, executing SELECT * FROM
        ... ORDER BY transid with multiple concurrent conflicting or
        deadlocked transactions, each transaction having several pending
        operations, caused the SQL node where the query was run to fail.
        (Bug #16731538, Bug #67596)

      * The ndbinfo.config_params table is now read-only.
        (Bug #11762750, Bug #55383)
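        Since ndbinfo.config_params is now read-only, it can only be
        queried; for example (the column name here is assumed from the
        ndbinfo schema, and is not given in the entry):

        ```sql
        -- List configuration parameter names; an UPDATE or DELETE against
        -- this table now fails with an error
        SELECT param_name FROM ndbinfo.config_params ORDER BY param_name;
        ```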

      * NDB failed during a node restart due to the status of the
        current local checkpoint being set but not as active, even though
        it could have other states under such conditions. (Bug #78780,
        Bug #21973758)

      * ndbmtd checked for signals being sent only after a full
        cycle in run_job_buffers, which is performed for all job buffer
        inputs. Now this is done as part of run_job_buffers itself, which
        avoids executing for extended periods of time without sending to
        other nodes or flushing signals to other threads. (Bug #78530,
        Bug #21889088)

      * When attempting to enable index statistics, creation of
        the required system tables, events and event subscriptions often
        fails when multiple mysqld processes using index statistics are
        started concurrently in conjunction with starting, restarting, or
        stopping the cluster, or with node failure handling. This is
        normally recoverable, since the affected mysqld process or
        processes can (and do) retry these operations shortly thereafter.
        For this reason, such failures are no longer logged as warnings,
        but merely as informational events.  (Bug #77760, Bug #21462846)

      * It was possible to end up with a lock on the send buffer
        mutex when send buffers became a limiting resource, due either to
        insufficient send buffer resource configuration, problems with
        slow or failing communications such that all send buffers became
        exhausted, or slow receivers failing to consume what was sent. In
        this situation worker threads failed to allocate send buffer
        memory for signals, and attempted to force a send in order to
        free up space, while at the same time the send thread was busy
        trying to send to the same node or nodes. All of these threads
        competed for taking the send buffer mutex, which resulted in the
        lock already described, reported by the watchdog as Stuck in
        Send.  This fix is made in two parts, listed here:

          1. The send thread no longer holds the global send thread mutex
          while getting the send buffer mutex; it now releases the global
          mutex prior to locking the send buffer mutex. This keeps worker
          threads from getting stuck in send in such cases.

          2. Locking of the send buffer mutex done by the send threads
          now uses a try-lock. If the try-lock fails, the node to make
          the send to is reinserted at the end of the list of send nodes
          in order to be retried later. This removes the Stuck in Send
          condition for the send threads.  (Bug #77081, Bug #21109605)
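        The second part of the fix is a try-lock-and-requeue pattern. A
        minimal sketch in Python (illustrative only; the actual fix is
        in the NDB C++ send threads, and every name below is invented):

        ```python
        from collections import deque


        def service_send_nodes(nodes, buffer_locks, send_fn, max_passes=1000):
            """Service each node's send queue without ever blocking on a
            busy send buffer mutex, mirroring the fix described above:
            if a node's buffer lock cannot be taken immediately, the node
            is reinserted at the end of the queue and retried later."""
            queue = deque(nodes)
            sent = []
            passes = 0
            while queue and passes < max_passes:
                passes += 1
                node = queue.popleft()
                lock = buffer_locks[node]
                if lock.acquire(blocking=False):  # try-lock: never block
                    try:
                        send_fn(node)
                        sent.append(node)
                    finally:
                        lock.release()
                else:
                    queue.append(node)  # buffer busy: requeue, retry later
            return sent
        ```

        A thread holding one node's buffer lock thus delays only that
        node's sends; the servicing thread keeps making progress on the
        others instead of getting "Stuck in Send".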

      * Disk Data: A unique index on a column of an NDB
        table is implemented with an associated internal ordered index,
        used for scanning. While dropping an index, this ordered index
        was dropped first, followed by the drop of the unique index
        itself. This meant that, when the drop was rejected due to (for
        example) a constraint violation, the statement was rejected but
        the associated ordered index remained deleted, so that any
        subsequent operation using a scan on this table failed.  We fix
        this problem by causing the unique index to be removed first,
        before removing the ordered index; removal of the related ordered
        index is no longer performed when removal of a unique index
        fails. (Bug #78306, Bug #21777589)

      * Cluster Replication: While the binary log injector thread
        was handling failure events, it was possible for all NDB tables
        to be left indefinitely in read-only mode. This was due to a
        race condition between the binlog injector thread and the
        utility thread handling events on the ndb_schema table, and to
        the fact that, when handling failure events, the binlog injector
        thread places all NDB tables in read-only mode until all such
        events are handled and the thread restarts itself. When the
        binlog injector thread receives a group of one or more failure
        events, it drops all other existing event operations and expects
        no more events from the utility thread until it has handled all
        of the failure events and then restarted itself. However, it was
        possible for the utility thread to continue attempting binary
        log setup while the injector thread was handling failures, and
        thus to attempt to create the schema distribution tables as well
        as event subscriptions on these tables. If the creation of these
        tables and event subscriptions occurred during this time, the
        binlog injector thread's expectation that there were no further
        event operations was never met; thus, the injector thread never
        restarted, and NDB tables remained in read-only mode as
        described previously. To fix this problem, the Ndb object that
        handles schema events is now definitely dropped once the
        ndb_schema table drop event is handled, so that the utility
        thread cannot create any new events until after the injector
        thread has restarted, at which time a new Ndb object for
        handling schema events is created. (Bug #17674771, Bug
        #19537961, Bug #22204186, Bug #22361695)

* To be continued in part 3....
