This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.3 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.44 (see Section C.1.3, “Changes in MySQL 5.1.44 (04 February 2010)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Cluster API: It is now possible to determine, using the ndb_desc utility or the NDB API, which data nodes contain replicas of which partitions. For ndb_desc, a new --extra-node-info option causes this information to be included in its output. A new method, NdbDictionary::Object::Table::getFragmentNodes(), has been added to the NDB API for obtaining this information programmatically. (Bug#51184)
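As a brief illustrative sketch (the database and table names here are placeholders; the new option name is as given above, and -p / -d are the usual ndb_desc switches for partition information and database selection):

```shell
# Show which data nodes hold replicas of each partition of table t1
# in database test; --extra-node-info adds the node details to the
# partition listing produced by -p.
ndb_desc -d test t1 -p --extra-node-info
```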
Formerly, the REPORT and DUMP commands returned output to all ndb_mgm clients connected to the same MySQL Cluster. Now, these commands return their output only to the ndb_mgm client that actually issued the command. (Bug#40865)
Cluster Replication: MySQL Cluster Replication now supports attribute promotion and demotion for row-based replication between columns of different but similar types on the master and the slave. For example, it is possible to promote an INT column on the master to a BIGINT column on the slave, and to demote a TEXT column to a VARCHAR column.

The implementation of type demotion distinguishes between lossy and non-lossy type conversions; their use on the slave can be controlled by setting the slave_type_conversions global server system variable.
For more information about attribute promotion and demotion for row-based replication in MySQL Cluster, see Attribute promotion and demotion (MySQL Cluster). (Bug#47163, Bug#46584)
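As an illustrative sketch of enabling these conversions on the slave (the ALL_LOSSY and ALL_NON_LOSSY values are those documented for slave_type_conversions; the connection options shown are placeholders):

```shell
# On the slave: permit only non-lossy conversions (e.g. INT -> BIGINT).
mysql -u root -e "SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';"

# Permit both lossy and non-lossy conversions (e.g. TEXT -> VARCHAR).
mysql -u root -e "SET GLOBAL slave_type_conversions = 'ALL_LOSSY,ALL_NON_LOSSY';"
```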
Bugs fixed:
If a node or cluster failure occurred while mysqld was scanning the ndb.ndb_schema table (which it does when attempting to connect to the cluster), insufficient error handling could lead to a crash by mysqld in certain cases. This could happen in a MySQL Cluster with a great many tables, when trying to restart data nodes while one or more mysqld processes were restarting. (Bug#52325)
After running a mixed series of node and system restarts, a system restart could hang or fail altogether. This was caused by setting the value of the newest completed global checkpoint too low for a data node performing a node restart, which led to the node reporting incorrect GCI intervals for its first local checkpoint. (Bug#52217)
When performing a complex mix of node restarts and system restarts, the node that was elected as master sometimes required optimized node recovery due to missing REDO information. When this happened, the node crashed with Failure to recreate object ... during restart, error 721 (because the DBDICT restart code was run twice). Now when this occurs, node takeover is executed immediately, rather than being made to wait until the remaining data nodes have started. (Bug#52135)
See also Bug#48436.
The redo log protects itself from being filled up by periodically checking how much space remains free. If insufficient redo log space is available, it sets the state TAIL_PROBLEM, which results in transactions being aborted with error code 410 (out of redo log). However, this state was not set following a node restart, which meant that if a data node had insufficient redo log space following a node restart, it could crash a short time later with Fatal error due to end of REDO log. Now, this space is checked during node restarts. (Bug#51723)
The output of the ndb_mgm client REPORT BACKUPSTATUS command could sometimes contain errors due to uninitialized data. (Bug#51316)
A GROUP BY query against NDB tables sometimes did not use any indexes unless the query included a FORCE INDEX option. With this fix, indexes are used by such queries (where otherwise possible) even when FORCE INDEX is not specified. (Bug#50736)
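For illustration, the kind of hint that was previously required might look like this (the table, column, and index names are hypothetical; with the fix, the same query without the hint can use the index where possible):

```shell
# Before the fix, forcing the index was needed for the GROUP BY
# query on an NDB table to use it; afterward, the plain query
# "SELECT b, COUNT(*) FROM test.t1 GROUP BY b" can use it as well.
mysql -e "SELECT b, COUNT(*) FROM test.t1 FORCE INDEX (ix_b) GROUP BY b;"
```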
The ndb_mgm client sometimes inserted extra prompts within the output of the REPORT MEMORYUSAGE command. (Bug#50196)
Issuing a command in the ndb_mgm client after it had lost its connection to the management server could cause the client to crash. (Bug#49219)
The ndb_print_backup_file utility failed to function, due to a previous internal change in the NDB code. (Bug#41512, Bug#48673)
When the MemReportFrequency configuration parameter was set in config.ini, the ndb_mgm client REPORT MEMORYUSAGE command printed its output multiple times. (Bug#37632)
ndb_mgm -e "... REPORT ..." did not write any output to stdout.

The fix for this issue also prevents the cluster log from being flooded with INFO messages when DataMemory usage reaches 100%, and ensures that when the usage is decreased, an appropriate message is written to the cluster log. (Bug#31542, Bug#44183, Bug#49782)
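For reference, typical noninteractive invocations of this form (the node ID 2 used below is a placeholder) that should now print their reports to stdout:

```shell
# Report memory usage for all data nodes from the command line:
ndb_mgm -e "ALL REPORT MEMORYUSAGE"

# Report backup status for a single data node (node ID 2):
ndb_mgm -e "2 REPORT BACKUPSTATUS"
```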
Replication: Metadata for GEOMETRY fields was not properly stored by the slave in its definitions of tables. (Bug#49836)
See also Bug#48776.
Replication: Column length information generated by InnoDB did not match that generated by MyISAM, which caused invalid metadata to be written to the binary log when trying to replicate BIT columns. (Bug#49618)
Disk Data: Inserts of blob column values into a MySQL Cluster Disk Data table that exhausted the tablespace resulted in misleading no such tuple error messages rather than the expected tablespace full error.
This issue appeared similar to Bug#48113, but had a different underlying cause. (Bug#52201)
Disk Data: The error message returned after attempting to execute ALTER LOGFILE GROUP on a nonexistent logfile group did not indicate the reason for the failure. (Bug#51111)
Cluster Replication: The --ndb-log-empty-epochs option did not work correctly. (Bug#49559)
Cluster API: When reading blob data with lock mode LM_SimpleRead, the lock was not upgraded as expected. (Bug#51034)
In rare cases, if a thread was interrupted during a FLUSH PRIVILEGES operation, a debug assertion occurred later due to improper diagnostic area setup. In addition, a KILL operation could cause a console error message referring to a diagnostic area state without first ensuring that the state existed. (Bug#33982)

