This is a new Beta development release, fixing recently discovered bugs in previous MySQL Cluster NDB 6.3 releases.
Obtaining MySQL Cluster NDB 6.3. This is a source-only release, which you must compile and install using the instructions found in Section 2.3, “MySQL Installation Using a Source Distribution”, and in Section 17.2.1, “MySQL Cluster Multi-Computer Installation”. You can download the GPL source tarball from the MySQL FTP site at ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/.
This Beta release incorporates all bugfixes and changes made in the previous MySQL Cluster NDB 6.3 release, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.23 (see Section C.1.29, “Changes in MySQL 5.1.23 (29 January 2008)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Cluster API: Important Change: Because NDB_LE_MemoryUsage.page_size_kb shows memory page sizes in bytes rather than kilobytes, it has been renamed to page_size_bytes. The name page_size_kb is now deprecated and thus subject to removal in a future release, although it currently remains supported for reasons of backward compatibility. See The Ndb_logevent_type Type for more information about NDB_LE_MemoryUsage. (Bug#30271)
ndb_restore now supports basic attribute promotion; that is, data from a column of a given type can be restored to a column using a “larger” type. For example, Cluster backup data taken from a SMALLINT column can be restored to a MEDIUMINT, INT, or BIGINT column. For more information, see Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”.
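For illustration only (the table and column names here are hypothetical), attribute promotion means that a backup taken from a table defined as in the first statement below can be restored by ndb_restore into a table defined as in the second:

    -- Table as it existed when the backup was taken
    CREATE TABLE t1 (id INT PRIMARY KEY, qty SMALLINT) ENGINE=NDBCLUSTER;

    -- Table on the target cluster: qty has been promoted to a larger type,
    -- and the backed-up t1 data can be restored into it with ndb_restore
    CREATE TABLE t1 (id INT PRIMARY KEY, qty BIGINT) ENGINE=NDBCLUSTER;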
Now only 2 local checkpoints are stored, rather than 3 as in previous MySQL Cluster versions. This lowers disk space requirements and reduces the size and number of redo log files needed.
The mysqld option --ndb-batch-size has been added. This option makes it possible to control the size of batches used for running transactions.
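As a sketch only (the value shown is illustrative, not a recommendation), the option can be set in an option file in the usual way:

    [mysqld]
    ndb-batch-size=32768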
Node recovery can now be done in parallel, rather than sequentially, which can result in much faster recovery times.
Persistence of NDB tables can now be controlled using the session variables ndb_table_temporary and ndb_table_no_logging. ndb_table_no_logging causes NDB tables not to be checkpointed to disk; ndb_table_temporary does the same, and in addition, no schema files are created.
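A minimal sketch of the intended usage (the table name is hypothetical): enable the variable for the current session, create the table, then restore the default so that subsequently created tables are checkpointed as usual.

    -- Create an NDB table that is not checkpointed to disk
    SET ndb_table_no_logging = 1;
    CREATE TABLE t_nolog (id INT PRIMARY KEY, c INT) ENGINE=NDBCLUSTER;
    SET ndb_table_no_logging = 0;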
OPTIMIZE TABLE can now be interrupted. This can be done, for example, by killing the SQL thread performing the OPTIMIZE operation.
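For example (the thread ID and table name shown are illustrative), the operation can be stopped from a second connection by killing the thread running it:

    -- Connection 1: start a potentially long-running operation
    OPTIMIZE TABLE t1;

    -- Connection 2: locate the thread running OPTIMIZE and interrupt it
    SHOW PROCESSLIST;
    KILL 27;    -- 27 is the Id shown for the OPTIMIZE thread;
                -- KILL QUERY 27 stops only the statement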
Bugs fixed:
Disk Data: Important Change: It is no longer possible on 32-bit systems to issue statements appearing to create Disk Data log files or data files greater than 4 GB in size. (Trying to create log files or data files larger than 4 GB on 32-bit systems led to unrecoverable data node failures; such statements now fail with NDB error 1515.) (Bug#29186)
Replication: The code implementing heartbeats did not check for possible errors in some circumstances; this could leave the dump thread hanging in its heartbeat wait loop even though the slave was no longer connected. (Bug#33332)
High numbers of insert operations, delete operations, or both could cause NDB error 899 (Rowid already allocated) to occur unnecessarily. (Bug#34033)
A periodic failure to flush the send buffer by the NDB TCP transporter could cause an unnecessary delay of 10 ms between operations. (Bug#34005)
DROP TABLE did not free all data memory. This bug was observed in MySQL Cluster NDB 6.3.7 only. (Bug#33802)
A race condition could occur (very rarely) when the release of a GCI was followed by a data node failure. (Bug#33793)
Some tuple scans caused the wrong memory page to be accessed, leading to invalid results. This issue could affect both in-memory and Disk Data tables. (Bug#33739)
A failure to initialize an internal variable led to sporadic crashes during cluster testing. (Bug#33715)
The server failed to properly reject creation of an NDB table having an unindexed AUTO_INCREMENT column. (Bug#30417)
Issuing an INSERT ... ON DUPLICATE KEY UPDATE statement concurrently with or following a TRUNCATE TABLE statement on an NDB table failed with NDB error 4350 (Transaction already aborted). (Bug#29851)
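An illustrative sequence of the kind that previously failed (the table and column names are hypothetical):

    CREATE TABLE t1 (id INT PRIMARY KEY, c INT) ENGINE=NDBCLUSTER;
    INSERT INTO t1 VALUES (1, 1);
    TRUNCATE TABLE t1;
    -- Before this fix, the following statement could fail with
    -- NDB error 4350 (Transaction already aborted)
    INSERT INTO t1 VALUES (1, 1) ON DUPLICATE KEY UPDATE c = c + 1;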
The Cluster backup process could not detect when there was no more disk space and instead continued to run until killed manually. Now the backup fails with an appropriate error when disk space is exhausted. (Bug#28647)
It was possible in config.ini to define cluster nodes having node IDs greater than the maximum allowed value. (Bug#28298)
Under some circumstances, a recovering data node did not use its own data, instead copying data from another node even when this was not required. This in effect bypassed the optimized node recovery protocol and caused recovery times to be unnecessarily long. (Bug#26913)
Cluster Replication: Consecutive DDL statements involving tables (CREATE TABLE, ALTER TABLE, and DROP TABLE) could be executed so quickly that previous DDL statements upon which they depended had not yet been written to the binary log. For example, if DROP TABLE foo was issued immediately following CREATE TABLE foo, the DROP statement could fail because the CREATE had not yet been recorded. (Bug#34006)
Cluster Replication: ndb_restore -e restored excessively large values to the ndb_apply_status table's epoch column when restoring to a MySQL Cluster version supporting micro-GCPs from an older version that did not support them. A workaround when restoring to MySQL Cluster releases supporting micro-GCPs previous to MySQL Cluster NDB 6.3.8 is to perform a 32-bit shift on the epoch column values to reduce them to their proper size, as sketched below. (Bug#33406)
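A minimal sketch of that workaround, assuming (as the note states) that the stored values are reduced by shifting them right 32 bits; verify the direction of the shift and the affected rows before applying it:

    -- Reduce over-large epoch values restored by ndb_restore -e
    UPDATE mysql.ndb_apply_status SET epoch = epoch >> 32;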
Cluster API: Transactions containing inserts or reads would hang during NdbTransaction::execute() calls made from NDB API applications built against a MySQL Cluster version without micro-GCP support that accessed a later version supporting micro-GCPs. This issue was observed while upgrading from MySQL Cluster NDB 6.1.23 to MySQL Cluster NDB 6.2.10, when an API application built against the earlier version attempted to access a data node already running the later version, even after micro-GCPs had been disabled by setting TimeBetweenEpochs equal to 0. (Bug#33895)
Cluster API: When reading a BIT(64) value using NdbOperation::getValue(), 12 bytes were written to the buffer rather than the expected 8 bytes. (Bug#33750)