MySQL Cluster NDB 6.3.31 was withdrawn shortly after release, due to Bug#51027. Users should upgrade to MySQL Cluster NDB 6.3.31a, which fixes this issue.
This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.3 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.41 (see Section C.1.7, “Changes in MySQL 5.1.41 (05 November 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Important Change: The maximum allowed value of the ndb_autoincrement_prefetch_sz system variable has been increased from 256 to 65536. (Bug#50621)
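For example, a value above the former limit of 256 can now be set; this is a minimal sketch, and the value 1024 used here is purely illustrative:

    SET GLOBAL ndb_autoincrement_prefetch_sz = 1024;
    SHOW VARIABLES LIKE 'ndb_autoincrement_prefetch_sz';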
Cluster Replication: Because no timestamp is available for delete operations, a delete using NDB$MAX() is actually processed as NDB$OLD. However, because this is not optimal for some use cases, NDB$MAX_DELETE_WIN() has been added as a conflict resolution function: if the “timestamp” column value for a row coming from the master that adds or updates an existing row is higher than that on the slave, the change is applied (as with NDB$MAX()); however, delete operations are treated as always having the higher value. See NDB$MAX_DELETE_WIN(column_name) for more information. (Bug#50650)
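As a rough sketch of how the new function might be selected (the mysql.ndb_replication table must already exist on the master, and the database, table, and timestamp column names used here are hypothetical):

    INSERT INTO mysql.ndb_replication
        (db, table_name, server_id, binlog_type, conflict_fn)
    VALUES
        ('mydb', 'mytbl', 0, NULL, 'NDB$MAX_DELETE_WIN(ts)');

As with NDB$MAX(), the named column (ts in this sketch) is the unsigned integer “timestamp” column used for conflict detection on that table.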
Bugs fixed:
Setting BuildIndexThreads
greater than 1 with
more than 31 ordered indexes caused node and system restarts to
fail.
(Bug#50266)
Dropping unique indexes in parallel while they were in use could cause node and cluster failures. (Bug#50118)
When setting the LockPagesInMainMemory
configuration parameter failed, only the error Failed
to memlock pages... was returned. Now in such cases
the operating system's error code is also returned.
(Bug#49724)
If a query on an NDB
table compared
a constant string value to a column, and the length of the
string was greater than that of the column, condition pushdown
did not work correctly. (The string was truncated to fit the
column length before being pushed down.) Now in such cases, the
condition is no longer pushed down.
(Bug#49459)
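A hypothetical illustration of the affected pattern (the table and column names are invented, and condition pushdown is assumed to be enabled):

    SET engine_condition_pushdown = ON;

    CREATE TABLE t1 (c VARCHAR(4)) ENGINE=NDBCLUSTER;

    -- The string constant is longer than the column, so this comparison
    -- is now evaluated by mysqld instead of being pushed to the data nodes:
    EXPLAIN SELECT * FROM t1 WHERE c = 'abcdefgh';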
Performing intensive inserts and deletes in parallel with a high scan load could cause data node crashes due to a failure in the DBACC kernel block. This was because the check that determines when to perform bucket splits or merges considered only the first 4 scans. (Bug#48700)
During Start Phases 1 and 2, the STATUS
command sometimes (falsely) returned Not
Connected
for data nodes running
ndbmtd.
(Bug#47818)
When performing a DELETE
that
included a left join from an NDB
table, only the first matching row was deleted.
(Bug#47054)
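A minimal sketch of the affected statement shape (the tables and join condition here are hypothetical); all matching rows in the NDB table t1 should be deleted, not only the first:

    DELETE t1 FROM t1
        LEFT JOIN t2 ON t1.id = t2.id
    WHERE t2.id IS NULL;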
When setting LockPagesInMainMemory
, the
stated memory was not allocated when the node was started, but
rather only when the memory was used by the data node process
for other reasons.
(Bug#37430)
Trying to insert more rows than would fit into an
NDB
table caused data nodes to crash. Now in
such situations, the insert fails gracefully with error 633
Table fragment hash index has reached maximum
possible size.
(Bug#34348)
Disk Data: When a crash occurs due to a problem in Disk Data code, the currently active page list is printed to stdout (that is, in one or more ndb_nodeid_out.log files). One of these lists could contain an endless loop; this caused a printout that was effectively never-ending. Now in such cases, a maximum of 512 entries is printed from each list. (Bug#42431)
On Mac OS X or Windows, sending a SIGHUP
signal to the server or an asynchronous flush (triggered by
flush_time
) caused the server
to crash.
(Bug#47525)
The ARCHIVE
storage engine lost
records during a bulk insert.
(Bug#46961)
When using the ARCHIVE
storage
engine, SHOW TABLE STATUS
displayed incorrect
information for Max_data_length
,
Data_length
and
Avg_row_length
.
(Bug#29203)