This doesn't directly address your question, but from MySQL Cluster 7.0 onwards there is
a procedure for adding nodes on-line (with no loss of service or data); the steps are
explained in the white paper at
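For reference, the on-line procedure is roughly as follows. This is a sketch based on the "adding data nodes online" steps documented for MySQL Cluster 7.0; the node IDs (5, 6), config path, and table name `mydb.mytable` are placeholders, not taken from your setup:

```shell
# 1. Add the new data nodes to config.ini on the management host
#    (e.g. two new [ndbd] sections with NodeId=5 and NodeId=6).

# 2. Restart the management node so it re-reads config.ini:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --reload

# 3. Rolling restart of the EXISTING data nodes, one at a time,
#    so the cluster stays up throughout:
ndb_mgm -e "3 RESTART"
ndb_mgm -e "4 RESTART"

# 4. Start each NEW data node with an initial start:
ndbd --initial

# 5. Create a node group from the new nodes, then redistribute
#    existing table data across all node groups:
ndb_mgm -e "CREATE NODEGROUP 5,6"
mysql -e "ALTER ONLINE TABLE mydb.mytable REORGANIZE PARTITION"
```

Step 5's REORGANIZE PARTITION has to be run per table; until then, existing data stays on the original node groups and only new data uses the added nodes.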
> -----Original Message-----
> From: Anand Sriraman [mailto:indianapple89@stripped]
> Sent: 09 March 2010 06:18
> To: MySQL Cluster Mailing List
> Subject: Problem when adding new data nodes to MySQL Cluster
> Originally, I had 2 data nodes, 2 SQL nodes (installed on 2 machines -
> a data and SQL node per machine) and 1 Management node. This setup was
> working perfectly. Then I added 2 more data and SQL nodes to the cluster (on 2
> more machines, again with the same configuration), with one of the nodes
> also hosting the Management node.
> So now I have the following config:
> Node 1 - Mgmt, Data & SQL
> Node 2 - Data & SQL
> Node 3 - Data & SQL
> Node 4 - Data & SQL
> Before adding the new nodes we shut down the whole cluster, updated the
> config.ini file and then started the mgmt node with the --initial option. The
> first time we ran the Cluster with this new config, all the nodes started
> properly and we didn't have a problem. Then we restarted all 4 machines and
> started all the processes again.
> This time when we started the data nodes, they gave confirmation that
> Configuration was fetched properly from the Mgmt node. However, at the
> management node when we tried to run the show command, we got the
> following output:
> [root@spade1 ~]# ndb_mgmd --initial -f /var/lib/mysql-
> 2010-03-04 12:04:02 [MgmtSrvr] INFO -- NDB Cluster Management Server.
> mysql-5.1.39 ndb-7.0.9b
> 2010-03-04 12:04:02 [MgmtSrvr] INFO -- Reading cluster configuration
> from '/var/lib/mysql-cluster/config.ini'
> [root@spade1 ~]# ndb_mgm
> -- NDB Cluster -- Management Client --
> ndb_mgm> show
> Warning, event thread startup failed, degraded printouts as result,
> errno=115
> Connected to Management Server at: localhost:1186
> Got SIGPIPE!
> Could not get status
> When I looked up this errno 115 on Google, I got references to posts
> suggesting that maybe this was some kind of data node failure. What should
> I do in this case?
> P.S. When starting the data nodes right after installation (before
> restarting the machines), I may have forgotten to give the --initial
> option for the existing data nodes and only given it to the new data
> nodes. But as I didn't get a problem before the restart, I didn't think
> this was the issue. After that I've tried to give the --initial option
> to all the data nodes, but the management node still doesn't work
> properly.
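On the --initial point in your P.S.: note that the option means different things to the two binaries, which may be the source of the confusion. ndb_mgmd --initial discards the cached configuration and re-reads config.ini; ndbd --initial wipes that data node's on-disk files and performs an initial start, so giving it to all data nodes of a node group at once can lose that node group's data. A sketch of the usual full-restart sequence after editing config.ini (the config path is an example):

```shell
# On the management host: clear the config cache and re-read config.ini
ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini

# On each EXISTING data node: plain restart, which keeps its data
ndbd

# On each NEW data node only: initial start (the node comes up with an
# empty file system and copies data from the other node in its node group)
ndbd --initial

# Then verify from the management client:
ndb_mgm -e "SHOW"
```

If SHOW still fails with SIGPIPE/errno 115 after this, check that the data nodes can actually reach the management node on port 1186 and that no firewall is blocking the node hosts from each other.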