Originally, I had 2 data nodes, 2 SQL nodes (installed on 2 machines, one
data node and one SQL node per machine) and 1 management node. This setup was
working perfectly. Then I added 2 more data and SQL nodes to the cluster (on
2 more machines, again with the same configuration), with one of the nodes
also acting as the management node.
So now I have the following config:
Node 1 - Mgmt, Data & SQL
Node 2 - Data & SQL
Node 3 - Data & SQL
Node 4 - Data & SQL
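For reference, my config.ini looks roughly like the sketch below. The hostnames and DataDir are placeholders, not my real values, and I've trimmed it to the node definitions:

```ini
# Sketch of config.ini for the 4-node layout above.
# Hostnames (node1..node4) and DataDir are placeholders.
[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
NodeId=1
HostName=node1
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=node1
[ndbd]
HostName=node2
[ndbd]
HostName=node3
[ndbd]
HostName=node4

[mysqld]
HostName=node1
[mysqld]
HostName=node2
[mysqld]
HostName=node3
[mysqld]
HostName=node4
```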
Before adding the new nodes we shut down the whole cluster, updated the
config.ini file, and then started the management node with the --initial
option. The first time we ran the cluster with this new configuration, all
the nodes connected properly and we didn't have a problem. Then we restarted
all 4 machines and started all the processes again.
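The startup order we followed was roughly this (paths are placeholders; this is just a sketch of the sequence, not the exact commands):

```
# On node 1: start the management server first, forcing it to
# re-read config.ini rather than its cached configuration
ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini

# On each of the 4 machines: start the data node process
# (--initial was meant to be given only on the first start
# after the config change)
ndbd

# On each of the 4 machines: start the SQL node
service mysqld start
```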
This time when we started the data nodes, they confirmed that the
configuration was fetched properly from the management node. However, when we
tried to run the show command on the management node, we got the following:
[root@spade1 ~]# ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini
2010-03-04 12:04:02 [MgmtSrvr] INFO -- NDB Cluster Management Server.
2010-03-04 12:04:02 [MgmtSrvr] INFO -- Reading cluster configuration
[root@spade1 ~]# ndb_mgm
-- NDB Cluster -- Management Client --
Warning, event thread startup failed, degraded printouts as result,
Connected to Management Server at: localhost:1186
Could not get status
When I looked this errno 115 up on Google, I found posts suggesting it might
be some kind of data node failure. What should I do in this case?
P.S. When starting the data nodes right after installation (before
restarting the machines), I may have forgotten to give the --initial
option to the existing data nodes and only given it to the new data nodes.
But since I didn't have a problem before the restart, I didn't think this was
the issue. Since then I've tried giving the --initial option to all data
nodes, but the management node still doesn't work properly.