Can you run
ndb_mgm -e "show" to check the status of the nodes?
Are you using the connection_pool parameter in the my.cnf of the mysqlds?
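(Context for why this question matters: with connection pooling, a single mysqld reserves several node ids, so open [mysqld] slots get consumed faster than the number of mysqld processes suggests. A minimal sketch, assuming a pool size of 4 and hypothetical management host names:)

```ini
# Hypothetical my.cnf fragment on one mysqld host.
# ndb-cluster-connection-pool makes this single mysqld claim 4 node ids
# from the [mysqld] slots in config.ini -- so 4 mysqlds at pool size 4
# would already reserve 16 slots, not 4.
[mysqld]
ndbcluster
ndb-connectstring = mgm1,mgm2     # hypothetical management hosts
ndb-cluster-connection-pool = 4
```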
On 05/05/11 22.10, "Johan Andersson" <johan@stripped> wrote:
>Did you also restart the data nodes after restarting the management
>server with --reload?
>Also, I do recommend you to upgrade to 7.1.10 or later.
>On 2011-05-05 22.02, csavino wrote:
>> I'm having an odd problem. I'm installing/configuring a new cluster,
>> ndb version 5.1.44-ndb-7.1.3. I have 2 management nodes connected to the
>> cluster, with 2 data nodes and 4 sqld nodes connected. I have 36 open
>> [mysqld] entries in the config.ini files with no hostnames or ids
>> specified. I'm trying to bring up a 5th sqld node and it won't connect to
>> the cluster. I'm getting the below error in the cluster log.
>> WARNING -- Allocate nodeid (0) failed. Connection from ip 10.44.9.40.
>> Returned error string "No free node id found for mysqld(API)."
>> 2011-05-05 13:45:35 [MgmtSrvr] INFO -- Mgmt server state: node id's
>> 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 connected but not reserved
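(For reference, "connected but not reserved" slots are open API slots that clients are attached to without a fixed id. A hedged sketch of what the relevant config.ini entries look like; the NodeId and HostName values here are hypothetical examples, not taken from the poster's config:)

```ini
# Hypothetical config.ini fragment. A slot with no HostName is "open"
# and can be claimed by any connecting mysqld or API client; a slot
# with HostName set can only be used from that host. After editing
# these, restart the management servers with --reload and then restart
# the data nodes so the new slot table takes effect.
[mysqld]                  # pinned slot: fixed id, bound to one host
NodeId = 21
HostName = 10.44.9.40

[mysqld]                  # open slot: any host, id assigned at connect
[mysqld]                  # another open slot
```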
>> Normally when I get this message I would think I needed to add mysqld
>> entries in the config file, but there are plenty available right now. I
>> have tried restarting the management nodes several times after updating the
>> config.ini file with specific ids that were not being used, but that
>> didn't help.
>> Does anyone have an idea what might be causing this?
>> Thanks for any help in advance,
>MySQL Cluster Mailing List
>For list archives: http://lists.mysql.com/cluster