I'm working with a Debian distribution and MySQL 5.0.
I tried to follow the official documentation to set up a cluster, but there
are a couple of things I can't understand.
I have 5 hosts: 1 management node (mgm), 2 SQL nodes (sql1, sql2) and 2 data
nodes (ndb1, ndb2).
After the initial configuration I tried to start up the cluster and everything
seems fine; the output of the SHOW command in the management client on the
mgm host is OK: all nodes are connected.
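For reference, this is roughly how I checked the status on the mgm host (the -e option just runs the command non-interactively instead of opening the management client shell):

```
# On the management node: list all nodes and their connection state
ndb_mgm -e show
```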
The next step was to create a database for some trials.
So I created a database named world on sql1 and imported the sample world.sql,
after changing the engine to NDB for all the tables.
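Concretely, what I did on sql1 was something like this (a sketch; the path to world.sql is hypothetical, and City, Country, CountryLanguage are the tables in the sample world database):

```sql
-- On sql1: create the database and load the sample data
CREATE DATABASE world;
USE world;
SOURCE /path/to/world.sql;  -- hypothetical path to the sample dump

-- Switch every table to the cluster storage engine
ALTER TABLE City ENGINE=NDBCLUSTER;
ALTER TABLE Country ENGINE=NDBCLUSTER;
ALTER TABLE CountryLanguage ENGINE=NDBCLUSTER;
```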
The next step was to create a database named world on sql2, and as soon as I
typed 'USE world' it picked up all the tables and updated the database.
Now if I create/delete tables or records on one SQL node, the change is
reflected on the other, which is fine.
Anyway, my questions are:
Is the data I just created stored on ndb1 and ndb2?
If so, why is there no trace of the world database on those hosts?
Finally, there is something in the documentation I found that I can't understand:
"The final thing you are likely to want to check is that all your SQL nodes
are actually working, so if you insert one row on one node, it does appear
on all the others. You can do this by logging in to each storage node and
inserting another row before selecting all rows".
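If I understand that paragraph correctly, the check would look something like this (a sketch; the City table and its columns are from the sample world database, and the inserted values are made up):

```sql
-- On sql1:
USE world;
INSERT INTO City (Name, CountryCode, District, Population)
VALUES ('Testville', 'ITA', 'TestDistrict', 1);

-- On sql2:
USE world;
SELECT * FROM City WHERE Name = 'Testville';
-- the row inserted on sql1 should show up here
```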
Can someone suggest how to do this, given that I can find no trace of my data
on the data nodes?
Thanks in advance.