List: Cluster
From: Devananda  Date: August 25 2004 4:58pm
Subject:Re: Basic questions
Paul Gardner wrote:

>Hi all
>I have a knowledge gap in how the cluster works - reading the
>whitepaper, it implies (as do the other docs) that no changes are
>required to the application to support the cluster and its failover.
>But the whitepaper also says that as well as storage nodes, which I
>understand execute all the transactions, you also need MySQL Server
>nodes in addition, which is what the application connects to. The MySQL
>server nodes then transact with the storage node(s). If the MySQL
>server node fails, then surely the application does, after all, need to
>have failover logic built in, to push transactions to some other MySQL
>server, as the application cannot talk directly to the storage nodes?
>Is this right? Surely not?
You are correct - if the MySQL server node (API node) fails, and you are 
running multiple API nodes, then your application must be able to handle 
failing over to another API node. If a DB node fails, the cluster 
handles that internally and, aside from any in-flight transactions being 
aborted, it shouldn't cause any problems for the API nodes or your client.
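A minimal sketch of what such client-side failover might look like - the host names and the connect callback are purely illustrative assumptions, not anything MySQL Cluster provides itself. In a real application the callback would be a driver call such as mysql.connector.connect(host=...); here a stub stands in so the sketch is self-contained:

```python
# Hypothetical client-side failover across multiple MySQL server (API)
# nodes: try each node in turn and return the first live connection.

def connect_with_failover(nodes, connect_fn):
    """Attempt connect_fn(host) for each host; return the first success."""
    last_error = None
    for host in nodes:
        try:
            return connect_fn(host)
        except ConnectionError as exc:
            last_error = exc  # this node is down; try the next one
    raise last_error or ConnectionError("no API nodes reachable")

# Stub connect function standing in for a real MySQL client call.
def stub_connect(host):
    if host == "api2.example.com":      # pretend only api2 is up
        return f"connection to {host}"
    raise ConnectionError(f"{host} unreachable")

conn = connect_with_failover(
    ["api1.example.com", "api2.example.com"], stub_connect
)
```

The same loop could also be wrapped around each transaction rather than only the initial connect, so that a node failing mid-session is handled too.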

>Another question - I understand the storage nodes require large RAM (db
>* 2) and large CPU - but how powerful do these mysql server nodes need
>to be in terms of ram/cpu?
This really depends on the size of the tables you will be storing in the 
NDB engine. For a small table that handles many transactions, even a 
modern desktop computer would work, I believe. I have been testing on 
2.8GHz P4 Xeons with ATA133 7200 rpm drives and 1 GB of RAM, and 
everything seems to be fine. Although I cannot store a large table on 
such small machines, they handle smaller tables just fine. By a large 
table, I don't mean lots of rows - I mean total size (num rows * 
(row size + index size + overhead)).
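To make the sizing formula concrete, here is a small calculation using it. The byte figures (100-byte rows, 40 bytes of index, 16 bytes of overhead) are illustrative assumptions only, not NDB's actual per-row costs:

```python
# Rough estimate of total in-memory table size for NDB, per the
# formula above: num_rows * (row_size + index_size + overhead).

def ndb_table_size_bytes(num_rows, row_size, index_size, overhead):
    return num_rows * (row_size + index_size + overhead)

# e.g. 1 million rows, ~100-byte rows, ~40 bytes of index,
# ~16 bytes of assumed per-row overhead:
size = ndb_table_size_bytes(1_000_000, 100, 40, 16)   # 156 MB
ram_needed = size * 2  # the "db * 2" RAM rule of thumb mentioned earlier
```

So even a million-row table of small rows fits comfortably in 1 GB of RAM; it is wide rows and indexes, not row count, that drive the requirement up.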

Neopets, Inc