On Thu, 2 Dec 2004, Adrian Immler wrote:
> I have done some tests with MySQL Cluster v4.1.7 across 3 computers,
> all with 2.8 GHz CPUs and 512 MB of RAM. These systems were connected
> with gigabit ethernet over copper.
> One system (node1) was running mysqld and ndb_mgmd, and the other 2
> nodes (node2 and node3) were running ndbd with 500 MB of RAM assigned
> by ndb_mgmd. On node2 and node3 I turned off swap. I ran several tests
> with different mysqldumps: 2 data nodes with 2 copies (replicas), and
> 2 data nodes with one copy.
> I had 2 mysqldumps of these types:
> the first one was a database with about 780,000 inserted rows, 4 keys,
> lots of varchars and about 30 columns (file size: 226 MB);
> the second one was a database with about 2,800,000 inserted rows, 1 key
> and 9 columns (file size: 454 MB).
> The "table <abc> is full" error is the result of "only" 512 MB of RAM
> per node. SELECTs on those tables under NDB were also much slower than
> the same query on the same table (with twice the entries) in MyISAM.
You're saying that 226 MB of original data doesn't fit in 2 DB nodes
with 512 MB of RAM each .. and that's the size of the mysqldump text file, right?
It would be great if the MySQL guys could tell us how much RAM we
need on the DB nodes per megabyte of a plain-text dump.
Do we need more or less RAM on the DB nodes than the size of the mysql
text dump file?
Does this amount of RAM depend on the type of data we put into
the cluster?
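To put a rough number on it: in Cluster 4.1 a VARCHAR is stored at its
full declared width, not its actual data size, and with NoOfReplicas=2
on 2 data nodes each node holds a complete copy of everything. The
back-of-the-envelope calculation below is only a sketch, not an
official formula: the per-row overhead constants are approximations,
and `estimate_ndb_memory_mb` is a hypothetical helper, not part of any
MySQL tool.

```python
# Rough per-data-node RAM estimate for MySQL Cluster 4.1 (a sketch;
# the overhead constants are approximations, not exact values).

def estimate_ndb_memory_mb(rows, declared_row_bytes, ordered_indexes=0,
                           replicas=2, data_nodes=2):
    """Estimate MB of RAM needed on each ndbd node.

    declared_row_bytes must sum the *maximum* declared width of every
    column, because 4.1 stores VARCHARs at full declared length.
    """
    ROW_OVERHEAD = 16    # approx. per-row DataMemory overhead
    ORDERED_IDX = 10     # approx. per-row cost of each ordered index
    PK_HASH = 25         # approx. per-row IndexMemory for the PK hash

    data_bytes = rows * (declared_row_bytes + ROW_OVERHEAD
                         + ordered_indexes * ORDERED_IDX)
    index_bytes = rows * PK_HASH
    # With `replicas` copies spread over `data_nodes` nodes, each node
    # ends up holding total * replicas / data_nodes bytes.
    per_node = (data_bytes + index_bytes) * replicas / data_nodes
    return per_node / (1024 * 1024)

# 780,000 rows at, say, 250 declared bytes each (a guess for the first
# dump), 4 indexes, 2 replicas over 2 data nodes: each node holds a
# full copy, so the estimate already lands near the 512 MB ceiling
# before page and buffer overhead is counted.
print(round(estimate_ndb_memory_mb(780_000, 250, ordered_indexes=4)))
```

So yes, it depends heavily on the data types: a dump full of
half-empty VARCHAR(255) columns can need far more RAM in the cluster
than the dump file itself occupies on disk.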