> I calculate the row size as 99 bytes.
> mysql> select count(*) from gmargin;
> | count(*) |
> | 4355766 |
> 1 row in set (26.64 sec)
> and, as I understand, this sent all 4,355,766 rows over the network to the mysql
> server, so 400,355,766 bytes, right? does this seem like a reasonable time? I'm not sure
> how much time it is taking the mysql server to count up all the rows...?
Actually, only the primary key of each row is fetched. The limiting
factor here is that too few records are fetched at a time. We are
working on improving the fetch algorithm to allow more records to be
fetched in each "batch", thus utilizing the network bandwidth better.
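A rough back-of-the-envelope sketch of what this means for the data
volume (the 8-byte primary key size is my assumption for illustration;
your actual key size depends on the schema):

```python
# Estimate wire traffic for "select count(*)" when only primary keys
# are fetched, versus shipping whole rows.
rows = 4_355_766       # row count from the query above
row_size = 99          # bytes per row, as calculated in the quoted mail
pk_size = 8            # bytes per primary key -- assumed, not from the mail
elapsed = 26.64        # seconds, from the query above

full_row_bytes = rows * row_size   # if whole rows were shipped
pk_only_bytes = rows * pk_size     # primary keys only

print(f"full rows: {full_row_bytes / 1e6:.1f} MB")
print(f"pk only:   {pk_only_bytes / 1e6:.1f} MB")
print(f"effective: {pk_only_bytes / elapsed / 1e6:.2f} MB/s")
```

So the actual traffic is far below the network's capacity, which is
why the batch size, not bandwidth, is the bottleneck here.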
> 400MB/26.6s = 15MB/s or 120 Mbit.
> basically, we're interested in all the settings we can change to max out the
> performance on our cluster. is there any kind of optimization guide for the cluster?
In order to help you I will need to know more about the kind of
queries your application will be running; I assume you will not only
be doing "select count(*)". :)
MySQL Cluster is very good at primary key and unique index lookups,
where a hash index is used to find the record. There are also ordered
indexes available, which can speed up range scans and other non-unique
lookups considerably. So please give me some examples of the queries
you intend to run, and I'll advise on how to optimize them.
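To illustrate the difference (table and column names here are made up,
not taken from your schema):

```sql
-- Hypothetical table, just to show the two index types:
CREATE TABLE t1 (
  id INT PRIMARY KEY,    -- hash index: fast single-row lookups
  grp INT,
  val INT,
  INDEX grp_idx (grp)    -- ordered index: good for range scans
) ENGINE=NDBCLUSTER;

-- Resolved via the hash index on the primary key:
SELECT val FROM t1 WHERE id = 12345;

-- Uses the ordered index grp_idx:
SELECT val FROM t1 WHERE grp BETWEEN 10 AND 20;
```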
Magnus Svensson, Software Engineer
MySQL AB, www.mysql.com
Office: +46 709 164 491