List: Cluster
From: Jonas Oreland
Date: August 6 2004 12:59pm
Subject: Re: max_rows=4294967295 but "The table 'bench1' is full"
Gowrynath Sivaganeshamoorthy wrote:
> guys,
> 
> I got the following setup:
> 
> two nodes, both connected, up and running:
> 
> NDB> all status
> Node 2: Started (Version 3.5.0)
> 
> Node 3: Started (Version 3.5.0)
> 
> NDB>
> 
> both nodes got the following settings in my.cnf:
> 
> [mysqld]
> port            = 3306
> socket          = /opt/mysql_db01/var/mysql.sock
> skip-locking
> 
> max_heap_table_size=64M
> back_log = 512
> key_buffer = 512M
> table_cache = 2048
> sort_buffer_size = 8M
> myisam_sort_buffer_size = 128M
> thread_cache = 8
> read_buffer_size = 4M
> query_cache_size = 64M
> record_buffer = 8M
> tmp_table_size = 256M
> max_connections = 4096
> wait_timeout = 3600
> max_connect_errors = 1024
> [...]
> 
> now, I've created a table with:
> 
> mysql> create table bench1 (a int NOT NULL,b int,s char(10),primary key (a))
> ENGINE=NDB max_rows = 200000000000 avg_row_length = 50;
> Query OK, 0 rows affected (1.36 sec)
> mysql> show table status like 'bench1' \G
> *************************** 1. row ***************************
>            Name: bench1
>          Engine: ndbcluster
>         Version: 9
>      Row_format: Fixed
>            Rows: 100
>  Avg_row_length: 0
>     Data_length: 0
> Max_data_length: NULL
>    Index_length: 0
>       Data_free: 0
>  Auto_increment: NULL
>     Create_time: NULL
>     Update_time: NULL
>      Check_time: NULL
>       Collation: latin1_swedish_ci
>        Checksum: NULL
>  Create_options: max_rows=4294967295 avg_row_length=50
>         Comment:
> 1 row in set (0.01 sec)
> 
> mysql>
> 
> but when I now insert a few million rows (~5 million), I get
> the following:
> 
> $ bin/mysql --defaults-file=etc/my.cnf test < insertfile
> ERROR 1114 at line 1190581: The table 'bench1' is full
> 
> after ...
> 
> mysql> select count(*) from bench1;
> +----------+
> | count(*) |
> +----------+
> |   975983 |
> +----------+
> 1 row in set (9.69 sec)
> 
> around 1 million rows. The funny thing is that I've tried it several
> times now, and it halts somewhere between ~900,000 and ~1,400,000 rows.
> 
> any ideas?

The table size is limited by the memory allocated to the cluster storage
nodes (specified in config.ini as DataMemory and IndexMemory).

I.e., the max_rows parameter is currently ignored.
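For example, the relevant config.ini section would look something like
this (the section name matches the storage-node section of your version's
config, and the values here are illustrative only; size them to fit your
data set, since they are the actual limit on table size):

```ini
[DB DEFAULT]
# Illustrative values, not a recommendation.
DataMemory: 512M     # memory for table rows on each storage node
IndexMemory: 128M    # memory for hash indexes (e.g. primary keys)
```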

---

The reason the table can be full at one moment and not full some time
later is that the index uses a linear hashing algorithm, which needs some
extra pages while expanding or shrinking buckets.

The expansion happens some time after the operation has been committed...
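To make that concrete, here is a toy Python sketch of linear hashing (the
class, thresholds, and page model are my own illustration, not NDB's actual
implementation). The point to notice is in _maybe_expand: splitting a bucket
allocates a new one *after* the triggering insert has already succeeded, so
memory demand can spike between commits:

```python
class LinearHashTable:
    """Toy linear hashing: buckets are split one at a time, in order."""

    def __init__(self):
        self.level = 0          # current doubling round
        self.split_ptr = 0      # next bucket to be split
        self.buckets = [[]]     # start with a single bucket

    def _bucket_index(self, key):
        n = 1 << self.level     # bucket count at the start of this round
        i = hash(key) % n
        if i < self.split_ptr:  # bucket already split: use larger modulus
            i = hash(key) % (2 * n)
        return i

    def insert(self, key):
        self.buckets[self._bucket_index(key)].append(key)
        self._maybe_expand()

    def _maybe_expand(self):
        # Split one bucket once average occupancy exceeds a threshold.
        total = sum(len(b) for b in self.buckets)
        if total / len(self.buckets) <= 2:
            return
        n = 1 << self.level
        # The new bucket (an extra "page") is allocated here, after the
        # insert that triggered the split has already been committed.
        self.buckets.append([])
        keep, move = [], []
        for key in self.buckets[self.split_ptr]:
            (keep if hash(key) % (2 * n) == self.split_ptr else move).append(key)
        self.buckets[self.split_ptr] = keep
        self.buckets[n + self.split_ptr] = move
        self.split_ptr += 1
        if self.split_ptr == n:     # round complete: table has doubled
            self.split_ptr = 0
            self.level += 1
```

Running a few thousand inserts through this shows the bucket count growing
incrementally rather than all at once, which is why a table that was "full"
can accept more rows once pending expansion has freed up slack.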

Hope this helps,

Jonas