From: Giovane
Date: June 23 2009 8:01am
Subject: Re: Tablespaces large enough, but still get table full error
Hi Augusto,

Thanks for your reply.

Well, let's go to the answers:

> May I ask what's the data structure?
Sure:
CREATE TABLE `table` (
  `start_time` decimal(15,0) unsigned NOT NULL,
  `end_time` decimal(15,0) unsigned NOT NULL,
  `duration` decimal(15,0) unsigned NOT NULL,
  `ipv4_src` int(10) unsigned NOT NULL,
  `port_src` smallint(5) unsigned NOT NULL,
  `ipv4_dst` int(10) unsigned NOT NULL,
  `port_dst` smallint(5) unsigned NOT NULL,
  `protocol` tinyint(3) unsigned NOT NULL,
  `tos` smallint(5) unsigned NOT NULL,
  `packets` decimal(33,0) NOT NULL,
  `octets` decimal(33,0) NOT NULL
) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1 TABLESPACE ts3 STORAGE DISK ;
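
One thing I noticed in the manual (please correct me if I'm wrong): since the
table has no primary key, NDB creates a hidden one, and any column that is
part of an index always stays in DataMemory, even with STORAGE DISK. If I ever
add an explicit key, I suppose it would look something like this (the column
choice here is only an illustration, not something I actually run):

ALTER TABLE `table`
  ADD PRIMARY KEY (`start_time`,`ipv4_src`,`port_src`,`ipv4_dst`,`port_dst`);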

> Are you using 32/64bit binaries/arch?
32 bits

> Did you check if the ndbd process can actually access all the memory it's
> configured to use?
The NDB data nodes? Well, I have configured the memory parameters for the
data nodes in the NDB config file (DataMemory and IndexMemory), and they can
access that memory; otherwise the nodes wouldn't start.

> Any other limitations in place?
I'm not an expert in MySQL Cluster, so I think the answer is no. What could
they be? I have TCP buffer parameters and so on configured, but nothing that
I believe could affect this.

> Can you please provide more details on that configuration of yours (like
> index data size?)
Sure, but I have no indexes, as you can see in the table structure above.
The values are:

DataMemory=1200M
IndexMemory=200M
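
By the way, if I understand the manual correctly, the free space left in the
tablespace can be read from INFORMATION_SCHEMA.FILES, so I intend to run
something like this (free bytes should be FREE_EXTENTS * EXTENT_SIZE):

SELECT FILE_NAME, FREE_EXTENTS, TOTAL_EXTENTS, EXTENT_SIZE
  FROM INFORMATION_SCHEMA.FILES
 WHERE TABLESPACE_NAME = 'ts3' AND FILE_TYPE = 'DATAFILE';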

> Also, keep in mind that 'table is full' errors in MySQL can happen for a
> multitude of reasons, not necessarily because there is no more room for rows
> on a table.
How can I actually trace this back and determine what the cause is?
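
The only thing I could think of so far is to check the memory usage on the
data nodes from the management client, assuming the REPORT command is
available in my cluster version:

ndb_mgm -e "ALL REPORT MEMORYUSAGE"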

David also replied to my message, and he asked this:

> 2) Have you used any of the tools in the bin directory to see what the state of the
> node/tablespaces are?
No, I have no idea which tool I should use, but I'll take a look at that.
If you have any suggestions, please let me know.
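
Looking at the bin directory, I can see ndb_mgm, ndb_show_tables and
ndb_desc; I guess something along these lines would show how NDB sees the
table (connect string and names adjusted to my setup, of course):

ndb_show_tables -c localhost
ndb_desc -c localhost -d experimental table -p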

Thanks a lot,

Giovane





On Mon, Jun 22, 2009 at 6:56 PM, Augusto Bott <augusto@stripped> wrote:
> May I ask what's the data structure?
> Are you using 32/64bit binaries/arch?
> Did you check if the ndbd process can actually access all the memory it's
> configured to use?
> Any other limitations in place?
> Can you please provide more details on that configuration of yours (like
> index data size?)
>
> Also, keep in mind that 'table is full' errors in MySQL can happen for a
> multitude of reasons, not necessarily because there is no more room for rows
> on a table.
>
> Just my $0.02.
>
> --
> Augusto Bott
>
>
> On Mon, Jun 22, 2009 at 12:12, Giovane <gufrgs@stripped> wrote:
>>
>> Dear all,
>>
>> I'm trying to use MySQL Cluster to store data on DISK instead of in main
>> memory.
>>
>> However, when I'm importing the data into the database, I still get the
>> following error:
>> "ERROR 1114 (HY000): The table 'table' is full"
>>
>> I'm aware that, by default, MySQL Cluster stores data in memory. So,
>> as suggested by Andrew (http://lists.mysql.com/cluster/6629), I used
>> tablespaces to address this.
>> I created the tablespace large enough to store the data and also defined
>> explicitly that the table should be stored on disk.
>> But I still get the error.
>>
>> Does anybody have any idea why?
>>
>> Here is the summary of what I did:
>>
>> 1. NDB config file: set NoOfReplicas to 1 (I don't need redundancy, I
>> just want to speed up the process).
>>
>> 2. Followed Andrew's suggestion (http://lists.mysql.com/cluster/6629)
>> on how to store data on disk in the cluster (instead of the default in
>> memory), by using tablespaces. So I basically did the following:
>>
>>   2.a) Create the log file group:
>>        CREATE LOGFILE GROUP lg_1
>>          ADD UNDOFILE 'undo_1.log'
>>          INITIAL_SIZE 16M
>>          UNDO_BUFFER_SIZE 2M
>>          ENGINE NDBCLUSTER;
>>   2.b) Create the tablespace:
>>        CREATE TABLESPACE ts3
>>          ADD DATAFILE 'DATAFILE3.data'
>>          USE LOGFILE GROUP lg_1
>>          INITIAL_SIZE 3G
>>          ENGINE NDBCLUSTER;
>>   2.c) Add more data files to the tablespace:
>>        ALTER TABLESPACE ts3 ADD DATAFILE 'DATAFILE4.data' INITIAL_SIZE 3G ENGINE NDBCLUSTER;
>>        ALTER TABLESPACE ts3 ADD DATAFILE 'DATAFILE5.data' INITIAL_SIZE 3G ENGINE NDBCLUSTER;
>>        ALTER TABLESPACE ts3 ADD DATAFILE 'DATAFILE6.data' INITIAL_SIZE 3G ENGINE NDBCLUSTER;
>>
>> After that, I had 12GB reserved for this tablespace.
>>
>> 3. Create the DB (create database experimental;) and the table
>> (referencing the tablespace above and specifying STORAGE DISK):
>>    CREATE TABLE `table` (
>>      #columns here
>> ) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1 TABLESPACE ts3 STORAGE DISK ;
>>
>> 4. Import the data into the table (data = 4GB):
>>   * load data infile .... into table table ...
>>
>> 5. Got the error (even though the tablespace is 12GB and my data is only 4GB):
>> ERROR 1114 (HY000): The table 'table' is full
>>
>> Does anybody know if this is the correct procedure, and why I am getting this error?
>>
>> Thanks and regards,
>>
>> Giovane
>>
>>
>
>



-- 
Best regards,

Giovane