From: Andrew Morgan  Date: June 23 2009 2:18pm
Subject: RE: Tablespaces large enough, but still get table full error
Hi Giovane,

 In terms of memory usage, one thing to note is that if you don't include a
primary key in your table definition then a hidden one will be created for
you. This is required so that Cluster can use it to partition the data
between data nodes. Because indexes (including that hidden primary key) are
always held in main memory, you always need some DataMemory/IndexMemory for
the table, even when its data is stored on disk (at least until enhancements
are made to allow indexes to be stored on disk).
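
One way to keep that cost visible is to declare the key yourself instead of
letting Cluster invent one. A minimal sketch against the table from this
thread (the column choice is only my assumption about what makes a row
unique; any unique combination would do):

  ALTER TABLE `table`
    ADD PRIMARY KEY (`start_time`, `ipv4_src`, `port_src`,
                     `ipv4_dst`, `port_dst`, `protocol`);

The key's hash index is then accounted for in IndexMemory and its ordered
index in DataMemory, so the memory cost shows up where you can see it.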

Regards, Andrew.


Andrew Morgan
MySQL Senior Product Manager - Sun Microsystems
andrew.morgan@stripped
Blog: www.clusterdb.com

> -----Original Message-----
> From: augusto.bott@stripped [mailto:augusto.bott@stripped] On Behalf
> Of Augusto Bott
> Sent: 23 June 2009 15:12
> To: cluster@stripped
> Cc: Giovane
> Subject: Re: Tablespaces large enough, but still get table full error
> 
> Hi Giovane!
> 
> There might be a number of limiting factors in play here. One of them
> could be your architecture. You mentioned you wanted to address 12G of
> address space, right? Well, on 32 bits you're limited to around 2.5GB for
> a single process (and ndbd is a single process, even if you told it to
> use datafiles as storage). I guess you could run multiple ndbd processes
> on the same box so each could use around 2GB, but I don't think that's a
> recommended practice. Either way, it looks like your ndbd process is not
> able to access all the RAM assigned to it (hence the 'table is full'
> error message). One thing you could investigate is moving to a 64-bit OS
> and binaries.
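>
> A quick check from the SQL side (this only tells you how mysqld was
> built; the ndbd binaries on the data nodes would need checking
> separately):
>
>   SELECT @@version_compile_machine, @@version_compile_os;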
> 
> On the other hand, there are a number of other factors that could be
> limiting memory for a process. I'm assuming we're talking about Linux
> here,
> so have you checked the output of 'ulimit -a' or the contents of
> /etc/security/limits.conf?
> 
> Also, in your first email you said the dataset was about 4G and
> DataMemory was 12G, but in your last email you mentioned 1200M (roughly
> 1.2G). You probably want to review your config as well.
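>
> For reference, those parameters live in config.ini on the management
> node; a minimal sketch (the values are illustrative only, and on a
> 32-bit build you can't go much past ~2G combined anyway):
>
>   [ndbd default]
>   NoOfReplicas=1
>   DataMemory=1800M
>   IndexMemory=300M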
> 
> I'd recommend going with David's suggestions: read the docs and find out
> the state of the tablespaces through the tools under the bin directory.
> Reading the docs in full is usually a good way to understand the
> mechanics of the whole thing, especially if you're new to this.
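>
> From the SQL side you can also see how full each datafile is via
> INFORMATION_SCHEMA (the FILES table covers NDB disk data files):
>
>   SELECT FILE_NAME, TOTAL_EXTENTS, FREE_EXTENTS, EXTENT_SIZE,
>          FREE_EXTENTS * EXTENT_SIZE / 1024 / 1024 AS free_mb
>   FROM INFORMATION_SCHEMA.FILES
>   WHERE TABLESPACE_NAME = 'ts3' AND FILE_TYPE = 'DATAFILE';
>
> If free_mb is still large when the error appears, the limit being hit is
> in memory (DataMemory/IndexMemory), not in the tablespace.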
> 
> Best of luck!
> 
> --
> Augusto Bott
> 
> 
> On Tue, Jun 23, 2009 at 05:01, Giovane <gufrgs@stripped> wrote:
> 
> > Hi Augusto,
> >
> > Thanks for your reply.
> >
> > Well, let's go to the answers:
> >
> > > May I ask what's the data structure?
> > Sure:
> > CREATE TABLE `table` (
> >  `start_time` decimal(15,0) unsigned NOT NULL,
> >  `end_time` decimal(15,0) unsigned NOT NULL,
> >  `duration` decimal(15,0) unsigned NOT NULL,
> >  `ipv4_src` int(10) unsigned NOT NULL,
> >  `port_src` smallint(5) unsigned NOT NULL,
> >  `ipv4_dst` int(10) unsigned NOT NULL,
> >  `port_dst` smallint(5) unsigned NOT NULL,
> >  `protocol` tinyint(3) unsigned NOT NULL,
> >  `tos` smallint(5) unsigned NOT NULL,
> >  `packets` decimal(33,0) NOT NULL,
> >  `octets` decimal(33,0) NOT NULL
> > ) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1 TABLESPACE ts3 STORAGE DISK;
> >
> > > Are you using 32/64bit binaries/arch?
> > 32 bits
> >
> > > Did you check if the ndbd process can actually access all the
> > > memory it's configured to use?
> > The NDB node? Well, I have configured the memory parameters for the
> > data nodes in the ndb config file (DataMemory and IndexMemory), and
> > those they can access, otherwise the nodes wouldn't start.
> >
> > > Any other limitations in place?
> > I'm not an expert in MySQL Cluster, so I think the answer is no. What
> > could they be? I have TCP buffer parameters and so on, but nothing
> > that I believe could impact this.
> >
> > > Can you please provide more details on that configuration of yours
> > > (like index data size)?
> > Sure, but I have no indexes, as you can see in the table structure.
> > The values are:
> >
> > DataMemory=1200M
> > IndexMemory=200M
> >
> > > Also, keep in mind that 'table is full' errors in MySQL can happen
> > > for a multitude of reasons, not necessarily because there is no more
> > > room for rows in a table.
> > How can I really trace it back and determine the cause?
> >
> > David also replied to my message, and he asked this:
> >
> > > 2) Have you used any of the tools in the bin directory to see what
> > > the state of the node/tablespaces is?
> > No, I have no idea which tool I should use, but I'll take a look at
> > that. If you have any suggestions, please let me know.
> >
> > Thanks a lot,
> >
> > Giovane
> >
> >
> >
> >
> >
> > On Mon, Jun 22, 2009 at 6:56 PM, Augusto Bott <augusto@stripped> wrote:
> > > May I ask what's the data structure?
> > > Are you using 32/64bit binaries/arch?
> > > Did you check if the ndbd process can actually access all the
> > > memory it's configured to use?
> > > Any other limitations in place?
> > > Can you please provide more details on that configuration of yours
> > > (like index data size)?
> > >
> > > Also, keep in mind that 'table is full' errors in MySQL can happen
> > > for a multitude of reasons, not necessarily because there is no more
> > > room for rows in a table.
> > >
> > > Just my $0.02.
> > >
> > > --
> > > Augusto Bott
> > >
> > >
> > > On Mon, Jun 22, 2009 at 12:12, Giovane <gufrgs@stripped> wrote:
> > >>
> > >> Dear all,
> > >>
> > >> I'm trying to use MySQL Cluster to store data on DISK, instead of
> > >> main memory.
> > >>
> > >> However, when I'm importing the data into the database, I still get
> > >> the following error:
> > >> "ERROR 1114 (HY000): The table 'table' is full"
> > >>
> > >> I'm aware that, by default, MySQL Cluster stores data in memory. So,
> > >> as suggested by Andrew (http://lists.mysql.com/cluster/6629), I used
> > >> tablespaces to address this.
> > >> I created the tablespace large enough to store the data and also
> > >> defined explicitly that the table should be stored on disk.
> > >> But I still get the error.
> > >>
> > >> Does anybody have any idea why?
> > >>
> > >> Here is the summary of what I did:
> > >>
> > >> 1. NDB config file: set NoOfReplicas to 1 (I don't need redundancy,
> > >> I just want to speed up the process).
> > >>
> > >> 2. Followed Andrew's suggestion (http://lists.mysql.com/cluster/6629)
> > >> on how to store data on disk in Cluster (instead of the default in
> > >> memory), by using tablespaces. So I basically:
> > >>
> > >>   2.a) Create the logfile group:
> > >>           CREATE LOGFILE GROUP lg_1 ADD UNDOFILE 'undo_1.log'
> > >>           INITIAL_SIZE 16M UNDO_BUFFER_SIZE 2M ENGINE NDBCLUSTER;
> > >>   2.b) Create the tablespace:
> > >>           CREATE TABLESPACE ts3 ADD DATAFILE 'DATAFILE3.data'
> > >>           USE LOGFILE GROUP lg_1 INITIAL_SIZE 3G ENGINE NDBCLUSTER;
> > >>   2.c) Add more datafiles to the tablespace:
> > >>           ALTER TABLESPACE ts3 ADD DATAFILE 'DATAFILE4.data'
> > >>           INITIAL_SIZE 3G ENGINE NDBCLUSTER;
> > >>           ALTER TABLESPACE ts3 ADD DATAFILE 'DATAFILE5.data'
> > >>           INITIAL_SIZE 3G ENGINE NDBCLUSTER;
> > >>           ALTER TABLESPACE ts3 ADD DATAFILE 'DATAFILE6.data'
> > >>           INITIAL_SIZE 3G ENGINE NDBCLUSTER;
> > >>
> > >> After that, I had 12GB reserved for this tablespace.
> > >>
> > >> 3. Create the DB (CREATE DATABASE experimental;) and the table
> > >> (referencing the previous tablespace and stating that it should be
> > >> stored on disk):
> > >>    CREATE TABLE `table` (
> > >>      #columns here
> > >>    ) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1 TABLESPACE ts3
> > >>    STORAGE DISK;
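> > >>
> > >>    (SHOW CREATE TABLE experimental.`table`\G can be used here to
> > >>    confirm that the TABLESPACE and STORAGE DISK clauses took
> > >>    effect.)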
> > >>
> > >> 4. Import the data into the table (DATA = 4GB):
> > >>   * load data infile .... into table table ...
> > >>
> > >> 5. Got an error (even though the tablespace is 12GB and my data is
> > >> only 4GB):
> > >>    ERROR 1114 (HY000): The table 'table' is full
> > >>
> > >> Does anybody know if this is the correct procedure and why I am
> > >> getting this error?
> > >>
> > >> Thanks and regards,
> > >>
> > >> Giovane
> > >>
> > >>
> > >
> > >
> >
> >
> >
> > --
> > Best regards,
> >
> > Giovane
> >
