List: Cluster
From: Jonas Oreland  Date: July 8 2004 5:17am
Subject: Re: cluster crashing when trying to load data from file
Devananda wrote:
> I've got cluster running on 5 computers, 1 MGM, 2 DB, 2 API, and things 
> look good when I make small changes to existing tables. I submitted a 
> bug report concerning creating and dropping tables - it is not 
> propagating from one API node to the other unless I restart the mysqld 
> process on the API node that didn't originate the create / drop query.
> 
> However I've got another problem now too ...
> I have a large table that I'm attempting to import (180 mil rows, just 
> over 1G of text), but for starters I've cut off just the first 1MB of 
> data (in text format). When I try to load it via a single *very* long 
> insert statement, the DB nodes in the cluster simultaneously crash, 
> reporting
> 
> Date/Time: Wednesday 7 July 2004 - 17:31:37
> Type of error: error
> Message: No more free UNDO log
> Fault ID: 2312
> Problem data: There are more than 1Mbyte undolog writes outstanding
> Object of reference: DBACC (Line: 8600) 0x00000002
> ProgramName: NDB Kernel
> ProcessID: 2352
> TraceFile: NDB_TraceFile_3.trace
> ***EOM***
> 
> 
> I've looked at 
> http://dev.mysql.com/doc/mysql/en/MySQL_Cluster_DB_Definition.html to 
> see if I could find a config setting to change, but this is the only 
> information I could find.
> 
> "Another important aspect is that the |DataMemory| also contains UNDO 
> information for records. For each update of a record a copy record is 
> allocated in the |DataMemory|."
> 
> I modified 'DataMemory' to 200M and 'IndexMemory' to 100M, hoping that 
> this would help, but received the same error, still only 1 Mbyte of undo log.
> 
> 
> Is there a way to change the size of the Undo log ?

Note: this is not the actual undo log but rather a buffer on top of it.

But currently, no. (It shouldn't be that hard to implement, though.)

But the failure as such indicates that the nodes can't write undo to 
disk fast enough.

Do you see high disk load?

 > When I try to load it via a single *very* long insert statement

This is definitely not the "preferred" way of loading ndb.

A better way is to instead batch rows into chunks of about 64k and then commit.
So each insert should contain roughly 64k of data, and autocommit should 
be on.
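
For example, something like this (a minimal sketch only; the table t1 and 
its columns are made up, and a real batch would of course hold far more 
rows, up to roughly 64k worth of data per statement):

  SET autocommit = 1;

  -- stand-in for the real table
  CREATE TABLE t1 (id INT, txt VARCHAR(255)) ENGINE=NDBCLUSTER;

  -- each multi-row INSERT is its own transaction under autocommit;
  -- size the VALUES list to roughly 64k of data per statement
  INSERT INTO t1 (id, txt) VALUES
    (1, 'row 1'),
    (2, 'row 2'),
    (3, 'row 3');

  INSERT INTO t1 (id, txt) VALUES
    (4, 'row 4'),
    (5, 'row 5'),
    (6, 'row 6');

That way there should never be much more than about 64k of uncommitted 
work outstanding at a time, instead of the whole file in one transaction.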

/Jonas