List: Cluster
From: Devananda
Date: July 8 2004 10:47pm
Subject: Re: cluster crashing when trying to load data from file
Bug submitted. Since the trace file is 1.9 MB, I only posted a portion 
of it. The data looks pretty much the same from that point on, but if 
you would like the entire file I will be happy to email it to you. I 
increased ZUNDOPAGESIZE to 1024, recompiled, and copied the binaries 
to all my servers, but it still crashes.
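
For reference, the change amounts to bumping the compile-time constant
Mikael points at below and rebuilding ndbd. A minimal sketch, assuming
the constant is a plain #define and that the header sits under
ndb/src/kernel/blocks/dbacc/ (the exact path and syntax may differ):

    /* Dbacc.hpp, around line 197 (per Mikael's note below)       */
    /* default: 64 UNDO pages of 32 kByte each                    */
    /* #define ZUNDOPAGESIZE 64                                   */

    /* raised to 1024 pages, then ndbd was recompiled and the new
       binaries copied to every data node                          */
    #define ZUNDOPAGESIZE 1024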


Mikael Ronström wrote:

> Hi Devananda,
>
> On 2004-07-08, at 21.04, Devananda wrote:
>
>> Jonas Oreland wrote:
>>
>>> A better way is to instead bulk rows to about 64k and then commit.
>>> So, each insert should contain roughly 64k, and autocommit should
>>> be on.
>>>
>> The insert statements are the direct result of mysqldump; I merely
>> grabbed the first complete line. Looking at the options for
>> mysqldump, I don't see anything that lets me control how long each
>> insert statement it creates will be. If the optimum size of an
>> insert is 64k, is there a way to get mysqldump to create inserts of
>> just that size?
>>
>
> The problem you are experiencing sounds very much like your disk is
> much slower than your CPU, as Jonas mentioned.
> What happens is that the NDB kernel produces UNDO log records faster
> than the disk can swallow them. Since, for inserts, the UNDO log
> records are created at the time of the insert and not at commit, that
> points to this problem even more. At the same time there is a
> protection barrier that should kick in to ensure this problem doesn't
> occur, aborting the transactions instead.
> So I suggest you file a bug report on that one together with the
> trace file so that we can look into it. Please also describe the HW
> and the query so that we get an idea of the environment. How long is
> very long, and how many inserts are packed into this single line?
>
> If the problem persists, there is a compile-time parameter at line
> 197 in Dbacc.hpp, ZUNDOPAGESIZE, set to 64 (the number of 32 kByte
> pages). You can set this to a higher value like 128, 256, 512 or
> 1024. At the moment it is 1 MByte. So far we haven't seen the need
> to make it a configurable parameter.
>
> Rgrds Mikael
>
>>
>> -Devananda
>>
> Mikael Ronström, Senior Software Architect
> MySQL AB, www.mysql.com
>
> Clustering:
> http://www.infoworld.com/article/04/04/14/HNmysqlcluster_1.html
>
> http://www.eweek.com/article2/0,1759,1567546,00.asp
>
>


Thread
cluster crashing when trying to load data from file  (Devananda, 8 Jul)
  • Re: cluster crashing when trying to load data from file  (Jonas Oreland, 8 Jul)
    • Re: cluster crashing when trying to load data from file  (Devananda, 8 Jul)
      • Re: cluster crashing when trying to load data from file  (Mikael Ronström, 8 Jul)
        • Re: cluster crashing when trying to load data from file  (Devananda, 9 Jul)