On 2004-07-08 at 21:04, Devananda wrote:
> Jonas Oreland wrote:
>> A better way is to instead bulk rows to about 64k and then commit.
>> So, each insert should contain roughly 64k, and autocommit should
>> be on.
> The insert statements are the direct result of mysqldump; I merely
> grabbed the first complete line. Looking at the options for
> mysqldump, I don't see anything that lets me control how long each
> insert statement it creates will be. If the optimum size of an
> insert is 64k, is there a way to get mysqldump to create inserts of
> just that size?
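To the mysqldump question first: assuming your version of mysqldump
supports it, the --net_buffer_length option caps the length of each
multi-row INSERT statement that --extended-insert generates, so
something like this should give inserts of roughly the size Jonas
suggested (the database name mydb is just a placeholder):

  shell> mysqldump --extended-insert --net_buffer_length=65536 mydb > dump.sql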
As Jonas mentioned, the problem you are experiencing sounds very much
like your disk being much slower than your CPU: the NDB kernel produces
UNDO log records faster than the disk can swallow them. The fact that
for inserts the UNDO log records are created at the time of the insert,
and not at commit, points to this problem even more. At the same time
there is a protection barrier that should be invoked to ensure this
problem doesn't occur; instead the operation should be aborted.
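To make the mechanism concrete, here is a toy sketch (not NDB code; all
names and sizes are invented for illustration) of a fixed-size UNDO
buffer that is filled at insert time, drained by a slower disk, and
aborts the operation when it fills up rather than overrunning the log:

  #include <cstdio>

  struct UndoBuffer {
    long capacity;    /* bytes of UNDO buffer, e.g. 64 pages * 32 kByte */
    long used;

    /* UNDO records are appended at INSERT time, not at COMMIT. */
    bool append(long bytes) {
      if (used + bytes > capacity)
        return false; /* the protection barrier: abort, don't overrun */
      used += bytes;
      return true;
    }
    /* Called as the disk finishes writing records out. */
    void flushed(long bytes) { used -= bytes; }
  };

  int main() {
    UndoBuffer undo = { 64L * 32 * 1024, 0 };
    for (int i = 0; i < 1000000; i++) {
      if (!undo.append(256)) {             /* inserts log quickly...    */
        std::printf("operation %d aborted: UNDO buffer full\n", i);
        return 1;
      }
      if (i % 64 == 0) undo.flushed(1024); /* ...the disk drains slowly */
    }
    return 0;
  }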
So I suggest you file a bug report on that one together with the trace
file so that we can look into it. Please also describe the hardware and
the query so that we get an idea of the environment: how long is "very
long", and how many inserts are packed into this single line?
If the problem persists, there is a compile-time parameter at line 197,
ZUNDOPAGESIZE, set to 64 (the number of 32 kByte pages). You can set it
to a value like 128, 256, 512 or 1024. At the moment it is 1 MByte. So
far we haven't seen the need to make it a configurable parameter.
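For illustration only (the actual source file containing line 197 is
not named here), the change amounts to rebuilding the NDB kernel with a
larger constant:

  #define ZUNDOPAGESIZE 128   /* was 64; counted in 32 kByte pages */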
Mikael Ronström, Senior Software Architect
MySQL AB, www.mysql.com