From: Devananda
Date: August 19 2004 5:26pm
Subject: Re: Data persistency
Mikael Ronström wrote:

>
> All unfinished transactions will be lost.
> The transactions completed will be safe on disk after the next global 
> checkpoint which
> is run at configurable intervals (normally around 1-2 seconds).
>

Could you explain the difference between [DB]TimeBetweenLocalCheckpoints 
and [DB]TimeBetweenGlobalCheckpoints, and how both of these relate to 
[DB]NoOfFragmentLogFiles? The documentation on NoOfFragmentLogFiles says,
    "|REDO log records aren't removed until three local checkpoints have 
completed since the log record was inserted. The speed of checkpoint is 
controlled by a set of other parameters so these parameters are all 
glued together ... In high update scenarios this parameter needs to be 
set very high. Test cases where it has been necessary to set it to over 
300 have been performed."
I can tell that I do not understand how these are tied together, even 
though I have read and reread 
http://dev.mysql.com/doc/mysql/en/MySQL_Cluster_DB_Definition.html.
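
To make the question concrete, here is the relevant slice of a [DB] 
section in config.ini. DataMemory and IndexMemory are my real settings; 
the three checkpoint/redo lines show what I believe are the documented 
defaults, with my current reading of the manual in the comments -- 
please correct anything I have wrong:

    [DB DEFAULT]
    DataMemory=600M
    IndexMemory=96M
    TimeBetweenGlobalCheckpoints=2000  # ms between GCPs -- the 1-2 second
                                       # flush-to-disk interval mentioned above
    TimeBetweenLocalCheckpoints=20     # not a time: base-2 log of the number
                                       # of 4-byte words written before a new
                                       # LCP starts (20 => ~4MB of writes)
    NoOfFragmentLogFiles=8             # redo log space = this * 4 files * 16MB;
                                       # records live until 3 LCPs complete,
                                       # hence "set very high" under heavy updates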

One of the functions I am testing is importing a large quantity of data 
into the cluster, so I am trying to figure out which settings need to be 
adjusted to make this possible. I hope to be able to move a very large 
table: 75 million rows, 8 columns (4 INT, 3 VARCHAR(20), 1 VARCHAR(100)), 
all but one column allowed to be NULL, and the charset has to be UTF8. I 
continue to run into problems with UNDO buffers filling up during the 
import process, though lately I have had mostly success when using small 
import files.
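
Concretely, the load procedure looks roughly like this ("mydb", 
"mytable", and the file names are placeholders; since mysqlimport 
derives the table name from the data file name, each chunk gets copied 
to a file named after the table first):

    # split the dump into 250,000-row chunks, then load them one at a time
    split -l 250000 mytable.dump chunk_
    for f in chunk_*; do
        cp "$f" /tmp/mytable.txt
        mysqlimport -L mydb /tmp/mytable.txt   # -L => LOAD DATA LOCAL INFILE
    done

Each chunk is a separate mysqlimport run, which is what I mean by 
"small import files" above.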

A bigger problem I am running into is that DataMemory and IndexMemory 
fill up after an unexpectedly short amount of time. I have split the 
data into text dumpfiles of 250,000 rows (~15MB each) and import them 
with mysqlimport -L. I currently have 8 DB nodes, each with 
DataMemory=600M and IndexMemory=96M, but I run out of DataMemory after 
only 3 to 4 million rows (~200MB of text files). In other words, ~200MB 
of text seems to fill up 4.8GB of DataMemory. I have studied the 
documentation on DataMemory and IndexMemory use, but I find it 
confusing, and it does not seem to clearly cover the circumstances I am 
working with.
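
For what it's worth, here is my back-of-envelope attempt to account for 
the blow-up, assuming NDB reserves the full declared width for VARCHAR 
columns, utf8 costs 3 bytes per character, 2 replicas, and roughly 16 
bytes of per-row overhead (every one of those is an assumption on my 
part):

    4 x INT          :  4 * 4       =  16 bytes
    3 x VARCHAR(20)  :  3 * 20 * 3  = 180 bytes
    1 x VARCHAR(100) :  1 * 100 * 3 = 300 bytes
    per-row overhead :             ~=  16 bytes
                                    ------------
                                   ~= 512 bytes/row

    512 bytes * 3,500,000 rows * 2 replicas ~= 3.6GB

versus 8 nodes * 600M = 4.8GB of total DataMemory -- which would at 
least put running out at 3-4 million rows in the right ballpark, if 
those assumptions hold. Is that how the storage actually works?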

Could someone please clarify these things for me? It would be most 
appreciated :)


Devananda
Neopets, Inc