List: Cluster
From: Jim Hoadley
Date: April 12 2005 6:05pm
Subject: Re: DataMemory and IndexMemory

A related question:

When I increase DataMemory from 2600M to 2700M, my NDB nodes won't restart.
Error given is:

Date/Time: Tuesday 12 April 2005 - 10:50:25
Type of error: error
Message: Memory allocation failure
Fault ID: 2327
Problem data: DBTUP could not allocate memory for Page
Object of reference: Requested: 32768x86400 = 2831155200 bytes
ProgramName: ndbd
ProcessID: 15489
TraceFile: /usr/local/mysql/ndb_1_trace.log.12
***EOM***

There is no entry in the management node error log.

Now I'm increasing (DataMemory + IndexMemory) from 3.1GB to 3.2GB, so it could
be an operating system issue, but since I'm running the hugemem kernel, a single
process should be able to allocate up to 4GB of RAM.
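
For what it's worth, the "Requested" figure in the error works out to exactly
2700M worth of 32KB pages, i.e. the DataMemory setting itself. A quick sketch of
the arithmetic (Python, assuming M means MiB here):

PAGE_SIZE = 32768                            # NDB page size, from the error line

data_memory = 2700 * 1024 * 1024             # DataMemory = 2700M, with M = MiB
pages = data_memory // PAGE_SIZE
print(pages)                                 # 86400
print(pages * PAGE_SIZE)                     # 2831155200 bytes, as reported by DBTUP

index_memory = 500 * 1024 * 1024             # IndexMemory = 500M from config.ini
print((data_memory + index_memory) / 2**30)  # 3.125 GiB total (the "3.2GB"), under 4GB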

Hmm, is there some other parameter I need to change? Such as
NoOfDiskPagesToDiskAfterRestartTUP or NoOfDiskPagesToDiskAfterRestartACC?
Other ideas? My config.ini is attached below.

Thx.

-- Jim Hoadley
   Sr Software Eng
   Dealer Fusion, Inc


--- Jim Hoadley <j_hoadley@stripped> wrote:
> Here's how I calculate my database's data and index memory size requirements:
> 
> PER COLUMN
>
> Data Requirement    Index Requirement    Fields
> --------------------------------------------------------------
> 4                                        enum,int,time,timestamp
> 8                                        datetime
> size*                                    char
> size + 2*                                varchar
> 256                                      text***
>                     size + 25**          hash key
> 10                  size + 25**          hash key + ordered key
>
> *   rounded up to nearest 4-byte boundary
> ** + 8 if size > 32
> *** see below
>
> PER TABLE
>
> data subtotal = sum of Data Requirement column + 16
> data records per page = (32768 - 128) / data subtotal
> data pages = number of records / data records per page
> data memory requirement = (data pages * 32768) + text requirement***
>
> index subtotal = sum of Index Requirement column
> index records per page = (32768 - 128) / index subtotal
> index pages = number of records / index records per page
> index memory requirement = index pages * 32768
>
> *** text requirement = sum of any text field data in excess of 256 bytes
>     NOTE: This calculation may be off because it doesn't include any
>     overhead for records, table or indexes associated with the overflow
>     text
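
(To make the quoted arithmetic concrete, here is the same per-table calculation
as a short Python sketch. The page size, the 128-byte page overhead and the
16-byte record overhead come straight from the formulas above; the example table
at the bottom is purely hypothetical.)

import math

PAGE_SIZE = 32768          # bytes per NDB page
PAGE_OVERHEAD = 128        # usable bytes per page = 32768 - 128
RECORD_OVERHEAD = 16       # added once per record to the data subtotal

def data_memory(data_costs, num_records, text_overflow=0):
    # data memory requirement = (data pages * 32768) + text requirement
    subtotal = sum(data_costs) + RECORD_OVERHEAD
    per_page = (PAGE_SIZE - PAGE_OVERHEAD) // subtotal
    pages = math.ceil(num_records / per_page)   # rounded up to whole pages
    return pages * PAGE_SIZE + text_overflow

def index_memory(index_costs, num_records):
    # index memory requirement = index pages * 32768
    subtotal = sum(index_costs)
    per_page = (PAGE_SIZE - PAGE_OVERHEAD) // subtotal
    pages = math.ceil(num_records / per_page)
    return pages * PAGE_SIZE

# Hypothetical table: an int primary key (hash key), a datetime and a char(20).
rows = 1000000
data_costs  = [4, 8, 20]   # int, datetime, char(20) (already on a 4-byte boundary)
index_costs = [4 + 25]     # hash key on the int column: size + 25
print(data_memory(data_costs, rows) / 2**20, "MiB data")
print(index_memory(index_costs, rows) / 2**20, "MiB index")

(As the NOTE above says, this ignores any overhead for the records, tables or
indexes behind the overflow text, so treat it as a lower bound.)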
> 
> Is this correct? If so, then my database sizes are:
> 
>    data size = 1725M
>    index size = 360M
> 
> And using this formula:
> 
>    (size * 1.1) * number of replicas / number of nodes 
> 
> the minimum settings for my 2 nodes should be:
> 
>    DataMemory = 1898M
>    IndexMemory = 396M
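
(Plugging the quoted sizes into that formula does reproduce those minimums --
a quick check in Python:)

replicas, nodes = 2, 2                        # NoOfReplicas=2, two NDB nodes

print(round(1725 * 1.1 * replicas / nodes))   # 1898 -> DataMemory = 1898M
print(round(360 * 1.1 * replicas / nodes))    # 396  -> IndexMemory = 396M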
> 
> (I've got 6GB of RAM on each of 2 NDB nodes. I'm using the hugemem Linux kernel
> so each NDB process can access up to 4GB--instead of 2GB as in the standard
> kernel.)
> 
> In the real world, though, after several days of trying, I haven't been able to
> run my databases with anything smaller than 2600M and 500M without getting
> "table full" errors. I want to understand why. Any help would be appreciated.
> Thx.
> 
> -- Jim Hoadley
>    Sr Software Eng
>    Dealer Fusion, Inc
> 
> latest config.ini:
> 
> #################################################################
> [ndbd default]
> NoOfReplicas= 2
> MaxNoOfConcurrentOperations=131072
> DataMemory= 2600M
> IndexMemory= 500M
> Diskless= 0
> DataDir= /var/mysql-cluster
> TimeBetweenWatchDogCheck=10000
> HeartbeatIntervalDbDb=10000
> HeartbeatIntervalDbApi=10000
> NoOfFragmentLogFiles=64
> #TimeBetweenLocalCheckpoints=19
> NoOfDiskPagesToDiskAfterRestartTUP=54   #default=40
> NoOfDiskPagesToDiskAfterRestartACC=8    #default=20
> #http://lists.mysql.com/cluster/1441
> MaxNoOfAttributes = 2000
> MaxNoOfOrderedIndexes = 5000
> MaxNoOfUniqueHashIndexes = 5000
>  
> [ndbd]
> HostName= 10.0.1.199
>  
> [ndbd]
> HostName= 10.0.1.200
>  
> [ndb_mgmd]
> HostName= 10.0.1.198
> PortNumber= 2200
>    
> [mysqld]
>   
> [mysqld]
>  
> [tcp default]
> PortNumber= 2202
> #################################################################

Thread:
  DataMemory and IndexMemory - Jim Hoadley, 11 Apr
  Re: DataMemory and IndexMemory - Jim Hoadley, 12 Apr
    • Re: DataMemory and IndexMemory - Mikael Ronström, 12 Apr
    • Re: DataMemory and IndexMemory - Stewart Smith, 13 Apr