From: Mikael Ronström
Date: August 4 2004 5:57pm
Subject: Re: memory overhead question
Hi Luke,

2004-08-04 kl. 19.38 skrev Crouch, Luke H.:

> sounds great, Mikael. if I change the LockPagesInMainMemory setting, 
> will it compromise any of the data we've already loaded?
>
No, but the change only takes effect after restarting the node.
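For reference, a minimal sketch of where the setting goes (section and parameter layout assumed from 4.1-era cluster configuration; the colon notation follows this message, but your config.ini may use `=` instead, so check against your own file):

```ini
# config.ini fragment -- hypothetical example, adapt to your setup
[DB]
DataMemory: 1000M
IndexMemory: 350M
MaxNoOfConcurrentOperations: 65536
# Lock the entire ndbd process into RAM so it never touches swap
LockPagesInMainMemory: Y
```

After editing, restart the ndbd node for the change to take effect; the data already loaded is not affected.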

> OS is running on pretty thin memory (54MB total is used before cluster 
> starts up) on account that we basically installed bare-minimum linux.
>
Good (though you will probably still use some file cache unless you use
Direct I/O, which is fine to do for the ndbd part).

> in production, we will probably end up running the 4 machines each 
> with just a DB node, and then a separate machine with MGM, and a 
> couple of other machines with API's, so we'd hopefully be able to give 
> the DB node every speck of memory each machine can spare...
>

Then you should be ok and even have margins to increase some memory if 
problems occur (at least given the 4 GB you previously mentioned).

> I've made a couple posts regarding performance too...
>
> basically, a query (count with where clause) takes about 2x as long on 
> NDB as it does on a MyISAM. could some of this be caused by the swap 
> usage? so maybe if I lock the pages the performance will increase, 
> too?
>

A slight performance increase, yes. We still haven't focused much on
optimising the MySQL connection, so NDB being a bit slower is not
necessarily a problem, simply an indication that we still have some
performance improvements to make. Also not far away is support for
extra cluster interconnect HW that will speed up queries by a great
deal (we easily get a speed-up of a factor of 3-4x in our early tests).

> thanks a ton, you guys have been awesome help,
>

Just fun.

Rgrds Mikael

> -L
>
>> -----Original Message-----
>> From: Mikael Ronström [mailto:mikael@stripped]
>> Sent: Wednesday, August 04, 2004 12:32 PM
>> To: Crouch, Luke H.
>> Cc: cluster@stripped
>> Subject: Re: memory overhead question
>>
>>
>> Hi Luke,
>>
>> 2004-08-04 kl. 17.54 skrev Crouch, Luke H.:
>>
>>> 50? or 50,000?!
>>>
>>> for our DB settings now, we have:
>>>
>>> DataMemory: 1000M
>>> IndexMemory: 350M
>>> MaxNoOfConcurrentOperations: 65536
>>>
>>
>> 65536 > 50000, so you should be ok.
>>
>> Given my earlier experiments, this should result in a memory size of
>> 1.6 GB. With a 2 GB machine this is ok, but presumably you are
>> running other programs that are also consuming memory (like the
>> OS :)), so it is pretty tight. We have a customer with a slightly
>> smaller DataMemory and IndexMemory, and they ran into problems a few
>> times a week when executing a backup program.
>>
>>> and we have 2G of memory on the machines. now when we run the load
>>> of inserts, it seems to do better, BUT...
>>>
>>
>> Great
>>
>>> I think our problem last time may have happened because we only had
>>> 80M allocated to DataMemory, and when it started loading, it just
>>> started loading a bunch into swap, and the swap memory was causing
>>> the problems...does this sound reasonable?
>>>
>>> it looks like the ndbd is still dipping into swap memory...? not
>>> nearly as much as before (400M now vs. 2G before), but as I
>>> understand it, ndbd using swap memory can cause problems? is there a
>>> way to prevent ndbd from using any swap memory at all? disabling
>>> swap?
>>>
>>
>> There is a setting that works on some OSes (Linux and Solaris):
>> LockPagesInMainMemory: Y. If this is set, the entire process will be
>> locked into memory. There is a small problem with this in that stack
>> memory also gets locked, so if more stack memory is needed and
>> allocated and no memory is available, the process can fail pretty
>> hard (without cores and the like).
>>
>>> we're planning on getting some more memory (about 4G in each
>>> machine)...will that take care of it, or will swap always be used to
>>> some extent?
>>>
>>
>> LockPagesInMainMemory: Y with 4 GB of memory, and not using the
>> machine for many other activities in production, should cater for
>> good sleeping. We have sites running like this for many months in
>> production without hiccups.
>>
>> Rgrds Mikael
>>
>>> thanks,
>>> -L
>>>
>>>> -----Original Message-----
>>>> From: Mikael Ronström [mailto:mikael@stripped]
>>>> Sent: Wednesday, August 04, 2004 8:35 AM
>>>> To: Crouch, Luke H.
>>>> Cc: cluster@stripped
>>>> Subject: Re: memory overhead question
>>>>
>>>>
>>>> Hi Luke,
>>>> Due to a bug that I am currently fixing, you need to have twice the
>>>> number of operation records as the maximum number of records
>>>> involved in a transaction. I am not exactly sure how big your insert
>>>> transactions are. I have seen 50,000 records per transaction in
>>>> another mysqldump usage, which would then require around 100,000.
>>>> From what I have seen, each insert is about 1 MByte in size when
>>>> using extended_insert; depending on your record size, this should at
>>>> least not be bigger than 50,000.
>>>>
>>>> Rgrds Mikael
>>>>
>>>> 2004-08-04 kl. 15.16 skrev Crouch, Luke H.:
>>>>
>>>>> thanks Mikael...that's very helpful. we had a very high number for
>>>>> MaxNoOfConcurrentOperations, which was probably another thing using
>>>>> lots of our memory. what would be the suggested number of
>>>>> concurrent operations? right now, we're just trying to load our
>>>>> large table (4 million records, dumped with --extended-insert)...
>>>>>
>>>>> thanks again,
>>>>> -L
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Mikael Ronström [mailto:mikael@stripped]
>>>>>> Sent: Wednesday, August 04, 2004 5:39 AM
>>>>>> To: Crouch, Luke H.
>>>>>> Cc: cluster@stripped
>>>>>> Subject: Re: memory overhead question
>>>>>>
>>>>>>
>>>>>> Hi Luke,
>>>>>> Here are some experiments I performed on my machine.
>>>>>>
>>>>>>   PID COMMAND   %CPU  TIME     #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
>>>>>>   465 ndbd      0.0%  0:02.35   22    99    155  230M   3.25M  232M   301M
>>>>>>   464 ndbd      0.0%  0:00.00    1     9     27  720K   3.24M  188K   48.7M
>>>>>>   462 ndb_mgmd  0.0%  0:00.19   11    46     49  6.10M  1.30M  1.20M  41.0M
>>>>>>
>>>>>> This is a printout from a quick-start of a 1-node MySQL Cluster on
>>>>>> my PowerBook running Mac OS X 10.3. So about 300M of virtual
>>>>>> address space is used for a standard-configured ndbd process.
>>>>>>
>>>>>>   PID COMMAND   %CPU  TIME     #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
>>>>>>   486 ndbd      0.0%  0:03.11   22    99    155  367M   3.25M  368M   437M
>>>>>>
>>>>>> This printout was taken with IndexMemory set to 50M and DataMemory
>>>>>> to 190M. Memory increased by as much as IndexMemory and DataMemory
>>>>>> increased.
>>>>>>
>>>>>>   PID COMMAND   %CPU  TIME     #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
>>>>>>   496 ndbd      0.8%  0:03.15   20    95    149  405M+  3.25M  406M+  476M
>>>>>>
>>>>>> And changing MaxNoOfConcurrentOperations to 65536 adds another 39M
>>>>>> of memory.
>>>>>>
>>>>>> Optimisations of MySQL Cluster, both in terms of memory usage and
>>>>>> of the performance of mysql clients using the storage engine, are
>>>>>> on the TODO list. At first, however, we are focusing on ensuring
>>>>>> that MySQL Cluster is fully integrated with MySQL.
>>>>>>
>>>>>> Rgrds Mikael
>>>>>>
>>>>>>
>>>>>> 2004-08-03 kl. 17.23 skrev Crouch, Luke H.:
>>>>>>
>>>>>>> when we start our cluster up with default settings (DataMemory:
>>>>>>> 80000k, IndexMemory: 24000k), and check the memory usage on our
>>>>>>> different nodes, it shows ndbd size as 2395M! and the machine is
>>>>>>> using 2G memory, and 1G of swap!
>>>>>>>
>>>>>>> are these numbers overhead with only 80M of memory allocated to
>>>>>>> data? how much memory is required other than the memory dedicated
>>>>>>> to data via the DataMemory setting?
>>>>>>>
>>>>>>> thanks,
>>>>>>> -L
>>>>>>>
>>>>>>> -- 
>>>>>>> MySQL Cluster Mailing List
>>>>>>> For list archives: http://lists.mysql.com/cluster
>>>>>>> To unsubscribe:
>>>>>>> http://lists.mysql.com/cluster?unsub=1
>>>>>>>
>>>>>>>
>>>>>>>
>> -- 
>> MySQL Cluster Mailing List
>> For list archives: http://lists.mysql.com/cluster
>> To unsubscribe:
>> http://lists.mysql.com/cluster?unsub=1
>>
>>
>>
Mikael Ronström, Senior Software Architect
MySQL AB, www.mysql.com

Clustering:
http://www.infoworld.com/article/04/04/14/HNmysqlcluster_1.html

http://www.eweek.com/article2/0,1759,1567546,00.asp

