List: Cluster
From: Luke H. Crouch
Date: August 27, 2004 9:12pm
Subject: RE: ALL READ [was: adding ndb nodes?]

I think I understand your question...

so you would put 2 nodes on each machine, say 2 machines to start with. now you have 4
nodes executing on 2 machines. then, sometime later, you want to add 2 more machines, and
go to having just 1 node on each?

the scenario sounds plausible to me. when moving the nodes to their new machines, you
would have to make sure to copy over the entire filesystem for that node, then set the
nodes up to execute on the different computers in the config.ini, and restart the whole
cluster... as long as you aren't running something crazy like 32 ndbd nodes on each
machine, I don't *think* the overhead would be too large on the initial configuration.
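
for concreteness, a minimal config.ini sketch of the starting point (hostnames
and paths are invented, and two ndbd processes on one box each need their own
data directory):

  [NDBD DEFAULT]
  NoOfReplicas=2

  [NDB_MGMD]
  HostName=mgm.example.com

  [NDBD]                                # first node on machine A
  HostName=hosta.example.com
  DataDir=/var/lib/mysql-cluster/node1

  [NDBD]                                # second node on machine A
  HostName=hosta.example.com
  DataDir=/var/lib/mysql-cluster/node2

  [NDBD]                                # same again on machine B
  HostName=hostb.example.com
  DataDir=/var/lib/mysql-cluster/node1

  [NDBD]
  HostName=hostb.example.com
  DataDir=/var/lib/mysql-cluster/node2

  [MYSQLD]

the move would then amount to copying a node's data directory to the new box,
pointing that node's HostName line at the new machine, and restarting the cluster.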

it seems like an interesting experiment for someone to try. we've now set up our former
cluster boxes as a mysql master + 3 slave replication setup (a little more fitting,
considering we've got 300GB of disk space on each for the db). but if someone can give
this experiment a try, I'd like to see how it goes...

-L

> -----Original Message-----
> From: Wheeler, Alex [mailto:awheeler@stripped]
> Sent: Friday, August 27, 2004 4:02 PM
> To: cluster@stripped
> Subject: RE: ALL READ [was: adding ndb nodes?]
> 
> 
> 
> What is the feasibility of building a cluster with twice as many nodes
> on each computer as you need, say 2 per 1-CPU computer, or 4 per 2-CPU
> computer?
> 
> When an upgrade is needed, half the nodes are migrated to their own
> servers and the memory configurations are doubled -- since only half as
> much memory was being used on each computer -- and then the whole
> cluster is restarted, now with twice as many computers and increased RAM.
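> 
> to make the memory half concrete (figures invented, just to show which
> knobs move): with two ndbd processes sharing each box's RAM, the config
> might carry
> 
>   [NDBD DEFAULT]
>   DataMemory=512M    # two nodes per box, so each gets half the RAM
>   IndexMemory=64M
> 
> and after the migration, with one node per box, the same section would
> simply double:
> 
>   [NDBD DEFAULT]
>   DataMemory=1024M
>   IndexMemory=128M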
> 
> Would the overhead in this design outweigh the benefits of not needing
> to bring the cluster down for very long to double the number of
> computers?  Or is it even possible to migrate a node from one computer
> to another?
> 
> --
> Alex Wheeler
>  
> 
> -----Original Message-----
> From: Tomas Ulin [mailto:tomas@stripped] 
> Sent: Friday, August 27, 2004 5:51 AM
> To: cluster@stripped
> Cc: Devananda; Crouch, Luke H.; Clint Byrum
> Subject: ALL READ [was: adding ndb nodes?]
> 
> all,
> 
> I feel an urge to break in here so as not to cause confusion.
> 
> Today (this will change some day) you cannot change the number of nodes
> on-line; we've just added verifications for this in the code, because
> doing it will eventually cause the system to break.
> 
> Hence to upgrade:
> 1) make backup
> 2) shut down your old cluster
> 3) bring new cluster up
> 4) restore
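> 
> roughly, with the standard tools (backup id, node ids and paths below are
> examples only; check ndb_restore --help for your version):
> 
>   ndb_mgm> START BACKUP                      # 1) make backup
>   ndb_mgm> SHUTDOWN                          # 2) stop all ndbd + mgm nodes
>   shell> ndb_mgmd -f config.ini              # 3) start mgm with new config,
>   shell> ndbd --initial                      #    then ndbd on each data node
>   shell> ndb_restore -m -b 1 -n 2 BACKUP-1/  # 4) restore metadata once,
>   shell> ndb_restore -r -b 1 -n 2 BACKUP-1/  #    then data per old node id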
> 
> What you've managed to do by going from 4 to 8 on-line will (maybe) work
> if you don't create new tables... and anyway the data will not have
> redistributed itself in going from 4 to 8; all the data will still be
> located on the first 4 nodes (you should be able to see this if you do a
> load test...).
> 
> An online upgrade path will look as follows:
> 
> 4-node version 4.x -> 4-node version 5.y -> 8-node version 5.z
> 
> BR,
> 
> Tomas
> 
> Devananda wrote:
> 
> > I'll give it a test run tomorrow, but I'm fairly sure that you do not
> > need (or want) to do #1 or #5. Backing up beforehand is of course a
> > good idea. At minimum, what needs to happen for the cluster
> > configuration to change is to change the config.ini (doing this while
> > everything is running is fine). Then there are the next 3 steps, and
> > I'm not sure what order to do them in: start up the new DB nodes (they
> > won't be able to join the cluster at this point), then restart the MGM
> > node (then the new nodes will join the cluster), then restart the old
> > nodes, one at a time; or restart the mgm node first, so it rereads the
> > config file, then start up the new nodes, then restart the old; or
> > restart mgm, restart old, start up new nodes.
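> >
> > (by "restart, one at a time" I mean node restarts from the ndb_mgm
> > client, something like the below; node ids are examples and the command
> > names are as I understand the 4.1 management client:
> >
> >   ndb_mgm> SHOW          # list the db node ids first
> >   ndb_mgm> 2 RESTART     # restart old node 2, wait for it to rejoin
> >   ndb_mgm> 3 RESTART     # then the next, and so on
> >
> > ...same idea for whichever ordering turns out to be right.)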
> >
> > However, the devs may well correct me on all this ;) I've just been
> > experimenting a lot! hehehe...
> >
> > Devananda
> > Neopets, Inc
> >
> >
> >
> >
> > Crouch, Luke H. wrote:
> >
> >> the MySQL guys might correct me on some of this, but to rebuild the
> >> cluster with more db nodes, I think you would have to follow this
> >> procedure...
> >>
> >> 0. Use management node to create a global backup
> >> 1. Use management node to shut down all the db nodes
> >> 2. Shut down the management node
> >> 3. Change the config.ini file to include the new DB nodes
> >> 4. Bring up the management node
> >> 5. Bring up each of the db nodes with ndbd -i
> >> 6. Use the management node to restore from the global backup
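> >>
> >> for step 3, the change should just be extra [NDBD] sections in
> >> config.ini, one per new db node, something like (hostnames invented):
> >>
> >>   [NDBD]
> >>   HostName=newdb1.example.com
> >>   DataDir=/var/lib/mysql-cluster
> >>
> >>   [NDBD]
> >>   HostName=newdb2.example.com
> >>   DataDir=/var/lib/mysql-cluster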
> >>
> >> that's the process as I understand it to be...?
> >>
> >> -L
> >>
> >>
> >>> -----Original Message-----
> >>> From: Clint Byrum [mailto:cbyrum@stripped]
> >>> Sent: Thursday, August 26, 2004 11:33 AM
> >>> To: cluster@stripped
> >>> Subject: adding ndb nodes?
> >>>
> >>>
> >>> Hey guys, first off.. wow.. lots of good questions on this list
> >>> lately. ;)
> >>>
> >>> Anyway, I think I have this right, but I'm not sure.
> >>>
> >>> Once the cluster is running.. say with 4 nodes.. can I add nodes
> >>> later? I understand that 6 nodes is a no-no, but say I wanted to add
> >>> 4 more nodes after the cluster has been running for a few months and
> >>> has 5G of data in it.
> >>> Here's how I think it works. Correct me where I'm wrong:
> >>>
> >>> 1) Add new nodes to the config on the management server.
> >>> 2) Start the new nodes.
> >>> 3) Restart the existing db and api nodes one by one.
> >>> 4) Magically, new nodes start getting new inserts.
> >>>
> >>> Or is it more complex than that?
> >>>
> >>> Thanks
> >>> -cb
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >
> >
> 
> 
> 
> 
