I know this question gets asked often; however, the archives on the MySQL site seem
outdated and I couldn't find any solutions there.
I'm trying to come up with a solution for true redundancy on MySQL, and so far I have
not come up with anything that isn't a disaster waiting to happen. Here are some
potential solutions I've considered:
First idea: assume there are two databases (A and B) and updates/inserts are done via a
CGI script. Database A receives all updates from the CGI script. Every 5 minutes or so,
FTP Database A's data over and overwrite Database B. If Database A goes down, the CGI
script then sends all information to Database B.
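For what it's worth, the 5-minute transfer could also be done with mysqldump piped into the mysql client rather than FTPing the raw datafiles (copying a live server's table files risks an inconsistent snapshot). A sketch only, assuming `dbB` is Database B's hostname and credentials are configured elsewhere:

```shell
# Hypothetical crontab entry on Database A's server: every 5 minutes,
# dump all databases on A and load the dump straight into B.
# The hostname `dbB` is a placeholder; credentials are omitted.
*/5 * * * * mysqldump --all-databases | mysql -h dbB
```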
Potential problems with this: information updated since the last A->B transfer will be
lost. Perhaps some log parsing could recover it, but that would more than likely require
manual activation of a data-retrieval script. B would also need to be copied back to A
before the CGI script realizes that Database A is back online. In a nutshell, it
requires precise timing, has potential race conditions, and overall is not fully
automated.
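The failover step in this first scheme could be sketched roughly as below. The connection helpers here are hypothetical stand-ins for a real MySQL driver's connect call, not an actual API:

```python
# Sketch of the failover in scheme 1: try Database A, fall back to
# Database B if A is unreachable. `primary` and `secondary` are
# hypothetical zero-argument connect functions that raise on failure.

def connect_with_failover(primary, secondary):
    """Return (connection, used_fallback)."""
    try:
        return primary(), False
    except Exception:
        # Database A is down: route everything to Database B. Anything
        # written to B now must be copied back to A before switching
        # back -- the timing/race problem described above.
        return secondary(), True
```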
Second idea: the CGI script updates both Database A and Database B. For retrieval,
programs can access either server for the information.
First off, this will slow down the CGI script and increase server load, as it requires
two connections and two sets of queries instead of one. Also, if one database goes
down, something needs to be done to bring the downed database back up to speed. Parsing
log files could work, but if the servers' clocks are not exactly in sync, data could be
duplicated or lost.
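One way to sidestep the clock-sync problem in the second scheme is to have the CGI script itself record any statement a server missed, so the downed server can be replayed from that queue instead of from timestamped logs. A minimal sketch, assuming connections are objects with an `execute(sql)` method (as a real driver's cursor would be):

```python
# Sketch of scheme 2: apply every write to both databases and record
# any statement a server missed, so it can be replayed when that
# server comes back, without comparing server clocks.

def dual_write(conns, failed_log, sql):
    """Run `sql` on every connection; log (name, sql) for any that fail.

    `conns` maps a server name to a connection object; `failed_log` is
    a list used as a replay queue for whichever server was down.
    """
    ok = []
    for name, conn in conns.items():
        try:
            conn.execute(sql)
            ok.append(name)
        except Exception:
            failed_log.append((name, sql))
    if not ok:
        raise RuntimeError("both databases rejected the write")
    return ok
```

This still costs two queries per update, but recovery becomes "replay the queue" rather than log parsing with manual timing.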
So right now neither solution is appealing to me :( If someone has a working solution,
or can provide feedback on how I could improve either of the above theories, please
tell me. I'm open to all ideas and suggestions. Also, rumour has it Linux was in the
process of developing some sort of MySQL redundancy software... does anyone know if
there's any truth to this?