List: Commits
From: jon  Date: December 19 2005 3:50pm
Subject: svn commit - mysqldoc@docsrva: r597 - in trunk: refman-5.1 refman-5.1/images refman-common
Author: jstephens
Date: 2005-12-19 16:50:01 +0100 (Mon, 19 Dec 2005)
New Revision: 597

Log:

RefMan-5.1: 
  New section mysql-cluster-replication (currently commented out)
  New image files for this section (2)
  New entity reference &ddash; = '--' in fixedchars.ent (for use with config 
    options, etc. to keep from breaking validation when these need to be used 
    inside XML comments).
  Fixed typo in titles.ent
  Updated Makefile



Added:
   trunk/refman-5.1/images/cluster-replication-binlog-injector.png
   trunk/refman-5.1/images/cluster-replication-overview.png
Modified:
   trunk/refman-5.1/Makefile
   trunk/refman-5.1/ndbcluster.xml
   trunk/refman-common/fixedchars.ent
   trunk/refman-common/titles.en.ent

Modified: trunk/refman-5.1/Makefile
===================================================================
--- trunk/refman-5.1/Makefile	2005-12-19 15:41:06 UTC (rev 596)
+++ trunk/refman-5.1/Makefile	2005-12-19 15:50:01 UTC (rev 597)
@@ -44,7 +44,7 @@
 
 MANUAL_SRCS_EXTRA = versions.ent ../refman-common/fixedchars.ent ../refman-common/titles.en.ent
 
-MANUAL_SRCS = $(MANUAL_SRCS_EXTRA) manual.xml preface.xml introduction.xml ../refman-common/manual-conventions.en.xml ../refman-common/what-is-mysql-ab.en.xml ../refman-common/what-is.en.xml ../refman-common/maxdb.en.xml ../refman-common/information-sources.xml installing.xml tutorial.xml using-mysql-programs.xml database-administration.xml replication.xml optimization.xml client-side-scripts.xml language-structure.xml reservedwords.xml ../refman-common/reserved-new-5.1.xml charset.xml data-types.xml functions.xml sql-syntax.xml pluggable-storage.xml images/pluggable-storage-overview.png images/pluggable-storage-choosing.png storage-engines.xml innodb.xml images/blackhole-1.png custom-engine.xml images/custom-engine-overview.png ndbcluster.xml images/cluster-components-1.png images/multi-comp-1.png partitioning.xml spatial-extensions.xml stored-procedures.xml triggers.xml views.xml information-schema.xml precision-math.xml apis.xml connectors.xml connector-odbc.xml images/myarchitecture.png images/mydsn-icon.png images/mydsn.png images/mydsn-setup.png images/mydsn-example.png images/mydsn-test-success.png images/mydsn-test-fail.png images/mydsn-options.png images/mydsn-icon.png images/mydsn.png images/mydsn-trace.png images/myaccess.png images/myaccess-odbc.png images/mydsn-trace.png images/mydll-properties.png images/mydsn-options.png images/myflowchart.png connector-net.xml ../refman-common/news-connector-net.xml connector-j.xml ../refman-common/news-connector-j.xml connector-mxj.xml extending-mysql.xml problems.xml error-handling.xml errmsgs-server.xml errmsgs-client.xml ../refman-common/credits.xml news.xml ../refman-common/news-5.1.xml ../refman-common/news-myodbc.xml porting.xml ../refman-common/environment-variables.xml regexp.xml limits.xml restrictions.xml ../refman-common/gpl-license.xml ../refman-common/mysql-floss-license-exception.xml 
+MANUAL_SRCS = $(MANUAL_SRCS_EXTRA) manual.xml preface.xml introduction.xml ../refman-common/manual-conventions.en.xml ../refman-common/what-is-mysql-ab.en.xml ../refman-common/what-is.en.xml ../refman-common/maxdb.en.xml ../refman-common/information-sources.xml installing.xml tutorial.xml using-mysql-programs.xml database-administration.xml replication.xml optimization.xml client-side-scripts.xml language-structure.xml reservedwords.xml ../refman-common/reserved-new-5.1.xml charset.xml data-types.xml functions.xml sql-syntax.xml pluggable-storage.xml images/pluggable-storage-overview.png images/pluggable-storage-choosing.png storage-engines.xml innodb.xml images/blackhole-1.png custom-engine.xml images/custom-engine-overview.png ndbcluster.xml images/cluster-components-1.png images/multi-comp-1.png images/cluster-replication-overview.png images/cluster-replication-binlog-injector.png partitioning.xml spatial-extensions.xml stored-procedures.xml triggers.xml views.xml information-schema.xml precision-math.xml apis.xml connectors.xml connector-odbc.xml images/myarchitecture.png images/mydsn-icon.png images/mydsn.png images/mydsn-setup.png images/mydsn-example.png images/mydsn-test-success.png images/mydsn-test-fail.png images/mydsn-options.png images/mydsn-icon.png images/mydsn.png images/mydsn-trace.png images/myaccess.png images/myaccess-odbc.png images/mydsn-trace.png images/mydll-properties.png images/mydsn-options.png images/myflowchart.png connector-net.xml ../refman-common/news-connector-net.xml connector-j.xml ../refman-common/news-connector-j.xml connector-mxj.xml extending-mysql.xml problems.xml error-handling.xml errmsgs-server.xml errmsgs-client.xml ../refman-common/credits.xml news.xml ../refman-common/news-5.1.xml ../refman-common/news-myodbc.xml porting.xml ../refman-common/environment-variables.xml regexp.xml limits.xml restrictions.xml ../refman-common/gpl-license.xml ../refman-common/mysql-floss-license-exception.xml 
 
 manual-prepped.xml: $(MANUAL_SRCS)
 manual-manprepped.xml: $(MANUAL_SRCS)

Added: trunk/refman-5.1/images/cluster-replication-binlog-injector.png
===================================================================
(Binary files differ)


Property changes on: trunk/refman-5.1/images/cluster-replication-binlog-injector.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Added: trunk/refman-5.1/images/cluster-replication-overview.png
===================================================================
(Binary files differ)


Property changes on: trunk/refman-5.1/images/cluster-replication-overview.png
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream

Modified: trunk/refman-5.1/ndbcluster.xml
===================================================================
--- trunk/refman-5.1/ndbcluster.xml	2005-12-19 15:41:06 UTC (rev 596)
+++ trunk/refman-5.1/ndbcluster.xml	2005-12-19 15:50:01 UTC (rev 597)
@@ -8030,6 +8030,1620 @@
     </section>
 
   </section>
+  
+  <!-- 
+  
+    CLUSTER REPLICATION SECTION COMMENTED OUT TFN  //JS
+  
+  -->
+  
+  <!--  
+  
+    <section id="mysql-cluster-replication">
+    <title>&title-mysql-cluster-replication;</title>
+    
+    <para>
+      Prior to MySQL 5.1, <firstterm>asynchronous
+        replication</firstterm>, usually referred to simply as
+      <quote>replication</quote>, was not available when using MySQL
+      Cluster. This section explains how to set up and manage a
+      configuration wherein one group of computers operating as a MySQL
+      Cluster replicates to a second computer or group of computers. We
+      assume some familiarity on the part of the reader with standard
+      MySQL replication as discussed elsewhere in this Manual. (See
+      <xref linkend="replication"/>).
+    </para>
+    
+    <para>
+      Normal (non-clustered) replication involves a <quote>master</quote>
+      server and a <quote>slave</quote> server, the master being the
+      source of the operations and data to be replicated and the slave
+      being the recipient of these. In MySQL Cluster, replication is
+      conceptually very similar but can be more complex in practice, as it
+      may be extended to cover a number of different configurations
+      including replicating between two complete clusters. While a MySQL
+      Cluster itself depends on the <literal>NDB Cluster</literal> storage
+      engine for clustering functionality, it is not necessary to use the
+      Cluster storage engine on the slave. However, for maximum
+      availability, it is possible to replicate from one MySQL Cluster to
+      another, and it is this type of configuration that we discuss, as
+      shown in the figure below:
+    </para>  
+  
+    <mediaobject>
+      <imageobject>
+        <imagedata fileref="images/cluster-replication-overview.png" format="PNG"/>
+      </imageobject>
+      <textobject>
+        <phrase lang="en">MySQL Cluster-to-Cluster Replication
+          Layout</phrase>
+      </textobject>
+    </mediaobject>
+    
+    <para>
+      In this scenario the replication process is one in which successive
+      states of a master cluster are logged and saved to a slave cluster.
+      This process is accomplished by a special thread known as the NDB
+      binlog injector thread, which runs on each MySQL server and produces
+      a binary log (<literal>binlog</literal>). This thread ensures that
+      all changes in the cluster producing the binary log &mdash; and not
+      just those changes that are effected via the MySQL Server &mdash;
+      are inserted into the binary log with the correct serialization
+      order. We refer to the MySQL replication master and replication
+      slave servers as replication servers or replication nodes, and the
+      data flow or line of communication between them as a
+      <firstterm>replication channel</firstterm>. 
+    </para>
+  
+    <section id="mysql-cluster-replication-abbreviations">
+      <title>&title-mysql-cluster-replication-abbreviations;</title>
+      
+      <para>
+        Throughout this section, we use the following abbreviations or
+        symbols for referring to the master and slave clusters, and to
+        processes and commands run on the clusters or cluster nodes:
+      </para>
+      
+      <informaltable>      
+        <tgroup cols="2">
+          <colspec colwidth="25*"/>
+          <colspec colwidth="75*"/>
+          <tbody>
+            <row>
+              <entry><emphasis role="bold">Symbol or
+                Abbreviation</emphasis></entry>
+              <entry><emphasis role="bold">Description (Refers
+                to...)</emphasis></entry>
+            </row>
+            <row>
+              <entry><replaceable>M</replaceable></entry>
+              <entry>The cluster serving as the (primary) replication
+                master</entry>
+            </row>
+            <row>
+              <entry><replaceable>S</replaceable></entry>
+              <entry>The cluster acting as the (primary) replication
+                slave</entry>
+            </row>
+            <row>
+              <entry><literal>shell<replaceable>M</replaceable>&gt;</literal></entry>
+              <entry>Shell command to be issued on the master cluster</entry>
+            </row>
+            <row>
+              <entry><literal>mysql<replaceable>M</replaceable>&gt;</literal></entry>
+              <entry>MySQL client command issued on a single MySQL server
+                running as an SQL node on the master cluster</entry>
+            </row>
+            <row>
+              <entry><literal>mysql<replaceable>M*</replaceable>&gt;</literal></entry>
+              <entry>MySQL client command to be issued on all SQL nodes
+                participating in the replication master cluster</entry>
+            </row>
+            <row>
+              <entry><literal>shell<replaceable>S</replaceable>&gt;</literal></entry>
+              <entry>Shell command to be issued on the slave cluster</entry>
+            </row>
+            <row>
+              <entry><literal>mysql<replaceable>S</replaceable>&gt;</literal></entry>
+              <entry>MySQL client command issued on a single MySQL server
+                running as an SQL node on the slave cluster</entry>
+            </row>
+            <row>
+              <entry><literal>mysql<replaceable>S*</replaceable>&gt;</literal></entry>
+              <entry>MySQL client command to be issued on all SQL nodes
+                participating in the replication slave cluster</entry>
+            </row>
+            <row>
+              <entry><replaceable>C</replaceable></entry>
+              <entry>Primary replication channel</entry>
+            </row>
+            <row>
+              <entry><replaceable>C'</replaceable></entry>
+              <entry>Secondary replication channel</entry>
+            </row>
+            <row>
+              <entry><replaceable>M'</replaceable></entry>
+              <entry>Secondary replication master</entry>
+            </row>
+            <row>
+              <entry><replaceable>S'</replaceable></entry>
+              <entry>Secondary replication slave</entry>
+            </row>
+          </tbody>
+        </tgroup>      
+      </informaltable>
+      
+    </section>
+  
+    <section id="mysql-cluster-replication-general">
+      <title>&title-mysql-cluster-replication-general;</title>
+      
+      <para>
+        A replication channel requires two MySQL servers acting as
+        replication servers (one each for the master and slave). For
+        example, this means that in the case of a replication setup with
+        two replication channels (in order to provide an extra channel for
+        redundancy), there will be a total of four replication nodes, two
+        per cluster.
+      </para>
+      
+      <para>
+        Each MySQL server used for replication in either cluster must be
+        uniquely identified among all the MySQL replication servers
+        participating in either cluster (you cannot have replication
+        servers on both the master and slave clusters sharing the same
+        ID). This can be done by starting each SQL node using the
+        <option>&ddash;server-id=<replaceable>id</replaceable></option> option,
+        where <replaceable>id</replaceable> is a unique integer. While it is 
+        not strictly necessary, we will assume for purposes of this
+        discussion that all MySQL installations are the same version.
+      </para>
+      
+      <para>
+        In any event, servers involved in replication must be compatible
+        with one another with respect to both the version of the
+        replication protocol used and the SQL feature sets which they
+        support; the simplest and easiest way to assure that this is the
+        case is to use the same MySQL version for all servers involved.
+        Note that in many cases it is not possible to replicate to a slave
+        running a version of MySQL with a lower version number than that
+        of the master &mdash; see 
+        <xref linkend="replication-compatibility"/> for details.
+      </para>
+    
+      <para>
+        We assume that the slave server or cluster is dedicated to
+        replication of the master, and that no other data is being stored
+        on it.
+      </para>
+    
+    </section>
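
  As a concrete sketch of the unique-ID requirement described above
  (the node values and file layout here are illustrative assumptions,
  not taken from the original):

```ini
# my.cnf on an SQL node of the master cluster (illustrative value)
[mysqld]
server-id=1

# my.cnf on an SQL node of the slave cluster; every MySQL replication
# server in either cluster must use a different ID
[mysqld]
server-id=2
```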
+  
+  
+    <section id="mysql-cluster-replication-issues">
+      <title>&title-mysql-cluster-replication-issues;</title>
+      
+      <para>
+        The following are known problems or issues when using replication
+        with MySQL Cluster in MySQL 5.1:
+      </para>
+      
+      <itemizedlist>
+        <listitem>
+          <para>
+            Data definition statements, such as <literal>CREATE
+              TABLE</literal>, <literal>DROP TABLE</literal>, and
+            <literal>ALTER TABLE</literal>, are recorded in the binary
+            log only for the MySQL server on which they are issued.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            A MySQL server involved in replication should be started or
+            restarted after using <command>ndb_restore</command> in order
+            to discover and set up replication of <literal>NDB
+              Cluster</literal> tables. Alternatively, you can issue a
+            <literal>SHOW TABLES</literal> command on all databases in the
+            cluster.
+          </para>
+          
+          <para>
+            Similarly, when using <literal>CREATE DATABASE</literal> or
+            <literal>CREATE SCHEMA</literal>, the new database is not
+            automatically discoverable by the MySQL server. Thus, this
+            statement must be issued on each MySQL server participating in
+            the cluster upon creating a new database.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Restarting the cluster with the <option>&ddash;initial</option>
+            option will cause the sequence of GCI and epoch numbers to
+            start over from <literal>0</literal>. (This is generally true
+            of MySQL Cluster and not limited to replication scenarios
+            involving Cluster.) The MySQL servers involved in replication
+            should in this case be restarted. After this, you should use
+            the <literal>RESET MASTER</literal> and <literal>RESET
+              SLAVE</literal> commands to clear the invalid
+            <literal>binlog_index</literal> and
+            <literal>apply_status</literal> tables, respectively.
+          </para>
+        </listitem>
+        
+      </itemizedlist>
+      
+      <para>
+        See <xref linkend="mysql-cluster-replication-schema-discovery"/>
+        for more information about the first two items listed above, as
+        well as some examples illustrating how to handle applicable
+        situations.
+      </para>
+    
+    </section>
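
  The cleanup described in the last item in the list above might look
  like the following (a sketch of a client session, using the prompt
  conventions defined earlier; not taken from the original):

```sql
# On the master SQL node, after the cluster has been restarted with
# the initial option and GCI/epoch numbering has started over:
mysqlM> RESET MASTER;   # clears the invalid binlog_index data

# On the slave SQL node:
mysqlS> RESET SLAVE;    # clears the invalid apply_status data
```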
+    
+    <section id="mysql-cluster-replication-schema">
+      <title>&title-mysql-cluster-replication-schema;</title>
+  
+      <para>
+        Replication in MySQL Cluster makes use of a number of dedicated
+        tables in a separate <literal>cluster_replication</literal>
+        database on each MySQL Server instance acting as an SQL node in
+        both the cluster being replicated and the replication slave
+        (whether the slave is a single server or a cluster). This
+        database, which is created during the MySQL installation process
+        by the <command>mysql_install_db</command> script, contains a
+        table for storing the binary log's indexing data. As the
+        <literal>binlog_index</literal> table is local to each MySQL
+        server and does not participate in clustering, it uses the
+        <literal>MyISAM</literal>
+        storage engine, and so must be created separately on each
+        <command>mysqld</command> participating in the master cluster.
+        This table is defined as follows:
+      </para>
+  
+  <programlisting>
+  CREATE TABLE `binlog_index` (
+      `Position`  BIGINT(20) UNSIGNED NOT NULL,
+      `File`      VARCHAR(255) NOT NULL,
+      `epoch`     BIGINT(20) UNSIGNED NOT NULL,
+      `inserts`   BIGINT(20) UNSIGNED NOT NULL,
+      `updates`   BIGINT(20) UNSIGNED NOT NULL,
+      `deletes`   BIGINT(20) UNSIGNED NOT NULL,
+      `schemaops` BIGINT(20) UNSIGNED NOT NULL,
+      PRIMARY KEY (`epoch`)
+  ) ENGINE=MYISAM DEFAULT CHARSET=latin1;
+  </programlisting>
+  
+      <para>
+        The figure below shows the relationship of the MySQL Cluster
+        replication master server, its binlog injector thread, and the
+        <literal>cluster_replication.binlog_index</literal> table.
+      </para>
+  
+      <mediaobject>
+        <imageobject>
+          <imagedata fileref="images/cluster-replication-binlog-injector.png" format="PNG"/>
+        </imageobject>
+        <textobject>
+          <phrase lang="en">The replication master cluster, the
+            binlog-injector thread, and the
+            <literal>binlog_index</literal> table</phrase>
+        </textobject>
+      </mediaobject>
+      
+      <para>
+        An additional table, named <literal>apply_status</literal>, is
+        used to keep a record of the operations which have been replicated
+        from the master to the slave. Unlike the case with
+        <literal>binlog_index</literal>, the data in this table is not
+        specific to any one SQL  node in the (slave) cluster, and so
+        <literal>apply_status</literal> can use the <literal>NDB
+          Cluster</literal> storage engine, as shown here:
+      </para>
+      
+  <programlisting>
+  CREATE TABLE `apply_status` (
+      `server_id` INT(10) UNSIGNED NOT NULL,
+      `epoch`     BIGINT(20) UNSIGNED NOT NULL,
+      PRIMARY KEY  USING HASH (`server_id`)
+  ) ENGINE=NDBCLUSTER  DEFAULT CHARSET=latin1;
+  </programlisting>
+      
+      <para>
+        These tables are created in a separate database because they
+        should not be replicated. No user intervention is normally
+        required to create or maintain either of them. Both the
+        <literal>binlog_index</literal> and the
+        <literal>apply_status</literal> tables are maintained by the NDB
+        injector thread. This keeps the master <literal>mysqld</literal>
+        process updated with changes performed by the NDB storage engine.
+        The NDB <firstterm>binlog injector thread</firstterm> receives
+        events directly from the NDB storage engine. The NDB injector is
+        responsible for capturing all the data events within the cluster,
+        and ensures that all events that change, insert, or delete data
+        are recorded in the <literal>binlog_index</literal> table. The
+        slave I/O thread transfers the events from the master's binary
+        log to the slave's relay log.
+      </para>
+      
+      <para>
+        However, it is advisable to check for the existence and integrity
+        of these tables as an initial step in preparing a MySQL Cluster
+        for replication. It is possible to view event data recorded in the
+        binary log by querying the
+        <literal>cluster_replication.binlog_index</literal> table directly
+        on the master. This can also be accomplished using the
+        <literal>SHOW BINLOG EVENTS</literal> command on either the
+        replication master or slave MySQL servers.
+      </para>
+  
+    </section>
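
  For example, the integrity check suggested above could be performed
  with queries such as the following (an illustrative session only; the
  column names follow the table definitions shown earlier):

```sql
# Confirm that the replication tables exist on each SQL node:
mysqlM> SHOW TABLES FROM cluster_replication;

# Inspect recent binary log index entries directly:
mysqlM> SELECT `File`, `Position`, `epoch`
     -> FROM cluster_replication.binlog_index
     -> ORDER BY `epoch` DESC LIMIT 5;

# Or view the logged events themselves:
mysqlM> SHOW BINLOG EVENTS;
```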
+    
+    <section id="mysql-cluster-replication-preparation">
+      <title>&title-mysql-cluster-replication-preparation;</title>
+      
+      <para>
+        Preparing the MySQL Cluster for replication consists of the
+        following steps:
+      </para>
+  
+      <orderedlist>
+        <listitem>
+          <para>
+            Check all MySQL servers for version compatibility (see
+            <xref linkend="mysql-cluster-replication-general"/>).
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Create a slave account on the master Cluster with the
+            appropriate privileges:
+          </para>
+        
+  <programlisting>
+  mysql<replaceable>M</replaceable>&gt; <userinput>GRANT REPLICATION SLAVE</userinput>
+          -&gt; <userinput>ON *.* TO '<replaceable>slave-user</replaceable>'@'<replaceable>slave-host</replaceable>'</userinput>
+          -&gt; <userinput>IDENTIFIED BY '<replaceable>slave-password</replaceable>';</userinput>
+  </programlisting>
+        
+          <para>
+            where <replaceable>slave-user</replaceable> is the slave
+            account username, <replaceable>slave-host</replaceable> is the
+            hostname or IP address of the replication slave, and
+            <replaceable>slave-password</replaceable> is the password that
+            you wish to assign to this account.
+          </para>
+          
+          <para>
+            For example, to create a slave user account with the name
+            <quote><literal>myslave</literal></quote>, logging in from the
+            host named <quote><literal>rep-slave</literal></quote>, and
+            using the password <quote><literal>53cr37</literal></quote>,
+            you would use the following <literal>GRANT</literal> command:
+          </para>
+  
+  <programlisting>
+  mysql<replaceable>M</replaceable>&gt; <userinput>GRANT REPLICATION SLAVE</userinput>
+       -&gt; <userinput>ON *.* TO 'myslave'@'rep-slave'</userinput>
+       -&gt; <userinput>IDENTIFIED BY '53cr37';</userinput>
+  </programlisting>
+        
+          <para>
+            For security reasons, it is preferable to use a unique user
+            account &mdash; not employed for any other purpose &mdash; for
+            the replication slave account.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Configure the slave to use the master. Using the MySQL
+            Monitor, this can be accomplished with the <literal>CHANGE
+              MASTER TO</literal> command: 
+          </para>
+        
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>CHANGE MASTER TO</userinput>
+       -&gt; <userinput>MASTER_HOST='<replaceable>master-host</replaceable>',</userinput>
+       -&gt; <userinput>MASTER_PORT=<replaceable>master-port</replaceable>,</userinput>
+       -&gt; <userinput>MASTER_USER='<replaceable>slave-user</replaceable>',</userinput>
+       -&gt; <userinput>MASTER_PASSWORD='<replaceable>slave-password</replaceable>';</userinput>
+  </programlisting>
+        
+          <para>
+            where <replaceable>master-host</replaceable> is the hostname
+            or IP address of the replication master,
+            <replaceable>master-port</replaceable> is the port for the
+            slave to use for connecting to the master,
+            <replaceable>slave-user</replaceable> is the username set up
+            for the slave on the master, and
+            <replaceable>slave-password</replaceable> is the password set
+            for that user account in the previous step.
+          </para>
+          
+          <para>
+            For example, to tell the slave to replicate from the MySQL
+            server whose hostname is
+            <quote><literal>rep-master</literal></quote>, using the
+            replication slave account created in the previous step, you
+            would use the following command:
+          </para>
+        
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>CHANGE MASTER TO</userinput>
+       -&gt; <userinput>MASTER_HOST='rep-master',</userinput>
+       -&gt; <userinput>MASTER_PORT=3306,</userinput>
+       -&gt; <userinput>MASTER_USER='myslave',</userinput>
+       -&gt; <userinput>MASTER_PASSWORD='53cr37';</userinput>
+  </programlisting>
+        
+          <para>
+            (For a complete list of clauses that can be used with this
+            command, see <xref linkend="change-master-to"/>.)
+          </para>
+          
+          <para>
+            You can also configure the slave to use the master by setting
+            the corresponding startup options in the slave server's
+            <filename>my.cnf</filename> file. To configure the slave in
+            the same way as the example <literal>CHANGE MASTER
+              TO</literal> command above, the following information would
+            need to be included in the slave's <filename>my.cnf</filename>
+            file: 
+          </para>
+        
+  <programlisting>
+  [mysqld]
+  master-host=rep-master
+  master-port=3306
+  master-user=myslave
+  master-password=53cr37
+  </programlisting>
+        
+          <para>
+            See <xref linkend="replication-options"/> for additional
+            options that can be set in <filename>my.cnf</filename> for
+            replication slaves.
+          </para>
+          
+          <para>
+            <emphasis role="bold">Note</emphasis>: To provide replication
+            backup capability, you will also need to add an
+            <option>ndb-connectstring</option> option to the slave's
+            <filename>my.cnf</filename> file prior to starting the
+            replication process. See 
+            <xref linkend="mysql-cluster-replication-backups"/> for
+            details.
+          </para>
+        </listitem>
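
  Putting this step together, a slave's my.cnf might look like the
  following sketch (the management host name is an illustrative
  assumption; an optional :port may be appended as described for the
  connectstring format):

```ini
[mysqld]
# Replication settings matching the CHANGE MASTER TO example:
master-host=rep-master
master-port=3306
master-user=myslave
master-password=53cr37
# Needed for replication backup capability (value is illustrative):
ndb-connectstring=mgm-host
# Delay replication until an explicit START SLAVE is issued:
skip-slave-start
```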
+        
+        <listitem>
+          <para>
+            If the master cluster is already in use, you can create a
+            backup of the master and load this onto the slave in order to
+            cut down on the amount of time required for the slave to
+            synchronise itself with the master. If the slave is also
+            running MySQL Cluster, this can be accomplished using the
+            backup and restore procedure described in 
+            <xref linkend="mysql-cluster-replication-backups"/>.
+          </para>
+        
+  <programlisting>
+  ndb-connectstring=<replaceable>management-host</replaceable>[:<replaceable>port</replaceable>]
+  </programlisting>
+          
+          <para>
+            In the event that you are <emphasis>not</emphasis> using MySQL
+            Cluster on the replication slave, you can create a backup with
+            this command on the replication master: 
+          </para>
+        
+  <programlisting>
+  shell<replaceable>M</replaceable>&gt; <userinput>mysqldump &ddash;master-data=1</userinput>
+  </programlisting>
+        
+          <para>
+            Then copy the resulting dump file over to the slave. After
+            this, you can use the <command>mysql</command> client to
+            import the data from the dump file into the slave database
+            as shown here, where
+            <filename>dump_file</filename> is the name of the file that
+            was generated using <command>mysqldump</command> on the
+            master, and <literal>database-name</literal> is the name of
+            the database to be replicated:
+          </para>
+        
+  <programlisting>
+  shell<replaceable>S</replaceable>&gt; <userinput>mysql -u root -p database-name &lt; dump_file</userinput>
+  </programlisting>
+        
+          <para>
+            For a complete list of options to use with
+            <command>mysqldump</command>, see <xref linkend="mysqldump"/>.
+          </para>
+        
+          <para>
+            Note that if you copy the data to the slave in this fashion,
+            you should make sure that the slave is started with the
+            <option>&ddash;skip-slave-start</option> option on the command
+            line, or else include <literal>skip-slave-start</literal> in
+            the slave's <filename>my.cnf</filename> file in order to keep
+            it from trying to connect to the master to begin replicating
+            before all the data has been loaded. Once the loading of data
+            has completed, follow the additional steps outlined in the
+            next two sections.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <remark role="todo">
+            [js] Note that the xref in the second sentence of the next
+            para points to the Row-Based Replication section, and must
+            be commented out until that section is uncommented.
+          </remark>
+          
+          <para>
+            Ensure that each MySQL server acting as a replication master
+            is configured with a unique server ID, and with binary logging
+            enabled, using the row format. (See 
+            <xref linkend="replication-row-based"/>.) These options can be
+            set either in the master server's <filename>my.cnf</filename>
+            file, or on the command line when starting the master
+            <command>mysqld</command> process. See 
+            <xref linkend="mysql-cluster-replication-starting"/> for
+            information regarding the latter option.
+          </para>
+        </listitem>
+      </orderedlist>
+      
+    </section>
+    
+    <section id="mysql-cluster-replication-starting">
+      <title>&title-mysql-cluster-replication-starting;</title>
+      
+      <para>
+        This section outlines the procedure for starting MySQL Cluster
+        replication using a single replication channel.
+      </para>
+      
+      <orderedlist>
+        <listitem>
+          <para>
+            Start the MySQL replication master server by issuing this
+            command:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>M</replaceable>&gt; <userinput>mysqld &ddash;ndbcluster &ddash;server-id=<replaceable>id</replaceable> \</userinput>
+          <userinput>&ddash;log-bin &ddash;binlog-format=row &amp;</userinput>
+  </programlisting>
+          
+          <para>
+            where <replaceable>id</replaceable> is this server's unique ID
+            (see <xref linkend="mysql-cluster-replication-general"/>).
+            This starts the server's <command>mysqld</command> process
+            with binary logging enabled using the proper logging format.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Start the MySQL replication slave server as shown here:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>S</replaceable>&gt; <userinput>mysqld &ddash;ndbcluster &ddash;server-id=<replaceable>id</replaceable> &amp;</userinput>
+  </programlisting>
+          
+          <para>
+            where <replaceable>id</replaceable> is the slave server's
+            unique ID. It is not necessary to enable logging on the
+            replication slave.
+          </para>
+          
+          <para>Note that, unless you want replication to begin
+            immediately, you should use the
+            <option>&ddash;skip-slave-start</option> option with this
+            command, or else include <literal>skip-slave-start</literal>
+            in the slave server's <filename>my.cnf</filename> file. With
+            this option in effect, the start of replication is delayed
+            until the appropriate <literal>START SLAVE</literal> command
+            has been issued, as explained in Step 4 below.
+          </para>
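+          <para>
+            For example, a minimal <filename>my.cnf</filename> fragment
+            that keeps the slave from starting replication automatically
+            might look like this (the server ID shown is purely
+            illustrative):
+          </para>
+          
+  <programlisting>
+  [mysqld]
+  ndbcluster
+  server-id=2
+  skip-slave-start
+  </programlisting>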
+        </listitem>
+        
+        <listitem>
+          <para>
+            It is necessary to synchronise the slave server with the
+            master server's replication binlog. If binary logging has not
+            previously been running on the master, then run the following
+            query on the slave:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>CHANGE MASTER TO</userinput>
+       -&gt; <userinput>MASTER_LOG_FILE='',</userinput>
+       -&gt; <userinput>MASTER_LOG_POS=4;</userinput>
+  </programlisting>
+          
+          <para>
+            This will instruct the slave to begin reading the master's
+            binary log from the log's starting point. Otherwise &mdash;
+            that is, if you are loading data from the master using a
+            backup &mdash; see 
+            <xref linkend="mysql-cluster-replication-failover"/> for
+            information on how to obtain the correct values to use for
+            <literal>MASTER_LOG_FILE</literal> and
+            <literal>MASTER_LOG_POS</literal> in such cases.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Finally, you must instruct the slave to begin applying
+            replication by issuing this command from the
+            <command>mysql</command> client on the replication slave:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>START SLAVE;</userinput>
+  </programlisting>
+          
+          <para>
+            This also initiates the transmission of replication data from
+            the master to the slave.
+          </para>
+        </listitem>
+      </orderedlist>    
+      
+      <para>It is also possible to use two replication channels, in a
+        manner similar to the procedure described above; the differences
+        between this and using a single replication channel are covered in
+        <xref linkend="mysql-cluster-replication-two-channels"/>.
+      </para>    
+      
+    </section>
+  
+    <section id="mysql-cluster-replication-two-channels">
+      <title>&title-mysql-cluster-replication-two-channels;</title>
+      
+      <para>
+        In a more complete example scenario, we envision two replication
+        channels in order to provide redundancy and thereby guard against
+        possible failure of a single replication channel. This requires a
+        total of four replication servers: two masters for the master
+        cluster and two slaves for the slave cluster. For purposes of the
+        discussion that follows, we assume that unique identifiers are
+        assigned as shown here:
+      </para>
+      
+      <informaltable>
+        <tgroup cols="2">
+          <colspec colwidth="20*"/>
+          <colspec colwidth="70*"/>
+          <tbody>
+            <row>
+              <entry><emphasis role="bold">Server ID</emphasis></entry>
+              <entry><emphasis role="bold">Description</emphasis></entry>
+            </row>
+            <row>
+              <entry>1</entry>
+              <entry>Master - primary replication channel (<emphasis>M</emphasis>)</entry>
+            </row>
+            <row>
+              <entry>2</entry>
+              <entry>Master - secondary replication channel (<emphasis>M'</emphasis>)</entry>
+            </row>
+            <row>
+              <entry>3</entry>
+              <entry>Slave - primary replication channel (<emphasis>S</emphasis>)</entry>
+            </row>
+            <row>
+              <entry>4</entry>
+              <entry>Slave - secondary replication channel (<emphasis>S'</emphasis>)</entry>
+            </row>
+          </tbody>
+        </tgroup>
+      </informaltable>
+      
+      <para>
+        Setting up replication with two channels is not radically
+        different from setting up a single replication channel. First, the
+        <command>mysqld</command> processes for the primary and secondary
+        replication masters must be started, followed by those for the
+        primary and secondary slaves. Then the replication processes may
+        be initiated by issuing the <literal>START SLAVE</literal> command
+        on each of the slaves. The commands and the order in which they
+        need to be issued are shown here:
+      </para>
+      
+      <orderedlist>
+        <listitem>
+          <para>
+            Start the primary replication master:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>M</replaceable>&gt; <userinput>mysqld &ddash;ndbcluster &ddash;server-id=1 \</userinput> 
+                 <userinput>&ddash;log-bin &ddash;binlog-format=row &amp;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Start the secondary replication master:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>M'</replaceable>&gt; <userinput>mysqld &ddash;ndbcluster &ddash;server-id=2 \</userinput>
+                  <userinput>&ddash;log-bin &ddash;binlog-format=row &amp;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Start the primary replication slave server:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>S</replaceable>&gt; <userinput>mysqld &ddash;ndbcluster &ddash;server-id=3 \</userinput>
+                 <userinput>&ddash;skip-slave-start &amp;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Start the secondary replication slave:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>S'</replaceable>&gt; <userinput>mysqld &ddash;ndbcluster &ddash;server-id=4 \</userinput>
+                  <userinput>&ddash;skip-slave-start &amp;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Finally, commence replication on the primary channel by
+            executing the <literal>START SLAVE</literal> command on the
+            primary slave as shown here:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>START SLAVE;</userinput>
+  </programlisting>
+        </listitem>
+      </orderedlist>
+          
+      <para>
+        Note that &mdash; as mentioned previously &mdash; it is not
+        necessary to enable binary logging on replication slaves.
+      </para>
+      
+    </section>
+  
+    <section id="mysql-cluster-replication-failover">
+      <title>&title-mysql-cluster-replication-failover;</title>
+      
+      <para>
+        In the event that the primary Cluster replication process fails,
+        it is possible to switch over to the secondary replication
+        channel. We describe in this section the steps required to
+        accomplish this.
+      </para>
+      
+      <orderedlist>
+        <listitem>
+          <para>
+            First, you must obtain the time of the most recent global
+            checkpoint (GCP). That is, you need to determine the most
+            recent epoch from the <literal>apply_status</literal> table on
+            the slave cluster, which can be found using the following
+            query:
+          </para>
+      
+  <programlisting>
+  mysql<replaceable>S'</replaceable>&gt; <userinput>SELECT @latest:=MAX(epoch)</userinput>
+        -&gt; <userinput>FROM cluster_replication.apply_status;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Using the information obtained from the query shown in Step 1,
+            you can obtain the corresponding records from the
+            <literal>binlog_index</literal> table on the master cluster as
+            shown here:
+          </para>
+      
+  <programlisting>
+  mysql<replaceable>M'</replaceable>&gt; <userinput>SELECT</userinput>
+        -&gt;     <userinput>@file:=SUBSTRING_INDEX(File, '/', -1),</userinput>
+        -&gt;     <userinput>@pos:=Position</userinput>
+        -&gt; <userinput>FROM cluster_replication.binlog_index</userinput>
+        -&gt; <userinput>WHERE epoch &gt; @latest</userinput>
+        -&gt; <userinput>ORDER BY epoch ASC LIMIT 1;</userinput>
+  </programlisting>
+      
+          <para>
+            These are the records saved on the master since the failure of
+            the primary replication channel. We have employed a user
+            variable <literal>@latest</literal> here to represent the
+            value obtained in Step 1. Of course, it is not possible for
+            one <command>mysqld</command> instance to access user
+            variables set on another server instance directly; this value
+            will need to be <quote>plugged in</quote> to the second query
+            manually or in application code.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Now it is possible to synchronise the secondary channel by
+            running the following query on the secondary slave server:
+      </para>
+      
+  <programlisting>
+  mysql<replaceable>S'</replaceable>&gt; <userinput>CHANGE MASTER TO</userinput>
+        -&gt;     <userinput>MASTER_LOG_FILE='@file',</userinput>
+        -&gt;     <userinput>MASTER_LOG_POS=@pos;</userinput>
+  </programlisting>
+      
+          <para>
+            Again we have employed user variables (in this case
+            <literal>@file</literal> and <literal>@pos</literal>) in order
+            to represent the values obtained in Step 2 and applied in Step
+            3; in practice these values must be inserted manually or using
+            application code that can access both of the servers involved.
+          </para>
+          
+          <para>
+            Note that <literal>@file</literal> is a string value such as
+            <literal>'/var/log/mysql/replication-master-bin.00001'</literal>
+            and so must be quoted when used in SQL or application code.
+            However, the value represented by <literal>@pos</literal> must
+            <emphasis>not</emphasis> be quoted; while MySQL normally
+            attempts to convert strings to numbers, this case is an
+            exception.
+          </para>
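+          <para>
+            For example, with the purely illustrative values
+            <literal>'replication-master-bin.00001'</literal> and
+            <literal>2447</literal>, the statement would be issued as
+            shown here:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S'</replaceable>&gt; <userinput>CHANGE MASTER TO</userinput>
+        -&gt;     <userinput>MASTER_LOG_FILE='replication-master-bin.00001',</userinput>
+        -&gt;     <userinput>MASTER_LOG_POS=2447;</userinput>
+  </programlisting>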
+        </listitem>
+        
+        <listitem>
+          <para>
+            You can now initiate replication on the secondary channel by
+            issuing the appropriate command on the secondary slave
+            <command>mysqld</command>:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S'</replaceable>&gt; <userinput>START SLAVE;</userinput>
+  </programlisting>
+        </listitem>
+      </orderedlist>
+      
+      <para>
+        Once the secondary replication channel is active, you can
+        investigate the failure of the primary and effect repairs. The
+        precise actions required to do this will depend upon the reasons
+        for which the primary channel failed.
+      </para>
+      
+      <para>
+        If the failure is limited to a single server, it should (in
+        theory) be possible to replicate from <replaceable>M</replaceable>
+        to <replaceable>S'</replaceable>, or from
+        <replaceable>M'</replaceable> to <replaceable>S</replaceable>;
+        however, this has not yet been tested.
+      </para>
+  
+    </section>
+    
+    <section id="mysql-cluster-replication-backups">
+      <title>&title-mysql-cluster-replication-backups;</title>
+      
+      <para>
+        This section discusses making backups and restoring from them
+        using MySQL Cluster replication. We assume that the replication
+        servers have already been configured as covered previously (see 
+        <xref linkend="mysql-cluster-replication-preparation"/> and the
+        sections immediately following). This having been done, the
+        procedure for making a backup and then restoring from it is as
+        follows: 
+      </para>
+      
+      <orderedlist>
+        <listitem>
+          <para>
+            There are two different methods by which the backup may be
+            started.
+          </para>
+          
+          <itemizedlist>
+            <listitem>
+              <para>
+                <emphasis role="bold">Method A</emphasis>:
+              </para>
+              
+              <para>
+                This method requires that the cluster backup process was
+                previously enabled on the master server, prior to starting
+                the replication process. This can be done by including the
+                line
+              </para>
+        
+  <programlisting>
+  ndb-connectstring=<replaceable>management-host</replaceable>[:<replaceable>port</replaceable>]
+  </programlisting>
+          
+              <para>
+                in a <literal>[MYSQL_CLUSTER]</literal> section in the
+                <filename>my.cnf</filename> file, where
+                <replaceable>management-host</replaceable> is the IP
+                address or hostname of the NDB management server for the
+                master cluster, and <replaceable>port</replaceable> is the
+                management server's port number. Note that the port number
+                needs to be specified only if the default port (1186) is
+                not being used. (See <xref linkend="multi-config"/> for
+                more information about ports and port allocation in MySQL
+                Cluster.)
+              </para>
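+              <para>
+                For example, assuming a management server running on the
+                host <literal>rep-master-mgm</literal> (a hypothetical
+                hostname) and listening on the default port, the entry
+                would look like this:
+              </para>
+        
+  <programlisting>
+  [MYSQL_CLUSTER]
+  ndb-connectstring=rep-master-mgm
+  </programlisting>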
+              
+              <para>
+                In this case, the backup can be started by executing this
+                statement on the replication master:
+              </para>
+  
+  <programlisting>
+  shell<replaceable>M</replaceable>&gt; <userinput>ndb_mgm -e "START BACKUP"</userinput>
+  </programlisting>
+            </listitem>
+            
+            <listitem>
+              <para>
+                <emphasis role="bold">Method B</emphasis>:
+              </para>
+              
+              <para>
+                If the <filename>my.cnf</filename> file does not specify
+                where to find the management host, you can start the
+                backup process by passing this information to the
+                <literal>NDB</literal> management client as part of the
+                <literal>START BACKUP</literal> command, like this:
+              </para>
+              
+  <programlisting>
+  shell<replaceable>M</replaceable>&gt; <userinput>ndb_mgm <replaceable>management-host</replaceable>:<replaceable>port</replaceable> -e "START BACKUP"</userinput>
+  </programlisting>
+              
+              <para>
+                where <replaceable>management-host</replaceable> and
+                <replaceable>port</replaceable> are the hostname and port
+                number of the management server. In our scenario as
+                outlined earlier in this section (see 
+                <xref linkend="mysql-cluster-replication-preparation"/>),
+                this would be executed as follows:
+              </para>
+  
+  <programlisting>
+  shell<replaceable>M</replaceable>&gt; <userinput>ndb_mgm rep-master:1186 -e "START BACKUP"</userinput>
+  </programlisting>
+              
+            </listitem>
+          </itemizedlist>
+          
+          <para>
+            In either case, it is highly advisable to allow any pending
+            transactions to be completed before beginning the backup, and
+            then not to permit any new transactions to begin during the
+            backup process.
+          </para>        
+        </listitem>
+        
+        <listitem>
+          <para>
+            Copy the cluster backup files to the slave that is being
+            brought on line. Each system running an
+            <command>ndbd</command> process for the master cluster will
+            have cluster backup files located on it, and
+            <emphasis>all</emphasis> of these files must be copied to the
+            slave in order to assure a successful restore. The backup
+            files can be copied into any directory on the computer where
+            the slave management host resides, so long as the MySQL and
+            NDB binaries have read permissions in that directory. In this
+            case, we will assume that these files have been copied into
+            the directory <filename>/var/BACKUPS/BACKUP-1</filename>.
+          </para>
+          
+          <para>
+            It is not necessary that the slave cluster have the same
+            number of <command>ndbd</command> processes (data nodes) as
+            the master; however, it is highly recommended this number be
+            the same. It <emphasis>is</emphasis> necessary that the slave
+            be started with the
+            <option>&ddash;skip-slave-start</option> option, in order
+            to prevent premature startup of the replication process.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            On the slave cluster, create each database from the master
+            cluster that is to be replicated.
+            <emphasis role="bold">Important</emphasis>: A <literal>CREATE
+              SCHEMA</literal> command corresponding to each database to
+            be replicated must be executed on each data node in the slave 
+            cluster.
+          </para>
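+          <para>
+            For example, to make a database named
+            <literal>mydb</literal> (a hypothetical name) known on the
+            slave cluster, issue this statement on each data node:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>CREATE SCHEMA mydb;</userinput>
+  </programlisting>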
+        </listitem>
+        
+        <listitem>
+          <para>
+            Reset the slave cluster using this command in the MySQL
+            Monitor:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>RESET SLAVE;</userinput>
+  </programlisting>
+          
+          <para>
+            It is important to make sure that the slave's
+            <literal>apply_status</literal> table does not contain any
+            records prior to running the restore process. You can
+            accomplish this by
+            running this SQL command on the slave: 
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>DELETE FROM cluster_replication.apply_status;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            You can now start the cluster restoration process on the
+            replication slave using the <literal>ndb_restore</literal>
+            command for each backup file in turn. For the first of these
+            it is necessary to include the <option>-m</option> option in
+            order to restore the cluster metadata:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>S</replaceable>&gt; <userinput>ndb_restore -c <replaceable>slave-host</replaceable>:<replaceable>port</replaceable> -n <replaceable>node-id</replaceable> \</userinput>
+          <userinput>-b <replaceable>backup-id</replaceable> -m -r <replaceable>dir</replaceable></userinput>
+  </programlisting>
+          
+          <para>
+            where <replaceable>dir</replaceable> is the path to the
+            directory where the backup files have been placed on the
+            replication slave. For the <literal>ndb_restore</literal>
+            commands corresponding to the remaining backup files, the
+            <option>-m</option> option should <emphasis>not</emphasis> be
+            used.
+          </para>
+          
+          <para>
+            For restoring from a master cluster with four data nodes (as
+            shown in the figure in 
+            <xref linkend="mysql-cluster-replication"/>) where the backup
+            files have been copied to the directory
+            <filename>/var/BACKUPS/BACKUP-1</filename>, the proper
+            sequence of commands to be executed on the slave might look
+            like this:
+          </para>
+          
+  <programlisting>
+  shell<replaceable>S</replaceable>&gt; <userinput>ndb_restore -c rep-slave:1186 -n 2 -b 1 -m \</userinput>
+          <userinput>-r /var/BACKUPS/BACKUP-1</userinput>
+  shell<replaceable>S</replaceable>&gt; <userinput>ndb_restore -c rep-slave:1186 -n 3 -b 1 \</userinput>
+          <userinput>-r /var/BACKUPS/BACKUP-1</userinput>
+  shell<replaceable>S</replaceable>&gt; <userinput>ndb_restore -c rep-slave:1186 -n 4 -b 1 \</userinput>
+          <userinput>-r /var/BACKUPS/BACKUP-1</userinput>
+  shell<replaceable>S</replaceable>&gt; <userinput>ndb_restore -c rep-slave:1186 -n 5 -b 1 \</userinput>
+          <userinput>-r /var/BACKUPS/BACKUP-1</userinput>
+  </programlisting>
+          
+          <para>
+            This sequence of commands causes the most recent epoch records
+            to be written to the slave's <literal>apply_status</literal>
+            table.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Next, it is necessary to make all nodes in the slave cluster
+            aware of the new tables. (This is because the
+            <literal>NDB Cluster</literal> storage engine does not
+            currently support autodiscovery of schema changes. See
+            <xref linkend="mysql-cluster-replication-schema-discovery"/>).
+            You can accomplish this using the commands
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>USE <replaceable>database</replaceable>;</userinput>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>SHOW TABLES;</userinput>
+  </programlisting>
+          
+          <para>
+            where <replaceable>database</replaceable> is the name of the
+            database which was backed up and restored. Where multiple
+            databases have been backed up and then restored, it is
+            necessary  to issue the <literal>USE</literal> and
+            <literal>SHOW</literal> commands for each database in turn.
+            Note also that these commands must be issued on each host
+            acting as a data node in the slave cluster. 
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Now you need to obtain the most recent epoch from the
+            <literal>binlog_index</literal> table on the slave, as
+            discussed elsewhere (see 
+            <xref linkend="mysql-cluster-replication-failover"/>):
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>SELECT @latest:=MAX(epoch)</userinput>
+       -&gt; <userinput>FROM cluster_replication.apply_status;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Using <literal>@latest</literal> as the epoch value obtained
+            in the previous step, you can obtain the correct starting
+            position <literal>@pos</literal> in the correct binary logfile
+            <literal>@file</literal> from the master's
+            <literal>cluster_replication.binlog_index</literal> table
+            using the query shown here:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>M</replaceable>&gt; <userinput>SELECT</userinput> 
+       -&gt;     <userinput>@file:=SUBSTRING_INDEX(File, '/', -1),</userinput>
+       -&gt;     <userinput>@pos:=Position</userinput>
+       -&gt; <userinput>FROM cluster_replication.binlog_index</userinput>
+       -&gt; <userinput>WHERE epoch &gt; @latest</userinput>
+       -&gt; <userinput>ORDER BY epoch ASC LIMIT 1;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Using the values obtained in the previous step, you can now
+            issue the appropriate <literal>CHANGE MASTER TO</literal>
+            command in the slave's <command>mysql</command> client:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>CHANGE MASTER TO</userinput>
+       -&gt;     <userinput>MASTER_LOG_FILE='@file',</userinput>
+       -&gt;     <userinput>MASTER_LOG_POS=@pos;</userinput>
+  </programlisting>
+        </listitem>
+        
+        <listitem>
+          <para>
+            Now that the slave <quote>knows</quote> from what point in
+            which <literal>binlog</literal> file to start reading data
+            from the master, you can cause the slave to begin replicating
+            with this standard MySQL command:
+          </para>
+          
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>START SLAVE;</userinput>
+  </programlisting>
+        </listitem>
+      </orderedlist>
+      
+      <para>
+        In order to perform a backup and restore on a second replication
+        channel, it is necessary only to repeat these steps, substituting
+        the hostnames and IDs of the secondary master and slave for those
+        of the primary master and slave replication servers where
+        appropriate, and running the above commands on them. 
+      </para>
+      
+      <para>
+        For additional information on performing Cluster backups and
+        restoring Cluster from backups, see 
+        <xref linkend="mysql-cluster-backup"/>.
+      </para>
+      
+      <section id="mysql-cluster-replication-auto-sync">
+        <title>&title-mysql-cluster-replication-auto-sync;</title>
+        
+        <para>
+          It is possible to automate much of the process described in the
+          previous section (see 
+          <xref linkend="mysql-cluster-replication-backups"/>). The
+          following Perl script <filename>reset-slave.pl</filename> serves
+          as an example of how you can do this.
+        </para>
+        
+  <programlisting>
+  #!/usr/bin/perl -w
+  
+  #  file: reset-slave.pl
+  
+  #  Copyright &copy;2005 MySQL AB
+  
+  #  This program is free software; you can redistribute it and/or modify
+  #  it under the terms of the GNU General Public License as published by
+  #  the Free Software Foundation; either version 2 of the License, or
+  #  (at your option) any later version.
+  
+  #  This program is distributed in the hope that it will be useful,
+  #  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  #  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  #  GNU General Public License for more details.
+  
+  #  You should have received a copy of the GNU General Public License
+  #  along with this program; if not, write to: 
+  #  Free Software Foundation, Inc. 
+  #  59 Temple Place, Suite 330 
+  #  Boston, MA 02111-1307 USA
+  #
+  #  Version 1.1
+  
+  
+  ######################## Includes ###############################
+  
+  use DBI;
+  
+  ######################## Globals ################################
+  
+  my  $m_host='';
+  my  $m_port='';
+  my  $m_user='';
+  my  $m_pass='';
+  my  $s_host='';
+  my  $s_port='';
+  my  $s_user='';
+  my  $s_pass='';
+  my  $dbhM='';
+  my  $dbhS='';
+  
+  ####################### Sub Prototypes ##########################
+  
+  sub CollectCommandPromptInfo;
+  sub ConnectToDatabases;
+  sub DisconnectFromDatabases;
+  sub GetSlaveEpoch;
+  sub GetMasterInfo;
+  sub UpdateSlave;
+  
+  ######################## Program Main ###########################
+  
+  CollectCommandPromptInfo;
+  ConnectToDatabases;
+  GetSlaveEpoch;
+  GetMasterInfo;
+  UpdateSlave;
+  DisconnectFromDatabases;
+  
+  ################## Collect Command Prompt Info ##################
+  
+  sub CollectCommandPromptInfo
+  {
+    ### Check that user has supplied correct number of command line args
+    die "Usage:\n
+         reset-slave &lt;master MySQL host&gt; &lt;master MySQL port&gt; \n
+                     &lt;master user&gt; &lt;master pass&gt; &lt;slave MySQL host&gt; \n
+                     &lt;slave MySQL port&gt; &lt;slave user&gt; &lt;slave pass&gt; \n
+         All 8 arguments must be passed. Use BLANK for NULL passwords\n"
+         unless @ARGV == 8;
+  
+    $m_host  =  $ARGV[0];
+    $m_port  =  $ARGV[1];
+    $m_user  =  $ARGV[2];
+    $m_pass  =  $ARGV[3];
+    $s_host  =  $ARGV[4];
+    $s_port  =  $ARGV[5];
+    $s_user  =  $ARGV[6];
+    $s_pass  =  $ARGV[7];
+  
+    if ($m_pass eq "BLANK") { $m_pass = '';}
+    if ($s_pass eq "BLANK") { $s_pass = '';}
+  }
+  
+  ###############  Make connections to both databases #############
+  
+  sub ConnectToDatabases
+  {
+    ### Connect to both master and slave cluster databases
+  
+    ### Connect to master
+    $dbhM
+      = DBI->connect(
+      "dbi:mysql:database=cluster_replication;host=$m_host;port=$m_port",
+      "$m_user", "$m_pass")
+        or die "Can't connect to Master Cluster MySQL process!
+                Error: $DBI::errstr\n";
+  
+    ### Connect to slave
+    $dbhS 
+      = DBI->connect(
+            "dbi:mysql:database=cluster_replication;host=$s_host;port=$s_port",
+            "$s_user", "$s_pass")
+      or die "Can't connect to Slave Cluster MySQL process!
+              Error: $DBI::errstr\n";
+  }
+  
+  ################  Disconnect from both databases ################
+  
+  sub DisconnectFromDatabases
+  {
+    ### Disconnect from master
+  
+    $dbhM->disconnect
+    or warn " Disconnection failed: $DBI::errstr\n";
+  
+    ### Disconnect from slave
+  
+    $dbhS->disconnect
+    or warn " Disconnection failed: $DBI::errstr\n";
+  }
+  
+  ######################  Find the last good GCI ##################
+  
+  sub GetSlaveEpoch
+  {
+    $sth = $dbhS->prepare("SELECT MAX(epoch)
+                           FROM cluster_replication.apply_status;")
+        or die "Error while preparing to select epoch from slave: ", 
+               $dbhS->errstr;
+  
+    $sth->execute
+        or die "Selecting epoch from slave error: ", $sth->errstr;
+  
+    $sth->bind_col (1, \$epoch);
+    $sth->fetch;
+    print "\tSlave Epoch =  $epoch\n";
+    $sth->finish;
+  }
+  
+  #######  Find the position of the last GCI in the binlog ########
+  
+  sub GetMasterInfo
+  {
+    $sth = $dbhM->prepare("SELECT 
+                             SUBSTRING_INDEX(File, '/', -1), Position
+                           FROM cluster_replication.binlog_index
+                           WHERE epoch > $epoch 
+                           ORDER BY epoch ASC LIMIT 1;")
+        or die "Prepare to select from master error: ", $dbhM->errstr;
+  
+    $sth->execute
+        or die "Selecting from master error: ", $sth->errstr;
+  
+    $sth->bind_col (1, \$binlog);
+    $sth->bind_col (2, \$binpos);
+    $sth->fetch;
+    print "\tMaster bin log =  $binlog\n";
+    print "\tMaster Bin Log position =  $binpos\n";
+    $sth->finish;
+  }
+  
+  ##########  Set the slave to process from that location #########
+  
+  sub UpdateSlave
+  {
+    $sth = $dbhS->prepare("CHANGE MASTER TO
+                           MASTER_LOG_FILE='$binlog',
+                           MASTER_LOG_POS=$binpos;")
+        or die "Prepare to CHANGE MASTER error: ", $dbhS->errstr;
+  
+    $sth->execute
+         or die "CHANGE MASTER on slave error: ", $sth->errstr;
+    $sth->finish;
+    print "\tSlave has been updated. You may now start the slave.\n";
+  }
+  
+  # end reset-slave.pl
+  </programlisting>
+        
+      </section>
+      
+      <section id="mysql-cluster-replication-schema-discovery">
+        <title>&title-mysql-cluster-replication-schema-discovery;</title>
+        
+        <para>
+          The NDB Cluster storage engine does not at present automatically
+          detect structural changes in databases or tables. When a
+          database or table is created or dropped, or when a table is
+          altered using <literal>ALTER TABLE</literal>, the cluster must
+          be made aware of the change. When a database is created or
+          dropped, the appropriate <literal>CREATE SCHEMA</literal> or
+          <literal>DROP SCHEMA</literal> command should be issued on each
+          SQL node in the cluster in order to induce discovery of the
+          change, that is:
+        </para>
+        
+  <programlisting>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>CREATE SCHEMA <replaceable>db-name</replaceable>;</userinput>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>DROP SCHEMA <replaceable>db-name</replaceable>;</userinput>
+  </programlisting>
+        
+        <para>
+          <emphasis role="bold">Dropping Tables</emphasis>:
+        </para>
+        
+        <para>
+          When dropping a table that uses the <literal>NDB
+            Cluster</literal> storage engine, it is necessary to allow
+          any unfinished transactions to complete, and not to begin any
+          new transactions, before performing the DROP operation:
+        </para>
+        
+        <orderedlist>
+          <listitem>
+            <para>
+              Stop performing transactions on the slave.
+            </para>
+          </listitem>
+          
+          <listitem>
+            <para>
+              Drop the table:
+            </para>
+            
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>DROP TABLE [<replaceable>db-name</replaceable>.]<replaceable>table-name</replaceable>;</userinput>
+  </programlisting>
+          </listitem>
+          
+          <listitem>
+            <para>
+              Make all slave <command>mysqld</command> processes aware of
+              the drop:
+            </para>
+            
+  <programlisting>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>SHOW TABLES [FROM <replaceable>db-name</replaceable>];</userinput>
+  </programlisting>
+          </listitem>
+        </orderedlist>
+        
+        <para>
+          All of the MySQL slave servers can now <quote>see</quote> that
+          the table has been dropped from the database.
+        </para>
+        
+        <para>
+          <emphasis role="bold">Creating Tables</emphasis>
+        </para>
+        
+        <para>
+          When creating a new table, you should perform the following
+          steps:
+        </para>
+        
+        <orderedlist>
+          <listitem>
+            <para>
+              Create the table:
+            </para>
+            
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>CREATE TABLE [<replaceable>db-name</replaceable>.]<replaceable>table-name</replaceable> (</userinput>
+          <userinput><replaceable>#  column and index definitions...</replaceable></userinput>
+          <userinput>) ENGINE=NDB;</userinput>
+  </programlisting>
+          </listitem>
+          
+          <listitem>
+            <para>
+              Make all SQL nodes in the slave cluster aware of the new
+              table:
+            </para>
+            
+  <programlisting>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>SHOW TABLES [FROM <replaceable>db-name</replaceable>];</userinput>
+  </programlisting>
+            
+            <para>
+              You can now start using the table as normal. Note that,
+              unlike the case when dropping tables, it is
+              <emphasis>not</emphasis> necessary to stop performing any
+              transactions before creating a new table.
+            </para>
+          </listitem>
+        </orderedlist>
+        
+        <para>
+          <emphasis role="bold">Altering tables</emphasis>
+        </para>
+        
+        <para>
+          When altering tables, you should perform the following steps in
+          the order shown:
+        </para>
+        
+        <orderedlist>
+          <listitem>
+            <para>
+              Ensure that all pending transactions have been completed,
+              and do not initiate any new transactions at this time.
+            </para>
+          </listitem>
+          
+          <listitem>
+            <para>
+              Issue any desired <literal>ALTER TABLE</literal> commands
+              that add and/or remove columns to or from an existing table,
+              for example:
+            </para>
+            
+  <programlisting>
+  mysql<replaceable>S</replaceable>&gt; <userinput>ALTER TABLE <replaceable>table-name</replaceable> /* <replaceable>column-definition(s)...</replaceable> */;</userinput>
+  </programlisting>
+          </listitem>
+          
+          <listitem>
+            <para>
+              Force all slave SQL nodes to become aware of the changed
+              table definition. The recommended way to do this is by
+              issuing a <quote>throwaway</quote> <literal>SHOW
+                TABLES</literal> command on each slave
+              <command>mysqld</command>:
+            </para>
+            
+  <programlisting>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>SHOW TABLES;</userinput>
+  </programlisting>
+            
+            <para>
+              You may now resume normal operations. These include
+              transactions involving records in the changed table.
+            </para>
+          </listitem>
+        </orderedlist>
+        
+        <para>
+          Note that when you create a new <literal>NDB Cluster</literal>
+          table on the master cluster, if you do so using the
+          <command>mysqld</command> that acts as the replication master,
+          then you must execute a <literal>SHOW TABLES</literal>, also on
+          the master <command>mysqld</command>, in order to initiate
+          discovery properly. Otherwise, the new table and any data it
+          contains cannot be seen by the replication master
+          <command>mysqld</command>, nor by the slave (that is, neither
+          the new table nor its data is replicated). If the table is
+          created on a <command>mysqld</command> that is not acting as the
+          replication master, then it does not matter which
+          <command>mysqld</command> issues the <literal>SHOW
+            TABLES</literal>.
+        </para>
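+        <para>
+          As an illustration, the required sequence on the master might
+          look like the following. (The
+          <literal>mysql<replaceable>M</replaceable>&gt;</literal> prompt
+          is used here, by analogy with
+          <literal>mysql<replaceable>S</replaceable>&gt;</literal>, to
+          denote the <command>mysqld</command> acting as the replication
+          master; the table definition is a placeholder.)
+        </para>
+        
+  <programlisting>
+  mysql<replaceable>M</replaceable>&gt; <userinput>CREATE TABLE <replaceable>table-name</replaceable> (</userinput>
+          <userinput><replaceable>#  column and index definitions...</replaceable></userinput>
+          <userinput>) ENGINE=NDB;</userinput>
+  mysql<replaceable>M</replaceable>&gt; <userinput>SHOW TABLES;</userinput>
+  </programlisting>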
+        
+        <para>
+          It is also possible to force discovery by issuing a
+          <quote>dummy</quote> <literal>SELECT</literal> statement using
+          the new or altered table in the statement's
+          <literal>FROM</literal> clause. Although the statement fails, it
+          causes the change to be recognized by the cluster. However,
+          issuing a <literal>SHOW TABLES</literal> is the preferred method.
+        </para>
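+        <para>
+          Such a <quote>dummy</quote> <literal>SELECT</literal> might
+          look like this, using the same placeholder names as the
+          preceding examples:
+        </para>
+        
+  <programlisting>
+  mysql<replaceable>S*</replaceable>&gt; <userinput>SELECT * FROM [<replaceable>db-name</replaceable>.]<replaceable>table-name</replaceable>;</userinput>
+  </programlisting>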
+        
+        <para>
+          We are working to implement automatic discovery of schema
+          changes in a future MySQL Cluster release. For more information
+          about this and other Cluster issues, see 
+          <xref linkend="mysql-cluster-limitations"/>.
+        </para>
+        
+      </section>    
+      
+    </section>
+  
+  </section>
+  
+  -->
+  
+  
 
   <section id="mysql-cluster-interconnects">
 

Modified: trunk/refman-common/fixedchars.ent
===================================================================
--- trunk/refman-common/fixedchars.ent	2005-12-19 15:41:06 UTC (rev 596)
+++ trunk/refman-common/fixedchars.ent	2005-12-19 15:50:01 UTC (rev 597)
@@ -243,3 +243,4 @@
         <!ENTITY euro   "&#x20AC;">
         <!ENTITY le     "&#x2264;">
        <!ENTITY ge     "&#x2265;">
+        <!ENTITY ddash  "--">

Modified: trunk/refman-common/titles.en.ent
===================================================================
--- trunk/refman-common/titles.en.ent	2005-12-19 15:41:06 UTC (rev 596)
+++ trunk/refman-common/titles.en.ent	2005-12-19 15:50:01 UTC (rev 597)
@@ -833,7 +833,7 @@
 <!ENTITY title-mysql-cluster-replication-two-channels "Using Two Replication Channels">
 <!ENTITY title-mysql-cluster-replication-failover "Implementing Failover with MySQL Cluster">
 <!ENTITY title-mysql-cluster-replication-backups "MySQL Cluster Backups With Replication">
-<!ENTITY title-mysql-cluster-replication-auto-sync "Automating Synchronisation of the Slave to the Master binlog">
+<!ENTITY title-mysql-cluster-replication-auto-sync "Automating Synchronization of the Slave to the Master binlog">
 <!ENTITY title-mysql-cluster-replication-schema-discovery "Initiating Discovery of Schema Changes">
 <!ENTITY title-mysql-cluster-restore "How to Restore a Cluster Backup">
 <!ENTITY title-mysql-cluster-roadmap "MySQL Cluster Development Roadmap">
