List: Commits
From: jon
Date: November 28 2008 9:51am
Subject: svn commit - mysqldoc@docsrva: r12688 - in trunk: refman-4.1 refman-5.0 refman-5.1
Author: jstephens
Date: 2008-11-28 10:51:51 +0100 (Fri, 28 Nov 2008)
New Revision: 12688

Log:

Fixes Docs Bug #40596 (dated SCI docs)



Modified:
   trunk/refman-4.1/mysql-cluster-interconnects.xml
   trunk/refman-5.0/mysql-cluster-interconnects.xml
   trunk/refman-5.1/mysql-cluster-interconnects.xml


Modified: trunk/refman-4.1/mysql-cluster-interconnects.xml
===================================================================
--- trunk/refman-4.1/mysql-cluster-interconnects.xml	2008-11-28 03:28:32 UTC (rev 12687)
+++ trunk/refman-4.1/mysql-cluster-interconnects.xml	2008-11-28 09:51:51 UTC (rev 12688)
Changed blocks: 1, Lines Added: 40, Lines Deleted: 466; 18578 bytes

@@ -111,486 +111,60 @@
 
     <indexterm>
       <primary>SCI (Scalable Coherent Interface)</primary>
-<!-- <see>MySQL Cluster</see> -->
+      <see>MySQL Cluster</see>
     </indexterm>
 
-    <para>
-      In this section, we show how to adapt a cluster configured for
-      normal TCP/IP communication to use SCI Sockets instead. This
-      documentation is based on SCI Sockets version 2.3.0 as of 01
-      October 2004.
-    </para>
+    <remark role="NOTE">
+      [js] Update following para to mention Windows when 6.4 becomes
+      available.
+    </remark>
 
-    <formalpara>
-
-      <title>Prerequisites</title>
-
-      <para>
-        Any machines with which you wish to use SCI Sockets must be
-        equipped with SCI cards.
-      </para>
-
-    </formalpara>
-
     <para>
-      No special builds (other than the <literal>-max</literal> builds)
-      are needed for SCI Sockets because it uses normal TCP/IP socket
-      calls which are already available in MySQL Cluster. However, SCI
-      Sockets are currently supported only on the Linux 2.4 and 2.6
-      kernels. For other operating systems, you can use SCI
-      Transporters, but this requires that the server be built using
-      <option>--with-ndb-sci=/opt/DIS</option>.
+      Using Scalable Coherent Interface (SCI) technology, it is
+      possible to achieve a significant increase in connection speeds
+      and throughput between MySQL Cluster data and SQL nodes. To use
+      SCI, it is necessary to obtain and install Dolphin SCI network
+      cards and to use the drivers and other software supplied by
+      Dolphin. You can get information on obtaining these from
+      <ulink url="http://www.dolphinics.com/">Dolphin Interconnect
+      Solutions</ulink>. SCI SuperSocket or SCI Transporter support is
+      available for 32-bit and 64-bit Linux, Solaris, and other
+      platforms. See the Dolphin documentation referenced later in this
+      section for more detailed information regarding platforms
+      supported for SCI.
     </para>
 
-    <indexterm>
-      <primary>MySQL Cluster</primary>
-      <secondary>SCI software requirements</secondary>
-    </indexterm>
-
-    <para>
-      There are essentially four requirements for SCI Sockets:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          Building the SCI Socket libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          Installation of the SCI Socket kernel libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          Installation of one or two configuration files.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          The SCI Socket kernel library must be enabled either for the
-          entire machine or for the shell where the MySQL Cluster
-          processes are started.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      This process needs to be repeated for each machine in the cluster
-      where you plan to use SCI Sockets for inter-node communication.
-    </para>
-
-    <para>
-      Two packages need to be retrieved to get SCI Sockets working:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          The source code package containing the DIS support libraries
-          for the SCI Sockets libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          The source code package for the SCI Socket libraries
-          themselves.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      Currently, these are available only in source code format. The
-      latest versions of these packages at the time of this writing were
-      available as (respectively)
-      <filename>DIS_GPL_2_5_0_SEP_10_2004.tar.gz</filename> and
-      <filename>SCI_SOCKET_2_3_0_OKT_01_2004.tar.gz</filename>. You
-      should be able to find these (or possibly newer versions) at
-      <ulink url="http://www.dolphinics.com/support/downloads.html"/>.
-    </para>
-
-    <formalpara>
-
-      <title>Package Installation</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI software installation</secondary>
-      </indexterm>
-
-      <para>
-        Once you have obtained the library packages, the next step is to
-        unpack them into appropriate directories, with the SCI Sockets
-        library unpacked into a directory below the DIS code. Next, you
-        need to build the libraries. This example shows the commands
-        used on Linux/x86 to perform this task:
-      </para>
-
-    </formalpara>
-
-<programlisting>
-shell&gt; <userinput>tar xzf DIS_GPL_2_5_0_SEP_10_2004.tar.gz</userinput>
-shell&gt; <userinput>cd DIS_GPL_2_5_0_SEP_10_2004/src/</userinput>
-shell&gt; <userinput>tar xzf ../../SCI_SOCKET_2_3_0_OKT_01_2004.tar.gz</userinput>
-shell&gt; <userinput>cd ../adm/bin/Linux_pkgs</userinput>
-shell&gt; <userinput>./make_PSB_66_release</userinput>
-</programlisting>
-
-    <para>
-      It is possible to build these libraries for some 64-bit processors.
-      To build the libraries for Opteron CPUs using the 64-bit
-      extensions, run <command>make_PSB_66_X86_64_release</command>
-      rather than <command>make_PSB_66_release</command>. If the build
-      is made on an Itanium machine, you should use
-      <command>make_PSB_66_IA64_release</command>. The X86-64 variant
-      should work for Intel EM64T architectures but this has not yet (to
-      our knowledge) been tested.
-    </para>
-
-    <para>
-      Once the build process is complete, the compiled libraries will be
-      found in a zipped tar file with a name along the lines of
-      <filename>DIS-<replaceable>&lt;operating-system&gt;</replaceable>-<replaceable>time</replaceable>-<replaceable>date</replaceable></filename>.
-      It is now time to install the package in the proper place. In this
-      example we will place the installation in
-      <filename>/opt/DIS</filename>. You most likely need to run the
-      following as the system <literal>root</literal> user.
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cp DIS_Linux_2.4.20-8_181004.tar.gz /opt/</userinput>
-shell&gt; <userinput>cd /opt</userinput>
-shell&gt; <userinput>tar xzf DIS_Linux_2.4.20-8_181004.tar.gz</userinput>
-shell&gt; <userinput>mv DIS_Linux_2.4.20-8_181004 DIS</userinput>
-</programlisting>
-
-    <formalpara>
-
-      <title>Network Configuration</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>network configuration (SCI)</secondary>
-      </indexterm>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI network configuration</secondary>
-      </indexterm>
-
-      <para>
-        Now that all the libraries and binaries are in their proper
-        place, we need to ensure that the SCI cards have proper node IDs
-        within the SCI address space.
-      </para>
-
-    </formalpara>
-
-    <para>
-      It is also necessary to decide on the network structure before
-      proceeding. There are three types of network structures which can
-      be used in this context:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          A simple one-dimensional ring
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          One or more SCI switches with one ring per switch port
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          A two- or three-dimensional torus.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      Each of these topologies has its own method for providing node
-      IDs. We discuss each of them in brief.
-    </para>
-
-    <para>
-      A simple ring uses node IDs which are non-zero multiples of 4: 4,
-      8, 12,...
-    </para>
-
-    <para>
-      The next possibility uses SCI switches. An SCI switch has 8 ports,
-      each of which can support a ring. It is necessary to make sure
-      that different rings use different node ID spaces. In a typical
-      configuration, the first port uses node IDs below 64 (4 &minus;
-      60), the next 64 node IDs (68 &minus; 124) are assigned to the
-      next port, and so on, with node IDs 452 &minus; 508 being assigned
-      to the eighth port.
-    </para>
-
-    <para>
-      Two- and three-dimensional torus network structures take into
-      account where each node is located in each dimension, incrementing
-      by 4 for each node in the first dimension, by 64 in the second
-      dimension, and (where applicable) by 1024 in the third dimension.
-      See
-      <ulink url="http://www.dolphinics.com/support/index.html">Dolphin's
-      Web site</ulink> for more thorough documentation.
-    </para>
-
-    <para>
-      In our testing we have used switches, although most large cluster
-      installations use 2- or 3-dimensional torus structures. The
-      advantage provided by switches is that, with dual SCI cards and
-      dual switches, it is possible to build with relative ease a
-      redundant network where the average failover time on the SCI
-      network is on the order of 100 microseconds. This is supported by
-      the SCI transporter in MySQL Cluster and is also under development
-      for the SCI Socket implementation.
-    </para>
-
-    <para>
-      Failover for the 2D/3D torus is also possible but requires sending
-      out new routing indexes to all nodes. However, this requires only
-      100 milliseconds or so to complete and should be acceptable for
-      most high-availability cases.
-    </para>
-
-    <para>
-      By placing cluster data nodes properly within the switched
-      architecture, it is possible to use 2 switches to build a
-      structure whereby 16 computers can be interconnected and no single
-      failure can hinder more than one of them. With 32 computers and 2
-      switches it is possible to configure the cluster in such a manner
-      that no single failure can cause the loss of more than two nodes;
-      in this case, it is also possible to know which pair of nodes is
-      affected. Thus, by placing the two nodes in separate node groups,
-      it is possible to build a <quote>safe</quote> MySQL Cluster
-      installation.
-    </para>
-
-    <para>
-      To set the node ID for an SCI card use the following command in
-      the <filename>/opt/DIS/sbin</filename> directory. In this example,
-      <option>-c 1</option> refers to the number of the SCI card (this
-      is always 1 if there is only 1 card in the machine); <option>-a
-      0</option> refers to adapter 0; and <literal>68</literal> is the
-      node ID:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>./sciconfig -c 1 -a 0 -n 68</userinput>
-</programlisting>
-
-    <para>
-      If you have multiple SCI cards in the same machine, you can
-      determine which card has which slot by issuing the following
-      command (again we assume that the current working directory is
-      <filename>/opt/DIS/sbin</filename>):
-    </para>
-
-<programlisting>
-shell&gt; <userinput>./sciconfig -c 1 -gsn</userinput>
-</programlisting>
-
-    <para>
-      This will give you the SCI card's serial number. Then repeat this
-      procedure with <option>-c 2</option>, and so on, for each card in
-      the machine. Once you have matched each card with a slot, you can
-      set node IDs for all cards.
-    </para>
-
-    <para>
-      After the necessary libraries and binaries are installed, and the
-      SCI node IDs are set, the next step is to set up the mapping from
-      hostnames (or IP addresses) to SCI node IDs. This is done in the
-      SCI sockets configuration file, which should be saved as
-      <filename>/etc/sci/scisock.conf</filename>. In this file, each SCI
-      node ID is mapped through the proper SCI card to the hostname or
-      IP address that it is to communicate with. Here is a very simple
-      example of such a configuration file:
-    </para>
-
-<programlisting>
-#host           #nodeId
-alpha           8
-beta            12
-192.168.10.20   16
-</programlisting>
-
-    <para>
-      It is also possible to limit the configuration so that it applies
-      only to a subset of the available ports for these hosts. An
-      additional configuration file
-      <filename>/etc/sci/scisock_opt.conf</filename> can be used to
-      accomplish this, as shown here:
-    </para>
-
-<programlisting>
-#-key                        -type        -values
-EnablePortsByDefault                yes
-EnablePort                  tcp           2200
-DisablePort                 tcp           2201
-EnablePortRange             tcp           2202 2219
-DisablePortRange            tcp           2220 2231
-</programlisting>
-
-    <formalpara>
-
-      <title>Driver Installation</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI drivers</secondary>
-      </indexterm>
-
-      <para>
-        With the configuration files in place, the drivers can be
-        installed.
-      </para>
-
-    </formalpara>
-
-    <para>
-      First, the low-level drivers and then the SCI socket driver need
-      to be installed:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd DIS/sbin/</userinput>
-shell&gt; <userinput>./drv-install add PSB66</userinput>
-shell&gt; <userinput>./scisocket-install add</userinput>
-</programlisting>
-
-    <para>
-      If desired, the installation can be checked by invoking a script
-      which verifies that all nodes in the SCI socket configuration
-      files are accessible:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/sbin/</userinput>
-shell&gt; <userinput>./status.sh</userinput>
-</programlisting>
-
-    <para>
-      If you discover an error and need to change the SCI socket
-      configuration, it is necessary to use
-      <command>ksocketconfig</command> to accomplish this task:
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/util</userinput>
-shell&gt; <userinput>./ksocketconfig -f</userinput>
-</programlisting>
-
-      For more information about <command>ksocketconfig</command>,
-      consult the documentation available from
-      <ulink url="http://www.dolphinics.com/support/documentation.html"/>.
-    </para>
-
-    <formalpara>
-
-      <title>Testing the Setup</title>
-
-      <para>
-        To ensure that SCI sockets are actually being used, you can
-        employ the <command>latency_bench</command> test program. Using
-        this utility's server component, clients can connect to the
-        server to test the latency of the connection. Determining
-        whether SCI is enabled should be fairly simple from observing
-        the latency.
-      </para>
-
-    </formalpara>
-
     <note>
       <para>
-        Before using <command>latency_bench</command>, it is necessary
-        to set the <literal>LD_PRELOAD</literal> environment variable as
-        shown later in this section.
+        Prior to MySQL 4.1.24, there were issues with building MySQL
+        Cluster with SCI support (see Bug #25470), but these have been
+        resolved due to work contributed by Dolphin. SCI Sockets are now
+        correctly supported for MySQL Cluster hosts running recent
+        versions of Linux using the <literal>-max</literal> builds, and
+        versions of MySQL Cluster with SCI Transporter support can be
+        built using either of <command>compile-amd64-max-sci</command>
+        or <command>compile-pentium64-max-sci</command>. Both of these
+        build scripts can be found in the <filename>BUILD</filename>
+        directory of the MySQL Cluster source trees; it should not be
+        difficult to adapt them for other platforms. Generally, all that
+        is necessary to compile MySQL Cluster with SCI Transporter
+        support is to configure the MySQL Cluster build using
+        <option>--with-ndb-sci=/opt/DIS</option>.
       </para>
     </note>
 
     <para>
-      To set up a server, use the following:
+      Once you have acquired the required Dolphin hardware and software,
+      you can obtain detailed information on how to adapt a MySQL
+      Cluster configured for normal TCP/IP communication to use SCI from
+      the <citetitle>Dolphin Express for MySQL Installation and
+      Reference Guide</citetitle>, available for download at
+      <ulink url="http://docsrva.mysql.com/public/DIS_install_guide_book.pdf"/>
+      (PDF file, 94 pages, 753 KB). This document provides instructions
+      for installing the SCI hardware and software, as well as
+      information concerning network topology and configuration.
     </para>
 
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/bin/socket</userinput>
-shell&gt; <userinput>./latency_bench -server</userinput>
-</programlisting>
-
-    <para>
-      To run a client, use <command>latency_bench</command> again,
-      except this time with the <option>-client</option> option:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/bin/socket</userinput>
-shell&gt; <userinput>./latency_bench -client <replaceable>server_hostname</replaceable></userinput>
-</programlisting>
-
-    <para>
-      SCI socket configuration should now be complete and MySQL Cluster
-      ready to use both SCI Sockets and the SCI transporter (see
-      <xref linkend="mysql-cluster-sci-definition"/>).
-    </para>
-
-    <formalpara>
-
-      <title>Starting the Cluster</title>
-
-      <para>
-        The next step in the process is to start MySQL Cluster. To
-        enable usage of SCI Sockets it is necessary to set the
-        environment variable <literal>LD_PRELOAD</literal> before
-        starting <command>ndbd</command>, <command>mysqld</command>, and
-        <command>ndb_mgmd</command>. This variable should point to the
-        kernel library for SCI Sockets.
-      </para>
-
-    </formalpara>
-
-    <para>
-      To start <command>ndbd</command> in a bash shell, do the
-      following:
-    </para>
-
-<programlisting>
-bash-shell&gt; <userinput>export LD_PRELOAD=/opt/DIS/lib/libkscisock.so</userinput>
-bash-shell&gt; <userinput>ndbd</userinput>
-</programlisting>
-
-    <para>
-      In a tcsh environment the same thing can be accomplished with:
-    </para>
-
-<programlisting>
-tcsh-shell&gt; <userinput>setenv LD_PRELOAD=/opt/DIS/lib/libkscisock.so</userinput>
-tcsh-shell&gt; <userinput>ndbd</userinput>
-</programlisting>
-
-    <note>
-      <para>
-        MySQL Cluster can use only the kernel variant of SCI Sockets.
-      </para>
-    </note>
-
   </section>
 
   <section id="mysql-cluster-interconnects-performance">

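The note added in the hunk above names two build paths for SCI Transporter support. As a minimal sketch, assuming a source tree unpacked as mysql-4.1 with the standard autotools layout (only the BUILD script names and the --with-ndb-sci option come from the text itself), the build looks like this:

shell> cd mysql-4.1
shell> BUILD/compile-pentium64-max-sci

Or, configuring by hand against a Dolphin DIS installation in /opt/DIS:

shell> ./configure --with-ndb-sci=/opt/DIS
shell> make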

Modified: trunk/refman-5.0/mysql-cluster-interconnects.xml
===================================================================
--- trunk/refman-5.0/mysql-cluster-interconnects.xml	2008-11-28 03:28:32 UTC (rev 12687)
+++ trunk/refman-5.0/mysql-cluster-interconnects.xml	2008-11-28 09:51:51 UTC (rev 12688)
Changed blocks: 1, Lines Added: 48, Lines Deleted: 488; 19704 bytes

@@ -96,515 +96,75 @@
   </para>
 
   <section id="mysql-cluster-sci-sockets">
-
+    
     <title>Configuring MySQL Cluster to use SCI Sockets</title>
-
+    
     <indexterm>
       <primary>MySQL Cluster</primary>
       <secondary>network transporters</secondary>
     </indexterm>
-
+    
     <indexterm>
       <primary>MySQL Cluster</primary>
       <secondary>SCI (Scalable Coherent Interface)</secondary>
     </indexterm>
-
+    
     <indexterm>
       <primary>SCI (Scalable Coherent Interface)</primary>
       <see>MySQL Cluster</see>
     </indexterm>
-
+    
+    <remark role="NOTE">
+      [js] Update following para to mention Windows when 6.4 becomes
+      available.
+    </remark>
+    
     <para>
-      In this section, we show how to adapt a cluster configured for
-      normal TCP/IP communication to use SCI Sockets instead. This
-      documentation is based on SCI Sockets version 2.3.0 as of 01
-      October 2004.
+      Using Scalable Coherent Interface (SCI) technology, it is
+      possible to achieve a significant increase in connection speeds
+      and throughput between MySQL Cluster data and SQL nodes. To use
+      SCI, it is necessary to obtain and install Dolphin SCI network
+      cards and to use the drivers and other software supplied by
+      Dolphin. You can get information on obtaining these from
+      <ulink url="http://www.dolphinics.com/">Dolphin Interconnect
+      Solutions</ulink>. SCI SuperSocket or SCI Transporter support is
+      available for 32-bit and 64-bit Linux, Solaris, and other
+      platforms. See the Dolphin documentation referenced later in this
+      section for more detailed information regarding platforms
+      supported for SCI.
     </para>
-
-    <formalpara>
-
-      <title>Prerequisites</title>
-
-      <para>
-        Any machines with which you wish to use SCI Sockets must be
-        equipped with SCI cards.
-      </para>
-
-    </formalpara>
-
-    <para>
-      No special builds (other than the <literal>-max</literal> builds)
-      are needed for SCI Sockets because it uses normal TCP/IP socket
-      calls which are already available in MySQL Cluster. However, SCI
-      Sockets are currently supported only on the Linux 2.4 and 2.6
-      kernels. For other operating systems, you can use SCI
-      Transporters, but this requires that the server be built using
-      <option>--with-ndb-sci=/opt/DIS</option>.
-    </para>
-
-    <para>
-      Prior to MySQL 5.0.44, there were issues with building MySQL
-      Cluster with SCI support (see Bug #25470), but these have been
-      resolved due to work contributed by Dolphin International. SCI
-      Sockets are now correctly supported for MySQL Cluster using the
-      <literal>-max</literal> builds, and versions of MySQL Cluster with
-      SCI Transporter support can be built using either of
-      <command>compile-amd64-max-sci</command> or
-      <command>compile-pentium64-max-sci</command>. Both of these build
-      scripts can be found in the <filename>BUILD</filename> directory
-      of the MySQL &current-series; source; it should not be difficult
-      to adapt them for other platforms.
-    </para>
-
-    <indexterm>
-      <primary>MySQL Cluster</primary>
-      <secondary>SCI software requirements</secondary>
-    </indexterm>
-
-    <para>
-      There are essentially four requirements for SCI Sockets:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          Building the SCI Socket libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          Installation of the SCI Socket kernel libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          Installation of one or two configuration files.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          The SCI Socket kernel library must be enabled either for the
-          entire machine or for the shell where the MySQL Cluster
-          processes are started.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      This process needs to be repeated for each machine in the cluster
-      where you plan to use SCI Sockets for inter-node communication.
-    </para>
-
-    <para>
-      Two packages need to be retrieved to get SCI Sockets working:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          The source code package containing the DIS support libraries
-          for the SCI Sockets libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          The source code package for the SCI Socket libraries
-          themselves.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      Currently, these are available only in source code format. The
-      latest versions of these packages at the time of this writing were
-      available as (respectively)
-      <filename>DIS_GPL_2_5_0_SEP_10_2004.tar.gz</filename> and
-      <filename>SCI_SOCKET_2_3_0_OKT_01_2004.tar.gz</filename>. You
-      should be able to find these (or possibly newer versions) at
-      <ulink url="http://www.dolphinics.com/support/downloads.html"/>.
-    </para>
-
-    <formalpara>
-
-      <title>Package Installation</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI software installation</secondary>
-      </indexterm>
-
-      <para>
-        Once you have obtained the library packages, the next step is to
-        unpack them into appropriate directories, with the SCI Sockets
-        library unpacked into a directory below the DIS code. Next, you
-        need to build the libraries. This example shows the commands
-        used on Linux/x86 to perform this task:
-      </para>
-
-    </formalpara>
-
-<programlisting>
-shell&gt; <userinput>tar xzf DIS_GPL_2_5_0_SEP_10_2004.tar.gz</userinput>
-shell&gt; <userinput>cd DIS_GPL_2_5_0_SEP_10_2004/src/</userinput>
-shell&gt; <userinput>tar xzf ../../SCI_SOCKET_2_3_0_OKT_01_2004.tar.gz</userinput>
-shell&gt; <userinput>cd ../adm/bin/Linux_pkgs</userinput>
-shell&gt; <userinput>./make_PSB_66_release</userinput>
-</programlisting>
-
-    <para>
-      It is possible to build these libraries for some 64-bit processors.
-      To build the libraries for Opteron CPUs using the 64-bit
-      extensions, run <command>make_PSB_66_X86_64_release</command>
-      rather than <command>make_PSB_66_release</command>. If the build
-      is made on an Itanium machine, you should use
-      <command>make_PSB_66_IA64_release</command>. The X86-64 variant
-      should work for Intel EM64T architectures but this has not yet (to
-      our knowledge) been tested.
-    </para>
-
-    <para>
-      Once the build process is complete, the compiled libraries will be
-      found in a zipped tar file with a name along the lines of
-      <filename>DIS-<replaceable>&lt;operating-system&gt;</replaceable>-<replaceable>time</replaceable>-<replaceable>date</replaceable></filename>.
-      It is now time to install the package in the proper place. In this
-      example we will place the installation in
-      <filename>/opt/DIS</filename>. You most likely need to run the
-      following as the system <literal>root</literal> user.
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cp DIS_Linux_2.4.20-8_181004.tar.gz /opt/</userinput>
-shell&gt; <userinput>cd /opt</userinput>
-shell&gt; <userinput>tar xzf DIS_Linux_2.4.20-8_181004.tar.gz</userinput>
-shell&gt; <userinput>mv DIS_Linux_2.4.20-8_181004 DIS</userinput>
-</programlisting>
-
-    <formalpara>
-
-      <title>Network Configuration</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>network configuration (SCI)</secondary>
-      </indexterm>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI network configuration</secondary>
-      </indexterm>
-
-      <para>
-        Now that all the libraries and binaries are in their proper
-        place, we need to ensure that the SCI cards have proper node IDs
-        within the SCI address space.
-      </para>
-
-    </formalpara>
-
-    <para>
-      It is also necessary to decide on the network structure before
-      proceeding. There are three types of network structures which can
-      be used in this context:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          A simple one-dimensional ring
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          One or more SCI switches with one ring per switch port
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          A two- or three-dimensional torus.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      Each of these topologies has its own method for providing node
-      IDs. We discuss each of them in brief.
-    </para>
-
-    <para>
-      A simple ring uses node IDs which are non-zero multiples of 4: 4,
-      8, 12,...
-    </para>
-
-    <para>
-      The next possibility uses SCI switches. An SCI switch has 8 ports,
-      each of which can support a ring. It is necessary to make sure
-      that different rings use different node ID spaces. In a typical
-      configuration, the first port uses node IDs below 64 (4 &minus;
-      60), the next 64 node IDs (68 &minus; 124) are assigned to the
-      next port, and so on, with node IDs 452 &minus; 508 being assigned
-      to the eighth port.
-    </para>
-
-    <para>
-      Two- and three-dimensional torus network structures take into
-      account where each node is located in each dimension, incrementing
-      by 4 for each node in the first dimension, by 64 in the second
-      dimension, and (where applicable) by 1024 in the third dimension.
-      See
-      <ulink url="http://www.dolphinics.com/support/index.html">Dolphin's
-      Web site</ulink> for more thorough documentation.
-    </para>
-
-    <para>
-      In our testing we have used switches, although most large cluster
-      installations use 2- or 3-dimensional torus structures. The
-      advantage provided by switches is that, with dual SCI cards and
-      dual switches, it is possible to build with relative ease a
-      redundant network where the average failover time on the SCI
-      network is on the order of 100 microseconds. This is supported by
-      the SCI transporter in MySQL Cluster and is also under development
-      for the SCI Socket implementation.
-    </para>
-
-    <para>
-      Failover for the 2D/3D torus is also possible but requires sending
-      out new routing indexes to all nodes. However, this requires only
-      100 milliseconds or so to complete and should be acceptable for
-      most high-availability cases.
-    </para>
-
-    <para>
-      By placing cluster data nodes properly within the switched
-      architecture, it is possible to use 2 switches to build a
-      structure whereby 16 computers can be interconnected and no single
-      failure can hinder more than one of them. With 32 computers and 2
-      switches it is possible to configure the cluster in such a manner
-      that no single failure can cause the loss of more than two nodes;
-      in this case, it is also possible to know which pair of nodes is
-      affected. Thus, by placing the two nodes in separate node groups,
-      it is possible to build a <quote>safe</quote> MySQL Cluster
-      installation.
-    </para>
-
-    <para>
-      To set the node ID for an SCI card use the following command in
-      the <filename>/opt/DIS/sbin</filename> directory. In this example,
-      <option>-c 1</option> refers to the number of the SCI card (this
-      is always 1 if there is only 1 card in the machine); <option>-a
-      0</option> refers to adapter 0; and <literal>68</literal> is the
-      node ID:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>./sciconfig -c 1 -a 0 -n 68</userinput>
-</programlisting>
-
-    <para>
-      If you have multiple SCI cards in the same machine, you can
-      determine which card has which slot by issuing the following
-      command (again we assume that the current working directory is
-      <filename>/opt/DIS/sbin</filename>):
-    </para>
-
-<programlisting>
-shell&gt; <userinput>./sciconfig -c 1 -gsn</userinput>
-</programlisting>
-
-    <para>
-      This will give you the SCI card's serial number. Then repeat this
-      procedure with <option>-c 2</option>, and so on, for each card in
-      the machine. Once you have matched each card with a slot, you can
-      set node IDs for all cards.
-    </para>
-
-    <para>
-      After the necessary libraries and binaries are installed, and the
-      SCI node IDs are set, the next step is to set up the mapping from
-      hostnames (or IP addresses) to SCI node IDs. This is done in the
-      SCI sockets configuration file, which should be saved as
-      <filename>/etc/sci/scisock.conf</filename>. In this file, each SCI
-      node ID is mapped through the proper SCI card to the hostname or
-      IP address that it is to communicate with. Here is a very simple
-      example of such a configuration file:
-    </para>
-
-<programlisting>
-#host           #nodeId
-alpha           8
-beta            12
-192.168.10.20   16
-</programlisting>
-
-    <para>
-      It is also possible to limit the configuration so that it applies
-      only to a subset of the available ports for these hosts. An
-      additional configuration file
-      <filename>/etc/sci/scisock_opt.conf</filename> can be used to
-      accomplish this, as shown here:
-    </para>
-
-<programlisting>
-#-key                        -type        -values
-EnablePortsByDefault                yes
-EnablePort                  tcp           2200
-DisablePort                 tcp           2201
-EnablePortRange             tcp           2202 2219
-DisablePortRange            tcp           2220 2231
-</programlisting>
-
-    <formalpara>
-
-      <title>Driver Installation</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI drivers</secondary>
-      </indexterm>
-
-      <para>
-        With the configuration files in place, the drivers can be
-        installed.
-      </para>
-
-    </formalpara>
-
-    <para>
-      First, the low-level drivers and then the SCI socket driver need
-      to be installed:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd DIS/sbin/</userinput>
-shell&gt; <userinput>./drv-install add PSB66</userinput>
-shell&gt; <userinput>./scisocket-install add</userinput>
-</programlisting>
-
-    <para>
-      If desired, the installation can be checked by invoking a script
-      which verifies that all nodes in the SCI socket configuration
-      files are accessible:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/sbin/</userinput>
-shell&gt; <userinput>./status.sh</userinput>
-</programlisting>
-
-    <para>
-      If you discover an error and need to change the SCI socket
-      configuration, it is necessary to use
-      <command>ksocketconfig</command> to accomplish this task:
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/util</userinput>
-shell&gt; <userinput>./ksocketconfig -f</userinput>
-</programlisting>
-
-      For more information about <command>ksocketconfig</command>,
-      consult the documentation available from
-      <ulink url="http://www.dolphinics.com/support/documentation.html"/>.
-    </para>
-
-    <formalpara>
-
-      <title>Testing the setup</title>
-
-      <para>
-        To ensure that SCI sockets are actually being used, you can
-        employ the <command>latency_bench</command> test program. Using
-        this utility's server component, clients can connect to the
-        server to test the latency of the connection. Determining
-        whether SCI is enabled should be fairly simple from observing
-        the latency.
-      </para>
-
-    </formalpara>
-
+    
     <note>
       <para>
-        Before using <command>latency_bench</command>, it is necessary
-        to set the <literal>LD_PRELOAD</literal> environment variable as
-        shown later in this section.
+        Prior to MySQL 5.0.66, there were issues with building MySQL
+        Cluster with SCI support (see Bug #25470), but these have been
+        resolved due to work contributed by Dolphin. SCI Sockets are now
+        correctly supported for MySQL Cluster hosts running recent
+        versions of Linux using the <literal>-max</literal> builds, and
+        versions of MySQL Cluster with SCI Transporter support can be
+        built using either of <command>compile-amd64-max-sci</command>
+        or <command>compile-pentium64-max-sci</command>. Both of these
+        build scripts can be found in the <filename>BUILD</filename>
+        directory of the MySQL Cluster source trees; it should not be
+        difficult to adapt them for other platforms. Generally, all that
+        is necessary to compile MySQL Cluster with SCI Transporter
+        support is to configure the MySQL Cluster build using
+        <option>--with-ndb-sci=/opt/DIS</option>.
       </para>
     </note>
-
+    
     <para>
-      To set up a server, use the following:
+      Once you have acquired the required Dolphin hardware and software,
+      you can obtain detailed information on how to adapt a MySQL
+      Cluster configured for normal TCP/IP communication to use SCI from
+      the <citetitle>Dolphin Express for MySQL Installation and
+        Reference Guide</citetitle>, available for download at
+      <ulink url="http://docsrva.mysql.com/public/DIS_install_guide_book.pdf"/>
+      (PDF file, 94 pages, 753 KB). This document provides instructions
+      for installing the SCI hardware and software, as well as
+      information concerning network topology and configuration.
     </para>
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/bin/socket</userinput>
-shell&gt; <userinput>./latency_bench -server</userinput>
-</programlisting>
-
-    <para>
-      To run a client, use <command>latency_bench</command> again,
-      except this time with the <option>-client</option> option:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/bin/socket</userinput>
-shell&gt; <userinput>./latency_bench -client <replaceable>server_hostname</replaceable></userinput>
-</programlisting>
-
-    <para>
-      SCI socket configuration should now be complete and MySQL Cluster
-      ready to use both SCI Sockets and the SCI transporter (see
-      <xref linkend="mysql-cluster-sci-definition"/>).
-    </para>
-
-    <formalpara>
-
-      <title>Starting the cluster</title>
-
-      <para>
-        The next step in the process is to start MySQL Cluster. To
-        enable usage of SCI Sockets it is necessary to set the
-        environment variable <literal>LD_PRELOAD</literal> before
-        starting <command>ndbd</command>, <command>mysqld</command>, and
-        <command>ndb_mgmd</command>. This variable should point to the
-        kernel library for SCI Sockets.
-      </para>
-
-    </formalpara>
-
-    <para>
-      To start <command>ndbd</command> in a bash shell, do the
-      following:
-    </para>
-
-<programlisting>
-bash-shell&gt; <userinput>export LD_PRELOAD=/opt/DIS/lib/libkscisock.so</userinput>
-bash-shell&gt; <userinput>ndbd</userinput>
-</programlisting>
-
-    <para>
-      In a tcsh environment the same thing can be accomplished with:
-    </para>
-
-<programlisting>
-tcsh-shell&gt; <userinput>setenv LD_PRELOAD=/opt/DIS/lib/libkscisock.so</userinput>
-tcsh-shell&gt; <userinput>ndbd</userinput>
-</programlisting>
-
-    <note>
-      <para>
-        MySQL Cluster can use only the kernel variant of SCI Sockets.
-      </para>
-    </note>
-
+    
   </section>
 
   <section id="mysql-cluster-interconnects-performance">

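The rewritten section now defers transporter setup to the mysql-cluster-sci-definition section. For orientation, here is a sketch of the corresponding config.ini fragment; the [SCI] parameter names are those documented for SCI transport connections in the manual, while the node IDs and SCI IDs below are invented for illustration:

# Hypothetical SCI transporter link between data nodes 2 and 3.
# Host1SciId0/Host2SciId0 are the SCI node IDs assigned to each
# host's card (for example, with sciconfig -c 1 -a 0 -n <id>).
[SCI]
NodeId1=2
NodeId2=3
Host1SciId0=8
Host2SciId0=12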

Modified: trunk/refman-5.1/mysql-cluster-interconnects.xml
===================================================================
--- trunk/refman-5.1/mysql-cluster-interconnects.xml	2008-11-28 03:28:32 UTC (rev 12687)
+++ trunk/refman-5.1/mysql-cluster-interconnects.xml	2008-11-28 09:51:51 UTC (rev 12688)
Changed blocks: 2, Lines Added: 41, Lines Deleted: 481; 19604 bytes

@@ -92,7 +92,8 @@
     the TCP/IP stack to one extent or another. We have experimented with
     both of these techniques using the SCI (Scalable Coherent Interface)
     technology developed by
-    <ulink url="http://www.dolphinics.com/">Dolphin</ulink>.
+    <ulink url="http://www.dolphinics.com/">Dolphin Interconnect
+    Solutions</ulink>.
   </para>
 
   <section id="mysql-cluster-sci-sockets">

@@ -114,498 +115,57 @@
       <see>MySQL Cluster</see>
     </indexterm>
 
-    <para>
-      In this section, we show how to adapt a cluster configured for
-      normal TCP/IP communication to use SCI Sockets instead. This
-      documentation is based on SCI Sockets version 2.3.0 as of 01
-      October 2004.
-    </para>
+    <remark role="NOTE">
+      [js] Update following para to mention Windows when 6.4 becomes
+      available.
+    </remark>
 
-    <formalpara>
-
-      <title>Prerequisites</title>
-
-      <para>
-        Any machines with which you wish to use SCI Sockets must be
-        equipped with SCI cards.
-      </para>
-
-    </formalpara>
-
     <para>
-      No special builds (other than the <literal>-max</literal> builds)
-      are needed for SCI Sockets because it uses normal TCP/IP socket
-      calls which are already available in MySQL Cluster. However, SCI
-      Sockets are currently supported only on the Linux 2.4 and 2.6
-      kernels. For other operating systems, you can use SCI
-      Transporters, but this requires that the server be built using
-      <option>--with-ndb-sci=/opt/DIS</option>.
+      Using Scalable Coherent Interface (SCI) technology, it is
+      possible to achieve a significant increase in connection speeds
+      and throughput between MySQL Cluster data and SQL nodes. To use
+      SCI, it is necessary to obtain and install Dolphin SCI network
+      cards and to use the drivers and other software supplied by
+      Dolphin. You can get information on obtaining these from
+      <ulink url="http://www.dolphinics.com/">Dolphin Interconnect
+      Solutions</ulink>. SCI SuperSocket or SCI Transporter support is
+      available for 32-bit and 64-bit Linux, Solaris, and other
+      platforms. See the Dolphin documentation referenced later in this
+      section for more detailed information regarding platforms
+      supported for SCI.
     </para>
 
-    <para>
-      Prior to MySQL 5.1.20, there were issues with building MySQL
-      Cluster with SCI support (see Bug #25470), but these have been
-      resolved due to work contributed by Dolphin International. SCI
-      Sockets are now correctly supported for MySQL Cluster using the
-      <literal>-max</literal> builds, and versions of MySQL Cluster with
-      SCI Transporter support can be built using either of
-      <command>compile-amd64-max-sci</command> or
-      <command>compile-pentium64-max-sci</command>. Both of these build
-      scripts can be found in the <filename>BUILD</filename> directory
-      of the MySQL &current-series; source; it should not be difficult
-      to adapt them for other platforms.
-    </para>
-
-    <indexterm>
-      <primary>MySQL Cluster</primary>
-      <secondary>SCI software requirements</secondary>
-    </indexterm>
-
-    <para>
-      There are essentially four requirements for SCI Sockets:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          Building the SCI Socket libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          Installation of the SCI Socket kernel libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          Installation of one or two configuration files.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          The SCI Socket kernel library must be enabled either for the
-          entire machine or for the shell where the MySQL Cluster
-          processes are started.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      This process needs to be repeated for each machine in the cluster
-      where you plan to use SCI Sockets for inter-node communication.
-    </para>
-
-    <para>
-      Two packages need to be retrieved to get SCI Sockets working:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          The source code package containing the DIS support libraries
-          for the SCI Sockets libraries.
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          The source code package for the SCI Socket libraries
-          themselves.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      Currently, these are available only in source code format. The
-      latest versions of these packages at the time of this writing were
-      available as (respectively)
-      <filename>DIS_GPL_2_5_0_SEP_10_2004.tar.gz</filename> and
-      <filename>SCI_SOCKET_2_3_0_OKT_01_2004.tar.gz</filename>. You
-      should be able to find these (or possibly newer versions) at
-      <ulink url="http://www.dolphinics.com/support/downloads.html"/>.
-    </para>
-
-    <formalpara>
-
-      <title>Package Installation</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI software installation</secondary>
-      </indexterm>
-
-      <para>
-        Once you have obtained the library packages, the next step is to
-        unpack them into appropriate directories, with the SCI Sockets
-        library unpacked into a directory below the DIS code. Next, you
-        need to build the libraries. This example shows the commands
-        used on Linux/x86 to perform this task:
-      </para>
-
-    </formalpara>
-
-<programlisting>
-shell&gt; <userinput>tar xzf DIS_GPL_2_5_0_SEP_10_2004.tar.gz</userinput>
-shell&gt; <userinput>cd DIS_GPL_2_5_0_SEP_10_2004/src/</userinput>
-shell&gt; <userinput>tar xzf ../../SCI_SOCKET_2_3_0_OKT_01_2004.tar.gz</userinput>
-shell&gt; <userinput>cd ../adm/bin/Linux_pkgs</userinput>
-shell&gt; <userinput>./make_PSB_66_release</userinput>
-</programlisting>
-
-    <para>
-      It is possible to build these libraries for some 64-bit
-      processors. To build the libraries for Opteron CPUs using the
-      64-bit extensions, run
-      <command>make_PSB_66_X86_64_release</command> rather than
-      <command>make_PSB_66_release</command>. If the build is made on an
-      Itanium machine, you should use
-      <command>make_PSB_66_IA64_release</command>. The X86-64 variant
-      should work for Intel EM64T architectures but this has not yet (to
-      our knowledge) been tested.
-    </para>
-
-    <para>
-      Once the build process is complete, the compiled libraries will be
-      found in a zipped tar file with a name along the lines of
-      <filename>DIS-<replaceable>&lt;operating-system&gt;</replaceable>-<replaceable>time</replaceable>-<replaceable>date</replaceable></filename>.
-      It is now time to install the package in the proper place. In this
-      example we will place the installation in
-      <filename>/opt/DIS</filename>. You most likely need to run the
-      following as the system <literal>root</literal> user.
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cp DIS_Linux_2.4.20-8_181004.tar.gz /opt/</userinput>
-shell&gt; <userinput>cd /opt</userinput>
-shell&gt; <userinput>tar xzf DIS_Linux_2.4.20-8_181004.tar.gz</userinput>
-shell&gt; <userinput>mv DIS_Linux_2.4.20-8_181004 DIS</userinput>
-</programlisting>
-
-    <formalpara>
-
-      <title>Network configuration</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>network configuration (SCI)</secondary>
-      </indexterm>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI network configuration</secondary>
-      </indexterm>
-
-      <para>
-        Now that all the libraries and binaries are in their proper
-        place, we need to ensure that the SCI cards have proper node IDs
-        within the SCI address space.
-      </para>
-
-    </formalpara>
-
-    <para>
-      It is also necessary to decide on the network structure before
-      proceeding. There are three types of network structures which can
-      be used in this context:
-    </para>
-
-    <itemizedlist>
-
-      <listitem>
-        <para>
-          A simple one-dimensional ring
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          One or more SCI switches with one ring per switch port
-        </para>
-      </listitem>
-
-      <listitem>
-        <para>
-          A two- or three-dimensional torus.
-        </para>
-      </listitem>
-
-    </itemizedlist>
-
-    <para>
-      Each of these topologies has its own method for providing node
-      IDs. We discuss each of them in brief.
-    </para>
-
-    <para>
-      A simple ring uses node IDs which are non-zero multiples of 4: 4,
-      8, 12,...
-    </para>
-
-    <para>
-      The next possibility uses SCI switches. An SCI switch has 8 ports,
-      each of which can support a ring. It is necessary to make sure
-      that different rings use different node ID spaces. In a typical
-      configuration, the first port uses node IDs below 64 (4 &minus;
-      60), the next 64 node IDs (68 &minus; 124) are assigned to the
-      next port, and so on, with node IDs 452 &minus; 508 being assigned
-      to the eighth port.
-    </para>
-
-    <para>
-      Two- and three-dimensional torus network structures take into
-      account where each node is located in each dimension, incrementing
-      by 4 for each node in the first dimension, by 64 in the second
-      dimension, and (where applicable) by 1024 in the third dimension.
-      See
-      <ulink url="http://www.dolphinics.com/support/index.html">Dolphin's
-      Web site</ulink> for more thorough documentation.
-    </para>
-
-    <para>
-      In our testing we have used switches, although most large cluster
-      installations use 2- or 3-dimensional torus structures. The
-      advantage provided by switches is that, with dual SCI cards and
-      dual switches, it is possible to build with relative ease a
-      redundant network where the average failover time on the SCI
-      network is on the order of 100 microseconds. This is supported by
-      the SCI transporter in MySQL Cluster and is also under development
-      for the SCI Socket implementation.
-    </para>
-
-    <para>
-      Failover for the 2D/3D torus is also possible but requires sending
-      out new routing indexes to all nodes. However, this requires only
-      100 milliseconds or so to complete and should be acceptable for
-      most high-availability cases.
-    </para>
-
-    <para>
-      By placing cluster data nodes properly within the switched
-      architecture, it is possible to use 2 switches to build a
-      structure whereby 16 computers can be interconnected and no single
-      failure can hinder more than one of them. With 32 computers and 2
-      switches it is possible to configure the cluster in such a manner
-      that no single failure can cause the loss of more than two nodes;
-      in this case, it is also possible to know which pair of nodes is
-      affected. Thus, by placing the two nodes in separate node groups,
-      it is possible to build a <quote>safe</quote> MySQL Cluster
-      installation.
-    </para>
-
-    <para>
-      To set the node ID for an SCI card use the following command in
-      the <filename>/opt/DIS/sbin</filename> directory. In this example,
-      <option>-c 1</option> refers to the number of the SCI card (this
-      is always 1 if there is only 1 card in the machine); <option>-a
-      0</option> refers to adapter 0; and <literal>68</literal> is the
-      node ID:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>./sciconfig -c 1 -a 0 -n 68</userinput>
-</programlisting>
-
-    <para>
-      If you have multiple SCI cards in the same machine, you can
-      determine which card has which slot by issuing the following
-      command (again we assume that the current working directory is
-      <filename>/opt/DIS/sbin</filename>):
-    </para>
-
-<programlisting>
-shell&gt; <userinput>./sciconfig -c 1 -gsn</userinput>
-</programlisting>
-
-    <para>
-      This will give you the SCI card's serial number. Then repeat this
-      procedure with <option>-c 2</option>, and so on, for each card in
-      the machine. Once you have matched each card with a slot, you can
-      set node IDs for all cards.
-    </para>
-
-    <para>
-      After the necessary libraries and binaries are installed, and the
-      SCI node IDs are set, the next step is to set up the mapping from
-      hostnames (or IP addresses) to SCI node IDs. This is done in the
-      SCI sockets configuration file, which should be saved as
-      <filename>/etc/sci/scisock.conf</filename>. In this file, each SCI
-      node ID is mapped through the proper SCI card to the hostname or
-      IP address that it is to communicate with. Here is a very simple
-      example of such a configuration file:
-    </para>
-
-<programlisting>
-#host           #nodeId
-alpha           8
-beta            12
-192.168.10.20   16
-</programlisting>
-
-    <para>
-      It is also possible to limit the configuration so that it applies
-      only to a subset of the available ports for these hosts. An
-      additional configuration file
-      <filename>/etc/sci/scisock_opt.conf</filename> can be used to
-      accomplish this, as shown here:
-    </para>
-
-<programlisting>
-#-key                        -type        -values
-EnablePortsByDefault                yes
-EnablePort                  tcp           2200
-DisablePort                 tcp           2201
-EnablePortRange             tcp           2202 2219
-DisablePortRange            tcp           2220 2231
-</programlisting>
-
-    <formalpara>
-
-      <title>Driver installation</title>
-
-      <indexterm>
-        <primary>MySQL Cluster</primary>
-        <secondary>SCI drivers</secondary>
-      </indexterm>
-
-      <para>
-        With the configuration files in place, the drivers can be
-        installed.
-      </para>
-
-    </formalpara>
-
-    <para>
-      First, the low-level drivers and then the SCI socket driver need
-      to be installed:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd DIS/sbin/</userinput>
-shell&gt; <userinput>./drv-install add PSB66</userinput>
-shell&gt; <userinput>./scisocket-install add</userinput>
-</programlisting>
-
-    <para>
-      If desired, the installation can be checked by invoking a script
-      which verifies that all nodes in the SCI socket configuration
-      files are accessible:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/sbin/</userinput>
-shell&gt; <userinput>./status.sh</userinput>
-</programlisting>
-
-    <para>
-      If you discover an error and need to change the SCI socket
-      configuration, it is necessary to use
-      <command>ksocketconfig</command> to accomplish this task:
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/util</userinput>
-shell&gt; <userinput>./ksocketconfig -f</userinput>
-</programlisting>
-
-      For more information about <command>ksocketconfig</command>,
-      consult the documentation available from
-      <ulink url="http://www.dolphinics.com/support/documentation.html"/>.
-    </para>
-
-    <formalpara>
-
-      <title>Testing the setup</title>
-
-      <para>
-        To ensure that SCI sockets are actually being used, you can
-        employ the <command>latency_bench</command> test program. Using
-        this utility's server component, clients can connect to the
-        server to test the latency of the connection. Determining
-        whether SCI is enabled should be fairly simple from observing
-        the latency.
-      </para>
-
-    </formalpara>
-
     <note>
       <para>
-        Before using <command>latency_bench</command>, it is necessary
-        to set the <literal>LD_PRELOAD</literal> environment variable as
-        shown later in this section.
+        Prior to MySQL 5.1.20, there were issues with building MySQL
+        Cluster with SCI support (see Bug #25470), but these have been
+        resolved due to work contributed by Dolphin. SCI Sockets are now
+        correctly supported for MySQL Cluster hosts running recent
+        versions of Linux using the <literal>-max</literal> builds, and
+        versions of MySQL Cluster with SCI Transporter support can be
+        built using either of <command>compile-amd64-max-sci</command>
+        or <command>compile-pentium64-max-sci</command>. Both of these
+        build scripts can be found in the <filename>BUILD</filename>
+        directory of the MySQL Cluster source trees; it should not be
+        difficult to adapt them for other platforms. Generally, all that
+        is necessary to compile MySQL Cluster with SCI Transporter
+        support is to configure the MySQL Cluster build using
+        <option>--with-ndb-sci=/opt/DIS</option>.
       </para>
     </note>
 
     <para>
-      To set up a server, use the following:
+      Once you have acquired the required Dolphin hardware and software,
+      you can obtain detailed information on how to adapt a MySQL
+      Cluster configured for normal TCP/IP communication to use SCI from
+      the <citetitle>Dolphin Express for MySQL Installation and
+      Reference Guide</citetitle>, available for download at
+      <ulink url="http://docsrva.mysql.com/public/DIS_install_guide_book.pdf"/>
+      (PDF file, 94 pages, 753 KB). This document provides instructions
+      for installing the SCI hardware and software, as well as
+      information concerning network topology and configuration.
     </para>
 
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/bin/socket</userinput>
-shell&gt; <userinput>./latency_bench -server</userinput>
-</programlisting>
-
-    <para>
-      To run a client, use <command>latency_bench</command> again,
-      except this time with the <option>-client</option> option:
-    </para>
-
-<programlisting>
-shell&gt; <userinput>cd /opt/DIS/bin/socket</userinput>
-shell&gt; <userinput>./latency_bench -client <replaceable>server_hostname</replaceable></userinput>
-</programlisting>
-
-    <para>
-      SCI socket configuration should now be complete and MySQL Cluster
-      ready to use both SCI Sockets and the SCI transporter (see
-      <xref linkend="mysql-cluster-sci-definition"/>).
-    </para>
-
-    <formalpara>
-
-      <title>Starting the cluster</title>
-
-      <para>
-        The next step in the process is to start MySQL Cluster. To
-        enable usage of SCI Sockets it is necessary to set the
-        environment variable <literal>LD_PRELOAD</literal> before
-        starting <command>ndbd</command>, <command>mysqld</command>, and
-        <command>ndb_mgmd</command>. This variable should point to the
-        kernel library for SCI Sockets.
-      </para>
-
-    </formalpara>
-
-    <para>
-      To start <command>ndbd</command> in a bash shell, do the
-      following:
-    </para>
-
-<programlisting>
-bash-shell&gt; <userinput>export LD_PRELOAD=/opt/DIS/lib/libkscisock.so</userinput>
-bash-shell&gt; <userinput>ndbd</userinput>
-</programlisting>
-
-    <para>
-      In a tcsh environment the same thing can be accomplished with:
-    </para>
-
-<programlisting>
-tcsh-shell&gt; <userinput>setenv LD_PRELOAD=/opt/DIS/lib/libkscisock.so</userinput>
-tcsh-shell&gt; <userinput>ndbd</userinput>
-</programlisting>
-
-    <note>
-      <para>
-        MySQL Cluster can use only the kernel variant of SCI Sockets.
-      </para>
-    </note>
-
   </section>
 
   <section id="mysql-cluster-interconnects-performance">

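The deleted paragraphs describe the switch-based node-ID scheme only in prose: IDs are multiples of 4, and each of a switch's eight ports gets a block of 64 IDs (4-60 for the first port, 68-124 for the second, up to 452-508 for the eighth). A small shell loop, given here purely as a sanity check when assigning IDs with sciconfig, reproduces those ranges:

for port in 1 2 3 8; do
  first=$(( (port - 1) * 64 + 4 ))  # first node ID on this port's ring
  last=$(( port * 64 - 4 ))         # last node ID; IDs step by 4 within a ring
  echo "port $port: node IDs $first-$last"
done

This prints "port 1: node IDs 4-60", "port 2: node IDs 68-124", "port 3: node IDs 132-188", and "port 8: node IDs 452-508", matching the ranges in the text being removed.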
