From:jon Date:January 31 2006 9:50am
Subject:bk commit into 5.1 tree (jon:1.2100)
Below is the list of changes that have just been committed into a local
5.1 repository of jon. When jon does a push these changes will
be propagated to the main repository and, within 24 hours after the
push, to the public repository.
For information on how to access the public repository
see http://dev.mysql.com/doc/mysql/en/installing-source-tree.html

ChangeSet
  1.2100 06/01/31 19:50:26 jon@stripped +4 -0
  Fixes/revisions for new version of NDB API doc.

  storage/ndb/include/ndbapi/ndb_cluster_connection.hpp
    1.15 06/01/31 19:34:22 jon@stripped +27 -25
    Fixes/revisions for new version of NDB API doc.

  storage/ndb/include/ndbapi/NdbError.hpp
    1.11 06/01/31 19:34:20 jon@stripped +35 -35
    Fixes/revisions for new version of NDB API doc.

  storage/ndb/include/ndbapi/NdbDictionary.hpp
    1.65 06/01/31 19:34:19 jon@stripped +463 -447
    Fixes/revisions for new version of NDB API doc.

  storage/ndb/include/ndbapi/NdbBlob.hpp
    1.19 06/01/31 19:34:18 jon@stripped +105 -91
    Fixes/revisions for new version of NDB API doc.

# This is a BitKeeper patch.  What follows are the unified diffs for the
# set of deltas contained in the patch.  The rest of the patch, the part
# that BitKeeper cares about, is below these diffs.
# User:	jon
# Host:	gigan.site
# Root:	/home/jon/bk/mysql-5.1-ndbapi-working

--- 1.64/storage/ndb/include/ndbapi/NdbDictionary.hpp	2006-01-26 07:00:27 +10:00
+++ 1.65/storage/ndb/include/ndbapi/NdbDictionary.hpp	2006-01-31 19:34:19 +10:00
@@ -28,34 +28,34 @@
  * @brief Data dictionary class
  * 
  * The preferred and supported way to create and drop tables and indexes
- * in ndb is through the 
- * MySQL Server (see MySQL reference Manual, section MySQL Cluster).
+ * in NDB is through the MySQL Server (see the MySQL Manual, in 
+ * particular http://dev.mysql.com/doc/refman/5.1/en/ndbcluster.html).
  *
- * Tables and indexes that are created directly through the 
- * NdbDictionary class
- * can not be viewed from the MySQL Server.
- * Dropping indexes directly via the NdbApi will cause inconsistencies
- * if they were originally created from a MySQL Cluster.
+ * @note  Tables and indexes that are created directly through the 
+ *        NdbDictionary class cannot be viewed from the MySQL Server. 
+ *        Dropping indexes via the NDB API that were created originally 
+ *        from a MySQL Cluster causes inconsistencies.
  * 
  * This class supports schema data enquiries such as:
  * -# Enquiries about tables
- *    (Dictionary::getTable, Table::getNoOfColumns, 
- *    Table::getPrimaryKey, and Table::getNoOfPrimaryKeys)
+ *    (Dictionary::getTable(), Table::getNoOfColumns(), 
+ *    Table::getPrimaryKey(), and Table::getNoOfPrimaryKeys()).
  * -# Enquiries about indexes
- *    (Dictionary::getIndex, Index::getNoOfColumns, 
- *    and Index::getColumn)
+ *    (Dictionary::getIndex(), Index::getNoOfColumns(), 
+ *    and Index::getColumn()).
  *
  * This class supports schema data definition such as:
- * -# Creating tables (Dictionary::createTable) and table columns
- * -# Dropping tables (Dictionary::dropTable)
- * -# Creating secondary indexes (Dictionary::createIndex)
- * -# Dropping secondary indexes (Dictionary::dropIndex)
+ * -# Creating tables (Dictionary::createTable()) and table columns.
+ * -# Dropping tables (Dictionary::dropTable()).
+ * -# Creating secondary indexes (Dictionary::createIndex()).
+ * -# Dropping secondary indexes (Dictionary::dropIndex()).
  *
- * NdbDictionary has several help (inner) classes to support this:
- * -# NdbDictionary::Dictionary the dictionary handling dictionary objects
- * -# NdbDictionary::Table for creating tables
- * -# NdbDictionary::Column for creating table columns
- * -# NdbDictionary::Index for creating secondary indexes
+ * NdbDictionary has several helper (inner) classes to support this:
+ * -# NdbDictionary::Dictionary - The dictionary handling dictionary 
+ *    objects.
+ * -# NdbDictionary::Table - Used for creating tables.
+ * -# NdbDictionary::Column - Used for creating table columns.
+ * -# NdbDictionary::Index  - Used for creating secondary indexes.
  *
  * See @ref ndbapi_simple_index.cpp for details of usage.
  */
@@ -63,7 +63,8 @@
 public:
   /**
    * @class Object
-   * @brief Meta information about a database object (a table, index, etc)
+   * @brief Provides meta-information about a database object such as a 
+   *        table or index.
    */
   class Object {
   public:
@@ -71,33 +72,36 @@
      * Status of object
      */
     enum Status {
-      New,                    ///< The object only exist in memory and 
-                              ///< has not been created in the NDB Kernel
-      Changed,                ///< The object has been modified in memory 
-                              ///< and has to be commited in NDB Kernel for 
-                              ///< changes to take effect
-      Retrieved,              ///< The object exist and has been read 
-                              ///< into main memory from NDB Kernel
-      Invalid,                ///< The object has been invalidated
-                              ///< and should not be used
-      Altered                 ///< Table has been altered in NDB kernel
-                              ///< but is still valid for usage
+      New,                    ///< The object exists only in memory, and 
+                              ///< has not yet been created in the NDB 
+                              ///< kernel.
+      Changed,                ///< The object has been modified in 
+                              ///< memory, and must be committed in the 
+                              ///< NDB kernel before changes take 
+                              ///< effect.
+      Retrieved,              ///< The object exists, and has been read 
+                              ///< into main memory from the NDB kernel.
+      Invalid,                ///< The object has been invalidated,
+                              ///< and should no longer be used.
+      Altered                 ///< The table has been altered in the NDB
+                              ///< kernel, but is still available for 
+                              ///< use.
     };
 
     /**
-     * Get status of object
+     * Gets the status of the object.
      */
     virtual Status getObjectStatus() const = 0;
 
     /**
-     * Get version of object
+     * Gets the version of the object.
      */
     virtual int getObjectVersion() const = 0;
     
     virtual int getObjectId() const = 0;
     
     /**
-     * Object type
+     * Object types:
      */
     enum Type {
       TypeUndefined = 0,      ///< Undefined
@@ -116,7 +120,7 @@
     };
 
     /**
-     * Object state
+     * Object states:
      */
     enum State {
       StateUndefined = 0,     ///< Undefined
@@ -129,30 +133,31 @@
     };
 
     /**
-     * Object store
+     * Object storage:
      */
     enum Store {
-      StoreUndefined = 0,     ///< Undefined
-      StoreTemporary = 1,     ///< Object or data deleted on system restart
-      StorePermanent = 2      ///< Permanent. logged to disk
+      StoreUndefined = 0,     ///< Undefined.
+      StoreTemporary = 1,     ///< Object or data deleted on system 
+                              ///< restart.
+      StorePermanent = 2      ///< Permanent; logged to disk.
     };
 
     /**
      * Type of fragmentation.
      *
-     * This parameter specifies how data in the table or index will
-     * be distributed among the db nodes in the cluster.<br>
-     * The bigger the table the more number of fragments should be used.
-     * Note that all replicas count as same "fragment".<br>
-     * For a table, default is FragAllMedium.  For a unique hash index,
-     * default is taken from underlying table and cannot currently
-     * be changed.
+     * This parameter specifies how data in the table or index
+     * is distributed among the storage nodes in the cluster. The larger the 
+     * table, the larger the number of fragments that should be used.
+     * Note that all replicas count as the same fragment. For a table, the 
+     * default is <code>FragAllMedium</code>. For a unique hash index, the default is 
+     * taken from the underlying table and cannot currently be changed.
      */
     enum FragmentType { 
-      FragUndefined = 0,      ///< Fragmentation type undefined or default
-      FragSingle = 1,         ///< Only one fragment
-      FragAllSmall = 2,       ///< One fragment per node, default
-      FragAllMedium = 3,      ///< two fragments per node
+      FragUndefined = 0,      ///< Fragmentation type is undefined or 
+                              ///< the default.
+      FragSingle = 1,         ///< Only one fragment.
+      FragAllSmall = 2,       ///< One fragment per node (default).
+      FragAllMedium = 3,      ///< Two fragments per node.
       FragAllLarge = 4,       ///< Four fragments per node.
       DistrKeyHash = 5,
       DistrKeyLin = 6,
@@ -167,69 +172,70 @@
    * @class Column
    * @brief Represents a column in an NDB Cluster table
    *
-   * Each column has a type. The type of a column is determined by a number 
-   * of type specifiers.
-   * The type specifiers are:
-   * - Builtin type
-   * - Array length or max length
-   * - Precision and scale (not used yet)
-   * - Character set for string types
-   * - Inline and part sizes for blobs
+   * Each column has a type. The type of a column is determined by a 
+   * number of type specifiers, which are:
+   * - Built-in type.
+   * - Array length or maximum length.
+   * - Precision and scale (not yet used).
+   * - Character set for string types.
+   * - Inline and part sizes for blobs.
    *
-   * Types in general correspond to MySQL types and their variants.
-   * Data formats are same as in MySQL.  NDB API provides no support for
-   * constructing such formats.  NDB kernel checks them however.
+   * The types in general correspond to MySQL datatypes and their 
+   * variants. The data formats are same as in MySQL. The NDB API provides 
+   * no support for constructing such formats; however, they are checked 
+   * by the NDB kernel.
    */
   class Column {
   public:
     /**
-     * The builtin column types
+     * The built-in column types. Types NDB_TYPE_TINYINT through 
+     * NDB_TYPE_DOUBLE (in the order listed) can be used in arrays.
      */
     enum Type {
-      Undefined = NDB_TYPE_UNDEFINED,   ///< Undefined 
-      Tinyint = NDB_TYPE_TINYINT,       ///< 8 bit. 1 byte signed integer, can be used in array
-      Tinyunsigned = NDB_TYPE_TINYUNSIGNED,  ///< 8 bit. 1 byte unsigned integer, can be used in array
-      Smallint = NDB_TYPE_SMALLINT,      ///< 16 bit. 2 byte signed integer, can be used in array
-      Smallunsigned = NDB_TYPE_SMALLUNSIGNED, ///< 16 bit. 2 byte unsigned integer, can be used in array
-      Mediumint = NDB_TYPE_MEDIUMINT,     ///< 24 bit. 3 byte signed integer, can be used in array
-      Mediumunsigned = NDB_TYPE_MEDIUMUNSIGNED,///< 24 bit. 3 byte unsigned integer, can be used in array
-      Int = NDB_TYPE_INT,           ///< 32 bit. 4 byte signed integer, can be used in array
-      Unsigned = NDB_TYPE_UNSIGNED,      ///< 32 bit. 4 byte unsigned integer, can be used in array
-      Bigint = NDB_TYPE_BIGINT,        ///< 64 bit. 8 byte signed integer, can be used in array
-      Bigunsigned = NDB_TYPE_BIGUNSIGNED,   ///< 64 Bit. 8 byte signed integer, can be used in array
-      Float = NDB_TYPE_FLOAT,         ///< 32-bit float. 4 bytes float, can be used in array
-      Double = NDB_TYPE_DOUBLE,        ///< 64-bit float. 8 byte float, can be used in array
-      Olddecimal = NDB_TYPE_OLDDECIMAL,    ///< MySQL < 5.0 signed decimal,  Precision, Scale
-      Olddecimalunsigned = NDB_TYPE_OLDDECIMALUNSIGNED,
-      Decimal = NDB_TYPE_DECIMAL,    ///< MySQL >= 5.0 signed decimal,  Precision, Scale
-      Decimalunsigned = NDB_TYPE_DECIMALUNSIGNED,
-      Char = NDB_TYPE_CHAR,          ///< Len. A fixed array of 1-byte chars
-      Varchar = NDB_TYPE_VARCHAR,       ///< Length bytes: 1, Max: 255
-      Binary = NDB_TYPE_BINARY,        ///< Len
-      Varbinary = NDB_TYPE_VARBINARY,     ///< Length bytes: 1, Max: 255
-      Datetime = NDB_TYPE_DATETIME,    ///< Precision down to 1 sec (sizeof(Datetime) == 8 bytes )
-      Date = NDB_TYPE_DATE,            ///< Precision down to 1 day(sizeof(Date) == 4 bytes )
-      Blob = NDB_TYPE_BLOB,        ///< Binary large object (see NdbBlob)
-      Text = NDB_TYPE_TEXT,         ///< Text blob
-      Bit = NDB_TYPE_BIT,          ///< Bit, length specifies no of bits
-      Longvarchar = NDB_TYPE_LONGVARCHAR,  ///< Length bytes: 2, little-endian
-      Longvarbinary = NDB_TYPE_LONGVARBINARY, ///< Length bytes: 2, little-endian
-      Time = NDB_TYPE_TIME,        ///< Time without date
-      Year = NDB_TYPE_YEAR,   ///< Year 1901-2155 (1 byte)
-      Timestamp = NDB_TYPE_TIMESTAMP  ///< Unix time
+      Undefined = NDB_TYPE_UNDEFINED,   ///< Undefined. 
+      Tinyint = NDB_TYPE_TINYINT,       ///< 8-bit, 1-byte signed integer.
+      Tinyunsigned = NDB_TYPE_TINYUNSIGNED,  ///< 8-bit, 1-byte unsigned integer.
+      Smallint = NDB_TYPE_SMALLINT,      ///< 16-bit, 2-byte signed integer.
+      Smallunsigned = NDB_TYPE_SMALLUNSIGNED, ///< 16-bit, 2-byte unsigned integer.
+      Mediumint = NDB_TYPE_MEDIUMINT,     ///< 24-bit, 3-byte signed integer.
+      Mediumunsigned = NDB_TYPE_MEDIUMUNSIGNED,///< 24-bit, 3-byte unsigned integer.
+      Int = NDB_TYPE_INT,           ///< 32-bit, 4-byte signed integer.
+      Unsigned = NDB_TYPE_UNSIGNED,      ///< 32-bit, 4-byte unsigned integer.
+      Bigint = NDB_TYPE_BIGINT,        ///< 64-bit, 8-byte signed integer.
+      Bigunsigned = NDB_TYPE_BIGUNSIGNED,   ///< 64-bit, 8-byte unsigned integer.
+      Float = NDB_TYPE_FLOAT,         ///< 32-bit, 4-byte float.
+      Double = NDB_TYPE_DOUBLE,        ///< 64-bit, 8-byte float.
+      Olddecimal = NDB_TYPE_OLDDECIMAL,    ///< MySQL < 5.0 signed decimal (precision, scale).
+      Olddecimalunsigned = NDB_TYPE_OLDDECIMALUNSIGNED,   ///< MySQL < 5.0 unsigned decimal (precision, scale).
+      Decimal = NDB_TYPE_DECIMAL,    ///< MySQL >= 5.0 signed decimal (precision, scale).
+      Decimalunsigned = NDB_TYPE_DECIMALUNSIGNED,    ///< MySQL >= 5.0 unsigned decimal (precision, scale).
+      Char = NDB_TYPE_CHAR,          ///< Length. A fixed array of 1-byte chars.
+      Varchar = NDB_TYPE_VARCHAR,       ///< Length bytes: 1, Max: 255.
+      Binary = NDB_TYPE_BINARY,        ///< Length.
+      Varbinary = NDB_TYPE_VARBINARY,     ///< Length bytes: 1, Max: 255.
+      Datetime = NDB_TYPE_DATETIME,    ///< Precision to 1 sec (sizeof(Datetime) == 8 bytes).
+      Date = NDB_TYPE_DATE,            ///< Precision to 1 day (sizeof(Date) == 4 bytes).
+      Blob = NDB_TYPE_BLOB,        ///< Binary large object (see NdbBlob).
+      Text = NDB_TYPE_TEXT,         ///< Text blob.
+      Bit = NDB_TYPE_BIT,          ///< Bit, length specifies number of bits.
+      Longvarchar = NDB_TYPE_LONGVARCHAR,  ///< Length bytes: 2, little-endian.
+      Longvarbinary = NDB_TYPE_LONGVARBINARY, ///< Length bytes: 2, little-endian.
+      Time = NDB_TYPE_TIME,        ///< Time without date.
+      Year = NDB_TYPE_YEAR,   ///< Year 1901-2155 (1 byte).
+      Timestamp = NDB_TYPE_TIMESTAMP  ///< Unix time.
     };
 
-    /*
-     * Array type specifies internal attribute format.
+    /**
+     * The Array type specifies the internal attribute format.
      *
-     * - ArrayTypeFixed is stored as fixed number of bytes.  This type
+     * - ArrayTypeFixed is stored as a fixed number of bytes. This type
      *   is fastest to access but can waste space.
      *
      * - ArrayTypeVar is stored as variable number of bytes with a fixed
      *   overhead of 2 bytes.
      *
-     * Default is ArrayTypeVar for Var* types and ArrayTypeFixed for
-     * others.  The default is normally ok.
+     * The default is ArrayTypeVar for Var* types and ArrayTypeFixed for
+     * others. The default is normally sufficient.
      */
     enum ArrayType {
       ArrayTypeFixed = NDB_ARRAYTYPE_FIXED,          // 0 length bytes
@@ -237,10 +243,11 @@
       ArrayTypeMediumVar = NDB_ARRAYTYPE_MEDIUM_VAR // 2 length bytes
     };
 
-    /*
-     * Storage type specifies whether attribute is stored in memory or
-     * on disk.  Default is memory.  Disk attributes are potentially
-     * much slower to access and cannot be indexed in version 5.1.
+    /**
+     * The storage type specifies whether the attribute is stored in 
+     * memory or on disk. The default type is memory. Disk attributes 
+     * are potentially much slower to access and cannot be indexed in 
+     * version 5.1.
      */
     enum StorageType {
       StorageTypeMemory = NDB_STORAGETYPE_MEMORY,
@@ -253,23 +260,24 @@
      */
     
     /**
-     * Get name of column
-     * @return  Name of the column
+     * Gets the name of the column.
+     * @return  Name of the column.
      */
     const char* getName() const;
 
     /**
-     * Get if the column is nullable or not
+     * Gets the column's nullability.
      */
     bool getNullable() const;
     
     /**
-     * Check if column is part of primary key
+     * Checks whether the column is part of the primary key.
      */
     bool getPrimaryKey() const;
 
     /**
-     *  Get number of column (horizontal position within table)
+     *  Gets the number of the column - that is, its horizontal position 
+     *  within the table.
      */
     int getColumnNo() const;
 
@@ -278,9 +286,10 @@
 #endif
 
     /**
-     * Check if column is equal to some other column
-     * @param  column  Column to compare with
-     * @return  true if column is equal to some other column otherwise false.
+     * Checks whether the column is equal to some other column.
+     * @param  column  Column to compare with.
+     * @return true if the column is equal to some other column, 
+     *         otherwise false.
      */
     bool equal(const Column& column) const;
 
@@ -292,78 +301,78 @@
      */
 
     /**
-     * Get type of column
+     * Gets the column type.
      */
     Type getType() const;
 
     /**
-     * Get precision of column.
-     * @note Only applicable for decimal types
+     * Gets the column's precision.
+     * @note This is applicable for decimal types only.
      */
     int getPrecision() const;
 
     /**
-     * Get scale of column.
-     * @note Only applicable for decimal types
+     * Gets the column's scale.
+     * @note This is applicable for decimal types only.
      */
     int getScale() const;
 
     /**
-     * Get length for column
-     * Array length for column or max length for variable length arrays.
+     * Gets the length of the column. This is the array length for the 
+     * column or the maximum length for variable length arrays.
      */
     int getLength() const;
 
     /**
-     * For Char or Varchar or Text, get MySQL CHARSET_INFO.  This
-     * specifies both character set and collation.  See get_charset()
-     * etc in MySQL.  (The cs is not "const" in MySQL).
+     * Gets MySQL CHARSET_INFO for a Char, Varchar or Text. This
+     * specifies both character set and collation. See get_charset()
+     * in the MySQL C API. (Note that the character set is not const in 
+     * MySQL).
      */
     CHARSET_INFO* getCharset() const;
 
-
     /**
-     * For blob, get "inline size" i.e. number of initial bytes
-     * to store in table's blob attribute.  This part is normally in
-     * main memory and can be indexed and interpreted.
+     * Gets the inline size of a blob - that is, the number of initial 
+     * bytes to store in the table's blob attribute. This part is 
+     * normally in main memory, and can be indexed and interpreted.
      */
     int getInlineSize() const;
 
     /**
-     * For blob, get "part size" i.e. number of bytes to store in
-     * each tuple of the "blob table".  Can be set to zero to omit parts
-     * and to allow only inline bytes ("tinyblob").
+     * Gets the segment (part) size of a blob - that is, the number of bytes to 
+     * store in each tuple of the blob table. This can be set to zero to 
+     * omit parts and to allow only inline bytes (TINYBLOB).
      */
     int getPartSize() const;
 
     /**
-     * For blob, set or get "stripe size" i.e. number of consecutive
-     * <em>parts</em> to store in each node group.
+     * Gets the stripe size of a blob - that is, the number of 
+     * consecutive <em>segments</em> (parts) of the blob to store in 
+     * each node group.
      */
     int getStripeSize() const;
 
     /**
-     * Get size of element
+     * Gets the size of an element.
      */
     int getSize() const;
 
     /** 
-     * Check if column is part of partition key
+     * Checks whether a column is part of the partition key.
      *
      * A <em>partition key</em> is a set of attributes which are used
-     * to distribute the tuples onto the NDB nodes.
-     * The partition key uses the NDB Cluster hashing function.
+     * to distribute the tuples onto the NDB nodes. The partition key 
+     * uses the NDB Cluster hashing function.
      *
-     * An example where this is useful is TPC-C where it might be
-     * good to use the warehouse id and district id as the partition key. 
-     * This would place all data for a specific district and warehouse 
-     * in the same database node.
+     * An example where this is useful is TPC-C where it might be good 
+     * to use the warehouse ID and district ID together as the partition 
+     * key. This would place all data for a specific district and 
+     * warehouse in the same data node.
      *
-     * Locally in the fragments the full primary key 
-     * will still be used with the hashing algorithm.
+     * Locally in each fragment the full primary key is used with the 
+     * hashing algorithm.
      *
-     * @return  true then the column is part of 
-     *                 the partition key.
+     * @return  true if the column is part of the partition key.
      */
     bool getPartitionKey() const;
 #ifndef DOXYGEN_SHOULD_SKIP_DEPRECATED
@@ -380,100 +389,102 @@
      * @name Column creation
      * @{
      *
-     * These operations should normally not be performed in an NbdApi program
-     * as results will not be visable in the MySQL Server
+     * These operations should normally <em>not</em> be performed in an NDB API 
+     * program, as any results are not visible to the MySQL Server.
      * 
      */
 
     /**
      * Constructor
-     * @param   name   Name of column
+     * @param   name   Name of column.
      */
     Column(const char * name = "");
     /**
-     * Copy constructor
-     * @param  column  Column to be copied
+     * Copy constructor.
+     * @param  column  Column to be copied.
      */
     Column(const Column& column); 
     ~Column();
 
     /**
-     * Set name of column
-     * @param  name  Name of the column
+     * Sets column name.
+     * @param  name  Name of the column.
      */
     void setName(const char * name);
 
     /**
-     * Set whether column is nullable or not
+     * Sets column's nullability.
      */
     void setNullable(bool);
 
     /**
-     * Set that column is part of primary key
+     * Sets the column as part of the primary key.
      */
     void setPrimaryKey(bool);
 
     /**
-     * Set type of column
-     * @param  type  Type of column
+     * Sets the column type.
+     * @param  type  Type of column.
      *
-     * @note setType resets <em>all</em> column attributes
-     *       to (type dependent) defaults and should be the first
-     *       method to call.  Default type is Unsigned.
+     * @note setType() resets <em>all</em> column attributes to (type 
+     *       dependent) defaults and should be the first method to be called. 
+     *       The default type is Unsigned.
      */
     void setType(Type type);
 
     /**
-     * Set precision of column.
-     * @note Only applicable for decimal types
+     * Sets the column precision.
+     * @note This is applicable only for decimal types.
      */
     void setPrecision(int);
 
     /**
-     * Set scale of column.
-     * @note Only applicable for decimal types
+     * Sets the column scale.
+     * @note This is applicable only for decimal types.
      */
     void setScale(int);
 
     /**
-     * Set length for column
-     * Array length for column or max length for variable length arrays.
+     * Sets the column length; this is either the array length of the column 
+     * or the maximum length for variable length arrays.
      */
     void setLength(int length);
 
     /**
-     * For Char or Varchar or Text, get MySQL CHARSET_INFO.  This
-     * specifies both character set and collation.  See get_charset()
-     * etc in MySQL.  (The cs is not "const" in MySQL).
+     * Sets MySQL CHARSET_INFO for a CHAR, VARCHAR, or TEXT column,
+     * specifying both character set and collation. See get_charset()
+     * in the MySQL C API. (Note that the character set is not const in 
+     * MySQL).
      */
     void setCharset(CHARSET_INFO* cs);
 
     /**
-     * For blob, get "inline size" i.e. number of initial bytes
-     * to store in table's blob attribute.  This part is normally in
-     * main memory and can be indexed and interpreted.
+     * Sets the inline size of a blob - that is, the number of initial 
+     * bytes to store in the table's blob attribute. This part is 
+     * normally in main memory, and can be indexed and interpreted.
      */
     void setInlineSize(int size);
 
     /**
-     * For blob, get "part size" i.e. number of bytes to store in
-     * each tuple of the "blob table".  Can be set to zero to omit parts
-     * and to allow only inline bytes ("tinyblob").
+     * Sets the segment (part) size of a blob - that is, the number of 
+     * bytes to store in each tuple of the blob table. This can be set 
+     * to zero to omit parts and to allow only inline bytes (TINYBLOB).
      */
     void setPartSize(int size);
 
     /**
-     * For blob, get "stripe size" i.e. number of consecutive
-     * <em>parts</em> to store in each node group.
+     * Sets the stripe size of a blob - that is, the number of 
+     * consecutive segments (parts) of the blob to store in each node 
+     * group.
      */
     void setStripeSize(int size);
 
     /** 
-     * Set partition key
+     * Sets the partition key.
      * @see getPartitionKey
      *
-     * @param  enable  If set to true, then the column will be part of 
-     *                 the partition key.
+     * @param  enable  If this is set to true, then the column will be 
+     *                 part of the partition key.
      */
     void setPartitionKey(bool enable);
 #ifndef DOXYGEN_SHOULD_SKIP_DEPRECATED
@@ -531,18 +542,18 @@
    *
    * <em>TableSize</em><br>
    * When calculating the data storage one should add the size of all 
-   * attributes (each attributeconsumes at least 4 bytes) and also an overhead
-   * of 12 byte. Variable size attributes (not supported yet) will have a 
-   * size of 12 bytes plus the actual data storage parts where there is an 
-   * additional overhead based on the size of the variable part.<br>
+   * attributes (each attribute consumes at least 4 bytes) as well as a 
+   * 12-byte overhead. Variable-size attributes have a size of 12 bytes 
+   * plus the actual data storage parts where there is an additional 
+   * overhead based on the size of the variable part.<br>
    * An example table with 5 attributes: 
    * one 64 bit attribute, one 32 bit attribute, 
    * two 16 bit attributes and one array of 64 8 bits. 
    * This table will consume 
-   * 12 (overhead) + 8 + 4 + 2*4 (4 is minimum) + 64 = 96 bytes per record.
-   * Additionally an overhead of about 2 % as page headers and waste should 
-   * be allocated. Thus, 1 million records should consume 96 MBytes
-   * plus the overhead 2 MByte and rounded up to 100 000 kBytes.<br>
+   * 12 (overhead) + 8 + 4 + 2*4 (4 is the minimum) + 64 = 96 bytes per 
+   * record. An additional overhead of about 2% for page headers and 
+   * waste should be allocated. Thus, 1 million records should consume 
+   * 96 MB plus about 2 MB of overhead, rounded up to 100 MB.<br>
    *
    */
   class Table : public Object {
@@ -553,36 +564,36 @@
      */
 
     /**
-     * Get table name
+     * Gets the table name.
      */
     const char * getName() const;
 
     /**
-     * Get table id
+     * Gets the table ID.
      */ 
     int getTableId() const;
     
     /**
-     * Get column definition via name.
-     * @return null if none existing name
+     * Gets the column definition (via the column name).
+     * @return <code>NULL</code>, if none exists with this name.
      */
     const Column* getColumn(const char * name) const;
     
     /**
      * Get column definition via index in table.
-     * @return null if none existing name
+     * @return <code>NULL</code>, if none exists with this ID.
      */
     Column* getColumn(const int attributeId);
 
     /**
      * Get column definition via name.
-     * @return null if none existing name
+     * @return <code>NULL</code>, if none exists with this name.
      */
     Column* getColumn(const char * name);
     
     /**
      * Get column definition via index in table.
-     * @return null if none existing name
+     * @return <code>NULL</code>, if none exists with this ID.
      */
     const Column* getColumn(const int attributeId) const;
     
@@ -593,15 +604,15 @@
      */
 
     /**
-     * If set to false, then the table is a temporary 
-     * table and is not logged to disk.
+     * If this is set to <code>false</code>, then the table is a 
+     * temporary table and is not logged to disk.
      *
-     * In case of a system restart the table will still
-     * be defined and exist but will be empty. 
-     * Thus no checkpointing and no logging is performed on the table.
+     * In the event of a system restart, the table will still be defined 
+     * (and so actually exist) but will be empty. Thus, neither 
+     * checkpointing nor logging is performed on the table.
      *
-     * The default value is true and indicates a normal table 
-     * with full checkpointing and logging activated.
+     * The default value is <code>true</code> and indicates a normal 
+     * table with full checkpointing and logging activated.
      */
     bool getLogging() const;
 
@@ -611,28 +622,27 @@
     FragmentType getFragmentType() const;
     
     /**
-     * Get KValue (Hash parameter.)
-     * Only allowed value is 6.
+     * Gets <var>KValue</var> (Hash parameter).
+     * Currently, the only value permitted is 6.
      * Later implementations might add flexibility in this parameter.
      */
     int getKValue() const;
 
     /**
-     * Get MinLoadFactor  (Hash parameter.)
-     * This value specifies the load factor when starting to shrink 
-     * the hash table. 
-     * It must be smaller than MaxLoadFactor.
-     * Both these factors are given in percentage.
+     * Gets <var>MinLoadFactor</var> (Hash parameter).
+     * This value specifies the load factor when starting to shrink the 
+     * hash table, and must be smaller than <var>MaxLoadFactor</var>.
+     * Both of these factors are given as percentages.
      */
     int getMinLoadFactor() const;
 
     /**
-     * Get MaxLoadFactor  (Hash parameter.)
-     * This value specifies the load factor when starting to split 
-     * the containers in the local hash tables. 
-     * 100 is the maximum which will optimize memory usage.
-     * A lower figure will store less information in each container and thus
-     * find the key faster but consume more memory.
+     * Gets <var>MaxLoadFactor</var> (Hash parameter).
+     * This value specifies the load factor when starting to split the 
+     * containers in the local hash tables. 
+     * The maximum value is 100, which optimises memory usage.
+     * A lower figure stores less information in each container and 
+     * thus finds keys faster, but also consumes more memory.
      */
     int getMaxLoadFactor() const;
 
@@ -643,45 +653,45 @@
      */
     
     /**
-     * Get number of columns in the table
+     * Gets the number of columns in the table.
      */
     int getNoOfColumns() const;
     
     /**
-     * Get number of primary keys in the table
+     * Gets the number of primary keys in the table.
      */
     int getNoOfPrimaryKeys() const;
 
     /**
-     * Get name of primary key 
+     * Gets the name of a primary key.
      */
     const char* getPrimaryKey(int no) const;
     
     /**
-     * Check if table is equal to some other table
+     * Determines whether the table is equal to some other table.
      */
     bool equal(const Table&) const;
 
     /**
-     * Get frm file stored with this table
+     * Gets the <code>.frm</code> file stored with this table.
      */
     const void* getFrmData() const;
     Uint32 getFrmLength() const;
 
     /**
-     * Get Fragment Data (id, state and node group)
+     * Gets fragment data (id, state, and node group).
      */
     const void *getFragmentData() const;
     Uint32 getFragmentDataLen() const;
 
     /**
-     * Get Range or List Array (value, partition)
+     * Gets range or list array (value, partition).
      */
     const void *getRangeListData() const;
     Uint32 getRangeListDataLen() const;
 
     /**
-     * Get Tablespace Data (id, version)
+     * Gets tablespace data (ID, version).
      */
     const void *getTablespaceData() const;
     Uint32 getTablespaceDataLen() const;
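The metadata getters above are normally reached through an Ndb object's Dictionary. A minimal sketch, assuming a connected Ndb object and a placeholder table name "t1" (error handling abbreviated):

```cpp
// Sketch only: requires the NDB API headers and a running cluster.
#include <NdbApi.hpp>
#include <cstdio>

void inspect_table(Ndb *ndb)
{
  NdbDictionary::Dictionary *dict = ndb->getDictionary();
  // "t1" is a placeholder table name.
  const NdbDictionary::Table *tab = dict->getTable("t1");
  if (tab == NULL) {
    printf("getTable failed: %s\n", dict->getNdbError().message);
    return;
  }
  printf("columns: %d, primary keys: %d\n",
         tab->getNoOfColumns(), tab->getNoOfPrimaryKeys());
  for (int i = 0; i < tab->getNoOfPrimaryKeys(); i++)
    printf("  pk %d: %s\n", i, tab->getPrimaryKey(i));
}
```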
@@ -692,91 +702,90 @@
      * @name Table creation
      * @{
      *
-     * These methods should normally not be used in an application as
-     * the result is not accessible from the MySQL Server
-     *
+     * These methods should normally not be used in an application, as
+     * the result is not accessible from the MySQL Server. Use MySQL
+     * <code>CREATE TABLE</code> statements instead.
      */
 
     /**
      * Constructor
-     * @param  name   Name of table
+     * @param  name   Name of table.
      */
     Table(const char * name = "");
 
     /** 
      * Copy constructor 
-     * @param  table  Table to be copied
+     * @param  table  Table to be copied.
      */
     Table(const Table& table); 
     virtual ~Table();
     
     /**
-     * Assignment operator, deep copy
-     * @param  table  Table to be copied
+     * Assignment operator for a deep copy.
+     * @param  table  Table to be copied.
      */
     Table& operator=(const Table& table);
 
     /**
-     * Name of table
-     * @param  name  Name of table
+     * Name of the table.
+     * @param  name  Name of table.
      */
     void setName(const char * name);
 
     /**
-     * Add a column definition to a table
-     * @note creates a copy
+     * Adds a column definition to a table.
+     * @note Creates a copy.
      */
     void addColumn(const Column &);
     
     /**
-     * @see NdbDictionary::Table::getLogging.
+     * See @ref NdbDictionary::Table::getLogging().
      */
     void setLogging(bool); 
   
     /**
-     * Set/Get Linear Hash Flag
+     * Sets/Gets the linear hash flag.
      */ 
     void setLinearFlag(Uint32 flag);
     bool getLinearFlag() const;
 
     /**
-     * Set fragment count
+     * Sets the fragment count.
      */
     void setFragmentCount(Uint32);
 
     /**
-     * Get fragment count
+     * Gets the fragment count.
      */
     Uint32 getFragmentCount() const;
 
     /**
-     * Set fragmentation type
+     * Sets the fragmentation type.
      */
     void setFragmentType(FragmentType);
 
     /**
-     * Set KValue (Hash parameter.)
-     * Only allowed value is 6.
+     * Sets <var>KValue</var> (hash parameter).
+     * Currently, the only value permitted is 6.
      * Later implementations might add flexibility in this parameter.
      */
     void setKValue(int kValue);
     
     /**
-     * Set MinLoadFactor  (Hash parameter.)
-     * This value specifies the load factor when starting to shrink 
-     * the hash table. 
-     * It must be smaller than MaxLoadFactor.
-     * Both these factors are given in percentage.
+     * Sets <var>MinLoadFactor</var> (hash parameter).
+     * This value specifies the load factor when starting to shrink the 
+     * hash table, and must be smaller than <var>MaxLoadFactor</var>.
+     * Both factors are given as percentages.
      */
     void setMinLoadFactor(int);
 
     /**
-     * Set MaxLoadFactor  (Hash parameter.)
-     * This value specifies the load factor when starting to split 
-     * the containers in the local hash tables. 
-     * 100 is the maximum which will optimize memory usage.
-     * A lower figure will store less information in each container and thus
-     * find the key faster but consume more memory.
+     * Sets <var>MaxLoadFactor</var> (hash parameter).
+     * This value specifies the load factor when starting to split the 
+     * containers in the local hash tables. 
+     * The maximum value is 100, which optimises memory usage.
+     * A lower figure stores less information in each container and thus
+     * finds keys faster, but consumes more memory.
      */
     void setMaxLoadFactor(int);
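The table-creation setters above combine roughly as follows. This is a sketch only: the Column setters shown (setType(), setPrimaryKey()) are assumed from the wider API and do not appear in this excerpt, and an application would normally use a MySQL CREATE TABLE statement instead:

```cpp
// Sketch only: normally you would use a MySQL CREATE TABLE statement.
#include <NdbApi.hpp>

int create_example_table(NdbDictionary::Dictionary *dict)
{
  NdbDictionary::Table tab("t1");                // placeholder table name

  NdbDictionary::Column col("id");               // column setters below are
  col.setType(NdbDictionary::Column::Unsigned);  // assumed from the full API
  col.setPrimaryKey(true);
  tab.addColumn(col);               // addColumn() stores a copy of the column

  tab.setLogging(true);             // full checkpointing and logging (default)
  return dict->createTable(tab);    // 0 on success, otherwise -1
}
```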
 
@@ -786,75 +795,76 @@
     Uint32 getTablespaceId() const;
 
     /**
-     * Get table object type
+     * Gets the table object type.
      */
     Object::Type getObjectType() const;
 
     /**
-     * Get object status
+     * Gets the object status.
      */
     virtual Object::Status getObjectStatus() const;
 
     /**
-     * Get object version
+     * Gets the object version.
      */
     virtual int getObjectVersion() const;
 
     /**
-     * Set/Get Maximum number of rows in table (only used to calculate
-     * number of partitions).
+     * Sets/Gets the maximum number of rows in the table (used only to 
+     * calculate the number of partitions).
      */
     void setMaxRows(Uint64 maxRows);
     Uint64 getMaxRows() const;
 
     /**
-     * Set/Get indicator if default number of partitions is used in table.
+     * Sets/Gets an indicator showing whether the default number of 
+     * partitions is used for the table.
      */
     void setDefaultNoPartitionsFlag(Uint32 indicator);
     Uint32 getDefaultNoPartitionsFlag() const;
    
     /**
-     * Get object id
+     * Gets the object ID.
      */
     virtual int getObjectId() const;
 
     /**
-     * Set frm file to store with this table
+     * Sets the <code>.frm</code> file used to store this table.
      */ 
     void setFrm(const void* data, Uint32 len);
 
     /**
-     * Set array of fragment information containing
-     * Fragment Identity
-     * Node group identity
-     * Fragment State
+     * Sets an array of fragment information containing:
+     * - Fragment identity.
+     * - Node group identity.
+     * - Fragment state.
      */
     void setFragmentData(const void* data, Uint32 len);
 
     /**
-     * Set/Get tablespace names per fragment
+     * Sets/Gets the tablespace name for each fragment.
      */
     void setTablespaceNames(const void* data, Uint32 len);
     const void *getTablespaceNames();
     Uint32 getTablespaceNamesLen() const;
 
     /**
-     * Set tablespace information per fragment
-     * Contains a tablespace id and a tablespace version
+     * Sets the tablespace information for each fragment.
+     * Contains a tablespace ID and a tablespace version.
      */
     void setTablespaceData(const void* data, Uint32 len);
 
     /**
-     * Set array of information mapping range values and list values
+     * Sets an array of information mapping range values and list values
      * to fragments. This is essentially a sorted map consisting of
-     * pairs of value, fragment identity. For range partitions there is
-     * one pair per fragment. For list partitions it could be any number
-     * of pairs, at least as many as there are fragments.
+     * pairs of values and fragment IDs. For range partitions, there is
+     * a single pair per fragment. For list partitions, there can be 
+     * any number of pairs, at least as many as there are fragments.
      */
     void setRangeListData(const void* data, Uint32 len);
 
     /**
-     * Set table object type
+     * Sets the table object type.
      */
     void setObjectType(Object::Type type);
 
@@ -898,65 +908,65 @@
   public:
     
     /** 
-     * @name Getting Index properties
+     * @name Obtaining Index properties
      * @{
      */
 
     /**
-     * Get the name of an index
+     * Gets the name of an index.
      */
     const char * getName() const;
     
     /**
-     * Get the name of the table being indexed
+     * Gets the name of the table being indexed.
      */
     const char * getTable() const;
     
     /**
-     * Get the number of columns in the index
+     * Gets the number of columns in the index.
      */
     unsigned getNoOfColumns() const;
 
 #ifndef DOXYGEN_SHOULD_SKIP_DEPRECATED
     /**
-     * Get the number of columns in the index
-     * Depricated, use getNoOfColumns instead.
+     * Gets the number of columns in the index.
+     * @note This is now deprecated in favour of getNoOfColumns().
      */
     int getNoOfIndexColumns() const;
 #endif
 
     /**
-     * Get a specific column in the index
+     * Gets a specific column in the index.
      */
     const Column * getColumn(unsigned no) const ;
 
 #ifndef DOXYGEN_SHOULD_SKIP_DEPRECATED
     /**
-     * Get a specific column name in the index
-     * Depricated, use getColumn instead.
+     * Gets a specific column name in the index.
+     * @note This is now deprecated in favour of getColumn().
      */
     const char * getIndexColumn(int no) const ;
 #endif
 
     /**
-     * Represents type of index
+     * Represents the type of index.
      */
     enum Type {
-      Undefined = 0,          ///< Undefined object type (initial value)
-      UniqueHashIndex = 3,    ///< Unique un-ordered hash index 
-                              ///< (only one currently supported)
-      OrderedIndex = 6        ///< Non-unique ordered index
+      Undefined = 0,          ///< Undefined object type (initial value).
+      UniqueHashIndex = 3,    ///< Unique unordered hash index.
+                              ///< (Only one currently supported.)
+      OrderedIndex = 6        ///< Non-unique ordered index.
     };
 
     /**
-     * Get index type of the index
+     * Gets the index type of the index.
      */
     Type getType() const;
     
     /**
-     * Check if index is set to be stored on disk
+     * Checks whether the index is to be stored on disk.
      *
-     * @return if true then logging id enabled
+     * @return <code>true</code> if logging is enabled.
      *
      * @note Non-logged indexes are rebuilt at system restart.
      * @note Ordered index does not currently support logging.
@@ -964,17 +974,17 @@
     bool getLogging() const;
 
     /**
-     * Get object status
+     * Gets object status.
      */
     virtual Object::Status getObjectStatus() const;
 
     /**
-     * Get object version
+     * Gets object version.
      */
     virtual int getObjectVersion() const;
 
     /**
-     * Get object id
+     * Gets object ID.
      */
     virtual int getObjectId() const;
 
@@ -984,8 +994,8 @@
      * @name Index creation
      * @{
      *
-     * These methods should normally not be used in an application as
-     * the result will not be visible from the MySQL Server
+     * These methods should normally not be used in an application, 
+     * since the result will not be visible from the MySQL Server.
      *
      */
 
@@ -997,67 +1007,68 @@
     virtual ~Index();
 
     /**
-     * Set the name of an index
+     * Sets the name of an index.
      */
     void setName(const char * name);
 
     /**
-     * Define the name of the table to be indexed
+     * Defines the name of the table to be indexed.
      */
     void setTable(const char * name);
 
     /**
-     * Add a column to the index definition
-     * Note that the order of columns will be in
-     * the order they are added (only matters for ordered indexes).
+     * Adds a column to the index definition.
+     * @note The order of the columns is the order in which they are 
+     *       added. (This is relevant only for ordered indexes.)
      */
     void addColumn(const Column & c);
 
     /**
-     * Add a column name to the index definition
-     * Note that the order of indexes will be in
-     * the order they are added (only matters for ordered indexes).
+     * Adds a column name to the index definition.
+     * @note The order of the columns is the order in which they are 
+     *       added. (This is relevant only for ordered indexes.)
      */
     void addColumnName(const char * name);
 
 #ifndef DOXYGEN_SHOULD_SKIP_DEPRECATED
     /**
-     * Add a column name to the index definition
-     * Note that the order of indexes will be in
-     * the order they are added (only matters for ordered indexes).
-     * Depricated, use addColumnName instead.
+     * Adds a column name to the index definition.
+     * @note The order of the columns is the order in which they are 
+     *       added. (This is relevant only for ordered indexes.)
+     * @note This method is now deprecated in favour of addColumnName().
      */
     void addIndexColumn(const char * name);
 #endif
 
     /**
-     * Add several column names to the index definition
-     * Note that the order of indexes will be in
-     * the order they are added (only matters for ordered indexes).
+     * Adds multiple column names to the index definition.
+     * @note The order of the columns is the order in which they are 
+     *       added. (This is relevant only for ordered indexes.)
      */
     void addColumnNames(unsigned noOfNames, const char ** names);
 
 #ifndef DOXYGEN_SHOULD_SKIP_DEPRECATED
     /**
-     * Add several column names to the index definition
-     * Note that the order of indexes will be in
-     * the order they are added (only matters for ordered indexes).
-     * Depricated, use addColumnNames instead.
+     * Adds multiple column names to the index definition.
+     * @note The order of the columns is the order in which they are 
+     *       added. (This is relevant only for ordered indexes.)
+     * @note This method is now deprecated in favour of 
+     *       addColumnNames().
      */
     void addIndexColumns(int noOfNames, const char ** names);
 #endif
 
     /**
-     * Set index type of the index
+     * Sets the index type of the index.
      */
     void setType(Type type);
 
     /**
-     * Enable/Disable index storage on disk
+     * Enables or disables index storage on disk.
      *
-     * @param enable  If enable is set to true, then logging becomes enabled
+     * @param enable  If set to <code>true</code>, logging is enabled.
      *
-     * @see NdbDictionary::Index::getLogging
+     * @see NdbDictionary::Index::getLogging().
      */
     void setLogging(bool enable); 
 
@@ -1078,7 +1089,7 @@
   };
 
   /**
-   * @brief Represents an Event in NDB Cluster
+   * @brief Represents an Event in the NDB Cluster.
    *
    */
   class Event : public Object  {
@@ -1125,12 +1136,12 @@
       _TE_NODE_FAILURE=10,
       _TE_SUBSCRIBE=11,
       _TE_UNSUBSCRIBE=12,
-      _TE_NUL=13 // internal (e.g. INS o DEL within same GCI)
+      _TE_NUL=13 // internal (INS or DEL within the same GCI)
     };
 #endif
     /**
-     *  Specifies the durability of an event
-     * (future version may supply other types)
+     *  Specifies the durability of an event (future versions may supply 
+     *  additional types).
      */
     enum EventDurability { 
       ED_UNDEFINED
@@ -1143,18 +1154,18 @@
       // and it's deleted after api has disconnected or ndb has restarted
       
       ED_TEMPORARY = 2
-      // All API's can use it,
-      // But's its removed when ndb is restarted
+      // All APIs can use it,
+      // but it's removed when ndb is restarted.
 #endif
-      ,ED_PERMANENT    ///< All API's can use it.
-                       ///< It's still defined after a cluster system restart
+      ,ED_PERMANENT    ///< All APIs can use the event, and it remains 
+                       ///< defined after a cluster system restart.
 #ifndef DOXYGEN_SHOULD_SKIP_INTERNAL
       = 3
 #endif
     };
 
     /**
-     * Specifies reporting options for table events
+     * Specifies reporting options for table events.
      */
     enum EventReport {
       ER_UPDATED = 0,
@@ -1164,134 +1175,136 @@
 
     /**
      *  Constructor
-     *  @param  name  Name of event
+     *  @param  name  Name of event.
      */
     Event(const char *name);
     /**
      *  Constructor
-     *  @param  name  Name of event
-     *  @param  table Reference retrieved from NdbDictionary
+     *  @param  name  Name of event.
+     *  @param  table Reference retrieved from NdbDictionary.
      */
     Event(const char *name, const NdbDictionary::Table& table);
     virtual ~Event();
     /**
-     * Set unique identifier for the event
+     * Sets a unique identifier for the event.
      */
     void setName(const char *name);
     /**
-     * Get unique identifier for the event
+     * Gets unique identifier for the event.
      */
     const char *getName() const;
     /**
-     * Define table on which events should be detected
+     * Defines the table on which events should be detected.
      *
-     * @note calling this method will default to detection
-     *       of events on all columns. Calling subsequent
-     *       addEventColumn calls will override this.
+     * @note This method defaults to detection of events on all columns. 
+     *       This behaviour can be overridden by calling
+     *       addEventColumn().
      *
-     * @param table reference retrieved from NdbDictionary
+     * @param table Reference retrieved from NdbDictionary.
      */
     void setTable(const NdbDictionary::Table& table);
     /**
-     * Set table for which events should be detected
+     * Sets the table on which events should be detected.
      *
-     * @note preferred way is using setTable(const NdbDictionary::Table&)
-     *       or constructor with table object parameter
+     * @note The preferred way of doing this is by either using 
+     *       setTable(const NdbDictionary::Table&) or calling the 
+     *       constructor with a table object parameter.
      */
     void setTable(const char *tableName);
     /**
-     * Get table name for events
+     * Gets the table name for event handling.
      *
      * @return table name
      */
     const char* getTableName() const;
     /**
-     * Add type of event that should be detected
+     * Adds the type of event that should be detected.
      */
     void addTableEvent(const TableEvent te);
     /**
-     * Set durability of the event
+     * Sets the durability of the event.
      */
     void setDurability(EventDurability);
     /**
-     * Get durability of the event
+     * Gets the durability of the event.
      */
     EventDurability getDurability() const;
     /**
-     * Set report option of the event
+     * Sets an event reporting option.
      */
     void setReport(EventReport);
     /**
-     * Get report option of the event
+     * Gets an event reporting option.
      */
     EventReport getReport() const;
 #ifndef DOXYGEN_SHOULD_SKIP_INTERNAL
     void addColumn(const Column &c);
 #endif
     /**
-     * Add a column on which events should be detected
+     * Adds a column on which events should be detected.
      *
-     * @param attrId Column id
+     * @param attrId Column ID.
      *
-     * @note errors will mot be detected until createEvent() is called
+     * @note Errors are not detected until createEvent() is called.
      */
     void addEventColumn(unsigned attrId);
     /**
-     * Add a column on which events should be detected
+     * Adds a column on which events should be detected.
      *
      * @param columnName Column name
      *
-     * @note errors will not be detected until createEvent() is called
+     * @note Errors are not detected until createEvent() is called.
      */
     void addEventColumn(const char * columnName);
     /**
-     * Add several columns on which events should be detected
+     * Adds multiple columns on which events should be detected.
      *
      * @param n Number of columns
      * @param columnNames Column names
      *
-     * @note errors will mot be detected until 
-     *       NdbDictionary::Dictionary::createEvent() is called
+     * @note Errors are not detected until 
+     *       NdbDictionary::Dictionary::createEvent() is called.
      */
     void addEventColumns(int n, const char ** columnNames);
 
     /**
-     * Get no of columns defined in an Event
+     * Gets the number of columns defined in an Event.
      *
-     * @return Number of columns, -1 on error
+     * @return Number of columns, or -1 on error.
      */
     int getNoOfEventColumns() const;
 
     /**
-     * The merge events flag is false by default.  Setting it true
-     * implies that events are merged in following ways:
+     * The merge events flag is false by default.  Setting it to true
+     * implies that events are merged in the following ways:
      *
-     * - for given NdbEventOperation associated with this event,
-     *   events on same PK within same GCI are merged into single event
+     * - For a given NdbEventOperation associated with this event,
+     *   events on the same PK within the same GCI are merged into a 
+     *   single event.
      *
-     * - a blob table event is created for each blob attribute
-     *   and blob events are handled as part of main table events
+     * - A blob table event is created for each blob attribute;
+     *   blob events are then handled as part of the main table events.
      *
-     * - blob post/pre data from the blob part events can be read
-     *   via NdbBlob methods as a single value
+     * - Blob post/pre data from the blob part events can be read
+     *   via NdbBlob methods as a single value.
      *
-     * NOTE: Currently this flag is not inherited by NdbEventOperation
+     * @note Currently this flag is not inherited by NdbEventOperation
      * and must be set on NdbEventOperation explicitly.
      */
     void mergeEvents(bool flag);
 
     /**
-     * Get object status
+     * Gets object status.
      */
     virtual Object::Status getObjectStatus() const;
 
     /**
-     * Get object version
+     * Gets object version.
      */
     virtual int getObjectVersion() const;
 
     /**
-     * Get object id
+     * Gets object ID.
      */
     virtual int getObjectId() const;
 
@@ -1336,17 +1349,17 @@
     Uint64 getUndoFreeWords() const;
 
     /**
-     * Get object status
+     * Gets object status.
      */
     virtual Object::Status getObjectStatus() const;
 
     /**
-     * Get object version
+     * Gets object version.
      */
     virtual int getObjectVersion() const;
 
     /**
-     * Get object id
+     * Gets object ID.
      */
     virtual int getObjectId() const;
 
@@ -1382,17 +1395,17 @@
     Uint32 getDefaultLogfileGroupId() const;
     
     /**
-     * Get object status
+     * Gets object status.
      */
     virtual Object::Status getObjectStatus() const;
 
     /**
-     * Get object version
+     * Gets object version.
      */
     virtual int getObjectVersion() const;
 
     /**
-     * Get object id
+     * Gets object ID.
      */
     virtual int getObjectId() const;
 
@@ -1426,17 +1439,17 @@
     Uint32 getFileNo() const;
 
     /**
-     * Get object status
+     * Gets object status.
      */
     virtual Object::Status getObjectStatus() const;
 
     /**
-     * Get object version
+     * Gets object version.
      */
     virtual int getObjectVersion() const;
 
     /**
-     * Get object id
+     * Gets object ID.
      */
     virtual int getObjectId() const;
 
@@ -1469,17 +1482,17 @@
     Uint32 getFileNo() const;
 
     /**
-     * Get object status
+     * Gets object status.
      */
     virtual Object::Status getObjectStatus() const;
 
     /**
-     * Get object version
+     * Gets object version.
      */
     virtual int getObjectVersion() const;
 
     /**
-     * Get object id
+     * Gets object ID.
      */
     virtual int getObjectId() const;
 
@@ -1491,27 +1504,28 @@
 
   /**
    * @class Dictionary
-   * @brief Dictionary for defining and retreiving meta data
+   * @brief Dictionary for defining and retrieving metadata.
    */
   class Dictionary {
   public:
     /**
      * @class List
-     * @brief Structure for retrieving lists of object names
+     * @brief Structure for retrieving lists of object names.
      */
     struct List {
       /**
        * @struct  Element
-       * @brief   Object to be stored in an NdbDictionary::Dictionary::List
+       * @brief   Object to be stored in an 
+       *          NdbDictionary::Dictionary::List.
        */
       struct Element {
-	unsigned id;            ///< Id of object
-        Object::Type type;      ///< Type of object
-        Object::State state;    ///< State of object
-        Object::Store store;    ///< How object is stored
-	char * database;        ///< In what database the object resides 
-	char * schema;          ///< What schema the object is defined in
-	char * name;            ///< Name of object
+        unsigned id;          ///< ID of object.
+        Object::Type type;    ///< Type of object.
+        Object::State state;  ///< State of object.
+        Object::Store store;  ///< How the object is stored.
+        char * database;      ///< In which database the object resides.
+        char * schema;        ///< In which schema the object is defined.
+        char * name;          ///< Name of object.
         Element() :
           id(0),
           type(Object::TypeUndefined),
@@ -1522,8 +1536,8 @@
           name(0) {
         }
       };
-      unsigned count;           ///< Number of elements in list
-      Element * elements;       ///< Pointer to array of elements
+      unsigned count;           ///< Number of elements in list.
+      Element * elements;       ///< Pointer to array of elements.
       List() : count(0), elements(0) {}
       ~List() {
         if (elements != 0) {
@@ -1546,13 +1560,13 @@
      */
 
     /**
-     * Fetch list of all objects, optionally restricted to given type.
+     * Fetches a list of all objects, optionally restricted to a given 
+     * type.
      *
-     * @param list   List of objects returned in the dictionary
-     * @param type   Restrict returned list to only contain objects of
-     *               this type
+     * @param list   List of objects returned in the dictionary.
+     * @param type   The returned list is restricted to objects of this type.
      *
-     * @return       -1 if error.
+     * @return       -1 on error.
      *
      */
     int listObjects(List & list, Object::Type type = Object::TypeUndefined);
@@ -1560,7 +1574,7 @@
 		    Object::Type type = Object::TypeUndefined) const;
 
     /**
-     * Get the latest error
+     * Gets the latest error.
      *
      * @return   Error object.
      */			     
@@ -1569,31 +1583,34 @@
     /** @} *******************************************************************/
 
     /** 
-     * @name Retrieving references to Tables and Indexes
+     * @name Retrieving references to tables and indexes
      * @{
      */
 
     /**
-     * Get table with given name, NULL if undefined
-     * @param name   Name of table to get
-     * @return table if successful otherwise NULL.
+     * Gets the table with the given name, or <code>NULL</code> if 
+     * the table is undefined.
+     * @param name   Name of the table.
+     * @return Table on success, otherwise <code>NULL</code>.
      */
     const Table * getTable(const char * name) const;
 
     /**
-     * Get index with given name, NULL if undefined
-     * @param indexName  Name of index to get.
-     * @param tableName  Name of table that index belongs to.
-     * @return  index if successful, otherwise 0.
+     * Gets the index with the given name, or <code>NULL</code> if the 
+     * index is undefined.
+     * @param indexName  Name of index.
+     * @param tableName  Name of the table to which the index belongs.
+     * @return           Index on success, otherwise <code>NULL</code>.
      */
     const Index * getIndex(const char * indexName,
 			   const char * tableName) const;
 
     /**
-     * Fetch list of indexes of given table.
-     * @param list  Reference to list where to store the listed indexes
-     * @param tableName  Name of table that index belongs to.
-     * @return  0 if successful, otherwise -1
+     * Fetches a list of indexes on the given table.
+     * @param list  Reference to the list in which to store the listed 
+     *              indexes.
+     * @param tableName  Name of the table to which the index belongs.
+     * @return  0 on success, otherwise -1.
      */
     int listIndexes(List & list, const char * tableName);
     int listIndexes(List & list, const char * tableName) const;
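listObjects() and the List structure above can be used together as in this sketch (the Object::UserTable type value is assumed from the wider API, since the full Object::Type enum is not shown in this excerpt):

```cpp
// Sketch only: prints every user table known to the dictionary.
#include <NdbApi.hpp>
#include <cstdio>

void list_tables(NdbDictionary::Dictionary *dict)
{
  NdbDictionary::Dictionary::List list;
  // Restrict the listing to user tables; listObjects() returns -1 on error.
  if (dict->listObjects(list, NdbDictionary::Object::UserTable) == -1)
    return;
  for (unsigned i = 0; i < list.count; i++)
    printf("%s.%s\n", list.elements[i].database, list.elements[i].name);
  // The List destructor frees the element array.
}
```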
@@ -1605,23 +1622,23 @@
      */
     
     /**
-     * Create event given defined Event instance
-     * @param event Event to create
-     * @return 0 if successful otherwise -1.
+     * Creates an event, given the defined Event instance.
+     * @param event Event to create.
+     * @return 0 on success, otherwise -1.
      */
     int createEvent(const Event &event);
 
     /**
-     * Drop event with given name
-     * @param eventName  Name of event to drop.
-     * @return 0 if successful otherwise -1.
+     * Drops the event with the given name.
+     * @param eventName  Name of the event to drop.
+     * @return 0 on success, otherwise -1.
      */
     int dropEvent(const char * eventName);
     
     /**
-     * Get event with given name.
-     * @param eventName  Name of event to get.
-     * @return an Event if successful, otherwise NULL.
+     * Gets the event with the given name.
+     * @param eventName  Name of the event to get.
+     * @return An Event on success, otherwise <code>NULL</code>.
      */
     const Event * getEvent(const char * eventName);
 
@@ -1631,54 +1648,54 @@
      * @name Table creation
      * @{
      *
-     * These methods should normally not be used in an application as
-     * the result will not be visible from the MySQL Server
+     * These methods should normally not be used in an application, as
+     * the result is not visible from the MySQL Server.
      */
 
     /**
-     * Create defined table given defined Table instance
-     * @param table Table to create
-     * @return 0 if successful otherwise -1.
+     * Creates the defined table given a defined Table instance.
+     * @param table Table to create.
+     * @return 0 on success, otherwise -1.
      */
     int createTable(const Table &table);
 
     /**
-     * Drop table given retrieved Table instance
-     * @param table Table to drop
-     * @return 0 if successful otherwise -1.
+     * Drops a table, given a retrieved Table instance.
+     * @param table Table to drop.
+     * @return 0 on success, otherwise -1.
      */
     int dropTable(Table & table);
 
     /**
-     * Drop table given table name
-     * @param name   Name of table to drop 
-     * @return 0 if successful otherwise -1.
+     * Drops the table with the given table name.
+     * @param name   Name of the table to drop.
+     * @return 0 on success, otherwise -1.
      */
     int dropTable(const char * name);
     
 #ifndef DOXYGEN_SHOULD_SKIP_INTERNAL
     /**
-     * Alter defined table given defined Table instance
-     * @param table Table to alter
-     * @return  -2 (incompatible version) <br>
-     *          -1 general error          <br>
-     *           0 success                 
+     * Alters a defined table given a defined Table instance.
+     * @param table Table to alter.
+     * @return  -2 (Incompatible version). <br>
+     *          -1 General error.          <br>
+     *           0 Success.                 
      */
     int alterTable(const Table &table);
 
     /**
-     * Invalidate cached table object
-     * @param name  Name of table to invalidate
+     * Invalidates a cached table object.
+     * @param name  Name of the table to invalidate.
      */
     void invalidateTable(const char * name);
 #endif
 
     /**
-     * Remove table from local cache
+     * Removes a table from the local cache.
      */
     void removeCachedTable(const char * table);
     /**
-     * Remove index from local cache
+     * Removes an index from the local cache.
      */
     void removeCachedIndex(const char * index, const char * table);
 
@@ -1688,42 +1705,41 @@
      * @name Index creation
      * @{
      *
-     * These methods should normally not be used in an application as
-     * the result will not be visible from the MySQL Server
+     * These methods should normally not be used in an application, as
+     * the result is not visible from the MySQL Server.
      *
      */
     
     /**
-     * Create index given defined Index instance
-     * @param index Index to create
-     * @return 0 if successful otherwise -1.
+     * Creates an index, given a defined Index instance.
+     * @param index Index to create.
+     * @return 0 on success, otherwise -1.
      */
     int createIndex(const Index &index);
 
     /**
-     * Drop index with given name
-     * @param indexName  Name of index to drop.
-     * @param tableName  Name of table that index belongs to.
-     * @return 0 if successful otherwise -1.
+     * Drops the index with the given name.
+     * @param indexName  Name of the index to drop.
+     * @param tableName  Name of the table to which the index belongs.
+     * @return 0 on success, otherwise -1.
      */
     int dropIndex(const char * indexName,
 		  const char * tableName);
     
 #ifndef DOXYGEN_SHOULD_SKIP_INTERNAL
     /**
-     * Invalidate cached index object
+     * Invalidates a cached index object.
      */
     void invalidateIndex(const char * indexName,
                          const char * tableName);
     /**
-     * Force gcp and wait for gcp complete
+     * Forces a GCP and waits for it to complete.
      */
     int forceGCPWait();
 #endif
 
     /** @} *******************************************************************/
-
-    /** @} *******************************************************************/
+    
     /** 
      * @name Disk data objects
      * @{

--- 1.10/storage/ndb/include/ndbapi/NdbError.hpp	2006-01-13 04:50:32 +10:00
+++ 1.11/storage/ndb/include/ndbapi/NdbError.hpp	2006-01-31 19:34:20 +10:00
@@ -23,7 +23,7 @@
  * @struct NdbError
  * @brief Contains error information
  *
- * A NdbError consists of five parts:
+ * An NdbError consists of five parts:
  * -# Error status         : Application impact
  * -# Error classification : Logical error group
  * -# Error code           : Internal error code
@@ -31,8 +31,8 @@
  * -# Error details        : Context dependent information 
  *                           (not always available)
  *
- * <em>Error status</em> is usually used for programming against errors.
- * If more detailed error control is needed, it is possible to 
+ * The <em>Error status</em> is usually used for programming against 
+ * errors. If more detailed error control is needed, it is possible to 
  * use the <em>error classification</em>.
  *
  * It is not recommended to write application programs dependent on
@@ -41,54 +41,54 @@
  * The <em>error messages</em> and <em>error details</em> may
  * change without notice.
  * 
- * For example of use, see @ref ndbapi_retries.cpp.
+ * For an example of use, see @ref ndbapi_retries.cpp.
  */
 struct NdbError {
   /**
-   * Status categorizes error codes into status values reflecting
-   * what the application should do when encountering errors
+   * <code>Status</code> categorises error codes into status values 
+   * reflecting what the application should do when encountering errors.
    */
   enum Status {
     /**
-     * The error code indicate success<br>
-     * (Includes classification: NdbError::NoError)
+     * The error code indicates success.<br>
+     * (Includes the classification NdbError::NoError.)
      */
     Success = ndberror_st_success,
 
     /**
      * The error code indicates a temporary error.
      * The application should typically retry.<br>
-     * (Includes classifications: NdbError::InsufficientSpace, 
+     * (Includes the classifications NdbError::InsufficientSpace, 
      *  NdbError::TemporaryResourceError, NdbError::NodeRecoveryError,
-     *  NdbError::OverloadError, NdbError::NodeShutdown 
+     *  NdbError::OverloadError, NdbError::NodeShutdown, 
      *  and NdbError::TimeoutExpired.)
      */
     TemporaryError = ndberror_st_temporary,
     
     /**
      * The error code indicates a permanent error.<br>
-     * (Includes classificatons: NdbError::PermanentError, 
+     * (Includes the classifications NdbError::PermanentError, 
      *  NdbError::ApplicationError, NdbError::NoDataFound,
      *  NdbError::ConstraintViolation, NdbError::SchemaError,
-     *  NdbError::UserDefinedError, NdbError::InternalError, and, 
+     *  NdbError::UserDefinedError, NdbError::InternalError, and 
      *  NdbError::FunctionNotImplemented.)
      */
     PermanentError = ndberror_st_permanent,
   
     /**
-     * The result/status is unknown.<br>
-     * (Includes classifications: NdbError::UnknownResultError, and
+     * The result or status is unknown.<br>
+     * (Includes the classifications NdbError::UnknownResultError and
      *  NdbError::UnknownErrorCode.)
      */
     UnknownResult = ndberror_st_unknown
   };
   
   /**
-   * Type of error
+   * Type of error.
    */
   enum Classification {
     /**
-     * Success.  No error occurred.
+     * Success. (No error occurred.)
      */
     NoError = ndberror_cl_none,
 
@@ -103,8 +103,8 @@
     NoDataFound = ndberror_cl_no_data_found,
 
     /**
-     * E.g. inserting a tuple with a primary key already existing 
-     * in the table.
+     * Failure due to a constraint violation, such as attempting to 
+     * insert a tuple with an existing primary key.
      */
     ConstraintViolation = ndberror_cl_constraint_violation,
 
@@ -119,24 +119,24 @@
     UserDefinedError = ndberror_cl_user_defined,
     
     /**
-     * E.g. insufficient memory for data or indexes.
+     * Insufficient memory for data or indexes.
      */
     InsufficientSpace = ndberror_cl_insufficient_space,
 
     /**
-     * E.g. too many active transactions.
+     * Too many active transactions.
      */
     TemporaryResourceError = ndberror_cl_temporary_resource,
 
     /**
-     * Temporary failures which are probably inflicted by a node
-     * recovery in progress.  Examples: information sent between
-     * application and NDB lost, distribution change.
+     * Temporary failures which are probably caused by a node
+     * recovery in progress. Examples: information sent between
+     * application and NDB lost, or distribution change.
      */
     NodeRecoveryError = ndberror_cl_node_recovery,
 
     /**
-     * E.g. out of log file space.
+     * Out of log file space.
      */
     OverloadError = ndberror_cl_overload,
 
@@ -146,7 +146,7 @@
     TimeoutExpired = ndberror_cl_timeout_expired,
     
     /**
-     * Is is unknown whether the transaction was committed or not.
+     * It is not known whether the transaction was committed or not.
      */
     UnknownResultError = ndberror_cl_unknown_result,
     
@@ -166,17 +166,17 @@
     UnknownErrorCode = ndberror_cl_unknown_error_code,
 
     /**
-     * Node shutdown
+     * Node shutdown.
      */
     NodeShutdown = ndberror_cl_node_shutdown,
 
     /**
-     * Schema object already exists
+     * Schema object already exists.
      */
     SchemaObjectExists = ndberror_cl_schema_object_already_exists,
 
     /**
-     * Request sent to non master
+     * Request sent to non-master.
      */
     InternalTemporary = ndberror_cl_internal_temporary
   };
@@ -187,30 +187,30 @@
   Status status;
 
   /**
-   * Error type
+   * Error type.
    */
   Classification classification;
   
   /**
-   * Error code
+   * Error code.
    */
   int code;
 
   /**
-   * Mysql error code
+   * MySQL error code.
    */
   int mysql_code;
 
   /**
-   * Error message
+   * Error message.
    */
   const char * message;
 
   /**
-   * The detailed description.  This is extra information regarding the 
-   * error which is not included in the error message.
+   * The detailed description. This provides additional information regarding 
+   * the error which is not included in the error message.
    *
-   * @note Is NULL when no details specified
+   * @note Is <code>NULL</code> when no details are specified.
    */
   char * details;
 

--- 1.18/storage/ndb/include/ndbapi/NdbBlob.hpp	2006-01-19 23:00:03 +10:00
+++ 1.19/storage/ndb/include/ndbapi/NdbBlob.hpp	2006-01-31 19:34:18 +10:00
@@ -36,67 +36,76 @@
  *
  * Blob data is stored in 2 places:
  *
- * - "header" and "inline bytes" stored in the blob attribute
- * - "blob parts" stored in a separate table NDB$BLOB_<tid>_<cid>
- *
- * Inline and part sizes can be set via NdbDictionary::Column methods
- * when the table is created.
- *
- * NdbBlob is a blob handle.  To access blob data, the handle must be
- * created using NdbOperation::getBlobHandle in operation prepare phase.
- * The handle has following states:
- *
- * - prepared: before the operation is executed
- * - active: after execute or next result but before transaction commit
- * - closed: after transaction commit
- * - invalid: after rollback or transaction close
+ * - The header and inline bytes are stored in the blob attribute.
+ * - The blob segments are stored in a separate table named 
+ *   NDB$BLOB_<var>tid</var>_<var>cid</var>.
+ *
+ * The inline and segment (part) sizes can be set via NdbDictionary::Column 
+ * methods when the table is created.
+ *
+ * NdbBlob is a blob handle. To access blob data, the handle must be
+ * created using NdbOperation::getBlobHandle() in the operation 
+ * preparation phase. This handle has the following states:
+ *
+ * - <b>prepared</b>: prior to operation execution.
+ * - <b>active</b>: following execution or fetching the next result, but 
+ *   before the transaction is committed.
+ * - <b>closed</b>: after the transaction is committed.
+ * - <b>invalid</b>: following a rollback or close of a transaction.
  *
  * NdbBlob supports 3 styles of data access:
  *
- * - in prepare phase, NdbBlob methods getValue and setValue are used to
- *   prepare a read or write of a blob value of known size
- *
- * - in prepare phase, setActiveHook is used to define a routine which
- *   is invoked as soon as the handle becomes active
- *
- * - in active phase, readData and writeData are used to read or write
- *   blob data of arbitrary size
- *
- * The styles can be applied in combination (in above order).
- *
- * Blob operations take effect at next transaction execute.  In some
- * cases NdbBlob is forced to do implicit executes.  To avoid this,
- * operate on complete blob parts.
- *
- * Use NdbTransaction::executePendingBlobOps to flush your reads and
- * writes.  It avoids execute penalty if nothing is pending.  It is not
- * needed after execute (obviously) or after next scan result.
- *
- * NdbBlob also supports reading post or pre blob data from events.  The
- * handle can be read after next event on main table has been retrieved.
- * The data is available immediately.  See NdbEventOperation.
+ * - In the preparation phase, the NdbBlob methods getValue() and 
+ *   setValue() are used to prepare a read or write of a blob value of 
+ *   known size.
+ *
+ * - Also in the preparation phase, setActiveHook() is used to define a 
+ *   routine which is invoked as soon as the handle becomes active.
+ *
+ * - In the active phase, readData() and writeData() are used to read or 
+ *   write blob data of arbitrary size.
+ *
+ * These styles can be applied in combination (in the above order).
+ *
+ * Blob operations take effect when the next transaction is executed. In 
+ * some cases, NdbBlob is forced to perform implicit execution. To avoid 
+ * this, operate on complete blob segments.
+ *
+ * Use NdbTransaction::executePendingBlobOps() to flush reads and 
+ * writes. There is no penalty for doing this if nothing is pending. It 
+ * is not necessary to do so following execution (obviously) or after 
+ * the next scan result is obtained.
+ *
+ * NdbBlob also supports reading post- or pre-blob data from events. The
+ * handle can be read after the next event on the main table has been retrieved.
+ * The data becomes available immediately (see @ref NdbEventOperation).
  *
  * NdbBlob methods return -1 on error and 0 on success, and use output
- * parameters when necessary.
+ * parameters as necessary.
  *
  * Operation types:
- * - insertTuple must use setValue if blob column is non-nullable
- * - readTuple with exclusive lock can also update existing value
- * - updateTuple can overwrite with setValue or update existing value
- * - writeTuple always overwrites and must use setValue if non-nullable
- * - deleteTuple creates implicit non-accessible blob handles
- * - scan with exclusive lock can also update existing value
- * - scan "lock takeover" update op must do its own getBlobHandle
+ * - insertTuple() must use setValue() if the blob column is 
+ *   non-nullable.
+ * - readTuple() with an exclusive lock can also update an existing 
+ *   value.
+ * - updateTuple() can overwrite with setValue() or update an existing 
+ *   value.
+ * - writeTuple() always overwrites and must use setValue() if the blob 
+ *   column is non-nullable.
+ * - deleteTuple() creates implicit, non-accessible blob handles.
+ * - A scan with an exclusive lock can also update an existing value.
+ * - A scan update operation with lock takeover must perform its own 
+ *   getBlobHandle().
  *
  * Bugs / limitations:
- * - lock mode upgrade should be handled automatically
- * - lock mode vs allowed operation is not checked
- * - too many pending blob ops can blow up i/o buffers
- * - table and its blob part tables are not created atomically
+ * - Lock mode upgrade should be handled automatically.
+ * - Lock mode vs. an allowed operation is not checked.
+ * - Too many pending blob operations can overflow the I/O buffers.
+ * - The table and its blob segment tables are not created atomically.
  */
 #ifndef DOXYGEN_SHOULD_SKIP_INTERNAL
 /**
- * - there is no support for an asynchronous interface
+ * - There is no support for an asynchronous interface.
  */
 #endif
 
@@ -123,105 +132,110 @@
     Uint64 length;
   };
   /**
-   * Prepare to read blob value.  The value is available after execute.
-   * Use getNull() to check for NULL and getLength() to get the real length
-   * and to check for truncation.  Sets current read/write position to
-   * after the data read.
+   * Prepares to read a blob value. The value is available following 
+   * execution. Use getNull() to check for NULL and getLength() to get 
+   * the real length and to check for truncation. Sets the current 
+   * read/write position to the point after the data has been read.
    */
   int getValue(void* data, Uint32 bytes);
   /**
-   * Prepare to insert or update blob value.  An existing longer blob
-   * value will be truncated.  The data buffer must remain valid until
-   * execute.  Sets current read/write position to after the data.  Set
-   * data to null pointer (0) to create a NULL value.
+   * Prepares to insert or update a blob value. Any existing blob value 
+   * that is longer is truncated. The data buffer must remain valid 
+   * until the operation is executed. Sets current read/write position 
+   * to the point following the end of the data. You can set the data to 
+   * a null pointer (0) in order to create a <code>NULL</code> value.
    */
   int setValue(const void* data, Uint32 bytes);
   /**
-   * Callback for setActiveHook().  Invoked immediately when the prepared
-   * operation has been executed (but not committed).  Any getValue() or
-   * setValue() is done first.  The blob handle is active so readData or
-   * writeData() etc can be used to manipulate blob value.  A user-defined
-   * argument is passed along.  Returns non-zero on error.
+   * Callback for setActiveHook(). Invoked immediately when the prepared
+   * operation has been executed (but not committed). Any getValue() or
+   * setValue() is done first. The blob handle is active so readData() or
+   * writeData() can be used to manipulate the blob value. A 
+   * user-defined argument is passed along. Returns a nonzero value in 
+   * the event of an error.
    */
   typedef int ActiveHook(NdbBlob* me, void* arg);
   /**
-   * Define callback for blob handle activation.  The queue of prepared
-   * operations will be executed in no commit mode up to this point and
+   * Defines callback for blob handle activation. The queue of prepared
+   * operations will be executed in no-commit mode up to this point;
    * then the callback is invoked.
    */
   int setActiveHook(ActiveHook* activeHook, void* arg);
   /**
-   * Check if blob value is defined (NULL or not).  Used as first call
-   * on event based blob.  The argument is set to -1 for not defined.
-   * Unlike getNull() this does not cause error on the handle.
+   * Checks whether the blob value is defined (<code>NULL</code> or not). This is used 
+   * as the first call on an event-based blob. The argument is set to -1 for 
+   * undefined. Unlike getNull() this does not cause an error on the handle.
    */
   int getDefined(int& isNull);
   /**
-   * Check if blob is null.
+   * Checks whether the blob is <code>NULL</code>.
    */
   int getNull(bool& isNull);
   /**
-   * Set blob to NULL.
+   * Sets the blob to <code>NULL</code>.
    */
   int setNull();
   /**
-   * Get current length in bytes.  Use getNull to distinguish between
-   * length 0 blob and NULL blob.
+   * Gets the current blob length in bytes. Use getNull() to distinguish 
+   * between a blob of zero length and a <code>NULL</code> blob.
    */
   int getLength(Uint64& length);
   /**
-   * Truncate blob to given length.  Has no effect if the length is
-   * larger than current length.
+   * Truncates the blob to a given length. Has no effect if the length 
+   * given is larger than the current length.
    */
   int truncate(Uint64 length = 0);
   /**
-   * Get current read/write position.
+   * Gets the current read/write position.
    */
   int getPos(Uint64& pos);
   /**
-   * Set read/write position.  Must be between 0 and current length.
-   * "Sparse blobs" are not supported.
+   * Sets the read/write position. Must be between 0 and current length.
+   * Note that "sparse" blobs are not supported.
    */
   int setPos(Uint64 pos);
   /**
-   * Read at current position and set new position to first byte after
-   * the data read.  A read past blob end returns actual number of bytes
-   * read in the in/out bytes parameter.
+   * Reads from the current position and sets a new position to the first 
+   * byte following the end of the data that was read. A read past the 
+   * end of the blob returns the actual number of bytes read in the 
+   * in/out bytes parameter.
    */
   int readData(void* data, Uint32& bytes);
   /**
-   * Write at current position and set new position to first byte after
-   * the data written.  A write past blob end extends the blob value.
+   * Writes beginning at the current position and sets the new position to the 
+   * first byte following the end of the data written. A write past the 
+   * end of the blob extends the blob value.
    */
   int writeData(const void* data, Uint32 bytes);
   /**
-   * Return the blob column.
+   * Returns the blob column.
    */
   const NdbDictionary::Column* getColumn();
   /**
-   * Get blob parts table name.  Useful only to test programs.
+   * Gets the blob segment's table name. Useful only for testing 
+   * programs.
    */
   static int getBlobTableName(char* btname, Ndb* anNdb, const char* tableName, const char* columnName);
   /**
-   * Get blob event name.  The blob event is created if the main event
+   * Gets the blob event name. The blob event is created if the main event
    * monitors the blob column.  The name includes main event name.
    */
   static int getBlobEventName(char* bename, Ndb* anNdb, const char* eventName, const char* columnName);
   /**
-   * Return error object.  The error may be blob specific (below) or may
-   * be copied from a failed implicit operation.
+   * Returns an error object. The error may be blob-specific (below) or 
+   * may be copied from a failed implicit operation.
    */
   const NdbError& getNdbError() const;
   /**
-   * Return info about all blobs in this operation.
+   * Returns information about all blobs in this operation.
    *
-   * Get first blob in list.
+   * Gets the first blob in the list.
    */
   NdbBlob* blobsFirstBlob();
   /**
-   * Return info about all blobs in this operation.
+   * Returns information about all blobs in this operation.
    *
-   * Get next blob in list. Initialize with blobsFirstBlob().
+   * Gets the next blob in the list. Initialise with blobsFirstBlob().
    */
   NdbBlob* blobsNextBlob();
 
@@ -342,7 +356,7 @@
   int updateParts(const char* buf, Uint32 part, Uint32 count);
   int deleteParts(Uint32 part, Uint32 count);
   int deletePartsUnknown(Uint32 part);
-  // pending ops
+  // pending operations
   int executePendingBlobReads();
   int executePendingBlobWrites();
   // callbacks

--- 1.14/storage/ndb/include/ndbapi/ndb_cluster_connection.hpp	2006-01-13 04:50:32 +10:00
+++ 1.15/storage/ndb/include/ndbapi/ndb_cluster_connection.hpp	2006-01-31 19:34:22 +10:00
@@ -36,40 +36,41 @@
  * @class Ndb_cluster_connection
  * @brief Represents a connection to a cluster of storage nodes.
  *
- * Any NDB application program should begin with the creation of a
- * single Ndb_cluster_connection object, and should make use of one
- * and only one Ndb_cluster_connection. The application connects to
- * a cluster management server when this object's connect() method is called.
- * By using the wait_until_ready() method it is possible to wait
+ * Any NDB application program should begin with the creation of a 
+ * single Ndb_cluster_connection object, and should make use of one and 
+ * only one Ndb_cluster_connection. The application connects to a 
+ * cluster management server when this object's connect() method is 
+ * called. By using the wait_until_ready() method it is possible to wait
  * for the connection to reach one or more storage nodes.
  */
 class Ndb_cluster_connection {
 public:
   /**
-   * Create a connection to a cluster of storage nodes
+   * Creates a connection to a cluster of storage nodes.
    *
-   * @param connectstring The connectstring for where to find the
-   *                      management server
+   * @param connectstring The connectstring pointing to the location of
+   *                      the management server.
    */
   Ndb_cluster_connection(const char * connectstring = 0);
   ~Ndb_cluster_connection();
 
   /**
-   * Connect to a cluster management server
+   * Connects to a cluster management server.
    *
-   * @param no_retries specifies the number of retries to attempt
-   *        in the event of connection failure; a negative value 
-   *        will result in the attempt to connect being repeated 
-   *        indefinitely
+   * @param no_retries Specifies the number of retries to attempt in the 
+   *                   event of connection failure; a negative value 
+   *                   results in the connection attempt being repeated 
+   *                   indefinitely.
    *
-   * @param retry_delay_in_seconds specifies how often retries should
-   *        be performed
+   * @param retry_delay_in_seconds Specifies how often retries should
+   *                               be performed.
    *
-   * @param verbose specifies if the method should print a report of its progess
+   * @param verbose Specifies if the method should print a report of its 
+   *                progress.
    *
-   * @return 0 = success, 
+   * @return 0 = success,
    *         1 = recoverable error,
-   *        -1 = non-recoverable error
+   *        -1 = non-recoverable error.
    */
   int connect(int no_retries=0, int retry_delay_in_seconds=1, int verbose=0);
 
@@ -78,16 +79,17 @@
 #endif
 
   /**
-   * Wait until the requested connection with one or more storage nodes is successful
+   * Waits until the requested connection with one or more storage nodes 
+   * is successful.
    *
-   * @param timeout_for_first_alive   Number of seconds to wait until
-   *                                  first live node is detected
+   * @param timeout_for_first_alive   Number of seconds to wait for the
+   *                                  first live node to be detected.
    * @param timeout_after_first_alive Number of seconds to wait after
-   *                                  first live node is detected
+   *                                  the first live node is detected.
    *
-   * @return = 0 all nodes live,
-   *         > 0 at least one node live,
-   *         < 0 error
+   * @return = 0: All nodes are "live".
+   *         > 0: At least one node is "live".
+   *         < 0: Error.
    */
   int wait_until_ready(int timeout_for_first_alive,
 		       int timeout_after_first_alive);