List: Commits
From: Ole John Aske
Date: November 9 2011 1:11pm
Subject: bzr push into mysql-5.1-telco-7.0 branch (ole.john.aske:4647 to 4648) Bug#13355055
 4648 Ole John Aske	2011-11-09
      Fix for bug#13355055: CLUSTER INTERNALS FAILS TO TERMINATE BATCH AT MAX 'BATCHBYTESIZE'
      
      While debugging and tuning SPJ, we have observed SCANREQs with a surprisingly small
      'BatchSize' argument. Where we expected 'BatchSize=64' (the default), we have
      observed values around ~10. This translated directly into suboptimal performance.
      
      When debugging this, we found the root cause in NdbReceiver::calculate_batch_size(),
      which returns the batchsize (#rows) and batchbytesize arguments for the SCANREQ signal.
      It contained the following questionable logic:
      
       1) Calculate the worst-case record length, assuming that *all columns* are selected
          from the table and all varchar() columns are filled to their *max limit*.
      
       2) If that record length is such that 'batchsize * recLength' > 'batchbytesize',
          reduce batchsize such that batchbytesize would never be exceeded.
      
      This effectively put ::calculate_batch_size() in control of the batchbytesize
      logic. The negative impact of that logic was that 'batchsize' could be severely
      restricted in cases where we could have delivered a lot more rows in that batch.
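      
      A minimal sketch of how such worst-case clamping restricts the batch size
      (illustrative only; the function and variable names below are hypothetical,
      not the actual NDB API code):
      
        // Hypothetical sketch of the *old* client-side clamping: the row budget is
        // derived from a worst-case row length, even though most delivered rows are
        // far smaller than that worst case.
        unsigned old_clamped_batch_size(unsigned batch_size,          // e.g. 64 (default)
                                        unsigned batch_byte_size,     // e.g. 32768 (default)
                                        unsigned worst_case_row_len)  // all columns, varchars at max
        {
          if (batch_size * worst_case_row_len > batch_byte_size)
          {
            batch_size = batch_byte_size / worst_case_row_len;
            if (batch_size == 0)
              batch_size = 1;
          }
          return batch_size;
        }
      
      With a worst-case row length of ~3000 bytes this clamps the requested 64 rows
      down to 32768/3000 = ~10 rows per batch, matching the small 'BatchSize' values
      observed above.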
      
      However, there is logic in LQH+TUP which is intended to keep the delivered batches
      within the batchsize limits. This is a much better place to control this, as
      LQH & TUP know the exact size of the TRANSID_AI payload being delivered, taking
      the actual varchar lengths and only the selected columns into account.
      
      Debugging that logic, it turned out that it contained bugs in how the produced
      batch size was counted: a mixup between whether the 'length' was
      specified in number of bytes or in Uint32 words. So the above questionable
      ::calculate_batch_size() logic seems to have been invented only to
      circumvent this bug.
      
      Fixing that bug allowed us to leave the entire batch control to
      the LQH block.
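      
      A simplified, hypothetical sketch of that mixup and the fix: the TRANSID_AI
      payload length is reported in Uint32 words, while the batch limit is configured
      in bytes, so accumulating words directly into the byte counter lets a batch grow
      roughly 4x past 'BatchByteSize' (the struct and member names below are
      illustrative, not the actual LQH data structures):
      
        typedef unsigned int Uint32;
      
        struct ScanBatchState
        {
          Uint32 m_curr_batch_size_bytes;   // accumulated payload, in bytes
          Uint32 m_max_batch_bytes;         // configured BatchByteSize limit
      
          // Buggy variant: 'payload_words' (Uint32 words) counted as if it were bytes.
          void add_row_buggy(Uint32 payload_words)
          { m_curr_batch_size_bytes += payload_words; }
      
          // Fixed variant: convert words to bytes before accumulating.
          void add_row_fixed(Uint32 payload_words)
          { m_curr_batch_size_bytes += payload_words * sizeof(Uint32); }
      
          bool batch_byte_limit_reached() const
          { return m_curr_batch_size_bytes >= m_max_batch_bytes; }
        };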
      
      - ::calculate_batch_size() could then be significantly simplified.
      - The specified BatchSize & BatchByteSize arguments can now be passed
        directly as arguments in the SCANREQ signals.
      - This will likely give better performance (larger effective batches) when
        scanning a table with 'max record length > BatchByteSize / BatchSize'
        (~500 bytes with the default config).
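      
      A condensed paraphrase of the simplified ::calculate_batch_size() logic (the
      actual code is in the NdbReceiver.cpp hunk below; the standalone signature and
      config parameter names here are for illustration only):
      
        typedef unsigned int Uint32;
      
        void calculate_batch_size(Uint32 max_batch_size,       // config: BatchSize (rows)
                                  Uint32 max_batch_byte_size,  // config: BatchByteSize
                                  Uint32 max_scan_batch_size,  // config: MaxScanBatchSize (all frags)
                                  Uint32 max_parallel_op_per_scan,
                                  Uint32 parallelism,
                                  Uint32& batch_size,          // in: user preference, 0 => default
                                  Uint32& batch_byte_size)
        {
          batch_byte_size = max_batch_byte_size;
          if (batch_byte_size * parallelism > max_scan_batch_size)
            batch_byte_size = max_scan_batch_size / parallelism;  // cap total bytes per batch
      
          if (batch_size == 0 || batch_size > max_batch_size)
            batch_size = max_batch_size;
          if (batch_size > max_parallel_op_per_scan)
            batch_size = max_parallel_op_per_scan;
          if (batch_size > batch_byte_size)   // keep the 'rows <= bytes' invariant
            batch_size = batch_byte_size;
        }
      
      With the default config (BatchSize=64, BatchByteSize=32768) the byte limit now only
      kicks in when the *actual* delivered rows average more than 32768/64 = 512 bytes,
      instead of whenever the *worst-case* row length exceeds that threshold.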
      
      
      Fix a bytes/Uint32-words mixup in how m_curr_batch_size_bytes is counted
      ******
      Fix a bytes/Uint32-words mixup in how the SPJ adaptive parallelism logic counts m_totalBytes
      ******
      Simplify ::calculate_batch_size(), as LQH will now correctly stay within the specified batch_size rows/bytes limits
      ******
      Remove NdbRecord::m_max_transid_ai_bytes, which is now obsolete
      ******
      Remove unused args from NdbReceiver::calculate_batch_size()
      ******
      Fix SPJ's adaptive parallelism logic to also handle batch termination due to BatchByteSize being exhausted
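      
      A simplified, hypothetical paraphrase of that SPJ check (condensed from the
      DbspjMain.cpp hunk below; the standalone helper and struct are illustrative):
      
        typedef unsigned int Uint32;
        typedef signed   int Int32;
      
        struct NewBatchRequest { Uint32 bs_rows; Uint32 bs_bytes; bool send; };
      
        // After a partial first batch, ask the not-yet-started fragments for new
        // batches only if both the remaining row *and* byte budgets allow it.
        NewBatchRequest remaining_budget(Uint32 batch_size_rows, Uint32 batch_size_bytes,
                                         Uint32 maxCorrVal, Uint32 totalRows,
                                         Uint32 totalBytes, Uint32 parallelism,
                                         Uint32 frags_not_started)
        {
          const Int32 remainingRows  = (Int32)(batch_size_rows  - maxCorrVal);
          const Int32 remainingBytes = (Int32)(batch_size_bytes - totalBytes);
      
          NewBatchRequest req = { 0, 0, false };
          if (remainingRows  >= (Int32)frags_not_started &&
              remainingBytes >= (Int32)frags_not_started &&
              remainingRows  * (Int32)parallelism >= (Int32)(totalRows  * frags_not_started) &&
              remainingBytes * (Int32)parallelism >= (Int32)(totalBytes * frags_not_started))
          {
            req.bs_rows  = remainingRows  / frags_not_started;
            req.bs_bytes = remainingBytes / frags_not_started;
            if (req.bs_rows > req.bs_bytes)     // keep the 'rows <= bytes' invariant
              req.bs_rows = req.bs_bytes;
            req.send = true;
          }
          return req;
        }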

    modified:
      storage/ndb/include/kernel/signaldata/ScanFrag.hpp
      storage/ndb/include/kernel/signaldata/TupKey.hpp
      storage/ndb/include/ndbapi/NdbReceiver.hpp
      storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp
      storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp
      storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp
      storage/ndb/src/ndbapi/NdbQueryOperation.cpp
      storage/ndb/src/ndbapi/NdbReceiver.cpp
      storage/ndb/src/ndbapi/NdbRecord.hpp
      storage/ndb/src/ndbapi/NdbScanOperation.cpp
 4647 Pekka Nousiainen	2011-11-09 [merge]
      merge 7.0 to wl4124-new5

    modified:
      storage/ndb/src/kernel/blocks/dbspj/Dbspj.hpp
      storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp
=== modified file 'storage/ndb/include/kernel/signaldata/ScanFrag.hpp'
--- a/storage/ndb/include/kernel/signaldata/ScanFrag.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/include/kernel/signaldata/ScanFrag.hpp	2011-11-09 13:10:53 +0000
@@ -177,7 +177,7 @@ public:
   Uint32 fragmentCompleted;
   Uint32 transId1;
   Uint32 transId2;
-  Uint32 total_len;
+  Uint32 total_len;  // Total #Uint32 returned as TRANSID_AI
 };
 
 class ScanFragRef {

=== modified file 'storage/ndb/include/kernel/signaldata/TupKey.hpp'
--- a/storage/ndb/include/kernel/signaldata/TupKey.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/include/kernel/signaldata/TupKey.hpp	2011-11-09 13:10:53 +0000
@@ -91,7 +91,7 @@ private:
    * DATA VARIABLES
    */
   Uint32 userPtr;
-  Uint32 readLength;
+  Uint32 readLength;  // Length in Uint32 words
   Uint32 writeLength;
   Uint32 noFiredTriggers;
   Uint32 lastRow;

=== modified file 'storage/ndb/include/ndbapi/NdbReceiver.hpp'
--- a/storage/ndb/include/ndbapi/NdbReceiver.hpp	2011-08-17 12:36:56 +0000
+++ b/storage/ndb/include/ndbapi/NdbReceiver.hpp	2011-11-09 13:10:53 +0000
@@ -105,16 +105,13 @@ private:
 
   static
   void calculate_batch_size(const NdbImpl&,
-                            const NdbRecord *,
-                            const NdbRecAttr *first_rec_attr,
-                            Uint32, Uint32, Uint32&, Uint32&, Uint32&);
-
-  void calculate_batch_size(Uint32 key_size,
                             Uint32 parallelism,
                             Uint32& batch_size,
-                            Uint32& batch_byte_size,
-                            Uint32& first_batch_size,
-                            const NdbRecord *rec) const;
+                            Uint32& batch_byte_size);
+
+  void calculate_batch_size(Uint32 parallelism,
+                            Uint32& batch_size,
+                            Uint32& batch_byte_size) const;
 
   /*
     Set up buffers for receiving TRANSID_AI and KEYINFO20 signals

=== modified file 'storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp'
--- a/storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp	2011-10-28 14:17:25 +0000
+++ b/storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp	2011-11-09 13:10:53 +0000
@@ -11205,7 +11205,7 @@ void Dblqh::scanTupkeyConfLab(Signal* si
     tdata4 += sendKeyinfo20(signal, scanptr.p, tcConnectptr.p);
   }//if
   ndbrequire(scanptr.p->m_curr_batch_size_rows < MAX_PARALLEL_OP_PER_SCAN);
-  scanptr.p->m_curr_batch_size_bytes+= tdata4;
+  scanptr.p->m_curr_batch_size_bytes+= tdata4 * sizeof(Uint32);
   scanptr.p->m_curr_batch_size_rows = rows + 1;
   scanptr.p->m_last_row = tdata5;
   if (scanptr.p->check_scan_batch_completed() | tdata5){
@@ -11832,6 +11832,7 @@ void Dblqh::releaseScanrec(Signal* signa
 /* ------------------------------------------------------------------------
  * -------              SEND KEYINFO20 TO API                       ------- 
  *
+ * Return: Length in number of Uint32 words
  * ------------------------------------------------------------------------  */
 Uint32 Dblqh::sendKeyinfo20(Signal* signal, 
 			    ScanRecord * scanP, 
@@ -11968,7 +11969,9 @@ Uint32 Dblqh::sendKeyinfo20(Signal* sign
 void Dblqh::sendScanFragConf(Signal* signal, Uint32 scanCompleted) 
 {
   Uint32 completed_ops= scanptr.p->m_curr_batch_size_rows;
-  Uint32 total_len= scanptr.p->m_curr_batch_size_bytes;
+  Uint32 total_len= scanptr.p->m_curr_batch_size_bytes / sizeof(Uint32);
+  ndbassert((scanptr.p->m_curr_batch_size_bytes % sizeof(Uint32)) == 0);
+
   scanptr.p->scanTcWaiting = 0;
 
   if(ERROR_INSERTED(5037)){

=== modified file 'storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp'
--- a/storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp	2011-11-09 08:54:55 +0000
+++ b/storage/ndb/src/kernel/blocks/dbspj/DbspjMain.cpp	2011-11-09 13:10:53 +0000
@@ -5215,6 +5215,7 @@ Dbspj::scanIndex_send(Signal* signal,
   jam();
   ndbassert(bs_bytes > 0);
   ndbassert(bs_rows > 0);
+  ndbassert(bs_rows <= bs_bytes);
   /**
    * if (m_bits & prunemask):
    * - Range keys sliced out to each ScanFragHandle
@@ -5443,6 +5444,7 @@ Dbspj::scanIndex_execSCAN_FRAGCONF(Signa
 
   Uint32 rows = conf->completedOps;
   Uint32 done = conf->fragmentCompleted;
+  Uint32 bytes = conf->total_len * sizeof(Uint32);
 
   Uint32 state = fragPtr.p->m_state;
   ScanIndexData& data = treeNodePtr.p->m_scanindex_data;
@@ -5458,9 +5460,9 @@ Dbspj::scanIndex_execSCAN_FRAGCONF(Signa
 
   requestPtr.p->m_rows += rows;
   data.m_totalRows += rows;
-  data.m_totalBytes += conf->total_len;
+  data.m_totalBytes += bytes;
   data.m_largestBatchRows = MAX(data.m_largestBatchRows, rows);
-  data.m_largestBatchBytes = MAX(data.m_largestBatchBytes, conf->total_len);
+  data.m_largestBatchBytes = MAX(data.m_largestBatchBytes, bytes);
 
   if (!treeNodePtr.p->isLeaf())
   {
@@ -5555,37 +5557,43 @@ Dbspj::scanIndex_execSCAN_FRAGCONF(Signa
         org->batch_size_rows / data.m_parallelism * (data.m_parallelism - 1)
         + data.m_totalRows;
       
-      // Number of rows that we can still fetch in this batch.
+      // Number of rows & bytes that we can still fetch in this batch.
       const Int32 remainingRows 
         = static_cast<Int32>(org->batch_size_rows - maxCorrVal);
-      
+      const Int32 remainingBytes 
+        = static_cast<Int32>(org->batch_size_bytes - data.m_totalBytes);
+
       if (remainingRows >= data.m_frags_not_started &&
+          remainingBytes >= data.m_frags_not_started &&
           /**
            * Check that (remaning row capacity)/(remaining fragments) is 
            * greater or equal to (rows read so far)/(finished fragments).
            */
           remainingRows * static_cast<Int32>(data.m_parallelism) >=
-          static_cast<Int32>(data.m_totalRows * data.m_frags_not_started) &&
-          (org->batch_size_bytes - data.m_totalBytes) * data.m_parallelism >=
-          data.m_totalBytes * data.m_frags_not_started)
+            static_cast<Int32>(data.m_totalRows * data.m_frags_not_started) &&
+          remainingBytes * static_cast<Int32>(data.m_parallelism) >=
+            static_cast<Int32>(data.m_totalBytes * data.m_frags_not_started))
       {
         jam();
         Uint32 batchRange = maxCorrVal;
+        Uint32 bs_rows  = remainingRows / data.m_frags_not_started;
+        Uint32 bs_bytes = remainingBytes / data.m_frags_not_started;
+
         DEBUG("::scanIndex_execSCAN_FRAGCONF() first batch was not full."
               " Asking for new batches from " << data.m_frags_not_started <<
               " fragments with " << 
-              remainingRows / data.m_frags_not_started 
-              <<" rows and " << 
-              (org->batch_size_bytes - data.m_totalBytes)
-              / data.m_frags_not_started 
-              << " bytes.");
+              bs_rows  <<" rows and " << 
+              bs_bytes << " bytes.");
+
+        if (unlikely(bs_rows > bs_bytes))
+          bs_rows = bs_bytes;
+
         scanIndex_send(signal,
                        requestPtr,
                        treeNodePtr,
                        data.m_frags_not_started,
-                       (org->batch_size_bytes - data.m_totalBytes)
-                       / data.m_frags_not_started,
-                       remainingRows / data.m_frags_not_started,
+                       bs_bytes,
+                       bs_rows,
                        batchRange);
         return;
       }

=== modified file 'storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp'
--- a/storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp	2011-10-21 08:59:23 +0000
+++ b/storage/ndb/src/ndbapi/NdbDictionaryImpl.cpp	2011-11-09 13:10:53 +0000
@@ -6795,8 +6795,6 @@ NdbDictionaryImpl::initialiseColumnData(
   recCol->orgAttrSize= col->m_orgAttrSize;
   if (recCol->offset+recCol->maxSize > rec->m_row_size)
     rec->m_row_size= recCol->offset+recCol->maxSize;
-  /* Round data size to whole words + 4 bytes of AttributeHeader. */
-  rec->m_max_transid_ai_bytes+= (recCol->maxSize+7) & ~3;
   recCol->charset_info= col->m_cs;
   recCol->compare_function= NdbSqlUtil::getType(col->m_type).m_cmp;
   recCol->flags= 0;
@@ -6985,7 +6983,6 @@ NdbDictionaryImpl::createRecord(const Nd
   }
 
   rec->m_row_size= 0;
-  rec->m_max_transid_ai_bytes= 0;
   for (i= 0; i<length; i++)
   {
     const NdbDictionary::RecordSpecification *rs= &recSpec[i];

=== modified file 'storage/ndb/src/ndbapi/NdbQueryOperation.cpp'
--- a/storage/ndb/src/ndbapi/NdbQueryOperation.cpp	2011-10-28 13:38:36 +0000
+++ b/storage/ndb/src/ndbapi/NdbQueryOperation.cpp	2011-11-09 13:10:53 +0000
@@ -3058,20 +3058,16 @@ NdbQueryImpl::doSend(int nodeId, bool la
     scanTabReq->transId2 = (Uint32) (transId >> 32);
 
     Uint32 batchRows = root.getMaxBatchRows();
-    Uint32 batchByteSize, firstBatchRows;
+    Uint32 batchByteSize;
     NdbReceiver::calculate_batch_size(* ndb.theImpl,
-                                      root.m_ndbRecord,
-                                      root.m_firstRecAttr,
-                                      0, // Key size.
                                       getRootFragCount(),
                                       batchRows,
-                                      batchByteSize,
-                                      firstBatchRows);
+                                      batchByteSize);
     assert(batchRows==root.getMaxBatchRows());
-    assert(batchRows==firstBatchRows);
+    assert(batchRows<=batchByteSize);
     ScanTabReq::setScanBatch(reqInfo, batchRows);
     scanTabReq->batch_byte_size = batchByteSize;
-    scanTabReq->first_batch_size = firstBatchRows;
+    scanTabReq->first_batch_size = batchRows;
 
     ScanTabReq::setViaSPJFlag(reqInfo, 1);
     ScanTabReq::setPassAllConfsFlag(reqInfo, 1);
@@ -4361,11 +4357,11 @@ NdbQueryOperationImpl
      * We must thus make sure that we do not set a batch size for the scan 
      * that exceeds what any of its scan descendants can use.
      *
-     * Ignore calculated 'batchByteSize' and 'firstBatchRows' 
+     * Ignore calculated 'batchByteSize' 
      * here - Recalculated when building signal after max-batchRows has been 
      * determined.
      */
-    Uint32 batchByteSize, firstBatchRows;
+    Uint32 batchByteSize;
     /**
      * myClosestScan->m_maxBatchRows may be zero to indicate that we
      * should use default values, or non-zero if the application had an 
@@ -4373,18 +4369,14 @@ NdbQueryOperationImpl
      */
     maxBatchRows = myClosestScan->m_maxBatchRows;
     NdbReceiver::calculate_batch_size(* ndb.theImpl,
-                                      m_ndbRecord,
-                                      m_firstRecAttr,
-                                      0, // Key size.
                                       getRoot().m_parallelism
-                                      == Parallelism_max ?
-                                      m_queryImpl.getRootFragCount() :
-                                      getRoot().m_parallelism,
+                                      == Parallelism_max
+                                      ? m_queryImpl.getRootFragCount()
+                                      : getRoot().m_parallelism,
                                       maxBatchRows,
-                                      batchByteSize,
-                                      firstBatchRows);
+                                      batchByteSize);
     assert(maxBatchRows > 0);
-    assert(firstBatchRows == maxBatchRows);
+    assert(maxBatchRows <= batchByteSize);
   }
 
   // Find the largest value that is acceptable to all lookup descendants.
@@ -4554,17 +4546,13 @@ NdbQueryOperationImpl::prepareAttrInfo(U
     Ndb& ndb = *m_queryImpl.getNdbTransaction().getNdb();
 
     Uint32 batchRows = getMaxBatchRows();
-    Uint32 batchByteSize, firstBatchRows;
+    Uint32 batchByteSize;
     NdbReceiver::calculate_batch_size(* ndb.theImpl,
-                                      m_ndbRecord,
-                                      m_firstRecAttr,
-                                      0, // Key size.
                                       m_queryImpl.getRootFragCount(),
                                       batchRows,
-                                      batchByteSize,
-                                      firstBatchRows);
-    assert(batchRows == firstBatchRows);
+                                      batchByteSize);
     assert(batchRows == getMaxBatchRows());
+    assert(batchRows <= batchByteSize);
     assert(m_parallelism == Parallelism_max ||
            m_parallelism == Parallelism_adaptive);
     if (m_parallelism == Parallelism_max)

=== modified file 'storage/ndb/src/ndbapi/NdbReceiver.cpp'
--- a/storage/ndb/src/ndbapi/NdbReceiver.cpp	2011-08-17 12:36:56 +0000
+++ b/storage/ndb/src/ndbapi/NdbReceiver.cpp	2011-11-09 13:10:53 +0000
@@ -155,88 +155,57 @@ NdbReceiver::prepareRead(char *buf, Uint
   Compute the batch size (rows between each NEXT_TABREQ / SCAN_TABCONF) to
   use, taking into account limits in the transporter, user preference, etc.
 
-  Hm, there are some magic overhead numbers (4 bytes/attr, 32 bytes/row) here,
-  would be nice with some explanation on how these numbers were derived.
+  It is the responsibility of the batch producer (LQH+TUP) to
+  stay within these 'batch_size' and 'batch_byte_size' limits.:
 
-  TODO : Check whether these numbers need to be revised w.r.t. read packed
+  - It should stay strictly within the 'batch_size' (#rows) limit.
+  - It is allowed to overallocate the 'batch_byte_size' (slightly)
+    in order to complete the current row when it hit the limit.
+
+  The client should be prepared to receive, and buffer, upto 
+  'batch_size' rows from each fragment.
+  ::ndbrecord_rowsize() might be usefull for calculating the
+  buffersize to allocate for this resultset.
 */
 //static
 void
 NdbReceiver::calculate_batch_size(const NdbImpl& theImpl,
-                                  const NdbRecord *record,
-                                  const NdbRecAttr *first_rec_attr,
-                                  Uint32 key_size,
                                   Uint32 parallelism,
                                   Uint32& batch_size,
-                                  Uint32& batch_byte_size,
-                                  Uint32& first_batch_size)
+                                  Uint32& batch_byte_size)
 {
   const NdbApiConfig & cfg = theImpl.get_ndbapi_config_parameters();
   const Uint32 max_scan_batch_size= cfg.m_scan_batch_size;
   const Uint32 max_batch_byte_size= cfg.m_batch_byte_size;
   const Uint32 max_batch_size= cfg.m_batch_size;
 
-  Uint32 tot_size= (key_size ? (key_size + 32) : 0); //key + signal overhead
-  if (record)
-  {
-    tot_size+= record->m_max_transid_ai_bytes;
-  }
-
-  const NdbRecAttr *rec_attr= first_rec_attr;
-  while (rec_attr != NULL) {
-    Uint32 attr_size= rec_attr->getColumn()->getSizeInBytes();
-    attr_size= ((attr_size + 4 + 3) >> 2) << 2; //Even to word + overhead
-    tot_size+= attr_size;
-    rec_attr= rec_attr->next();
+  batch_byte_size= max_batch_byte_size;
+  if (batch_byte_size * parallelism > max_scan_batch_size) {
+    batch_byte_size= max_scan_batch_size / parallelism;
   }
 
-  tot_size+= 32; //include signal overhead
-
-  /**
-   * Now we calculate the batch size by trying to get upto SCAN_BATCH_SIZE
-   * bytes sent for each batch from each node. We do however ensure that
-   * no more than MAX_SCAN_BATCH_SIZE is sent from all nodes in total per
-   * batch.
-   */
-  if (batch_size == 0)
-  {
-    batch_byte_size= max_batch_byte_size;
+  if (batch_size == 0 || batch_size > max_batch_size) {
+    batch_size= max_batch_size;
   }
-  else
-  {
-    batch_byte_size= batch_size * tot_size;
+  if (unlikely(batch_size > MAX_PARALLEL_OP_PER_SCAN)) {
+    batch_size= MAX_PARALLEL_OP_PER_SCAN;
   }
-  
-  if (batch_byte_size * parallelism > max_scan_batch_size) {
-    batch_byte_size= max_scan_batch_size / parallelism;
+  if (unlikely(batch_size > batch_byte_size)) {
+    batch_size= batch_byte_size;
   }
-  batch_size= batch_byte_size / tot_size;
-  if (batch_size == 0) {
-    batch_size= 1;
-  } else {
-    if (batch_size > max_batch_size) {
-      batch_size= max_batch_size;
-    } else if (batch_size > MAX_PARALLEL_OP_PER_SCAN) {
-      batch_size= MAX_PARALLEL_OP_PER_SCAN;
-    }
-  }
-  first_batch_size= batch_size;
+
   return;
 }
 
 void
-NdbReceiver::calculate_batch_size(Uint32 key_size,
-                                  Uint32 parallelism,
+NdbReceiver::calculate_batch_size(Uint32 parallelism,
                                   Uint32& batch_size,
-                                  Uint32& batch_byte_size,
-                                  Uint32& first_batch_size,
-                                  const NdbRecord *record) const
+                                  Uint32& batch_byte_size) const
 {
   calculate_batch_size(* m_ndb->theImpl,
-                       record,
-                       theFirstRecAttr,
-                       key_size, parallelism, batch_size, batch_byte_size,
-                       first_batch_size);
+                       parallelism,
+                       batch_size,
+                       batch_byte_size);
 }
 
 void

=== modified file 'storage/ndb/src/ndbapi/NdbRecord.hpp'
--- a/storage/ndb/src/ndbapi/NdbRecord.hpp	2011-06-30 15:59:25 +0000
+++ b/storage/ndb/src/ndbapi/NdbRecord.hpp	2011-11-09 13:10:53 +0000
@@ -189,8 +189,6 @@ public:
   Uint32 tableVersion;
   /* Copy of table->m_keyLenInWords. */
   Uint32 m_keyLenInWords;
-  /* Total maximum size of TRANSID_AI data (for computing batch size). */
-  Uint32 m_max_transid_ai_bytes;
   /**
    * Number of distribution keys (usually == number of primary keys).
    *

=== modified file 'storage/ndb/src/ndbapi/NdbScanOperation.cpp'
--- a/storage/ndb/src/ndbapi/NdbScanOperation.cpp	2011-05-17 12:47:21 +0000
+++ b/storage/ndb/src/ndbapi/NdbScanOperation.cpp	2011-11-09 13:10:53 +0000
@@ -2284,16 +2284,13 @@ int NdbScanOperation::prepareSendScan(Ui
    */
   ScanTabReq * req = CAST_PTR(ScanTabReq, theSCAN_TABREQ->getDataPtrSend());
   Uint32 batch_size = req->first_batch_size; // User specified
-  Uint32 batch_byte_size, first_batch_size;
-  theReceiver.calculate_batch_size(key_size,
-                                   theParallelism,
+  Uint32 batch_byte_size;
+  theReceiver.calculate_batch_size(theParallelism,
                                    batch_size,
-                                   batch_byte_size,
-                                   first_batch_size,
-                                   m_attribute_record);
+                                   batch_byte_size);
   ScanTabReq::setScanBatch(req->requestInfo, batch_size);
   req->batch_byte_size= batch_byte_size;
-  req->first_batch_size= first_batch_size;
+  req->first_batch_size= batch_size;
 
   /**
    * Set keyinfo, nodisk and distribution key flags in 

No bundle (reason: useless for push emails).