From:Anurag Shekhar Date:September 14 2009 9:17am
Subject:bzr commit into mysql-5.1-bugteam branch (anurag.shekhar:3124)
Bug#45840
#At file:///home/anurag/mysqlsrc/mysql-5.1-bugteam-45840/ based on revid:luis.soares@stripped

 3124 Anurag Shekhar	2009-09-14
      Bug #45840 read_buffer_size allocated for each partition when 
         "insert into.. select * from"
      
      When inserting into a partitioned table using 'insert into
      <target> select * from <src>', read_buffer_size bytes of memory are
      allocated for each partition in the target table.
      
      This resulted in large memory consumption when the number of
      partitions is high.
      
      This patch introduces a new method that estimates the buffer
      size required for each partition and caps the total buffer size
      used at 10 * read_buffer_size.
     @ sql/ha_partition.cc
        Introduced a method ha_partition::estimate_read_buffer_size
        to estimate buffer size required for each partition. 
        The method ha_partition::start_part_bulk_insert was updated
        to adjust read_buffer_size before invoking bulk insert
        in the storage engines.
     @ sql/ha_partition.h
        Introduced a method ha_partition::estimate_read_buffer_size.

    modified:
      sql/ha_partition.cc
      sql/ha_partition.h
=== modified file 'sql/ha_partition.cc'
--- a/sql/ha_partition.cc	2009-09-11 22:40:23 +0000
+++ b/sql/ha_partition.cc	2009-09-14 09:17:52 +0000
@@ -3284,15 +3284,49 @@ void ha_partition::start_bulk_insert(ha_
 */
 void ha_partition::start_part_bulk_insert(uint part_id)
 {
+  long old_buffer_size;
+  THD *thd;
   if (!bitmap_is_set(&m_bulk_insert_started, part_id) &&
       bitmap_is_set(&m_bulk_insert_started, m_tot_parts))
   {
+    thd= ha_thd();
+    old_buffer_size= thd->variables.read_buff_size;
+    /* Update read_buffer_size for this partition */
+    thd->variables.read_buff_size= estimate_read_buffer_size(old_buffer_size);
     m_file[part_id]->ha_start_bulk_insert(guess_bulk_insert_rows());
     bitmap_set_bit(&m_bulk_insert_started, part_id);
+    thd->variables.read_buff_size= old_buffer_size;
   }
   m_bulk_inserted_rows++;
 }
 
+/*
+  Estimate the read buffer size for each partition.
+*/
+long ha_partition::estimate_read_buffer_size(long original_size)
+{
+  /*
+    If the number of rows to insert is less than 10, but not 0,
+    retain the original buffer size.
+  */
+  if (estimation_rows_to_insert && (estimation_rows_to_insert < 10))
+    return (original_size);
+  /*
+    If this is the first insert and the partition function is
+    monotonic, allow using the buffer size originally set.
+  */
+  if (!m_bulk_inserted_rows &&
+      m_part_func_monotonicity_info != NON_MONOTONIC &&
+      m_tot_parts > 1)
+    return original_size;
+  /*
+    Allow the total buffer used across all partitions to go
+    up to 10 * read_buffer_size.
+  */
+  if (m_tot_parts < 10)
+    return original_size;
+  return (original_size * 10 / m_tot_parts);
+}
 
 /*
   Try to predict the number of inserts into this partition.

=== modified file 'sql/ha_partition.h'
--- a/sql/ha_partition.h	2009-09-04 13:02:15 +0000
+++ b/sql/ha_partition.h	2009-09-14 09:17:52 +0000
@@ -368,6 +368,7 @@ public:
 private:
   ha_rows guess_bulk_insert_rows();
   void start_part_bulk_insert(uint part_id);
+  long estimate_read_buffer_size(long original_size);
 public:
 
   virtual bool is_fatal_error(int error, uint flags)


Attachment: [text/bzr-bundle] bzr/anurag.shekhar@sun.com-20090914091752-pazl9pxzhbhtj8rc.bundle