
List: Commits
From: He Zhenxing
Date: September 29 2008 2:04pm
Subject: bzr push into mysql-5.1 branch (hezx:2689 to 2690) Bug#35843
 2690 He Zhenxing	2008-09-26
      BUG#35843 Slow replication slave when using partitioned myisam table
            
      To improve performance when replicating to partitioned MyISAM
      tables in row-based format, the number of rows in the current
      rows log event is estimated and used to set up the storage
      engine for bulk inserts.
modified:
  sql/log_event.cc

 2689 Mattias Jonsson	2008-09-20 [merge]
      merge (update of local branch before push)
modified:
  mysql-test/r/compare.result
  mysql-test/t/compare.test
  mysql-test/t/mysqldump-max.test
  sql/item.cc

=== modified file 'sql/log_event.cc'
--- a/sql/log_event.cc	2008-09-03 10:01:18 +0000
+++ b/sql/log_event.cc	2008-09-26 09:39:47 +0000
@@ -8061,7 +8061,6 @@ Write_rows_log_event::do_before_row_oper
     */
   }
 
-  m_table->file->ha_start_bulk_insert(0);
   /*
     We need TIMESTAMP_NO_AUTO_SET otherwise ha_write_row() will not use fill
     any TIMESTAMP column with data from the row but instead will use
@@ -8200,7 +8199,16 @@ Rows_log_event::write_row(const Relay_lo
   
   /* unpack row into table->record[0] */
   error= unpack_current_row(rli); // TODO: how to handle errors?
-
+  if (m_curr_row == m_rows_buf)
+  {
+    /* This is the first row to be inserted; estimate the number of rows
+       in the event from the size of this first row and use that value
+       to initialize the storage engine for bulk insertion. */
+    ulong estimated_rows= (m_rows_end - m_curr_row) / (m_curr_row_end - m_curr_row);
+    m_table->file->ha_start_bulk_insert(estimated_rows);
+  }
+  
+  
 #ifndef DBUG_OFF
   DBUG_DUMP("record[0]", table->record[0], table->s->reclength);
   DBUG_PRINT_BITSET("debug", "write_set = %s", table->write_set);
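
For reference, a minimal standalone C++ sketch of the estimate made in
the second hunk above: remaining event-buffer bytes divided by the size
of the first unpacked row. Everything here other than the m_* members it
mirrors (start_bulk_insert, the buffer sizes) is a hypothetical stand-in,
not MySQL's API.

  // Minimal sketch (not MySQL code). Assumes every row in the event is
  // roughly the size of the first row.
  #include <cstdio>

  // Hypothetical stub standing in for handler::ha_start_bulk_insert().
  static void start_bulk_insert(unsigned long estimated_rows)
  {
    std::printf("sizing bulk insert for ~%lu rows\n", estimated_rows);
  }

  int main()
  {
    unsigned char buf[4096]= {0};                     // stand-in rows buffer
    const unsigned char *rows_buf= buf;               // m_rows_buf
    const unsigned char *rows_end= buf + sizeof(buf); // m_rows_end
    const unsigned char *curr_row= rows_buf;          // m_curr_row
    const unsigned char *curr_row_end= curr_row + 64; // m_curr_row_end

    if (curr_row == rows_buf)                         // first row of the event
    {
      /* Remaining buffer bytes divided by the first row's size gives a
         rough row count for the whole event. */
      unsigned long estimated_rows=
        (unsigned long) ((rows_end - curr_row) / (curr_row_end - curr_row));
      start_bulk_insert(estimated_rows);              // prints ~64 here
    }
    return 0;
  }

With a 4096-byte event buffer and a 64-byte first row, the estimate is 64
rows; passing a nonzero estimate to ha_start_bulk_insert lets the engine
size its bulk-insert buffers up front rather than treating each replicated
row as an isolated insert, which is what the removed ha_start_bulk_insert(0)
call in the first hunk effectively did.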
