From: marc.alff    Date: December 19 2006 5:52am
Subject:bk commit into 5.0 tree (malff:1.2320) BUG#21554
Below is the list of changes that have just been committed into a local
5.0 repository of marcsql. When marcsql does a push these changes will
be propagated to the main repository and, within 24 hours after the
push, to the public repository.
For information on how to access the public repository

ChangeSet@stripped, 2006-12-18 22:52:49-07:00, malff@weblab.(none) +5 -0
  Bug#21554 ( violates C++ aliasing rules)
  This is a compile-time bug, reported by the GCC development team against GCC 4.2.
  The issue is that on Linux platforms, for which HAVE_ATOMIC_ADD is defined,
  casting a pointer from 'unsigned long*' to 'atomic_t*' violates the
  assumptions GCC makes when optimizing code, namely that pointers to
  different types do not point to the same memory.
  In the case of atomic_t, which is found on Linux platforms under the following
  header files (for example, these are the headers on my machine):
  - /usr/include/asm-i386/atomic.h
  - /usr/include/asm-x86_64/atomic.h
  the comments around the definition of atomic_t are very specific:
   atomic_t should be 32 bit signed type
   Make sure gcc doesn't try to be clever and move things around
   on us. We need to use _exactly_ the address the user gave us,
   not some alias that contains the same information.
   typedef struct { volatile int counter; } atomic_t;
  In MySQL, the code performed the following cast when using
  the variable Cversion:
  - static ulong volatile Cversion --> atomic_t
  While it is unknown whether this cast can actually cause a bug, assuming
  that "ulong volatile" and atomic_t, which are maintained in different
  code bases (MySQL and the Linux per-architecture asm header files), really have:
  - the same storage qualifier (they do not: volatile variable vs. volatile member),
  - the same sign (they do not),
  - the same size,
  - the same alignment,
  is fragile and can be improved.
  With this fix, the code has been changed as follows:
  A new opaque type, thread_safe_counter_t, has been defined for use with
  the existing thread_safe_xxx helpers.
  When the thread_safe_xxx helpers are implemented using atomic_t (Linux),
  the cast to "atomic_t*" has been removed, which forces the caller to
  actually provide a thread_safe_counter_t variable, or fail to compile.
  This is a step towards:
  - removing the cast, which resolves the "C++ aliasing rule violation",
  - making the code more strongly typed, and easier to maintain.
  For example, the following invalid uses will fail to build, where the same
  constructs would previously compile and produce bugs at runtime:
    thread_safe_counter_t my_cnt;
    my_cnt = 0; // build break
    my_cnt ++; // build break, previously would ignore thread safety
    foo = my_cnt; // build break
  Instead, the caller is forced to use the thread safe APIs:
    thread_safe_counter_t my_cnt;
    thread_safe_counter_set(my_cnt, 0);
    thread_safe_increment(my_cnt); // safe, uses atomic_add
    foo = thread_safe_counter_read(my_cnt);
  Atomic functions were also used previously when building MySQL with
  #define SAFE_STATISTICS, for the statistic_increment counters.
  It is unclear whether SAFE_STATISTICS is truly supported,
  since this compile flag is undocumented and is not set by the autoconf
  configure script.
  Looking at the implementation of fill_status(), however,
  it appears that the intent of SAFE_STATISTICS is to prevent threads from
  modifying the statistics counters located in thd->status_var while the function
  calc_sum_of_all_status() aggregates counters from all threads.
  This is achieved by holding LOCK_status during the aggregation.
  The previous implementation of statistic_increment for platforms providing
  atomic operations (HAVE_ATOMIC_ADD) did not, however, honor the mutex,
  making calls to statistic_increment(thd->, &LOCK_status)
  *unsafe* even in SAFE_STATISTICS builds.
  With this change, statistic_increment always honors the mutex.
  This makes statistics safe in SAFE_STATISTICS builds, and also avoids
  the need to introduce an opaque type similar to thread_safe_counter_t
  for every non-critical counter.
  Building with SAFE_STATISTICS defined has been tested, and the test suite
  passes with this flag (on a platform with #define HAVE_ATOMIC_ADD).
  The following counters:
  - delayed_insert_writes,
  - delayed_rows_in_use,
  - delayed_insert_errors
  were previously implemented using thread_safe_increment() instead of
  statistic_increment(), and as a result now use thread_safe_counter_t
  to preserve the original behavior (these counters are always safe).

  include/my_pthread.h@stripped, 2006-12-18 21:47:52-07:00, malff@weblab.(none) +41 -9
    Enforce strong type checking for thread safe counters.

  sql/, 2006-12-18 21:47:52-07:00, malff@weblab.(none) +2 -2
    thd->status_var counters should use statistic_increment (build break).

  sql/mysql_priv.h@stripped, 2006-12-18 21:47:52-07:00, malff@weblab.(none) +11 -2
    Enforce strong type checking for thread safe counters.

  sql/, 2006-12-18 21:47:52-07:00, malff@weblab.(none) +10 -4
    Enforce strong type checking for thread safe counters.

  sql/, 2006-12-18 21:47:52-07:00, malff@weblab.(none) +3 -3
    Enforce strong type checking for thread safe counters.

# This is a BitKeeper patch.  What follows are the unified diffs for the
# set of deltas contained in the patch.  The rest of the patch, the part
# that BitKeeper cares about, is below these diffs.
# User:	malff
# Host:	weblab.(none)
# Root:	/home/marcsql/TREE/mysql-5.0-21554

--- 1.91/include/my_pthread.h	2006-12-18 22:52:54 -07:00
+++ 1.92/include/my_pthread.h	2006-12-18 22:52:54 -07:00
@@ -685,15 +685,33 @@ extern struct st_my_thread_var *_my_thre
 extern pthread_t shutdown_th, main_th, signal_th;
-	/* statistics_xxx functions are for not essential statistic */
 #ifndef thread_safe_increment
+/*
+  thread_safe_xxx functions are for critical statistic or counters.
+  The implementation is guaranteed to be thread safe, on all platforms.
+  Note that the lock given may or may not be honored, depending on
+  the platform used:
+  - on platforms providing native atomic operations (currently, Linux),
+  platform specific code is preferred, and the lock is ignored.
+  - on platforms not providing atomic operations, the mutex provided by
+  the caller is used instead.
+  As a result, calling code should *not* assume the counter is protected
+  by the mutex given.
+*/
 #ifdef HAVE_ATOMIC_ADD
-#define thread_safe_increment(V,L) atomic_inc((atomic_t*) &V)
-#define thread_safe_decrement(V,L) atomic_dec((atomic_t*) &V)
-#define thread_safe_add(V,C,L)     atomic_add((C),(atomic_t*) &V)
-#define thread_safe_sub(V,C,L)     atomic_sub((C),(atomic_t*) &V)
+#define thread_safe_counter_t      atomic_t
+#define thread_safe_counter_init(i) ATOMIC_INIT(i)
+#define thread_safe_counter_set(V, i) atomic_set(& V, i)
+#define thread_safe_counter_read(V) atomic_read(& V)
+#define thread_safe_increment(V,L) atomic_inc(&V)
+#define thread_safe_decrement(V,L) atomic_dec(&V)
+#define thread_safe_add(V,C,L)     atomic_add((C),&V)
+#define thread_safe_sub(V,C,L)     atomic_sub((C),&V)
 #else
+#define thread_safe_counter_t      unsigned long
+#define thread_safe_counter_init(i) i
+#define thread_safe_counter_set(V, i) (V) = (i)
+#define thread_safe_counter_read(V) (V)
 #define thread_safe_increment(V,L) \
         (pthread_mutex_lock((L)), (V)++, pthread_mutex_unlock((L)))
 #define thread_safe_decrement(V,L) \
@@ -702,10 +720,24 @@ extern pthread_t shutdown_th, main_th, s
 #define thread_safe_sub(V,C,L) \
         (pthread_mutex_lock((L)), (V)-=(C), pthread_mutex_unlock((L)))
 #endif /* HAVE_ATOMIC_ADD */
+/*
+  statistics_xxx functions are for non critical statistic:
+  When compiling with SAFE_STATISTICS:
+  - race conditions can not occur.
+  - the lock given is always honored, which may cause performance degradation.
+  When compiling without SAFE_STATISTICS:
+  - race conditions can occur, making the result slightly inaccurate.
+  - the lock given is not honored.
+*/
 #ifdef SAFE_STATISTICS
-#define statistic_increment(V,L)   thread_safe_increment((V),(L))
-#define statistic_decrement(V,L)   thread_safe_decrement((V),(L))
-#define statistic_add(V,C,L)       thread_safe_add((V),(C),(L))
+#define statistic_increment(V,L) \
+        (pthread_mutex_lock((L)), (V)++, pthread_mutex_unlock((L)))
+#define statistic_decrement(V,L) \
+        (pthread_mutex_lock((L)), (V)--, pthread_mutex_unlock((L)))
+#define statistic_add(V,C,L) \
+        (pthread_mutex_lock((L)), (V)+=(C), pthread_mutex_unlock((L)))
 #else
 #define statistic_decrement(V,L) (V)--
 #define statistic_increment(V,L) (V)++

--- 1.170/sql/	2006-12-18 22:52:54 -07:00
+++ 1.171/sql/	2006-12-18 22:52:54 -07:00
@@ -1718,8 +1718,8 @@ int ha_myisam::ft_read(byte * buf)
   if (!ft_handler)
     return -1;
-  thread_safe_increment(table->in_use->status_var.ha_read_next_count,
-			&LOCK_status); // why ?
+  statistic_increment(table->in_use->status_var.ha_read_next_count,
+                      &LOCK_status);
   error=ft_handler->please->read_next(ft_handler,(char*) buf);

--- 1.423/sql/mysql_priv.h	2006-12-18 22:52:54 -07:00
+++ 1.424/sql/mysql_priv.h	2006-12-18 22:52:54 -07:00
@@ -1218,8 +1218,17 @@ extern ulong binlog_cache_use, binlog_ca
 extern ulong aborted_threads,aborted_connects;
 extern ulong delayed_insert_timeout;
 extern ulong delayed_insert_limit, delayed_queue_size;
-extern ulong delayed_insert_threads, delayed_insert_writes;
-extern ulong delayed_rows_in_use,delayed_insert_errors;
+extern ulong delayed_insert_threads;
+/** Statistics, counts the number of INSERT DELAYED writes. */
+extern thread_safe_counter_t delayed_insert_writes;
+/** Statistics, counts the number of rows in use by INSERT DELAYED threads. */
+extern thread_safe_counter_t delayed_rows_in_use;
+/** Statistics, counts the number of errors in INSERT DELAYED threads. */
+extern thread_safe_counter_t delayed_insert_errors;
 extern ulong slave_open_temp_tables;
 extern ulong query_cache_size, query_cache_min_res_unit;
 extern ulong slow_launch_threads, slow_launch_time;

--- 1.580/sql/	2006-12-18 22:52:54 -07:00
+++ 1.581/sql/	2006-12-18 22:52:54 -07:00
@@ -395,8 +395,11 @@ ulong refresh_version, flush_version;	/*
 query_id_t query_id;
 ulong aborted_threads, aborted_connects;
 ulong delayed_insert_timeout, delayed_insert_limit, delayed_queue_size;
-ulong delayed_insert_threads, delayed_insert_writes, delayed_rows_in_use;
-ulong delayed_insert_errors,flush_time;
+ulong delayed_insert_threads;
+thread_safe_counter_t delayed_insert_writes;
+thread_safe_counter_t delayed_rows_in_use;
+thread_safe_counter_t delayed_insert_errors;
+ulong flush_time;
 ulong specialflag=0;
 ulong binlog_cache_use= 0, binlog_cache_disk_use= 0;
 ulong max_connections, max_connect_errors;
@@ -6341,8 +6344,11 @@ static void mysql_init_variables(void)
   abort_loop= select_thread_in_use= signal_thread_in_use= 0;
   ready_to_exit= shutdown_in_progress= grant_option= 0;
   aborted_threads= aborted_connects= 0;
-  delayed_insert_threads= delayed_insert_writes= delayed_rows_in_use= 0;
-  delayed_insert_errors= thread_created= 0;
+  delayed_insert_threads= 0;
+  thread_safe_counter_set(delayed_insert_writes, 0);
+  thread_safe_counter_set(delayed_rows_in_use, 0);
+  thread_safe_counter_set(delayed_insert_errors, 0);
+  thread_created= 0;
   specialflag= 0;
   binlog_cache_use=  binlog_cache_disk_use= 0;
   max_used_connections= slow_launch_threads = 0;

--- 1.15/sql/	2006-12-18 22:52:54 -07:00
+++ 1.16/sql/	2006-12-18 22:52:54 -07:00
@@ -22,7 +22,7 @@
 #include "sp_head.h"
 static pthread_mutex_t Cversion_lock;
-static ulong volatile Cversion= 0;
+static thread_safe_counter_t Cversion= thread_safe_counter_init(0);
@@ -130,7 +130,7 @@ void sp_cache_insert(sp_cache **cp, sp_h
     if (!(c= new sp_cache()))
       return;                                   // End of memory error
-    c->version= Cversion;      // No need to lock when reading long variable
+    c->version= thread_safe_counter_read(Cversion);
   DBUG_PRINT("info",("sp_cache: inserting: %.*s", sp->m_qname.length,
@@ -200,7 +200,7 @@ void sp_cache_flush_obsolete(sp_cache **
   if (c)
     ulong v;
-    v= Cversion;                 // No need to lock when reading long variable
+    v= thread_safe_counter_read(Cversion);
     if (c->version < v)
       DBUG_PRINT("info",("sp_cache: deleting all functions"));