List: Commits
From: Gleb Shchepa
Date: March 31, 2010 12:21pm
Subject: bzr commit into mysql-next-mr branch (gshchepa:3128) Bug#30584 Bug#36569
#At file:///mnt/sda7/work/mysql-next-mr/ based on revid:alik@stripped

 3128 Gleb Shchepa	2010-03-31
      Bug #30584: delete with order by and limit clauses does not
                  use limit efficiently
      Bug #36569: UPDATE ... WHERE ... ORDER BY... always does a
                  filesort even if not required
      
      During execution of single-table UPDATE and DELETE
      statements with an ORDER BY clause, the optimizer used the
      filesort algorithm even when it was not required.
      
      An extra optimization has been added: when applicable,
      single-table UPDATE/DELETE statements now use an existing
      index instead of filesort, as similar SELECT queries do.
      I.e. from now on, most single-table UPDATE and DELETE
      statements should show the same SHOW STATUS LIKE 'Sort%'
      output as similar SELECT queries.
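      In effect, an ordered UPDATE or DELETE over an indexed column
      no longer performs a filesort. A condensed sketch of the
      bundled test case (illustrative only; exact counter values
      depend on the storage engine and table size):
      
        CREATE TABLE t2 (a INT, i INT PRIMARY KEY);
        -- ... populate t2 ...
        FLUSH STATUS;
        UPDATE t2 SET a = 10 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
        SHOW SESSION STATUS LIKE 'Sort%';
        -- With this patch all Sort_* counters stay 0: the PRIMARY KEY
        -- already provides the requested order, so no filesort is done
        -- (previously Sort_rows/Sort_scan were non-zero here).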
     @ mysql-test/r/single_delete_update.result
        Test case for bug #30584 and bug #36569.
     @ mysql-test/r/update.result
        Updated result for optimized query, bug #30584.
        Note:
        "Handler_read_last 1" omitted, see bug 52312:
        lost Handler_read_last status variable.
     @ mysql-test/t/single_delete_update.test
        Test case for bug #30584 and bug #36569.
     @ sql/opt_range.cc
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        * get_index_for_order() has been rewritten entirely and moved
          to sql_select.cc
        
        New functions:
        * QUICK_RANGE_SELECT::make_reverse()
        * QUICK_SELECT_I::is_ordered_by()
     @ sql/opt_range.h
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        * get_index_for_order() has been rewritten entirely and moved
          to sql_select.cc
        
        New functions:
        * QUICK_SELECT_I::make_reverse()
        * QUICK_SELECT_I::is_ordered_by()
        * SQL_SELECT::set_quick()
     @ sql/records.cc
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        * init_read_record_idx() has been modified to allow a reverse index scan
        
        New functions:
        * rr_index_last()
        * rr_index_desc()
     @ sql/records.h
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        init_read_record_idx() has been modified to allow a reverse index scan
     @ sql/sql_delete.cc
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        mysql_delete: an optimization has been added to skip
        double sorting with an ORDER BY clause when the select
        result ordering is already acceptable.
     @ sql/sql_select.cc
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        * The const_expression_in_where function has been modified
          to accept both Item and Field pointers via intermediate
          wrap_field_or_item class object.
        
        * Static test_if_order_by_key function has been moved to
          public scope to share with QUICK_SELECT_I::is_ordered_by().
        
        New functions:
        * get_index_for_order()
        * test_if_cheaper_ordering() has been extracted from
          test_if_skip_sort_order() to share with get_index_for_order()
        * simple_remove_const()
        * const_item_in_where() and const_field_in_where()
        
        * new internal wrapper class: wrap_field_or_item.
     @ sql/sql_select.h
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        * test_if_order_by_key() has been moved to public scope.
        
        New functions:
        * test_if_cheaper_ordering()
        * const_field_in_where() and const_item_in_where()
        * simple_remove_const()
        * get_index_for_order()
     @ sql/sql_update.cc
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        mysql_update: an optimization has been added to skip
        double sorting with an ORDER BY clause when the select
        result ordering is already acceptable.
     @ sql/table.cc
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        New functions:
        * TABLE::update_const_key_parts()
        * is_simple_order()
     @ sql/table.h
        Bug #30584, bug #36569: UPDATE/DELETE ... WHERE ... ORDER BY...
                                always does a filesort even if not required
        
        New functions:
        * TABLE::update_const_key_parts()
        * is_simple_order()

    added:
      mysql-test/r/single_delete_update.result
      mysql-test/t/single_delete_update.test
    modified:
      mysql-test/r/update.result
      sql/opt_range.cc
      sql/opt_range.h
      sql/records.cc
      sql/records.h
      sql/sql_delete.cc
      sql/sql_select.cc
      sql/sql_select.h
      sql/sql_update.cc
      sql/table.cc
      sql/table.h
=== added file 'mysql-test/r/single_delete_update.result'
--- a/mysql-test/r/single_delete_update.result	1970-01-01 00:00:00 +0000
+++ b/mysql-test/r/single_delete_update.result	2010-03-31 12:20:59 +0000
@@ -0,0 +1,423 @@
+DROP TABLE IF EXISTS t1, t2;
+CREATE TABLE t1 (i INT);
+INSERT INTO t1 VALUES (10),(11),(12),(13),(14),(15),(16),(17),(18),(19),
+(20),(21),(22),(23),(24),(25);
+CREATE TABLE t2(a INT, i INT PRIMARY KEY);
+INSERT INTO t2 (i) SELECT i FROM t1;
+FLUSH STATUS;
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+a	i
+NULL	11
+NULL	12
+NULL	13
+NULL	14
+NULL	15
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+FLUSH STATUS;
+UPDATE t2 SET a = 10 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+a	i
+10	11
+10	12
+10	13
+10	14
+10	15
+NULL	16
+NULL	17
+NULL	18
+FLUSH STATUS;
+DELETE FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+a	i
+NULL	16
+NULL	17
+NULL	18
+DROP TABLE t2;
+#
+# index on field prefix:
+#
+CREATE TABLE t2(a INT, i CHAR(2), INDEX(i(1)));
+INSERT INTO t2 (i) SELECT i FROM t1;
+FLUSH STATUS;
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+a	i
+NULL	11
+NULL	12
+NULL	13
+NULL	14
+NULL	15
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	5
+Sort_scan	1
+FLUSH STATUS;
+UPDATE t2 SET a = 10 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	5
+Sort_scan	1
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+a	i
+10	11
+10	12
+10	13
+10	14
+10	15
+NULL	16
+NULL	17
+NULL	18
+FLUSH STATUS;
+DELETE FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	8
+Sort_scan	1
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+a	i
+NULL	16
+NULL	17
+NULL	18
+DROP TABLE t2;
+#
+# constant inside ORDER BY list, should use filesort
+# on a small table
+#
+CREATE TABLE t2(a INT, b INT, c INT, d INT, INDEX(a, b, c));
+INSERT INTO t2 (a, b, c) SELECT i, i, i FROM t1;
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+a	b	c	d
+10	10	10	NULL
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+## should be 5 (previous LIMIT)
+SELECT COUNT(*) FROM t2 WHERE b = 10 AND d = 10 ORDER BY a, c;
+COUNT(*)
+1
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+## should be 5 (previous LIMIT)
+SELECT 1 - COUNT(*) FROM t2 WHERE b = 10;
+1 - COUNT(*)
+1
+DROP TABLE t2;
+#
+# same test as above (constant inside ORDER BY list), but with
+# a larger table - should not use filesort
+#
+CREATE TABLE t2(a INT, b INT, c INT, d INT, INDEX(a, b, c));
+INSERT INTO t2 (a, b, c) SELECT i, i, i FROM t1;
+INSERT INTO t2 (a, b, c) SELECT t1.i, t1.i, t1.i FROM t1, t1 x1, t1 x2;
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+a	b	c	d
+10	10	10	NULL
+10	10	10	NULL
+10	10	10	NULL
+10	10	10	NULL
+10	10	10	NULL
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+## should be 5 (previous LIMIT)
+SELECT COUNT(*) FROM t2 WHERE b = 10 AND d = 10 ORDER BY a, c;
+COUNT(*)
+5
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+## should be 5 (previous LIMIT)
+SELECT 257 - COUNT(*) FROM t2 WHERE b = 10;
+257 - COUNT(*)
+5
+DROP TABLE t2;
+#
+# as above + partial index, should use filesort
+#
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), d CHAR(2), INDEX (a,b(1),c));
+INSERT INTO t2 SELECT i, i, i, i FROM t1;
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+a	b	c	d
+10	10	10	10
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+a	b	c	d
+10	10	10	10
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+a	b	c	d
+DROP TABLE t2;
+#
+# as above but index is without HA_READ_ORDER flag, should use filesort
+#
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), d CHAR(2), INDEX (a,b,c)) ENGINE=HEAP;
+INSERT INTO t2 SELECT i, i, i, i FROM t1;
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+a	b	c	d
+10	10	10	10
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+a	b	c	d
+10	10	10	10
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	1
+Sort_scan	1
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+a	b	c	d
+DROP TABLE t2;
+#
+# quick select is QUICK_INDEX_MERGE_SELECT, should use filesort
+#
+CREATE TABLE t2 (i INT, key1 INT, key2 INT, INDEX (key1), INDEX (key2));
+INSERT INTO t2 (key1, key2) SELECT i, i FROM t1;
+FLUSH STATUS;
+SELECT * FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+i	key1	key2
+NULL	10	10
+NULL	11	11
+NULL	12	12
+NULL	13	13
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	1
+Sort_rows	4
+Sort_scan	0
+FLUSH STATUS;
+UPDATE t2 SET i = 123 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	4
+Sort_scan	1
+SELECT * FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+i	key1	key2
+123	10	10
+123	11	11
+123	12	12
+123	13	13
+FLUSH STATUS;
+DELETE FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	4
+Sort_scan	1
+SELECT * FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+i	key1	key2
+DROP TABLE t2;
+#
+# mixed sorting direction, should use filesort
+#
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), INDEX (a, b));
+INSERT INTO t2 SELECT i, i, i FROM t1;
+FLUSH STATUS;
+SELECT * FROM t2 ORDER BY a, b DESC LIMIT 5;
+a	b	c
+10	10	10
+11	11	11
+12	12	12
+13	13	13
+14	14	14
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	5
+Sort_scan	1
+FLUSH STATUS;
+UPDATE t2 SET c = 10 ORDER BY a, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	5
+Sort_scan	1
+SELECT * FROM t2 WHERE c = 10 ORDER BY a, b DESC;
+a	b	c
+10	10	10
+11	11	10
+12	12	10
+13	13	10
+14	14	10
+FLUSH STATUS;
+DELETE FROM t2 ORDER BY a, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	16
+Sort_scan	1
+SELECT * FROM t2 ORDER BY a, b DESC;
+a	b	c
+15	15	15
+16	16	16
+17	17	17
+18	18	18
+19	19	19
+20	20	20
+21	21	21
+22	22	22
+23	23	23
+24	24	24
+25	25	25
+DROP TABLE t2;
+#
+# LIMIT with no WHERE and DESC direction, should not use filesort
+#
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), INDEX (a, b));
+INSERT INTO t2 (a, b) SELECT i, i FROM t1;
+INSERT INTO t2 (a, b) SELECT t1.i, t1.i FROM t1, t1 x1, t1 x2;
+FLUSH STATUS;
+SELECT * FROM t2 ORDER BY a, b LIMIT 5;
+a	b	c
+10	10	NULL
+10	10	NULL
+10	10	NULL
+10	10	NULL
+10	10	NULL
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+FLUSH STATUS;
+SELECT * FROM t2 ORDER BY a DESC, b DESC LIMIT 5;
+a	b	c
+25	25	NULL
+25	25	NULL
+25	25	NULL
+25	25	NULL
+25	25	NULL
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+FLUSH STATUS;
+UPDATE t2 SET c = 10 ORDER BY a DESC, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+SELECT * FROM t2 WHERE c = 10 ORDER BY a DESC, b DESC;
+a	b	c
+25	25	10
+25	25	10
+25	25	10
+25	25	10
+25	25	10
+FLUSH STATUS;
+DELETE FROM t2 ORDER BY a DESC, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+Variable_name	Value
+Sort_merge_passes	0
+Sort_range	0
+Sort_rows	0
+Sort_scan	0
+SELECT * FROM t2 WHERE c = 10 ORDER BY a DESC, b DESC;
+a	b	c
+DROP TABLE t1, t2;

=== modified file 'mysql-test/r/update.result'
--- a/mysql-test/r/update.result	2010-03-10 16:10:05 +0000
+++ b/mysql-test/r/update.result	2010-03-31 12:20:59 +0000
@@ -306,8 +306,8 @@ Handler_read_first	0
 Handler_read_key	0
 Handler_read_next	0
 Handler_read_prev	0
-Handler_read_rnd	1
-Handler_read_rnd_next	9
+Handler_read_rnd	0
+Handler_read_rnd_next	0
 alter table t1 disable keys;
 flush status;
 delete from t1 order by a limit 1;

=== added file 'mysql-test/t/single_delete_update.test'
--- a/mysql-test/t/single_delete_update.test	1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/single_delete_update.test	2010-03-31 12:20:59 +0000
@@ -0,0 +1,238 @@
+#
+# Single table specific update/delete tests (mysql_update and mysql_delete)
+#
+
+--disable_warnings
+DROP TABLE IF EXISTS t1, t2;
+--enable_warnings
+
+#
+# Bug #30584: delete with order by and limit clauses does not use
+#             limit efficiently
+#
+# Bug #36569: UPDATE ... WHERE ... ORDER BY... always does a filesort
+#             even if not required
+#
+
+CREATE TABLE t1 (i INT);
+INSERT INTO t1 VALUES (10),(11),(12),(13),(14),(15),(16),(17),(18),(19),
+                      (20),(21),(22),(23),(24),(25);
+
+CREATE TABLE t2(a INT, i INT PRIMARY KEY);
+INSERT INTO t2 (i) SELECT i FROM t1;
+
+FLUSH STATUS;
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+UPDATE t2 SET a = 10 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+
+FLUSH STATUS;
+DELETE FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+
+DROP TABLE t2;
+
+--echo #
+--echo # index on field prefix:
+--echo #
+
+CREATE TABLE t2(a INT, i CHAR(2), INDEX(i(1)));
+INSERT INTO t2 (i) SELECT i FROM t1;
+
+FLUSH STATUS;
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+UPDATE t2 SET a = 10 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+
+FLUSH STATUS;
+DELETE FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE i > 10 AND i <= 18 ORDER BY i;
+
+DROP TABLE t2;
+
+--echo #
+--echo # constant inside ORDER BY list, should use filesort
+--echo # on a small table
+--echo #
+
+CREATE TABLE t2(a INT, b INT, c INT, d INT, INDEX(a, b, c));
+INSERT INTO t2 (a, b, c) SELECT i, i, i FROM t1;
+
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+let $cnt = `SELECT COUNT(*) FROM t2 WHERE b = 10`;
+
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+--echo ## should be 5 (previous LIMIT)
+SELECT COUNT(*) FROM t2 WHERE b = 10 AND d = 10 ORDER BY a, c;
+
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+--echo ## should be 5 (previous LIMIT)
+eval SELECT $cnt - COUNT(*) FROM t2 WHERE b = 10;
+
+DROP TABLE t2;
+
+--echo #
+--echo # same test as above (constant inside ORDER BY list), but with
+--echo # a larger table - should not use filesort
+--echo #
+
+CREATE TABLE t2(a INT, b INT, c INT, d INT, INDEX(a, b, c));
+INSERT INTO t2 (a, b, c) SELECT i, i, i FROM t1;
+
+INSERT INTO t2 (a, b, c) SELECT t1.i, t1.i, t1.i FROM t1, t1 x1, t1 x2;
+
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+let $cnt = `SELECT COUNT(*) FROM t2 WHERE b = 10`;
+
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+--echo ## should be 5 (previous LIMIT)
+SELECT COUNT(*) FROM t2 WHERE b = 10 AND d = 10 ORDER BY a, c;
+
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+--echo ## should be 5 (previous LIMIT)
+eval SELECT $cnt - COUNT(*) FROM t2 WHERE b = 10;
+
+DROP TABLE t2;
+
+--echo #
+--echo # as above + partial index, should use filesort
+--echo #
+
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), d CHAR(2), INDEX (a,b(1),c));
+INSERT INTO t2 SELECT i, i, i, i FROM t1;
+
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+
+DROP TABLE t2;
+
+--echo #
+--echo # as above but index is without HA_READ_ORDER flag, should use filesort
+--echo #
+
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), d CHAR(2), INDEX (a,b,c)) ENGINE=HEAP;
+INSERT INTO t2 SELECT i, i, i, i FROM t1;
+
+FLUSH STATUS;
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+UPDATE t2 SET d = 10 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+
+FLUSH STATUS;
+DELETE FROM t2 WHERE b = 10 ORDER BY a, c LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE b = 10 ORDER BY a, c;
+
+DROP TABLE t2;
+
+--echo #
+--echo # quick select is QUICK_INDEX_MERGE_SELECT, should use filesort
+--echo #
+
+CREATE TABLE t2 (i INT, key1 INT, key2 INT, INDEX (key1), INDEX (key2));
+INSERT INTO t2 (key1, key2) SELECT i, i FROM t1;
+
+FLUSH STATUS;
+SELECT * FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+UPDATE t2 SET i = 123 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+
+FLUSH STATUS;
+DELETE FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE key1 < 13 or key2 < 14 ORDER BY key1;
+DROP TABLE t2;
+
+--echo #
+--echo # mixed sorting direction, should use filesort
+--echo #
+
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), INDEX (a, b));
+INSERT INTO t2 SELECT i, i, i FROM t1;
+
+FLUSH STATUS;
+SELECT * FROM t2 ORDER BY a, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+UPDATE t2 SET c = 10 ORDER BY a, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE c = 10 ORDER BY a, b DESC;
+
+FLUSH STATUS;
+DELETE FROM t2 ORDER BY a, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 ORDER BY a, b DESC;
+
+DROP TABLE t2;
+
+--echo #
+--echo # LIMIT with no WHERE and DESC direction, should not use filesort
+--echo #
+
+CREATE TABLE t2 (a CHAR(2), b CHAR(2), c CHAR(2), INDEX (a, b));
+INSERT INTO t2 (a, b) SELECT i, i FROM t1;
+INSERT INTO t2 (a, b) SELECT t1.i, t1.i FROM t1, t1 x1, t1 x2;
+
+FLUSH STATUS;
+SELECT * FROM t2 ORDER BY a, b LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+SELECT * FROM t2 ORDER BY a DESC, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+
+FLUSH STATUS;
+UPDATE t2 SET c = 10 ORDER BY a DESC, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE c = 10 ORDER BY a DESC, b DESC;
+
+FLUSH STATUS;
+DELETE FROM t2 ORDER BY a DESC, b DESC LIMIT 5;
+SHOW SESSION STATUS LIKE 'Sort%';
+SELECT * FROM t2 WHERE c = 10 ORDER BY a DESC, b DESC;
+
+DROP TABLE t1, t2;

=== modified file 'sql/opt_range.cc'
--- a/sql/opt_range.cc	2010-03-01 09:45:36 +0000
+++ b/sql/opt_range.cc	2010-03-31 12:20:59 +0000
@@ -1832,96 +1832,6 @@ SEL_ARG *SEL_ARG::clone_tree(RANGE_OPT_P
 
 
 /*
-  Find the best index to retrieve first N records in given order
-
-  SYNOPSIS
-    get_index_for_order()
-      table  Table to be accessed
-      order  Required ordering
-      limit  Number of records that will be retrieved
-
-  DESCRIPTION
-    Find the best index that allows to retrieve first #limit records in the 
-    given order cheaper then one would retrieve them using full table scan.
-
-  IMPLEMENTATION
-    Run through all table indexes and find the shortest index that allows
-    records to be retrieved in given order. We look for the shortest index
-    as we will have fewer index pages to read with it.
-
-    This function is used only by UPDATE/DELETE, so we take into account how
-    the UPDATE/DELETE code will work:
-     * index can only be scanned in forward direction
-     * HA_EXTRA_KEYREAD will not be used
-    Perhaps these assumptions could be relaxed.
-
-  RETURN
-    Number of the index that produces the required ordering in the cheapest way
-    MAX_KEY if no such index was found.
-*/
-
-uint get_index_for_order(TABLE *table, ORDER *order, ha_rows limit)
-{
-  uint idx;
-  uint match_key= MAX_KEY, match_key_len= MAX_KEY_LENGTH + 1;
-  ORDER *ord;
-  
-  for (ord= order; ord; ord= ord->next)
-    if (!ord->asc)
-      return MAX_KEY;
-
-  for (idx= 0; idx < table->s->keys; idx++)
-  {
-    if (!(table->keys_in_use_for_query.is_set(idx)))
-      continue;
-    KEY_PART_INFO *keyinfo= table->key_info[idx].key_part;
-    uint n_parts=  table->key_info[idx].key_parts;
-    uint partno= 0;
-    
-    /* 
-      The below check is sufficient considering we now have either BTREE 
-      indexes (records are returned in order for any index prefix) or HASH 
-      indexes (records are not returned in order for any index prefix).
-    */
-    if (!(table->file->index_flags(idx, 0, 1) & HA_READ_ORDER))
-      continue;
-    for (ord= order; ord && partno < n_parts; ord= ord->next, partno++)
-    {
-      Item *item= order->item[0];
-      if (!(item->type() == Item::FIELD_ITEM &&
-           ((Item_field*)item)->field->eq(keyinfo[partno].field)))
-        break;
-    }
-    
-    if (!ord && table->key_info[idx].key_length < match_key_len)
-    {
-      /* 
-        Ok, the ordering is compatible and this key is shorter then
-        previous match (we want shorter keys as we'll have to read fewer
-        index pages for the same number of records)
-      */
-      match_key= idx;
-      match_key_len= table->key_info[idx].key_length;
-    }
-  }
-
-  if (match_key != MAX_KEY)
-  {
-    /* 
-      Found an index that allows records to be retrieved in the requested 
-      order. Now we'll check if using the index is cheaper then doing a table
-      scan.
-    */
-    double full_scan_time= table->file->scan_time();
-    double index_scan_time= table->file->read_time(match_key, 1, limit);
-    if (index_scan_time > full_scan_time)
-      match_key= MAX_KEY;
-  }
-  return match_key;
-}
-
-
-/*
   Table rows retrieval plan. Range optimizer creates QUICK_SELECT_I-derived
   objects from table read plans.
 */
@@ -11690,6 +11600,58 @@ void QUICK_RANGE_SELECT::dbug_dump(int i
   /* purecov: end */    
 }
 
+
+/**
+  Create a compatible quick select with the result ordered in an opposite way
+
+  @return NULL or new quick select
+*/
+
+QUICK_SELECT_I *QUICK_RANGE_SELECT::make_reverse()
+{
+  DBUG_ASSERT(!reverse_sorted()); // for sure/child classes
+
+  QUICK_SELECT_DESC *new_quick= new QUICK_SELECT_DESC(this, used_key_parts);
+  if (!new_quick || new_quick->error)
+  {
+    delete new_quick;
+    return NULL;
+  }
+  return new_quick;
+}
+
+
+/**
+  Test if the result set sorting order is compatible with ORDER BY
+
+  @param order            Linked list of ORDER BY arguments
+
+  @return TRUE if ORDER BY may be ignored
+
+  @note
+    This function is for use in single-table mysql_update and
+    mysql_delete function.
+*/
+
+bool QUICK_SELECT_I::is_ordered_by(ORDER *order)
+{
+  if (!order)
+    return TRUE;
+
+  if (index == MAX_KEY ||
+      !(head->file->index_flags(index, 0, 1) & HA_READ_ORDER))
+    return FALSE;
+
+  uint dummy;
+  switch (test_if_order_by_key(order, head, index, &dummy)) {
+  case -1: return reverse_sorted();
+  case  1: return !reverse_sorted();
+  }
+
+  return FALSE;
+}
+
+
 void QUICK_INDEX_MERGE_SELECT::dbug_dump(int indent, bool verbose)
 {
   List_iterator_fast<QUICK_RANGE_SELECT> it(quick_selects);
@@ -11778,7 +11740,7 @@ void QUICK_GROUP_MIN_MAX_SELECT::dbug_du
 }
 
 
-#endif /* NOT_USED */
+#endif /* !DBUG_OFF */
 
 /*****************************************************************************
 ** Instantiate templates 

=== modified file 'sql/opt_range.h'
--- a/sql/opt_range.h	2009-11-24 13:54:59 +0000
+++ b/sql/opt_range.h	2010-03-31 12:20:59 +0000
@@ -259,6 +259,10 @@ public:
   */
   virtual void dbug_dump(int indent, bool verbose)= 0;
 #endif
+
+  virtual QUICK_SELECT_I *make_reverse() { return NULL; }
+
+  bool is_ordered_by(ORDER *order);
 };
 
 
@@ -344,6 +348,7 @@ public:
 #ifndef DBUG_OFF
   void dbug_dump(int indent, bool verbose);
 #endif
+  QUICK_SELECT_I *make_reverse();
 private:
   /* Default copy ctor used by QUICK_SELECT_DESC */
 };
@@ -690,6 +695,7 @@ public:
   int get_next();
   bool reverse_sorted() { return 1; }
   int get_type() { return QS_TYPE_RANGE_DESC; }
+  QUICK_SELECT_I *make_reverse() { return NULL; } // is already reverse sorted
 private:
   bool range_reads_after_key(QUICK_RANGE *range);
   int reset(void) { rev_it.rewind(); return QUICK_RANGE_SELECT::reset(); }
@@ -715,6 +721,11 @@ class SQL_SELECT :public Sql_alloc {
   SQL_SELECT();
   ~SQL_SELECT();
   void cleanup();
+  QUICK_SELECT_I *set_quick(QUICK_SELECT_I *new_quick)
+  {
+    delete quick;
+    return quick= new_quick;
+  }
   bool check_quick(THD *thd, bool force_quick_range, ha_rows limit)
   {
     key_map tmp;
@@ -741,7 +752,6 @@ public:
 QUICK_RANGE_SELECT *get_quick_select_for_ref(THD *thd, TABLE *table,
                                              struct st_table_ref *ref,
                                              ha_rows records);
-uint get_index_for_order(TABLE *table, ORDER *order, ha_rows limit);
 
 #ifdef WITH_PARTITION_STORAGE_ENGINE
 bool prune_partitions(THD *thd, TABLE *table, Item *pprune_cond);

=== modified file 'sql/records.cc'
--- a/sql/records.cc	2010-03-01 09:45:36 +0000
+++ b/sql/records.cc	2010-03-31 12:20:59 +0000
@@ -38,7 +38,9 @@ static int rr_from_cache(READ_RECORD *in
 static int init_rr_cache(THD *thd, READ_RECORD *info);
 static int rr_cmp(uchar *a,uchar *b);
 static int rr_index_first(READ_RECORD *info);
+static int rr_index_last(READ_RECORD *info);
 static int rr_index(READ_RECORD *info);
+static int rr_index_desc(READ_RECORD *info);
 
 
 /**
@@ -55,10 +57,11 @@ static int rr_index(READ_RECORD *info);
   @param print_error  If true, call table->file->print_error() if an error
                       occurs (except for end-of-records error)
   @param idx          index to scan
+  @param reverse      Scan in the reverse direction
 */
 
 void init_read_record_idx(READ_RECORD *info, THD *thd, TABLE *table,
-                          bool print_error, uint idx)
+                          bool print_error, uint idx, bool reverse)
 {
   empty_record(table);
   bzero((char*) info,sizeof(*info));
@@ -73,7 +76,7 @@ void init_read_record_idx(READ_RECORD *i
   if (!table->file->inited)
     table->file->ha_index_init(idx, 1);
   /* read_record will be changed to rr_index in rr_index_first */
-  info->read_record= rr_index_first;
+  info->read_record= reverse ? rr_index_last : rr_index_first;
 }
 
 
@@ -361,6 +364,29 @@ static int rr_index_first(READ_RECORD *i
 
 
 /**
+  Reads last row in an index scan.
+
+  @param info  	Scan info
+
+  @retval
+    0   Ok
+  @retval
+    -1   End of records
+  @retval
+    1   Error
+*/
+
+static int rr_index_last(READ_RECORD *info)
+{
+  int tmp= info->file->index_last(info->record);
+  info->read_record= rr_index_desc;
+  if (tmp)
+    tmp= rr_handle_error(info, tmp);
+  return tmp;
+}
+
+
+/**
   Reads index sequentially after first row.
 
   Read the next index record (in forward direction) and translate return
@@ -385,6 +411,31 @@ static int rr_index(READ_RECORD *info)
 }
 
 
+/**
+  Reads index sequentially from the last row to the first.
+
+  Read the prev index record (in backward direction) and translate return
+  value.
+
+  @param info  Scan info
+
+  @retval
+    0   Ok
+  @retval
+    -1   End of records
+  @retval
+    1   Error
+*/
+
+static int rr_index_desc(READ_RECORD *info)
+{
+  int tmp= info->file->index_prev(info->record);
+  if (tmp)
+    tmp= rr_handle_error(info, tmp);
+  return tmp;
+}
+
+
 int rr_sequential(READ_RECORD *info)
 {
   int tmp;

=== modified file 'sql/records.h'
--- a/sql/records.h	2009-11-06 16:13:33 +0000
+++ b/sql/records.h	2010-03-31 12:20:59 +0000
@@ -71,7 +71,7 @@ void init_read_record(READ_RECORD *info,
 		      SQL_SELECT *select, int use_record_cache,
                       bool print_errors, bool disable_rr_cache);
 void init_read_record_idx(READ_RECORD *info, THD *thd, TABLE *table,
-                          bool print_error, uint idx);
+                          bool print_error, uint idx, bool reverse);
 void end_read_record(READ_RECORD *info);
 
 void rr_unlock_row(st_join_table *tab);

=== modified file 'sql/sql_delete.cc'
--- a/sql/sql_delete.cc	2010-03-25 07:26:12 +0000
+++ b/sql/sql_delete.cc	2010-03-31 12:20:59 +0000
@@ -34,7 +34,7 @@
 */
 
 bool mysql_delete(THD *thd, TABLE_LIST *table_list, COND *conds,
-                  SQL_LIST *order, ha_rows limit, ulonglong options,
+                  SQL_LIST *order_list, ha_rows limit, ulonglong options,
                   bool reset_auto_increment)
 {
   bool          will_batch;
@@ -47,6 +47,9 @@ bool mysql_delete(THD *thd, TABLE_LIST *
   bool          const_cond_result;
   ha_rows	deleted= 0;
   bool          triggers_applicable;
+  bool          reverse= FALSE;
+  ORDER *order= (ORDER *) ((order_list && order_list->elements) ?
+                           order_list->first : NULL);
   uint usable_index= MAX_KEY;
   SELECT_LEX   *select_lex= &thd->lex->select_lex;
   THD::killed_state killed_status= THD::NOT_KILLED;
@@ -73,7 +76,7 @@ bool mysql_delete(THD *thd, TABLE_LIST *
     DBUG_RETURN(TRUE);
 
   /* check ORDER BY even if it can be ignored */
-  if (order && order->elements)
+  if (order)
   {
     TABLE_LIST   tables;
     List<Item>   fields;
@@ -83,9 +86,9 @@ bool mysql_delete(THD *thd, TABLE_LIST *
     tables.table = table;
     tables.alias = table_list->alias;
 
-      if (select_lex->setup_ref_array(thd, order->elements) ||
+      if (select_lex->setup_ref_array(thd, order_list->elements) ||
 	  setup_order(thd, select_lex->ref_pointer_array, &tables,
-                    fields, all_fields, (ORDER*) order->first))
+                    fields, all_fields, order))
     {
       delete select;
       free_underlaid_joins(thd, &thd->lex->select_lex);
@@ -183,6 +186,7 @@ bool mysql_delete(THD *thd, TABLE_LIST *
 
   table->covering_keys.clear_all();
   table->quick_keys.clear_all();		// Can't use 'only index'
+
   select=make_select(table, 0, 0, conds, 0, &error);
   if (error)
     DBUG_RETURN(TRUE);
@@ -224,21 +228,25 @@ bool mysql_delete(THD *thd, TABLE_LIST *
   if (options & OPTION_QUICK)
     (void) table->file->extra(HA_EXTRA_QUICK);
 
-  if (order && order->elements)
+  if (order)
   {
     uint         length= 0;
     SORT_FIELD  *sortorder;
+    bool         need_sort;
     ha_rows examined_rows;
     
-    if ((!select || table->quick_keys.is_clear_all()) && limit != HA_POS_ERROR)
-      usable_index= get_index_for_order(table, (ORDER*)(order->first), limit);
+    table->update_const_key_parts(conds);
+    order= simple_remove_const(order, conds);
 
-    if (usable_index == MAX_KEY)
+    usable_index= get_index_for_order(order, table, select, limit,
+                                      &need_sort, &reverse);
+    if (need_sort)
     {
+      DBUG_ASSERT(usable_index == MAX_KEY);
       table->sort.io_cache= (IO_CACHE *) my_malloc(sizeof(IO_CACHE),
                                                    MYF(MY_FAE | MY_ZEROFILL));
     
-      if (!(sortorder= make_unireg_sortorder((ORDER*) order->first,
+      if (!(sortorder= make_unireg_sortorder(order,
                                              &length, NULL)) ||
 	  (table->sort.found_records = filesort(thd, table, sortorder, length,
                                                 select, HA_POS_ERROR, 1,
@@ -266,10 +274,10 @@ bool mysql_delete(THD *thd, TABLE_LIST *
     free_underlaid_joins(thd, select_lex);
     DBUG_RETURN(TRUE);
   }
-  if (usable_index==MAX_KEY)
+  if (usable_index==MAX_KEY || (select && select->quick))
     init_read_record(&info, thd, table, select, 1, 1, FALSE);
   else
-    init_read_record_idx(&info, thd, table, 1, usable_index);
+    init_read_record_idx(&info, thd, table, 1, usable_index, reverse);
 
   init_ftfuncs(thd, select_lex, 1);
   thd_proc_info(thd, "updating");

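The new DELETE path above hinges on the test_if_order_by_key() contract used by get_index_for_order(): +1 if an index prefix matches the ORDER, -1 if it matches in reverse (triggering the reverse scan), 0 otherwise. A toy sketch of that contract, assuming an all-ascending index (hypothetical names, not the MySQL implementation):

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Toy model of the test_if_order_by_key() contract: match an ORDER BY list
// of (column, is_ascending) pairs against an index's column prefix and
// return +1 (index serves the order), -1 (serves it in reverse), or
// 0 (no match, filesort needed).
inline int toy_order_by_key(
    const std::vector<std::pair<std::string, bool>> &order,
    const std::vector<std::string> &key_cols)
{
  int direction = 0;
  for (std::size_t i = 0; i < order.size(); i++)
  {
    if (i >= key_cols.size() || order[i].first != key_cols[i])
      return 0;                          // not a prefix of the index
    int dir = order[i].second ? 1 : -1;  // ASC follows the index, DESC reverses
    if (direction == 0)
      direction = dir;                   // first column fixes the direction
    else if (direction != dir)
      return 0;                          // mixed ASC/DESC needs a filesort
  }
  return direction;
}
```

For an index on (a, b): ORDER BY a, b gives +1, ORDER BY a DESC, b DESC gives -1, and ORDER BY a, b DESC or ORDER BY b gives 0.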
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc	2010-03-25 07:26:12 +0000
+++ b/sql/sql_select.cc	2010-03-31 12:20:59 +0000
@@ -118,7 +118,6 @@ static uint build_bitmap_for_nested_join
 static COND *optimize_cond(JOIN *join, COND *conds,
                            List<TABLE_LIST> *join_list,
 			   Item::cond_result *cond_value);
-static bool const_expression_in_where(COND *conds,Item *item, Item **comp_item);
 static bool open_tmp_table(TABLE *table);
 static bool create_myisam_tmp_table(TABLE *,TMP_TABLE_PARAM *, ulonglong, my_bool);
 static int do_select(JOIN *join,List<Item> *fields,TABLE *tmp_table,
@@ -171,6 +170,11 @@ static COND *make_cond_for_table(COND *c
 				 table_map used_table);
 static Item* part_of_refkey(TABLE *form,Field *field);
 uint find_shortest_key(TABLE *table, const key_map *usable_keys);
+static bool test_if_cheaper_ordering(const JOIN_TAB *tab, TABLE *table,
+                                     ORDER *order, key_map usable_keys,
+                                     int *key, int *key_direction,
+                                     uint *used_key_parts,
+                                     ha_rows *select_limit_arg);
 static bool test_if_skip_sort_order(JOIN_TAB *tab,ORDER *order,
 				    ha_rows select_limit, bool no_changes,
                                     key_map *map);
@@ -7326,8 +7330,7 @@ remove_const(JOIN *join,ORDER *first_ord
 	*simple_order=0;
       else
       {
-	Item *comp_item=0;
-	if (cond && const_expression_in_where(cond,order->item[0], &comp_item))
+        if (cond && const_item_in_where(cond, order->item[0]))
 	{
 	  DBUG_PRINT("info",("removing: %s", order->item[0]->full_name()));
 	  continue;
@@ -7357,6 +7360,42 @@ remove_const(JOIN *join,ORDER *first_ord
 }
 
 
+/**
+  Filter out ORDER items that are equal to constants in the WHERE clause
+
+  @param order            Linked list of ORDER BY arguments
+  @param where            WHERE clause expression
+
+  @return the filtered ORDER list, or NULL if all items were eliminated
+
+  @note
+    This function overwrites the input order list.
+*/
+
+ORDER *simple_remove_const(ORDER *order, COND *where)
+{
+  if (!order || !where)
+    return order;
+
+  ORDER *first= NULL, *prev= NULL;
+  for (; order; order= order->next)
+  {
+    DBUG_ASSERT(!order->item[0]->with_sum_func); // should never happen
+    if (!const_item_in_where(where, order->item[0]))
+    {
+      if (!first)
+        first= order;
+      if (prev)
+        prev->next= order;
+      prev= order;
+    }
+  }
+  if (prev)
+    prev->next= NULL;
+  return first;
+}
+
+
 static int
 return_zero_rows(JOIN *join, select_result *result,TABLE_LIST *tables,
 		 List<Item> &fields, bool send_row, ulonglong select_options,
@@ -9510,12 +9549,38 @@ test_if_equality_guarantees_uniqueness(I
         l->collation.collation == r->collation.collation)));
 }
 
-/**
-  Return TRUE if the item is a const value in all the WHERE clause.
+/*
+  Generic wrapper to pass Field* or Item* to const_expression_in_where().
 */
+class wrap_field_or_item {
+  Field *field;
+  Item  *item;
+public:
+  explicit wrap_field_or_item(Field *field_arg) : field(field_arg), item(NULL)
+  {}
+  explicit wrap_field_or_item(Item *item_arg) : field(NULL), item(item_arg)
+  {}
+  bool eq(Item *item_arg) const
+  {
+    return item ? item_arg->eq(item, 1)
+                : (item_arg->type() == Item::FIELD_ITEM &&
+                   ((Item_field *)item_arg)->field->eq(field));
+  }
+};
 
+
+/**
+  Test if a field or an item is equal to a constant value in the WHERE clause
+
+  @param        cond            WHERE clause expression
+  @param        comp            Item or Field to find in the WHERE expression
+  @param[out]   const_item      intermediate arg, set to the constant found
+
+  @return TRUE if the field or item is a constant value in WHERE
+*/
 static bool
-const_expression_in_where(COND *cond, Item *comp_item, Item **const_item)
+const_expression_in_where(COND *cond, const wrap_field_or_item &comp,
+                          Item **const_item)
 {
   if (cond->type() == Item::COND_ITEM)
   {
@@ -9525,7 +9590,7 @@ const_expression_in_where(COND *cond, It
     Item *item;
     while ((item=li++))
     {
-      bool res=const_expression_in_where(item, comp_item, const_item);
+      bool res=const_expression_in_where(item, comp, const_item);
       if (res)					// Is a const value
       {
 	if (and_level)
@@ -9544,7 +9609,7 @@ const_expression_in_where(COND *cond, It
       return 0;
     Item *left_item=	((Item_func*) cond)->arguments()[0];
     Item *right_item= ((Item_func*) cond)->arguments()[1];
-    if (left_item->eq(comp_item,1))
+    if (comp.eq(left_item))
     {
       if (test_if_equality_guarantees_uniqueness (left_item, right_item))
       {
@@ -9554,7 +9619,7 @@ const_expression_in_where(COND *cond, It
 	return 1;
       }
     }
-    else if (right_item->eq(comp_item,1))
+    else if (comp.eq(right_item))
     {
       if (test_if_equality_guarantees_uniqueness (right_item, left_item))
       {
@@ -9568,6 +9633,37 @@ const_expression_in_where(COND *cond, It
   return 0;
 }
 
+
+/**
+  Test if an item is a constant value in the WHERE clause expression
+
+  @param        where           WHERE clause expression
+  @param        item            Item to find in WHERE expression
+
+  @return TRUE if the item is a constant value in WHERE
+*/
+bool const_item_in_where(COND *where, Item *item)
+{
+  Item *dummy= NULL;
+  return const_expression_in_where(where, wrap_field_or_item(item), &dummy);
+}
+
+
+/**
+  Test if a field is a constant value in the WHERE clause expression
+
+  @param        where           WHERE clause expression
+  @param        field           Field to find in WHERE expression
+
+  @return TRUE if the field is a constant value in WHERE
+*/
+bool const_field_in_where(COND *where, Field *field)
+{
+  Item *dummy= NULL;
+  return const_expression_in_where(where, wrap_field_or_item(field), &dummy);
+}
+
+
 /****************************************************************************
   Create internal temporary table
 ****************************************************************************/
@@ -13039,8 +13135,8 @@ part_of_refkey(TABLE *table,Field *field
     -1   Reverse key can be used
 */
 
-static int test_if_order_by_key(ORDER *order, TABLE *table, uint idx,
-				uint *used_key_parts)
+int test_if_order_by_key(ORDER *order, TABLE *table, uint idx,
+                         uint *used_key_parts)
 {
   KEY_PART_INFO *key_part,*key_part_end;
   key_part=table->key_info[idx].key_part;
@@ -13523,183 +13619,16 @@ test_if_skip_sort_order(JOIN_TAB *tab,OR
       goto check_reverse_order;
   }
   {
-    /*
-      Check whether there is an index compatible with the given order
-      usage of which is cheaper than usage of the ref_key index (ref_key>=0)
-      or a table scan.
-      It may be the case if ORDER/GROUP BY is used with LIMIT.
-    */
-    uint nr;
-    key_map keys;
     uint best_key_parts= 0;
     int best_key_direction= 0;
-    ha_rows best_records= 0;
-    double read_time;
-    int best_key= -1;
-    bool is_best_covering= FALSE;
-    double fanout= 1;
+    int best_key= ref_key;
     JOIN *join= tab->join;
-    uint tablenr= tab - join->join_tab;
     ha_rows table_records= table->file->stats.records;
-    bool group= join->group && order == join->group_list;
-    ha_rows ref_key_quick_rows= HA_POS_ERROR;
-
-    /*
-      If not used with LIMIT, only use keys if the whole query can be
-      resolved with a key;  This is because filesort() is usually faster than
-      retrieving all rows through an index.
-    */
-    if (select_limit >= table_records)
-    {
-      keys= *table->file->keys_to_use_for_scanning();
-      keys.merge(table->covering_keys);
-
-      /*
-	We are adding here also the index specified in FORCE INDEX clause, 
-	if any.
-        This is to allow users to use index in ORDER BY.
-      */
-      if (table->force_index) 
-	keys.merge(group ? table->keys_in_use_for_group_by :
-                           table->keys_in_use_for_order_by);
-      keys.intersect(usable_keys);
-    }
-    else
-      keys= usable_keys;
-
-    if (ref_key >= 0 && table->covering_keys.is_set(ref_key))
-      ref_key_quick_rows= table->quick_rows[ref_key];
-
-    read_time= join->best_positions[tablenr].read_time;
-    for (uint i= tablenr+1; i < join->tables; i++)
-      fanout*= join->best_positions[i].records_read; // fanout is always >= 1
 
-    for (nr=0; nr < table->s->keys ; nr++)
-    {
-      int direction;
-
-      if (keys.is_set(nr) &&
-          (direction= test_if_order_by_key(order, table, nr, &used_key_parts)))
-      {
-        /*
-          At this point we are sure that ref_key is a non-ordering
-          key (where "ordering key" is a key that will return rows
-          in the order required by ORDER BY).
-        */
-        DBUG_ASSERT (ref_key != (int) nr);
-
-        bool is_covering= table->covering_keys.is_set(nr) ||
-                          (nr == table->s->primary_key &&
-                          table->file->primary_key_is_clustered());
-	
-        /* 
-          Don't use an index scan with ORDER BY without limit.
-          For GROUP BY without limit always use index scan
-          if there is a suitable index. 
-          Why we hold to this asymmetry hardly can be explained
-          rationally. It's easy to demonstrate that using
-          temporary table + filesort could be cheaper for grouping
-          queries too.
-	*/ 
-        if (is_covering ||
-            select_limit != HA_POS_ERROR || 
-            (ref_key < 0 && (group || table->force_index)))
-        { 
-          double rec_per_key;
-          double index_scan_time;
-          KEY *keyinfo= tab->table->key_info+nr;
-          if (select_limit == HA_POS_ERROR)
-            select_limit= table_records;
-          if (group)
-          {
-            /* 
-              Used_key_parts can be larger than keyinfo->key_parts
-              when using a secondary index clustered with a primary 
-              key (e.g. as in Innodb). 
-              See Bug #28591 for details.
-            */  
-            rec_per_key= used_key_parts &&
-                         used_key_parts <= keyinfo->key_parts ?
-                         keyinfo->rec_per_key[used_key_parts-1] : 1;
-            set_if_bigger(rec_per_key, 1);
-            /*
-              With a grouping query each group containing on average
-              rec_per_key records produces only one row that will
-              be included into the result set.
-	    */  
-            if (select_limit > table_records/rec_per_key)
-                select_limit= table_records;
-            else
-              select_limit= (ha_rows) (select_limit*rec_per_key);
-          }
-          /* 
-            If tab=tk is not the last joined table tn then to get first
-            L records from the result set we can expect to retrieve
-            only L/fanout(tk,tn) where fanout(tk,tn) says how many
-            rows in the record set on average will match each row tk.
-            Usually our estimates for fanouts are too pessimistic.
-            So the estimate for L/fanout(tk,tn) will be too optimistic
-            and as result we'll choose an index scan when using ref/range
-            access + filesort will be cheaper.
-	  */
-          select_limit= (ha_rows) (select_limit < fanout ?
-                                   1 : select_limit/fanout);
-          /*
-            We assume that each of the tested indexes is not correlated
-            with ref_key. Thus, to select first N records we have to scan
-            N/selectivity(ref_key) index entries. 
-            selectivity(ref_key) = #scanned_records/#table_records =
-            table->quick_condition_rows/table_records.
-            In any case we can't select more than #table_records.
-            N/(table->quick_condition_rows/table_records) > table_records 
-            <=> N > table->quick_condition_rows.
-          */ 
-          if (select_limit > table->quick_condition_rows)
-            select_limit= table_records;
-          else
-            select_limit= (ha_rows) (select_limit *
-                                     (double) table_records /
-                                      table->quick_condition_rows);
-          rec_per_key= keyinfo->rec_per_key[keyinfo->key_parts-1];
-          set_if_bigger(rec_per_key, 1);
-          /*
-            Here we take into account the fact that rows are
-            accessed in sequences rec_per_key records in each.
-            Rows in such a sequence are supposed to be ordered
-            by rowid/primary key. When reading the data
-            in a sequence we'll touch not more pages than the
-            table file contains.
-            TODO. Use the formula for a disk sweep sequential access
-            to calculate the cost of accessing data rows for one 
-            index entry.
-	  */
-          index_scan_time= select_limit/rec_per_key *
-	                   min(rec_per_key, table->file->scan_time());
-          if ((ref_key < 0 && is_covering) || 
-              (ref_key < 0 && (group || table->force_index)) ||
-              index_scan_time < read_time)
-          {
-            ha_rows quick_records= table_records;
-            if ((is_best_covering && !is_covering) ||
-                (is_covering && ref_key_quick_rows < select_limit))
-              continue;
-            if (table->quick_keys.is_set(nr))
-              quick_records= table->quick_rows[nr];
-            if (best_key < 0 ||
-                (select_limit <= min(quick_records,best_records) ?
-                 keyinfo->key_parts < best_key_parts :
-                 quick_records < best_records))
-            {
-              best_key= nr;
-              best_key_parts= keyinfo->key_parts;
-              best_records= quick_records;
-              is_best_covering= is_covering;
-              best_key_direction= direction; 
-            }
-          }   
-	}      
-      }
-    }
+    if (!test_if_cheaper_ordering(tab, table, order, usable_keys,
+                                  &best_key, &best_key_direction,
+                                  &best_key_parts, &select_limit))
+      best_key= -1;
 
     /*
       filesort() and join cache are usually faster than reading in 
@@ -17357,5 +17286,354 @@ void JOIN::cache_const_exprs()
 
 
 /**
+  Find a cheaper access key than a given @a key
+
+  @param          tab              NULL or JOIN_TAB of the accessed table
+  @param          table            Table if tab == NULL or tab->table
+  @param          order            Linked list of ORDER BY arguments
+  @param          usable_keys      Key map to find a cheaper key in
+  @param [in,out] key
+    - in:
+        * 0 <= key < MAX_KEY       - key number (hint) to start the search
+        * -1                       - no input key number
+    - out:                         key number if the function returns TRUE,
+                                   otherwise undefined
+  @param [out]    key_direction    Return -1 (reverse) or +1 if the function
+                                   returns TRUE, otherwise undefined
+  @param [out]    used_key_parts   Return number of the @a key prefix columns
+  @param [in,out] select_limit_arg Input and return adjusted LIMIT
+
+  @note
+    Former part of the huge test_if_skip_sort_order() - extracted out to
+    share with single table UPDATE/DELETE optimization procedures.
+
+  @note
+    This function takes into account table->quick_condition_rows statistic
+    (that is calculated by the make_join_statistics function).
+    However, single table procedures such as mysql_update() and mysql_delete()
+    never call make_join_statistics(), so they have to update it manually
+    (see get_index_for_order()).
+*/
+
+static bool
+test_if_cheaper_ordering(const JOIN_TAB *tab, TABLE *table, ORDER *order,
+                         key_map usable_keys, int *key, int *key_direction,
+                         uint *used_key_parts, ha_rows *select_limit_arg)
+{
+  DBUG_ENTER("test_if_cheaper_ordering");
+  int ref_key= *key;
+  ha_rows select_limit= *select_limit_arg;
+  ha_rows best_select_limit= HA_POS_ERROR;
+  JOIN *join= tab ? tab->join : NULL;
+  { // old formatting as in test_if_skip_sort_order()
+    /*
+      Check whether there is an index compatible with the given order
+      usage of which is cheaper than usage of the ref_key index (ref_key>=0)
+      or a table scan.
+      It may be the case if ORDER/GROUP BY is used with LIMIT.
+    */
+    uint nr;
+    key_map keys;
+    uint best_key_parts= 0;
+    int best_key_direction= 0;
+    ha_rows best_records= 0;
+    double read_time;
+    int best_key= -1;
+    bool is_best_covering= FALSE;
+    double fanout= 1;
+    ha_rows table_records= table->file->stats.records;
+    bool group= join && join->group && order == join->group_list;
+    ha_rows ref_key_quick_rows= HA_POS_ERROR;
+
+    /*
+      If not used with LIMIT, only use keys if the whole query can be
+      resolved with a key;  This is because filesort() is usually faster than
+      retrieving all rows through an index.
+    */
+    if (select_limit >= table_records)
+    {
+      keys= *table->file->keys_to_use_for_scanning();
+      keys.merge(table->covering_keys);
+
+      /*
+	We are adding here also the index specified in FORCE INDEX clause, 
+	if any.
+        This is to allow users to use index in ORDER BY.
+      */
+      if (table->force_index) 
+	keys.merge(group ? table->keys_in_use_for_group_by :
+                           table->keys_in_use_for_order_by);
+      keys.intersect(usable_keys);
+    }
+    else
+      keys= usable_keys;
+
+    if (ref_key >= 0 && table->covering_keys.is_set(ref_key))
+      ref_key_quick_rows= table->quick_rows[ref_key];
+
+    if (join)
+    {
+      uint tablenr= tab - join->join_tab;
+      read_time= join->best_positions[tablenr].read_time;
+      for (uint i= tablenr+1; i < join->tables; i++)
+        fanout*= join->best_positions[i].records_read; // fanout is always >= 1
+    }
+    else
+      read_time= table->file->scan_time();
+
+    for (nr=0; nr < table->s->keys ; nr++)
+    {
+      int direction;
+
+      if (keys.is_set(nr) &&
+          (direction= test_if_order_by_key(order, table, nr, used_key_parts)))
+      {
+        /*
+          At this point we are sure that ref_key is a non-ordering
+          key (where "ordering key" is a key that will return rows
+          in the order required by ORDER BY).
+        */
+        DBUG_ASSERT (ref_key != (int) nr);
+
+        bool is_covering= table->covering_keys.is_set(nr) ||
+                          (nr == table->s->primary_key &&
+                          table->file->primary_key_is_clustered());
+	
+        /* 
+          Don't use an index scan with ORDER BY without limit.
+          For GROUP BY without limit always use index scan
+          if there is a suitable index. 
+          Why we hold to this asymmetry hardly can be explained
+          rationally. It's easy to demonstrate that using
+          temporary table + filesort could be cheaper for grouping
+          queries too.
+	*/ 
+        if (is_covering ||
+            select_limit != HA_POS_ERROR || 
+            (ref_key < 0 && (group || table->force_index)))
+        { 
+          double rec_per_key;
+          double index_scan_time;
+          KEY *keyinfo= table->key_info+nr;
+          if (select_limit == HA_POS_ERROR)
+            select_limit= table_records;
+          if (group)
+          {
+            /* 
+              Used_key_parts can be larger than keyinfo->key_parts
+              when using a secondary index clustered with a primary 
+              key (e.g. as in Innodb). 
+              See Bug #28591 for details.
+            */  
+            rec_per_key= *used_key_parts &&
+                         *used_key_parts <= keyinfo->key_parts ?
+                         keyinfo->rec_per_key[*used_key_parts-1] : 1;
+            set_if_bigger(rec_per_key, 1);
+            /*
+              With a grouping query each group containing on average
+              rec_per_key records produces only one row that will
+              be included into the result set.
+	    */  
+            if (select_limit > table_records/rec_per_key)
+                select_limit= table_records;
+            else
+              select_limit= (ha_rows) (select_limit*rec_per_key);
+          }
+          /* 
+            If tab=tk is not the last joined table tn then to get first
+            L records from the result set we can expect to retrieve
+            only L/fanout(tk,tn) where fanout(tk,tn) says how many
+            rows in the record set on average will match each row tk.
+            Usually our estimates for fanouts are too pessimistic.
+            So the estimate for L/fanout(tk,tn) will be too optimistic
+            and as result we'll choose an index scan when using ref/range
+            access + filesort will be cheaper.
+	  */
+          select_limit= (ha_rows) (select_limit < fanout ?
+                                   1 : select_limit/fanout);
+          /*
+            We assume that each of the tested indexes is not correlated
+            with ref_key. Thus, to select first N records we have to scan
+            N/selectivity(ref_key) index entries. 
+            selectivity(ref_key) = #scanned_records/#table_records =
+            table->quick_condition_rows/table_records.
+            In any case we can't select more than #table_records.
+            N/(table->quick_condition_rows/table_records) > table_records 
+            <=> N > table->quick_condition_rows.
+          */ 
+          if (select_limit > table->quick_condition_rows)
+            select_limit= table_records;
+          else
+            select_limit= (ha_rows) (select_limit *
+                                     (double) table_records /
+                                      table->quick_condition_rows);
+          rec_per_key= keyinfo->rec_per_key[keyinfo->key_parts-1];
+          set_if_bigger(rec_per_key, 1);
+          /*
+            Here we take into account the fact that rows are
+            accessed in sequences rec_per_key records in each.
+            Rows in such a sequence are supposed to be ordered
+            by rowid/primary key. When reading the data
+            in a sequence we'll touch not more pages than the
+            table file contains.
+            TODO. Use the formula for a disk sweep sequential access
+            to calculate the cost of accessing data rows for one 
+            index entry.
+	  */
+          index_scan_time= select_limit/rec_per_key *
+	                   min(rec_per_key, table->file->scan_time());
+          if ((ref_key < 0 && is_covering) || 
+              (ref_key < 0 && (group || table->force_index)) ||
+              index_scan_time < read_time)
+          {
+            ha_rows quick_records= table_records;
+            if ((is_best_covering && !is_covering) ||
+                (is_covering && ref_key_quick_rows < select_limit))
+              continue;
+            if (table->quick_keys.is_set(nr))
+              quick_records= table->quick_rows[nr];
+            if (best_key < 0 ||
+                (select_limit <= min(quick_records,best_records) ?
+                 keyinfo->key_parts < best_key_parts :
+                 quick_records < best_records))
+            {
+              best_key= nr;
+              best_key_parts= keyinfo->key_parts;
+              best_records= quick_records;
+              is_best_covering= is_covering;
+              best_key_direction= direction; 
+              best_select_limit= select_limit;
+            }
+          }   
+	}      
+      }
+    }
+
+    if (best_key < 0 || best_key == ref_key)
+      DBUG_RETURN(FALSE);
+    
+    *key= best_key;
+    *key_direction= best_key_direction;
+    *used_key_parts= best_key_parts;
+    *select_limit_arg= best_select_limit;
+
+    DBUG_RETURN(TRUE);
+  } 
+}
+
+
+/**
+  Find a key for applying single-table UPDATE/DELETE with a given ORDER
+
+  @param       order           Linked list of ORDER BY arguments
+  @param       table           Table to find a key
+  @param       select          Pointer to access/update select->quick (if any)
+  @param       limit           LIMIT clause parameter 
+  @param [out] need_sort       TRUE if filesort needed
+  @param [out] reverse
+    TRUE if the key is reversed against the given ORDER (undefined if key == MAX_KEY)
+
+  @return
+    - MAX_KEY if no key found                        (need_sort == TRUE)
+    - MAX_KEY if quick select result order is OK     (need_sort == FALSE)
+    - key number (either index scan or quick select) (need_sort == FALSE)
+
+  @note
+    Side effects:
+    - may deallocate select->quick, or deallocate and replace it;
+    - may set table->quick_condition_rows and table->quick_rows[...]
+      to table->file->stats.records. 
+*/
+
+uint get_index_for_order(ORDER *order, TABLE *table, SQL_SELECT *select,
+                         ha_rows limit, bool *need_sort, bool *reverse)
+{
+  if (select && select->quick && select->quick->unique_key_range())
+  { // Single row select (always "ordered"): Ok to use with key field UPDATE
+    *need_sort= FALSE;
+    /*
+      Returning of MAX_KEY here prevents updating of used_key_is_modified
+      in mysql_update(). Use quick select "as is".
+    */
+    return MAX_KEY;
+  }
+
+  if (!order)
+  {
+    *need_sort= FALSE;
+    if (select && select->quick)
+      return select->quick->index; // index or MAX_KEY, use quick select as is
+    else
+      return table->file->key_used_on_scan; // MAX_KEY or index for some engines
+  }
+
+  if (!is_simple_order(order)) // just to cut further expensive checks
+  {
+    if (select)
+      select->set_quick(NULL);
+    *need_sort= TRUE;
+    return MAX_KEY;
+  }
+
+  if (select && select->quick)
+  {
+    if (select->quick->index == MAX_KEY)
+    {
+      *need_sort= TRUE;
+      select->set_quick(NULL);
+      return MAX_KEY;
+    }
+
+    uint dummy;
+    switch (test_if_order_by_key(order, table, select->quick->index,
+                                 &dummy)) {
+    case 1: // desired order
+      *need_sort= FALSE;
+      return select->quick->index;
+    case 0: // unacceptable order
+      *need_sort= TRUE;
+      select->set_quick(NULL);
+      return MAX_KEY;
+    case -1: // desired order, but opposite direction
+      if (select->set_quick(select->quick->make_reverse()))
+      {
+        *need_sort= FALSE;
+        return select->quick->index;
+      }
+      else
+      {
+        *need_sort= TRUE;
+        return MAX_KEY;
+      }
+    }
+    DBUG_ASSERT(0);
+  }
+  else if (limit != HA_POS_ERROR)
+  { // check if some index scan & LIMIT is more efficient than filesort
+    
+    /*
+      Update quick_condition_rows since single table UPDATE/DELETE procedures
+      don't call make_join_statistics() and leave this variable uninitialized.
+    */
+    table->quick_condition_rows= table->file->stats.records;
+    
+    int key= -1, direction;
+    uint dummy;
+    if (test_if_cheaper_ordering(NULL, table, order,
+                                 table->keys_in_use_for_order_by,
+                                 &key, &direction, &dummy, &limit) &&
+        !is_key_used(table, key, table->write_set))
+    {
+      *need_sort= FALSE;
+      *reverse= (direction < 0);
+      return key;
+    }
+  }
+  *need_sort= TRUE;
+  return MAX_KEY;
+}
+
+
+/**
   @} (end of group Query_Optimizer)
 */

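The splice logic of simple_remove_const(), added in the sql_select.cc hunk above, can be mirrored on a toy singly linked list (ToyOrder and toy_remove_const are illustrative names; the constant columns are approximated by a plain set rather than a WHERE condition tree):

```cpp
#include <set>
#include <string>

// Toy model of simple_remove_const(): walk a singly linked ORDER list and
// splice out entries whose column the WHERE clause pins to a constant.
struct ToyOrder {
  std::string column;
  ToyOrder *next;
};

inline ToyOrder *toy_remove_const(ToyOrder *order,
                                  const std::set<std::string> &const_cols)
{
  ToyOrder *first = nullptr, *prev = nullptr;
  for (; order; order = order->next)
  {
    if (const_cols.count(order->column))
      continue;                 // ORDER BY on a constant has no effect
    if (!first)
      first = order;            // remember the new head
    if (prev)
      prev->next = order;       // splice over the dropped entries
    prev = order;
  }
  if (prev)
    prev->next = nullptr;       // terminate the filtered list
  return first;                 // NULL when every item was constant
}
```

This models e.g. WHERE a = 5 ORDER BY a, b collapsing to an effective ORDER BY b, after which a plain index on b can serve the statement without a filesort.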
=== modified file 'sql/sql_select.h'
--- a/sql/sql_select.h	2010-03-20 09:35:40 +0000
+++ b/sql/sql_select.h	2010-03-31 12:20:59 +0000
@@ -804,4 +804,16 @@ inline bool optimizer_flag(THD *thd, uin
   return (thd->variables.optimizer_switch & flag);
 }
 
+int test_if_order_by_key(ORDER *order, TABLE *table, uint idx,
+                         uint *used_key_parts);
+bool test_if_cheaper_ordering(TABLE *table, ORDER *order,
+                              const key_map &usable_keys,
+                              int *key, int *key_direction,
+                              ha_rows select_limit);
+bool const_field_in_where(COND *where, Field *field);
+bool const_item_in_where(COND *where, Item *field);
+ORDER *simple_remove_const(ORDER *order, COND *where);
+uint get_index_for_order(ORDER *order, TABLE *table, SQL_SELECT *select,
+                         ha_rows limit, bool *need_sort, bool *reverse);
+
 #endif /* SQL_SELECT_INCLUDED */

=== modified file 'sql/sql_update.cc'
--- a/sql/sql_update.cc	2010-03-22 10:36:23 +0000
+++ b/sql/sql_update.cc	2010-03-31 12:20:59 +0000
@@ -187,12 +187,13 @@ int mysql_update(THD *thd,
 {
   bool		using_limit= limit != HA_POS_ERROR;
   bool		safe_update= test(thd->variables.option_bits & OPTION_SAFE_UPDATES);
-  bool		used_key_is_modified, transactional_table, will_batch;
+  bool		used_key_is_modified= FALSE, transactional_table, will_batch;
   bool		can_compare_record;
   int           res;
   int		error, loc_error;
-  uint		used_index= MAX_KEY, dup_key_found;
+  uint		used_index, dup_key_found;
   bool          need_sort= TRUE;
+  bool          reverse= FALSE;
 #ifndef NO_EMBEDDED_ACCESS_CHECKS
   uint		want_privilege;
 #endif
@@ -343,11 +344,7 @@ int mysql_update(THD *thd,
     my_ok(thd);				// No matching records
     DBUG_RETURN(0);
   }
-  if (!select && limit != HA_POS_ERROR)
-  {
-    if ((used_index= get_index_for_order(table, order, limit)) != MAX_KEY)
-      need_sort= FALSE;
-  }
+
   /* If running in safe sql mode, don't allow updates without keys */
   if (table->quick_keys.is_clear_all())
   {
@@ -363,23 +360,19 @@ int mysql_update(THD *thd,
 
   table->mark_columns_needed_for_update();
 
-  /* Check if we are modifying a key that we are used to search with */
-  
-  if (select && select->quick)
-  {
-    used_index= select->quick->index;
-    used_key_is_modified= (!select->quick->unique_key_range() &&
-                          select->quick->is_keys_used(table->write_set));
+  table->update_const_key_parts(conds);
+  order= simple_remove_const(order, conds);
+        
+  used_index= get_index_for_order(order, table, select, limit,
+                                  &need_sort, &reverse);
+  if (need_sort)
+  { // Assign table scan index to check below for modified key fields:
+    used_index= table->file->key_used_on_scan;
+  }
+  if (used_index != MAX_KEY)
+  { // Check if we are modifying a key that we use to search with:
+    used_key_is_modified= is_key_used(table, used_index, table->write_set);
   }
-  else
-  {
-    used_key_is_modified= 0;
-    if (used_index == MAX_KEY)                  // no index for sort order
-      used_index= table->file->key_used_on_scan;
-    if (used_index != MAX_KEY)
-      used_key_is_modified= is_key_used(table, used_index, table->write_set);
-  }
-
 
 #ifdef WITH_PARTITION_STORAGE_ENGINE
   if (used_key_is_modified || order ||
@@ -399,7 +392,7 @@ int mysql_update(THD *thd,
       table->use_all_columns();
     }
 
-    /* note: We avoid sorting avoid if we sort on the used index */
+    /* note: We avoid sorting if we sort on the used index */
     if (order && (need_sort || used_key_is_modified))
     {
       /*
@@ -460,7 +453,7 @@ int mysql_update(THD *thd,
       if (used_index == MAX_KEY || (select && select->quick))
         init_read_record(&info, thd, table, select, 0, 1, FALSE);
       else
-        init_read_record_idx(&info, thd, table, 1, used_index);
+        init_read_record_idx(&info, thd, table, 1, used_index, reverse);
 
       thd_proc_info(thd, "Searching rows for update");
       ha_rows tmp_limit= limit;

=== modified file 'sql/table.cc'
--- a/sql/table.cc	2010-03-24 15:03:44 +0000
+++ b/sql/table.cc	2010-03-31 12:20:59 +0000
@@ -20,6 +20,7 @@
 #include "sql_trigger.h"
 #include <m_ctype.h>
 #include "my_md5.h"
+#include "sql_select.h"
 
 /* INFORMATION_SCHEMA name */
 LEX_STRING INFORMATION_SCHEMA_NAME= {C_STRING_WITH_LEN("information_schema")};
@@ -4891,6 +4892,57 @@ void init_mdl_requests(TABLE_LIST *table
 }
 
 
+/**
+  Update TABLE::const_key_parts for a single-table UPDATE/DELETE query
+
+  @param conds               WHERE clause expression
+
+  @note
+    Set const_key_parts bits if key fields are equal to constants in
+    the WHERE expression.
+*/
+
+void TABLE::update_const_key_parts(COND *conds)
+{
+  bzero((char*) const_key_parts, sizeof(key_part_map) * s->keys);
+
+  if (conds == NULL)
+    return;
+
+  for (uint index= 0; index < s->keys; index++)
+  {
+    KEY_PART_INFO *keyinfo= key_info[index].key_part;
+    KEY_PART_INFO *keyinfo_end= keyinfo + key_info[index].key_parts;
+
+    for (key_part_map part_map= (key_part_map)1; 
+        keyinfo < keyinfo_end;
+        keyinfo++, part_map<<= 1)
+    {
+      if (const_field_in_where(conds, keyinfo->field))
+        const_key_parts[index]|= part_map;
+    }
+  }
+}
+
+/**
+  Test if the order list consists of simple field expressions
+
+  @param order                Linked list of ORDER BY arguments
+
+  @return TRUE if @a order is empty or consists of simple field expressions
+*/
+
+bool is_simple_order(ORDER *order)
+{
+  for (ORDER *ord= order; ord; ord= ord->next)
+  {
+    if (ord->item[0]->real_item()->type() != Item::FIELD_ITEM)
+      return FALSE;
+  }
+  return TRUE;
+}
+
+
 /*****************************************************************************
 ** Instansiate templates
 *****************************************************************************/

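[Reviewer note] The bitmap loop in TABLE::update_const_key_parts() above can be shown with plain containers. This is an illustrative sketch under stated assumptions, not the server implementation: a set of column names stands in for the per-field const_field_in_where() check, and const_key_parts here is a plain function rather than a TABLE member.

```cpp
#include <cstdint>
#include <set>
#include <string>
#include <vector>

typedef uint64_t key_part_map;  // same idea as the server's key_part_map

// For one index (its key parts listed in order), set bit i when key part i
// is bound to a constant by the WHERE clause. Bits for constant-bound
// leading key parts are what later lets simple_remove_const() drop those
// columns from ORDER BY.
key_part_map const_key_parts(const std::vector<std::string> &key_columns,
                             const std::set<std::string> &const_columns)
{
  key_part_map map = 0;
  key_part_map part_bit = 1;
  for (size_t i = 0; i < key_columns.size(); i++, part_bit <<= 1)
    if (const_columns.count(key_columns[i]))
      map |= part_bit;
  return map;
}
```

For example, with an index on (a, b, c) and a WHERE clause fixing a and c to constants, bits 0 and 2 are set, so ORDER BY a, b can be satisfied by the index once the constant column a is removed.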
=== modified file 'sql/table.h'
--- a/sql/table.h	2010-03-20 09:35:40 +0000
+++ b/sql/table.h	2010-03-31 12:20:59 +0000
@@ -965,6 +965,8 @@ public:
       file->extra(HA_EXTRA_NO_KEYREAD);
     }
   }
+
+  void update_const_key_parts(COND *conds);
 };
 
 
@@ -1859,4 +1861,6 @@ size_t max_row_length(TABLE *table, cons
 
 void init_mdl_requests(TABLE_LIST *table_list);
 
+bool is_simple_order(ORDER *order);
+
 #endif /* TABLE_INCLUDED */


Attachment: [text/bzr-bundle] bzr/gshchepa@mysql.com-20100331122059-n9h2nwkg8qgpehf3.bundle