List: Internals
From: Michael Widenius
Date: October 13 2000 11:08pm
Subject: Re: Does someone forget record_count?

>>>>> "Paul" == Paul Cadach <paul@stripped> writes:

Paul> Hi,
Paul> -----Original Message-----
Paul> From: Michael Widenius <monty@stripped>
Paul> To: Paul Cadach <paul@stripped>
Paul> Date: Friday, October 13, 2000 3:31 PM
Paul> Subject: Does someone forget record_count?

>>>>>>> "Paul" == Paul Cadach <paul@stripped> writes:
Paul> Hi,
Paul> It looks like someone forgot to multiply s->found_records by record_count
Paul> at line 1739 (or around) of:
Paul> if (s->on_expr)
Paul> {
Paul>   tmp=record_count*min(s->found_records,s->worst_seeks); // Can't use read cache
Paul> }
>> tmp should at the above place be the number of seeks you have to do to
>> find the rows for this level.
>> s->found_rows should either be the number of rows found by an index
>> which we compare with a constant, or the number of rows in the table.
>> Why do you think it's wrong?  Can you give us an example where tmp
>> gets an unreasonable value?

Paul> If it's not wrong, then a bug is hidden in SELECT::test_quick_select(),
Paul> because it returns invalid row counts.

Actually, it doesn't:

mysql> select count(*) from alarms where strattr='T';
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (2.56 sec)

This is quite close to 3, which is correct in this case.

Paul> Also, everywhere before this line tmp is multiplied by record_count when it
Paul> yields some number of rows:

This is a different thing; in this case tmp should be the number of
required seeks.


Paul> mysql> explain select s.stncode,a.created from station s left join alarms a
Paul>     -> on a.stncode=s.stncode and a.strattr='T' group by s.stncode;
Paul> +-------+------+---------------+------+---------+------+------+-----------------+
Paul> | table | type | possible_keys | key  | key_len | ref  | rows | Extra           |
Paul> +-------+------+---------------+------+---------+------+------+-----------------+
Paul> | s     | ALL  | NULL          | NULL |    NULL | NULL |    9 | Using temporary |
Paul> | a     | ALL  | i3,i1         | NULL |    NULL | NULL |    2 |                 |
Paul> +-------+------+---------------+------+---------+------+------+-----------------+
Paul> 2 rows in set (0.00 sec)
Paul> -------- not optimized query --------

Paul> The second query runs slowly while the relation between tables s and a still
Paul> exists. Playing with this type of query shows me that sometimes adding a
Paul> "dummy" condition (like "a.stncode is null or a.stncode is not null") to the
Paul> where clause improves results, but not every time.

Paul> As you can see, in the last (non-optimized) query MySQL detects an invalid
Paul> quick row count. When it tries to use key i3 (which must be used, because
Paul> only 9 rows (grouped into 5 different rows) will be returned by this query
Paul> from about 33000 rows in total in the alarms table), it has the following
Paul> values at the next place:
Paul>         if (best == DBL_MAX ||
Paul>             (tmp + record_count/(double) TIME_FOR_COMPARE*s->found_records <
Paul>              best + record_count/(double) TIME_FOR_COMPARE*records))
Paul> record_count = 9 (given as argument to find_best() from the query estimation
Paul>                   on the first table)
Paul> best = 18 (set as record_count * 2; sorry, I don't remember where it gets
Paul>            the value '2')
Paul> tmp = 2 (set from s->found_records)

Paul> So, the previous condition will be true, and key usage is reset by setting
Paul> best_key to 0.

This is actually ok; best_key was reset to get the optimizer to use
a buffered full join on the key.  'make_join_select' should notice that
tab->quick is set and only scan through the keys in the given range.

The bug was that make_join_select() wasn't called because your query didn't
have a WHERE clause.

I have fixed this by introducing a dummy WHERE clause in this case.

The fix will be in 3.23.26.

Paul> BTW, how do I subscribe to the internals list, and does MySQL have some
Paul> description of its internals?

Just mail to internals-subscribe@stripped

Thread:
  Re: Does someone forget record_count? — Paul Cadach, 13 Oct
    Re: Does someone forget record_count? — Michael Widenius, 14 Oct