M5 wrote:
> Sometimes, when I run a query on a table, the index will be used. But if
> I run the SAME query, but on a larger set of rows (e.g., where id
> BETWEEN 1 AND 100000 vs WHERE id BETWEEN 1 AND 500), the index is not
> used. (EXPLAIN tells me this.) And as a result, the query runs slow.
> The question I have is, why is the index not used on larger-range
> queries? Could it be because the key_buffer_size parameter is too small?
> Presently, it's set at the default value of 8M. Could increasing that
> value to, say, 128M, potentially solve this problem? Or is there some
> other factor involved?
> --MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
As far as I know, key_buffer_size doesn't affect this. It's a decision
the optimiser makes: I think that when it estimates it would need to
scan more than about 60% of the index, it decides that would be too
many reads for too little gain and that a full table scan may be
faster. This is generally a wise decision, but in some cases it isn't
(because you don't really end up scanning 60%, or because all the rows
you want to read are sequential, or ...).
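You can watch the optimiser change its mind with EXPLAIN. A sketch, on a hypothetical table t with an index on id (the table and index names here are placeholders, not from the original post):

```sql
-- Narrow range: EXPLAIN will typically report type=range with the id index.
EXPLAIN SELECT * FROM t WHERE id BETWEEN 1 AND 500;

-- Wide range: past the optimiser's threshold, EXPLAIN may instead report
-- type=ALL and key=NULL, meaning it chose a full table scan.
EXPLAIN SELECT * FROM t WHERE id BETWEEN 1 AND 100000;
```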
To be sure it has the best information for that decision, make sure
your table is analyzed, especially if you do many deletes on it!
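Refreshing the statistics is just one statement (table name t is a placeholder):

```sql
-- Rebuild the index statistics the optimiser uses for its row estimates.
-- Deletes can leave the stats stale, skewing the "how much of the index
-- would I scan?" calculation.
ANALYZE TABLE t;
```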
You could force it to use the index, but beware of that solution, as it
may bite you if your data set changes over time (growth).
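For reference, forcing looks like this with MySQL's index hint syntax (t and idx_id are hypothetical names; the hint pins the plan even when a scan would have been cheaper, which is exactly the risk mentioned above):

```sql
-- FORCE INDEX tells the optimiser a table scan is assumed to be very
-- expensive, so it will use idx_id whenever it can.
SELECT * FROM t FORCE INDEX (idx_id)
WHERE id BETWEEN 1 AND 100000;
```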
GPG keys available @ http://rounoff.darktech.org