Actually, that query is not my concern.

I have a query that is taking a very long time.
Slow log output:
# Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _____________
# Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time         80887s   192us   2520s    415s   1732s    612s     80s
# Lock time           13ms       0   133us    68us   103us    23us    69us
# Rows sent        430.89k       0  17.58k   2.21k  12.50k   3.96k   49.17
# Rows examine      32.30M       0 466.46k 169.63k 440.37k 186.02k 117.95k
# Query size        65.45k       6     577  343.70  563.87  171.06  246.02

In the log output:
# Query_time: 488.031783  Lock_time: 0.000041 Rows_sent: 50  Rows_examined: 471150
SET timestamp=1346655789;
SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name, t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time, t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow' ORDER BY t0.created_time DESC LIMIT 0, 50;

The table is nearly 30 GB and growing day by day.

Attaching the table definition and index output. I have an index on the bean_type column but can't understand why it examined all the rows of the table.
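From what I have read, a single-column index on bean_type alone may not help with the sort, so I am considering a compound index like the one below (the index name is just illustrative):

```sql
-- Illustrative: with (bean_type, created_time) both the WHERE filter and
-- the ORDER BY created_time DESC can be satisfied from the index, so
-- MySQL can stop after the first 50 rows instead of fetching and sorting
-- every 'Workflow' row.
ALTER TABLE WF_1 ADD INDEX idx_bean_type_created (bean_type, created_time);
```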

Thanks



On Wed, Sep 5, 2012 at 12:24 AM, Rick James <rjames@yahoo-inc.com> wrote:
100 is tantamount to turning off the log.  I prefer 2.

select  count(ENTITY_NAME)
    from  ALERT_EVENTS
    where  EVENT_TIME > date_sub(now(),INTERVAL 60 MINUTE)
      and  status=upper('failed')
      and  ENTITY_NAME='FETL-ImpressionRC-conversion';
begs for the _compound_ index
INDEX(ENTITY_NAME, EVENT_TIME)
This would be even better:
INDEX(ENTITY_NAME, status, EVENT_TIME)
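In DDL form that would be something like (index name illustrative):

```sql
-- Equality predicates (ENTITY_NAME, status) go first so they narrow the
-- scan; the range predicate on EVENT_TIME goes last.
ALTER TABLE ALERT_EVENTS
  ADD INDEX idx_entity_status_time (ENTITY_NAME, status, EVENT_TIME);
```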

COUNT(*) should be used if you don't need to check the column for being NULL.

> Rows_examined: 141145
That is probably the entire table.  Will the table grow?  If so, the query will get slower.  Meanwhile, the index I suggested will (probably) be much faster.
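You can check with EXPLAIN before and after adding the index; the `rows` estimate should drop from roughly the whole table to just the matching rows:

```sql
EXPLAIN SELECT count(*)
  FROM ALERT_EVENTS
 WHERE EVENT_TIME > date_sub(now(), INTERVAL 60 MINUTE)
   AND status = upper('failed')
   AND ENTITY_NAME = 'FETL-ImpressionRC-conversion';
```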


> -----Original Message-----
> From: Suresh Kuna [mailto:sureshkumarilu@gmail.com]
> Sent: Saturday, September 01, 2012 1:03 AM
> To: Adarsh Sharma
> Cc: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
>
> Disable log-queries-not-using-indexes to log only queries > 100 sec.
>
> Just do "> /var/lib/mysql/slow-queries.log" and it will clear the log.
>
> On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma
> <eddy.adarsh@gmail.com>wrote:
>
> > Hi all,
> >
> > I am using MySQL Ver 14.14 Distrib 5.1.58, in which I enabled the
> > slow query log by setting the parameters below in my.cnf:
> >
> > log-slow-queries=/usr/local/mysql/slow-query.log
> > long_query_time=100
> > log-queries-not-using-indexes
> >
> > I am assuming, from information on the internet, that long_query_time
> > is in seconds, but when I look at the slow query log there are lots of
> > statements (queries):
> >
> > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.052784  Lock_time: 0.000043 Rows_sent: 1
> Rows_examined:
> > 141145
> > SET timestamp=1346409734;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > # Time: 120831 10:43:14
> > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.053599  Lock_time: 0.000079 Rows_sent: 1
> Rows_examined:
> > 141145
> > SET timestamp=1346409794;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-click-enhancer-deferred';
> > # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.054185  Lock_time: 0.000086 Rows_sent: 1
> Rows_examined:
> > 141145
> > SET timestamp=1346409794;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > # Time: 120831 10:43:22
> > # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.000163  Lock_time: 0.000045 Rows_sent: 1
> Rows_examined:
> > 13
> >
> >
> >
> > I don't understand the query-time unit in the slow query log, because
> > I expect only queries that take > 100 s to be logged. I tested with
> > the sleep command for 60 s and it wasn't logged in the slow query log;
> > when I slept for 120 s it was logged. But I don't know why the other
> > queries are being logged in the slow log.
> >
> > # Query_time: 120.000259  Lock_time: 0.000000 Rows_sent: 1
> Rows_examined:
> > 0
> > SET timestamp=1346443103;
> > SELECT SLEEP(120);
> >
> > Also, my slow log keeps growing, so I decided to purge it with the
> > command below:
> >
> > cat /dev/null > /var/lib/mysql/slow-queries.log
> >
> >
> > Does anyone have any ideas about this?
> >
> >
> > Thanks
> >
>
>
>
> --
> Thanks
> Suresh Kuna
> MySQL DBA