List: General Discussion
From: Michael Widenius   Date: January 4 2000 1:22am
Subject: MYSQL 3.23.8
Hi!

>>>>> "Peter" == Peter Zaitsev <pz@stripped> writes:

Peter> I reported some bugs and unusual behavior of MySQL 3.23.7. I was told some
Peter> bugs were fixed in the new release, so I'm reposting the bugs that are still
Peter> unsolved in my environment with this release.
Peter> MySQL compiled with charset win1251 on Sun Sparc Ultra-2, Solaris 2.6, GCC
Peter> 2.95.2


Peter> 1) Troubles with losing the connection to the MySQL server - I have a large
Peter> number of these records in a log file and I can reproduce the connection loss
Peter> error by running one of the included scripts. This test passed on the same
Peter> machine when MySQL 3.22.27 was up.
Peter> <(root@spylog)/soft/mysql/mysql-3.23.8/tests># perl fork3_test.pl
Peter> Testing 3 multiple connections to a server with 1 insert, 1 delete
Peter> and 1 select connections.
Peter> Creating table bench_f1 in database test
Peter> Use of uninitialized value at fork3_test.pl line 94.
Peter> Got error on insert:
Peter> Database handle destroyed without explicit disconnect.
Peter> thread 'insert' finnished with exit code 255
Peter> Test_delete: Deleted all rows 11 times
Peter> thread 'delete' finnished with exit code 0
Peter> Test_select: ok
Peter> thread 'select1' finnished with exit code 0
Peter> Test failed
Peter> Total time: 260 wallclock secs ( 0.04 usr +  0.01 sys =  0.05 CPU)
Peter> <(root@spylog)/soft/mysql/mysql-3.23.8/tests>#

I just ran the above on an UltraSPARC with Solaris 2.7 without any
problems:

(/my/monty/sql-3.23/tests) fork3_test.pl
Testing 3 multiple connections to a server with 1 insert, 1 delete
and 1 select connections.
Creating table bench_f1 in database test
Test_insert: Inserted 100000 rows
thread 'insert' finnished with exit code 0
Test_delete: Deleted all rows 12 times
thread 'delete' finnished with exit code 0
Test_select: ok
thread 'select1' finnished with exit code 0
Test ok
Total time: 106 secs ( 0.02 usr  0.04 sys =  0.06 cpu)

I hope this is solvable by just recompiling your clients...

Peter> 2) Troubles with the MyISAM format: When the table rating is in ISAM format
Peter> the query returns the correct result, but converting it to MyISAM makes the
Peter> result really unusual (the table is small so I can upload it to you)

Peter> ISAM: mysql> select count(*) from rating where
Peter> unix_timestamp(now())-unix_timestamp(rating.created)<84600;
Peter> +----------+
Peter> | count(*) |
Peter> +----------+
Peter> |       44 |
Peter> +----------+
Peter> 1 row in set (0.07 sec)

Peter> mysql> alter table rating TYPE=MYISAM;
Peter> Query OK, 2592 rows affected (16.95 sec)
Peter> Records: 2592  Duplicates: 0  Warnings: 2593

Peter> mysql> select count(*) from rating where
Peter> unix_timestamp(now())-unix_timestamp(rating.created)<84600;
Peter> +----------+
Peter> | count(*) |
Peter> +----------+
Peter> |      734 |
Peter> +----------+
Peter> 1 row in set (0.09 sec)


The following query shows the problem:

mysql> select unix_timestamp(now()),unix_timestamp(created),created from rating2
 limit 10;
+-----------------------+-------------------------+----------------+
| unix_timestamp(now()) | unix_timestamp(created) | created        |
+-----------------------+-------------------------+----------------+
|             946947021 |                       0 | 00000000000000 |
|             946947021 |             -1483268040 | 19991103211431 |
|             946947021 |             -1544675272 | 19991104032539 |
|             946947021 |               204022072 | 19991104073452 |
|             946947021 |              -472178376 | 19991104201755 |
|             946947021 |              1131946552 | 19991105072507 |
|             946947021 |              -791272904 | 19991105140832 |
|             946947021 |             -1557519560 | 19991105200611 |
|             946947021 |              1111171896 | 19991105211706 |
|             946947021 |              -717085896 | 19991105214925 |
+-----------------------+-------------------------+----------------+

(In other words, unix_timestamp() doesn't work on a pure TIMESTAMP
field on Solaris with MyISAM; it works on Intel.)

The fix is to replace 'created' with 'created+0' or apply the
following patch:

*** /my/monty/master/mysql-3.23.8-alpha/sql/field.h	Thu Dec 30 03:12:09 1999
--- ./field.h	Tue Jan  4 02:53:29 2000
***************
*** 498,503 ****
--- 494,503 ----
    void set_time();
    inline long get_time()
    {
+ #ifdef WORDS_BIGENDIAN
+     if (table->db_low_byte_first)
+       return sint4korr(ptr);
+ #endif
      long tmp;
      longget(tmp,ptr);
      return tmp;

<cut>

Peter> 3) The query which returned an incorrect result in 3.23.7 now works (that's
Peter> cool), but now it runs VERY slowly - unusual for a query having all the
Peter> indexes it needs:

Peter> mysql> select id,ts,flag,session_id,visitor_id,ip,page_group,referer_page_id,
Peter> history,online_users from st00.g00hits where id>466902 and
Peter> ts>'19991228203447' and counter_id=102 limit 10;
Peter> +-------------+------+-----------------------------------------------+------------+---------+-------+-------+------------+
Peter> | table       | type | possible_keys                                 | key        | key_len | ref   | rows  | Extra      |
Peter> +-------------+------+-----------------------------------------------+------------+---------+-------+-------+------------+
Peter> | g00hitstest | ref  | PRIMARY,counter_id,counter_id_2,counter_id_3  | counter_id |       4 | const | 21274 | where used |
Peter> +-------------+------+-----------------------------------------------+------------+---------+-------+-------+------------+
Peter> 1 row in set (0.01 sec)

Peter> this query runs 32 seconds !!! (I bet MySQL is doing a full table scan
Peter> here!)

Nope, even worse;  It uses indexes :(

The problem here is that MySQL will use the counter_id field to find
the 10 first rows fast.

In your table:

mysql> select count(*) from g00hits where counter_id=102;
+----------+
| count(*) |
+----------+
|   354118 |
+----------+
1 row in set (1.40 sec)

The problem here is that MySQL must fetch many rows in key order to
find a row that satisfies your WHERE.  As the rows are fetched by key,
you will in the worst case need a seek for each row.  This is MUCH
slower than a table scan if it takes a long time to find the matching
rows, and MUCH faster if a lot of rows match the criteria.

In this case MySQL makes a very good guess at the best possible index
for the query, but doesn't succeed.  Based on the information MySQL has
available, it's VERY hard to get the above query right.

Peter> mysql> select id,ts,flag,session_id,visitor_id,ip,page_group,referer_page_id,
Peter> history,online_users from st00.g00hits where id>466902 and counter_id=102
Peter> limit 10;

Peter> +---------+-------+-----------------------------------------------+---------+---------+------+-------+------------+
Peter> | table   | type  | possible_keys                                 | key     | key_len | ref  | rows  | Extra      |
Peter> +---------+-------+-----------------------------------------------+---------+---------+------+-------+------------+
Peter> | g00hits | range | PRIMARY,counter_id,counter_id_2,counter_id_3  | PRIMARY | NULL    | NULL | 42500 | where used |
Peter> +---------+-------+-----------------------------------------------+---------+---------+------+-------+------------+
Peter> 1 row in set (0.00 sec)

Peter> this query runs 0.01 seconds !!!

Of course;  MySQL will do exactly the same thing in the above query,
but in this case starts scanning based on the PRIMARY KEY, which in
this case is a good choice.  Note that for some other value for
counter_id it's probably not a good choice to scan the table based on
the PRIMARY ID, as MySQL may have to scan many rows before finding a
matching counter_id.

The only 'easy' solution is to make it possible to tell MySQL which
keys it should use for a particular query, but this will only help if
you have hard coded queries and not queries that are generated on the fly.

(MySQL uses the counter_id index in the first case as it's longer than
the PRIMARY key; normally this is a good rule, but not in this case)

<cut>

Peter> 4) Another optimization problem - MySQL does not use an index where it should:

Peter> select sum(layers.stat_vishosts.hits) as val,st00.g00domip.domip as
Peter> dom,st00.g00domip.ip as ip from
Peter> layers.layers_stat_vishosts,layers.stat_vishosts,st00.g00domip where
Peter> layers.layers_stat_vishosts.type=2 and
Peter> layers.layers_stat_vishosts.page_group=255 and
Peter> layers.layers_stat_vishosts.counter=101 and
Peter> layers.layers_stat_vishosts.begin >= '1999-12-01' and
Peter> layers.layers_stat_vishosts.begin <= '1999-12-26' and
Peter> layers.layers_stat_vishosts.id = layers.stat_vishosts.layer_id and
Peter> layers.stat_vishosts.id = st00.g00domip.ip group by
Peter> layers.stat_vishosts.id
Peter> ORDER BY val desc limit 10;

Peter> +----------------------+--------+----------------+---------+---------+-------------------------------------------------+-------+------------+
Peter> | table                | type   | possible_keys  | key     | key_len | ref                                             | rows  | Extra      |
Peter> +----------------------+--------+----------------+---------+---------+-------------------------------------------------+-------+------------+
Peter> | layers_stat_vishosts | range  | search,PRIMARY | search  |    NULL | NULL                                            |    24 | where used |
Peter> | g00domip             | ALL    | PRIMARY        | NULL    |    NULL | NULL                                            | 58794 |            |
Peter> | stat_vishosts        | eq_ref | PRIMARY        | PRIMARY |       8 | layers.layers_stat_vishosts.id,st00.g00domip.ip |     1 |            |
Peter> +----------------------+--------+----------------+---------+---------+-------------------------------------------------+-------+------------+

Peter> The query DOES NOT use the index on the g00domip table and prefers to do a
Peter> full table scan :(  So it runs really slowly (more than a minute).
Peter> When replacing the table st00.g00domip with exactly the same table having
Peter> about 20 times more rows, the query starts to run MUCH faster and EXPLAIN
Peter> reports that it starts to use indexes.

Sorry, but as I didn't get the hits table, I can't comment on this any
more than I already have.  Didn't you mail that myisamchk -a
solved this for you?

Thanks anyway for the help in finding bugs!  (At least you found one that
was easy to fix :)

Regards,
Monty