List: Internals
From: Samuel Ziegler
Date: January 25 2007 12:41am
Subject: Unbounded memory growth in 5.0.24/27
I'm seeing a problem with MySQL 5.0.24 & 5.0.27 where the mysqld
process grows in memory quickly, hits the 3G process limit, reports
out of memory, and then shuts down, sometimes cleanly, sometimes not
so cleanly.

I know that this is a generic problem which can be due to a multitude
of end-user issues, but the more I dig into this, the more it looks
like a MySQL issue.

Database layout (which is, admittedly, a little funky; a rough sketch
of the access pattern follows the list):
- ~1000 individual databases
- each database has ~15 stored procedures & ~6 innodb tables
- < 20 database connections at any given time
- frequent switching between databases
- frequent use of prepared statements
- frequent use of stored procedures
- config file is the stock 4G sample, but with the InnoDB buffer pool dropped to 1G
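
To make that concrete, here is a rough sketch of the access pattern
as standalone C against the client library.  The host, credentials,
and database/table/procedure names are invented, not our actual code;
only the shape of the traffic matters:

#include <stdio.h>
#include <string.h>
#include <mysql/mysql.h>

int main(void)
{
  MYSQL *conn = mysql_init(NULL);

  /* CLIENT_MULTI_RESULTS is needed to CALL procedures from the C API. */
  if (!mysql_real_connect(conn, "localhost", "user", "pass",
                          NULL, 0, NULL, CLIENT_MULTI_RESULTS)) {
    fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
    return 1;
  }

  char dbname[64];
  for (int round = 0; round < 100000; round++) {
    /* Frequent switching between the ~1000 per-customer databases. */
    snprintf(dbname, sizeof(dbname), "customer_%04d", round % 1000);
    if (mysql_select_db(conn, dbname))
      continue;

    /* Frequent prepared statements; the prepare opens the referenced
       tables (cf. the openfrm trace below). */
    MYSQL_STMT *stmt = mysql_stmt_init(conn);
    const char *sql = "SELECT id FROM orders WHERE id = 1";
    if (!mysql_stmt_prepare(stmt, sql, strlen(sql))) {
      mysql_stmt_execute(stmt);
      mysql_stmt_free_result(stmt);
    }
    mysql_stmt_close(stmt);

    /* Frequent stored procedure calls; each routine gets parsed and
       put into this connection's SP cache (cf. the db_load_routine
       trace below). */
    if (!mysql_query(conn, "CALL refresh_summary()")) {
      do {
        MYSQL_RES *res = mysql_store_result(conn);
        if (res)
          mysql_free_result(res);
      } while (mysql_next_result(conn) == 0);
    }
  }

  mysql_close(conn);
  return 0;
}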

Ran mysqld under valgrind and no leaks were reported when doing a
clean shutdown.

I abnormally terminated the mysqld process under valgrind when it was
close to the 3G limit and got a good dump of the still-reachable memory.

The biggest user was the InnoDB buffer pool, understandably.

However, the next two biggest users seemed to be affiliated with the
stored procedures & prepared statements, which struck me as odd.

The full output of valgrind is huge, so here are some interesting
snippets:

First stored proc entry:
==2281== 69,383,952 bytes in 8,478 blocks are still reachable in loss
record 398 of 399
==2281==    at 0x401A812: malloc (vg_replace_malloc.c:149)
==2281==    by 0x8384721: my_malloc (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x838A39F: init_dynamic_array (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x838A192: _hash_init (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x8103D8D: Query_tables_list::reset_query_tables_list(bool)
(in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81042A0: st_lex::st_lex() (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x826A3FF: sp_head::reset_lex(THD*) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x8197FBC: MYSQLparse(void*) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x82711A9: db_load_routine(THD*, int, sp_name*, sp_head**,
unsigned long, char const*, char const*, char const*, st_sp_chistics&,
char const*, long long, long long) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x8270D54: db_find_routine(THD*, int, sp_name*, sp_head**)
(in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x82724E9: sp_cache_routines_and_add_tables_aux(THD*,
st_lex*, Sroutine_hash_entry*, bool, bool*) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x8270648: sp_cache_routines_and_add_tables(THD*, st_lex*,
bool, bool*) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)

Note the large memory usage & the ~8.5k blocks.  There are several more
entries from the stored procedure code which total ~50k individual
malloc blocks.
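
Some quick math on that record: 69,383,952 bytes over 8,478 blocks
works out to roughly 8K per block.  With ~1000 databases at ~15
routines each there are on the order of 15,000 distinct procedures
that a long-lived connection can end up caching, so a ~50k block total
across <20 connections seems plausible if the per-connection SP caches
just keep accumulating routines.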

Prepared statement snippet:
==2281== 61,809,024 bytes in 318 blocks are still reachable in loss record
396 of 399
==2281==    at 0x401A812: malloc (vg_replace_malloc.c:149)
==2281==    by 0x8384721: my_malloc (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x8384DDB: alloc_root (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81B296B: openfrm(THD*, char const*, char const*,
unsigned, unsigned, unsigned, st_table*) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81B0418: open_unireg_entry(THD*, st_table*, char const*,
char const*, char const*, st_table_list*, st_mem_root*) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81AC225: open_table(THD*, st_table_list*, st_mem_root*,
bool*, unsigned) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81AD1D2: open_tables(THD*, st_table_list**, unsigned*,
unsigned) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81AD5DD: open_and_lock_tables(THD*, st_table_list*) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81DAC3B: mysql_test_select(Prepared_statement*,
st_table_list*, bool) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81DA01F: check_prepared_statement(Prepared_statement*,
bool) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81D950D: Prepared_statement::prepare(char const*,
unsigned) (in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)
==2281==    by 0x81D7DF7: mysql_stmt_prepare(THD*, char const*, unsigned)
(in
/local/apollo/package/local_1/Generic/A2ZEB-SDS-MySQL/A2ZEB-SDS-MySQL-2.0-0/mysql/bin/mysqld)

Bigger than I would have guessed, but in only 318 blocks.

Another interesting point is that the total memory reported as still
reachable by valgrind was only ~730MB.
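
For comparison, the prepared statement record above works out to
61,809,024 / 318 = ~190K per block, so those look more like whole
table buffers from openfrm/alloc_root than lots of little chunks.
And if only ~730MB is still reachable while the process is pushing 3G,
roughly 2.3G of address space is tied up in heap overhead or free
holes rather than in live data.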

My current theory is that the large number of small mallocs from the
stored procedure code is causing significant heap fragmentation, which
in turn is making the address space go bye-bye.
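
As a toy illustration of the theory (standalone C, nothing to do with
the mysqld code; mallinfo() is glibc-specific), interleaving many
small long-lived allocations with larger ones that are freed later
leaves the heap at its high-water mark even though almost nothing is
still in use:

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

#define ROUNDS 50000

static void report(const char *when)
{
  /* mallinfo() is glibc-specific; arena is the heap size obtained
     from the OS, uordblks the bytes actually allocated right now. */
  struct mallinfo mi = mallinfo();
  printf("%-20s heap=%d bytes, in use=%d bytes\n",
         when, mi.arena, mi.uordblks);
}

int main(void)
{
  static void *small_blk[ROUNDS], *large_blk[ROUNDS];

  /* Interleave long-lived small blocks (think cached parse
     structures) with larger blocks freed later (think per-statement
     work memory). */
  for (int i = 0; i < ROUNDS; i++) {
    small_blk[i] = malloc(100);
    large_blk[i] = malloc(8 * 1024);
  }
  report("after allocating:");

  /* Free only the large blocks.  The holes they leave are separated
     by live small blocks, so they cannot be coalesced or returned to
     the OS: the heap stays around 400MB with only a few MB in use. */
  for (int i = 0; i < ROUNDS; i++)
    free(large_blk[i]);
  report("after freeing 8K:");

  for (int i = 0; i < ROUNDS; i++)
    free(small_blk[i]);
  return 0;
}

That looks a lot like what valgrind is showing me: only ~730MB live,
but the process itself sitting near the 3G ceiling.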

I have not yet tried the latest 5.0.33 build; this problem may already
be resolved there.

So... Questions:
- Is this behavior expected/known?
- Is my theory the likely cause?
- How can I fix it?

Thanks in advance and for making such an awesome db!
  - Sam

Thread
  Unbounded memory growth in 5.0.24/27 (Samuel Ziegler, 25 Jan)
  • Re: Unbounded memory growth in 5.0.24/27 (Dmitri Lenev, 25 Jan)
    • Re: Unbounded memory growth in 5.0.24/27 (Samuel Ziegler, 25 Jan)
      • Re: Unbounded memory growth in 5.0.24/27 (Dmitri Lenev, 30 Jan)
        • Re: Unbounded memory growth in 5.0.24/27 (Samuel Ziegler, 1 Feb)