From: Dan Trainor   Date: May 11 2006 9:20am
Subject: Questions about InnoDB, innodb_buffer_pool_size and friends
Hello, all -

It's been a while since I've posted here, but I read the list daily. 
Thanks for all the help.

What I'm dealing with here is memory problems using MySQL 5.0.19 under 
FreeBSD.  Although I've enabled allocation of more memory per-process, 
as described by the FreeBSD notes for 5.0.x, I'm still seeing problems.

So I've done some research and think I've come up with a solution. 
First, I'll provide some details about my current setup.  Since I'm 
still relatively new to MySQL, and have even less experience with 
InnoDB, I'm hoping to receive some criticism of my configuration.  That 
would be fantastic.


# InnoDB settings
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = 
innodb_buffer_pool_size = 1600M
innodb_additional_mem_pool_size = 180M
query_cache_type = 1
query_cache_size = 40M

We're trying to make our database 'hot', and stick as much of it as we 
can, into memory.  I see a few problems with the current configuration 
(hopefully others can see more problems than I do):

I don't have an innodb_log_file_size setting in there.  I think that I 
would benefit from setting this one, because the default is only 5M.  I 
believe that our bottleneck has a lot to do with disk I/O as well, so I 
think bringing this up substantially would help.
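For example, something along these lines in my.cnf (the sizes here are 
illustrative guesses, not tuned recommendations for this workload):

```ini
# Hypothetical values for illustration only -- tune to the workload.
innodb_log_file_size = 256M     # larger redo logs mean fewer checkpoint-driven flushes
innodb_log_buffer_size = 8M     # buffers redo writes between log flushes
```

Note that changing innodb_log_file_size requires a clean shutdown and 
moving the old ib_logfile* files out of the way before restarting, so 
InnoDB can create logs of the new size.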

If we have four ibdataN files of 500M each, that's roughly 2G of data 
files, so there's no way we can make this database 'hot', especially 
considering that this is a 32-bit platform: the 1600M buffer pool, the 
180M additional mem pool, the query cache, and per-connection buffers 
all have to fit into a per-process address space of only a few GB.

Now, where do I go from here?  One idea that has come to mind is going 
back to three data files.  I guess this process would involve dumping 
the data, as described by the documentation, adjusting the data file 
sizes accordingly, and re-importing the data.  I believe this can be 
done because the Comment field of 'SHOW TABLE STATUS;' says "InnoDB 
free: 580608 kB".  Reading that, I take it to mean that I have at least 
one data file's worth of space that is allocated on disk but not 
actually in use, which a dump-and-reload should reclaim.  I believe I 
can safely do this and then let the data files grow as needed via 
innodb_autoextend_increment.  Would this improve our chances of making 
the database completely 'hot'?
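A rough sketch of that dump-and-reload procedure follows.  The paths, 
rc script name, and file sizes are hypothetical; check the InnoDB 
documentation for the exact steps before running anything like this:

```shell
# 1. Dump everything to a file (hypothetical path).
mysqldump --all-databases > /backup/full.sql

# 2. Stop the server cleanly.
/usr/local/etc/rc.d/mysql-server stop

# 3. Move the old shared tablespace and log files aside.
mv /usr/local/mysql/data/ibdata* /backup/
mv /usr/local/mysql/data/ib_logfile* /backup/

# 4. Edit my.cnf: e.g. three 500M files, the last one autoextending.
#    innodb_data_file_path = ibdata1:500M;ibdata2:500M;ibdata3:500M:autoextend
#    innodb_autoextend_increment = 64

# 5. Restart (InnoDB creates fresh tablespace and log files) and reload.
/usr/local/etc/rc.d/mysql-server start
mysql < /backup/full.sql
```

The autoextend attribute on the last data file, together with 
innodb_autoextend_increment, lets the tablespace grow in fixed steps 
instead of requiring more fixed-size files up front.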

Another idea would be to dump the 32-bit platform in favor of a 64-bit 
one and just throw more memory at it.  But whose employer would be fond 
of that?  ;)

Again, I appreciate your patience as you review this post, and thank you 
for taking the time to read it.  I understand that it's a long one, but 
I can only hope that your responses will benefit not only me, but also 
others who might have questions about the way InnoDB actually works.
