correct. mysqldump by default has --lock-tables enabled, which means it
tries to lock all tables to be dumped before starting the dump. Doing
LOCK TABLES t1, t2, ... for a really big number of tables will inevitably
exhaust all available file descriptors, as LOCK TABLES needs all of those
tables to be open.
Workarounds: --skip-lock-tables disables this locking completely.
Alternatively, --lock-all-tables makes mysqldump use FLUSH TABLES
WITH READ LOCK, which locks all tables in all databases without opening
them. In this case mysqldump automatically disables --lock-tables,
because it makes no sense when --lock-all-tables is used. Or try adding
--single-transaction to your mysqldump command.
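A minimal sketch of each workaround, assuming the database from the error
message (ssconsole) and a client already configured with the needed
credentials:

```shell
# 1) Skip table locking entirely (fastest, but no cross-table consistency):
mysqldump --skip-lock-tables ssconsole > ssconsole.sql

# 2) One global FLUSH TABLES WITH READ LOCK instead of opening every table:
mysqldump --lock-all-tables ssconsole > ssconsole.sql

# 3) A consistent snapshot without any table locks (InnoDB tables):
mysqldump --single-transaction ssconsole > ssconsole.sql
```

Note that --single-transaction only guarantees a consistent dump for
transactional engines such as InnoDB; MyISAM tables are still dumped
without a lock.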
On Fri, Sep 23, 2011 at 9:49 AM, Dan Nelson <dnelson@stripped> wrote:
> In the last episode (Sep 23), Shafi AHMED said:
> > I have a mysql database of 200G size and the backup fails due to the
> > issue below:
> > mysqldump: Got error: 1017: Can't find file:
> > './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
> > Can someone assist pls.?
> $ perror 24
> OS error code 24: Too many open files
> You need to bump up the max files limit in your OS. It may be defaulting to
> a small number like 1024. If you can't change that limit, edit your my.cnf
> and lower the table_open_cache number. You'll lose performance though,
> since mysql will have to stop accessing some tables to open others.
> Dan Nelson
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
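As a quick sanity check of the advice above (the limit values and config
paths below are examples; actual limits vary by OS):

```shell
# errno 24 is EMFILE: the process ran out of open file descriptors.
ulimit -n         # current per-process soft limit for this shell
ulimit -Hn        # hard limit; the soft limit can be raised up to this
# Raise it for the current session:
#   ulimit -n 65535
# Raise it permanently for the mysql user, in /etc/security/limits.conf:
#   mysql  soft  nofile  65535
#   mysql  hard  nofile  65535
# Or, as Dan suggests, lower table_open_cache in my.cnf under [mysqld]:
#   table_open_cache = 400
```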
My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat