Creating a table in a cluster SQL node is an expensive operation, even
if the storage engine is not ndbcluster. The process creating the
table(s) needs to acquire the global cluster schema lock, and that
takes time. In my experience it is better to avoid creating
(temporary) tables in time-critical processes.
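One way to keep the schema lock out of the hot path is to run the DDL once at deploy time and use plain DML per job run. A rough sketch (table and column names are invented; note that in NDB, TRUNCATE is itself DDL, so DELETE is used instead):

```sql
-- One-time setup, outside the time-critical path:
CREATE TABLE IF NOT EXISTS result_table (
    pk_col INT NOT NULL PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=ndbcluster;

-- Per job run: DML only, no global schema lock needed.
DELETE FROM result_table;
REPLACE INTO result_table SELECT ...;  -- the job's usual load
```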
Raymond Peachey wrote:
> No, there was no ALTER TABLE done recently. The job that runs, however,
> does create tables, and does an INSERT ... SELECT from the change
> tracking table and the master tables into the result tables. However, NDB
> uses READ COMMITTED with row-based logging, so all reads should be
> consistent, non-locking reads (correct me if I'm wrong; perhaps it's
> different with NDB?).
> We are currently running version 5.5.29-ndb-7.2.10.
> There are a lot of queries against INFORMATION_SCHEMA to determine
> primary keys etc. for the change tracking tables. The main culprit here
> is probably the query that joins the trigger tables to the master
> tables on each job run:
> REPLACE INTO result_table
> SELECT <column_list>
> FROM db.master_table AS P LEFT OUTER JOIN db.trigger_table AS CT
>   ON P.pk_col = CT.pk_col
> WHERE timestamp_column > 'datetime';
> The number of records varies, but these jobs run every 5 minutes, so it
> is typically only ~500 records per table, about ~5,000 per job run in
> total. There is also a little parallelism: two jobs run simultaneously
> (against two separate databases) to produce those 5,000 records.
> I could try removing the frm files and see if that helps. Additionally, I
> will look into increasing the execution threads today to see if that helps.
> On Thu, Oct 10, 2013 at 1:32 PM, Johan Andersson
>> Did you recently do an alter table on TABLE_XYZ and is it the same table
>> you run the query on?
>> Not sure what that error message means "NDB online alter table commit:
>> distributing TABLE_XYZ timed out. Ignoring..."
>> But usually when I have problems like this, I stop the mysql server,
>> remove all the .frm files belonging to NDB tables, and start the server
>> again. That way the server fetches fresh table metadata from the
>> dictionary in the data nodes.
>> Run the query again, does it work?
>> What does the query look like? Is the query trying to acquire locks?
>> If you are not locking any records then the 233 error is super strange
>> (have you upgraded to the latest 7.2.12 ?).
>> How many records are you trying to read?
>> Best regards
>> On Thu, Oct 10, 2013 at 6:47 PM, Raymond Peachey
>>> In the error log, I see this:
>>> 131009 20:02:53 [ERROR] NDB online alter table commit: distributing
>>> TABLE_XYZ timed out. Ignoring...
>>> 131009 20:03:52 [ERROR] Got error 233 when reading table
>>> I see the first error a lot -- the second error I see specifically during
>>> the trouble periods. 233, I think, has to do with reaching a maximum
>>> limit on concurrent operations.
>>> We do not have TransactionDeadlockDetectionTimeout defined, which means
>>> it should be at its default of 1200 ms.
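For reference, NDB error 233 is "Out of operation records in transaction coordinator (increase MaxNoOfConcurrentOperations)", and both that limit and the deadlock-detection timeout are data-node parameters set in config.ini on the management server, not mysqld variables. A fragment (values are illustrative, not recommendations):

```ini
# config.ini on the management server
[ndbd default]
MaxNoOfConcurrentOperations = 65536          ; illustrative; default is 32768
TransactionDeadlockDetectionTimeout = 1200   ; default, in milliseconds
```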
>>> SHOW GLOBAL VARIABLES LIKE '%timeout%' ;
>>> # Variable_name, Value
>>> connect_timeout, 10
>>> delayed_insert_timeout, 300
>>> innodb_lock_wait_timeout, 50
>>> innodb_rollback_on_timeout, OFF
>>> interactive_timeout, 28800
>>> lock_wait_timeout, 31536000
>>> net_read_timeout, 30
>>> net_write_timeout, 60
>>> slave_net_timeout, 3600
>>> wait_timeout, 28800
>>> Additionally, here's other settings we have configured, that I think may
>>> On Thu, Oct 10, 2013 at 11:38 AM, Johan Andersson <johan@stripped
>>> MySQL Cluster Mailing List
>>> For list archives: http://lists.mysql.com/cluster
>>> To unsubscribe: http://lists.mysql.com/cluster