Know that indexes are good for SELECTs, but very bad for massive
INSERT, UPDATE and DELETE operations.
If you want to do a massive delete with a cron job, it's better to:
* select the rows to delete (using the indexes)
* drop the indexes
* delete the rows (already marked in the first step)
* recreate the indexes
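A sketch of those steps in SQL, assuming a MyISAM table `log` with a secondary index `idx_created` on a `created` column (the table, column and index names are hypothetical):

```sql
-- 1. Mark the rows to delete while the index still exists
CREATE TABLE to_delete AS
  SELECT id FROM log WHERE created < '2005-01-01';

-- 2. Drop the secondary index so the delete doesn't have to maintain it
ALTER TABLE log DROP INDEX idx_created;

-- 3. Delete the marked rows (multi-table DELETE)
DELETE log FROM log JOIN to_delete USING (id);

-- 4. Rebuild the index in one pass, then clean up
ALTER TABLE log ADD INDEX idx_created (created);
DROP TABLE to_delete;
```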
Another way, if you want to delete a big percentage of your table, is to
copy the staying records, drop the table and recreate it with those records.
Then recreate the indexes.
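Sketched in SQL, again with the hypothetical `log` table from above:

```sql
-- Keep only the rows that should survive
CREATE TABLE log_new AS
  SELECT * FROM log WHERE created >= '2005-01-01';

-- Swap the tables, then rebuild the indexes on the new one
DROP TABLE log;
RENAME TABLE log_new TO log;
ALTER TABLE log ADD INDEX idx_created (created);
```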
I assume that you're not in a massive transactional situation, and maybe
using MyISAM storage. If not, SHOW PROCESSLIST may help you track whether
internal temporary tables are being used, etc.
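For instance, you can watch the State column for threads stuck copying to temporary tables or waiting on locks:

```sql
-- State values like 'Copying to tmp table on disk' or 'Locked'
-- point at temporary-table or table-locking trouble
SHOW FULL PROCESSLIST;
```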
From: Jigal van Hemert [mailto:jigal@stripped]
Sent: dimanche 24 avril 2005 17:05
To: Almar van Pel; mysql@stripped
Subject: Re: Performance issues when deleting and reading on large table
From: "Almar van Pel"
> After some time (sometimes a week, sometimes a month) it appears that
> the index of the table gets stuck.
> It tries to read from the table but does not get a response. This causes
> the connection queue to fill up
> and the load on the system increases dramatically. In other words,
> until I do an OPTIMIZE TABLE, the system
> hangs. Most of the times you see that the index is about 20 MB off.
> When I do CHECK TABLE (before optimizing) there are no errors.
> Is there any way to see this problem coming, so I can outrun it? (Without
> having to schedule OPTIMIZE, which = downtime, every week..)
You should run OPTIMIZE TABLE regularly (once a week or so) in some
maintenance window. It's probably a case of not having the cardinality of
the indexes right, thus making wrong decisions for queries.
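A minimal example of such a maintenance job (the table name is hypothetical):

```sql
-- Rebuilds the data file and sorts the index pages
OPTIMIZE TABLE log;

-- Cheaper alternative: only refreshes index key distribution,
-- which is what the optimizer uses to pick indexes
ANALYZE TABLE log;
```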
> Trying to get the previous table clean, I created some jobs deleting
> records. When I delete a lot of records in one job
> (+/- 10 to 15,000 records), the system also nearly hangs. The load
> increases dramatically. I tried every trick in the book, but cannot
> figure out why this action is so heavy for the system.
Deleting a lot of records will have an impact on the indexes, so it's quite
a heavy job. The inserts/updates/deletes will also block the table for reading
in the case of MyISAM.
Such a large table in a high concurrency situation (many writes and many
reads) can be a job for InnoDB tables.
They seem slow for small tables, but have the tendency to keep the same
speed for large tables, while MyISAM will probably get slower the bigger the
table is under these circumstances.
If you can use the PRIMARY index in a query and keep the PRIMARY index as
short as possible, InnoDB can be a very fast table handler.
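A sketch of converting such a table to InnoDB (the table and column names are hypothetical):

```sql
-- A short integer PRIMARY KEY keeps InnoDB's clustered index compact
ALTER TABLE log ENGINE=InnoDB;

-- Queries that filter on the primary key read straight
-- from the clustered index
SELECT * FROM log WHERE id BETWEEN 1000 AND 2000;
```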
Depending on the size of the resulting record sets, your system must have
enough memory to handle it. Otherwise a lot of temporary tables will end up
on disk (slow) and indexes cannot be loaded into memory either (slow).
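You can check how often temporary tables spill to disk by comparing two server status counters:

```sql
-- If Created_tmp_disk_tables grows fast relative to
-- Created_tmp_tables, consider raising tmp_table_size
-- or adding RAM
SHOW STATUS LIKE 'Created_tmp%';
```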
Running large databases is sometimes a bit of a challenge: finding the slow
queries, setting up the right index(es), etc.
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=1