>>>>> "MV" == Michael Villalba <mjv@stripped> writes:
MV> Thanks, Fred. The delete job has been running for 21 hours
MV> at this point, and now rather simple queries on other tables hang
MV> (e.g. SELECT COUNT(*) FROM other_table WHERE index=some_value).
MV> I'll let it run overnight to see what happens, but I'm not hopeful.
This is a major issue for us as well. We have some tables that need
periodic cleaning of old data, and that can take hours upon hours.
Our workaround, which is IMHO sub-optimal, is to use the LIMIT
feature of MySQL's DELETE and issue the delete in a loop until no
more rows are deleted. Something like this, in Perl:
my $limit = 200;   # batch size
my $sth = $dbh->prepare("DELETE LOW_PRIORITY FROM playlog $clause LIMIT ?");
my ($sleeptime, $affectedRows) = (0, 0);
do {
    if ($sleeptime) {
        print "Sleeping... $sleeptime seconds\n";
        sleep $sleeptime;
    }
    my $stime = time;
    print scalar localtime $stime, " (start delete)\n";
    $sth->execute($limit);
    $affectedRows = $sth->rows();
    my $etime = time;
    print scalar localtime $etime, ": ";
    my $diff = $etime - $stime;
    print "Deleted $affectedRows rows... $diff seconds\n";
    # sleep half the time it took to delete, if longer than 30 seconds.
    $sleeptime = $diff > 30 ? int($diff/2) : 1;
} until ($affectedRows < $limit);
Assuming $clause = 'WHERE .....' matches the rows you want to delete,
this deletes them in batches of $limit (I use 200 rows), sleeping
between batches so other threads can do their work without too much
delay.
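For anyone who wants to try the same batched-delete pattern outside
Perl, here is a minimal sketch in Python using the standard-library
sqlite3 module. The table name (playlog), column names, and the
batched_delete helper are made up for illustration; note that stock
SQLite lacks DELETE ... LIMIT, so this version batches via a rowid
subquery instead, which is an adaptation, not MySQL syntax:

```python
import sqlite3
import time

def batched_delete(conn, clause, limit=200, slow=30, min_sleep=1):
    """Delete rows matching `clause` in batches of `limit`,
    sleeping between batches so other connections get a turn.
    `clause` is a 'WHERE ...' fragment, as in the Perl version."""
    sleeptime = 0
    while True:
        if sleeptime:
            time.sleep(sleeptime)
        start = time.time()
        # SQLite has no DELETE ... LIMIT by default, so batch
        # through a rowid subquery instead.
        cur = conn.execute(
            "DELETE FROM playlog WHERE rowid IN "
            f"(SELECT rowid FROM playlog {clause} LIMIT ?)",
            (limit,))
        conn.commit()
        affected = cur.rowcount
        diff = time.time() - start
        # sleep half the time the batch took, if it took long
        sleeptime = int(diff / 2) if diff > slow else min_sleep
        if affected < limit:
            break

# Example: purge old entries in batches of 100.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE playlog (id INTEGER, ts INTEGER)")
conn.executemany("INSERT INTO playlog VALUES (?, ?)",
                 [(i, i) for i in range(500)])
conn.commit()
batched_delete(conn, "WHERE ts < 450", limit=100, min_sleep=0)
remaining = conn.execute("SELECT COUNT(*) FROM playlog").fetchone()[0]
```

The min_sleep knob is just there so a fast run does not force a
one-second pause per batch; the Perl version above always sleeps at
least one second, which is friendlier on a busy production server.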
Ideally, MySQL would allow the delete to run concurrently with other
queries, so the rest of the server would not grind to a halt.
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: khera@stripped Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/