After a power outage, a large table (~2 GiB, ~23 million rows) got
corrupted. Only a few rows were actually affected, so nobody noticed
for a few weeks. A repair fixed the problem.
We are now considering running CHECK TABLE regularly (every 15
minutes?) and plugging it into our monitoring system, so that if
CHECK TABLE reports an error, our script beeps/emails someone.
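For what it's worth, here is a rough sketch of the alerting logic we have in mind. It only shows the decision step: it assumes the rows have already been fetched from a CHECK TABLE statement (via whatever MySQL client library), in the server's usual (Table, Op, Msg_type, Msg_text) column layout, and the actual beep/email hook is left out.

```python
def needs_alert(rows):
    """Return True if any CHECK TABLE result row is not a clean 'OK'.

    Each row is a (table, op, msg_type, msg_text) tuple, as returned
    by CHECK TABLE. A healthy table yields a single row whose
    Msg_type is 'status' and Msg_text is 'OK'; anything else
    (an 'error' row, or a non-OK status) should trigger the alert.
    """
    for table, op, msg_type, msg_text in rows:
        if msg_type.lower() == "error":
            return True
        if msg_type.lower() == "status" and msg_text != "OK":
            return True
    return False

# Example inputs, shaped like the server's output:
healthy = [("mydb.big_table", "check", "status", "OK")]
corrupt = [
    ("mydb.big_table", "check", "error",
     "Incorrect key file for table 'big_table'; try to repair it"),
    ("mydb.big_table", "check", "error", "Corrupt"),
]
```

The idea is that a cron job would run CHECK TABLE, feed the rows through this, and page someone when it returns True.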
My question is this: is it okay to run CHECK TABLE that frequently?
What impact will it have on a production system? How long will it take
on very large tables? Is there a better alternative?