I'm trying to wrap my head around a performance problem our institution is
having on the MySQL server that hosts all of our logs from around campus,
specifically our MailLogs tables.
What I'm having a hard time with is a table structured like this:
CREATE TABLE mail00 (
  host varchar(32) default NULL,
  fromMTA varchar(44) default NULL,
  nextMTA varchar(44) default NULL,
  messageID varchar(44) default NULL,
  messageID2 varchar(44) default NULL,
  sender varchar(80) default NULL,
  recipient varchar(120) default NULL,
  recipient2 varchar(120) default NULL,
  date date default NULL,
  time time default NULL,
  program varchar(44) default NULL,
  ACTION varchar(44) default NULL,
  detail varchar(120) default NULL,
  seq int(10) unsigned NOT NULL auto_increment,
  PRIMARY KEY (seq),
  KEY host (host),
  KEY sender (sender),
  KEY recipient (recipient),
  KEY MessageID (messageID),
  KEY seq (seq),
  KEY time (time),
  KEY date (date),
  KEY ACTION (ACTION),
  KEY messageID2 (messageID2),
  KEY fromMTA (fromMTA)
) TYPE=MyISAM MAX_ROWS=99000000;
We might end up storing two to three gigs of logging data per day from our
mail servers.
When we had a process set up to purge data older than 14 days out of this
table, it would lock the table and take a great deal of time to process the
request.
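For context, one way a purge like that is often made less disruptive is to delete in small batches, so each statement holds the table lock only briefly. A sketch, assuming the purge filters on the indexed date column (the batch size of 10000 is arbitrary):

```sql
-- Delete expired rows a chunk at a time; rerun until 0 rows are affected,
-- so no single statement holds the MyISAM table lock for long.
DELETE FROM mail00
 WHERE date < DATE_SUB(CURDATE(), INTERVAL 14 DAY)
 LIMIT 10000;

-- Afterwards, reclaim the holes left in the MyISAM data file:
OPTIMIZE TABLE mail00;
```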
My question is: is the structure of my table the problem, or is data of this
size simply going to take that long to purge due to disk I/O?
This isn't a table structure that I came up with; I'm just looking to
optimize the performance of the server.
Currently, around midnight, the sysadmin drops table 13, then moves
12 -> 13, 11 -> 12, etc., and finally creates a new table 00.
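The rotation above can be done as one atomic rename, which avoids any window where a table name is missing between the individual moves. A sketch of the same 14-table scheme:

```sql
-- Drop the oldest day, shift every remaining table down in a single
-- atomic statement, then recreate an empty mail00 with the same definition.
DROP TABLE mail13;
RENAME TABLE mail12 TO mail13, mail11 TO mail12, mail10 TO mail11,
             mail09 TO mail10, mail08 TO mail09, mail07 TO mail08,
             mail06 TO mail07, mail05 TO mail06, mail04 TO mail05,
             mail03 TO mail04, mail02 TO mail03, mail01 TO mail02,
             mail00 TO mail01;
CREATE TABLE mail00 LIKE mail01;
```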
All of this is because it takes too long to purge out one day's worth of
data. It will eventually cause a problem when we develop a program to scan
the logs: it will need to scan through all 14 tables instead of just one.
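One way the 14 daily tables could still be scanned as a single table is MyISAM's MERGE storage engine, which presents identical underlying tables under one name. A sketch, assuming all daily tables share the mail00 definition (mail_all is a name I've made up), so the UNION list stays valid across nightly rotations because the same names are reused:

```sql
-- Hypothetical MERGE table over the 14 daily tables.
-- All underlying tables must be identical MyISAM tables.
CREATE TABLE mail_all LIKE mail00;
ALTER TABLE mail_all ENGINE=MERGE
  UNION=(mail00, mail01, mail02, mail03, mail04, mail05, mail06,
         mail07, mail08, mail09, mail10, mail11, mail12, mail13);
```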
Is there a better approach that balances performance and flexibility?
Or just a better way in general?