> > I have a curious issue here, maybe someone can help.
> > I have a single process that inserts data into tables that contain
> > purely logging information. This table is then searched by our Care
> > department to troubleshoot issues. I am looking for the best way to
> > store this data, and the structure on the backend.
> > There are 50 million inserts into table LOG a day. The primary index
> > on the table is seconds from 1971. I only need to keep 60 days' worth
> > of data, and the table is only used for read purposes. These are my
> > design criteria, but my problem is how to delete old data without
> > crashing the log writer that is attached to the table.
> OK, how about this:
> Use MyISAM and MERGE tables. Keep one table per day. E.g.:
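The daily table itself was not shown in the post; a minimal sketch might look like this, assuming a schema of an epoch-seconds key plus a message column (names and columns are guesses, not from the original):

```sql
-- One MyISAM table per day; table name and columns are hypothetical,
-- since the original post did not show the daily schema.
CREATE TABLE log_2004_06_18 (
    log_time INT NOT NULL,      -- seconds since the epoch (the poster's primary index)
    message  VARCHAR(255),
    KEY (log_time)
) TYPE=MyISAM;
```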
> Use MERGE tables to give you the 60-day (and perhaps also 14-day,
> 30-day, or 7-day) read views that you need, like so:
> CREATE TABLE log_view_7day (
>     log_time INT NOT NULL,    -- hypothetical columns; must match the daily tables
>     message  VARCHAR(255),
>     KEY (log_time)
> ) TYPE=MERGE UNION=(log_day1, log_day2, log_day3, log_day4,
>                     log_day5, log_day6, log_day7);
> You can then do all of your inserts from the log writer into the "today"
> table, and do your reads against the various MERGEs.
> Every day at exactly midnight, you would use ALTER TABLE (which is
> atomic) to redefine the UNION of the MERGE definition of the various
> read views.
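The midnight rotation described in the quote above could look like this, continuing the hypothetical per-day table names (none of these names appear in the original post):

```sql
-- Atomically repoint the 7-day view at the new window of daily tables,
-- then drop the table that has aged out of the 60-day retention period.
ALTER TABLE log_view_7day
    UNION=(log_2004_06_13, log_2004_06_14, log_2004_06_15,
           log_2004_06_16, log_2004_06_17, log_2004_06_18,
           log_2004_06_19);
DROP TABLE log_2004_04_20;  -- day 61, no longer referenced by any MERGE view
```

The 60-day, 30-day, and 14-day views would be re-pointed the same way in the same maintenance window; since the log writer only ever inserts into the "today" table, it is never blocked by the rotation.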
Modifying metadata because you need a different view of your data?
Am I the only one to whom this sounds ugly?