Jacek Becla wrote:
> Is there a way to dynamically split a big table
> into n smaller tables with a single scan of
> the table being split? Here is more detail:
> * Suppose I have a million-row MyISAM table X with
> a relatively small number of columns. It has
> a column "chunkId" with values between 1 and 100.
> * I need to split this table into 100 in-memory
> tables, essentially I need to do:
> INSERT INTO X001 SELECT * FROM X WHERE chunkId=1;
> INSERT INTO X002 SELECT * FROM X WHERE chunkId=2;
> INSERT INTO X003 SELECT * FROM X WHERE chunkId=3;
> and so on.
> Based on my tests, each of these individual INSERTs
> costs ~50% of a full table scan of X, even with
> a clustered index on chunkId. The cost is totally
> dominated by the "SELECT * FROM X WHERE chunkId=<n>".
> Since we need to do such a split many times, this
> performance is not acceptable. It feels like it should
> be possible to do the split with a single scan.
> Any suggestions?
Have you looked into MySQL partitioning? If you're using version 5.1,
it might really help: just partition the big table on chunkId.
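
For what it's worth, here is a rough sketch of what that could look like.
It's only an illustration, not something I've run against your data: the
table name X_part and the id/payload columns are placeholders, and it
assumes MySQL 5.1+, where any unique key on a partitioned table must
include the partitioning column.

CREATE TABLE X_part (
  id      INT NOT NULL,
  chunkId INT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id, chunkId)   -- chunkId must be part of every unique key
) ENGINE=MyISAM
PARTITION BY RANGE (chunkId) (
  PARTITION p001 VALUES LESS THAN (2),
  PARTITION p002 VALUES LESS THAN (3),
  -- ... one partition per chunkId value ...
  PARTITION p100 VALUES LESS THAN (101)
);

-- Loading is then a single scan of X; rows are routed to their partitions:
INSERT INTO X_part SELECT * FROM X;

-- A query on one chunk prunes down to a single partition:
SELECT * FROM X_part WHERE chunkId = 42;

That way you pay for one scan when loading, and per-chunk reads only touch
the relevant partition instead of re-scanning the whole table.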