> Stop indexing a couple columns? :)
Yah - thanks :) There are only 5 columns in the table. Several need to be
indexed. There are over 30 million rows in the database, and we want to pull
in some old legacy data.
The MySQL team can do one of a few things:
1) Turn MERGE tables into real partitions (or add partitioning). Data would
be split based on one or more columns; the database would decide where rows
are inserted (and how they move when updated), and which partitions to read
on select. Indexes could be built per partition. This would also provide a nice
2) Create tablespaces and datafiles (unlikely).
3) Split up the indexes with the RAID option, the same way the tables are
split up.
This means we'll probably end up using InnoDB.
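In the meantime, something close to option 1 can be approximated by hand with
a MERGE table over per-range MyISAM tables -- a sketch only, with hypothetical
table and column names, and note that MERGE does not route rows by column
value, so the split must be maintained by the application:

```sql
-- Hypothetical layout: one MyISAM table per year, each with its own indexes.
CREATE TABLE orders_2002 (
    id      INT NOT NULL,
    cust_id INT NOT NULL,
    placed  DATE NOT NULL,
    INDEX (cust_id),
    INDEX (placed)
) TYPE=MyISAM;

CREATE TABLE orders_2003 (
    id      INT NOT NULL,
    cust_id INT NOT NULL,
    placed  DATE NOT NULL,
    INDEX (cust_id),
    INDEX (placed)
) TYPE=MyISAM;

-- MERGE table spanning both. INSERT_METHOD=LAST (MySQL 4.0+) sends new
-- rows to the last table in the UNION list; older versions must insert
-- into the underlying tables directly.
CREATE TABLE orders (
    id      INT NOT NULL,
    cust_id INT NOT NULL,
    placed  DATE NOT NULL,
    INDEX (cust_id),
    INDEX (placed)
) TYPE=MERGE UNION=(orders_2002, orders_2003) INSERT_METHOD=LAST;
```

Selects against `orders` scan all underlying tables, so this mostly helps
with file sizes and index maintenance, not query pruning.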
----- Original Message -----
From: "Dan Nelson" <dnelson@stripped>
To: "Keith C. Ivey" <keith@stripped>
Cc: <mysql@stripped>; "David Griffiths" <dgriffiths@stripped>
Sent: Friday, April 25, 2003 4:16 PM
Subject: Re: Index File Size
> In the last episode (Apr 25), Keith C. Ivey said:
> > On 25 Apr 2003 at 15:31, David Griffiths wrote:
> > > We've used the Raid option with MySQL-Max to split this large table
> > > up; I haven't come across anything to split up the indexes....
> > >
> > > Any suggestions?
> Not really. Stop indexing a couple columns? :) As long as your OS can
> handle files over 2gb there shouldn't really be any problems with an
> index that big. Same for the data also. RAID was really just a hack
> for Linux 2.2 kernels. You'll save a lot of file pointers by just
> creating one large table if you can.
> > You might consider merge tables, especially if you're not always
> > using all the data -- for example, if you're often only interested in
> > the last few months' worth.
> Mysql currently can't optimize MERGE tables very well at all, though,
> so they're not as useful as you might think. Maybe in 4.1 they will be
> implemented with the subquery code and get optimized better.
> Dan Nelson