On 3/14/06, Martijn Tonies <m.tonies@stripped> wrote:
> Hello Paul,
> I suggest you reply to the mailinglist :-) ...
> > The developer insists that for scalability issues, this was the
> > answer. It is likely, for example in my deployment, that these tables
> > would see upwards of 10 million records or more.
> Well, if there are problems with scalability, I guess you could
> split it up into a few (not 1600) tables and have them available
> on different physical hard drives...
As an example:
There was a single table called event. It has now been broken up so
that for every sensor, and for every day, there is a separate table.
So if I have 20 sensors, every day I will get 20 new tables.
With this in mind, does this design make sense? How will it scale?
Is there anything I can do through configuration (I doubt the
developer will change the design) to speed things up, or a workaround
that I could do on my end to compensate?
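To make the scheme concrete, here is a minimal sketch of what that layout
implies, using Python's sqlite3 as a stand-in for MySQL and hypothetical
table names like event_sensor1_20060314 (the real naming convention wasn't
shown). It also shows why a "simple" total count gets expensive: every
per-sensor, per-day table has to be opened and scanned.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical sensors and days -- with 20 sensors this adds
# 20 new tables every day, reaching ~1600 tables in a few months.
sensors = ["sensor1", "sensor2"]
days = ["20060313", "20060314"]

# One table per sensor per day, one sample row in each.
for s in sensors:
    for d in days:
        conn.execute(f"CREATE TABLE event_{s}_{d} (id INTEGER, payload TEXT)")
        conn.execute(f"INSERT INTO event_{s}_{d} VALUES (1, 'x')")

# A total COUNT(*) is no longer one query: it must touch every table.
total = 0
for s in sensors:
    for d in days:
        total += conn.execute(f"SELECT COUNT(*) FROM event_{s}_{d}").fetchone()[0]

print(total)  # prints 4 (2 sensors x 2 days x 1 row)
```

With 1600 real tables that loop becomes 1600 table opens and scans per
query, which is consistent with the count(*) times quoted earlier.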
> But -> why try to fix something that ain't broken (yet)?
> Were you experiencing problems already? If the application
> is fast WITHOUT merge tables, why bother?
> Martijn Tonies
> Database Workbench - development tool for MySQL, and more!
> Upscene Productions
> My thoughts:
> Database development questions? Check the forum!
> > >
> > > > One of the databases I use just switched to using merge tables and
> > > > my queries are painfully slow. One table, initially had about 2.5
> > > > million records and now with the change this information is spread
> > > > across about 1600 tables. A simple query, say SELECT COUNT(*), has gone
> > > > from .04 seconds to about 30 seconds, sometimes even longer.
> > >
> > > Why on earth would you spread this information across 1600 (!!!)
> > > tables? That's 1600 files to maintain instead of 1.