> > > The developer insists that for scalability issues, this was the
> > > answer. It is likely, for example in my deployment, that these tables
> > > would see upwards of 10 million records or more.
> > Well, if there are problems with scalability, I guess you could
> > split it up into a few (not 1600) tables and have them available
> > on different physical hard drives...
> As an example:
> There was a table called event.
> This table is now broken up like this:
> event_<sensor>_<date>.
> So for every sensor, and every day, there is now a new table. So if I
> have 20 sensors, every day I will have 20 new tables.
> With this in mind, does this design make sense?
> How will this scale?
According to you, it doesn't :-)
> Is there anything I can do through configuration (I doubt the
> developer will change the design) to speed things up? Or is there a
> workaround I could use on my end to compensate?
What you're doing here is fixing something that isn't broken.
Test your database with 20 million rows to see how your queries
perform, make sure your queries make sense, and check that you
use the proper indices.
Remember, database systems are designed to handle lots of rows.
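As a rough illustration of the "one table plus a proper index" approach, here is a minimal sketch. It uses SQLite (via Python's standard sqlite3 module) rather than MySQL, and the table and column names (`event`, `sensor_id`, `event_date`) are hypothetical stand-ins for the schema discussed above. The point is that a composite index on (sensor, date) lets the planner search one big table efficiently, instead of splitting the data across 20 new tables per day.

```python
import sqlite3

# Hypothetical sketch: a single "event" table with a composite index,
# instead of one table per sensor per day. SQLite stands in for MySQL here.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event (
        sensor_id  INTEGER NOT NULL,
        event_date TEXT    NOT NULL,
        payload    TEXT
    )
""")
# Composite index covering the common "one sensor, one day" lookup.
conn.execute(
    "CREATE INDEX idx_event_sensor_date ON event (sensor_id, event_date)"
)

# 20 sensors x 30 days x 10 events each; in production this would be millions.
rows = [(s, "2024-01-%02d" % d, "data")
        for s in range(20) for d in range(1, 31) for _ in range(10)]
conn.executemany("INSERT INTO event VALUES (?, ?, ?)", rows)

# A per-sensor, per-day query hits the index instead of scanning the table.
count = conn.execute(
    "SELECT COUNT(*) FROM event "
    "WHERE sensor_id = 5 AND event_date = '2024-01-15'"
).fetchone()[0]

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM event "
    "WHERE sensor_id = 5 AND event_date = '2024-01-15'"
).fetchone()
print(count)
print(plan[-1])  # plan detail should mention the composite index
```

On MySQL the equivalent would be a `CREATE INDEX` on the original event table, and `EXPLAIN` instead of `EXPLAIN QUERY PLAN` to confirm the index is used.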