Daevid Vincent wrote:
> We currently have some tables that are approaching 1 BILLION rows (real
> Billion, with nine zeros, not that silly six zero version). Trying to do an
> "ALTER" on them to add a column can sometimes take hours.
A few years ago I tested possible table structures for an
application that had to store at least a million person profiles.
Because we expected that properties would be added to (and/or removed
from) the database quite often, I also tested a structure where the
properties of a single profile were stored in tables split by data type.
So we had tables for integers, strings, dates, etc. and used a record
for each property; the columns were id, property name, value, and a
few other fields needed to handle and display the data.
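A minimal sketch of that per-type layout, using SQLite for illustration; the table and column names here are my own guesses, not the original schema:

```python
import sqlite3

# Hypothetical reconstruction of the structure described above:
# one table per value type, one row per (profile, property) pair.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE profile (id INTEGER PRIMARY KEY, name TEXT);

-- properties live in type-specific tables, not as columns on profile
CREATE TABLE prop_int  (profile_id INTEGER, property TEXT, value INTEGER);
CREATE TABLE prop_str  (profile_id INTEGER, property TEXT, value TEXT);
CREATE TABLE prop_date (profile_id INTEGER, property TEXT, value TEXT);
""")

conn.execute("INSERT INTO profile VALUES (1, 'alice')")
conn.execute("INSERT INTO prop_int VALUES (1, 'age', 42)")
conn.execute("INSERT INTO prop_str VALUES (1, 'city', 'Utrecht')")
```

Adding a new property then means inserting rows into an existing table rather than running an ALTER on a huge one.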
Most SELECT queries were about as fast as they would have been with a
single table. Database size was approximately the same, because not all
profiles used all properties; we only needed to store the properties a
given profile actually used.
The only limitation at that time was a maximum of 31 joins per query,
but I don't think we ever hit that limit.
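The join limit matters because pivoting properties back into columns costs one join per property. A sketch of such a query, again with illustrative names rather than the original schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE profile  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE prop_int (profile_id INTEGER, property TEXT, value INTEGER);
CREATE TABLE prop_str (profile_id INTEGER, property TEXT, value TEXT);
""")
conn.execute("INSERT INTO profile VALUES (1, 'alice')")
conn.execute("INSERT INTO prop_int VALUES (1, 'age', 42)")
conn.execute("INSERT INTO prop_str VALUES (1, 'city', 'Utrecht')")

# Selecting N properties as columns requires N joins, which is why a
# per-query join limit (31 at the time) could in principle be hit.
row = conn.execute("""
SELECT p.name, a.value AS age, c.value AS city
FROM profile p
LEFT JOIN prop_int a ON a.profile_id = p.id AND a.property = 'age'
LEFT JOIN prop_str c ON c.profile_id = p.id AND c.property = 'city'
WHERE p.id = 1
""").fetchone()
# row == ('alice', 42, 'Utrecht')
```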
Adding properties was easy: adding them to the application's
configuration was enough.
Which table structure is most suitable really depends on your
application. Test the performance of all the operations you need to do,
with realistic data and at various data volumes, to see how it scales.
Jigal van Hemert