Ian Daysh wrote:
> what is considered a "sufficiently large
> result set" for the Query::store object?
It's simply a question of whether the result set fits in available RAM.
If you run the system out of RAM or VM, switching to a use() query is
one of several ways out of the pitfall. It's pretty low on my list of
preferred alternatives, though:
- Put a little more thought into your WHERE clauses: let MySQL do as
much filtering as is practical. Saves memory, saves bandwidth, can
even save CPU time.
- If you can't put the filter in the query, maybe you can do it with
Query::store_if(). This is built atop a use() query, so it stores
only the records for which your functor returns true.
- Second-guess all "SELECT *" queries: do you really need _all_ the
columns in the table at this time?
When calculating this, beware that MySQL++ deals exclusively in textual
forms of data from the database. This results in a kind of storage
bloat when dealing with "binary" data types, such as numeric and BLOB
types. For instance, a MEDIUMINT takes three bytes on disk, but it can
take as many as 8 characters in text form, plus the overhead required by
the C API and MySQL++. Thus, if you know each row takes 1 KB on disk and
pull a million rows, you're going to need a whole lot more than 1 GB of
memory to hold it.
You have to ask yourself, though: do you really need a million rows all
at once? That's what motivates the list above. Fix the data volume
problem at the source before you tackle the matter of storing the entire
result set in RAM or dealing with it one record at a time.