Ann W. Harrison wrote:
> I suspect that the gopher never expected to run out of record memory -
> and I'm not really sure how it can, since, at least in my simplistic
> diagram, it moves records from the serial log into the page cache
> and shouldn't be mucking with the record cache at all.
Bug#45746 (falcon_backlog test fails due to Gopher hitting "record memory is exhausted")
is assigned to me, and it turns out for good reason. Back on 5/19/2009 10:46 PM, I
committed a change that added a dirty little function called Transaction::thawAll(). This
is mucking with the record cache!
Here was the reason, as explained in the patch:
"In order to prevent 'unthawable' records, make sure all records attached to a
transaction are thawed by the gopher thread when the transaction is fully complete. Do
this by adding a new function; Transaction::thawAll()"
This nasty little function is called by the gopher thread:
SerialLogTransaction::commit() -> Transaction::fullyCommitted() -> Transaction::thawAll()
Because it thaws every chilled record in the transaction once it completes, it defeats the
purpose of chilling in the first place! Whoops! Huge insert transactions that are larger
than the record cache can get committed, but then the gopher crashes with error 305,
"record memory is exhausted", while trying to thaw all those records.
Once a record has been written to its data page, it can no longer be thawed from the serial
log. And if it has already been superseded, there will soon be no way to thaw the record at
all, once that superseding record version gets completed.
I think I can greatly reduce the impact of thawAll() by checking whether the completed
record has been superseded before thawing it. That way, the huge insert transactions in
falcon_backlog and falcon_bug_36294-big.test may not actually have to thaw anything at
complete time. I am trying this out...