Jim, thanks for the ideas. Comments below:
Jim Starkey wrote:
> Kevin raised the point that a problem with cycle locking is that records
> recovered during low memory handling are not available until the next
> cycle. This is true, but there is a simple workaround. As soon as a
> low memory condition is recognized, do a manual wake-up of the cycle
> manager thread. It will still have to wait for other threads with a
> cycle lock to finish, but none of these threads are going to block
> holding cycle locks except, perhaps, the thread that invoked the low
> memory operation. Given the very few places that record memory is
> allocated, a careful analysis might reveal that there is no danger of
> releasing the cycle lock at the point of memory allocation.
I think there is a danger. The data buffer is most often allocated by a client thread
trying to fill a record from various fields in SetValue() or setEncodedRecord(). It is
unlikely that these client threads hold active pointers to memory in purgatory.
But data.record can also be allocated (and hit an out-of-memory condition) during a thaw,
which the scavenger can trigger while doing garbage collection on an older record it is
pruning. The thaw can occur on any record in that chain, and the scavenger may hold a
pointer to a record in that same chain that is now in purgatory because a client thread
recently released a savepoint.
> The situations that the cycle manager primarily addresses is
> traversing record chains, which is generally unassociated with
> allocating new record objects.
Garbage collection associates traversing a record chain with thawing records.
I was also looking at the possibility of releasing record memory immediately, without the
CycleManager, as part of the normal retiring of records by the scavenger. But I don't think
that is safe either, even if we know the record was newly inserted. A client thread may be
checking for possible duplicate values identified by scanIndex; if it catches a record
pointer from the RecordLeaf just before the record is retired, it may still be looking at
data.record while the record is retired. So we have to use the CycleManager when retiring.
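To make the retire-time hazard concrete, here is a minimal sketch of cycle-based deferred
reclamation in the style described above. All names here are invented for illustration;
this is not the actual Falcon CycleManager API. Readers take a shared lock on the current
cycle before traversing a chain; retiring only queues the record in purgatory, and the
cycle-manager thread frees queued records only after draining the previous cycle's readers.

```cpp
#include <atomic>
#include <mutex>
#include <shared_mutex>
#include <vector>

struct Record { int id; bool freed = false; };

class CycleManager {
public:
    // Readers traversing record chains hold a shared lock on the active cycle.
    // NOTE: a production version must close the race between reading `active`
    // and acquiring the lock; that is omitted here for brevity.
    std::shared_lock<std::shared_mutex> cycleLock() {
        return std::shared_lock<std::shared_mutex>(cycles[active.load()]);
    }

    // Retiring only queues the record; memory is reclaimed a full cycle later.
    void retire(Record* rec) {
        std::lock_guard<std::mutex> g(purgatoryMutex);
        purgatory.push_back(rec);
    }

    // The cycle-manager thread swaps cycles, waits for the old cycle's readers
    // to drain (by taking the exclusive lock), then frees the queued records.
    void advanceCycle() {
        int old = active.exchange(1 - active.load());
        std::unique_lock<std::shared_mutex> drain(cycles[old]); // blocks on readers
        std::lock_guard<std::mutex> g(purgatoryMutex);
        for (Record* rec : purgatory)
            rec->freed = true;   // stand-in for `delete rec`
        purgatory.clear();
    }

private:
    std::shared_mutex cycles[2];
    std::atomic<int> active{0};
    std::mutex purgatoryMutex;
    std::vector<Record*> purgatory;
};
```

The point of the sketch is the ordering guarantee: a scanIndex reader that grabbed a record
pointer under its cycle lock can never observe the free, because the free is deferred until
that cycle has drained.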
One place I found where the CycleManager does not need to be used is when an insert fails
for any reason and the recently allocated data buffer is no longer needed or visible to
anybody. The current code puts the whole RecordVersion into purgatory, data.record
attached, and the RecordVersion destructor, called by the CycleManager, currently frees
data.record. So now I am calling deleteRecord(true) before the RecordVersion goes into
purgatory, which frees the data buffer immediately.
The main thing I am working on, where the CycleManager can be bypassed when record cache
memory is needed *right now*, is chilling a few records. In particular, if those records
happen to be newly inserted (meaning they are currently pending and have no priorVersion),
then their data.record buffers are not being accessed by any other thread and can be freed
without the CycleManager.
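A rough sketch of that fast path, with invented names and a deliberately flattened
RecordVersion (this is an illustration of the condition described above, not Falcon code):
a record with no priorVersion whose transaction is still pending is unreachable through any
older chain, so chilling can release its buffer directly; anything else must still go
through the CycleManager.

```cpp
#include <cstdlib>

struct RecordVersion {
    char* dataRecord;             // the encoded record buffer
    RecordVersion* priorVersion;  // older version in the chain, if any
    bool transactionPending;      // owning transaction not yet committed
};

// Returns true if the buffer was released immediately (CycleManager bypassed).
bool chillBuffer(RecordVersion* rec) {
    if (rec->priorVersion == nullptr && rec->transactionPending) {
        // Newly inserted and pending: no other thread can hold a pointer
        // to this buffer, so it is safe to free it right now.
        std::free(rec->dataRecord);
        rec->dataRecord = nullptr;
        return true;
    }
    return false;  // visible to other threads: retire via the CycleManager
}
```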
As a side note, I activated backlogging again and found a few holes that need to be worked
on. Specifically, Record::encoding is not written to or restored from the backlog stream.
In a non-debug build the encoding of a backlogged-and-restored record would be 0, which is
almost always correct. But in a debug build it has a value of 0xcc, which causes problems
when data.record is freed.
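For anyone who hasn't hit this before, here is a toy illustration (not Falcon code) of why
the hole only bites in debug builds: a field that is never written to the backlog stream
comes back with whatever the allocator left in it, and debug runtimes fill uninitialized
memory with patterns such as 0xCC. The memset below simulates that debug fill.

```cpp
#include <cstdint>
#include <cstring>

struct Rec {
    uint8_t  encoding;  // the field the real backlog code forgets
    uint32_t length;
};

// Serialize only `length` -- `encoding` is the hole.
void backlogWrite(const Rec& r, uint8_t* stream) {
    std::memcpy(stream, &r.length, sizeof r.length);
}

Rec backlogRead(const uint8_t* stream) {
    Rec r;
    std::memset(&r, 0xCC, sizeof r);  // simulate a debug-build memory fill
    std::memcpy(&r.length, stream, sizeof r.length);
    return r;                         // r.encoding is still the fill pattern
}
```

In a release build the skipped field often happens to be 0 and the bug stays hidden, which
matches the behavior described above.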
Another field not backlogged or reconstituted is RecordVersion::nextInTransaction.
Basically, the reconstituted record is not being added back to the transaction. That's a
big problem, which I am currently working on.
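The fix amounts to relinking the reconstituted record into its transaction's record list.
A minimal sketch of the shape of that list, with invented names and only the one relevant
field (the real RecordVersion and Transaction obviously carry much more):

```cpp
struct RecordVersion {
    RecordVersion* nextInTransaction = nullptr;  // intrusive list link
    int id = 0;
};

struct Transaction {
    RecordVersion* firstRecord = nullptr;

    // Reconstitution must call something like this so commit/rollback
    // still see the record after it comes back from the backlog.
    void addRecord(RecordVersion* rec) {
        rec->nextInTransaction = firstRecord;
        firstRecord = rec;
    }

    bool contains(const RecordVersion* rec) const {
        for (const RecordVersion* r = firstRecord; r; r = r->nextInTransaction)
            if (r == rec)
                return true;
        return false;
    }
};
```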
If the immediate release of memory for newly inserted and pending chilled records does not
solve our "Gotta get some memory now" problems in the record cache, then we may need to
look again at how to move along the CycleManager.