Sorry for the very late response, due to the long Chinese Spring Festival and other things.
I implemented an SRLStream subclass with a variable number of segments, each segment
SRL_WINDOW_SIZE in size, and implemented RecordVersion::getRecord(SRLStream) to put each
record version into the SRLStream segments.
When writing the SRLStream to the serial log, I just need to write the segments one by one
under START_RECORD protection, which eliminates the estimate of the size of the next record
version in favor of testing only the segment length. This should also reduce the number of
free-window-size checks.
However, the sad news is that there is no performance gain. :-( The patch is attached.
From: Jim Starkey [mailto:jstarkey@stripped]
Sent: Wednesday, January 21, 2009 3:25 AM
To: Hu, Xuekun
Cc: Kevin Lewis; FalconDev
Subject: Re: Are multiple serial logs feasible or profitable?
Hu, Xuekun wrote:
> Hi, Jim
> I'm thinking of implementing the SRLStream subclass. I'm asking for coding
> suggestions before I really start to write code. :-)
> 1. Since recordTableSpaceId, recordNumber, sectionId and record must go into the window
> as a single whole record body, the serial log window must be flushed first if the free
> space is not enough. Currently, since only one complete SRL record is built, how do I
> separate the different whole record bodies? My thinking is to put them into different
> segments, with each whole record body kept in a single segment. Right?
> 2. When writing to the serial log, there is still a loop to write the record bodies
> one by one, since I need to check whether each record's size exceeds the free space
> in the window and also need to update each record's virtualOffset. Right?
Take a look at SerialLog::putData. If the record being built overflows
the serial log window, the window is flushed without the record, and the
partial record is copied to a new window. This works as long as the
total record doesn't exceed the window size.
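The overflow behavior described for SerialLog::putData can be sketched like this. This is
a simplified illustration, not Falcon's actual implementation: the Window struct, its
fields, and flushWindow are all hypothetical.

```cpp
#include <cassert>
#include <cstring>

static const size_t WINDOW_SIZE = 32;  // stands in for the serial log window size

// Illustrative sketch: if the record being built would overflow the window,
// flush the window *without* it and carry the partial record into a fresh
// window. This only works while the whole record fits in one window.
struct Window {
    unsigned char buffer[WINDOW_SIZE];
    size_t recordStart = 0;   // where the record being built begins
    size_t length = 0;        // bytes currently in the window
    int flushes = 0;

    void putData(const unsigned char* data, size_t size) {
        if (length + size > WINDOW_SIZE) {
            size_t partial = length - recordStart;
            assert(partial + size <= WINDOW_SIZE);  // record must fit one window
            unsigned char saved[WINDOW_SIZE];
            memcpy(saved, buffer + recordStart, partial);
            flushWindow();                   // flush completed records only
            memcpy(buffer, saved, partial);  // copy partial record to new window
            length = partial;
        }
        memcpy(buffer + length, data, size);
        length += size;
    }

    void flushWindow() { ++flushes; length = 0; recordStart = 0; }
    void endRecord()   { recordStart = length; }
};
```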
You could take advantage of this by keeping the separate stream in which
the SRL record is being built to safely below the window size. But keep
in mind that once you execute the START_RECORD macro, you've got the
exclusive lock on the serial log.
If you write the entire record to a separate stream, you can probably
eliminate the relatively expensive estimate of the size of the next
record version in favor of a cruder test of each field at max length.
Writing a serial log block a few bytes short of optimal is of no consequence...
You will probably want to write a SerialLog::putStream analogous to
SerialLog::putData that doesn't require a contiguous block of data.
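A putStream along these lines might look like the sketch below. The names (Log, putStream,
flushWindow) and the window size are hypothetical; the point is that walking a list of
segments removes the need for contiguous data and for partial-record copying, since each
whole segment either fits or forces a flush.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

static const size_t WINDOW_SIZE = 32;  // illustrative window size

// Sketch of a putStream that copies each segment into the log window,
// flushing the window whenever the next whole segment would not fit.
struct Log {
    unsigned char window[WINDOW_SIZE];
    size_t length = 0;
    int flushes = 0;

    void flushWindow() { ++flushes; length = 0; }

    void putStream(const std::vector<std::vector<unsigned char>>& segments) {
        for (const std::vector<unsigned char>& seg : segments) {
            assert(seg.size() <= WINDOW_SIZE);   // segment must fit one window
            if (length + seg.size() > WINDOW_SIZE)
                flushWindow();                   // the only free-space test needed
            memcpy(window + length, seg.data(), seg.size());
            length += seg.size();
        }
    }
};
```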
I realize this is a little high level, so if anything isn't clear, feel
free to ask. Now that we've got Obama safely inaugurated, I can turn my
attention back to software.