On 31.03.11 21.49, Guilhem Bichot wrote:
>>>> +bool JOIN_TAB::extend_and_fix_jt_and_sel_cond(Item *add_cond, uint
>>>> + if (extend_and_fix_cond(add_cond, line))
>>> could add unlikely()
>> I'm a bit afraid of the "code contamination" of these unlikely() macros.
> Well... we have those macros... I find it's good if it can help lower the
> performance impact of all those rarely executed "if (out of memory)" checks.
> What harm do they do?
>> There is also an option to gather branch prediction statistics using two-stage
>> compilation (the first stage gathers statistics, the second uses them to apply
>> branch prediction hints). The benefit is that the code needs no changes; the
>> drawback is the two-stage build itself, which makes it hard to do in developer
>> sandboxes.
> I'm not aware of two-stage compilation. What compiler does that?
> And isn't it dangerous? I mean, OOM happens as rarely on the Lab machine where
> the release is built as on the customer's, so if(OOM) will be optimized the
> same way; but the other if()s may be optimized for the Lab machine's load,
> which isn't representative of a real workload, and thus would perform poorly
> at the customer's??
We did that for Clustra/HADB on Solaris using SunStudio compilers several years
ago. The speedup was amazing, probably 10-15%. Nowadays, I think gcc is capable
of this as well.
I think the way it works is that, regardless of load, most if-tests have one
branch that is taken more often than the other. And if there is an occasional
miss, the result is barely noticeable.