Hi Hema, all,
Hema Sridharan wrote:
> Hi Joerg,
> Thanks for your comments. Please see my replies inline.
I append comments there.
1) It seems your leads have different preferences than I have, and so my
comments are more directed at their decisions than at you.
2) In the release builds, we create several different configurations
and run all test suites on each of them. Typically, this surfaces many
more test failures than a developer sees from a "compile-CPU-max" build.
I will file bugs for such failures.
>> Hi !
>> Let me join here:
>> From a build team point of view, I very strongly support that:
>> It is mandatory for us that exactly those tests are run which
>> check components (table handlers, collations, ...) configured
>> in the build, and tests for other components get skipped.
>> For examples of this, please see the tests written by
>> Matthias Leich in the "funcs1" suite, whose result files
>> differ by the engine tested:
> Hema: I went through the funcs_1 suite and noticed a separate test/result
> file for each storage engine, testing the same functionality. Unfortunately
> this was not approved by my team when I followed exactly the same method
> some time ago. They wanted me to have a common test/result file for different
> storage engines, which in turn avoids duplicating the test and result files.
I hope they tried that on different configurations, but from past
experience I doubt it. I had to file plenty of bugs before developers
realized there is a "classic" configuration (no InnoDB) on which the
tests have to pass, too.
We'll see how it turns out.
> For this reason I used something called "combinations" file, which will have
> list of storage engines listed in it. Then each test from the suite will run
> for all the storage engines listed in the combinations file.
> Note: Just FYI, this mechanism can be seen currently in rpl suite where each
> test runs for all replication formats.
I fear the output of backup tests will vary more by engine than the
output of replication tests varies by log format.
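For readers unfamiliar with the mechanism: a combinations file lists named
option sets, and mysql-test-run.pl then repeats every test in the suite once
per section. A hedged sketch only — the file location and the option used for
the backup suite here are my assumptions, not the actual file:

```
# suite/backup/combinations  (hypothetical example)
[myisam]
--default-storage-engine=MyISAM

[innodb]
--default-storage-engine=InnoDB
```

The rpl suite does the same with [row]/[stmt]/[mix] sections setting
--binlog-format, which is the mechanism Hema refers to above.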
>> In these tests, you should see the mechanism used to skip
>> (controlled by the presence of a feature) and to write a
>> meaningful comment into the output (which is in the release
>> build log we keep archived).
> Hema: A skip could happen for a couple of reasons:
> 1. Skip because of a defect pertinent to a specific engine. (I believe this
> is what you are referring to.)
> 2. Skip the tests based on the storage engines compiled into the build. (I'm
> implementing this option as well.)
From the test client's point of view, the "have_*" include files cannot
distinguish "engine wasn't compiled in" from "engine switched itself off":
if the engine disabled itself because of a defect and the server reports
it as unavailable, the test is skipped in exactly the same way as if the
engine had never been compiled at all.
(We saw this recently when running tests on an NFS file system where
InnoDB could not lock its files, so it appeared to be missing.)
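For illustration, such a have_* include file is typically just a check
against the server's reported engine list; a hedged sketch in mysqltest
style (the verbatim file contents may differ between versions):

```
# include/have_innodb.inc  (sketch, not the verbatim file)
# Skip the test unless the server reports InnoDB as available.
if (`SELECT COUNT(*) = 0 FROM information_schema.engines
     WHERE engine = 'InnoDB' AND support IN ('YES', 'DEFAULT')`)
{
  --skip Test requires InnoDB
}
```

A test pulls this in with "--source include/have_innodb.inc" near its top.
This is exactly why the two cases are indistinguishable: whether the engine
was never compiled or disabled itself at runtime, the query above finds no
usable entry and the test is skipped with the same message.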
If you refer to "the test checks which engines *should* be present":
This is a completely different mechanism, mentioned next:
>> For another way of skipping, controlled by the comment string
>> which we set in the release build, see the
>> "charset_collation_*.test" files in the same directory.
I do not (yet) propose to do this for backup tests, because I fear the
association between configuration names and table handlers included has
not yet stabilized enough.
>> IMNSHO, the silent replacement of one storage engine by
>> another is a perfect way to make tests fail.
>> In production environments, it might make the system behave
>> in an unexpected and unintended way, which may cause wrong
>> data and thus be a serious failure for the user/customer.
>> For a recent example, see my report of bug#37976.
> Hema: From the bug report I could see that the silent substitution of MyISAM
> for InnoDB on the slave is probably the expected behavior because of the
> --loose-innodb option. With a "loose" option, the MySQL server ignores the
> option if it is not supported. Maybe you should try the slave without
> --loose-innodb and with less disk space.
It isn't me who wrote that option into the test, and I think it was
wrong to do that, but I agree this could be the reason.
Yes, that experiment might tell us more, but I haven't run it yet.
> Let me know if you have further questions.
We will see how these tests do when being used on several different
configurations.
Joerg Bruehe, MySQL Build Team, joerg@stripped (+49 30) 417 01 487
Sun Microsystems GmbH, Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028