Then we list the tests that we expect each implementation to pass. For each test and each language we run the test with that language's runscript and report the wrong and exception counts. 0/0 means all tests passed.
We will group tests into categories. We begin with the two examples from fit.c2.com that are expected to be free of errors. These are the 'tests' recommended in TipsForCoreImplementors.
Now we proceed with more thorough specification and testing of exactly what an implementation must do to be a version 1.0 implementation.
Some implementations have capabilities beyond those currently considered part of the fit core. These are under consideration as core features pending positive experience in real testing and support from a majority of implementations.
(Nothing here yet. This is an active area of development.)
And finally some facts about this run.
These fixtures are really test runners written as fixtures. The Frameworks fixture collects information about the framework implementations to be tested. The Tests fixture combines this information with a list of test pages and reports run results in the remaining blank cells of the table.
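The runner logic described above can be sketched as a simple cross-product loop: for each framework and each test page, run the test and write a wrong/exceptions summary into the corresponding cell. This is an illustrative sketch only; the function and parameter names (run_matrix, fetch_results) are hypothetical and not the actual fixture API.

```python
# Hypothetical sketch of the Tests fixture logic: for each framework
# and each test page, obtain the run result and report it as a
# "wrong/exceptions" cell. A cell of "0/0" means the test passed.

def summarize(counts):
    """Format one cell as wrong/exceptions."""
    return f"{counts['wrong']}/{counts['exceptions']}"

def run_matrix(frameworks, tests, fetch_results):
    """Fill in the blank cells of the table, one per framework/test pair."""
    table = {}
    for fw in frameworks:
        for test in tests:
            table[(fw, test)] = summarize(fetch_results(fw, test))
    return table

# Example with canned results standing in for real cgi calls.
fake = lambda fw, test: {"wrong": 0, "exceptions": 0}
print(run_matrix(["java"], ["ArithmeticFixture"], fake))
# {('java', 'ArithmeticFixture'): '0/0'}
```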
The actual running of the tests is delegated to cgi scripts that can be scattered around the internet. Not all implementations need be hosted here at fit.c2.com. We ask that implementors provide a cgi specifically for running these tests that doesn't exploit the HttpReferer trick common with RunScript. This makes tracking down test failures much simpler.
The interaction with these cgi scripts is modeled on the fixture used in the WebPageExample. The td tags present in the cgi output are classified as red, green, yellow or gray based on the presence of bgcolor attributes in the tags. The various colors are counted and selected totals reported.
See the source.
Last edited September 17, 2003
Return to WelcomeVisitors