Based on a long-running conversation, mostly on Slack, with @oxinabox and @c42f, and a little bit from @evizero. One issue that led to it is the desire to produce XML reports of test runs, which exposes some issues with the extension mechanisms currently in place for Base.Test. I think that to figure out how much rearchitecting is necessary we should answer a few questions.
oxinabox says DataStructures.jl has some testing-related hacks; we should investigate what they've had to do to see what might be needed in Base.Test.
at what granularity? Currently TestSetExtensions splits at the file level, which seems pretty reasonable.
seems like this should happen at the same level of granularity as running subsets
e.g. whether to store all the info on passing tests, which was disabled a while ago to keep memory reasonable when running Base tests, but is sometimes needed. This seems like something that might be useful to do on a per-testset level…maybe.
to figure out whether we can extend it we should think about the ways people might want to extend it. There are some existing examples in the wild, but we probably also want to think about other possibilities for the future. A sketch of the current extension hooks follows the examples below.
example - TestReports generates an XML report that can be ingested by other tools
example - TestSetExtensions prints dots as the tests run, and prints failures more attractively than the default, including showing diffs. Note this is actually not great for running within Atom, so that could be another use-case we could think about.
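For orientation, the extension hook Base.Test exposes today is the AbstractTestSet interface: subtype it, provide a constructor that takes a description string, and implement record and finish. A minimal sketch, with a made-up RecordingTestSet that keeps every result it sees (including Passes, which DefaultTestSet discards to save memory):

```julia
using Test
import Test: AbstractTestSet, record, finish, get_testset, get_testset_depth

# Hypothetical testset type that keeps every result it sees.
struct RecordingTestSet <: AbstractTestSet
    description::String
    results::Vector{Any}
end
RecordingTestSet(desc::AbstractString) = RecordingTestSet(String(desc), Any[])

# Called once per result (Pass/Fail/Error/Broken) and once per finished child testset.
record(ts::RecordingTestSet, res) = (push!(ts.results, res); res)

# Called when the @testset block ends; if nested, hand ourselves to the parent.
function finish(ts::RecordingTestSet)
    get_testset_depth() > 0 && record(get_testset(), ts)
    return ts
end
```

Used as `@testset RecordingTestSet "name" begin ... end`; the macro returns whatever finish returns, so the results vector is available afterwards.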
basically an implementation detail so that @test outside of a testset will just throw an error on failure
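concretely, at the REPL (output abridged; exact formatting varies across Julia versions):

```julia
julia> using Test

julia> @test 1 == 2    # no enclosing @testset, so the failure escalates immediately
Test Failed
  Expression: 1 == 2
   Evaluated: 1 == 2
ERROR: There was an error during testing
```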
This covers like 99.9% of all uses of @testset, since it is the default if you use @testset without ever (in any parent) specifying a testset type. It does console output. It is quite pretty.
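for example (summary layout varies a bit across Julia versions):

```julia
julia> @testset "arithmetic" begin
           @test 1 + 1 == 2
           @test 2 * 2 == 4
       end;
Test Summary: | Pass  Total
arithmetic    |    2      2
```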
displays a green dot for every pass and a prettier error for every failure (including diffs). It wraps another testset of configurable type and defers most handling to it.
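usage is roughly the following, going from memory of TestSetExtensions' README (the ExtendedTestSet name is theirs, the rest is illustrative):

```julia
using Test, TestSetExtensions

# "." per passing test; failures are pretty-printed, with diffs for comparisons
@testset ExtendedTestSet "MyPackage" begin
    @test 1 + 1 == 2
    @testset "strings" begin
        @test uppercase("abc") == "ABC"
    end
end
```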
very similar to DefaultTestSet, except that it doesn't throw an error at the end when there's a failing test.
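something like this hypothetical wrapper would get that behavior (names made up; it relies on DefaultTestSet's finish throwing a Test.TestSetException at top level when anything failed):

```julia
using Test
import Test: AbstractTestSet, DefaultTestSet, record, finish

# Hypothetical sketch: delegate everything to a DefaultTestSet, but swallow
# the exception it throws at top level when there are failures, so the
# caller can still inspect the results afterwards.
struct NoThrowTestSet <: AbstractTestSet
    inner::DefaultTestSet
end
NoThrowTestSet(desc::AbstractString) = NoThrowTestSet(DefaultTestSet(desc))

record(ts::NoThrowTestSet, res) = record(ts.inner, res)

function finish(ts::NoThrowTestSet)
    try
        finish(ts.inner)              # prints the usual summary; records into parent if nested
    catch err
        err isa Test.TestSetException || rethrow()
    end
    return ts.inner
end
```

`@testset NoThrowTestSet "mytests" begin ... end` then returns the inner DefaultTestSet for inspection instead of erroring out.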
@c42f enumerated the following purposes for testsets:
- Group tests logically
- Capture test results
- Serve as a scope to catch exceptions which occur outside test macros
- Format or otherwise communicate test results
- A scope to provide consistency for certain global state, for example seeding global RNGs (see the sketch after this list)
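the last item as a manual pattern today (Random.seed! in current Julia, srand in the Base.Test era):

```julia
using Test, Random

# Sketch: seeding the global RNG at the top of a testset makes the block
# reproducible no matter what ran before it.
@testset "reproducible randomness" begin
    Random.seed!(1234)
    first_draw = rand(5)
    Random.seed!(1234)
    @test rand(5) == first_draw   # same seed, same stream
end
```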
B is for Behavior, R is for Render
I don't actually like this very much or think it's necessary; I think @testset descriptions are good enough. I tend to use pretty small leaf @testsets.
@oxinabox reported this; it seems like a straight-up bug, and makes it impossible to do anything with Pass data from a testset. source link, reported here