
Add a type for complex input generation that performs exploratory testing

Open SeanROlszewski opened this issue 1 year ago • 4 comments

Description

In order to support testing patterns where randomized input generation is used to explore a program's state space (such as fuzz, differential, property-based, or mutation testing), we need to give developers some affordance for specifying the following:

  1. The next value to pass into a @Test
  2. A way to receive the prior value and its test result after a @Test has executed
  3. A way to receive additional information, such as changes in code coverage, from the last @Test
  4. A way to stop input generation based on some heuristic or predicate.

Having these four items would give us the essential set of primitives to streamline the creation of tools like QuickCheck, libFuzzer, etc. with Swift Testing.

Perhaps this could take the form of a protocol, potentially named TestCaseInputGenerator, that has the following shape:

protocol TestCaseInputGenerator {
    associatedtype InputType
    associatedtype AdditionalContext
    // Decides whether input generation should stop, given the previous
    // input/result pair (nil on the first run) and any extra context.
    func shouldStop(given priorRun: (InputType, TestResult)?, additionalContext: AdditionalContext?) -> Bool
    // Produces the next input to pass to the @Test, optionally informed by
    // the previous input/result pair and any extra context.
    func nextValue(given priorRun: (InputType, TestResult)?, additionalContext: AdditionalContext?) -> InputType
}

AdditionalContext can encapsulate data of interest to the test: previously tried values, their results, and other data such as code coverage, crashes, etc.
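
To make the shape concrete, here is a minimal sketch of a conformer, assuming the associated-type version of the protocol above and treating TestResult as a simple pass/fail placeholder; the RandomIntGenerator name and its stop-on-first-failure heuristic are purely illustrative, not proposed API.

// TestResult is assumed to be a simple pass/fail value for this sketch.
enum TestResult { case passed, failed }

// A hypothetical generator that feeds random integers into a @Test and stops
// as soon as a failure has been observed.
struct RandomIntGenerator: TestCaseInputGenerator {
    typealias InputType = Int
    typealias AdditionalContext = Void

    func shouldStop(given priorRun: (Int, TestResult)?, additionalContext: Void?) -> Bool {
        // Stop once the most recent run has failed; otherwise keep generating.
        priorRun?.1 == .failed
    }

    func nextValue(given priorRun: (Int, TestResult)?, additionalContext: Void?) -> Int {
        Int.random(in: -1_000_000...1_000_000)
    }
}

A harness could then loop: call nextValue, run the @Test body with that input, record the result, and consult shouldStop before the next iteration.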

I'm filing this issue rather quickly, in between tasks, so apologies in advance for any oversights in the type signatures and the compilability of the proposed API. :)

SeanROlszewski avatar Sep 25 '23 16:09 SeanROlszewski

There doesn’t seem to be an awful lot of interest in this feature - let me know what I can do to help? I’ve been working (very slowly) on a hedgehog port. The types of things I’d be looking for are:

  • Top requirement is suppressing expectation failures - generally for property-based tests you may have hundreds of failures but you only want the minimal failing case reported; the rest is noise (see the shrink-loop sketch after this list)
  • It’s not clear how/if test runs can be parallel. Ideally the root level tests would run in parallel, but shrinks should be tested sequentially
  • Unsure whether variadic inputs would be supported. Vanilla parameterised tests appear to support only up to two arguments, but this restriction may not be as applicable to property tests, which don't attempt all combinations
  • Reporting output: I assume the relevant info would need to be shoehorned into the existing issue reporting process - it looks like it should work, but for reference, the typical output is:
      • Passed: number of (root) tests
      • Failed: number of tests, number of shrinks, input value(s), failure description (i.e. the expectation failure narrative), and seed/path info to reproduce the test. Inputs & failures should be catered for; the rest may need to go to stdout?
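
To illustrate why only the minimal case should be reported, here is a hedged sketch of a generic shrinking loop; the minimalFailure(from:property:shrink:) helper is hypothetical and not part of swift-testing. Intermediate failing candidates are observed but deliberately not surfaced, which is exactly the noise described in the first bullet above.

// Hypothetical shrink loop: `property` is the predicate under test and
// `shrink` yields progressively smaller candidates derived from a failing input.
// Intermediate failures guide the shrinking but are never reported; only the
// final, minimal failing input would be raised as an issue.
func minimalFailure<Input>(
    from original: Input,
    property: (Input) -> Bool,
    shrink: (Input) -> [Input]
) -> Input {
    var smallest = original
    var madeProgress = true
    while madeProgress {
        madeProgress = false
        for candidate in shrink(smallest) where !property(candidate) {
            // A smaller failing candidate: adopt it quietly and keep shrinking.
            smallest = candidate
            madeProgress = true
            break
        }
    }
    return smallest
}

If the property body used #expect directly, every intermediate candidate would record its own issue, so some form of suppression or deferral is needed for the output to stay readable.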

Most libraries also have a verbose option that logs the inputs for all tests run; this could potentially be recorded the same way as the current parameterised test runs, although the number of sub-tests is potentially large.

An alternative approach may be some sort of test interceptor or embedded runner that can receive, filter, modify & generate test events/issues within its test context - this is the workaround I ended up using for XCTest support. I don’t think this would be as well-integrated into swift-testing overall but would be more flexible and may be simpler to implement.
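
As a rough sketch of what that extension point could look like, assume the runner exposed each test event to an interceptor before recording it. Both TestEvent and TestEventInterceptor below are entirely hypothetical and are not part of swift-testing's actual API.

// Entirely hypothetical extension point: an interceptor observes events emitted
// within its test context and decides which ones become recorded issues.
enum TestEvent {
    case expectationFailed(description: String)
    case testEnded(passed: Bool)
}

protocol TestEventInterceptor {
    // Return a replacement event to record, or nil to suppress it entirely.
    func intercept(_ event: TestEvent) -> TestEvent?
}

// Example: swallow expectation failures raised while shrinking, so that only
// a single summarized failure for the minimal case is ultimately recorded.
struct ShrinkAwareInterceptor: TestEventInterceptor {
    var isShrinking: Bool

    func intercept(_ event: TestEvent) -> TestEvent? {
        if case .expectationFailed = event, isShrinking {
            return nil // suppress noise from non-minimal failing cases
        }
        return event
    }
}

This mirrors the receive/filter/modify behaviour described above without presuming anything about how swift-testing would actually expose its event stream.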

samritchie avatar Jun 18 '24 05:06 samritchie

Just want to put myself forward - would be interested in helping out with adding property (randomized) tests to swift-testing in case there's a demand for it.

Some credentials: I've previously implemented such a test runner for SerenityOS in C++, and I'm the maintainer of the official property-testing library for the Elm language (I reimplemented it, moving from the hedgehog-like RoseTree approach to the Hypothesis-like approach).

Janiczek avatar Sep 18 '24 17:09 Janiczek

We're starting to plan out what we want to do for the next major Swift release. This is on our spreadsheet, although I don't know if we've assigned any particular priority to it. It would help to get some more information about your use cases @samritchie @Janiczek. 😄

grynspan avatar Sep 18 '24 18:09 grynspan

Thanks @grynspan. The very high-level use case is that there's nothing equivalent currently available for Swift except for SwiftCheck, which still works but has been largely unmaintained since Robert moved to Apple.

Would the plan be to integrate a standard random-testing feature, or to provide extension points to support third-party frameworks, as Sean suggested in the original issue? There's value in terms of simplicity/adoption/availability/acceptance in having a built-in feature, but this would end up blocking alternatives and would need to be well designed & targeted.

If the goal is extension points & third-party frameworks, my comments above still stand - however, it's probably worth finishing the hedgehog port and using it in a couple of projects, and then looking at how swift-testing integration could make it better. I have struggled a bit with time & motivation - @Janiczek, I'd be happy to team up on this if you're interested? I've used Elm Fuzzer before and I like that approach as well.

samritchie avatar Sep 19 '24 00:09 samritchie