junit5
Introduce error handling mechanism for validation and discovery errors
Overview
Validation errors (e.g., for invalid `@BeforeEach` method declarations) should not abort the entire discovery phase. Instead, the discovery phase should continue, with the error tracked and reported during the execution phase.
Areas of Applicability
- invalid `@BeforeAll`, `@AfterAll`, `@BeforeEach`, and `@AfterEach` method declarations
- invalid `@Test`, `@TestFactory`, `@RepeatedTest`, and `@ParameterizedTest` method declarations (see #2244)
- invalid `@Nested` test class declarations (see #1223, #2717, and #1736)
- invalid `@Suite` class declarations
- blank display names supplied by the user, as discussed in #743
- exceptions thrown by test engines, as discussed in #750
- unresolvable unique IDs, as discussed in #210
  - note, however, that a `TestEngine` should not report a unique ID as unresolvable if the unique ID does not apply to that particular `TestEngine` (see #1026)
- non-static test class declarations (see #2311)
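For concreteness, here is a minimal, self-contained sketch of what an "invalid lifecycle method declaration" looks like and how such errors could be collected during discovery rather than thrown immediately. This is not JUnit code: the `@BeforeEach` annotation below is a local stand-in for `org.junit.jupiter.api.BeforeEach` so the sketch runs standalone, and `validateLifecycleMethods` is a hypothetical helper.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class LifecycleValidationSketch {

    // Stand-in for org.junit.jupiter.api.BeforeEach, so this sketch compiles on its own.
    @Retention(RetentionPolicy.RUNTIME)
    @interface BeforeEach {}

    // Examples of the invalid declarations discussed above: Jupiter requires
    // @BeforeEach methods to be non-static and to return void.
    static class BrokenTests {
        @BeforeEach
        static void staticSetUp() {}          // invalid: static

        @BeforeEach
        String nonVoidSetUp() { return ""; }  // invalid: non-void return type
    }

    // A validation pass that collects errors instead of aborting on the first one.
    static List<String> validateLifecycleMethods(Class<?> testClass) {
        List<String> errors = new ArrayList<>();
        for (Method method : testClass.getDeclaredMethods()) {
            if (!method.isAnnotationPresent(BeforeEach.class)) continue;
            if (Modifier.isStatic(method.getModifiers())) {
                errors.add("@BeforeEach method must not be static: " + method.getName());
            }
            if (method.getReturnType() != void.class) {
                errors.add("@BeforeEach method must return void: " + method.getName());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        validateLifecycleMethods(BrokenTests.class).forEach(System.out::println);
    }
}
```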
Proposals
- Allow engines to track errors by creating a special type of `TestDescriptor`, such as an `AlwaysFailingTestDescriptor`, `DeadOnArrivalTestDescriptor`, or `ErrorReportingTestDescriptor`.
  - Tracked errors for such a `TestDescriptor` could then be thrown as an exception during the execution phase instead of executing the corresponding container or test.
- Introduce a new property in `TestDescriptor` that signals an error that was encountered during the discovery phase.
- Pass a reporting object from the `Launcher` into each `TestEngine` to report errors.
  - See https://github.com/junit-team/junit5/issues/242#issuecomment-294301875
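The first proposal could be modeled roughly as follows. This is a hypothetical sketch, not the actual JUnit Platform API: the `TestDescriptor` interface and `ErrorReportingTestDescriptor` class below are simplified stand-ins illustrating the idea of carrying a discovery-time error into the execution phase.

```java
import java.util.Optional;

// Minimal model of the proposal: a descriptor that carries a discovery-time
// error and throws it at execution time instead of running the test.
public class ErrorReportingSketch {

    interface TestDescriptor {
        String displayName();
        Optional<Throwable> discoveryError();
    }

    static class ErrorReportingTestDescriptor implements TestDescriptor {
        private final String displayName;
        private final Throwable error;

        ErrorReportingTestDescriptor(String displayName, Throwable error) {
            this.displayName = displayName;
            this.error = error;
        }

        public String displayName() { return displayName; }
        public Optional<Throwable> discoveryError() { return Optional.of(error); }
    }

    // Execution phase: rethrow the tracked error rather than executing anything.
    static void execute(TestDescriptor descriptor) throws Throwable {
        Optional<Throwable> error = descriptor.discoveryError();
        if (error.isPresent()) {
            throw error.get();
        }
        // ... otherwise execute the container/test as usual ...
    }
}
```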
Related Issues
- #121
- #210
- #743
- #750
- #835
- #876
- #949
- #971
- #1026
- #1074
- #1223
- #1944
- #2244
- #2311
- #2717
Deliverables
- [ ] Introduce a mechanism that allows validation and configuration errors to be tracked during the test engine discovery phase and then thrown or reported during the execution phase.
- [ ] Use the new mechanism to replace the current ad hoc use of logging and transparent defaulting as a work-around.
  - Search for `TODO [#242]` within the code base.
  - See https://github.com/junit-team/junit5/issues/750#issuecomment-294296045
- [ ] Determine where else the new mechanism can be used and apply it.
- [ ] Revisit the results of #835 and update the code accordingly by tracking such errors instead of just ignoring such incorrect usage.
- [ ] Revisit the changes made in #971 and determine if it makes sense to move the look-up of lifecycle methods back to the constructor of `ClassTestDescriptor`.
The same is true for invalid custom display names supplied by the user, as described in #743.
I will therefore update the description of this issue to reflect the broader scope.
FYI: I have introduced a minimum set of Deliverables.
This reminds me of the approach we took for vintage to handle multiple assertion errors using a `MultipleFailuresError`.
We could store all validation errors in a Throwable (using a `MultipleFailuresError` when there is more than one) that would be carried over to the execution phase; whenever it is non-empty, we would throw the exception instead of executing the test.
It might also be worth considering having this mechanism track all exceptions that occur while processing a single discovered test and throw them in the execution phase.
That way, instead of the whole process stopping, we could still process the rest of the tests that encountered no problems and only fail the individual tests.
PS: While I was writing this, @sbrannen changed the title and description, and now it sounds more like what I am writing here! :smiley:
@gaganis, thanks for sharing your ideas.
A few notes...
No need to use `MultipleFailuresError`.
We're on Java 8 and can use suppressed exceptions.
Plus, we already have a `ThrowableCollector` within the JUnit Jupiter engine that could potentially be used for this purpose as well.
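The suppressed-exceptions idea could look roughly like this. A minimal sketch: `aggregate` is a hypothetical helper, not an existing JUnit API, and it simply designates the first tracked error as primary with all further errors attached as suppressed.

```java
import java.util.List;

// Sketch of aggregating multiple validation errors with Java's suppressed
// exceptions instead of a MultipleFailuresError, as suggested above.
public class SuppressedErrorsSketch {

    // Collapse a list of tracked errors into a single Throwable: the first
    // error is primary and all further errors are attached as suppressed.
    static Throwable aggregate(List<Throwable> errors) {
        if (errors.isEmpty()) {
            throw new IllegalArgumentException("no errors to aggregate");
        }
        Throwable primary = errors.get(0);
        for (Throwable other : errors.subList(1, errors.size())) {
            primary.addSuppressed(other);
        }
        return primary;
    }
}
```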
The scope of this issue has been broadened in order to encompass the Platform as well.
Rather than using exceptions of any kind, having some kind of notification collector as a return or in/out parameter seems more flexible, since any exception will stop an engine's discovery phase whereas many notifications are just warnings or pieces of information.
> Rather than using exceptions of any kind, having some kind of notification collector as a return or in/out parameter seems more flexible, since any exception will stop an engine's discovery phase whereas many notifications are just warnings or pieces of information.
Good point!
I'll change the title of this issue accordingly.
FYI: the title and description have been updated.
If this issue is addressed in time for 5.0 M6, we might want to move #949 to M6 as well.
Added deliverable regarding changes made in #971.
Updated Proposals based on discussions in #210.
Introduced Areas of Applicability section.
Overhauled and expanded Areas of Applicability section.
Result of a team discussion: Move to 5.1 Backlog. I will provide a PR with a possible solution and we can continue discussing it on the actual PR.
I have analyzed the scenarios mentioned within the issue. With the current state of the discovery mechanism in place, I can hardly see a way to solve this issue with a single general approach.
The places where these cases occur have very little in common and are spread throughout most of the discovery code base. In many situations, e.g. the lookup of the `DisplayName` in `org.junit.jupiter.engine.descriptor.JupiterTestDescriptor#determineDisplayName`, one is not even able to create some kind of `ErrorReportingTestDescriptor`, as doing so would require reporting the newly created `ErrorReportingTestDescriptor` without knowledge of the current context.
Therefore, handling for that case would have to be implemented in the callers of the constructors of `ClassTestDescriptor` and all `MethodBasedTestDescriptor`s, or, better, those constructors would have to be changed to factory methods that handle it internally. Still, this solves only one of the following required cases:
- invalid lifecycle method declarations
- invalid test method declarations
- blank display names supplied by the user
- exceptions thrown by test engines
- unresolvable unique IDs
I think most of them are much better handled during the test execution phase. Therefore, I wonder whether we might revisit the issues and think about moving the preconditions and verifications to the execution phase.
Let me quickly go through them one-by-one:
- invalid lifecycle method declarations: When registering an extension for the test lifecycle (Before/After-Each/All), we do not actually create a `TestDescriptor`. We would instead have to keep the exception within the context and create an `ErrorReportingTestDescriptor` for each `TestDescriptor` found while the context holds the exception.
- invalid test method declarations: Can be solved on an individual basis, but the relevant code is very distributed throughout the discovery resolvers.
- blank display names supplied by the user: tricky, see the example mentioned above.
- exceptions thrown by test engines: I think this should more or less turn the engine's `TestDescriptor` into something that reports the exception for that particular `TestEngine`. But again, this is more a topic for the execution phase, as it should stop the affected `TestEngine` from executing any tests.
- unresolvable unique IDs: Can be solved on an individual basis, but the relevant code is very distributed throughout the discovery resolvers.
Conclusion
I think we should split the topic into at least two topics:
- Where applicable, move certain validations/exceptions from the discovery phase into the execution phase.
- Refactor/simplify the discovery resolvers to support a general mechanism for reporting discovery errors in the form of an `ErrorReportingTestDescriptor`.

While 1 could also be solved once 2 is in place, we could still provide a temporary solution for those cases.
@junit-team/junit-lambda: What do you think?
Introduced "invalid `@Nested` class declaration" as mentioned in #1223.
Also related to #121.
@nicolaiparlog The CSV-related warnings/errors may apply here -- or in one of the related issues.
Also related to #876.
This issue has been automatically marked as stale because it has not had recent activity. Given the limited bandwidth of the team, it will be automatically closed if no further activity occurs. Thank you for your contribution.
Following up here on the issue posted at https://github.com/junit-team/junit5/issues/2659 (whoops):
This behavior can cause quite a headache and be very confusing. If I were to try to PR a minimal change which just did the below:
```java
// Sketch: remember each discovered test method's return type, then warn at
// reporting time about any non-void test methods (which JUnit skips).
Map<String, Class<?>> methodNameTypeMap = new HashMap<>();
for (Method test : discoveredTestMethods) {
    methodNameTypeMap.put(test.getName(), test.getReturnType());
}
// When test results are reported:
methodNameTypeMap.forEach((testName, testFnType) -> {
    if (testFnType != void.class) {
        System.err.println("[WARNING] Found test " + testName + " with type " + testFnType
                + ". JUnit only resolves tests of type void -- test was skipped");
    }
});
```
How difficult would this change be to make as someone who didn't know the codebase? The reason why I ask is because this leads to scenarios like these:
- https://github.com/quarkusio/quarkus/issues/18373
- https://quarkusio.zulipchat.com/#narrow/stream/187038-dev/topic/.22No.20Tests.20Found.22.2C.20last.20issue.20for.20Scala.203.20support
This was a multi-day investigation for me =/ I don't want anyone else to have to go through that again. Can we make this better for everyone?
Should this also cover `@Suite`? Its predicate class `IsSuiteClass` seems to silently ignore the class if the predicate does not match, despite the class being explicitly annotated.
Yes, this should also apply to `@Suite` classes. I'll add that to this issue's description.
Hi. This is a big issue when writing unit tests in Kotlin. It's really easy to create a unit test in Kotlin that implicitly returns a non-void type. JUnit then just silently fails to run any of your Kotlin unit tests that have a non-void return type.
As an example, in a codebase with about 800 unit tests, I found that about 20 of the unit tests were being silently ignored by JUnit 5 because of this issue.
Here a repo with a minimal reproducible example for easily hitting this bug in Kotlin:
https://github.com/simondean/kotlin-junit-run-blocking-issue/blob/main/src/test/kotlin/IssueTest.kt
```kotlin
@Test
fun `also will be silently skipped`() = runBlocking {
    // This bug is not specific to Mockito but the bug is easily triggered by putting a verify() call at the end of a unit test
    val value = "anything"
    // Because verify() has a return value, so does the whole unit test, which causes JUnit to silently ignore the test (and any others like it in the codebase)
    verify(value).toString()
}
```
See the following duplicate issues for more info/context:
- https://github.com/junit-team/junit5/issues/2244
- https://github.com/junit-team/junit5/issues/2742
Given this issue has been open for 5 years and it's arguably a more serious issue now that people are using Kotlin with JUnit, would it be worth boosting the priority of this issue? Thanks!
> would it be worth boosting the priority of this issue? Thanks!
I've added the new label to signal to the team that it needs to be reconsidered.
Thanks @sbrannen
> As an example, in a codebase with about 800 unit tests, I found that about 20 of the unit tests were being silently ignored by JUnit 5 because of this issue.
@simondean Out of interest, how did you detect these? I'm looking to implement similar logic and am searching for prior art currently before implementing something bespoke.
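One possible approach to detecting such silently-skipped tests is a reflection scan over the test classes. A hedged sketch, not an existing tool: the `@Test` annotation below is a local stand-in so the example runs standalone; in a real build you would check for `org.junit.jupiter.api.Test` (and the other testable annotations) instead.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class NonVoidTestDetector {

    // Stand-in for org.junit.jupiter.api.Test so this sketch compiles on its own.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    static class SampleTests {
        @Test
        void ok() {}

        @Test
        String silentlySkipped() { return "oops"; }  // non-void: Jupiter ignores it
    }

    // Scan a test class for @Test methods whose return type is not void.
    static List<String> findNonVoidTests(Class<?> testClass) {
        List<String> offenders = new ArrayList<>();
        for (Method method : testClass.getDeclaredMethods()) {
            if (method.isAnnotationPresent(Test.class)
                    && method.getReturnType() != void.class) {
                offenders.add(method.getName() + " returns " + method.getReturnType().getName());
            }
        }
        return offenders;
    }
}
```

Running such a scan as part of the build (or a custom ArchUnit-style rule) would surface the ~20 ignored tests without a multi-day investigation.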
This issue has been automatically marked as stale because it has not had recent activity. Given the limited bandwidth of the team, it will be automatically closed if no further activity occurs. Thank you for your contribution.
This issue has been automatically closed due to inactivity. If you have a good use case for this feature, please feel free to reopen the issue.
Hey folks, just wondering whether there has been any movement on this?
I recently stumbled on the exact same issue as in https://github.com/junit-team/junit5/issues/1223, where we had JUnit 4 tests being run in JUnit 5 without Vintage on the classpath, and the tests silently passed with 0 test cases run. I'm hoping this could be better reported to the developer so they realise their tests didn't actually run.