
Missing hamcrest library causes tests to silently not be picked up, with no error message

eis opened this issue on Mar 06 '14 · 16 comments

Here is the example project I've been running on Windows, in Eclipse for JaCoCo coverage and from the command line for PIT testing: https://github.com/eis/pitest-hamcrest-issue-demo

You can see the two runs in the repo. The successful one:

https://raw.github.com/eis/pitest-hamcrest-issue-demo/master/target/pitest/201403062208/index.html (showing 44% line coverage)
https://raw.github.com/eis/pitest-hamcrest-issue-demo/master/target/pitest/pitest-out.log (all output on verbose, no problems here)

and the broken one:

https://raw.github.com/eis/pitest-hamcrest-issue-demo/master/target/pitest/201403062209/index.html (0% line coverage)
https://raw.github.com/eis/pitest-hamcrest-issue-demo/master/target/pitest/pitest-out-nohamcrest.log (again no problems mentioned, except that it doesn't find the tests)

The runs are identical in all other respects, except that the successful one has the hamcrest library on the classpath and the broken one doesn't.

Successful one: https://github.com/eis/pitest-hamcrest-issue-demo/blob/master/pit-jarjar.bat
Broken one: https://github.com/eis/pitest-hamcrest-issue-demo/blob/master/pit-jarjar-nohamcrest.bat

Now, in my trimmed-down example it is obvious that something is wrong in the broken run. In a big project, however, this is a really nasty issue: when the hamcrest library is missing, only some of the tests go unfound while others still run, so you get worse coverage than you should but still get something, which makes it much harder to notice that anything is wrong. That's why I didn't see anything special in the verbose logs at first.

A missing hamcrest should have surfaced as a NoClassDefFoundError or similar somewhere, but it doesn't. It seems to be swallowed somewhere inside PIT.
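To illustrate the failure mode (stand-in code, not pitest's actual source): NoClassDefFoundError is an Error, not an Exception, so it only reaches a broad catch of Throwable, and if that result is merely mapped to "not a runnable test" and discarded, the user never sees the message.

```java
// Stand-in sketch of how a linkage error can be silently swallowed.
// Neither the method nor the catch block is pitest's real code.
public class SwallowSketch {
    static void buildRunner() {
        // Simulates JUnit failing to link a test class whose dependency
        // (e.g. org/hamcrest/SelfDescribing) is absent from the classpath.
        throw new NoClassDefFoundError("org/hamcrest/SelfDescribing");
    }

    public static void main(String[] args) {
        try {
            buildRunner();
        } catch (Throwable t) {
            // If this is translated to "skip the class" instead of being
            // reported, the only visible symptom is lower coverage.
            System.out.println("swallowed: " + t.getMessage());
        }
    }
}
```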

eis avatar Mar 06 '14 20:03 eis

I've now experienced the same issue with another project, where ant.jar was missing from the classpath, and the only clue was 15% line coverage instead of the expected 30%. There needs to be a way of catching this problem.

eis avatar Mar 23 '14 18:03 eis

The problem seems to be this line:

https://github.com/hcoles/pitest/blob/master/pitest/src/main/java/org/pitest/junit/JUnitCustomRunnerTestUnitFinder.java#L62

(the check runner.getClass().isAssignableFrom(ErrorReportingRunner.class) in the private method isNotARunnableTest())

Commenting it out makes the run fail as it should, getting rid of the silent failure with low coverage:

22:00:59 PIT >> INFO : SLAVE : 22:00:59 PIT >> WARNING : JUnit error for class class com.tonicsystems.jarjar.DepFindTest : com.tonicsystems.jarjar.DepFindTest
22:00:59 PIT >> FINE : SLAVE : ERROR Description [testClass=com.tonicsystems.jarjar.DepFindTest, name=com.tonicsystems.jarjar.DepFindTest] -> java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing
22:00:59 PIT >> INFO : SLAVE : java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing
..
22:00:59 PIT >> INFO : Calculated coverage in 0 seconds. Exception in thread "main" org.pitest.help.PitHelpError: All tests did not pass without mutation when calculating line coverage. Mutation testing requires a green suite.
See http://pitest.org for more details.

However, is removing that line the correct course of action?
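For context, the effect of that check can be demonstrated with stand-in classes (JUnit itself is not needed for the illustration; these are not the real JUnit types):

```java
// Stand-ins for the JUnit runner hierarchy, just for this demonstration.
public class FilterSketch {
    static class Runner {}                               // stand-in for org.junit.runner.Runner
    static class ErrorReportingRunner extends Runner {}  // stand-in for JUnit's failure wrapper
    static class BlockJUnit4ClassRunner extends Runner {} // stand-in for a normal runner

    // Paraphrase of the check on the linked line 62.
    static boolean isNotARunnableTest(Runner runner) {
        return runner == null
            || runner.getClass().isAssignableFrom(ErrorReportingRunner.class);
    }

    public static void main(String[] args) {
        // JUnit wraps every failure to build a runner (missing hamcrest on
        // the classpath, a bad @RunWith runner, a non-test class) in an
        // ErrorReportingRunner, so this one check silently discards several
        // very different situations:
        System.out.println(isNotARunnableTest(new ErrorReportingRunner()));   // true
        // A normally constructed runner passes through:
        System.out.println(isNotARunnableTest(new BlockJUnit4ClassRunner())); // false
    }
}
```

Which is exactly why the missing-hamcrest case and the genuinely-not-a-test case are indistinguishable at this point.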

eis avatar May 02 '14 19:05 eis

If I drop that line and execute pitest's own tests, the test case JUnitCustomRunnerTestUnitFinderTest.shouldNotFindTestInNonTestClasses fails. So the line seems to be intended to filter out classes that cannot be instantiated as valid JUnit tests; this particular test case checks that classes like public static class NotATest { } are skipped.

Ignoring ErrorReportingRunner also skips cases where building the runner itself fails, for example when the constructor of the runner throws an exception. That means we cannot distinguish between the class above and a bad runner like:

@RunWith(ABadRunner.class)
public class TestClassWithBadRunner {
    @Test
    public void testMethod() { /* ... */ }
}

public class ABadRunner extends Runner {
    public ABadRunner(Class<?> testClass) { // JUnit passes the test class to the runner's constructor
        throw new RuntimeException("This is an intended initialization error");
    }
    // run() and getDescription() omitted for readability
}

It seems intended to skip the class without test annotations, but the bad runner should be caught and reported (and the test suite should be marked as erroneous).

Maybe we should look at Maven Surefire: Surefire has some complex logic (distinguishing between different JUnit versions) to resolve test classes. The most common logic can be found here: https://github.com/apache/maven-surefire/blob/c8858944ca1409ae368f217cc2f7ca039651bd98/surefire-providers/common-junit4/src/main/java/org/apache/maven/surefire/common/junit4/JUnit4TestChecker.java But there is special logic for JUnit 4.7 and 4.8 (and of course JUnit 3) as well.

JUnit is great but lacks a concept for dynamically finding and filtering tests for execution :-/

StefanPenndorf avatar May 02 '14 20:05 StefanPenndorf

About JUnitCustomRunnerTestUnitFinderTest.shouldNotFindTestInNonTestClasses failing: that's correct. I have a suggestion for how to change the test detection to handle this case. Check out this commit: https://github.com/eis/pitest/compare/test_detection_fix?expand=1

About bad runners, I think the proposed change actually fixes their handling rather than breaking it. Previously they were just silently ignored, so if you use the current version of PIT with the bad runner example you mentioned, you get this:

11:22:29 PIT >> FINE : Exit code was - OK
11:22:29 PIT >> FINE : Slave exited ok
11:22:30 PIT >> FINE : Finished
11:22:30 PIT >> INFO : Completed in 10 seconds
================================================================================

so PIT claims everything went OK despite the bad test runner! I think that's a problem, not correct behavior. With my proposed change, this is what happens with bad runners:

11:23:35 PIT >> INFO : SLAVE : 11:23:35 PIT >> WARNING : JUnit error for class class com.tonicsystems.jarjar.TestClassWithBadRunner : com.tonicsystems.jarjar.TestClassWithBadRunner

11:23:35 PIT >> FINE : SLAVE : ERROR Description [testClass=com.tonicsystems.jarjar.TestClassWithBadRunner, name=com.tonicsystems.jarjar.TestClassWithBadRunner] -> java.lang.RuntimeException: This is an intended initialization error

11:23:35 PIT >> INFO : SLAVE : java.lang.RuntimeException: This is an intended initialization error
11:23:35 PIT >> INFO : SLAVE :
        at com.tonicsystems.jarjar.ABadRunner.<init>(ABadRunner.java:13)
..
11:23:35 PIT >> WARNING : Description [testClass=com.tonicsystems.jarjar.TestClassWithBadRunner, name=com.tonicsystems.jarjar.TestClassWithBadRunner] did not pass without mutation.
11:23:35 PIT >> INFO : Calculated coverage in 0 seconds.
Exception in thread "main" org.pitest.help.PitHelpError: All tests did not pass without mutation when calculating line coverage. Mutation testing requires a green suite.
See http://pitest.org for more details.

Which I think is as it should be.

There's a third point I want to make as well. With my change, the test shouldNotCreateAConfigurationThatFindsTestNGTestsWhenTestNGOnClassPath in the ConfigurationFactoryTest class starts to fail too. However, I think that test is wrong: in this case PIT should create a configuration and then fail at execution time with an error message about TestNG missing from the classpath, not silently ignore the missing testng.

So with this change, I think we'd be fixing

  • the lack of an error about the missing hamcrest library
  • the lack of an error about a bad JUnit test runner
  • the lack of an error about the missing TestNG library

At least I don't immediately see any downsides to my patch.

eis avatar May 03 '14 08:05 eis

Your commit looks like a good starting point, but I think there's room for improvement:

  • the hasTestMethods check should be done before creating the runner - there is no need to create a runner if you ignore it and return an empty Collection regardless of the runner result
  • JUnit 3 test cases need to extend the TestCase class. If you only check for methods starting with "test" you'll probably find too many classes as possible tests (which will fail when executed later and cause a "red" test suite)
  • if there is a JUnit @RunWith annotation with a custom runner, there is no need for a @Test annotation. The custom runner can apply its own logic. Those test cases would be silently skipped now.
  • if JUnit 4 is used with the @Test annotation, abstract classes must be skipped (or they will fail when executed later and cause a "red" test suite)
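Taken together, the four points could be sketched like this. All names here are illustrative, and the JUnit types are stand-ins defined in the sketch itself, not the real framework classes:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class TestClassCheckSketch {
    // Stand-ins for the real JUnit types, so the sketch is self-contained.
    @Retention(RetentionPolicy.RUNTIME) @interface RunWith {}
    @Retention(RetentionPolicy.RUNTIME) @interface Test {}
    static class TestCase {} // stand-in for junit.framework.TestCase

    static boolean looksLikeARunnableTest(Class<?> clazz) {
        // Point 4: abstract classes cannot be run directly.
        if (Modifier.isAbstract(clazz.getModifiers())) return false;
        // Point 3: a custom runner supplies its own discovery logic,
        // so @RunWith alone is sufficient.
        if (clazz.isAnnotationPresent(RunWith.class)) return true;
        // Point 2: JUnit 3 tests must extend TestCase, not merely
        // have methods named test*.
        if (TestCase.class.isAssignableFrom(clazz)) return true;
        // Point 1: for JUnit 4, look for an @Test method before any
        // runner is ever created.
        for (Method m : clazz.getMethods()) {
            if (m.isAnnotationPresent(Test.class)) return true;
        }
        return false;
    }

    // Example classes for each case:
    static class Junit3Style extends TestCase {}
    @RunWith static class CustomRunnerTest {}
    static abstract class AbstractJunit4Test { @Test public void t() {} }
    static class Plain {}

    public static void main(String[] args) {
        System.out.println(looksLikeARunnableTest(Junit3Style.class));        // true
        System.out.println(looksLikeARunnableTest(CustomRunnerTest.class));   // true
        System.out.println(looksLikeARunnableTest(AbstractJunit4Test.class)); // false
        System.out.println(looksLikeARunnableTest(Plain.class));              // false
    }
}
```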

Stefan

StefanPenndorf avatar May 03 '14 20:05 StefanPenndorf

@KyleRogers

JUnit 3

Uh, this is 2014... @hcoles, do you actually claim support for such a prehistoric artifact?

fge avatar May 03 '14 20:05 fge

@fge I've never written a JUnit 3 test case myself, nor seen one in the wild. Nevertheless, JUnit 4 ships with JUnit 3, and I don't know if someone out there has a mixed test base. The JUnit folks have been asked to remove JUnit 3 from JUnit 4 but have always rejected those proposals.

Personally, I don't care whether JUnit 3 will be supported or not, but currently pitest claims to support JUnit 3. I just tried to maintain JUnit 3 support.

StefanPenndorf avatar May 03 '14 20:05 StefanPenndorf

There is at least one reason I'm aware of for having mixed JUnit 4 and JUnit 3 tests, which is grouping test suites. I wouldn't vote for dropping JUnit 3 support if there is no specific reason to do it.

About checking for TestCase, @RunWith and the others: PIT already uses TestInfo (called by JUnitTestClassIdentifier <- CodeSource <- DefaultCoverageGenerator) to determine whether a class is a valid test class, so classes that don't get through there are never used anyway. That checks for TestCase, @RunWith and the rest. I also tested and confirmed this.

The longer I look at the code, the more I start to think the whole isNotARunnableTest method serves little purpose - almost all the valid checks have already been done at that point, and if something is still failing, it should fail with a full error message, not silently, which is what this method causes. If the method is removed, anything that's about to fail fails with a proper error message. I think my hasTestMethods isn't needed either, for the same reason.

It does check for a Parameterized runner, which makes sense, as those are handled by ParameterizedJunitTestFinder rather than this one. At this point in the execution I'd expect the code to check only for cases that are valid tests for some other finder but not this one; parameterized JUnit tests seem like one such case. JUnit 3 error or warning cases, for example, which it also checks, are not valid tests for any runner, so those should just fail rather than being silently ignored.

eis avatar May 04 '14 08:05 eis

@fge Yes, I want to maintain JUnit 3 support - pitest began life as a solution for working with a particular legacy codebase full of JUnit 3 tests. Plenty of corporate codebases still have them, even if people aren't writing new ones.

@eis I think things will get past JUnitTestClassIdentifier that aren't executable tests, e.g. abstract classes. There is, however, probably a good argument for moving all the checks there if possible (I think the list may be longer than just abstract classes - pit's test suite ought to contain examples). PIT has encountered some strange things in test suites in the wild - this code has been updated by trial and error to cope with them.

hcoles avatar May 04 '14 16:05 hcoles

Actually, that isn't quite right. The JUnitTestClassIdentifier needs to do a less stringent check, as it is used to separate test-related code from production code in codebases that build to a single output dir (yes, some ant projects are built like this). So, for example, if there is an abstract base test class, it must be identified as "test-related" code by the JUnitTestClassIdentifier, but identified as not a test by the finder.

Messy stuff.
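The two-level split can be illustrated like this. Everything here is hypothetical (including the crude name-based heuristic), not pitest's actual API:

```java
import java.lang.reflect.Modifier;

public class TwoLevelSketch {
    // A shared abstract base class and a concrete test that extends it
    // (hypothetical example names).
    static abstract class BaseIntegrationTest { /* shared setup */ }
    static class LoginTest extends BaseIntegrationTest { /* @Test methods ... */ }

    // Broad check: is this "test-related" code? Used to keep test code out
    // of mutation targets when tests and production code share an output dir.
    static boolean isTestRelated(Class<?> c) {
        return c.getSimpleName().endsWith("Test"); // crude heuristic for the sketch
    }

    // Strict check: can this class actually be executed as a test?
    static boolean isRunnableTest(Class<?> c) {
        return isTestRelated(c) && !Modifier.isAbstract(c.getModifiers());
    }

    public static void main(String[] args) {
        // The abstract base is test-related but must not be run:
        System.out.println(isTestRelated(BaseIntegrationTest.class)); // true
        System.out.println(isRunnableTest(BaseIntegrationTest.class)); // false
        // The concrete subclass passes both checks:
        System.out.println(isRunnableTest(LoginTest.class)); // true
    }
}
```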

hcoles avatar May 04 '14 19:05 hcoles

I wonder how one would even create a test case for these situations. I tried, but it seems quite tricky. Should PIT have integration tests in addition to unit tests?

Anyway, IMO this is an important one to fix. If we have a consensus about what needs to be done, I'd be eager to do it. I guess the suggestion of just removing this line and changing the tests accordingly is still on the table.

eis avatar May 04 '14 20:05 eis

@eis Yes, testing is a little awkward. The general strategy has been to embed examples of the various types of invalid/valid tests within static classes (so the surefire runner can't see them) and then check that the various bits of pitest do/don't detect them as tests correctly. I just had a quick scan through, and they don't look complete, which is worrying.

There is a (small) set of integration tests within the maven-verification module - these exercise everything end to end via the maven plugin. They don't currently cover this area.

I agree that this is important to fix; simply removing the line seems reasonable. I'd prefer to do this after the 0.34 release (which should be this month) so there's plenty of time to discover any issues.

hcoles avatar May 05 '14 19:05 hcoles

Good to hear, let's do that. That way there should also be enough time to build reasonable tests covering this and other similar cases, which will probably take a while.

eis avatar May 06 '14 05:05 eis

I have a change I'm playing with locally that separates the test framework support out into a plugin.

This should allow us to create a new JUnit 4-only plugin and ignore all the legacy concerns, which ought to be much simpler to maintain. The existing "plugin" can remain for use on legacy codebases.

hcoles avatar Aug 11 '14 09:08 hcoles

I had a similar issue; I downloaded hamcrest 1.3 from https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/hamcrest/hamcrest-1.3.tgz, added it to my project, and the tests ran without any errors.

johnswekenyi avatar May 08 '17 09:05 johnswekenyi

Closing as 10 years old.

hcoles avatar Apr 02 '24 09:04 hcoles