Less logging during tests
Logging is a relatively expensive operation. Logging during tests is only useful if it is actually being read (so most likely it is ignored during a successful run), so it should be reduced to a minimum, and where the logged data is important, it should be converted to an assertion (see the sketch below). After this you'll have a much better overview of what the build is doing, and you might suddenly spot warnings that were previously overlooked.
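To illustrate what "convert the logged data to an assertion" can look like, here is a minimal sketch. It assumes JUnit 5; the class, method and expected value are made up for illustration and are not taken from this PR:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RebalanceExampleTest {  // hypothetical test, not from this PR

  @Test
  void segmentCountIsStableAfterRebalance() {
    int segmentCount = rebalanceAndCountSegments();  // hypothetical helper

    // Before: the interesting value was only logged, so a regression is
    // noticed only if somebody happens to read the build output.
    // LOGGER.info("Segment count after rebalance: {}", segmentCount);

    // After: the expectation is verified on every run and nothing is
    // printed on the happy path.
    assertEquals(4, segmentCount, "unexpected segment count after rebalance");
  }

  private int rebalanceAndCountSegments() {
    return 4;  // stand-in so the sketch compiles; real code would do the work
  }
}
```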
After this change I still noticed a few things:
- A couple of test classes have 0 executed tests. This could be because the class is not actually a test. Be aware that the maven-surefire-plugin has a default set of includes, i.e. class names starting with Test or ending with Test, Tests or TestCase, but excluding nested classes. For these cases consider an explicit exclude (see the pom.xml sketch after this list) or rename the class so you don't have to wonder again why the number of executed tests is zero.
- The use of jul-to-slf4j should be avoided as it can have a huge impact on performance. Most likely it can be replaced with slf4j-jdk-platform-logging (see the System.Logger sketch after this list).
- Based on the output of the tests, there are still a few places where output is written directly to System.out (either in the test or in the main class). These should be rewritten (see the logger sketch after this list).
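For the first remark, a minimal sketch of an explicit exclude in the maven-surefire-plugin configuration. The class name TestUtils is a hypothetical example of a utility class whose name matches the default Test* include pattern even though it contains no tests:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Hypothetical utility class: the name matches the default
           Test* include pattern, but it contains no test methods. -->
      <exclude>**/TestUtils.java</exclude>
    </excludes>
  </configuration>
</plugin>
```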
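For the jul-to-slf4j remark: slf4j-jdk-platform-logging (SLF4J 2.x) binds the JDK's System.Logger API (Java 9+) to SLF4J, so the swap only works for code that logs through System.Logger rather than java.util.logging. A hedged sketch of what such a call site looks like; the class name and message are made up:

```java
// Requires Java 9+ for System.Logger and org.slf4j:slf4j-jdk-platform-logging
// on the runtime classpath so that this output is routed to SLF4J.
public class PlatformLoggingExample {  // hypothetical class, not from this PR

  private static final System.Logger LOGGER =
      System.getLogger(PlatformLoggingExample.class.getName());

  public void doWork() {
    // Handled by the configured SLF4J backend instead of java.util.logging,
    // so the jul-to-slf4j bridge is no longer needed for this code.
    LOGGER.log(System.Logger.Level.DEBUG, "work started");
  }
}
```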
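For the System.out remark, a minimal sketch of the usual rewrite to an SLF4J logger (SLF4J appears to be in use already, given the previous remark); the class and message are hypothetical:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class QueryRunnerExample {  // hypothetical class, not from this PR

  private static final Logger LOGGER = LoggerFactory.getLogger(QueryRunnerExample.class);

  public void run(String query) {
    // Before: always printed, cannot be silenced via the logging configuration.
    // System.out.println("Running query: " + query);

    // After: only emitted when DEBUG is enabled for this logger, so it
    // disappears from quiet test runs but can be switched back on when needed.
    LOGGER.debug("Running query: {}", query);
  }
}
```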
With this PR you'll already see a huge difference. The remarks above are things that can be optimized later.
Best practices and other information can be found at https://github.com/rfscholte/lessLogging
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 57.88%. Comparing base (59551e4) to head (d561cb3). Report is 1038 commits behind head on master.
Additional details and impacted files
```diff
@@             Coverage Diff              @@
##             master   #14003      +/-   ##
============================================
- Coverage     61.75%   57.88%     -3.87%
- Complexity      207      219       +12
============================================
  Files          2436     2617      +181
  Lines        133233   143433    +10200
  Branches      20636    22021     +1385
============================================
+ Hits          82274    83026      +752
- Misses        44911    53917     +9006
- Partials       6048     6490      +442
```
| Flag | Coverage Δ | |
|---|---|---|
| custom-integration1 | <0.01% <ø> (-0.01%) | :arrow_down: |
| integration | <0.01% <ø> (-0.01%) | :arrow_down: |
| integration1 | <0.01% <ø> (-0.01%) | :arrow_down: |
| integration2 | 0.00% <ø> (ø) | |
| java-11 | 57.86% <ø> (-3.84%) | :arrow_down: |
| java-21 | 57.77% <ø> (-3.86%) | :arrow_down: |
| skip-bytebuffers-false | 57.88% <ø> (-3.87%) | :arrow_down: |
| skip-bytebuffers-true | 30.66% <ø> (+2.94%) | :arrow_up: |
| temurin | 57.88% <ø> (-3.87%) | :arrow_down: |
| unittests | 57.88% <ø> (-3.87%) | :arrow_down: |
| unittests1 | 40.74% <ø> (-6.15%) | :arrow_down: |
| unittests2 | 27.96% <ø> (+0.23%) | :arrow_up: |
Flags with carried forward coverage won't be shown.
:umbrella: View full report in Codecov by Sentry.
> Logging is a relatively expensive operation. Logging during tests is only useful if it is actually being read (so most likely it is ignored during a successful run), so it should be reduced to a minimum, and where the logged data is important, it should be converted to an assertion.
I can see it may be useful to log less, but the numbers don't show that it is an expensive operation. Specifically, the tests on this PR took as long as the tests in other PRs. I can see advantages when logs are being read (GH takes quite long to load the 20k log lines from the other PRs, while it is pretty fast with the 2k in this PR), and I'm glad to merge this because it can help to detect Maven issues in the future, but we need to include some text explaining how to enable logs in case we want to fix a broken test.