
[Issue 11312] Data test run results always show failures

Open vglocus opened this pull request 9 months ago • 5 comments

Resolves #11312

Problem

Previously, a RunResult for a data test always showed 0 failures when the test passed, regardless of what the DataTestResult returned. For example, a test configured to warn or error only when the failure count is > 10 that actually finds 4 failing records reports 0 failures in the RunResult, because the test counts as passing.
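For context, these thresholds come from dbt's test severity configs. A minimal illustrative schema entry (the model, column, and threshold values are made up for the example; `warn_if` and `error_if` are real dbt configs):

```yaml
version: 2

models:
  - name: orders               # illustrative model name
    columns:
      - name: status
        tests:
          - not_null:
              config:
                warn_if: ">10"   # warn only above 10 failing records
                error_if: ">10"  # error only above 10 failing records
```

A run that finds 4 failing records passes this test, yet the RunResult reported failures=0 instead of 4.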

Solution

For data tests, always set failures on the RunResult to the actual failing-record count, even when the test passes because that count is below the configured warn/error threshold.
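Concretely, the change amounts to something like the following minimal sketch. The DataTestResult and RunResult classes here are simplified stand-ins, not dbt-core's actual types, and the numeric threshold fields are an assumption for illustration:

```python
from dataclasses import dataclass


@dataclass
class DataTestResult:       # simplified stand-in, not dbt-core's real class
    failures: int           # failing records returned by the test query
    warn_threshold: int
    error_threshold: int


@dataclass
class RunResult:            # simplified stand-in, not dbt-core's real class
    status: str
    failures: int


def build_run_result(r: DataTestResult) -> RunResult:
    if r.failures > r.error_threshold:
        status = "error"
    elif r.failures > r.warn_threshold:
        status = "warn"
    else:
        status = "pass"
    # Before this PR: failures was forced to 0 whenever status == "pass".
    # After: the actual failing-record count is always carried through.
    return RunResult(status=status, failures=r.failures)
```

For example, build_run_result(DataTestResult(failures=4, warn_threshold=10, error_threshold=10)) now yields status="pass" with failures=4 rather than failures=0.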

Checklist

  • [x] I have read the contributing guide and understand what's expected of me.
  • [ ] I have run this code in development, and it appears to resolve the stated issue.
  • [ ] This PR includes tests, or tests are not required or relevant for this PR.
  • [x] This PR has no interface changes (e.g., macros, CLI, logs, JSON artifacts, config files, adapter interface, etc.) or this PR has already received feedback and approval from Product or DX.
  • [ ] This PR includes type annotations for new and modified functions.

vglocus avatar Feb 15 '25 08:02 vglocus

Thanks for your pull request, and welcome to our community! We require contributors to sign our Contributor License Agreement and we don't seem to have your signature on file. Check out this article for more information on why we have a CLA.

In order for us to review and merge your code, please submit the Individual Contributor License Agreement form attached above. If you have questions about the CLA, or if you believe you've received this message in error, please reach out through a comment on this PR.

CLA has not been signed by users: @vglocus

cla-bot[bot] avatar Feb 15 '25 08:02 cla-bot[bot]

Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.

github-actions[bot] avatar Feb 15 '25 08:02 github-actions[bot]

Codecov Report

:white_check_mark: All modified and coverable lines are covered by tests.
:white_check_mark: Project coverage is 86.42%. Comparing base (aa89740) to head (0ebbf88).
:warning: Report is 195 commits behind head on main.

:x: Your patch check has failed because the patch coverage (0.00%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #11313      +/-   ##
==========================================
- Coverage   88.96%   86.42%   -2.55%     
==========================================
  Files         189      190       +1     
  Lines       24170    24194      +24     
==========================================
- Hits        21504    20910     -594     
- Misses       2666     3284     +618     
Flag          Coverage Δ
integration   82.78% <100.00%> (-3.53%) :arrow_down:
unit          62.71% <0.00%> (+0.17%) :arrow_up:

Flags with carried forward coverage won't be shown.

Components          Coverage Δ
Unit Tests          62.71% <0.00%> (+0.17%) :arrow_up:
Integration Tests   82.78% <100.00%> (-3.53%) :arrow_down:

codecov[bot] avatar Mar 17 '25 11:03 codecov[bot]

@vglocus Thanks for (re)opening this PR! I agree this is a good change. As @dbeatty10 explained in https://github.com/dbt-labs/dbt-core/issues/9808#issuecomment-2034482122, it is highly unlikely to represent a behavior change for anyone, since nobody should be relying on the (IMO surprising & incorrect) failures=0 to represent "success" for tests that return a nonzero count of failing records below the configured warn/error threshold.

Could you please update some of the functional tests here, so we can ensure this works going forward? (You can look at https://github.com/dbt-labs/dbt-core/pull/9657 for inspiration; I'd recommend giving credit by adding @tbog357 as a co-contributor in the changelog entry. A sketch of the kind of test being requested follows below.)

jtcohen6 avatar Apr 07 '25 12:04 jtcohen6
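For readers following along, here is a rough sketch of the kind of functional test being requested, written against dbt-core's pytest framework. The model contents, threshold values, and class name are invented for illustration, and the assertion on failures reflects the behavior after this PR:

```python
import pytest

from dbt.tests.util import run_dbt

# One row violates not_null, which stays below the >10 threshold.
my_model_sql = """
select 1 as id
union all
select null as id
"""

schema_yml = """
version: 2
models:
  - name: my_model
    columns:
      - name: id
        tests:
          - not_null:
              config:
                warn_if: ">10"
                error_if: ">10"
"""


class TestPassingTestReportsActualFailures:
    # `project` below is dbt-core's standard functional-test fixture;
    # `models` supplies the project files for it.
    @pytest.fixture(scope="class")
    def models(self):
        return {"my_model.sql": my_model_sql, "schema.yml": schema_yml}

    def test_failures_are_surfaced(self, project):
        run_dbt(["run"])
        results = run_dbt(["test"])
        result = results[0]
        assert result.status == "pass"  # 1 failing record is under the threshold
        assert result.failures == 1     # previously this was reported as 0
```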