[Python] Add a couple quality-of-life improvements to `testing.util.assert_that`
These changes extend `assert_that` to accommodate a couple of common footguns I've seen.
- `assert_that` will now check whether it's being called with a pipeline that has already been run. This catches the following circumstance:

  ```python
  with beam.Pipeline() as p:
    my_pcoll = p | ...  # some transform
  assert_that(my_pcoll, equal_to([1, 2, 3]))  # should've been indented
  ```
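A minimal sketch of the kind of guard described above. This is plain Python with stand-in names (`FakePipeline`, `guarded_assert_that`, and the `has_run` attribute are all illustrative, not Beam's actual implementation):

```python
class FakePipeline:
    """Stand-in for beam.Pipeline (hypothetical, not Beam's real API)."""
    def __init__(self):
        self.has_run = False

    def run(self):
        self.has_run = True


def guarded_assert_that(pipeline):
    """Refuse to attach an assertion to a pipeline that has already run."""
    if pipeline.has_run:
        raise RuntimeError(
            "assert_that called on a pipeline that has already been run; "
            "did you mean to call it inside the `with` block?")


p = FakePipeline()
p.run()
try:
    guarded_assert_that(p)  # too late: the pipeline already ran
except RuntimeError as err:
    caught = err
```

With a guard like this, the mis-indented example above would fail loudly instead of silently passing.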
- `assert_that` now automatically creates a unique label if the label it's given is already taken. When writing unit tests, we rarely need very specific labels for assertions, and until now the workaround was to manually increment every assertion's label, which is quite tedious.
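The label uniquification can be sketched roughly like this (a plain-Python illustration with a hypothetical helper name, not the actual Beam code):

```python
def unique_label(label, existing):
    """Return `label` unchanged if unused, else `label_N` for the
    first integer N that makes it unique within `existing`."""
    if label not in existing:
        return label
    n = 1
    while f"{label}_{n}" in existing:
        n += 1
    return f"{label}_{n}"


taken = {"assert_that", "assert_that_1"}
# A clashing label gets the next free numeric suffix.
label = unique_label("assert_that", taken)
```

The upshot is that repeated bare `assert_that` calls in one test no longer collide on the default label.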
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.
I ran the linting instructions here: https://cwiki.apache.org/confluence/display/BEAM/Spotless+pre-commit, but they had no effect on any files.
Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`:

R: @liferoad for label python.

Available commands:

- `stop reviewer notifications` - opt out of the automated review tooling
- `remind me after tests pass` - tag the comment author after tests pass
- `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).
Turns out I jumped the gun and there are more than just lint failures. There were a few true test failures that came up from tests that weren't actually using `assert_that` correctly. I fixed a few, but the `groupby` example tests weren't trivial to fix. I confirmed that if I changed the assertions in those tests, they would've always passed before the changes in this PR.
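To illustrate why such assertions could "always pass" before, here's a toy simulation (not Beam code; all names here are hypothetical) of a deferred check that gets registered too late to ever execute:

```python
class ToyPipeline:
    """Toy stand-in for a pipeline that defers checks until run()."""
    def __init__(self):
        self.checks = []
        self.ran = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.run()
        return False

    def run(self):
        self.ran = True
        for check in self.checks:
            check()


def toy_assert_that(pipeline, check):
    # A check registered after run() is silently dropped, so a wrong
    # expectation is never evaluated and the test "passes".
    if not pipeline.ran:
        pipeline.checks.append(check)


calls = []
with ToyPipeline() as p:
    pass
# Mis-indented assertion: registered after the pipeline finished,
# so it is never executed, right or wrong.
toy_assert_that(p, lambda: calls.append("ran"))
```

In this toy model, only checks attached inside the `with` block ever run, which mirrors the silent-pass behavior described above.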
Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 71.44%. Comparing base (827e96d) to head (ef4829d). Report is 113 commits behind head on master.

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master   #30771       +/-   ##
===========================================
+ Coverage   38.52%   71.44%   +32.92%
===========================================
  Files         698      710       +12
  Lines      102371   104823     +2452
===========================================
+ Hits        39440    74894    +35454
+ Misses      61301    28299    -33002
  Partials     1630     1630
```

Flag | Coverage Δ |
---|---|
python | 81.23% <100.00%> (+52.08%) :arrow_up: |

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
> Turns out I jumped the gun and there are more than just lint failures. There were a few true test failures that came up from tests that weren't actually using `assert_that` correctly. I fixed a few, but the `groupby` example tests weren't trivial to fix. I confirmed that if I changed the assertions in those tests, they would've always passed before the changes in this PR.

So what is the plan here? Close this PR for now, or do you want to keep working on it?
I pushed test fixes and skips, so we should be all set.
I created a spinoff issue, since I think fixing those tests (which were already broken before these changes) is probably out of scope.
Bump @liferoad whenever you have a moment
Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 71.44%. Comparing base (c44b454) to head (ef4829d). Report is 629 commits behind head on master.

:exclamation: Current head ef4829d differs from pull request most recent head ec84163. Please upload reports for the commit ec84163 to get more accurate results.

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master   #30771      +/-   ##
============================================
+ Coverage   71.27%   71.44%   +0.17%
============================================
  Files         904      710     -194
  Lines      112898   104823    -8075
  Branches     1076        0    -1076
============================================
- Hits        80471    74894    -5577
+ Misses      30408    28299    -2109
+ Partials     2019     1630     -389
```

Flag | Coverage Δ |
---|---|
java | ? |
python | 81.23% <100.00%> (-0.04%) :arrow_down: |

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
The last commit failed with some weird pydantic/hypothesis errors that seem unrelated to these changes
R: @jrmccluskey for final approval
Gah, I have no idea where this failure could've come from:

```
run()
  File "/runner/_work/beam/beam/sdks/python/test-suites/tox/py38/build/srcs/sdks/python/target/.tox-py38-tft-113/py38-tft-113/lib/python3.8/site-packages/hypothesis/entry_points.py", line 35, in run
    hook = entry.load()
  File "/runner/_work/beam/beam/sdks/python/test-suites/tox/py38/build/srcs/sdks/python/target/.tox-py38-tft-113/py38-tft-113/lib/python3.8/site-packages/setuptools/_vendor/importlib_metadata/__init__.py", line 208, in load
    module = import_module(match.group('module'))
  File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'pydantic._hypothesis_plugin'
```
I think that error is unrelated to this change, I see it on other PRs as well. Filed: https://github.com/apache/beam/issues/30852
I think this might be ready to be merged then if it's safe to ignore the pydantic/hypothesis errors
The pydantic errors should be resolved now. PTAL at the linter errors.
Reminder, please take a look at this pr: @jrmccluskey
Assigning new set of reviewers because PR has gone too long without review. If you would like to opt out of this review, comment `assign to next reviewer`:

R: @damccorm for label python.

Available commands:

- `stop reviewer notifications` - opt out of the automated review tooling
- `remind me after tests pass` - tag the comment author after tests pass
- `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)
This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that’s incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the [email protected] list. Thank you for your contributions.
Looks like this is very close to crossing the finish line - @hjtran any chance you can take another look at this?
Yes, I'll try to take another round of looks at all the PRs I have up this week. I haven't been able to get a reliable dev environment going so every round of iteration has been a bit tough
> I haven't been able to get a reliable dev environment going so every round of iteration has been a bit tough
Oh that's not good. We have https://s.apache.org/beam-python-dev-wiki that might help, or if not we can definitely edit the instructions there.
Some people previously wrote and used https://github.com/apache/beam/blob/master/local-env-setup.sh specifically for this purpose (getting a working environment fast), but I'm not sure the script gets enough usage; it possibly hasn't been maintained, and I haven't used it recently.
Yeah, I used that wiki article quite a bit and also used local-env-setup.sh. I should've written more notes on what went wrong, as I've forgotten now. There was some kind of Go installation issue when I tried to use the container setup, and then some kind of dependency issue when I tried to set up a local environment. There are a few threads in the dev@ mailing list where I think I mentioned these things.
I'll give it another go sometime soon and report back
Reminder, please take a look at this pr: @damccorm
@damccorm finally got a consistent dev environment up. I updated this branch according to @robertwb's last comment. Hopefully it looks good now.