[Python] Log dependencies installed in submission environment
Saves the submission environment's dependencies and stages them, then logs them along with the runtime dependencies.
Fixes #28563
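At a high level, capturing the submission environment's dependencies amounts to enumerating the packages installed in the interpreter the pipeline is submitted from. A minimal sketch of that idea (illustrative only; the function name is hypothetical and this is not Beam's actual implementation):

```python
import importlib.metadata


def list_submission_dependencies():
    """Return sorted 'name==version' strings for packages installed in the
    environment the pipeline is submitted from (illustrative helper)."""
    deps = []
    for dist in importlib.metadata.distributions():
        name = dist.metadata["Name"]
        if name:  # skip metadata entries without a usable name
            deps.append(f"{name}=={dist.version}")
    return sorted(deps)
```

The resulting list could then be written to a staged file and logged alongside the runtime dependencies.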
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- [ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
- [ ] Update `CHANGES.md` with noteworthy changes.
- [ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 38.47%. Comparing base (0bbf2c3) to head (80d6d18). Report is 487 commits behind head on master.
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master   #28564      +/-   ##
==========================================
+ Coverage   38.23%   38.47%   +0.24%
==========================================
  Files         696      698       +2
  Lines      101878   102520     +642
==========================================
+ Hits        38952    39449     +497
- Misses      61309    61439     +130
- Partials     1617     1632      +15
```
| Flag | Coverage Δ | |
|---|---|---|
| go | 54.33% <ø> (+0.39%) | :arrow_up: |
Flags with carried forward coverage won't be shown.
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.
Run Python_Integration PreCommit
Some unit tests are failing because there is an additional staging file now which will always be present. I've got the solution; I'll update the PR. Defer review until then.
Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`:
R: @jrmccluskey for label python.
Available commands:
- `stop reviewer notifications` - opt out of the automated review tooling
- `remind me after tests pass` - tag the comment author after tests pass
- `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)
The PR bot will only process comments in the main thread (not review comments).
R: @chamikaramj could you comment on the external transform environment? The external_transform environment tests would fail if there is an additional staging file by default.
It complains about no artifact service when it tries to resolve that artifact.
Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control
Can we stage job submission dependencies without including them in the runtime environment definition?
Summary:
We have two issues to address:
- Resolve artifact comparison in the environment's `__eq__` method
- Log/skip dependency logging for external environments
This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that’s incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the [email protected] list. Thank you for your contributions.
Hey @riteshghorse , could we extract the commits that log the runtime dependencies and merge those before next release cut while submission dependencies portion is being sorted out? Thanks!
Sounds good
Created #29705
I've verified that a multi-language pipeline from Python to Java works - Job
So it is a problem with just the test expansion service
I'll add a fake artifact_service method to the test expansion service; that should get us going here.
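For the record, a fake along these lines could be as simple as an in-memory stub (purely illustrative; the class and method names here are made up and do not reflect Beam's real artifact service API):

```python
# Hypothetical in-memory fake of an artifact service for a test expansion
# service: it hands back the staged artifacts unchanged, so artifact
# resolution no longer fails with "no artifact service".
class FakeArtifactService:
    def __init__(self):
        self._staged = {}

    def stage(self, token, artifacts):
        """Record the artifacts staged under a retrieval token."""
        self._staged[token] = list(artifacts)

    def resolve(self, token):
        """Return staged artifacts as-is; a real service might rewrite
        them into fetchable forms."""
        return self._staged.get(token, [])
```

Since the test only needs resolution to succeed, returning the artifacts unmodified is enough.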
R: @tvalentyn this is ready for review
Changes to note:
- Changed the artifact comparison logic to ignore the type payload field since that has unique hashes
- The external transform test failure was because of `ExpansionServiceServicer` not having an artifact service method, so I added that. Confirmed this by running a multi-language pipeline successfully - Job Link
- The staging logic stays in `stager.py` since we ultimately call `create_job_resources` from `python_sdk_dependencies()`, which is invoked during environment creation.
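The comparison change in the first bullet can be sketched roughly as follows (a simplified stand-in using plain dicts rather than Beam's actual artifact protos; field names are illustrative):

```python
# Simplified stand-in for the artifact comparison change: compare two lists
# of artifacts while ignoring the type_payload field, whose content hashes
# differ even for otherwise-identical artifacts.
def artifacts_equal(left, right):
    def key(artifact):
        # Every field except type_payload participates in equality.
        return (artifact.get("type_urn"),
                artifact.get("role_urn"),
                artifact.get("role_payload"))

    return (len(left) == len(right)
            and all(key(a) == key(b) for a, b in zip(left, right)))
```

This way two environments that stage the same files compare equal even though each staging run produces a fresh hash in the payload.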
R: @tvalentyn
Barring the lint failure, all tests pass.
Is this ready to merge?
I left one comment; after that it should be ready to merge.
Done, I'll merge once the check passes