Enable BigQueryIO write throttling detection
Tested with decreased quota (appendRow quota capped to 5 GB/min)
Before/after: (autoscaling screenshots omitted)
Both pipelines still failed, as the BigQuery service was severely throttled. Nevertheless, autoscaling now downscales in response, which is better than the previous behavior.
Tuning is needed on the Dataflow side to get the pipeline running smoothly. Most importantly, the downscale decision is currently not made until 3+3 = 6 minutes into the pipeline run, by which point work items are already failing.
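For reviewers unfamiliar with the mechanism, here is a minimal sketch of how a write transform can surface time spent blocked on quota errors as a throttling signal to the runner. The counter name `throttling-msecs`, the backoff numbers, and the `appendRows` helper are illustrative assumptions, not the exact code in this PR:

```java
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.DoFn;

// Sketch: report time spent blocked on BigQuery quota errors so the
// runner's autoscaler can observe a throttling signal.
class ThrottleAwareWriteFn extends DoFn<String, Void> {
  // Assumption: "throttling-msecs" is the counter name the runner watches
  // for throttling signals.
  private final Counter throttlingMsecs =
      Metrics.counter(ThrottleAwareWriteFn.class, "throttling-msecs");

  @ProcessElement
  public void processElement(ProcessContext c) throws InterruptedException {
    long backoffMillis = 1000;
    while (true) {
      try {
        appendRows(c.element()); // hypothetical BigQuery append call
        return;
      } catch (RuntimeException quotaError) {
        // Record the time we are about to spend waiting as throttled time.
        throttlingMsecs.inc(backoffMillis);
        Thread.sleep(backoffMillis);
        backoffMillis = Math.min(backoffMillis * 2, 60_000);
      }
    }
  }

  private void appendRows(String row) {
    // Placeholder for the real Storage Write API call.
  }
}
```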
R: @ahmedabu98 @JayajP
Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control
> Most importantly, the downscale decision is currently not made until 3+3 = 6 minutes into the pipeline run, by which point work items are already failing.

What does each 3 mean? Is there a way to get around it?
This is a Dataflow autoscaler strategy issue.
The first 3 minutes come from the fact that the first throttling signal from the backend only appears about 3 minutes after the pipeline starts running. Example log:
F11 is throttled (fraction of time throttled = 0.2472). Recommend 75.28 threads instead of 100
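(The recommended thread count appears to be the current count scaled by the unthrottled fraction: 100 × (1 − 0.2472) = 75.28.)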
After that, a downscale signal is emitted every 30 s.
The second 3 minutes come from the requirement that the downscale signal must be stable for 3 minutes before the autoscaler takes action.
- Example recommendation < 3 min:
why:
Desire to downscale because overall work duration is 17h59m48.01785516525s and desired parallelism is 74 of which 100% is allocated to this pool, but there was a large decrease for only 2m30.000081865s, less than 3m
- Example recommendation at 3 min:
why:
Downscaling because overall work duration is 18h15m50.3475206785s and desired parallelism is 65 of which 100% is allocated to this pool and there was a large decrease for more than 3m
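Putting the two windows together, here is a tiny illustration of the earliest possible downscale under the observed timings. The numbers come from the logs above; this is a sketch of the observed behavior, not Dataflow source:

```java
// Illustration only: why the first downscale cannot happen before ~6 minutes,
// given the behavior observed in the logs above.
public class DownscaleTimeline {
  public static void main(String[] args) {
    int firstSignalMin = 3;      // first throttling signal ~3 min after start (observed)
    int stabilityWindowMin = 3;  // downscale signal must hold for 3 min (observed)
    int signalPeriodSec = 30;    // downscale recommendation emitted every 30 s (observed)

    int earliestDownscaleMin = firstSignalMin + stabilityWindowMin;
    int stableSignalsNeeded = stabilityWindowMin * 60 / signalPeriodSec;
    System.out.printf(
        "Earliest downscale: %d min after start (%d consecutive stable signals, %d s apart)%n",
        earliestDownscaleMin, stableSignalsNeeded, signalPeriodSec);
  }
}
```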
We may (and should) optimize the Dataflow autoscaler; this is an internal task (not Beam).
I see, thanks for providing those details! This LGTM.