
Enable BigQueryIO write throttling detection

Open Abacn opened this issue 1 year ago • 7 comments



Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • [ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • [ ] Update CHANGES.md with noteworthy changes.
  • [ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels Python tests Java tests Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

Abacn avatar May 10 '24 20:05 Abacn

Tested with a decreased quota (appendRow quota capped to 5 GB/min)

before: [screenshot]

after: [screenshot]

Both pipelines still failed, as the BigQuery service was severely throttled. Nevertheless, autoscaling now downscales, which is better than before.

The pipeline still needs tuning on the Dataflow side to run smoothly. Most importantly, the downscale decision is currently not made until 3 + 3 = 6 minutes into the pipeline run, by which point work items are already failing.

Abacn avatar May 13 '24 20:05 Abacn

R: @ahmedabu98 @JayajP

Abacn avatar May 14 '24 18:05 Abacn

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control

github-actions[bot] avatar May 14 '24 19:05 github-actions[bot]

Most importantly, the downscale decision is currently not made until 3 + 3 = 6 minutes into the pipeline run, by which point work items are already failing


What does each 3 mean? Is there a way to get around it?

ahmedabu98 avatar May 15 '24 20:05 ahmedabu98

Most importantly, the downscale decision is currently not made until 3 + 3 = 6 minutes into the pipeline run, by which point work items are already failing

What does each 3 mean? Is there a way to get around it?

This is part of the Dataflow autoscaler strategy.

The first 3 min: the first throttling signal from the backend appears about 3 minutes after the pipeline starts running. Example log:

F11 is throttled (fraction of time throttled = 0.2472). Recommend 75.28 threads instead of 100

Then a downscale signal is emitted every 30 s.
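The arithmetic behind that log line is straightforward: scale the current thread count by the fraction of time *not* throttled. A minimal sketch of that relationship (a hypothetical helper for illustration only, not actual Beam or Dataflow autoscaler code):

```java
public class ThrottleRecommendation {
    // Hypothetical illustration: scale current parallelism by the
    // un-throttled fraction of time, matching the example log line.
    static double recommendThreads(int currentThreads, double throttledFraction) {
        return currentThreads * (1.0 - throttledFraction);
    }

    public static void main(String[] args) {
        // 100 threads with "fraction of time throttled = 0.2472"
        // -> recommend 75.28 threads, as in the log above.
        System.out.printf("%.2f%n", recommendThreads(100, 0.2472)); // prints 75.28
    }
}
```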

The second 3 min: the downscale signal must be stable for 3 minutes before the autoscaler takes action.

  • Example recommendation < 3 min:
    why: Desire to downscale because overall work duration is 17h59m48.01785516525s and desired parallelism is 74 of which 100% is allocated to this pool, but there was a large decrease for only 2m30.000081865s, less than 3m
  • Example recommendation at 3 min:
    why: Downscaling because overall work duration is 18h15m50.3475206785s and desired parallelism is 65 of which 100% is allocated to this pool and there was a large decrease for more than 3m
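The two example recommendations differ only in whether the large decrease has persisted past the 3-minute stability window. A hedged sketch of that gate (hypothetical names, not the actual Dataflow autoscaler implementation):

```java
import java.time.Duration;

public class DownscaleGate {
    // 3-minute stability window, per the log messages above.
    static final Duration STABILITY_WINDOW = Duration.ofMinutes(3);

    // Act on a lower recommendation only once the decrease has
    // lasted at least the full stability window.
    static boolean shouldDownscale(Duration decreaseDuration) {
        return decreaseDuration.compareTo(STABILITY_WINDOW) >= 0;
    }

    public static void main(String[] args) {
        // 2m30s of large decrease: "less than 3m" -> no action yet
        System.out.println(shouldDownscale(Duration.ofSeconds(150))); // prints false
        // "large decrease for more than 3m" -> downscale
        System.out.println(shouldDownscale(Duration.ofMinutes(4)));   // prints true
    }
}
```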

Abacn avatar May 18 '24 02:05 Abacn

We can (and should) optimize the Dataflow autoscaler; that is an internal Dataflow task, not a Beam one.

Abacn avatar May 18 '24 02:05 Abacn

I see, thanks for providing those details! This LGTM

ahmedabu98 avatar May 20 '24 19:05 ahmedabu98