
Global limit for MSQ controller tasks implemented

nozjkoitop opened this pull request 1 year ago • 9 comments

Improvements

Implemented a way to limit the number of MSQ query controller tasks running at the same time. The limit specifies what percentage, or how many, of the task slots can be allocated to query controllers. If the limit is reached, new controller tasks wait for resources instead of potentially blocking the execution of other tasks (and failing after a timeout).

Rationale

Druid currently has no mechanism to prevent the cluster from being overloaded with controller tasks. Such an overload can cause a significant slowdown in processing and may lead to temporary deadlock situations.

Introduced new configuration options

  • druid.indexer.queue.controllerTaskSlotRatio - optional; defines the proportion of available task slots that can be allocated to MSQ controller tasks. This is a floating-point value between 0 and 1. Defaults to null.
  • druid.indexer.queue.maxControllerTaskSlots - optional; specifies the maximum number of task slots that can be allocated to MSQ controller tasks. This is an integer value that provides a hard limit on the number of task slots available to controllers. Defaults to null. (See the example configuration below.)
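
For illustration, this is roughly how the two options could be set in the Overlord runtime properties, using the property names as proposed in this description (the values below are arbitrary examples, not recommendations):

```properties
# Hypothetical example values only.
# Allow at most 25% of the available task slots to be used by MSQ controller tasks...
druid.indexer.queue.controllerTaskSlotRatio=0.25
# ...and never more than 5 controller task slots in absolute terms.
druid.indexer.queue.maxControllerTaskSlots=5
```

If both options are set, the stricter (smaller) of the two limits would presumably apply.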

This PR has:

  • [x] been self-reviewed.
  • [x] added documentation for new or modified features or behaviors.
  • [x] a release note entry in the PR description.
  • [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • [ ] added or updated version, license, or notice information in licenses.yaml
  • [ ] added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • [x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • [ ] added integration tests.
  • [x] been tested in a test Druid cluster.

nozjkoitop · Aug 13 '24 07:08

Remove useless tail

Done, thanks

nozjkoitop · Aug 14 '24 10:08

Thanks @kfaraz, @cryptoe for your comments. The most trivial deadlock scenario occurs when we queue a group of controller tasks but don't have task slots available for the actual workers; this results in the tasks hanging and eventually timing out. Thanks for highlighting the WorkerTaskRunnerConfig, it seems like a great place for this configuration. I've updated the behavior and merged it with parallelIndexTaskSlotRatio, which makes it more flexible, and I've also added logging to inform the user why a task is Pending.
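
Roughly, the intended check works along these lines (a simplified sketch with illustrative names, not the exact code in this PR):

```java
// Simplified, illustrative sketch -- not the actual Druid implementation.
// It shows the intended behavior: controller tasks beyond the configured
// limit stay Pending instead of occupying slots needed by worker tasks.
public class ControllerSlotLimitSketch
{
  /** Effective number of slots controllers may occupy, given an optional ratio and an optional hard cap. */
  static int allowedControllerSlots(int totalCapacity, Double slotRatio, Integer maxSlots)
  {
    int limit = totalCapacity;
    if (slotRatio != null) {
      limit = Math.min(limit, (int) (totalCapacity * slotRatio));
    }
    if (maxSlots != null) {
      limit = Math.min(limit, maxSlots);
    }
    return limit;
  }

  static boolean canRunController(int runningControllers, int totalCapacity, Double slotRatio, Integer maxSlots)
  {
    boolean canRun = runningControllers < allowedControllerSlots(totalCapacity, slotRatio, maxSlots);
    if (!canRun) {
      // Log the reason so users can see why the task stays Pending.
      System.out.println("Controller task slot limit reached; task remains Pending until a slot frees up.");
    }
    return canRun;
  }

  public static void main(String[] args)
  {
    // Example: 10 total slots, ratio 0.25, hard cap 3 => at most 2 concurrent controllers.
    System.out.println(allowedControllerSlots(10, 0.25, 3)); // 2
    System.out.println(canRunController(2, 10, 0.25, 3));    // false: a third controller would wait
  }
}
```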

nozjkoitop · Sep 12 '24 15:09

@cryptoe , do you have any further comments on this PR?

kfaraz · Sep 28 '24 02:09

@nozjkoitop, sorry I had missed these null checks in the earlier review. Could you please include these? I am unable to commit the suggestions myself.

kfaraz · Oct 01 '24 13:10

Could you please include these? I am unable to commit the suggestions myself.

Done, thanks

nozjkoitop · Oct 01 '24 14:10

@nozjkoitop, @cryptoe has expressed some concerns around having a per-worker config.

He makes some good points here: https://apachedruidworkspace.slack.com/archives/C030CMF6B70/p1727836829640959

kfaraz · Oct 02 '24 02:10

He makes some good points here: https://apachedruidworkspace.slack.com/archives/C030CMF6B70/p1727836829640959

I'm afraid I don't have access to this Slack workspace

nozjkoitop · Oct 02 '24 10:10

I'm afraid I don't have access to this Slack workspace

You can try this link https://druid.apache.org/community/join-slack from the Druid docs.

kfaraz · Oct 02 '24 10:10

Hey @cryptoe, what do you think about this solution? I've used SelectWorkerStrategies to support dynamic configuration of the global limits, and it seems like a good option to me.

nozjkoitop · Oct 10 '24 10:10

Apologies, I have been meaning to get to this PR. Will try to finish it by EOW.

cryptoe · Nov 21 '24 13:11

This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If you think that's incorrect or this pull request should instead be reviewed, please simply write any comment. Even if closed, you can still revive the PR at any time or discuss it on the [email protected] list. Thank you for your contributions.

github-actions[bot] · Jan 21 '25 00:01

Hi @cryptoe, will you have some time to review these changes?

nozjkoitop · Jan 21 '25 08:01

@nozjkoitop Could you please rebase this PR? I will take a look again. Overall LGTM. cc @kfaraz

cryptoe · Mar 04 '25 15:03

@cryptoe Done

nozjkoitop · Mar 05 '25 11:03

Thanks for your work on this, @nozjkoitop ! I will try to finish the review of this PR later this week.

kfaraz · Mar 05 '25 11:03

Thanks for the changes. Let's wait for @kfaraz's review and then we can get this merged.

cryptoe · Mar 06 '25 05:03