
janitor fails to clean up some resources in a timely manner if dirty rates are unequal

Open · ixdy opened this issue 5 years ago · 8 comments

Originally filed as https://github.com/kubernetes/test-infra/issues/15925

Creating a one-sentence summary of this issue is hard, but the basic bug is fairly easy to understand.

Assume a Boskos instance has three resource types, A, B, and C. A has 5 resources, B has 10, and C has 100. A Boskos janitor has been configured to clean all three types.

Currently, the janitor loops through all resource types, cleaning one resource of each type per iteration. If the janitor finds that a type has no dirty resources, it stops querying that type until every remaining type has been drained, at which point it waits a minute and then starts over with the complete list.
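For concreteness, here is a minimal Go sketch of the shape of that loop. It paraphrases the behavior described above rather than quoting the janitor's actual source; the `Acquire`/`ReleaseOne` calls follow the Boskos client API (`sigs.k8s.io/boskos/client`), while `runPass` and `cleanResource` are hypothetical names.

```go
package janitor

import (
	"log"
	"time"

	"sigs.k8s.io/boskos/client"
	"sigs.k8s.io/boskos/common"
)

// runPass mirrors the janitor's current per-pass behavior: try each type in
// turn, and drop a type from the pass as soon as it has no dirty resource.
func runPass(c *client.Client, types []string) {
	remaining := append([]string(nil), types...)
	for len(remaining) > 0 {
		var next []string
		for _, t := range remaining {
			res, err := c.Acquire(t, common.Dirty, common.Cleaning)
			if err != nil {
				// No dirty resource of this type right now, so it is
				// skipped for the rest of the pass. This is the
				// optimization one of the mitigations below would remove.
				continue
			}
			cleanResource(res) // hypothetical per-resource cleanup helper
			if err := c.ReleaseOne(res.Name, common.Free); err != nil {
				log.Printf("release %s: %v", res.Name, err)
			}
			next = append(next, t) // type may still have dirty resources
		}
		remaining = next
	}
	time.Sleep(time.Minute) // then a new pass starts with the full list
}

// cleanResource is a hypothetical stand-in for the janitor's actual
// per-resource cleanup (e.g. invoking a cleanup script for the resource).
func cleanResource(res *common.Resource) { /* ... */ }
```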

In our hypothetical case (and as observed in practice), the janitor finishes cleaning resources of type A (and possibly B) while it still has many more C resources to clean. Additionally, because C is such a large pool, many jobs will likely be dirtying more C resources in the meantime. As a result, it can be quite some time before the janitor attempts to clean A resources again, and that pool will probably fill up with dirty resources.

Possible ways to mitigate the issue (in increasing order of complexity):

  • increase the number of janitor replicas
  • segment the janitors (i.e. have separate janitors for each type)
  • remove the optimization in the janitor loop, continuing to attempt to acquire all resource types (this will likely result in more /acquire RPCs to Boskos)
  • use Boskos metrics to select which resources to attempt to clean; see the sketch after this list. This could even be prioritized (e.g. focus on whichever type is closest to running out of resources), though that might lead to different starvation issues. Additionally, a failing cleanup could mean the janitor gets completely stuck.
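As a hedged illustration of that last option, the sketch below (continuing the package, imports, and `cleanResource` helper from the earlier sketch, plus `math`) picks whichever dirty type has the fewest free resources left before each cleanup. The `Metric` call follows the Boskos client API; the selection policy itself is an assumption for illustration, not an existing janitor feature.

```go
// cleanHighestPriority queries Boskos for per-type metrics and cleans one
// resource of whichever dirty type is closest to running out of free
// resources. Hypothetical sketch; not the actual janitor implementation.
func cleanHighestPriority(c *client.Client, types []string) error {
	pick, fewestFree := "", math.MaxInt
	for _, t := range types {
		m, err := c.Metric(t) // per-state resource counts for this type
		if err != nil {
			continue
		}
		if m.Current[common.Dirty] == 0 {
			continue // nothing to clean for this type
		}
		if free := m.Current[common.Free]; free < fewestFree {
			pick, fewestFree = t, free
		}
	}
	if pick == "" {
		return nil // no dirty resources anywhere
	}
	res, err := c.Acquire(pick, common.Dirty, common.Cleaning)
	if err != nil {
		return err
	}
	cleanResource(res) // hypothetical cleanup helper from the earlier sketch
	return c.ReleaseOne(res.Name, common.Free)
}
```

As the bullet above notes, a policy like this would need a fairness bound or per-type backoff in practice: a type whose cleanup keeps failing would otherwise keep winning the selection and wedge the janitor on that one type.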

ixdy · May 29 '20

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot · Aug 27 '20

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot · Sep 26 '20

/remove-lifecycle stale

ixdy · Oct 16 '20

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

fejta-bot · Nov 15 '20

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Nov 15 '20

/reopen
/remove-lifecycle rotten

ixdy · Nov 16 '20

@ixdy: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Nov 16 '20

/lifecycle frozen

detiber · Nov 17 '20