
Optimize Kebechet runs in a deployment

Open fridex opened this issue 3 years ago • 12 comments

Is your feature request related to a problem? Please describe.

It looks like we are not effectively utilizing cluster resources for Kebechet. Kebechet is run on each webhook received which might easily flood the whole namespace, especially with active repositories. Let's have a way to limit the number of Kebechet pods for a single repository in a deployment.

Describe the solution you'd like

One solution would be to use messaging, if Kafka provides a feature that limits the number of messages of a specific type (that is probably not possible, based on our last tech talk discussion; CC @KPostOffice).

Another way to limit the number of Kebechet runs for a single repository is to create a new database record in postgres (associated with the GitHub URL) that stores either null or the timestamp of when user-api last scheduled Kebechet:

  1. user-api receives a webhook
  2. user-api checks if there is already a pending request for the given repo in postgres (the timestamp is not null) and the timestamp is less than the specified number of minutes old (a new configuration entry in user-api)
     a. if yes, the webhook handling is ignored (Kebechet is not run)
     b. if no, continue to step 3
  3. add the current timestamp to the database for the specified repo
  4. schedule Kebechet

On the Kebechet side: once Kebechet starts, it sets the given timestamp to null for the repo it handles and then starts handling the repository with the Kebechet managers.

This way, we ignore any webhooks coming into the system for repositories whose Kebechet messages are already queued, since we know Kebechet will handle those repositories in its next run.
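The gating logic described above can be sketched as follows. This is a minimal, hypothetical illustration: `pending` stands in for the proposed postgres table keyed by GitHub URL, and `KEBECHET_SCHEDULE_TIMEOUT` stands in for the proposed user-api configuration entry; neither name exists in the codebase.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical stand-in for the proposed user-api configuration entry
# (the timeout after which a stored timestamp is considered invalid).
KEBECHET_SCHEDULE_TIMEOUT = timedelta(minutes=30)

# Hypothetical stand-in for the proposed postgres table:
# github_url -> timestamp of the last scheduled run, or None.
pending = {}


def should_schedule(github_url, now=None):
    """Return True if user-api should schedule Kebechet for the repo."""
    now = now or datetime.now(timezone.utc)
    last = pending.get(github_url)
    if last is not None and now - last < KEBECHET_SCHEDULE_TIMEOUT:
        # Step 2a: a run is already queued; ignore this webhook.
        return False
    # Step 3: record the current timestamp for the repo.
    pending[github_url] = now
    return True


def on_kebechet_startup(github_url):
    """Kebechet clears the timestamp once it starts handling the repo."""
    pending[github_url] = None
```

Note that the timeout doubles as the safety net described under "Additional context": if Kebechet crashes before clearing the timestamp, webhooks are accepted again once the stored timestamp ages past `KEBECHET_SCHEDULE_TIMEOUT`.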

  • [ ] add timestamp to postgres and have it associated with GitHub URL (installations)
  • [ ] user-api is extended with the logic described above that manipulates the timestamp
  • [ ] user-api's configuration is extended with a configuration option (configurable via env vars) that states the timeout after which the timestamp stored in the database is invalid
  • [ ] user-api exposes the newly created configuration as a metric (and is available in the dashboard)
  • [ ] Kebechet sets the given timestamp entry in the database to null on startup
    • [ ] this can be done in an init container using workflow helpers or similar (if we do not want thoth-storages in Kebechet itself)

Describe alternatives you've considered

Keep the solution as is; however, it is not optimal with respect to the resources allocated.

Additional context

The timestamp was chosen to avoid manually adjusting the database if there are issues (e.g. issues with Kafka). If we lose messages, or Kebechet fails to set the database entry back to null, we will still be able to handle requests after the specified time, configured in user-api.

fridex avatar Oct 18 '21 13:10 fridex

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

sesheta avatar Jan 16 '22 15:01 sesheta

/remove-lifecycle stale
/priority important-longterm

KPostOffice avatar Jan 17 '22 21:01 KPostOffice

Some Kebechet features rely on the content of the webhook (i.e. if a PR was merged). If we drop webhooks we may drop functionality.

KPostOffice avatar Feb 04 '22 01:02 KPostOffice

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

sesheta avatar May 05 '22 01:05 sesheta

/remove-lifecycle stale

KPostOffice avatar May 12 '22 14:05 KPostOffice

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

sesheta avatar Aug 10 '22 16:08 sesheta

This looks more like a job for a workqueue with a limited number of concurrent consumers. Besides (emphasis mine):

It looks like we are not effectively utilizing cluster resources for Kebechet. Kebechet is run on each webhook received which might easily flood the whole namespace, especially with active repositories

Do we actually have hard data on that, aka, is it a problem in practice?
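For illustration, the workqueue idea above can be sketched as follows. This is purely hypothetical: `MAX_CONCURRENT`, `handle_webhook`, and the in-process `queue.Queue` are illustrative stand-ins; a real deployment would back the queue with a broker and schedule pods rather than threads.

```python
import queue
import threading

# Hypothetical cap on concurrent Kebechet runs.
MAX_CONCURRENT = 2

work_queue = queue.Queue()
results = []
results_lock = threading.Lock()


def handle_webhook(payload):
    # Placeholder for an actual Kebechet run on the repository.
    return f"handled {payload['repo']}"


def consumer():
    # Each consumer drains the queue; with MAX_CONCURRENT consumers,
    # at most that many webhooks are processed at any one time.
    while True:
        payload = work_queue.get()
        if payload is None:  # sentinel: no more work
            work_queue.task_done()
            break
        result = handle_webhook(payload)
        with results_lock:
            results.append(result)
        work_queue.task_done()


workers = [threading.Thread(target=consumer) for _ in range(MAX_CONCURRENT)]
for w in workers:
    w.start()

# Webhooks are enqueued as they arrive instead of spawning a pod each.
for repo in ("org/a", "org/b", "org/c"):
    work_queue.put({"repo": repo})

for _ in workers:
    work_queue.put(None)
for w in workers:
    w.join()
```

Unlike the timestamp approach, this keeps every webhook (so payload-dependent features still see each event) while still bounding the number of concurrent runs.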

/sig devsecops

VannTen avatar Aug 30 '22 14:08 VannTen

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

sesheta avatar Sep 29 '22 17:09 sesheta

/remove-lifecycle rotten
/lifecycle frozen

harshad16 avatar Oct 04 '22 03:10 harshad16

/remove-lifecycle frozen
/priority backlog

goern avatar Oct 05 '22 11:10 goern

/priority backlog

for consistency (as this does not happen automatically): /remove-priority important-longterm

codificat avatar Oct 05 '22 13:10 codificat

Issues needing reporter input close after 60d.

If there is new input, reopen with /reopen.

sesheta avatar Apr 03 '23 15:04 sesheta