Optimize Kebechet runs in a deployment
Is your feature request related to a problem? Please describe.
It looks like we are not effectively utilizing cluster resources for Kebechet. Kebechet is run on each webhook received, which might easily flood the whole namespace, especially with active repositories. Let's have a way to limit the number of Kebechet pods for a single repository in a deployment.
Describe the solution you'd like
One solution would be to use messaging, if Kafka provides a feature that limits the number of specific messages (that is probably not possible based on our last tech talk discussion, CC @KPostOffice).
Another way to limit the number of Kebechet runs for a single repository once a webhook is sent to user-api is to create a new database record stored in postgres (and associated with the GitHub URL) that keeps null or a timestamp of when user-api last scheduled Kebechet.
1. user-api receives a webhook
2. user-api checks if there is already a pending request for the given repo in postgres (the timestamp is not null) and the timestamp is less than the specified number of minutes old (a new configuration entry in user-api)
   a. if yes, the webhook handling is ignored (Kebechet is not run)
   b. if no, continue to step 3
3. add the current timestamp to the database for the specified repo
4. schedule Kebechet
On the Kebechet side: once Kebechet is started, it marks the given timestamp as null for the repo it handles and starts handling the repository with the Kebechet managers.
This way we ignore any webhooks coming into the system while Kebechet messages for the given repositories are already queued, since we know Kebechet will handle those repositories in its next run.
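A minimal sketch of what the user-api side of this flow could look like, assuming a hypothetical env var, helper callable, and an in-memory stand-in for the postgres record (none of these names exist in user-api today):

```python
import os
from datetime import datetime, timedelta, timezone
from typing import Callable, Dict, Optional

# Hypothetical configuration entry: how long a pending timestamp blocks new runs
# for the same repository (the env var name is made up for illustration).
KEBECHET_SCHEDULE_TIMEOUT = timedelta(
    minutes=int(os.getenv("THOTH_KEBECHET_SCHEDULE_TIMEOUT_MIN", "30"))
)

# In-memory stand-in for the postgres record keyed by GitHub URL.
_pending_runs: Dict[str, Optional[datetime]] = {}


def handle_webhook(repo_url: str, schedule_kebechet: Callable[[str], None]) -> bool:
    """Return True if Kebechet was scheduled, False if the webhook was ignored."""
    now = datetime.now(timezone.utc)
    pending = _pending_runs.get(repo_url)  # None until scheduled, or after Kebechet nulls it.

    # Step 2: a non-null timestamp younger than the timeout means a run is already queued.
    if pending is not None and now - pending < KEBECHET_SCHEDULE_TIMEOUT:
        return False  # 2a: ignore the webhook; Kebechet will handle the repo anyway.

    # Steps 3 and 4: record the pending run and schedule Kebechet.
    _pending_runs[repo_url] = now
    schedule_kebechet(repo_url)
    return True
```

The real persistence would presumably live in thoth-storages against postgres; the dict above only stands in for the record described earlier.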
- [ ] add a timestamp to postgres and associate it with the GitHub URL (installations) (see the sketch after this checklist)
- [ ] user-api is extended with the logic described above that manipulates the timestamp
- [ ] user-api's configuration is extended with a configuration option (configurable via env vars) that states the timeout after which the timestamp stored in the database is invalid
- [ ] user-api exposes the newly created configuration as a metric (and it is available in the dashboard)
- [ ] kebechet sets the given timestamp entry in the database to null on startup
- [ ] this can be done as an init container using workflow helpers or so (if we do not want thoth-storages in kebechet itself)
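Roughly, the storage and metric pieces of this checklist could look like the following; the SQLAlchemy model, gauge name, and helper are all made up for illustration and do not exist in thoth-storages or user-api:

```python
from prometheus_client import Gauge
from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class KebechetInstallation(Base):
    """Hypothetical record associating a GitHub URL with the last scheduled Kebechet run."""

    __tablename__ = "kebechet_installation"

    id = Column(Integer, primary_key=True, autoincrement=True)
    github_url = Column(Text, nullable=False, unique=True)
    # Null means no pending run; user-api sets it when it schedules Kebechet.
    last_scheduled = Column(DateTime(timezone=True), nullable=True)


# user-api would expose the configured timeout so it shows up on the dashboard.
kebechet_schedule_timeout_seconds = Gauge(
    "kebechet_schedule_timeout_seconds",
    "Timeout after which a pending Kebechet schedule timestamp is considered invalid.",
)


def clear_pending_run(session: Session, github_url: str) -> None:
    """What Kebechet (or an init container) would do on startup for the repo it handles."""
    session.query(KebechetInstallation).filter(
        KebechetInstallation.github_url == github_url
    ).update({KebechetInstallation.last_scheduled: None})
    session.commit()
```

user-api would set the gauge once at startup, e.g. kebechet_schedule_timeout_seconds.set(timeout.total_seconds()), so the configured timeout is visible on the dashboard.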
Describe alternatives you've considered
Keep the solution as is, but that is not optimal with respect to the allocated resources.
Additional context
The timestamp was chosen to avoid manually adjusting the database if there are issues (e.g. issues with Kafka). If we lose messages or Kebechet fails to clear the database entry to null, we will still be able to handle requests after the specified time, configured on user-api.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale /priority important-longterm
Some Kebechet features rely on the content of the webhook (e.g. whether a PR was merged). If we drop webhooks we may drop functionality.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
This looks more like a job for a workqueue with a limited number of concurrent consumers. Besides (emphasis mine):

> It looks like we are not effectively utilizing cluster resources for Kebechet. Kebechet is run on each webhook received, which might easily flood the whole namespace, especially with active repositories

Do we actually have hard data on that, aka is it a problem in practice?
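For comparison, the workqueue idea could be as simple as a bounded pool of consumers draining a queue of repositories; a plain asyncio sketch, not tied to any existing Thoth component:

```python
import asyncio


async def worker(queue: asyncio.Queue) -> None:
    """One of a fixed number of consumers; concurrency is capped by the pool size."""
    while True:
        repo_url = await queue.get()
        try:
            print(f"running Kebechet for {repo_url}")  # stand-in for the real run
            await asyncio.sleep(0.1)
        finally:
            queue.task_done()


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    # At most 3 Kebechet runs in flight, regardless of how many webhooks arrive.
    workers = [asyncio.create_task(worker(queue)) for _ in range(3)]

    for repo_url in ("repo-a", "repo-a", "repo-b"):  # duplicate webhooks simply queue up
        queue.put_nowait(repo_url)

    await queue.join()
    for task in workers:
        task.cancel()


asyncio.run(main())
```

With Kafka, a similar cap on concurrency could presumably come from limiting the number of consumers in the consumer group rather than limiting messages.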
/sig devsecops
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle rotten /lifecycle frozen
/remove-lifecycle frozen /priority backlog
/priority backlog
For consistency (as this does not happen automatically): /remove-priority important-longterm
Issues needing reporter input close after 60d.
If there is new input, reopen with /reopen.