secrets-store-csi-driver
Move prow jobs to use the community clusters
xref: https://github.com/kubernetes/test-infra/issues/29722
hey @aramase, do you know why the changes from https://github.com/kubernetes/test-infra/pull/29473 prevented the jobs from finishing? A lot of folks are going to be making this transition, and your implementation looked on track with what I'd expect.
@aramase based on https://monitoring-eks.prow.k8s.io/d/96Q8oOOZk/builds?orgId=1&var-org=kubernetes-sigs&var-repo=secrets-store-csi-driver&var-job=pull-secrets-store-csi-driver-lint&var-build=All&from=1686575114078&to=1686578965140, it looks like the resource quotas should be significantly increased.
Recommendations from @xmudrii would be 2-4 CPU and 4-8 GB memory for linting jobs. If you merge new capacity values, you should be able to watch the dashboard above to see how the jobs are performing.
@rjsadow I'm happy to try the new resource limits and this issue was opened so we can follow up and move the jobs. The revert was done to unblock our patch release.
Recommendations from @xmudrii would be 2-4 CPU and 4-8 GB Mem for linting jobs.
It might be good to document these recommendations for future reference.
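For reference, the recommended values above would land in the job's `resources` stanza in test-infra. A rough sketch of what that could look like for the lint presubmit (the image tag and cluster name here are illustrative assumptions, not the actual job config):

```yaml
# Sketch of a presubmit entry, assuming the recommended
# 2-4 CPU / 4-8 GB memory range for the lint job.
presubmits:
  kubernetes-sigs/secrets-store-csi-driver:
    - name: pull-secrets-store-csi-driver-lint
      cluster: eks-prow-build-cluster  # community cluster (assumed name)
      spec:
        containers:
          - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # illustrative tag
            resources:
              requests:
                cpu: "2"
                memory: 4Gi
              limits:
                cpu: "4"
                memory: 8Gi
```

Setting requests at the low end and limits at the high end of the recommended range leaves headroom while keeping the scheduler's view of the job accurate.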
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen