Migrate away from gs://kops-ci
Part of umbrella issue to migrate the kubernetes project away from use of GCP project google-containers: #1469
- [x] Create a new GCS bucket writable by the k8s-infra-prow-build GKE cluster (kops-ci to k8s-infra-kops-ci-results); a sketch follows this list
- [x] Give kOps maintainers admin access to this bucket
- [x] Add canary jobs (duplicates of the ones writing to gs://kops-ci) that push to the new bucket
  - ensure they're building and pushing appropriately (see the check below)
- [ ] update jobs to pull from the new bucket
- [x] get rid of the jobs that run on the "default" cluster
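For reference, a minimal sketch of what the first two items involve, using gsutil. The project, location, service account, and group address below are placeholders, not the real values; the actual provisioning is done through the k8s.io repo's infra scripts, so treat this as illustrative only:

```sh
# Create the new bucket (project and location are placeholders).
gsutil mb -p k8s-infra-example -l us-central1 gs://k8s-infra-kops-ci-results

# Let jobs on the k8s-infra-prow-build cluster write to it; this service
# account name is a hypothetical stand-in for the cluster's build SA.
gsutil iam ch \
  serviceAccount:prow-build@k8s-infra-example.iam.gserviceaccount.com:objectAdmin \
  gs://k8s-infra-kops-ci-results

# Give kOps maintainers admin access (the group address is a stand-in).
gsutil iam ch \
  group:kops-maintainers@example.com:admin \
  gs://k8s-infra-kops-ci-results
```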
/sig cluster-lifecycle
/area jobs
/help wanted
/assign @justinsb @johngmyers for kOps
/assign @spiffxp for wg-k8s-infra
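A quick way to check that the canary jobs are building and pushing appropriately is to compare recent objects in the two buckets; the bin/ prefix here is a guess at the layout, not confirmed:

```sh
# List the most recent artifacts in the old and new buckets side by side
# (bin/ is an assumed prefix; adjust to wherever the jobs actually write).
gsutil ls -l gs://kops-ci/bin/ | tail -n 5
gsutil ls -l gs://k8s-infra-kops-ci-results/bin/ | tail -n 5
```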
@ameukam: The label(s) area/jobs cannot be applied, because the repository doesn't have them.
/priority important-longterm
/unassign
I'm not actively working on this, though I am happily reviewing @ameukam's PRs.
/milestone clear
Clearing from the milestone because migrating all of the kOps jobs over is likely to affect our spend in a not-insignificant way.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen
/milestone v1.31
This migration is complete.
Are we missing something, @ameukam?
I still see references to the bucket: https://cs.k8s.io/?q=https%3A%2F%2Fstorage.googleapis.com%2Fkops-ci&i=nope&files=&excludeFiles=&repos= 🤔
I think we're really close: we were able to set up the 1.30 jobs with no kops-ci at all. There are still a few 1.28/1.29 jobs that point to kops-ci, but we can now repoint them following the example of 1.30 (we were waiting on the 1.28/1.29 releases, and I think they're now ~done).
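For those remaining 1.28/1.29 jobs, a hedged sketch of the repoint, assuming they reference the old bucket via its storage.googleapis.com URL (as in the code search above) and that the configs sit under config/jobs in a kubernetes/test-infra checkout; both assumptions are mine, not confirmed by this thread:

```sh
# Rewrite references from the old bucket to the new one across job configs.
# The path below is an assumption about where these jobs are defined.
grep -rl 'storage.googleapis.com/kops-ci' config/jobs/kubernetes/kops \
  | xargs sed -i \
      's|storage.googleapis.com/kops-ci|storage.googleapis.com/k8s-infra-kops-ci-results|g'
```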