Migrate away from google.com gcp project k8s-authenticated-test
Part of umbrella issue to migrate away from google.com gcp projects: https://github.com/kubernetes/k8s.io/issues/1469
Part of umbrella issue to migrate kubernetes e2e test images/registries to community-owned infrastructure: https://github.com/kubernetes/k8s.io/issues/1458
The registry is used by the following kubernetes e2e tests:
- [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
- [sig-apps] ReplicationController should serve a basic image on each replica with a private image
The k8s-authenticated-test project was accidentally deleted earlier today, and has caused these tests to fail (ref: https://github.com/kubernetes/kubernetes/issues/97002#issuecomment-737435131)
We should:
- [x] ensure the project is restored
- [x] determine what permissions are set up on that project
- [ ] determine whether we want to keep the test that references this project at all (ref: https://github.com/kubernetes/kubernetes/issues/97026#issuecomment-738500525)
- if no, delete this test from all currently supported versions of kubernetes
- if yes, set up a new project and migrate tests to use it (staging project with custom ACL? custom project?)
- [ ] flag project for removal post-deprecation window
/wg k8s-infra
/area artifacts
/sig testing
/sig release
/area release-eng
For reference, here's the output of `gsutil iam get gs://artifacts.k8s-authenticated-test.appspot.com/`:

    {
      "bindings": [
        {
          "members": [
            "projectEditor:k8s-authenticated-test",
            "projectOwner:k8s-authenticated-test"
          ],
          "role": "roles/storage.legacyBucketOwner"
        },
        {
          "members": [
            "allAuthenticatedUsers",
            "projectViewer:k8s-authenticated-test"
          ],
          "role": "roles/storage.legacyBucketReader"
        }
      ],
      "etag": "CAk="
    }
To keep the behavior of these tests as-is using a new registry, the key part is `allAuthenticatedUsers` instead of `allUsers`.
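For what it's worth, if we do stand up a replacement registry, a minimal sketch of reproducing that authenticated-only read access on the new backing bucket could look like the following. The bucket name here is a hypothetical placeholder (no project has been agreed on), and `objectViewer` is used as an approximation of the legacy reader binding above:

```sh
# Hypothetical replacement bucket; not an agreed-upon project or name.
BUCKET="gs://artifacts.k8s-community-authenticated-test.appspot.com"

# Grant read access to any authenticated Google account, but not to
# anonymous users (i.e. allAuthenticatedUsers rather than allUsers).
gsutil iam ch allAuthenticatedUsers:objectViewer "${BUCKET}"

# Confirm the resulting policy matches the intent.
gsutil iam get "${BUCKET}"
```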
That said, I question whether we should keep these tests at all, ref: https://github.com/kubernetes/kubernetes/issues/97026#issuecomment-738500525
/milestone v1.21
/sig apps
test owner
/sig node
I think this is a more appropriate test owner for this functionality.
/cc
/milestone v1.22
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/milestone v1.23
Push to close https://github.com/kubernetes/k8s.io/issues/1458 for v1.23
/milestone v1.24
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
One possible alternative is discussed at https://github.com/kubernetes/kubernetes/issues/113925#issuecomment-1536115193.
This project is at risk in the near future, and GCR is deprecated and shutting down within a year anyhow.
Raised in #sig-node today.