Recreate the k8s-authenticated-test GCP project as k8s-staging-authenticated-test in the kubernetes.io GCP org
Fixes: https://github.com/kubernetes/kubernetes/issues/97026
Part of: https://github.com/kubernetes/k8s.io/issues/1458
The following tests in k/k are failing quite frequently on kops clusters, and for anyone who doesn't run e2e tests in specific GCP projects.
[sig-apps] ReplicaSet should serve a basic image on each replica with a private image
[sig-apps] ReplicationController should serve a basic image on each replica with a private image
Sample failure: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/e2e-kops-gce-cni-cilium-k8s-ci/1702296867670331392
This particular image, gcr.io/k8s-authenticated-test/agnhost:2.6, is pulled for testing and is hosted in a project in the google.com GCP org.
We can't get rid of these tests till the in-tree Kubelet auth providers are gone.
I'm proposing that the images live in a new community-owned location, from which they can be pulled by any Google service account.
The new pull location for the image will be us-central1-docker.pkg.dev/k8s-staging-authenticated-test/images/agnhost:2.6
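For reference, a rough sketch of how the mirroring and IAM setup could look, assuming `crane` (from go-containerregistry) is available, the `k8s-staging-authenticated-test` project already exists, and the invoker still has pull access to the old google.com-owned project:

```bash
# Create the Docker-format Artifact Registry repo in the community project.
gcloud artifacts repositories create images \
  --project=k8s-staging-authenticated-test \
  --repository-format=docker \
  --location=us-central1

# Allow any authenticated Google account to pull (per the proposal above).
gcloud artifacts repositories add-iam-policy-binding images \
  --project=k8s-staging-authenticated-test \
  --location=us-central1 \
  --member=allAuthenticatedUsers \
  --role=roles/artifactregistry.reader

# Wire up Docker/crane credentials for the Artifact Registry host.
gcloud auth configure-docker us-central1-docker.pkg.dev

# Copy the test image from the old project into the new repo.
crane cp \
  gcr.io/k8s-authenticated-test/agnhost:2.6 \
  us-central1-docker.pkg.dev/k8s-staging-authenticated-test/images/agnhost:2.6
```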
/cc @dims @ameukam @bentheelder
I wonder if we want to migrate this registry at all. There was a conversation about removing the tests that use gcr.io/k8s-authenticated-test.
Do we want input from sig-node before doing this? @dims @BenTheElder
Most recent convo: https://kubernetes.slack.com/archives/C7J9RP96G/p1694796571586009
/sig testing
/assign @aojea
I would prefer to see an alternative, as I think trying to host a permanent authenticated image endpoint is a liability (also, don't forget that when something like this happens, tests for old releases break).
We can't get rid of these tests till the in-tree Kubelet auth providers are gone.
I don't think that's true; again, you can pass generic auth that isn't specific to, say, GCP, e.g. by way of a secret in a namespace. That's generic, and I don't see that being ripped out of tree.
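For illustration, a minimal sketch of that generic path (the registry host, credentials, and names below are placeholders, not anything the test suite actually uses): the pod pulls a private image via a namespaced Secret, with no cloud-specific kubelet auth provider involved.

```bash
# Store the registry credentials in a docker-registry Secret.
kubectl create secret docker-registry test-pull-cred \
  --docker-server=private.example.com \
  --docker-username=e2e-user \
  --docker-password=e2e-password

# Reference the Secret from the pod spec via imagePullSecrets.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  imagePullSecrets:
    - name: test-pull-cred
  containers:
    - name: agnhost
      image: private.example.com/agnhost:2.6
EOF
```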
Suggestions:
- Run a local registry in the cluster (see past discussion, though, of why this may not work portably due to k8s networking; a rough sketch follows this list)
- Custom app similar to registry.k8s.io and the tiniest image possible w/ custom auth (lots of extra work / infra for one test ...)
- This PR or something similar (... but acknowledge that it's likely to break in the future)
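As a rough sketch of option 1 (all names here are hypothetical, and the networking caveat above still applies): the reference `registry:2` image supports htpasswd basic auth out of the box, so an in-cluster authenticated registry could look roughly like this.

```bash
# htpasswd file pre-generated with e.g. `htpasswd -Bbn e2e-user e2e-password`.
kubectl create secret generic registry-htpasswd --from-file=htpasswd=./htpasswd

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-registry
spec:
  replicas: 1
  selector:
    matchLabels: {app: test-registry}
  template:
    metadata:
      labels: {app: test-registry}
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - containerPort: 5000
          env:
            - {name: REGISTRY_AUTH, value: htpasswd}
            - {name: REGISTRY_AUTH_HTPASSWD_REALM, value: test-registry}
            - {name: REGISTRY_AUTH_HTPASSWD_PATH, value: /auth/htpasswd}
          volumeMounts:
            - {name: auth, mountPath: /auth}
      volumes:
        - name: auth
          secret: {secretName: registry-htpasswd}
EOF
# A Service (and a registry address the kubelet on every node can reach and
# trust) would still be needed, which is where the portability problems live.
```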
Let's go with the third option and revisit this problem in the future. I don't see anyone who is ready to do 1 or 2, and replacing kube-up clusters is a better use of our time than doing a ton of extra work for a test that shouldn't even be there in the first place.
@aojea Any concerns about picking the third option?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: upodroid
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
- ~~infra/gcp/OWNERS~~ [upodroid]
Approvers can indicate their approval by writing `/approve` in a comment
Approvers can cancel approval by writing `/approve cancel` in a comment
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.