
Migrate away from google.com gcp project k8s-authenticated-test

Open spiffxp opened this issue 4 years ago • 15 comments

Part of umbrella issue to migrate away from google.com gcp projects: https://github.com/kubernetes/k8s.io/issues/1469

Part of umbrella to migrate kubernetes e2e test images/registries to community-owned infrastructure: https://github.com/kubernetes/k8s.io/issues/1458

The registry is used by the following kubernetes e2e tests:

  • [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
  • [sig-apps] ReplicationController should serve a basic image on each replica with a private image

The k8s-authenticated-test project was accidentally deleted earlier today, which has caused these tests to fail (ref: https://github.com/kubernetes/kubernetes/issues/97002#issuecomment-737435131)

We should:

  • [x] ensure the project is restored
  • [x] determine what permissions are set up on that project
  • [ ] determine whether we want to keep the test that references this project at all (ref: https://github.com/kubernetes/kubernetes/issues/97026#issuecomment-738500525)
    • if no, delete this test from all currently supported versions of kubernetes
    • if yes, set up a new project and migrate tests to use it (staging project with custom ACL? custom project?)
  • [ ] flag project for removal post-deprecation window
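If the checklist lands on "yes", the old bucket's authenticated-only read policy would need to be reproduced on whatever replaces it. A hypothetical sketch with gsutil; the replacement bucket name is a placeholder, not something decided in this issue:

```shell
# Remove any anonymous access, then grant read-only access to authenticated
# users, mirroring the old bucket's policy.
# NOTE: gs://artifacts.REPLACEMENT_PROJECT.appspot.com is a placeholder.
gsutil iam ch -d allUsers gs://artifacts.REPLACEMENT_PROJECT.appspot.com
gsutil iam ch allAuthenticatedUsers:legacyBucketReader \
    gs://artifacts.REPLACEMENT_PROJECT.appspot.com
```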

spiffxp avatar Dec 02 '20 22:12 spiffxp

/wg k8s-infra
/area artifacts
/sig testing
/sig release
/area release-eng

spiffxp avatar Dec 02 '20 22:12 spiffxp

For reference, here's the output of gsutil iam get gs://artifacts.k8s-authenticated-test.appspot.com/

{
  "bindings": [
    {
      "members": [
        "projectEditor:k8s-authenticated-test",
        "projectOwner:k8s-authenticated-test"
      ],
      "role": "roles/storage.legacyBucketOwner"
    },
    {
      "members": [
        "allAuthenticatedUsers",
        "projectViewer:k8s-authenticated-test"
      ],
      "role": "roles/storage.legacyBucketReader"
    }
  ],
  "etag": "CAk="
}

To keep the behavior of these tests as-is with a new registry, the key part is granting read access to allAuthenticatedUsers instead of allUsers.
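That distinction can be checked mechanically from the `gsutil iam get` output. A minimal sketch in Python (the policy literal is copied from this issue; the helper function is illustrative, not part of any tooling):

```python
import json

# IAM policy of the old bucket, as reported by `gsutil iam get` above.
POLICY = json.loads("""
{
  "bindings": [
    {"members": ["projectEditor:k8s-authenticated-test",
                 "projectOwner:k8s-authenticated-test"],
     "role": "roles/storage.legacyBucketOwner"},
    {"members": ["allAuthenticatedUsers",
                 "projectViewer:k8s-authenticated-test"],
     "role": "roles/storage.legacyBucketReader"}
  ],
  "etag": "CAk="
}
""")

def reader_members(policy):
    """Collect the members granted the legacy bucket-reader role."""
    members = set()
    for binding in policy["bindings"]:
        if binding["role"] == "roles/storage.legacyBucketReader":
            members.update(binding["members"])
    return members

members = reader_members(POLICY)
# The "private image" e2e tests need pulls to require authentication:
# allAuthenticatedUsers must be present, the anonymous allUsers must not.
print("allAuthenticatedUsers" in members and "allUsers" not in members)  # True
```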

That said, I question whether we should keep these tests at all, ref: https://github.com/kubernetes/kubernetes/issues/97026#issuecomment-738500525

spiffxp avatar Dec 04 '20 01:12 spiffxp

/milestone v1.21
/sig apps test owner
/sig node
I think this is a more appropriate test owner for this functionality

spiffxp avatar Jan 21 '21 18:01 spiffxp

/cc

pacoxu avatar Mar 17 '21 02:03 pacoxu

/milestone v1.22

spiffxp avatar Mar 24 '21 18:03 spiffxp

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Jun 22 '21 19:06 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

fejta-bot avatar Jul 22 '21 19:07 fejta-bot

/remove-lifecycle rotten
/milestone v1.23
Push to close https://github.com/kubernetes/k8s.io/issues/1458 for v1.23

spiffxp avatar Jul 27 '21 04:07 spiffxp

/milestone v1.24

spiffxp avatar Nov 24 '21 01:11 spiffxp

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 22 '22 01:02 k8s-triage-robot

/remove-lifecycle stale

ameukam avatar Feb 22 '22 09:02 ameukam


/lifecycle stale

k8s-triage-robot avatar Jun 26 '22 19:06 k8s-triage-robot

/remove-lifecycle stale
/lifecycle frozen

ameukam avatar Jun 26 '22 23:06 ameukam

One possible alternative discussed at https://github.com/kubernetes/kubernetes/issues/113925#issuecomment-1536115193

This project is at risk in the near future, and GCR is deprecated and shutting down within a year anyway.

Raised in #sig-node today.

BenTheElder avatar Apr 01 '24 20:04 BenTheElder