
Deprecate and migrate away from gs://kubernetes-release

Open · spiffxp opened this issue 3 years ago · 37 comments

Part of umbrella issue to migrate the kubernetes project away from use of GCP project google-containers: https://github.com/kubernetes/k8s.io/issues/1571

This issue covers the deprecation of and migration away from the following google.com assets:

  • the google.com-owned GCS bucket gs://kubernetes-release living in GCP project google-containers, in favor of the community-owned GCS bucket gs://k8s-release living in GCP project TBD (currently k8s-release)
  • the region-specific GCS buckets gs://kubernetes-release-asia and gs://kubernetes-release-eu, same as above but gs://k8s-release-eu and gs://k8s-release-asia instead
  • TODO: are there container images involved here as well, or did we already address that with k8s.gcr.io?

These are not labeled as steps just yet because not everything needs to be completed to full fidelity in strict sequential order. I would prefer that we get a sense sooner rather than later of what the impact of shifting dl.k8s.io traffic will be: how much budget it consumes, and what percentage of traffic it represents vs. hardcoded traffic.

Determine new-to-deprecated sync implementation and deprecation window

There are likely a lot of people out there that have gs://kubernetes-release hardcoded. It's unreasonable to stop putting new releases there without some kind of advance warning. So after announcing our intent to deprecate gs://kubernetes-release, we should decide how we're going to sync new releases back to it (and its region-specific buckets); a sketch of the first option follows the list below.

  • gsutil rsync
  • Google Cloud Storage Transfer Service
  • etc.
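
As one illustration, here is a minimal sketch of the gsutil rsync option. It assumes new releases land under a release/ prefix in the community bucket, mirroring the current layout, and that a periodic job has write access to the deprecated bucket; both are assumptions, not settled decisions.

```bash
# Hedged sketch: periodically mirror new releases from the community bucket
# back into the deprecated google.com bucket during the deprecation window.
# -m parallelizes, -r recurses; omitting -d ensures nothing already in the
# destination is ever deleted.
gsutil -m rsync -r gs://k8s-release/release gs://kubernetes-release/release
```

The same command with the destination swapped for gs://kubernetes-release-eu / gs://kubernetes-release-asia would cover the region-specific buckets.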

As for the deprecation window itself, I think it's fair to treat this with a deprecation clock equivalent to disabling a v1 API.

Determine gs://k8s-release project location and geo-sync implementation

  • Someone (probably me) manually created gs://k8s-release and its other buckets to prevent someone else from grabbing the name
  • The -eu and -asia buckets are not actually region-specific, and should be recreated as such (see the sketch after this list)
  • We should decide how we're going to implement region syncing (same as above)
  • We should decide at this stage whether we want to block on a binary artifact promotion process, or get by with one of the syncing mechanisms from above
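
For the second bullet, a minimal sketch of recreating the regional buckets with explicit location constraints. The exact locations and the owning project (k8s-release, per the note above) are assumptions:

```bash
# Hedged sketch: delete and recreate the misnamed buckets so each one
# actually lives in the region its name implies.
gsutil rb gs://k8s-release-eu gs://k8s-release-asia
gsutil mb -l EU -p k8s-release gs://k8s-release-eu
gsutil mb -l ASIA -p k8s-release gs://k8s-release-asia
```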

Use dl.k8s.io where possible and identify remaining hardcoded bucket name references across the project

The only time a kubernetes release artifact GCS bucket name needs to show up in a URI is if gsutil is involved, or someone is explicitly interested in browsing the bucket. For tools like curl or wget that retrieve binaries via HTTP, we have https://dl.k8s.io, which will allow us to automatically shift traffic from one bucket to the next depending on the requested URIs.
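
As a concrete illustration (the release version and artifact path here are just examples):

```bash
# Hardcoded bucket reference; can't be redirected if the bucket moves:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.0/bin/linux/amd64/kubectl

# Equivalent download through the redirector, which lets us change the
# backing bucket server-side without touching consumers:
curl -LO https://dl.k8s.io/release/v1.23.0/bin/linux/amd64/kubectl
```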

I started doing this for a few projects while working on https://github.com/kubernetes/k8s.io/issues/2318, e.g.

  • https://github.com/kubernetes/cloud-provider-gcp/pull/252
  • https://github.com/kubernetes-sigs/cluster-api/pull/4958

TODO: a cs.k8s.io query and resulting checklist of repos to investigate

Shift dl.k8s.io traffic to gs://k8s-release-dev

TODO: there is a separate issue for this.

We will pre-seed gs://k8s-release with everything in gs://kubernetes-release, and gradually modify dl.k8s.io to redirect more and more traffic to gs://k8s-release.

The idea is not to flip a switch, just in case that sends us way more traffic than our budget is prepared to handle. Instead, let's consider shifting traffic gradually for certain URI patterns, or a certain percentage of requests, etc. It's unclear whether this will be as straightforward as adding lines to nginx, or whether we'll want GCLB changes as well.
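
One way to watch the shift as it happens, assuming dl.k8s.io keeps answering with HTTP redirects as it does today:

```bash
# Check which backing bucket the redirector currently points a given URI at;
# the Location header reveals the bucket behind dl.k8s.io.
curl -sI https://dl.k8s.io/release/stable.txt | grep -i '^location:'
```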

Change remaining project references to gs://k8s-release

/area artifacts
/area prow
/area release-eng
/sig release
/sig testing
/wg k8s-infra
/priority important-soon
/kind cleanup
/milestone v1.23

spiffxp avatar Jul 26 '21 22:07 spiffxp

/cc @kubernetes/release-engineering

puerco avatar Aug 12 '21 18:08 puerco

Blocked on https://github.com/kubernetes/k8s.io/issues/1375

spiffxp avatar Sep 29 '21 19:09 spiffxp

/milestone v1.24

spiffxp avatar Nov 24 '21 00:11 spiffxp

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 22 '22 00:02 k8s-triage-robot

/remove-lifecycle stale

ameukam avatar Feb 22 '22 09:02 ameukam

/milestone clear
/lifecycle frozen
/priority backlog

ameukam avatar May 12 '22 02:05 ameukam

/remove-lifecycle frozen
/milestone v1.26
/priority important-longterm

ameukam avatar Aug 26 '22 15:08 ameukam

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 24 '22 16:11 k8s-triage-robot

/remove-lifecycle stale
/milestone v1.27

ameukam avatar Nov 24 '22 19:11 ameukam

Blocked by https://github.com/kubernetes/k8s.io/issues/4528

/milestone v1.28
/lifecycle frozen

ameukam avatar Feb 22 '23 09:02 ameukam

This isn't blocked by 4528; everything can switch to using only dl.k8s.io immediately.

BenTheElder avatar May 10 '23 21:05 BenTheElder

Lots of hits still https://cs.k8s.io/?q=%2Fkubernetes-release&i=nope&files=&excludeFiles=&repos=

BenTheElder avatar May 10 '23 21:05 BenTheElder

/assign

rjsadow avatar May 10 '23 22:05 rjsadow

Below are all the references for https://storage.googleapis.com/kubernetes-release that need to be updated. This list won't include the changes that will be necessary for gs://kubernetes-release updates. I'll generate and track those changes next.

rjsadow avatar May 11 '23 09:05 rjsadow

Here are the results for gs://kubernetes-release. These changes will need to be a bit more involved and careful:

  • if those references are reading files, they should switch to using https / curl / wget (see the sketch below)
  • if they're writing files (kubernetes release), we can't migrate yet
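
A minimal sketch of the first case, converting a read-only gsutil reference to an HTTPS fetch (the artifact path is illustrative):

```bash
# Before: reading a release artifact by bucket name
gsutil cp gs://kubernetes-release/release/stable.txt .

# After: the same read over HTTPS via the redirector; no bucket name is left
# to migrate later
curl -fsSL https://dl.k8s.io/release/stable.txt -o stable.txt
```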

rjsadow avatar May 12 '23 19:05 rjsadow

What is the plan for gs://k8s-release-dev/ci/? The plumbing for https://dl.k8s.io/ci seems to be in place already. Are folks good to start moving to that ref?

rjsadow avatar May 15 '23 12:05 rjsadow

k8s-release-dev will be tracked separately and isn't meant to be end-user facing (unlike dl.k8s.io); it's meant for contributors to the project.
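
For contributors who do consume CI builds, the same redirector pattern applies. A hedged sketch; the latest.txt marker name is an assumption based on the current bucket layout:

```bash
# Resolve the newest CI build version through the redirector instead of
# hardcoding the gs://k8s-release-dev bucket name.
curl -fsSL https://dl.k8s.io/ci/latest.txt
```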

BenTheElder avatar May 15 '23 15:05 BenTheElder

> kubernetes/release | search results — cannot be updated at this time; these references are used to push artifacts to the bucket

One of them is NOT:

https://github.com/kubernetes/release/blob/065c82ea4a3ca8f0e4b1b87ade902cb9e18be78d/hack/rapture/publish-packages.sh#L60

This is consuming release binaries.

BenTheElder avatar May 17 '23 05:05 BenTheElder

Even most of the release tools should be updated; the only exceptions are the places where they write content. We'll deal with writing content later.

BenTheElder avatar May 17 '23 05:05 BenTheElder

I believe at this point all remaining references either have justification for not being updated or are actively awaiting PRs to be reviewed.

rjsadow avatar May 22 '23 13:05 rjsadow

Awesome, thank you! I'll plan to take a pass through remaining references again when the PRs are in.

BenTheElder avatar May 22 '23 17:05 BenTheElder

@BenTheElder, I think we're ready for a check on outstanding references.

rjsadow avatar Jun 14 '23 20:06 rjsadow

Looking at the remaining references: they will go away (except the blog posts and changelogs) after the redirect update and the migration to a community-owned release bucket.

ameukam avatar Jun 14 '23 21:06 ameukam

/milestone v1.29

ameukam avatar Jul 25 '23 22:07 ameukam

For the moment, though, we're still seeing most bandwidth/requests go to the old bucket.

Hopefully when we start publishing only to a new kubernetes.io bucket we'll see that start to change.

BenTheElder avatar Jul 27 '23 21:07 BenTheElder

/milestone v1.30

ameukam avatar Dec 08 '23 11:12 ameukam

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 07 '24 12:03 k8s-triage-robot

/remove-lifecycle stale

xmudrii avatar Mar 07 '24 12:03 xmudrii

/milestone v1.31

ameukam avatar Apr 18 '24 07:04 ameukam

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 17 '24 07:07 k8s-triage-robot