Deprecate using `gencred` and switch to using Google principals to authenticate to GKE clusters
The new GKE auth plugin (`gke-gcloud-auth-plugin`) doesn't store access tokens in the kubeconfig file:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: SOME CERT
    server: https://34.90.233.66
  name: gke_mahamed_europe-west4_dev
contexts:
- context:
    cluster: gke_mahamed_europe-west4_dev
    user: gke_mahamed_europe-west4_dev
  name: gke_mahamed_europe-west4_dev
current-context: gke_mahamed_europe-west4_dev
kind: Config
preferences: {}
users:
- name: gke_mahamed_europe-west4_dev
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true
```
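With the exec-based config, kubectl runs the configured `command` and reads a JSON `ExecCredential` from its stdout, so the token only ever lives in memory. Below is a rough sketch of what that exchange looks like — the JSON here is stand-in output (a real cluster needs `gke-gcloud-auth-plugin` actually installed), but the shape matches the `client.authentication.k8s.io/v1beta1` ExecCredential format:

```shell
# Stand-in for what an exec credential plugin prints to stdout when
# kubectl invokes it. kubectl parses this JSON and uses status.token as
# a bearer token until status.expirationTimestamp, caching it in memory
# rather than writing it back into the kubeconfig.
cred='{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "REDACTED",
    "expirationTimestamp": "2022-11-30T15:48:48Z"
  }
}'
echo "$cred"
```

Because the token never touches disk, rotating or revoking the underlying Google credential takes effect on the next plugin invocation.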
The in-tree `gcp` auth provider used to do the following, which wasn't great:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: SOME CERT
    server: https://34.90.233.66
  name: gke_mahamed_europe-west4_dev
contexts:
- context:
    cluster: gke_mahamed_europe-west4_dev
    user: gke_mahamed_europe-west4_dev
  name: gke_mahamed_europe-west4_dev
current-context: gke_mahamed_europe-west4_dev
kind: Config
preferences: {}
users:
- name: gke_mahamed_europe-west4_dev
  user:
    auth-provider:
      config:
        access-token: REDACTED
        cmd-args: config config-helper --format=json
        cmd-path: /Users/REDACTED/google-cloud-sdk/bin/gcloud
        expiry: "2022-11-30T15:48:48Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```
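The problem with the old provider is visible in the config itself: the live `access-token` is written back into the kubeconfig on disk, so anything that can read the file can reuse the credential until it expires. A quick sketch of how you might audit a kubeconfig for this (the file path and contents here are hypothetical, trimmed from the example above):

```shell
# Write a trimmed copy of the legacy-style user stanza to a temp file,
# then scan it for persisted credentials the way you might audit a real
# ~/.kube/config left behind by the in-tree gcp auth provider.
cat > /tmp/kubeconfig-legacy.yaml <<'EOF'
users:
- name: gke_mahamed_europe-west4_dev
  user:
    auth-provider:
      config:
        access-token: REDACTED
        expiry: "2022-11-30T15:48:48Z"
      name: gcp
EOF

# A hit here means a bearer token is sitting on disk in plaintext.
if grep -q 'access-token:' /tmp/kubeconfig-legacy.yaml; then
  echo "token persisted to disk"
fi
```

The exec-plugin config produces no such hit, since no token field is ever written.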
Related to https://github.com/kubernetes/test-infra/issues/27896
/sig testing
/sig k8s-infra
cc @chaodaiG @cjwagner
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/priority important-longterm
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
@upodroid is this still important with the migration? (not sure what we settled on in k8s-infra)
It is important. Argo is configured to access clusters using the gke-gcloud-auth-plugin, and we want Prow to do the same.
/lifecycle stale
/lifecycle rotten
fresh