Automation service accounts for `k8s-artifacts-` buckets
SIG Release (aka @kubernetes/release-managers) maintains various buckets in the `k8s-artifacts-prod` project.
It would be good to have a dedicated service account to automatically publish binaries for each tag and repository, to avoid manual invocations of `kpromo gh`.
The tokens could be stored in our 1Password vault.
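To make the goal concrete, here is a minimal sketch of what such an automated publish step could run per tag. All names below (repository, tag, bucket) are placeholders, not the real SIG Release configuration, and this stands in for what `kpromo gh` does today when invoked by hand:

```bash
REPO="kubernetes-sigs/example-project"   # hypothetical repository
TAG="v1.0.0"                             # tag to publish
BUCKET="gs://k8s-artifacts-example"      # hypothetical k8s-artifacts-* bucket

# Fetch the release assets for the tag using the gh CLI.
gh release download "${TAG}" --repo "${REPO}" --dir "out/${TAG}"

# Copy the binaries into the bucket under a per-release prefix.
gsutil -m cp -r "out/${TAG}" "${BUCKET}/release/${TAG}/"
```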
+1 as a Release Manager
+1 as a Release Manager
We could also reuse existing service accounts and grant them permissions to push to those buckets.
> We could also reuse existing service accounts and grant them permissions to push to those buckets.

Would it be better, security-wise, to have a dedicated service account per bucket?
> > We could also reuse existing service accounts and grant them permissions to push to those buckets.
>
> Would it be better, security-wise, to have a dedicated service account per bucket?

It depends on the entities that use these service accounts. As long as we are inside the GCP perimeter, IMHO, reusing existing service accounts is fine. However, it's recommended to use short-lived tokens rather than JSON creds.
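As a sketch of the difference (the SA name below is a placeholder): a caller that holds `roles/iam.serviceAccountTokenCreator` on the SA can mint a short-lived access token on demand, instead of downloading a JSON key that has to be stored and rotated.

```bash
SA="artifacts-pusher@k8s-artifacts-prod.iam.gserviceaccount.com"  # placeholder name

# Discouraged: a long-lived JSON key that must be stored and rotated.
# gcloud iam service-accounts keys create key.json --iam-account="${SA}"

# Preferred: mint a short-lived access token by impersonating the SA
# (requires roles/iam.serviceAccountTokenCreator on it).
TOKEN="$(gcloud auth print-access-token --impersonate-service-account="${SA}")"

# The token authenticates a single upload and expires on its own;
# bucket and object names here are illustrative only.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  -X POST --data-binary @kubectl \
  "https://storage.googleapis.com/upload/storage/v1/b/k8s-artifacts-example/o?uploadType=media&name=release/kubectl"
```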
> However, it's recommended to use short-lived tokens rather than JSON creds.

How would that work, for example when using GitHub Actions?
The only service account I can see is [email protected] which has write access to the buckets. Should we use this one?
I prefer that we create a new service account, especially because this SA might be used outside Prow (e.g., with GitHub Actions).
> > However, it's recommended to use short-lived tokens rather than JSON creds.
>
> How would that work, for example when using GitHub Actions?

I remember @upodroid mentioned this article: https://cloud.google.com/blog/products/identity-security/enabling-keyless-authentication-from-github-actions. In summary, the GitHub Actions workflow will assume an existing SA.
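Roughly, the GCP side of that keyless setup looks like the following. Project, pool, and SA names are placeholders, and exact flags can vary by gcloud version; the idea is that GitHub's OIDC issuer is trusted via a Workload Identity Pool and a specific repo is allowed to impersonate the SA, so the workflow never holds a JSON key.

```bash
PROJECT_ID="k8s-artifacts-prod"          # assumed project
PROJECT_NUMBER="123456789"               # placeholder project number
SA="artifacts-pusher@${PROJECT_ID}.iam.gserviceaccount.com"  # placeholder SA

# 1. Create a pool that will trust GitHub's OIDC issuer.
gcloud iam workload-identity-pools create github-pool \
  --project="${PROJECT_ID}" --location=global \
  --display-name="GitHub Actions"

# 2. Register GitHub as an OIDC provider in that pool.
gcloud iam workload-identity-pools providers create-oidc github-provider \
  --project="${PROJECT_ID}" --location=global \
  --workload-identity-pool=github-pool \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository" \
  --attribute-condition="assertion.repository_owner == 'kubernetes'"

# 3. Allow workflows from one repo (placeholder) to impersonate the SA.
gcloud iam service-accounts add-iam-policy-binding "${SA}" \
  --project="${PROJECT_ID}" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/github-pool/attribute.repository/kubernetes/release"
```

In the workflow itself, the google-github-actions/auth action then exchanges the runner's OIDC token for short-lived SA credentials, which is the flow the article describes.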
We can start with a single new SA to cover all the buckets handled by RelEng.
From the convo on Slack, we can start with a single GCP Service Account to handle artifact publication.

@xmudrii @saschagrunert Feel free to open a PR against the repo and I'll actuate it.
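For reference, the eventual change would boil down to something like the sketch below. The SA and bucket names are placeholders (the real ones come from the PR), and in k8s.io this would land as config in the repo rather than ad-hoc commands:

```bash
PROJECT_ID="k8s-artifacts-prod"
SA_NAME="releng-artifacts-pusher"        # placeholder; the real name comes from the PR
SA="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# Create the dedicated service account in the prod project.
gcloud iam service-accounts create "${SA_NAME}" \
  --project="${PROJECT_ID}" \
  --display-name="RelEng artifact publisher"

# Grant it object write access on each bucket it should publish to.
for bucket in k8s-artifacts-example-a k8s-artifacts-example-b; do  # placeholder buckets
  gsutil iam ch "serviceAccount:${SA}:objectAdmin" "gs://${bucket}"
done
```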
Ref https://github.com/kubernetes/k8s.io/pull/5997
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale