cluster-api-provider-gcp
Document the exact IAM permissions needed on the GSA used by CAPG
/kind documentation
The Quickstart/Prerequisite docs at https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/main/docs/book/src/topics/prerequisites.md#create-a-service-account ask the user to

> [...] create a new service account with Editor permissions. Afterwards, generate a JSON Key and store it somewhere safe.
Assigning such a broad aggregated IAM role to a service account is problematic from a security perspective. The exact permissions the GSA needs should be documented, and ideally the e2e tests would run the controller with the documented least-privilege set so that it stays verified.
Furthermore, I personally think the linked doc should be updated to remind users that generating and downloading a JSON key for a GSA is a quick and easy way to get started with CAPG, but that a more secure means of granting the controller access to the GCP APIs should be used in a production setup.
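For illustration, the documented setup could end up looking something like the sketch below. Every name here (project ID, service account name, role ID) is a placeholder, and the permission list is an illustrative subset only, not a verified least-privilege set — determining the real set is exactly what this issue asks for:

```shell
# All names and the permission list below are illustrative placeholders;
# the actual least-privilege permission set still needs to be determined
# and documented by this issue.
PROJECT_ID="my-project"        # hypothetical GCP project
GSA_NAME="capg-controller"     # hypothetical service account name

# Create a dedicated service account for the CAPG controller.
gcloud iam service-accounts create "${GSA_NAME}" \
  --project "${PROJECT_ID}" \
  --display-name "CAPG controller"

# Create a custom role instead of granting the broad Editor role.
# The permissions listed here are an incomplete example, NOT the real set.
gcloud iam roles create capgController \
  --project "${PROJECT_ID}" \
  --title "CAPG Controller" \
  --permissions "compute.instances.create,compute.instances.delete,compute.instances.get"

# Bind the custom role to the service account.
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "projects/${PROJECT_ID}/roles/capgController"

# Quickstart-style JSON key; fine for getting started, but a keyless
# mechanism should be preferred for production setups.
gcloud iam service-accounts keys create /tmp/capg-sa-key.json \
  --iam-account "${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```

This is only a sketch of the shape the docs could take; it requires an authenticated `gcloud` session against a real project to run.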
/assign
@itspngu Any updates?
related to #451
@itspngu Any updates?
Hey, haven't had the chance to work on this yet sadly because I'm blocked internally. Feel free to re-assign if it's urgent and somebody else wants to work on this issue, else I will report back with my findings once I had the opportunity to iron things out.
Yeah sure! Take your time. I was just checking in, nothing urgent.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@sayantani11 @itspngu Hi! Do you still plan to implement it?