Restrict log viewing for k8s-infra-gcp-auditors
This role is clearly intended to allow visibility into what infrastructure we have on a non-sensitive, read-only basis. The documentation suggests it should be relatively open to join.
Currently this permits viewing all logs in all projects. We have projects that are too sensitive for that.
Off the top of my head, the https://registry.k8s.io production project (k8s-infra-oci-proxy-prod) and probably the AAA cluster (Slack automation) logs should require specific, granular permissions.
We should either remove log viewing and related roles (anything that exposes service, cluster, or load balancer logs) from this role and restrict it to viewing what assets we have, or else figure out how to restrict log viewing per project to more specific groups. A rough sketch of the per-project option is below.
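For illustration only, assuming the broad grant is something like `roles/logging.viewer` bound at the organization level (the actual role names, group names, and IDs below are placeholders, not what infra/gcp currently configures):

```sh
# Hypothetical: drop broad log viewing for the auditors group at the org level...
gcloud organizations remove-iam-policy-binding ORGANIZATION_ID \
  --member="group:k8s-infra-gcp-auditors@kubernetes.io" \
  --role="roles/logging.viewer"

# ...and grant log viewing only on specific, non-sensitive projects,
# to smaller per-project groups instead.
gcloud projects add-iam-policy-binding k8s-infra-example-project \
  --member="group:k8s-infra-example-log-viewers@kubernetes.io" \
  --role="roles/logging.viewer"
```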
Thankfully the current set of members can be relatively trusted here, but this violates the intent of the group.
I think removing log access is reasonable. People should join project-specific groups for access to logs, and some of those groups should be small and highly restricted due to PII.
https://github.com/kubernetes/k8s.io/blob/3874ef7b852c0d9de991ad7920ed36686217cb66/groups/sig-k8s-infra/groups.yaml#L232-L246
/sig k8s-infra
/priority important-soon
/kind bug
> removing log access
Perhaps with a long-term ambition to provide canned log access: for example, allow querying Cloud Audit Logs for elevation-of-privilege actions, that kind of thing. Definitely access to historic security finding records (if there are any).
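As a sketch of what a "canned" query might look like (the project ID is a placeholder and the filter is only illustrative, not a vetted detection rule):

```sh
# Hypothetical canned query: surface IAM policy changes (a common
# elevation-of-privilege signal) from a project's Admin Activity audit log.
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName="SetIamPolicy"' \
  --project=k8s-infra-example-project \
  --freshness=30d \
  --limit=20
```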
I think it makes sense to escalate permissions for log access at a more granular level than organization-wide, i.e. by joining groups for specific sensitive projects.
Unfortunately we've made this rather complex, and with limited bandwidth I'm not very confident in the PRs to update this yet.
So for now I think we need to be very restrictive about membership in this group, since it may have PII access, and not openly permit joining.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten