Support team clusters
I think we should have semi-persistent "team" clusters that last longer than the default ones (but are still periodically flushed and reprovisioned), along with a workflow for easily sharing the credentials among a team.
Even among the team, there'd still be an "owner" who might be e.g. actively testing their operator, but they'd be able to release it (e.g. turn the CVO back on) and hopefully leave things in a clean state for the next user. Or, commonly, two members of a team might be concurrently working on a PR.
Another (quite common for me) use case is "I just want to run some read-only commands (as kube:admin) against a recent-ish 4.6 cluster".
Setup:
- A team is defined as one of the GitHub team aliases we have today, e.g. @openshift/openshift-team-red-hat-coreos
- We synchronize membership of the GH team to Slack groups
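To make that sync concrete, here's a rough Go sketch of what a periodic GitHub → Slack membership sync could look like. Everything here is illustrative, not existing cluster-bot code: the `GitHubTeams`/`SlackGroups` interfaces are hypothetical thin wrappers over whatever client libraries the bot already uses, and the GitHub-login-to-Slack-ID lookup is assumed to exist (e.g. backed by LDAP or a static map).

```go
package teamsync

import (
	"context"
	"fmt"
)

// GitHubTeams and SlackGroups are hypothetical thin wrappers around whichever
// GitHub/Slack clients the bot already uses; only the calls the sync needs are
// modeled here.
type GitHubTeams interface {
	// ListMembers returns the GitHub logins of a team,
	// e.g. ("openshift", "team-red-hat-coreos").
	ListMembers(ctx context.Context, org, teamSlug string) ([]string, error)
}

type SlackGroups interface {
	// SetMembers replaces the membership of a Slack user group.
	SetMembers(ctx context.Context, groupID string, slackUserIDs []string) error
}

// SyncTeam copies GitHub team membership into the matching Slack user group.
// loginToSlackID is an assumed lookup; members without a mapping are skipped
// rather than failing the whole sync.
func SyncTeam(ctx context.Context, gh GitHubTeams, slack SlackGroups,
	org, teamSlug, slackGroupID string, loginToSlackID func(string) (string, bool)) error {

	logins, err := gh.ListMembers(ctx, org, teamSlug)
	if err != nil {
		return fmt.Errorf("listing GitHub team %s/%s: %w", org, teamSlug, err)
	}

	var slackIDs []string
	for _, login := range logins {
		if id, ok := loginToSlackID(login); ok {
			slackIDs = append(slackIDs, id)
		}
	}
	return slack.SetMembers(ctx, slackGroupID, slackIDs)
}
```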
UX, talking to @cluster-bot:
<user> teamcluster request
<bot> cluster https://console.team-redhat-coreos.devcluster.openshift.com/ is currently available
<bot> (kubeconfig attachment)
<bot> use `teamcluster done` when you're done
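On the bot side this could be a thin command dispatcher; a sketch of the parsing, with all names hypothetical (the `Manager` interface stands in for whatever ends up tracking cluster ownership):

```go
package teambot

import "strings"

// Reply is what the bot sends back to the requesting user; Kubeconfig is
// attached when non-empty.
type Reply struct {
	Text       string
	Kubeconfig string
}

// Manager is implemented by whatever tracks team clusters; hypothetical here.
type Manager interface {
	Request(team, user string, readOnly bool) Reply
	Done(team, user string) Reply
	Ping(team, user string) Reply
}

// HandleTeamCluster parses "teamcluster <subcommand>" messages from a team
// member and dispatches to the manager.
func HandleTeamCluster(m Manager, team, user, text string) Reply {
	fields := strings.Fields(text)
	usage := Reply{Text: "usage: teamcluster request [ro] | done | ping"}
	if len(fields) < 2 || fields[0] != "teamcluster" {
		return usage
	}
	switch fields[1] {
	case "request":
		readOnly := len(fields) > 2 && fields[2] == "ro"
		return m.Request(team, user, readOnly)
	case "done":
		return m.Done(team, user)
	case "ping":
		return m.Ping(team, user)
	default:
		return usage
	}
}
```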
Scenario where cluster is in use:
<user> teamcluster request
<bot> cluster https://console.team-redhat-coreos.devcluster.openshift.com/ is in use by @miabbott
<bot> use `teamcluster ping` to send a message to @miabbott requesting current credentials
<user> teamcluster ping
Bot to @miabbott:
<bot> user @cgwalters has requested access, reply "yes" to grant, "no <reason>" to indicate refusal
<miabbott> <yes or no>
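Behind that flow there'd need to be some notion of who currently owns the team cluster. A minimal in-memory sketch of the ownership/hand-off state (struct and method names are assumptions, not a definitive implementation):

```go
package teambot

import (
	"fmt"
	"sync"
)

// TeamCluster tracks who currently "owns" a team's semi-persistent cluster.
type TeamCluster struct {
	ConsoleURL string
	Kubeconfig string
	Owner      string // empty when nobody has claimed it
}

type LeaseManager struct {
	mu       sync.Mutex
	clusters map[string]*TeamCluster // keyed by team name
}

// Claim hands the cluster to user if it is free; otherwise it reports the
// current owner so the bot can suggest `teamcluster ping`.
func (l *LeaseManager) Claim(team, user string) (owner string, granted bool, err error) {
	l.mu.Lock()
	defer l.mu.Unlock()
	c, ok := l.clusters[team]
	if !ok {
		return "", false, fmt.Errorf("no cluster provisioned for team %q", team)
	}
	if c.Owner == "" || c.Owner == user {
		c.Owner = user
		return user, true, nil
	}
	return c.Owner, false, nil
}

// Release backs `teamcluster done`, or the owner answering "yes" to a ping;
// the bot can then re-run Claim on behalf of the waiting user.
func (l *LeaseManager) Release(team, user string) error {
	l.mu.Lock()
	defer l.mu.Unlock()
	c, ok := l.clusters[team]
	if !ok || c.Owner != user {
		return fmt.Errorf("%s does not currently own the %s cluster", user, team)
	}
	c.Owner = ""
	return nil
}
```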
Alternative flow for immediate grant of "readonly" access:
<user> teamcluster request ro
<bot> cluster https://console.team-redhat-coreos.devcluster.openshift.com/ is in use by @miabbott
<bot> (kubeconfig attachment)
<bot> You requested read-only access, please avoid changing the cluster
(The bot also sends a message to @miabbott like: <bot> User @cgwalters requested readonly access)
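As written, "readonly" is honor-system. If we ever wanted to actually enforce it, one option would be for the bot to hand out per-user credentials bound to the built-in `view` ClusterRole instead of kube:admin. A client-go sketch, assuming the requester authenticates as a Kubernetes user whose name the bot knows (how the Slack user maps to that identity is left open):

```go
package teambot

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// GrantReadOnly binds the built-in "view" ClusterRole to the given user so
// that their credentials can only read cluster state.
func GrantReadOnly(ctx context.Context, client kubernetes.Interface, user string) error {
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name: fmt.Sprintf("teamcluster-ro-%s", user),
		},
		Subjects: []rbacv1.Subject{{
			Kind:     rbacv1.UserKind,
			APIGroup: rbacv1.GroupName,
			Name:     user, // assumes the kube user name is known to the bot
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "view",
		},
	}
	_, err := client.RbacV1().ClusterRoleBindings().Create(ctx, binding, metav1.CreateOptions{})
	return err
}
```

The stock `view` role deliberately excludes Secrets, which seems like the right default for this use case.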
This sounds like a great idea to me. More and more folks are swarming on cards/tasks/work and this would be helpful!
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
/reopen
/lifecycle frozen
One :robot: closing an issue on a git repo about a different :robot:
@cgwalters: Reopened this issue.