[BUG] kubectl logs to pool-coordinator needs rbac
What happened:
When using `kubectl logs` to get the pod log from pool-coordinator, the following error happens:
```
Error from server (Forbidden): Forbidden (user=openyurt:pool-coordinator:apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log yurt-hub-openyurt-e2e-test-worker2)
```
It seems that we do not have RBAC rules for `openyurt:pool-coordinator:apiserver`. After creating the following ClusterRoleBinding in pool-coordinator, it works:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pool-coordinator-apiserver
subjects:
- kind: User
  name: openyurt:pool-coordinator:apiserver
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
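Note that the binding has to be created against the pool-coordinator's own apiserver rather than the cloud apiserver, since pool-coordinator performs its own authorization; assuming a kubeconfig pointing at the pool-coordinator (path and filename here are placeholders), something like `kubectl --kubeconfig /path/to/pool-coordinator.kubeconfig apply -f pool-coordinator-apiserver-crb.yaml`.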
It's just a temporary solution. We may need more explicit authorization for getting nodes/proxy. Additionally, we should also consider when to create such RBAC rules for `openyurt:pool-coordinator:apiserver` when deploying pool-coordinator, including the ClusterRole and ClusterRoleBinding.
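For reference, a more explicit alternative to `cluster-admin` might look like the sketch below: a least-privilege ClusterRole granting only the `get` on `nodes/proxy` that the error above complains about. The object names here are hypothetical, not what OpenYurt ships:

```yaml
# Sketch only: a scoped ClusterRole instead of cluster-admin.
# Grants just the nodes/proxy access needed when kubectl logs
# requests are proxied through the node.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pool-coordinator-node-proxy  # hypothetical name
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pool-coordinator-node-proxy  # hypothetical name
subjects:
- kind: User
  name: openyurt:pool-coordinator:apiserver
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pool-coordinator-node-proxy
  apiGroup: rbac.authorization.k8s.io
```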
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- OpenYurt version:
- Kubernetes version (use `kubectl version`):
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
/kind bug
@Congrool Do you have any progress on this issue?
I propose to add a sidecar container into the pool-coordinator pod, which takes the responsibility of creating the RBAC rules for `kubectl logs`. It will continuously check whether the RBAC rules exist in the pool-coordinator, to avoid all RBAC rules being lost when etcd restarts.
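For illustration, such a sidecar could be wired into the pool-coordinator pod template roughly as below. This is only a sketch; the container name, image, binary path, and flags are assumptions, not the actual OpenYurt implementation:

```yaml
# Hypothetical sidecar entry in the pool-coordinator pod's containers list.
# Image, binary, and flags are illustrative assumptions.
- name: rbac-ensurer
  image: example.com/openyurt/pool-coordinator-rbac-ensurer:latest
  command:
  - /rbac-ensurer
  # Talk to the pool-coordinator's own apiserver, not the cloud apiserver.
  - --kubeconfig=/etc/pool-coordinator/kubeconfig
  # Re-apply the ClusterRole/ClusterRoleBinding periodically so they are
  # recreated if an etcd restart wipes them.
  - --resync-period=30s
  volumeMounts:
  - name: pool-coordinator-kubeconfig
    mountPath: /etc/pool-coordinator
    readOnly: true
```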
@Congrool it's a good idea to add a sidecar to manage RBAC rules for pool-coordinator, and I think the sidecar doesn't need to check for the RBAC rules at intervals, because users have no permission to delete RBAC rules in pool-coordinator. On the other hand, it makes the sidecar simpler.
> because users have no permission to delete RBAC rules in pool-coordinator.

Good idea. I originally thought that when etcd restarts, all in-memory storage would be reset. But the pod may restart at that time as well, solving the problem.
Hi @rambohe-ch and @Congrool, I can take this work up. I am looking at the code repo; please let me know how you see this implementation.
@Himanshu372 Thanks for your active reply, but I think @Congrool is working on this issue now. @Congrool, do you have any comments?
@Himanshu372 Thanks for your help, I'm still working on this feature. I think it will be available in the next OpenYurt release. I'll let you know when it's ready.
Thank you. I'll have a look at other open issues. If you have any issue which needs attention, please let me know. @rambohe-ch @Congrool
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/reopen
@Congrool: Reopened this issue.
In response to this:

> /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.