[feature request] `NodePool Governance Capability`: YurtHub supports writing data to pool-spirit
What would you like to be added: YurtHub needs the following features:
- When the pool-spirit starts, or migrates and is rebuilt, YurtHub should actively establish a connection with the pool-spirit and send its local data to it.
- When local data is updated, the changed data needs to be synchronized to the pool-spirit. Since multiple nodes may update the same resource at the same time, write conflicts need to be handled.
Notes: For details, please refer to the proposal: https://github.com/openyurtio/openyurt/pull/772
Why is this needed: As described in the proposal (https://github.com/openyurtio/openyurt/pull/772), users can obtain resource data or run kubectl exec/logs at the node-pool level through the pool-spirit. Since the pool-spirit is not guaranteed to always be connected to the cloud, each YurtHub updates the data in the pool-spirit to keep it fresh.
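Since several yurthubs may push updates for the same object concurrently, one way to resolve write conflicts is optimistic concurrency on resourceVersion. Below is a minimal Go sketch of that idea; all type and function names are hypothetical, and Kubernetes officially treats resourceVersion as an opaque string, so a real implementation would rely on etcd transactions or apiserver conflict errors rather than numeric comparison:

```go
package main

import (
	"fmt"
	"strconv"
)

// cachedObj is a simplified stand-in for a cached Kubernetes object;
// the field names here are hypothetical.
type cachedObj struct {
	Key             string
	ResourceVersion string
	Data            string
}

// store models the pool-spirit side cache.
type store map[string]cachedObj

// syncToPool writes obj into the pool cache only if it is newer than the
// stored copy, dropping stale updates when several yurthubs race on the
// same resource.
func syncToPool(s store, obj cachedObj) (written bool, err error) {
	old, ok := s[obj.Key]
	if ok {
		oldRV, err1 := strconv.ParseUint(old.ResourceVersion, 10, 64)
		newRV, err2 := strconv.ParseUint(obj.ResourceVersion, 10, 64)
		if err1 != nil || err2 != nil {
			return false, fmt.Errorf("unparsable resourceVersion")
		}
		if newRV <= oldRV {
			// stale update from another node: drop it
			return false, nil
		}
	}
	s[obj.Key] = obj
	return true, nil
}

func main() {
	s := store{}
	fmt.Println(syncToPool(s, cachedObj{"pods/p1", "5", "v5"})) // accepted
	fmt.Println(syncToPool(s, cachedObj{"pods/p1", "3", "v3"})) // dropped as stale
}
```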
others /kind feature
Hi, I want to take over this work. /assign
This work is almost finished, including:
- adding a client that sends the objects from responses to etcd in the pool-coordinator.
- adding a reverse proxy that redirects requests to the apiserver in the pool-coordinator when the node cannot reach the cloud.
- kubectl logs/exec/auth/api-versions/api-resources/get/list/watch against the pool-coordinator has been tested.
Some problems are still unsolved:
- secrets cannot be cached, which breaks pods that use InClusterConfig.
- unit tests have not been revised.
- resource GC in the pool-coordinator: when an edge component fetches a resource with a GET request, that resource is cached in the pool-coordinator, but when should it be deleted?
- supporting node-scope and pool-scope caches at the same time.
For the ReverseProxy, the processing logic for requests from the edge side is something like:

```go
// pseudocode: a fallthrough inside an if block is not valid Go,
// here it just means "try the next case"
switch {
case cloud.IsHealthy:
	if poolCoordinator.IsHealthy && !LeaderYurthub && IsPoolScopeResource(req.Resource) {
		fallthrough
	}
	if err := cloud.Handle(req); err != nil {
		fallthrough
	}
case poolCoordinator.IsHealthy:
	if err := poolCoordinator.Handle(req); err != nil {
		fallthrough
	}
default:
	localCache.Handle(req)
}
```
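Since that fallthrough placement is not legal Go, the same decision chain can be sketched as a small routing function. This is a minimal sketch assuming the health checks are exposed as booleans; all names below are hypothetical, and the error-triggered fallbacks on Handle failures are omitted for brevity:

```go
package main

import "fmt"

// backend names returned by routeRequest; hypothetical, for illustration only.
const (
	backendCloud = "cloud"
	backendPool  = "pool-coordinator"
	backendLocal = "local-cache"
)

// routeRequest picks which backend should serve a request first, mirroring
// the switch/fallthrough pseudocode: non-leader yurthubs read pool-scope
// resources from the pool-coordinator even when the cloud is healthy.
func routeRequest(cloudHealthy, poolHealthy, isLeader, isPoolScope bool) string {
	if cloudHealthy {
		// Only skip the cloud when the pool-coordinator is healthy,
		// this yurthub is not the leader, and the resource is pool-scope.
		if !(poolHealthy && !isLeader && isPoolScope) {
			return backendCloud
		}
	}
	if poolHealthy {
		return backendPool
	}
	return backendLocal
}

func main() {
	fmt.Println(routeRequest(true, true, false, true))    // non-leader, pool-scope: pool-coordinator
	fmt.Println(routeRequest(true, true, true, true))     // leader: cloud
	fmt.Println(routeRequest(false, false, false, false)) // everything down: local-cache
}
```

The real proxy would still have to call the chosen backend and fall back further on handler errors; this only captures which backend is tried first.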
So I need a way to determine whether this yurthub is the leader yurthub. What do you think? @rambohe-ch
@Congrool please wait a little while for me.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/reopen
This feature is not completely implemented. So we need to reopen it.
@Congrool: Reopened this issue.
In response to this:
/reopen
This feature is not completely implemented. So we need to reopen it.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'll submit a PR for the pool-coordinator cache implementation, but I think we need to rebase pool-coordinator-dev on #882 first.
@rambohe-ch What do you think?
@Congrool agree with you.
/close
@Congrool: Closing this issue.
In response to this:
/close