Karmada does not propagate resources to a new member cluster
What happened:
I just want to propagate my deployment to all member clusters automatically when they join.
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
```
After creating this PropagationPolicy, I find the nginx deployment in cluster-1 and cluster-2.
But when I join a new member cluster, cluster-3, the nginx deployment is not propagated to it.
What you expected to happen:
The deployment should be propagated to the new member cluster.
```go
type Placement struct {
	// ClusterAffinity represents scheduling restrictions to a certain set of clusters.
	// If not set, any cluster can be scheduling candidate.
	// +optional
	ClusterAffinity *ClusterAffinity `json:"clusterAffinity,omitempty"`
}
```
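For context, here is a minimal standalone sketch (not Karmada's implementation; the `ClusterAffinity` and `Placement` types below only mirror the fields used) of what "if not set, any cluster can be scheduling candidate" means in practice: with no affinity, a newly joined cluster is a candidate too.

```go
package main

import "fmt"

// ClusterAffinity mirrors just the cluster-name list used in this sketch.
type ClusterAffinity struct {
	ClusterNames []string
}

// Placement carries the optional affinity, as in the API snippet above.
type Placement struct {
	ClusterAffinity *ClusterAffinity
}

// candidateClusters returns the clusters a workload may be scheduled to.
// When ClusterAffinity is not set, every member cluster is a candidate.
func candidateClusters(p Placement, allClusters []string) []string {
	if p.ClusterAffinity == nil || len(p.ClusterAffinity.ClusterNames) == 0 {
		return allClusters
	}
	allowed := make(map[string]bool)
	for _, name := range p.ClusterAffinity.ClusterNames {
		allowed[name] = true
	}
	var candidates []string
	for _, c := range allClusters {
		if allowed[c] {
			candidates = append(candidates, c)
		}
	}
	return candidates
}

func main() {
	// With no affinity, cluster-3 should be a candidate as soon as it joins.
	fmt.Println(candidateClusters(Placement{}, []string{"cluster-1", "cluster-2", "cluster-3"}))
}
```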
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Karmada version:
- kubectl-karmada or karmadactl version (the result of
kubectl-karmada version
orkarmadactl version
): - Others:
> I just want to propagate my deployment to all member clusters automatically when they join.
Yes, I think this is a reasonable use case. But currently the scheduler doesn't re-schedule when a cluster joins or is removed.
@dddddai I can see there is a TODO at https://github.com/karmada-io/karmada/blob/ed9b838056ee45720abf51b2d603548e6ce922bf/pkg/scheduler/scheduler.go#L384. Do you mean to cover this case there?
Yeah, I was thinking so; there were similar issues: https://github.com/karmada-io/karmada/issues/1644 and https://github.com/karmada-io/karmada/issues/829#issuecomment-1107735202
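As a rough sketch of what closing that TODO could look like (the `reschedulerSketch` type, the `listBindings` field, and the queue wiring are illustrative, not Karmada's actual identifiers), a Cluster add event could simply requeue the existing ResourceBindings so the scheduler reconsiders their placement:

```go
package scheduler

import (
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
	"k8s.io/klog/v2"

	workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
)

// reschedulerSketch holds just enough state for the illustration: the work
// queue feeding the scheduler and a way to list existing ResourceBindings.
type reschedulerSketch struct {
	queue        workqueue.RateLimitingInterface
	listBindings func() ([]*workv1alpha2.ResourceBinding, error)
}

// registerClusterHandler requeues every ResourceBinding whenever a Cluster
// add event is observed, so placement is reconsidered for the new member.
func (r *reschedulerSketch) registerClusterHandler(clusterInformer cache.SharedIndexInformer) {
	clusterInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			bindings, err := r.listBindings()
			if err != nil {
				klog.Errorf("failed to list ResourceBindings: %v", err)
				return
			}
			for _, rb := range bindings {
				key, err := cache.MetaNamespaceKeyFunc(rb)
				if err != nil {
					continue
				}
				r.queue.Add(key)
			}
		},
	})
}
```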
I think the descheduler should be responsible for this case?
> I think the descheduler should be responsible for this case?
cc @Garrybest
I'll pay attention to this feature. @dddddai can you help to lead the effort?
The descheduler's duty is only to evict replicas. This issue focuses on scheduling an object to newly joined clusters.
I think we could always call `s.scheduleResourceBinding(rb)` when the placement is `Duplicated`. Is that OK?
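A minimal sketch of that suggestion (the `needsRescheduleOnJoin` helper is illustrative, not Karmada's API, and how the placement is looked up for each binding is elided): on a cluster join event, only bindings whose placement effectively uses the Duplicated replica scheduling type would be passed to `s.scheduleResourceBinding(rb)` again.

```go
package scheduler

import (
	policyv1alpha1 "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
)

// needsRescheduleOnJoin decides whether a binding's placement should be
// re-scheduled when a new member cluster joins. Duplicated placements are
// expected on every candidate cluster, so they qualify; a nil
// ReplicaScheduling is treated the same way here, matching the report above
// where the deployment was duplicated to cluster-1 and cluster-2 without an
// explicit strategy.
func needsRescheduleOnJoin(placement *policyv1alpha1.Placement) bool {
	if placement == nil {
		return false
	}
	rs := placement.ReplicaScheduling
	return rs == nil || rs.ReplicaSchedulingType == policyv1alpha1.ReplicaSchedulingTypeDuplicated
}
```

The cluster-add handler from the earlier sketch could then call `s.scheduleResourceBinding(rb)` only for bindings that pass this check, instead of requeuing everything.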
Hello, is there any update on this?
@liuchintao we are working on it. @chaunceyjiang sent a PR (#2301) for this. /assign @chaunceyjiang