Don't auto-distribute the custom namespace in which Karmada components are deployed
What happened:
I installed Karmada in the demo namespace. When I added a cluster, the demo namespace was automatically created by the Karmada namespace controller on all member clusters.
karmada ns list (when Karmada is deployed in the demo namespace):

```
NAME              STATUS   AGE
default           Active   123d
karmada-cluster   Active   114d
karmada-es-dev    Active   27h
demo              Active   123d
```
What you expected to happen:
I just want to deploy the Karmada components in the demo namespace; I don't want the demo namespace to be distributed by default.
We should allow users to choose whether to distribute the demo namespace, rather than having it depend on where the Karmada components are deployed.
karmada ns list (when Karmada is deployed in any namespace):

```
NAME              STATUS   AGE
default           Active   123d
karmada-cluster   Active   114d
karmada-system    Active   123d
```
How to reproduce it (as minimally and precisely as possible):
Deploy Karmada in a custom namespace.
/cc @RainbowMango @lonelyCZ @prodanlabs @JarHMJ @likakuli
/assign
FYI: https://github.com/karmada-io/karmada/blob/master/pkg/controllers/namespace/namespace_sync_controller.go#L85
Seems this is another side effect of the namespace controller, in addition to #2512.
We should allow users to choose whether to distribute the demo namespace, rather than depending on where the karmada components are deployed.
So, what's your point? What do you think we should do?
When Karmada is deployed in the demo namespace, listing the namespaces of the Karmada control plane gives the following result:
```
[root@ik8s01]# kubectl --kubeconfig karmada.config get ns
NAME              STATUS   AGE
default           Active   11h
karmada-cluster   Active   24m
karmada-es-ik8s   Active   23m
demo              Active   11h
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h
```
The reserved karmada-system namespace has disappeared. To end users, the demo namespace looks like a regular namespace; there is no difference from other user-created namespaces, so a user could delete demo without realizing its importance. If the demo namespace is gone, Karmada breaks: the Karmada components cannot obtain the leader lock and stay in a crash state until the Karmada operator recreates the special namespace.
End users don't care which namespace Karmada is deployed in, so we can choose a reserved namespace to act as the leader-election namespace.
When deploying the related components, we wouldn't specify the leader-election namespace and would keep its default value:
- kube-system for kube-controller-manager
- karmada-system for karmada-descheduler, karmada-scheduler, karmada-controller-manager.
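As a sketch under assumptions: kube-controller-manager exposes a `--leader-elect-resource-namespace` flag, and I'm assuming here (not confirmed by this thread) that the Karmada components expose a similarly named flag with `karmada-system` as its default. Leaving it at the default, decoupled from the install namespace, could look like this hypothetical Deployment fragment:

```yaml
# Hypothetical Deployment snippet; flag names are borrowed from
# kube-controller-manager and assumed for Karmada components.
containers:
  - name: karmada-controller-manager
    command:
      - /bin/karmada-controller-manager
      - --leader-elect=true
      # Reserved namespace for the leader lock, not the install namespace:
      - --leader-elect-resource-namespace=karmada-system
```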
@lonelyCZ @RainbowMango what do you think ?
I'm not sure I fully understand your points.
I guess you mean Karmada can be installed in any customized namespace (like demo); for now, we don't treat the demo namespace as a reserved namespace.
I don't know how we can identify if a namespace is used to hold Karmada system components. If we can, yes, we can put it into the reserved namespaces too, so that it will be prevented from propagating.
Also, I added this issue to the next community meeting, I'd like to talk more if we can't get a consensus then.
So let's push this forward. Referring to the earlier meeting notes, the best way is to add a label that decides whether to sync the namespace.
My first thought is to add a label to the namespace: controller.karmada.io/namespace-visibility: notSync.
When the namespace controller finds this label, it will not sync the namespace to member clusters.
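A minimal sketch of that idea (hypothetical names: `shouldSyncNamespace` is illustrative, not the controller's actual function, and the label name/value may change):

```go
package main

import "fmt"

// Hypothetical label from the proposal above; the exact name/value may change.
const namespaceVisibilityLabel = "controller.karmada.io/namespace-visibility"

// shouldSyncNamespace sketches the check the namespace controller could run
// before creating the Work objects that propagate a namespace to member clusters.
func shouldSyncNamespace(labels map[string]string) bool {
	return labels[namespaceVisibilityLabel] != "notSync"
}

func main() {
	fmt.Println(shouldSyncNamespace(map[string]string{namespaceVisibilityLabel: "notSync"})) // false
	fmt.Println(shouldSyncNamespace(map[string]string{"team": "demo"}))                      // true
}
```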
cc @RainbowMango @carlory @XiShanYongYe-Chang
Also, referring to other well-known projects, the label should follow the pattern submodule.karmada.io/(resource or config): xxx; here, controller.karmada.io/namespace-visibility: notSync.
Do we have other alternatives? For example, values for controller.karmada.io/namespace-visibility:
- host-plane-only
- host-only
- control-plane-only
There are four cases:
- Create a namespace with the controller.karmada.io/namespace-visibility label: just don't create the Work resource.
- Create a namespace with the controller.karmada.io/namespace-visibility label, then patch it to remove the label: this triggers Reconcile to create the Work resource.
- Create a namespace without the controller.karmada.io/namespace-visibility label: just create the Work resource.
- Create a namespace without the controller.karmada.io/namespace-visibility label, then patch it to add the label: should we delete the Work resource? This would delete the namespace on member clusters, which is a little dangerous.
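The four cases above can be sketched as a single decision helper (hypothetical code, not Karmada's implementation; case 4 is handled conservatively here precisely because deleting member-cluster namespaces is dangerous):

```go
package main

import "fmt"

// Hypothetical decision helper: hasLabel reports whether the namespace
// currently carries controller.karmada.io/namespace-visibility, and
// workExists whether a Work resource was already created for it.
func reconcileAction(hasLabel, workExists bool) string {
	switch {
	case hasLabel && workExists:
		// Case 4: label added after propagation. Deleting the member-cluster
		// namespace is dangerous, so keep the existing Work for now.
		return "keep"
	case hasLabel:
		// Case 1: labeled from creation, skip creating the Work.
		return "skip"
	default:
		// Cases 2 and 3: no label (never set, or removed by a patch),
		// so ensure the Work exists.
		return "ensure-work"
	}
}

func main() {
	fmt.Println(reconcileAction(true, false))  // skip
	fmt.Println(reconcileAction(false, false)) // ensure-work
	fmt.Println(reconcileAction(true, true))   // keep
}
```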
cc @RainbowMango @XiShanYongYe-Chang @carlory
create ns with controller.karmada.io/namespace-visibility label, then patch it to remove the label, so it will trigger Reconcile to create the Work resource.
sorry, I can't get it.
sorry, I can't get it.
While the namespace has this label, Karmada should skip creating the Work (i.e., not sync it to member clusters).
When a patch removes the label, that triggers Reconcile to create the Work (sync it to member clusters).
My suggestion:
```diff
diff --git a/pkg/apis/policy/v1alpha1/well_known_constants.go b/pkg/apis/policy/v1alpha1/well_known_constants.go
index d1c186b1..d7d850da 100644
--- a/pkg/apis/policy/v1alpha1/well_known_constants.go
+++ b/pkg/apis/policy/v1alpha1/well_known_constants.go
@@ -10,3 +10,12 @@ const (
 	// ClusterPropagationPolicyLabel is added to objects to specify associated ClusterPropagationPolicy.
 	ClusterPropagationPolicyLabel = "clusterpropagationpolicy.karmada.io/name"
 )
+
+const (
+	// NamespaceAutoPropagationLabel is added to namespace objects to indicate if
+	// the namespace should be skipped from propagating by the namespace controller.
+	// For example, a namespace with the following label will be skipped:
+	//   labels:
+	//     namespace.karmada.io/skip-auto-propagation: true
+	NamespaceAutoPropagationLabel = "namespace.karmada.io/skip-auto-propagation"
+)
```
We can skip the namespaces with the label namespace.karmada.io/skip-auto-propagation: true in the namespace controller.
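A hedged sketch of how the namespace controller could apply this label, reusing the constant name proposed in the diff above (the reserved-namespace list here is an assumption for illustration, not the controller's actual list):

```go
package main

import (
	"fmt"
	"strings"
)

// NamespaceSkipAutoPropagationLabel mirrors the constant proposed in the diff above.
const NamespaceSkipAutoPropagationLabel = "namespace.karmada.io/skip-auto-propagation"

// namespaceShouldBeSynced sketches the controller-side check; the reserved
// namespace list below is an assumption for illustration.
func namespaceShouldBeSynced(name string, labels map[string]string) bool {
	if name == "karmada-system" || name == "karmada-cluster" || strings.HasPrefix(name, "kube-") {
		return false // reserved namespaces are never propagated
	}
	return labels[NamespaceSkipAutoPropagationLabel] != "true"
}

func main() {
	fmt.Println(namespaceShouldBeSynced("demo", map[string]string{NamespaceSkipAutoPropagationLabel: "true"})) // false
	fmt.Println(namespaceShouldBeSynced("demo", nil))                                                          // true
}
```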
Q: What if a namespace doesn't have the label before, and the label is added after it has been propagated to the member cluster?
I'm not sure we should take action for that; as you mentioned above, removing a namespace is a dangerous operation. I think we can postpone it and wait for more evidence before doing so.
Just to echo my idea here: we can try to clean up the Works as long as no object remains in the same namespace.
/assign
/kind feature
Because of the wrongly formatted PR message, this issue was not closed automatically, so I'm closing it manually. /close
@jwcesign: Closing this issue.
In response to this:
Because of the wrongly formatted PR message, this issue was not closed automatically, so I'm closing it manually. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.