kubefed
Mark a portion of source manifest as "do not sync"
What would you like to be added: The ability to mark a portion of a manifest as "do not override this if it was changed in the managed cluster". For instance, this could be done by introducing an additional field into the Federated specs alongside Overrides, Placement, etc.
Why is this needed: In the current paradigm, kubefed acts as the single source of truth for all deliverables. The problem is that in the real world there are plenty of controllers that also make modifications, and they end up fighting with kubefed.
Kubefed has "Local Value Retention", but it is limited to a specific set of cases, and every new case requires a code change. Since kubefed aims to be a framework for other platforms, this looks like a blocker.
Use-case example to demonstrate the problem: I deliver admission webhooks via kubefed. At the same time, I use the cert-manager CA injector feature in every target cluster to generate and update TLS certificates for the webhooks. So I end up in a state where cert-manager updates the ValidatingWebhookConfiguration.webhooks.clientConfig.caBundle part of the manifest in the target cluster, and the kubefed controller erases it because the source FederatedValidatingWebhookConfiguration manifest doesn't contain it. As a result, the whole cert-manager feature is unavailable.
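To make the conflict concrete, this is roughly what the target cluster holds after cert-manager's CA injector has run (resource and certificate names are placeholders; the inject-ca-from annotation is the usual cert-manager mechanism). On the next reconcile, kubefed rewrites the object from the federated template, which has no caBundle, so the injected value is wiped out:

```yaml
# State in the target cluster after cert-manager's CA injector runs
# (names are placeholders for illustration).
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: my-webhook
  annotations:
    # Tells cert-manager's CA injector which Certificate's CA to inject.
    cert-manager.io/inject-ca-from: default/my-webhook-cert
webhooks:
  - name: validate.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: my-webhook-svc
        namespace: default
      # Injected per cluster by cert-manager; erased by kubefed because the
      # FederatedValidatingWebhookConfiguration template does not set it.
      caBundle: <base64-encoded CA, injected per cluster>
```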
/kind feature
This feature is indeed reasonable, but I'm not sure there is a need to customize retention for every single resource object. If not, we might expose the customization in FederatedTypeConfig, which would be much cleaner.
Hello.
I assume that by "retention" you mean whether the controller propagates changes to member clusters or not? So you're saying (please correct me if I'm wrong) that having the rule in FederatedTypeConfig is preferable to having it in separate FederatedResource manifests, right?
This is a good idea, but let me put another option up for discussion: describe this in the FederatedResource itself, for instance using JSONPath (a rough sketch follows the list of reasons below).
I like it for the following reasons:
- It makes it possible to separate RBAC roles between admins who maintain kubefed and operators who deliver their own manifests - a very good option for building your own workflows.
- It makes deliveries more granular and explicit - when the manifest and the rule for how to propagate it live in one file, it is easier to maintain, support, split into components, etc.
- Standard templating engines can be applied. For instance, we use helm and kustomize with custom plugins to deliver manifests, so we can change the logic on the fly based on requirements.
- I do not think the FederatedResource type is currently overloaded; in fact, there are only a few items on top of Resource there.
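As promised above, a minimal sketch of the per-resource option. The retainFields stanza and its name are invented purely for illustration; nothing like it exists in kubefed today:

```yaml
# Hypothetical sketch only: `retainFields` is an invented field used to
# illustrate the per-resource option; it is not part of the kubefed API.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedValidatingWebhookConfiguration
metadata:
  name: my-webhook
spec:
  template:
    webhooks:
      - name: validate.example.com
        admissionReviewVersions: ["v1"]
        sideEffects: None
        clientConfig:
          service:
            name: my-webhook-svc
            namespace: default
          # caBundle deliberately omitted; cert-manager fills it in
          # per member cluster.
  placement:
    clusterSelector:
      matchLabels: {}
  # Invented stanza, sitting next to placement/overrides: each entry is a
  # JSONPath into the target object whose member-cluster value the sync
  # controller should keep instead of overwriting from the template.
  retainFields:
    - "$.webhooks[*].clientConfig.caBundle"
```

This keeps the retention rule in the same file as the manifest it applies to, which is what the RBAC and templating points above rely on.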
Yeah, I know that putting the retention in FederatedResource is more fine-grained, but let me also list some reasons for the FederatedTypeConfig way:
- In most cases the retention rule applies to every resource of a type, just like the built-in cases, so putting the retention information in every resource is redundant, especially for retention rules on label/annotation keys, which would get verbose.
- For other cases where you only want the retention to apply to some resources, we can make it "conditional", so that the retention happens only if you don't provide a value for that field, like the built-in ServiceAccount type retention does (a sketch of what this could look like follows this list). I think with this we can handle almost all cases? Feel free to point out a case that cannot be covered.
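To make the FederatedTypeConfig option concrete as well, here is a rough sketch. The surrounding FTC fields follow the real schema, but the retention stanza and its Conditional mode are invented for illustration only:

```yaml
# The `retention` stanza below is hypothetical; everything else follows the
# usual FederatedTypeConfig shape.
apiVersion: core.kubefed.io/v1beta1
kind: FederatedTypeConfig
metadata:
  name: validatingwebhookconfigurations.admissionregistration.k8s.io
  namespace: kube-federation-system
spec:
  targetType:
    group: admissionregistration.k8s.io
    version: v1
    kind: ValidatingWebhookConfiguration
    pluralName: validatingwebhookconfigurations
    scope: Cluster
  federatedType:
    group: types.kubefed.io
    version: v1beta1
    kind: FederatedValidatingWebhookConfiguration
    pluralName: federatedvalidatingwebhookconfigurations
    scope: Cluster
  propagation: Enabled
  # Invented stanza: rules here apply to every resource of this type.
  retention:
    fields:
      - path: "webhooks[*].clientConfig.caBundle"
        # Conditional = keep the member-cluster value only when the federated
        # template does not set one, mirroring how the built-in ServiceAccount
        # retention behaves.
        mode: Conditional
```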
@irfanurrehman Would be grateful if you could also share some opinions on this.
I can think of several use cases around this.
- The retention can be specified once and applied globally to all resources of a specific federatedType
- It can be specified on each federated resource
- It can be made even more fine-grained, so that it applies to certain clusters and not to others (a rough fragment of what this might look like is sketched below).
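Purely as an illustration of that last item, the invented retention stanza from the FederatedTypeConfig sketch above could conceivably be narrowed to specific member clusters; the clusters field and the cluster names are placeholders, not an existing or agreed API:

```yaml
# Fragment of the hypothetical `retention` stanza only; `clusters` and the
# cluster names are placeholders for illustration.
retention:
  fields:
    - path: "webhooks[*].clientConfig.caBundle"
      mode: Conditional
      clusters:
        - cluster-eu-1
        - cluster-us-2
```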
I think we can do this in steps, with the goal that all of the above configurations are eventually possible. My recommendation would be to first extend the existing local value retention to make it configurable, so that a user can specify new fields (probably via FTC) without having to change the code. Most current user needs might be solved by this alone.
+1, let's first make it configurable in FTC and leave the rest until there's a real need.
I'm fine with any solution that lets me keep using kubefed; FTC sounds better than nothing :) Now, how do you plan a feature? Is there an ETA for when it could be delivered (roadmap/plan, next release, etc.), or will this wait for an open-source contributor? (Sorry, it's not like I'm pushing or anything, just curious.)
> Now, how do you plan a feature? Is there an ETA for when it could be delivered (roadmap/plan, next release, etc.), or will this wait for an open-source contributor?
This feature would need a KEP first, then implementation. As KubeFed is short of maintainers, we cannot provide an ETA with confidence. It would be most welcome if some contributor would like to work on this, or even take part in the maintenance :) If not, I can carry on with it, but I cannot give a clear ETA for now.
/assign
@afoninsky KEP is here FYI: #1514
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.