
Mark a portion of source manifest as "do not sync"

afoninsky opened this issue 2 years ago • 10 comments

What would you like to be added: The ability to mark a portion of a manifest as "do not override this if it is changed in the managed cluster". For instance, this could be done by adding an extra field to the Federated specs alongside Overrides, Placement, etc.

Why is this needed: In the current paradigm kubefed acts as the single source of truth for all deliverables. The problem is that in the real world there is a bunch of controllers that also make modifications, and they end up fighting with kubefed.

Kubefed has "Local Value Retention" but it is limited for a specific set of cases, and every new case requires code modification. As kubefed aims to be a framework for other platform it looks like a blocker.

Use-case example to demonstrate the problem: I deliver admission webhooks via kubefed. At the same time I'm using the cert-manager CA injector feature in every target cluster to generate/update TLS certificates for the webhooks.

So I end up in a state where cert-manager updates the ValidatingWebhookConfiguration.webhooks.clientConfig.caBundle part of the manifest in the target cluster and the kubefed controller erases it, because the source FederatedValidatingWebhookConfiguration manifest doesn't contain it. Because of this, the whole cert-manager feature becomes unusable.
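Roughly, the source resource looks like the trimmed sketch below (the names are illustrative, and the federated group/version depends on how the type was enabled with `kubefed enable`):

```yaml
# Source manifest on the host cluster. The caBundle is intentionally absent:
# cert-manager's CA injector is expected to fill it in on each member cluster.
apiVersion: types.kubefed.io/v1beta1          # group/version as created by `kubefed enable`
kind: FederatedValidatingWebhookConfiguration
metadata:
  name: my-webhook                            # illustrative name
spec:
  placement:
    clusterSelector: {}                       # propagate to all member clusters
  template:
    metadata:
      annotations:
        # cert-manager CA injector: take the CA from this (illustrative) Certificate
        cert-manager.io/inject-ca-from: my-namespace/my-webhook-cert
    webhooks:
      - name: validate.example.com
        admissionReviewVersions: ["v1"]
        sideEffects: None
        clientConfig:
          service:
            name: my-webhook
            namespace: my-namespace
          # caBundle is omitted here; cert-manager sets it on the member cluster,
          # and the kubefed sync controller then removes it again because the
          # template does not contain the field.
```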

/kind feature

afoninsky avatar Jul 11 '22 08:07 afoninsky

This feature is indeed reasonable, but I'm not sure if there's a need to customize retention for every single resource object. If not, we might expose the customization in FederatedTypeConfig, which would be much cleaner.

zqzten avatar Jul 11 '22 11:07 zqzten

Hello.

I assume by "retention" you mean whether the controller propagates changes to member clusters or not? So you're saying (please correct me if I'm wrong) that having the rule in FederatedTypeConfig is preferable to having it in separate FederatedResource manifests, right?

This is a good idea, but let me present another option for discussion: describe this in the FederatedResource itself, for instance using JSONPath (a rough sketch follows the list below).

I like it for the following reasons:

  1. It makes it possible to separate RBAC roles between admins who maintain kubefed and operators who deliver their own manifests - a very good option for building custom workflows.
  2. It makes deliveries more granular and explicit - when the manifest and the rule for how to propagate it are described in one file, it is easier to maintain, support, split into components, etc.
  3. Standard templating engines can be applied. For instance, we use helm and kustomize with custom plugins to deliver manifests, so we can change the logic on the fly based on requirements.
  4. I do not think the FederatedResource type is currently overloaded; in fact, there are only a few items on top of the Resource there.
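
For illustration only, a per-resource rule might look something like the sketch below; the `retain` field and its JSONPath syntax are purely hypothetical, not an existing kubefed API:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedValidatingWebhookConfiguration
metadata:
  name: my-webhook
spec:
  placement:
    clusterSelector: {}
  # Hypothetical field: anything matched by these JSONPath expressions is left
  # as-is in the member cluster, even when it drifts from the template below.
  retain:
    - path: "$.webhooks[*].clientConfig.caBundle"
  template:
    webhooks:
      - name: validate.example.com
        admissionReviewVersions: ["v1"]
        sideEffects: None
        clientConfig:
          service:
            name: my-webhook
            namespace: my-namespace
```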

afoninsky avatar Jul 11 '22 18:07 afoninsky

Yeah, I know that putting the retention in FederatedResource is more fine-grained, but let me also list some reasons for the FederatedTypeConfig way (rough sketch after the list):

  1. In most cases the retention rule applies to every resource of a type, just like the built-in cases. Putting the retention information into every resource is therefore redundant, especially for retention rules on label/annotation keys, which would grow verbose.
  2. For the other cases where you only want the retention to apply to some resources, we can make it "conditional", so the retention happens only if you don't provide a value for that field, like the built-in ServiceAccount retention does. I think with this we can handle almost all cases? Feel free to point out a case that cannot be covered.
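
As a rough sketch of how that could look at the type level (the `retainFields`/`conditional` part is a hypothetical extension, not part of the current FederatedTypeConfig API; the rest approximates what `kubefed enable` generates):

```yaml
apiVersion: core.kubefed.io/v1beta1
kind: FederatedTypeConfig
metadata:
  name: validatingwebhookconfigurations.admissionregistration.k8s.io
  namespace: kube-federation-system
spec:
  propagation: Enabled
  targetType:
    group: admissionregistration.k8s.io
    version: v1
    kind: ValidatingWebhookConfiguration
    pluralName: validatingwebhookconfigurations
    scope: Cluster
  federatedType:
    group: types.kubefed.io
    version: v1beta1
    kind: FederatedValidatingWebhookConfiguration
    pluralName: federatedvalidatingwebhookconfigurations
    scope: Cluster
  # Hypothetical extension: retain these fields for every resource of this type.
  retainFields:
    - path: "$.webhooks[*].clientConfig.caBundle"
      conditional: true   # retain only when the federated template leaves the field unset
```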

zqzten avatar Jul 12 '22 12:07 zqzten

@irfanurrehman Would be grateful if you could also give some opinions on this.

zqzten avatar Jul 12 '22 12:07 zqzten

I can think of several use cases around this.

  • The retention can be specified once and applied globally to all resources of a specific federatedType
  • It can be specified on each federated resource
  • It can be made even more fine-grained, applying to certain clusters and not to others.

I think we can do this in steps, with the goal that all the above configurations are eventually possible. My recommendation would be to first extend the existing local value retention to make it configurable, such that a user can specify new fields (probably via FTC) without having to change the code. Most of the current user needs might be solved by this alone.
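
Purely for discussion, the per-cluster variant could eventually look something like this (again hypothetical; none of these fields exist today):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedValidatingWebhookConfiguration
metadata:
  name: my-webhook
spec:
  placement:
    clusters:
      - name: cluster-eu-1
      - name: cluster-us-1
  # Hypothetical: retain the field only in cluster-eu-1; in cluster-us-1 the
  # template stays authoritative and any drift is overwritten.
  retain:
    - path: "$.webhooks[*].clientConfig.caBundle"
      clusters: ["cluster-eu-1"]
  template:
    webhooks:
      - name: validate.example.com
```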

irfanurrehman avatar Jul 14 '22 07:07 irfanurrehman

I think we can do this in steps, with the goal that all the above configurations are eventually possible. My recommendation would be to first extend the existing local value retention to make it configurable, such that a user can specify new fields (probably via FTC) without having to change the code. Most of the current user needs might be solved by this alone.

+1, let's first make it configurable in FTC, and leave the rest until there's a real need.

zqzten avatar Jul 14 '22 11:07 zqzten

I'm fine with any solution that allows me to continue using kubefed; FTC sounds better than nothing :) Now, how do you plan the feature? Is there an ETA for when it could be delivered (roadmap/plan, next release, etc.), or will this wait for an open-source contributor? (Sorry, it's not like I'm pushing or anything, just curious.)

afoninsky avatar Jul 19 '22 07:07 afoninsky

Now, how do you plan the feature? Is there an ETA for when it could be delivered (roadmap/plan, next release, etc.), or will this wait for an open-source contributor?

This feature would need a KEP first, then implementation. As KubeFed is short of maintainers, we cannot provide an ETA with confidence. It would be most welcome if some contributor would like to work on this, or even take part in the maintenance :) If not, I can carry on with it, but I cannot give a clear ETA for now.

zqzten avatar Jul 19 '22 12:07 zqzten

/assign

zqzten avatar Jul 31 '22 13:07 zqzten

@afoninsky KEP is here FYI: #1514

zqzten avatar Aug 01 '22 09:08 zqzten

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 30 '22 10:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 29 '22 10:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 29 '22 11:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 29 '22 11:12 k8s-ci-robot