
Ability to template resource urls

Open jeacott1 opened this issue 3 years ago • 5 comments

Is your feature request related to a problem? Please describe.

I have a remote base config intended to be used as a base for new work (i.e. the remote repo doesn't include a set of ready-to-run overlays), with several optional components. Using it remotely requires context, and hence multiple URLs across multiple files, which end up looking a bit like this:

overlays/stage/kustomization.yaml:

```yaml
resources:
- https://github.com/myrepo//manifests/config/?ref=abc
- ./some-local-resource
- ./some-local-resource2

components:
- https://github.com/myrepo//manifests/components/ingress/?ref=abc
- https://github.com/myrepo//manifests/components/storage/?ref=abc
```

overlays/stage/some-local-resource/kustomization.yaml:

```yaml
resources:
- https://github.com/myrepo//manifests/base-overlays/some-local-resource/?ref=abc

patchesStrategicMerge:
- ./some-local-resource-patch.yaml
```

overlays/stage/some-local-resource2/kustomization.yaml:

```yaml
resources:
- https://github.com/myrepo//manifests/base-overlays/some-local-resource2/?ref=abc

patchesStrategicMerge:
- ./some-local-resource2-patch.yaml
```

This is painful to manage: I need to remember to update URLs in many places just to change a base revision number. I could replace `ref=abc` with `ref=${abc}` and pre-process all my files with an external template engine, but if I ever wanted to extend the use of that variable into the remote source, the pre-processing wouldn't reach it. It would be nice to have a mechanism in kustomize that could generally act as build-time variables. Lots of people currently abuse the existing vars feature, pushing up unused ConfigMaps that exist exclusively to feed build-time template variables, in an attempt to DRY up kustomize a bit. Unfortunately, kustomize vars (`$(SOMEVAR)`) don't work in resource URLs. Is there already an alternative?
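For readers unfamiliar with the "unused ConfigMap" pattern mentioned above, it looks roughly like the sketch below (names are illustrative, not from the original repo; and note that `$(BASE_REF)` would still not be substituted inside resource URLs, which is exactly the limitation being described):

```yaml
# kustomization.yaml (illustrative sketch of the vars workaround)
configMapGenerator:
- name: build-params          # hypothetical ConfigMap; never consumed by any workload,
  literals:                   # it exists solely as a var source
  - BASE_REF=abc

vars:
- name: BASE_REF              # usable as $(BASE_REF) in certain resource fields,
  objref:                     # but not in resource URLs
    apiVersion: v1
    kind: ConfigMap
    name: build-params
  fieldref:
    fieldpath: data.BASE_REF
```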

One other option that I think would both speed things up and DRY things up would be to allow a general resources ref that simply pulls the repo down into a specified relative folder; everything could then continue to reference relative local directories, instead of each sub-kustomization requiring its own remote resources definition.

perhaps something like:

```yaml
resources:
- https://github.com/myrepo//manifests/base//../base?ref=abc
```

where the first `//` need not point to a folder containing a kustomization file, and the second `//` marks the virtual local relative folder to drop the remote repo into. (Kustomize would still clone the repo to a tmp folder, but resolve everything as though it were symlinked from the specified location.) In this way my local overlays/stage/some-local-resource/kustomization.yaml could simply use a relative path, as though it had all been developed locally, which would at least completely remove the need for multiple remote references in different files:

```yaml
resources:
- ../base-overlays/some-local-resource
```

thoughts?

jeacott1 avatar Jun 03 '22 02:06 jeacott1

@jeacott1: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jun 03 '22 02:06 k8s-ci-robot

We discussed this at this week's bug scrub, and three points came up regarding enhanced support for the setup described:

  • We won't be able to accept any features that involve templating, as being template-free is a core part of what makes Kustomize, Kustomize: kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. You can also read more about Kustomize philosophy in our eschewed features list, and the Declarative Application Management article that initiated this project. The Vars feature, which superficially resembles templating, causes confusion and is deprecated in favour of Replacements.
  • While remote URLs are supported, we don't recommend heavy use of remote bases/resources for production use cases and as such are hesitant to introduce optimizations for that. Note that we are working on a kustomize localize command that may be relevant here: Localize KEP.
  • Encapsulation: similar to the above, the pattern Kustomize encourages is encapsulated bases, and while supported, individual remote file references violate that pattern and as such we are hesitant to optimize for them.

The first point is the most important: even if we were to be persuaded on the value of the latter two, the solution absolutely must be template-free.
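As a concrete illustration of the Replacements feature mentioned above, a `replacements` entry copies a value from a field of one loaded resource into fields of others (all names below are illustrative; note that replacements operate on resource fields after loading, so they still cannot rewrite resource URLs):

```yaml
# kustomization.yaml (illustrative sketch of a `replacements` entry)
replacements:
- source:
    kind: ConfigMap
    name: app-params              # hypothetical ConfigMap holding the value
    fieldPath: data.image-tag
  targets:
  - select:
      kind: Deployment
      name: my-app                # hypothetical target Deployment
    fieldPaths:
    - spec.template.spec.containers.0.image
    options:
      delimiter: ":"              # split image:tag on ":" ...
      index: 1                    # ... and replace only the tag portion
```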

KnVerey avatar Jun 10 '22 19:06 KnVerey

@KnVerey thanks for the update. I think there might be some scope with the general resource notion I proposed that could accommodate both cases. Run as it is locally and as a remote ref sans templating. The existing url approach does not accommodate both cases without touching many files.

I am also using the vars feature at the moment for various purposes in my kustomize setup, and am sad to see the direction kustomize has decided to take; its removal will render another use case of mine unworkable (without a third-party template engine).

Whilst the template-free notion is a fine ideal, in practice it means attempting to extend fundamentally unstable interfaces, and it requires understanding the details of whatever is being extended. That's a really bad model, and it's the reason programming languages almost universally have, and encourage the use of, interfaces. I don't know how kustomize intends to address this down the line?

For my case, I guess I'm trying to use kustomize in the same sphere I would use terraform. With terraform I can define a common base at a remote URL and trivially extend the parts I need (or offer that base to others to extend/customise). I'm also attempting to deliver configuration to clients using kustomize like this, ideally meaning they see less complexity, the separation of concerns (vendor/client) is clear, and the upgrade path is simple. What I'm hearing from you is that kustomize is not a good fit for this use case?

Cheers

jeacott1 avatar Jun 12 '22 02:06 jeacott1

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 10 '22 02:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 12 '22 16:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 11 '22 16:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 11 '22 16:11 k8s-ci-robot