
Changing 'imagePullPolicy' of all containers in all deployments

Open · matti opened this issue 5 years ago · 54 comments

Originally asked here: https://github.com/kubernetes-sigs/kustomize/issues/412 - but the question is still unanswered.

The following kustomization:

patches:
  - path: imagepullpolicytoalways.yaml
    target:
      kind: Deployment

and this patch file (imagepullpolicytoalways.yaml):

- op: replace
  path: "/spec/template/spec/containers/0/imagePullPolicy"
  value: Always

changes/adds the imagePullPolicy on the first container, but how can I set it on all containers? Using * does not work.

matti avatar Sep 02 '19 13:09 matti

And I can't use the AlwaysPullImages admission controller in GKE.

matti avatar Sep 02 '19 13:09 matti

- op: replace
  path: "/spec/template/spec/containers[]/imagePullPolicy"
  value: Always

results in: doc is missing path: /spec/template/spec/containers[]/imagePullPolicy: missing value

matti avatar Sep 02 '19 13:09 matti

workaround:

patches:
  - path: jsonpatches/first-container-pull-policy-to-always.yaml
    target:
      kind: Deployment
  - path: jsonpatches/second-container-pull-policy-to-always.yaml
    target:
      kind: Deployment
      name: this|that
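
The referenced files are plain JSON-patch documents like the one above; a minimal sketch of what jsonpatches/second-container-pull-policy-to-always.yaml could contain, assuming the second container sits at list index 1:

- op: replace
  path: "/spec/template/spec/containers/1/imagePullPolicy"
  value: Always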

matti avatar Sep 02 '19 13:09 matti

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Dec 01 '19 14:12 fejta-bot

Any thoughts on adding this to the default images transformer?

images:
  - name: postgres
    newName: my-registry/my-postgres
    newTag: v1
    newPullPolicy: IfNotPresent

I am aware that this is not quite the ask of this issue...

antoninbas avatar Dec 12 '19 00:12 antoninbas

/remove-lifecycle stale

antoninbas avatar Dec 12 '19 00:12 antoninbas

workaround:

patches:
  - path: jsonpatches/first-container-pull-policy-to-always.yaml
    target:
      kind: Deployment
  - path: jsonpatches/second-container-pull-policy-to-always.yaml
    target:
      kind: Deployment
      name: this|that

@matti, what does your patch YAML file look like for setting the imagePullPolicy? I am trying to set imagePullPolicy for all of the rendered YAML generated by kompose (which translates docker-compose files into Kubernetes YAML).

jbmcfarlin31 avatar Mar 05 '20 18:03 jbmcfarlin31

Sorry, I kinda stopped using kustomize - it is too hard or impossible to do things like this.

matti avatar Mar 05 '20 19:03 matti

@matti I feel you. I cannot seem to get imagePullPolicy to work at all. I either end up replacing the whole container spec or something else... thinking I might have to implement my own patching utility...

jbmcfarlin31 avatar Mar 05 '20 20:03 jbmcfarlin31

@jbmcfarlin31

You have to apply a patch like this one:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: antrea-agent
spec:
  template:
    spec:
      containers:
        - name: antrea-agent
          imagePullPolicy: IfNotPresent
        - name: antrea-ovs
          imagePullPolicy: IfNotPresent
      initContainers:
        - name: install-cni
          imagePullPolicy: IfNotPresent

It is less than ideal. There should be a way to change the imagePullPolicy with the images transformer.
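
Worth noting: strategic merge patches merge the containers list by its name key, which is why every container has to be listed by name. The patch above would then be referenced from the kustomization via patchesStrategicMerge; a minimal sketch, with an illustrative file name:

patchesStrategicMerge:
  - set-pull-policy.yaml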

antoninbas avatar Mar 05 '20 20:03 antoninbas

@antoninbas do you need to have a specific patch file like that? By specific I mean exact name mappings and so on.

We basically take a compose file, convert with kompose, and then want to apply kustomize patches to that rendered yaml file. The compose files we are converting aren't necessarily stuff we own, so we won't know the names of services and so on.

Ideally we want something like this deployment_patch.yaml:

kind: Deployment
spec:
  template:
    spec:
      containers:
        - imagePullPolicy: Always

That is then applied to all future Deployments generated by kompose.

jbmcfarlin31 avatar Mar 05 '20 20:03 jbmcfarlin31

I tried that a while back but it didn't work for me. I had to enumerate all containers by name.

For your use case, it would be great if @matti's patch worked:

- op: replace
  path: "/spec/template/spec/containers/*/imagePullPolicy"
  value: Always

but the wildcard * does not work here. It is not part of the JSON Patch RFC (https://tools.ietf.org/html/rfc6902) as far as I can tell, which explains why kustomize does not support it.

It would be great if one of the kustomize developers could comment on this issue though, in case there is an alternative solution.

antoninbas avatar Mar 05 '20 20:03 antoninbas

@antoninbas man, that was not the news I was hoping for, lol. So as it stands, without the developers commenting, there is currently no way to patch all imagePullPolicy fields within Deployments, either through kustomize or through the kubectl patch ... command?

jbmcfarlin31 avatar Mar 05 '20 20:03 jbmcfarlin31

Not that I know of. But I have been using kustomize very lightly so I am definitely not an expert.

antoninbas avatar Mar 05 '20 20:03 antoninbas

You can deploy a mutating admission webhook which mutates all the objects live on the cluster and ensures imagePullPolicy is what you need 😅 🌮
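
The mutation logic itself has to be served by a webhook service you run; the registration side could look roughly like this (service name, namespace, and path are hypothetical, and in practice clientConfig also needs a caBundle):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: force-image-pull-policy
webhooks:
  - name: force-image-pull-policy.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: pull-policy-webhook   # hypothetical service implementing the mutation
        namespace: kube-system
        path: /mutate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]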

pre avatar Mar 06 '20 11:03 pre

We are using self-built Docker images in Minikube, so the imagePullPolicy should be Never for local development but Always for all other environments. I did not expect this to be so hard with Kustomize :cry: Using environment variables also seems to be impossible :cry: :cry:

TekTimmy avatar Apr 28 '20 17:04 TekTimmy

Got it working with the mentioned patchesStrategicMerge... My CronJob YAML:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: base-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: base-cronjob
              image: "cronjob:latest"
              imagePullPolicy: "Never"
              args: ['python3 cronjob.py']

My kustomization.yml (the important part is providing the container name):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cronjob
patchesStrategicMerge:
  - |-
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: base-cronjob
    spec:
      schedule: "*/2 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: base-cronjob
                  image: "cronjob:dev"
                  imagePullPolicy: "Always"
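
For what it's worth, running kustomize build against this kustomization should render the CronJob with schedule */2 * * * *, image cronjob:dev, and imagePullPolicy: Always - the strategic merge matches the container in the base by its name, base-cronjob.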

TekTimmy avatar Apr 28 '20 17:04 TekTimmy

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jul 27 '20 17:07 fejta-bot

Sorry, I kinda stopped using kustomize - it is too hard or impossible to do things like this.

Hi @matti, may I ask which other tool you moved to for this kind of templating?

I'm trying similar templating to what's in this issue and I feel exactly the same: either it's not possible or it's very difficult. I think there should be another way.

Thanks!

agascon avatar Jul 29 '20 14:07 agascon

helm. Helm is the clear winner of these tools.


matti avatar Jul 29 '20 17:07 matti

Helm is a great tool, but writing and maintaining a chart is really a pain! Go templating and the shitty YAML indentation are a deadly mix :(

bygui86 avatar Jul 29 '20 18:07 bygui86

I know. That's why I tried kustomize (and kpt), but issues like these just won't work with a declarative approach. Give Helm another try - it also handles removal of resources nicely. (Have you tried what happens when you remove a kustomize resource? You need to delete it manually.)

matti avatar Jul 29 '20 19:07 matti

There is already an issue about resource removal, so I think it will be fixed soon.

I think Kustomize offers lots of really important features, and the community will add more and more within the next months - features that are completely compliant with a declarative approach.

@matti can you give an example of a declarative approach where Kustomize does not work?

bygui86 avatar Jul 29 '20 21:07 bygui86

This issue? And also this "closed" issue here: https://github.com/kubernetes-sigs/kustomize/issues/168#issuecomment-618387782

matti avatar Jul 30 '20 07:07 matti

Sorry, maybe I asked the wrong question. Why do you think this issue prevents Kustomize from being a good fit for a declarative approach?

bygui86 avatar Jul 30 '20 07:07 bygui86

I think there is nothing wrong with kustomize, but rather with the declarative approach itself. In theory it is nice that your YAMLs are in git and have no side effects. And it works for many cases.

Then you need to add something to an array key, or all array keys, or potentially all array keys except one, and it becomes a massive jsonPatch/strategicMergePatch party. And, as in this issue, sometimes it is not possible to solve at all with those.

For another, more concrete example see this: https://github.com/kubernetes-sigs/kustomize/issues/347 - because of it I have a massive amount of duplication.

And kustomize overlays are great for adding, but how do you remove stuff? Often you start structuring your kustomization files, write a bunch of extra kustomization resources, end up with directories and kustomization.yamls all over, and then realize that something cannot be done - which in Helm would be a simple variable or condition.
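
For what it's worth, field-level removal is at least possible with the strategic-merge $patch: delete directive; a minimal sketch removing one container from a Deployment, with hypothetical resource and container names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: sidecar
          $patch: delete

Removing whole resources across overlays, as described above, is the harder part.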

Eventually everything becomes some sort of generator in kustomize where the declarative approach fails, and it just becomes super difficult to read.

Helm, or some other template-based tool, does not provide the same pure properties, but at least you never get stuck on issues like this.

As a long-time user of Terraform: it has the same kind of issues, and now with the latest "generators", like support for count in modules (an issue that was open for years), it might have enough ways to mitigate the downsides of the declarative approach (basically you also ended up copying/pasting/duplicating your Terraform files a lot - just like in the kustomize ingress issue above).

https://github.com/kubernetes-sigs/kustomize/issues/1493#issuecomment-620739587 <- this comment in this thread sums up the kind of problems you realize later: the author had been using kustomize just fine in development, but when they needed to go to production they realized that they needed Helm.

So yes, given enough time kustomize might gain enough generators/patching/stuff, but while you wait, Helm is not slowing you down.

matti avatar Jul 30 '20 07:07 matti

@matti I think what you're really looking for is jsonnet, e.g. through kubecfg.

blaggacao avatar Aug 07 '20 02:08 blaggacao

Thanks for the answers - there is definitely a lot of valuable information in this thread.

What I'm trying to achieve is, for example, having a base template which is later decorated or enhanced with a number of overlays or transformations. For example, if I define a Deployment for a Kafka consumer, I'd have an overlay which automatically adds the default Kafka settings to that Deployment.

For simple stuff Kustomize can support this, but if you need something more complex, as already mentioned, you quickly end up building very complex templates or, even worse, hitting a dead end.
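
The simple version of that decorator pattern maps onto kustomize directly; a minimal overlay sketch, with a hypothetical directory layout and file names:

# overlays/kafka-consumer/kustomization.yaml
resources:
  - ../../base
patchesStrategicMerge:
  - kafka-defaults.yaml

where kafka-defaults.yaml would be a strategic-merge patch adding the default Kafka settings to the consumer Deployment.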

Helm can support this, but it surely won't be as elegant as this base + overlays approach. In the end it is another language, and in cases where only the templating is needed, using Helm could be overkill.

Reading about this these days, I saw some posts proposing to use any standard language to do the templating - maybe Go or Python or even JavaScript - handling JSON internally and finally producing the YAML manifest. What do you think about that?

PS: @blaggacao jsonnet looks promising, I'll check it out for sure.

agascon avatar Aug 07 '20 14:08 agascon

I think @agascon is right: Kustomize is best for easy and medium-complexity stuff, but for complex to really complex cases it is not the best. On the other hand we have Helm and Jsonnet: powerful, but they put another language as a wrapper around YAML files. Honestly I don't like Helm's Go templating, and I don't like the idea of learning another language just to maintain a bunch of YAML files.

What can be achieved with Jsonnet can be done as well (maybe even better) with a language using the Kubernetes client: Go, Python, even JavaScript. So rather than maintaining a new language (Jsonnet), I prefer to use a well-known one (in my case Go). This way I get an even bigger win: using something I already know to deal with less YAML :P

bygui86 avatar Aug 07 '20 14:08 bygui86

I don't think jsonnet is the way to go - it feels too low-level.

What about using Terraform? Before kustomize I used the Terraform Kubernetes provider a lot. Now with Terraform 0.13 most of the classic Terraform problems are gone.

For example, this issue would be super simple to solve with Terraform. Also, writing modules (now that modules finally support count) provides re-usability and fewer lines.

matti avatar Aug 07 '20 17:08 matti