Kustomize: Changing 'imagePullPolicy' of all containers in all deployments

Created on 2 Sep 2019 · 36 comments · Source: kubernetes-sigs/kustomize

originally asked here https://github.com/kubernetes-sigs/kustomize/issues/412 - but the question was left unanswered:

The following kustomization

patches:
  - path: imagepullpolicytoalways.yaml
    target:
      kind: Deployment

and

- op: replace
  path: "/spec/template/spec/containers/0/imagePullPolicy"
  value: Always

changes/adds the imagePullPolicy of the first container, but how do I set it for all containers? Using * does not work.


All 36 comments

And I can't use AlwaysPullImages AdmissionController in GKE

- op: replace
  path: "/spec/template/spec/containers[]/imagePullPolicy"
  value: Always

results in the error "doc is missing path: /spec/template/spec/containers[]/imagePullPolicy: missing value"

workaround:

patches:
  - path: jsonpatches/first-container-pull-policy-to-always.yaml
    target:
      kind: Deployment
  - path: jsonpatches/second-container-pull-policy-to-always.yaml
    target:
      kind: Deployment
      name: this|that
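The contents of those patch files are not shown in the thread; presumably each one is a single-op JSON patch against one concrete container index, something like:

```yaml
# jsonpatches/first-container-pull-policy-to-always.yaml (assumed contents)
- op: replace
  path: "/spec/template/spec/containers/0/imagePullPolicy"
  value: Always
```

with the second file using index 1. Note that such a patch fails for any targeted Deployment that has fewer containers than the index references.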

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Any thoughts on adding this to the default images transformer?

images:
  - name: postgres
    newName: my-registry/my-postgres
    newTag: v1
    newPullPolicy: IfNotPresent

I am aware that this is not quite the ask of this issue...

/remove-lifecycle stale


@matti , what does your patch yaml file look like for setting the imagePullPolicy? I am trying to set the imagePullPolicy values for all rendered yaml generated from kompose (which translates docker-compose into kubernetes yaml).

sorry, I kinda stopped using kustomize - it is too hard or impossible to do things like this.

@matti I feel you. I cannot seem to get imagePullPolicy to work at all. I either end up replacing the whole container spec or something else... thinking I might have to implement my own patching utility...

@jbmcfarlin31

You have to apply a patch like this one:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: antrea-agent
spec:
  template:
    spec:
      containers:
        - name: antrea-agent
          imagePullPolicy: IfNotPresent
        - name: antrea-ovs
          imagePullPolicy: IfNotPresent
      initContainers:
        - name: install-cni
          imagePullPolicy: IfNotPresent

It is less than ideal. There should be a way to change the imagePullPolicy with the images transformer.
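For completeness, such a patch file is referenced from the kustomization via patchesStrategicMerge; the file names here are illustrative:

```yaml
# kustomization.yaml (sketch; resource and patch file names are assumed)
resources:
  - antrea.yml
patchesStrategicMerge:
  - set-pull-policy.yaml
```

Strategic merge matches containers by their name key, which is why every container has to be listed by name in the patch.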

@antoninbas do you need to have a specific patch file like that? What I mean by specific is like exact name mappings and so on?

We basically take a compose file, convert with kompose, and then want to apply kustomize patches to that rendered yaml file. The compose files we are converting aren't necessarily stuff we own, so we won't know the names of services and so on.

We ideally want something just like deployment_patch.yaml:

kind: Deployment
spec:
  template:
    spec:
      containers:
        imagePullPolicy: Always

That is then applied to all future Deployments generated by kompose.

I tried that a while back but it didn't work for me. I had to enumerate all containers by name.

For your use case, it would be great if @matti's patch worked:

- op: replace
  path: "/spec/template/spec/containers/*/imagePullPolicy"
  value: Always

but the wildcard * does not work here. It is not part of the JSON patch RFC (https://tools.ietf.org/html/rfc6902) as far as I can tell, so that explains why kustomize does not support it.
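Concretely, a wildcard would have to be expanded into one op per array index before patching. A sketch of that expansion in Python (plain dicts stand in for a parsed Deployment; this is not kustomize code):

```python
# Sketch of what a hypothetical "/containers/*/..." wildcard would have to
# desugar to under RFC 6902: one op per concrete array index.

def expand_pull_policy_ops(deployment, value="Always"):
    """Emit one JSON-patch op per container and apply it in place."""
    containers = deployment["spec"]["template"]["spec"]["containers"]
    ops = []
    for i, container in enumerate(containers):
        # "add" also overwrites an existing member, while a strict RFC 6902
        # "replace" requires the target member to exist already.
        ops.append({
            "op": "add",
            "path": f"/spec/template/spec/containers/{i}/imagePullPolicy",
            "value": value,
        })
        container["imagePullPolicy"] = value
    return ops

deployment = {"spec": {"template": {"spec": {"containers": [
    {"name": "app", "image": "app:latest"},
    {"name": "sidecar", "image": "sidecar:latest"},
]}}}}
ops = expand_pull_policy_ops(deployment)
```

Since the expansion depends on each Deployment's container count, it cannot be written as one static patch file, which is exactly the limitation hit above.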

It would be great if one of the kustomize developers could comment on this issue though, in case there is an alternative solution.

@antoninbas man, that was not the news I was hoping for, lol. So as it stands currently, without the developers commenting, there is no way to patch all imagePullPolicy fields within deployments, either through kustomize or potentially through the kubectl patch ... command?

Not that I know of. But I have been using kustomize very lightly so I am definitely not an expert.

You can deploy an admission controller webhook which mutates all the objects live on the cluster and ensures imagePullPolicy is what you need 😅 🌮
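The core of such a webhook is small. A sketch of just the patch-construction side in Python (the serving and TLS plumbing a real webhook needs is omitted; function and variable names are illustrative):

```python
import base64
import json

def build_admission_patch(pod_spec, policy="Always"):
    """Build the base64-encoded JSONPatch a mutating admission webhook
    would return to force imagePullPolicy on every (init)container."""
    ops = []
    for field in ("initContainers", "containers"):
        for i in range(len(pod_spec.get(field, []))):
            ops.append({
                "op": "add",
                "path": f"/spec/{field}/{i}/imagePullPolicy",
                "value": policy,
            })
    # AdmissionReview responses carry the patch base64-encoded,
    # with patchType set to "JSONPatch".
    return base64.b64encode(json.dumps(ops).encode()).decode()

pod_spec = {"containers": [{"name": "app"}, {"name": "sidecar"}],
            "initContainers": [{"name": "init"}]}
patch = build_admission_patch(pod_spec)
```

Because the webhook sees each Pod individually, it can enumerate the containers at admission time, sidestepping the static-patch limitation entirely.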

We are using self-built docker images in Minikube, therefore the _ImagePullPolicy_ should be _Never_ for local development but _Always_ for all other environments. I did not expect this to be so hard with Kustomize :cry:
Also, using environment variables seems not to be possible :cry: :cry:

Made it work with the mentioned _patchesStrategicMerge_...
My cronjob YML:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: base-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: base-cronjob
              image: "cronjob:latest"
              imagePullPolicy: "Never"
              args: ['python3 cronjob.py']

My _kustomization.yml_ (what matters is providing the container's __name__):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cronjob
patchesStrategicMerge:
  - |-
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: base-cronjob
    spec:
      schedule: "*/2 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: base-cronjob
                  image: "cronjob:dev"
                  imagePullPolicy: "Always"


sorry, I kinda stopped using kustomize - it is too hard or impossible to do things like this.

Hi @matti, might I ask which other tool you moved for this kind of templating?

I'm trying templating similar to this issue and feel exactly the same: either it's not possible or it's very difficult. I think there should be another way.

Thanks!

helm. Helm is the clear winner of these tools.


Helm is a great tool, but writing and maintaining a chart is really a pain!
Go-templating and the shitty yaml indentation are a deadly mix :(

I know. That's why I tried kustomize (and kpt), but issues like these just won't work with a declarative approach. Just give helm another try; it also handles removal of resources nicely (have you tried what happens when you remove a kustomize resource? You need to delete it manually).

There is already an issue about resource removal, so I think it will be fixed soon.

I think that Kustomize offers lots of really important features, and the community will add more and more within the next months. Features that are completely compliant with the declarative approach.

@matti can you give an example of a declarative approach where Kustomize does not work?

This issue? And also this "closed" issue here: https://github.com/kubernetes-sigs/kustomize/issues/168#issuecomment-618387782

Sorry, maybe I asked the wrong question.
Why do you think this issue prevents Kustomize from being a good fit for a declarative approach?

I think that there is nothing wrong with kustomize, but rather wrong with the declarative approach itself. In theory it is nice that your yamls are in git and they don't have side effects. And it works for many cases.

Then you need to add something to an array key, or all array keys, or potentially all array keys except one, and it becomes a massive jsonPatch/strategicMergePatch party. And, as in this issue for example, it is not possible to solve it with those.

For another more concrete example see this: https://github.com/kubernetes-sigs/kustomize/issues/347 - because of this I have massive amount of duplication.

And kustomize overlays are great for adding, but how do you remove stuff? Often you start structuring your kustomization yaml files, write a bunch of extra kustomization resources, and have a lot of directories and kustomization.yamls all over, and then you realize that something cannot be done which in "helm" would be a simple variable / condition.

Eventually everything becomes some sort of Generator in kustomize where the declarative approach fails, and this just becomes super difficult to read.

Helm, or some other template-based tool, does not provide the same pure properties, but at least you never get stuck on issues like this.

As a long-time user of Terraform, it has the same kind of issues, and now with the latest "generators", like support for count in modules (an issue open for yeeeaars), it might have enough ways to mitigate the downsides of the declarative approach (basically you also ended up copying/pasting/duplicating your terraform files a lot - just like in the kustomize&ingress issue above).

https://github.com/kubernetes-sigs/kustomize/issues/1493#issuecomment-620739587 <-- this comment in this thread sums up the kind of problems you realize later: the author has been using kustomize just fine in development, but when they need to go to production they realize that they need "helm"

So yes, when given enough time kustomize might have enough generators/patching/stuff, but while waiting for it "helm" is not slowing you down.

@matti I think what you're really looking for is jsonnet, eg through kubecfg.

Thanks for the answers, definitely there is a lot of valuable information in this thread.

What I'm trying to achieve is, for example, having a base template which is later decorated or enhanced with a number of overlays or transformations. For example, if I define a deployment for a Kafka consumer, have an overlay which would automatically add the default required Kafka settings to that deployment.

For simple stuff Kustomize can support this, but if you need more complex stuff, as already mentioned, you will quickly end up building very complex templates or, even worse, hitting a dead end.

Helm can support this, but it surely won't be as elegant as this base + overlays approach. In the end it is another language, and in cases where only the templating is needed, using helm could be overkill.

These days reading about this, I saw some posts proposing using any standard language to do the templating, maybe using go or python or even javascript, handling json internally and finally producing the yaml manifest. What do you think about that?

PS: @blaggacao jsonnet looks promising, I'll check it for sure.

I think that @agascon is right: Kustomize is best for easy and medium-complexity stuff, but for complex to really complex cases it is not.
On the other hand we have Helm and Jsonnet: powerful, but they put another language as a wrapper around yaml files.
Honestly I don't like Helm's go templating, and I don't like the idea of learning another language just to maintain a bunch of yaml files. What can be realized with Jsonnet can be done as well (maybe even better) with a language using the Kubernetes client: golang, python, even javascript.
So rather than maintaining a new language (Jsonnet), I prefer to use a well-known one (in my case golang).
This way I can report an even bigger win: using something I already know to deal with less yaml :P

I don't think jsonnet is the way to go; it feels too low level.

What about using Terraform? Before kustomize I used the terraform kubernetes provider a lot. Now with terraform 0.13 most of the classic terraform problems are gone.

For example, this issue would be super simple to solve with terraform. Also writing modules (now that modules finally support count) provides re-usability and fewer lines.

Kustomize is best for easy and medium-complexity stuff

I wouldn't say that this is true: you can do extremely complex stuff with kustomize, but then you hit these kinds of hard issues that just cannot be done with kustomize, even if they are "easy" or "medium".

The problem with kustomize is that you cannot express some things at all (without a massive amount of repetition), and you don't know those issues beforehand, so it is just safer to use something that has the expressiveness of helm, pulumi, terraform, etc.

Jsonnet is lazily and hermetically evaluated, similar to nix. Those are handy features for configuration management. The learning curve is the unhandy part. If we rule out the bias of knowledge, ceteris paribus a purely functional, lazily evaluated language paradigm has a good number of aces up its sleeve.

Yet bias of knowledge is a real thing, most of the time.

Had jsonnet/nix's paradigms been applied to terraform, these problems would not have been there - by design, as opposed to "by workaround/fix".

I'd fall back to jsonnet when kustomize cannot be tamed to do the job.

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/remove-lifecycle rotten

FWIW, for a related problem, my Makefile hack for always pulling any image that is not versioned (tagged devel here; it could be :latest for others, or anything not beginning with v, etc.):

kustomize build config/default | awk '/image:.*:devel/{r=1}/imagePullPolicy/{gsub(":.*",": Always");print;r=0;next}{print}' | kubectl apply -f -
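The same post-processing as a small Python filter, for readability (a sketch with the same assumption as the awk version: an imagePullPolicy line follows each matching image line in the rendered output):

```python
import re

def force_pull_always(rendered, tag="devel"):
    """Line filter over `kustomize build` output: after an image line with
    the given tag, rewrite the next imagePullPolicy line to Always."""
    out, pending = [], False
    for line in rendered.splitlines():
        if re.search(rf"image:.*:{tag}\b", line):
            pending = True
        elif pending and "imagePullPolicy" in line:
            # Replace everything after the key, preserving indentation.
            line = re.sub(r":.*$", ": Always", line)
            pending = False
        out.append(line)
    return "\n".join(out)

rendered = """\
    - name: app
      image: registry.example/app:devel
      imagePullPolicy: IfNotPresent
"""
print(force_pull_always(rendered))
```

Like the awk one-liner, this leaves images without the matching tag (and any imagePullPolicy lines that never appear) untouched.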

Matti's critique is accurate. The deficiencies requiring external workarounds become obvious when you try to use kustomize seriously, and you can waste time trying to discover a way to do something that turns out to be unreasonably difficult or impossible. The flaw that most surprised me was not being able to remove things.

We just got hit by this one. Would be great if there were a convenient way to set imagePullPolicy per image or a working multi-object patch.

The original problem in this issue is due to a limitation of JSON patch (RFC 6902): the path cannot express "all items in an array".

What I can suggest is creating a kustomize plugin to modify the resources.
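As a sketch, the core transform such a plugin would run, in Python (plain dicts stand in for parsed resources; the ResourceList I/O wrapping a real plugin needs is omitted, and the function name is illustrative):

```python
def set_pull_policy(resources, policy="Always",
                    kinds=("Deployment", "DaemonSet", "StatefulSet")):
    """Set imagePullPolicy on every container of every matching resource,
    with no need to know container names or counts up front."""
    for res in resources:
        if res.get("kind") not in kinds:
            continue
        pod_spec = res["spec"]["template"]["spec"]
        for field in ("initContainers", "containers"):
            for container in pod_spec.get(field, []):
                container["imagePullPolicy"] = policy
    return resources

resources = [
    {"kind": "Deployment", "spec": {"template": {"spec": {"containers": [
        {"name": "app"}, {"name": "sidecar"}]}}}},
    {"kind": "Service", "spec": {"ports": []}},
]
set_pull_policy(resources)
```

Because the plugin iterates over the parsed resources, it trivially covers "all containers in all deployments" - exactly what neither JSON patch nor strategic merge can express generically.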
