Requiring a rolling update for every config change isn't always an option.
For example, most Prometheus deployments are a single instance, and recreating the pod on every config change would cause "holes" in the time series metrics.
nginx-ingress-controller is able to reload its ConfigMap on the fly, but that ConfigMap has to have a fixed name so it can be passed through the nginx-ingress-controller flags. For example:
containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
  args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx
  - --annotations-prefix=nginx.ingress.kubernetes.io
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/mandatory.yaml#L249
@yanc0 You can declare a variable reference and pass it down to the container args.
In kustomization.yaml, declare the variable for the ConfigMap:
vars:
- name: CONFIGURATION
  objref:
    kind: ConfigMap
    name: nginx-configuration
    apiVersion: v1
Then update your container args as:
containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
  args:
  ...(truncated)
  - --configmap=$(POD_NAMESPACE)/$(CONFIGURATION)
  ...(truncated)
You can pass down the names of other resources the same way, such as tcp-services-configmap and publish-service.
For more information, please take a look at a detailed demo: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/wordpress/README.md
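Putting the pieces above together, a minimal kustomization.yaml might look like the following. This is an illustrative sketch, not from the thread: the file names and the deployment resource are assumptions.

```yaml
# kustomization.yaml -- illustrative sketch; file names are assumptions
resources:
- deployment.yaml            # contains the container args shown above

configMapGenerator:
- name: nginx-configuration  # kustomize appends a content-hash suffix on build
  files:
  - nginx.conf

vars:
- name: CONFIGURATION        # resolves to the hashed ConfigMap name at build time
  objref:
    kind: ConfigMap
    name: nginx-configuration
    apiVersion: v1
```

With this in place, `$(CONFIGURATION)` in the container args is substituted with the generated (hash-suffixed) ConfigMap name on every build.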
It seems to me that while the hash suffix is often a good thing and something you want, there are situations where you don't want that feature. Maybe it could be made optional in some way, for instance with an annotation.
Another use case is deploying a secret that will be used by something outside the kustomization. For example, we deploy a wildcard TLS certificate to some namespaces, that's used by several ingresses deployed separately. The name has to be static for that to work.
So I'm currently looking into this on my fork. My current solution adds a renaming 'behavior' to generated ConfigMaps and Secrets:
configMapGenerator:
- name: ldap-configmap
  renaming: none
  files:
  - env.startup.txt
This approach works for ConfigMaps and Secrets created via the generators, but it does not disable the renaming of ConfigMaps and Secrets defined via K8s YAML.
I'm thinking a global flag could change the default renaming behavior to 'none', and users who want the hash-suffix renaming could then specify:
configMapGenerator:
- name: ldap-configmap
  renaming: hash_suffix
  files:
  - env.startup.txt
@mattatcha Does this align with your original thoughts?
@Liujingfang1 Are you able to have a quick look at my initial commit? I would appreciate feedback on the approach before I write tests and documentation.
I'm still considering whether we need the option of disabling the hash for configmaps. The rolling update triggered by a configmap change guarantees the new configmap is picked up immediately, and the nginx-ingress-controller example can be solved by setting vars. For the case of a configmap/secret used outside the kustomization, is it possible to manage them inside the same kustomization?
@pwhittlesea If we decide to add this option, I think we can do it a little differently. Currently you add renaming to each configMapGenerator/secretGenerator entry plus a flag to kustomize build. In general, adding flags to kustomize build is not a good idea, since the behavior then can't be fully determined by looking at the manifest files. Consider an application that involves several configmaps or secrets: do we want some configmaps to have a hash and others not? Do we want them to gain or drop the hash in sync? Maybe we can use a global flag inside kustomization.yaml to control whether a hash is added to all the configmaps and secrets it manages.
@Liujingfang1 I agree that a global flag inside the kustomize file would be preferable. Are you thinking something at the same level as the namePrefix option?
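One hypothetical shape for such a global option, sitting at the same level as namePrefix, could look like this. The field name and layout here are illustrative only, not a decided API:

```yaml
# kustomization.yaml -- hypothetical sketch; field names are illustrative
namePrefix: myapp-

generatorOptions:
  disableNameSuffixHash: true   # would turn off the hash suffix for all generators

configMapGenerator:
- name: ldap-configmap
  files:
  - env.startup.txt
```

A single kustomization-level switch like this keeps the build fully determined by the manifest files, addressing the concern above about kustomize build flags.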
I would also like to put forward my reason for needing this feature:
I do use the rolling update for most of my configmaps, however my company provides our platform to customers with the ability for them to configure certain parts of the system. Due to certain regulatory requirements we have to ensure that the configuration is precisely documented and, as we use configmaps to provide this configuration, configmaps with random names are not suitable for us. Disabling this for some config maps would allow me to transition to kustomize without having to address the much larger issue.
I feel that, although getting rolling updates for free when using kustomize is great, it is functionality that not all users may want.
However, I also accept that I am probably in a fringe set of users, and that kustomize (rightfully so) aims to be an opinionated tool that encourages best practices for deployment.
For the case using configmap/secret outside kustomization, is it possible to manage them inside the same kustomization?
No, the secret is used by various application deployments from different repositories.
@mxey @pwhittlesea Which do you use more often for configmaps: a configmap.yaml file, or a configMapGenerator? What if we don't append the hash to a configmap read from a YAML file?
@Liujingfang1 That would work for me. Maybe a flag in the kustomization file defaulting to the current behaviour? This would effectively allow my team to treat ConfigMaps and Secrets defined in YAML as 'constant', versus the managed ones in the generator sections that provide rolling-update support.
@Liujingfang1 that would work for me as well.
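The proposal being agreed on above can be sketched as follows: only generator-produced names would get a hash suffix, while names from plain YAML resources would stay static (file names here are illustrative assumptions):

```yaml
# kustomization.yaml -- illustrative sketch of the proposed behavior
resources:
- wildcard-tls-secret.yaml   # defined in plain YAML: name stays static,
                             # so external ingresses can reference it

configMapGenerator:
- name: app-config           # generated: name gets a hash suffix,
  files:                     # so changes trigger rolling updates
  - app.properties
```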
This is the PR #159 for adding the hash only to configmaps/secrets created from generators.
I'd still like to see this. In my case, I have some objects in a separate repo that reference well known secrets. Since we keep kustomize configurations in a repository, we will either end up managing these secrets separately (eg: { kustomize ... ; decrypt ... ; } | kubectl apply -f-) or having to run a script in the repository that we keep the configurations in, in order to have yamls on disk.
Regardless of specific use cases, IMO the current behavior is surprising and the opposite of what it should be. When generating a secret, name should equal what you see when you do a kubectl get secrets. A new field, generateName, should be what you use if you want a name with a hash appended. And generateName should not involve any dash magic: it should just append a version hash. This would make kustomize more consistent with the current Kubernetes API.
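For reference, the Kubernetes API itself already draws this distinction: metadata.name is the exact object name, while metadata.generateName asks the API server to append a suffix. The comment above proposes an analogous split for kustomize generators, though the suffix semantics would differ (a deterministic content hash rather than the server's random string):

```yaml
# Plain Kubernetes API semantics, shown for comparison
apiVersion: v1
kind: Secret
metadata:
  name: wildcard-tls           # exact name, stable across applies
---
apiVersion: v1
kind: Secret
metadata:
  generateName: wildcard-tls-  # server appends a random suffix on create
                               # (works with create, not apply)
```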
For the records: see also #489 , which is about implementing a global options for generators (including disabling the hash suffix)