Helm 3 doesn't automatically create namespaces - see https://v3.helm.sh/docs/faq/#automatically-creating-namespaces
How can we solve this with helmfile, so that we don't have to manually create them?
@costimuraru Good point!
I think incubator/raw would be handy for this use-case:
```yaml
releases:
  - name: namespaces
    chart: incubator/raw
    values:
      - resources:
          - apiVersion: v1
            kind: Namespace
            metadata:
              name: myns1
            spec:
          - apiVersion: v1
            kind: Namespace
            metadata:
              name: myns2
            spec:
```
Thanks, @mumoshu. Yes that also crossed my mind, though we're seeing some issues with incubator/raw and helm3. Opened an issue in helm to track it: https://github.com/helm/helm/issues/6626
We've been doing this with hooks for the moment:
```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: system-dashboard
```

```yaml
# helmfile.yaml
releases:
  - name: kubernetes-dashboard
    namespace: system-dashboard
    chart: helm/kubernetes-dashboard
    version: 2.0.0-internal.0
    values:
      - values.yaml.gotmpl
    hooks:
      - events: ["prepare"]
        showlogs: true
        command: "kubectl"
        args: ["apply", "-f", "namespace.yaml"]
```
You can definitely improve on that but for now that's what we've been doing.
@mikesplain That's neat. Thanks for sharing!
@mikesplain I do something similar for cert-manager; something to bear in mind is that the kubectl command in the hook _isn't guaranteed_ to use the same context as the kubeContext declared in the helmfile.yml; it uses whatever the current kubectl context of the user's environment is. This can make a difference in setups that use helmfile for multiple environments, where the user forgets to kubectl config use-context into the cluster their helmfile is targeting. It's a bit of an edge case, but it has implications. With that said, so far using the hook has worked fine. I'm just careful about which kubectl context I'm in 👍
We've also discussed a couple more ways we could solve this:

1) Have a create-namespaces-chart which takes a list of namespaces and generates the YAML files for it. (Note: the flattened original omits the key holding these name/value pairs; `set:` below is the standard helmfile key for them.)

```yaml
releases:
  - name: "create-namespaces-chart"
    chart: "adobe/create-namespaces-chart"
    set:
      - name: "namespaces[0]"
        value: "logging"
      - name: "namespaces[1]"
        value: "monitoring"
      - name: "namespaces[2]"
        value: "velero"
```
2) Update the upstream charts to create the namespace themselves
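Option 2 could look like an extra, opt-in template added to each upstream chart. This is only a sketch of the idea; the `createNamespace` value name is hypothetical:

```yaml
# templates/namespace.yaml (hypothetical): rendered only when the
# deployer opts in with --set createNamespace=true
{{- if .Values.createNamespace }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Namespace }}
{{- end }}
```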
@costimuraru I don't think this is a good way to create namespaces. If you accidentally delete the chart, all your work will be gone :scream: And the process of recovering all your apps will be a nightmare..
> I don't think this is a good way to create namespaces. In case you accidentally delete chart, all your work will be gone
@volym3ad Good point. Definitely this needs to be handled beforehand, or directly by helmfile.
@scottrigby Thanks for sharing! I'll definitely check helm-namespace and grab the idea.
Using a pre-install and pre-upgrade hook can also prevent the deletion problem
@mikesplain Good point! Then it seems to be the best option we can have today.
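As a sketch of that approach: the Namespace manifest inside the chart can carry the standard Helm hook annotations, so it is created before install/upgrade and, being a hook resource, is not deleted with the release (the namespace name here is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myns
  annotations:
    # run before install and before upgrade
    "helm.sh/hook": pre-install,pre-upgrade
    # never delete this resource, even on helm delete
    "helm.sh/resource-policy": keep
```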
My immediate idea was to chain any helm calls from Helmfile with helm namespace, but that seems like overkill.
The next idea was to change Helmfile to call kubectl or client-go to create the namespace. But that wouldn't work for users who want to manage namespaces externally to the main helmfile.yaml.
How about adding a flag to optionally enable creating K8s namespaces with kubectl? It would look like helmfile apply --create-namespaces. I'd choose shelling out to kubectl, as introducing the huge client-go dependency (and inflating the binary size) doesn't pay.
> How about adding a flag to optionally enable creating K8s namespaces with kubectl? It should look like helmfile apply --create-namespaces.
That could work
Please add this.
incubator/raw has one drawback:
helmfile delete deletes it first, which deletes all the other releases but not all of their resources.
This makes a big mess of your cluster.
@maver1ck Which feature do you want specifically and how helmfile delete should work after that?
The worst part is that the feature was removed in Helm 3 without notice, and the CLI still accepts the namespace param... This is very confusing; I only learnt about it while moving to helm3...
I don't really like the idea of a prehook with kubectl apply as it has a few drawbacks.
Here is what I ended up doing, similar to what was suggested by @mumoshu:
```yaml
values:
  - namespaces:
      - test1
      - test2

environments:
  {{ .Environment.Name }}:
    values:
      - global/global-values_{{ .Environment.Name }}.yaml
---
{{ $kubecontext := printf "arn:aws:eks:%s:%s:cluster/%s" $.Values.global.region $.Values.global.account $.Values.global.clusterName }}
releases:
  - name: namespaces
    chart: devops/namespace/chart
    version: 1.0.0
    kubeContext: {{ $kubecontext }}
    namespace: kube-system
    values:
      - namespaces:
{{ range $key := $.Values.namespaces }}
          - {{ . }}
{{ end }}
```
namespace.yaml template within the namespace chart:
```yaml
{{- range $key := .Values.namespaces -}}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ . }}
  labels:
    {{- include "default.labels" $ | nindent 4 }}
  annotations:
    helm.sh/resource-policy: keep
{{- end -}}
```
This will ensure that when you are running `helmfile --environment <env_name> --selector namespace=kube-system apply` the namespaces will always be created in the correct kube context.
I've added the `helm.sh/resource-policy: keep` annotation to make sure the namespace is not accidentally deleted by helm.
Let me know what you think!
> I've added the `helm.sh/resource-policy: keep` annotation to make sure the namespace is not accidentally deleted by helm.
That would be the missing detail that would make namespaces managed by helm an option!
Will look into it after new year. Until now I hadn't thought of anything better than bash/kubectl glue that takes the namespaces defined in the helmfile yaml as a starting point and runs before helmfile.
Added a generic helm chart for what @dudicoco covers here. I figured I'd also cover the other mentioned alternatives and put some helmfile-specific example code out there. I still need to test with helm 2 as well, but I'd think it should work.
The Helm team is working on a helm cli flag that would allow the deployer to choose to have Helm 3 create namespaces. See:
https://github.com/helm/helm/issues/6794
and
https://github.com/helm/helm/pull/6795
Looks like --create-namespace will be available in helm v. 3.2
https://github.com/helm/helm/pull/7648
Helmfile is going to create namespaces by default thanks to --create-namespace. See #1140
For fun, add a pre-hook 🤔

```yaml
releases:
  - name: example-app
    namespace: examples
    chart: anychart
    hooks:
      - events: ["prepare"]
        showlogs: true
        command: kubectl
        args: ["create", "ns", "{{`{{ .Release.Namespace }}`}}"]
```
hehehe.. it works!
@abdennour although createNamespace works as expected with helm3, it may be better to add the k8s context to your hook:

```yaml
args: ["--context", "{{`{{.Release.KubeContext}}`}}", "create", "ns", "{{`{{ .Release.Namespace }}`}}"]
```
Does helmfile check the hook exit code? `create ns` will fail if the namespace already exists.
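One possible way around that failure mode (a sketch, not something from the thread): make the hook idempotent by rendering the Namespace with a client-side dry-run and piping it into `kubectl apply`, which succeeds whether or not the namespace already exists (requires a kubectl version supporting `--dry-run=client`):

```yaml
hooks:
  - events: ["prepare"]
    showlogs: true
    command: sh
    args:
      - "-c"
      # render the Namespace locally, then apply it; apply is idempotent
      - "kubectl create ns {{`{{ .Release.Namespace }}`}} --dry-run=client -o yaml | kubectl apply -f -"
```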