Given the following Service template:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}
  namespace: {{ .Release.Namespace }}
  annotations:
  {{- if .Values.svc.annotations }}{{- range $name, $value := .Values.svc.annotations }}{{- if not (empty $value) }}
    {{ $name }}: {{ $value }}
  {{- end }}{{- end }}{{- end }}
  labels:
  {{- if .Values.labels }}{{- range $name, $value := .Values.labels }}{{- if not (empty $value) }}
    {{ $name }}: {{ $value }}
  {{- end }}{{- end }}{{- else }}
    name: {{ .Values.name }}
  {{- end }}
spec:
  type: {{ .Values.svc.type }}
  ports:
  {{- range .Values.svc.ports }}
  - name: {{ .name }}
    port: {{ .port }}
    protocol: {{ .protocol }}
    targetPort: {{ .targetPort }}
  {{- end }}
  selector:
  {{- if .Values.labels }}{{- range $name, $value := .Values.labels }}{{- if not (empty $value) }}
    {{ $name }}: {{ $value }}
  {{- end }}{{- end }}{{- else }}
    name: {{ .Values.name }}
  {{- end }}
{{- end }}
With the following helmfile:
repositories:
- name: "foo"
  url: "https://bar"
helmDefaults:
  timeout: 1800
environments:
  staging:
    values:
    - {{ env "STAGING_YAML" | default "values/staging.yaml" }}
releases:
- name: {{ .Environment.Name }}
  namespace: {{ .Namespace }}
  chart: foo/mychart
  version: !!string 1.2.3
  atomic: true
  wait: true
  labels:
    app: {{ .Environment.Name }}
  values:
  - name: {{ .Environment.Name }}
    (...)
    svc:
      enabled: true
      type: LoadBalancer
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "secret-arn"
        service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
        service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 3000
      - name: https
        port: 443
        protocol: TCP
        targetPort: 3000
When executing helmfile apply, we get the following Go error:
coalesce.go:199: warning: destination for env is a table. Ignoring non-table value rendered_from_helmfile
coalesce.go:199: warning: destination for livenessProbe is a table. Ignoring non-table value <nil>
Error: unable to build kubernetes objects from release manifest: unable to decode "": resource.metadataOnlyObject.ObjectMeta: v1.ObjectMeta.Annotations: ReadString: expects " or n, but found t, error found in #10 byte of ...|enabled":true,"servi|..., bigger context ...|o/aws-load-balancer-connection-draining-enabled":true,"service.beta.kubernetes.io/aws-load-balancer-|...
Tested on macOS and Linux. Note that all annotations are defined with double quotes, yet the Go error shows them without quotes. The diff shows the resource as new 100% of the time. In my example, Services are currently the only resources with annotations.
Are you sure this is due to Helmfile, not Helm?
bigger context ...|o/aws-load-balancer-connection-draining-enabled":true,"service.beta.kubernetes.io/aws-load-balancer-|...
For me, this seems to indicate that a newline is missing between each {{ $name }}: {{ $value }} in:
{{- if .Values.svc.annotations }}{{- range $name, $value := .Values.svc.annotations }}{{- if not (empty $value) }}
    {{ $name }}: {{ $value }}
{{- end }}{{- end }}{{- end }}
Are you sure this is due to Helmfile, not Helm?
That might be a possibility, since the YAML is correctly rendered using helmfile diff and I can manually apply the resources with kubectl.
For me, this seems to indicate that a newline is missing between each
{{ $name }}: {{ $value }}
But the service template has a newline between the template statement and the end statement. Do I need to do something different to indicate a newline?
I believe Helmfile isn't doing anything that affects the Helm templates themselves.
Could you share a minimal example git repo for reproduction?
The chart is here: https://github.com/slaterx/helm-monochart
The helmfile is here: https://github.com/slaterx/helmissue
And I've executed it with the command: helmfile --log-level=debug --environment=staging --namespace=some-namespace --file staging.yaml apply
Thanks!
Btw, which version of helm are you using?
Sorry, forgot to add that, my apologies. Here they are:
➜ helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.8"}
➜ helmfile --version
helmfile version v0.99.1
Thanks for the info!
Anyway, I was able to reproduce this with Helm 2 and Helm 3 alone:
$ cat values.yaml
name: default
deployment:
  enabled: false
pod:
  enabled: false
svc:
  enabled: true
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "secret-arn"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  - name: https
    port: 443
    protocol: TCP
    targetPort: 3000
helm v2.14.3:
$ helm tiller run -- helm install --name foo monochart-repo/monochart -f values.yaml
Installed Helm version v2.14.3
Installed Tiller version v2.14.3
Helm and Tiller are the same version!
Starting Tiller...
Tiller namespace: kube-system
Running: helm install --name foo monochart-repo/monochart -f values.yaml
Error: release foo failed: Service in version "v1" cannot be handled as a Service: v1.Service.ObjectMeta: v1.ObjectMeta.Annotations: ReadString: expects " or n, but found t, error found in #10 byte of ...|enabled":true,"servi|..., bigger context ...|o/aws-load-balancer-connection-draining-enabled":true,"service.beta.kubernetes.io/aws-load-balancer-|...
Stopping Tiller...
Error: plugin "tiller" exited with error
helm v3.0.3:
$ helm3 install foo monochart-repo/monochart -f values.yaml
Error: Service in version "v1" cannot be handled as a Service: v1.Service.ObjectMeta: v1.ObjectMeta.Annotations: ReadString: expects " or n, but found t, error found in #10 byte of ...|enabled":true,"servi|..., bigger context ...|o/aws-load-balancer-connection-draining-enabled":true,"service.beta.kubernetes.io/aws-load-balancer-|...
I'm not sure what the root cause is here, though.
Thanks for the help, @mumoshu! I'll raise an issue on the Helm repository.
@slaterx No need to raise one!
I found that true needs to be quoted: annotations is basically a map[string]string, while an unquoted true is parsed as a boolean value.
Try piping values through the quote function, like {{ $value | quote }}.
After this fix it's working fine on my machine.
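For reference, here is a sketch of the annotations block with that fix applied (assuming the same chart structure as in the template above; quote is a built-in Helm/Sprig template function):

```yaml
  annotations:
  {{- if .Values.svc.annotations }}{{- range $name, $value := .Values.svc.annotations }}{{- if not (empty $value) }}
    # `quote` wraps the rendered value in double quotes, so "true" stays a
    # string instead of being re-parsed as a boolean.
    {{ $name }}: {{ $value | quote }}
  {{- end }}{{- end }}{{- end }}
```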
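As a footnote, the failure mode can be reproduced outside Helm entirely. The decoder error (ReadString: expects " or n) comes from decoding the rendered manifest as JSON into a typed object whose annotations field is map[string]string. A small stdlib-only Python sketch (an analogy for the round trip, not Helm's actual code path) shows the string-vs-boolean distinction:

```python
import json

# The value is a *string* in values.yaml ("true", quoted there).
value = "true"

# {{ $value }} interpolates the raw characters t-r-u-e, so the re-parsed
# document sees an unquoted scalar and decodes it as a boolean:
unquoted = json.loads('{"draining-enabled": %s}' % value)

# {{ $value | quote }} re-wraps the characters in quotes, so the value
# survives the round trip as a string:
quoted = json.loads('{"draining-enabled": "%s"}' % value)

print(type(unquoted["draining-enabled"]).__name__)  # bool
print(type(quoted["draining-enabled"]).__name__)    # str
```

A strict decoder into map[string]string rejects the boolean outright, which is exactly what the Kubernetes object decoder reports above.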