Kubespray: cannot unmarshal bool into Go struct field KubeProxyConfiguration.nodePortAddresses of type []string

Created on 5 Jan 2019 · 12 comments · Source: kubernetes-sigs/kubespray

BUG REPORT:
Hey guys,

I found an issue during the first master initialization: kubeadm fails while decoding JSON with "cannot unmarshal bool into Go struct field KubeProxyConfiguration.nodePortAddresses of type []string".

Environment:
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Linux 3.10.0-957.1.3.el7.x86_64 x86_64
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"

Cloud provider or hardware configuration:
hw

Version of Ansible (ansible --version):
ansible 2.8.0.dev0
config file = /home/richardson/git/kubespray/ansible.cfg
configured module search path = [u'/home/richardson/git/kubespray/library']
ansible python module location = /usr/lib/python2.7/site-packages/ansible-2.8.0.dev0-py2.7.egg/ansible
executable location = /usr/bin/ansible
python version = 2.7.14 (default, Mar 14 2018, 16:45:33) [GCC 8.0.1 20180222 (Red Hat 8.0.1-0.16)]

Kubespray version (commit) (git rev-parse --short HEAD):
39d75030

Network plugin used:
calico

Command used to invoke ansible:
➜ ~ ansible-playbook -i ./inventory/tr-dev-spo-bra-cluster/hosts.ini --private-key="/home/richardson/.ssh/id_rsa" -e ansible_user=MY-USER --extra-vars "ansible_sudo_pass=MY-PASSWORD" cluster.yml --become -vvv
Output:
TASK [kubernetes/master : kubeadm | Initialize first master] *************************************************

Saturday 05 January 2019 17:28:52 -0200 (0:00:00.194) 0:35:59.425
fatal: [c982yraonesrc]: FAILED! => {
    "changed": true,
    "cmd": ["timeout", "-k", "600s", "600s", "/usr/local/bin/kubeadm", "init", "--config=/etc/kubernetes/kubeadm-config.yaml", "--ignore-preflight-errors=all"],
    "delta": "0:00:00.477046",
    "end": "2019-01-05 19:28:55.948931",
    "failed_when_result": true,
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2019-01-05 19:28:55.471885",
    "stderr_lines": [
        "W0105 19:28:55.947241 48782 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeproxy.config.k8s.io\", Version:\"v1alpha1\", Kind:\"KubeProxyConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal bool into Go struct field KubeProxyConfiguration.nodePortAddresses of type []string",
        "v1alpha1.KubeProxyConfiguration.NodePortAddresses: []string: decode slice: expect [ or n, but found f, error found in #10 byte of ...|dresses\":false,\"oomS|..., bigger context ...|7.0.0.1:10249\",\"mode\":\"ipvs\",\"nodePortAddresses\":false,\"oomScoreAdj\":-999,\"portRange\":null,\"resource|..."
    ],
    "stdout": "",
    "stdout_lines": []
}
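
Reconstructing from the "bigger context" snippet in that error, the kube-proxy fragment of the rendered config ends up roughly like this (YAML form of the JSON shown above; only the fields visible in the error are included, and the truncated metrics address is presumably 127.0.0.1:10249):

    metricsBindAddress: 127.0.0.1:10249
    mode: ipvs
    nodePortAddresses: false   # a bool, but KubeProxyConfiguration.nodePortAddresses is []string
    oomScoreAdj: -999
    portRange: null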

All 12 comments

A file that could be related: roles/kubernetes/master/templates/kubeadm-config.v1alpha1.yaml.j2
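
For illustration only, a template that interpolates the variable directly would render a stale boolean straight into the config. This is a sketch of the failure mode in the style of that file, not its actual contents:

    # hypothetical excerpt in the style of kubeadm-config.v1alpha1.yaml.j2
    kubeProxy:
      config:
        mode: {{ kube_proxy_mode }}
        nodePortAddresses: {{ kube_proxy_nodeport_addresses }}   # renders as 'false' when the var is a boolean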

Same error.

@alexonixon are you on commit 39d75030 ?

So I think the issue starts with this commit:
Fix kube-proxy configuration for kubeadm (#3958)
https://github.com/kubernetes-sigs/kubespray/commit/80379f6cab211af51313e6a46e319c8219cf53a1

Right now I'm on commit d58b338b (git checkout d58b338bd8e2b5da1a3d2d001b7b24bbadee5e87), testing whether the issue appears again.

@alexonixon Test finished: with commit d58b338 as the base it works, but we still need to figure out the root cause introduced by the later commit above.

@chadswen

@alexonixon After deploying the cluster from commit d58b338, I had an issue with the dashboard (CrashLoopBackOff), but solved it using this workaround: https://github.com/kubernetes/dashboard/issues/3472

Same here, but I haven't tried this workaround yet. I thought the error was related to a bad CNI setup.

/assign riverzhang

@richardsonlima @alexonixon This is likely because you're using an older inventory/group_vars/k8s-cluster/k8s-cluster.yml that sets kube_proxy_nodeport_addresses as a boolean.

kube_proxy_nodeport_addresses should be corrected to a string, or omitted from group_vars/other user vars if you do not need to change the defaults. Please reopen if this does not resolve your issue.
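
For example, in inventory/group_vars/k8s-cluster/k8s-cluster.yml (the CIDR below is only illustrative; omit the variable entirely if the defaults are fine):

    # Broken: a boolean cannot unmarshal into []string
    # kube_proxy_nodeport_addresses: false

    # Fixed: a string/list of CIDRs that NodePort services should bind to
    kube_proxy_nodeport_addresses: ["127.0.0.0/8"]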

@chadswen thank you! working now

What is the expectation for upgrades here? Shouldn't the data sources be validated/conformed if they change in a breaking way between versions? Upgrading from 1.12 (release-2.8) to 1.13 (master adf6a71) causes this to crop back up.
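
As a sketch of what such validation could look like (a hypothetical preflight task, not current Kubespray code), an Ansible assert could catch the stale boolean before kubeadm ever runs:

    - name: preflight | ensure kube_proxy_nodeport_addresses is not a boolean
      assert:
        that:
          - kube_proxy_nodeport_addresses is not sameas true
          - kube_proxy_nodeport_addresses is not sameas false
        fail_msg: >-
          kube_proxy_nodeport_addresses must be a string or list of CIDRs
          (or omitted), not a boolean; see #3958
      when: kube_proxy_nodeport_addresses is defined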
