Kubeadm: Panic when joining a node with config and control plane

Created on 17 Dec 2018 · 8 comments · Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version): kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:36:44Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Note: I also observed this after building from source on master branch.

Environment:

  • Kubernetes version (use kubectl version): v1.13.x and v1.14.x so far
  • Cloud provider or hardware configuration: local and AWS
  • OS (e.g. from /etc/os-release): Both local (Ubuntu 18.04) and cloud (Fedora 28)
  • Kernel (e.g. uname -a): 4.x
  • Others:

What happened?

kubeadm join --config=/etc/kubernetes/kubeadm-config-join.yaml --experimental-control-plane
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x139c962]

goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewJoin(0x7ffee89ab708, 0x28, 0xc000374280, 0xc0000a9b30, 0x0, 0x0, 0xc00046b960)
    /workspace/anago-v1.13.1-beta.0.57+eec55b9ba98609/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:320 +0x442
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewValidJoin(0xc00055bd00, 0xc000374280, 0x7ffee89ab708, 0x28, 0x0, 0x0, 0x0, 0xc00026fc70, 0x692be5, 0xc000565600)
    /workspace/anago-v1.13.1-beta.0.57+eec55b9ba98609/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:231 +0xbb
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdJoin.func1(0xc000550280, 0xc00046b8e0, 0x0, 0x2)
    /workspace/anago-v1.13.1-beta.0.57+eec55b9ba98609/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:204 +0x13c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000550280, 0xc00046b860, 0x2, 0x2, 0xc000550280, 0xc00046b860)
    /workspace/anago-v1.13.1-beta.0.57+eec55b9ba98609/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:760 +0x2cc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00036f180, 0xc0002a0f00, 0xc00036fb80, 0xc0004d7c10)
    /workspace/anago-v1.13.1-beta.0.57+eec55b9ba98609/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2fd
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc00036f180, 0xc00000c010, 0x18c9c80)
    /workspace/anago-v1.13.1-beta.0.57+eec55b9ba98609/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:794 +0x2b
k8s.io/kubernetes/cmd/kubeadm/app.Run(0xc00007a180, 0x18b)
    /workspace/anago-v1.13.1-beta.0.57+eec55b9ba98609/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:48 +0x202
main.main()
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:29 +0x33

What you expected to happen?

kubeadm should continue with the rest of its join logic instead of panicking.

How to reproduce it (as minimally and precisely as possible)?

kubeadm join --config=<path to config file> --experimental-control-plane

Anything else we need to know?

This is happening at cmd/kubeadm/app/cmd/join.go:320 (the top frame in the trace above), and I fixed it locally by changing the nil-check to if internalCfg.ControlPlane != nil. If this is the proper fix, I can submit the PR for it. I don't believe this code was present prior to v1.13.x, since it's related to the new control-plane join functionality.
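For illustration, here is a minimal, self-contained sketch of the failure and of the proposed fix. The types and surrounding code are simplified stand-ins, not the actual join.go source:

    package main

    import "fmt"

    // Simplified stand-ins for the kubeadm v1beta1 types involved; just enough
    // structure to show the shape of the crash.
    type APIEndpoint struct{ AdvertiseAddress string }
    type JoinControlPlane struct{ LocalAPIEndpoint APIEndpoint }
    type JoinConfiguration struct{ ControlPlane *JoinControlPlane }

    func main() {
        // --experimental-control-plane makes the flag-derived defaults non-nil...
        defaultcfg := &JoinConfiguration{ControlPlane: &JoinControlPlane{}}

        // ...but --config rebuilds the effective configuration from the YAML file,
        // and with no controlPlane key there, ControlPlane stays nil.
        internalCfg := &JoinConfiguration{ControlPlane: nil}

        // Buggy pattern: the guard checks defaultcfg, but the body dereferences
        // internalCfg, so it panics with a nil pointer dereference.
        if defaultcfg.ControlPlane != nil {
            // fmt.Println(internalCfg.ControlPlane.LocalAPIEndpoint.AdvertiseAddress)
        }

        // Proposed fix: guard on the value that is actually dereferenced.
        if internalCfg.ControlPlane != nil {
            fmt.Println(internalCfg.ControlPlane.LocalAPIEndpoint.AdvertiseAddress)
        }
    }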

kind/bug priority/critical-urgent

All 8 comments

@anitgandhi thank you for filing this bug report!
Your solution to the problem is the correct one. Can you file a PR for this?
We will also need to cherry-pick this to 1.13 as it is a nasty crash.

The actual mechanics of the crash are as follows: --experimental-control-plane sets defaultcfg.ControlPlane to a non-nil value, but this gets overridden in internalCfg when the config file is loaded, so if there is no controlPlane key in the JoinConfiguration, internalCfg.ControlPlane is nil and the crash occurs.

/kind bug
/priority critical-urgent

/cc

Sure, PR incoming within the next few hours. This is my first contribution so I'm just going to do a quick read of the contributing guide before I send it in :+1:

@anitgandhi thanks!
just github-mention me and @rosti so we can get this PR merged quick.

On a slightly tangential note, @rosti, based on what you described above: if I'm going to be using a JoinConfiguration file for new master nodes, should I include a ControlPlane section, or should --experimental-control-plane --config=<my config path> just work (after the PR)?

Based on this (https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1#JoinControlPlane) it only has LocalAPIEndpoint which I don't care to set explicitly.

@anitgandhi

it only has LocalAPIEndpoint which I don't care to set explicitly.

in that case the ControlPlaneEndpoint on the root control plane node config should be sufficient for your needs.

@anitgandhi Having controlPlane: {} in your JoinConfiguration should be sufficient.
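For reference, the reason the empty mapping is enough: an omitted controlPlane key leaves the corresponding pointer field nil, while controlPlane: {} decodes to a non-nil, zero-valued section, which is what the (fixed) nil-check looks for. A small stand-alone illustration, using encoding/json in place of whatever decoder kubeadm actually uses, with a simplified type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Simplified stand-in for the v1beta1 JoinConfiguration type.
    type JoinConfiguration struct {
        ControlPlane *struct{} `json:"controlPlane"`
    }

    func main() {
        var withKey, withoutKey JoinConfiguration

        // Roughly equivalent to `controlPlane: {}` in the YAML file.
        json.Unmarshal([]byte(`{"controlPlane": {}}`), &withKey)
        // Equivalent to omitting the key entirely.
        json.Unmarshal([]byte(`{}`), &withoutKey)

        fmt.Println(withKey.ControlPlane != nil)    // true  -> treated as a control-plane join
        fmt.Println(withoutKey.ControlPlane != nil) // false -> ControlPlane stays nil
    }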
