BUG REPORT
The API server fails with the following:
I1004 18:20:37.003327 1 server.go:114] Version: v1.8.0
I1004 18:20:37.003445 1 cloudprovider.go:59] --external-hostname was not specified. Trying to get it from the cloud provider.
F1004 18:20:37.003478 1 plugins.go:115] Couldn't open cloud provider configuration /etc/kubernetes/cloud-config: &os.PathError{Op:"open", Path:"/etc/kubernetes/cloud-config", Err:0x2}
The cloud config is present on the host (Err:0x2 is ENOENT, "no such file or directory"), but it seems the container can't read it.
SELinux is off:
root@kube-master-1:~# getenforce
Disabled
kubeadm version (output of kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
kubectl version: Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
uname -a: Linux kube-master-1 4.11.0-1011-azure #11-Ubuntu SMP Tue Sep 19 19:03:54 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
The entire process was started with:
kubeadm init --config=/etc/kubernetes/kubeadm.conf
but it gets stuck on
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
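While it sits there, the kubelet journal can be tailed with something like:
# follow the kubelet unit's logs on the master node
journalctl -u kubelet -f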
The logs show errors like this:
Oct 04 18:32:12 kube-master-1 kubelet[2699]: E1004 18:32:12.359954 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.22.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube-master-1&resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:13 kube-master-1 kubelet[2699]: E1004 18:32:13.281200 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.22.0.4:6443/api/v1/services?resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:13 kube-master-1 kubelet[2699]: E1004 18:32:13.284960 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.22.0.4:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkube-master-1&resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:13 kube-master-1 kubelet[2699]: E1004 18:32:13.360584 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.22.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube-master-1&resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:14 kube-master-1 kubelet[2699]: E1004 18:32:14.282423 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.22.0.4:6443/api/v1/services?resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:14 kube-master-1 kubelet[2699]: E1004 18:32:14.285848 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.22.0.4:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkube-master-1&resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:14 kube-master-1 kubelet[2699]: E1004 18:32:14.361350 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.22.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube-master-1&resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:15 kube-master-1 kubelet[2699]: E1004 18:32:15.283177 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.22.0.4:6443/api/v1/services?resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:15 kube-master-1 kubelet[2699]: E1004 18:32:15.286260 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.22.0.4:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkube-master-1&resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
Oct 04 18:32:15 kube-master-1 kubelet[2699]: E1004 18:32:15.362515 2699 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.22.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube-master-1&resourceVersion=0: dial tcp 10.22.0.4:6443: getsockopt: connection refused
That led me to find that the apiserver container is down:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e1fb71af18f9 gcr.io/google_containers/kube-apiserver-amd64 "kube-apiserver --..." About a minute ago Exited (255) About a minute ago k8s_kube-apiserver_kube-apiserver-kube-master-1_kube-system_7813492e02b0dddb3f97bd3293a84964_7
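For anyone reproducing this, the logs of the exited container (the output quoted at the top of this report) can be pulled with something like the following; the container ID comes from the docker ps output above:
# list all containers, including exited ones, then dump the apiserver's logs
docker ps -a | grep kube-apiserver
docker logs e1fb71af18f9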
I think something like this could fix that:
https://github.com/tadas-subonis/kubernetes/commit/631dd81c7f4bbaa5ac1e0e13542b43ddeb3564d7
@tadas-subonis That would work but is probably overkill, and it would expose some stuff to the containers that we'd rather not expose. We should probably use a directory under /etc/kubernetes/ instead.
The problem is that we don't have explicit support for cloud providers, so we don't know about the cloud-config flag (which I assume is specified in your kubeadm.conf file).
Options to make this more generic/safe:
For cloud-config, make sure that it is under /etc/kubernetes/cloud-config/ and map that into the control plane components. Both of these approaches have their disadvantages, as we are plumbing through more tweaks and knobs: more to support and test. That being said, I think that option 2 above might be the way to go.
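For illustration, a sketch of what option 2 would add to the generated static pod manifests, assuming the file is dropped under a dedicated /etc/kubernetes/cloud-config/ directory (names and paths here are hypothetical, not what kubeadm generates today):
volumeMounts:
- mountPath: /etc/kubernetes/cloud-config
  name: cloud-config
  readOnly: true
volumes:
- hostPath:
    path: /etc/kubernetes/cloud-config
  name: cloud-config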
Is this a regression from 1.7?
I haven't tried this with 1.7
1.7 seems fine. The code for volumes in manifests was re-done for 1.8, I think, starting July 20th.
Ah, I see: this is because we no longer mount the full /etc/kubernetes/ hostPath dir; only specific files are mounted, for security. I'd be happy to receive a bugfix for v1.8 that mounts /etc/kubernetes/cloud-config using hostPath if the file exists and .cloudProvider is set. Bear in mind that we won't expand much on the functionality here (it's in alpha); we're moving towards out-of-tree cloud providers running on top of Kubernetes instead.
@luxas What about other optional mounts, like for basic auth? You mentioned there was a fix, but I haven't found it yet.
@luxas 1.8.1 is released. Is this fix included?
@andrewrynhard is working on a fix for this I think.
Yeah, tried this with kubeadm v1.8.0; it does not work. @tadas-subonis Did you have a workaround for this in the meantime? I downgraded to v1.7.8 and that seems like it "may" have worked?
@srflaxu40 The workaround currently is to put the cloud-config file inside /etc/kubernetes/pki, as this directory is currently mounted into both the apiserver and the controller manager.
And you need to specify in kubeadm.conf:
apiServerExtraArgs:
cloud-config: /etc/kubernetes/pki/cloud-config
controllerManagerExtraArgs:
cloud-config: /etc/kubernetes/pki/cloud-config
P.S. Note that the /etc/kubernetes/pki directory is cleared each time you do kubeadm reset.
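Putting the whole workaround together, a minimal kubeadm.conf for v1.8 might look like the sketch below. The openstack value is only an example; also make sure /etc/kubernetes/cloud-config does NOT exist on the host, or kubeadm will append a second --cloud-config flag pointing there:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
# the provider name is an example; use aws, azure, vsphere, etc. as appropriate
cloudProvider: openstack
# point both components at the copy living under the mounted pki directory
apiServerExtraArgs:
  cloud-config: /etc/kubernetes/pki/cloud-config
controllerManagerExtraArgs:
  cloud-config: /etc/kubernetes/pki/cloud-config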
I moved to other means of deployment (kubespray).
@tadas-subonis Kubespray also supports kubeadm deployment (experimental). https://github.com/kubernetes/kubernetes/pull/49840 will help address this soon
@tadas-subonis does kubespray support attaching a single kube-minion/slave to the kube master?
I believe so. You add it to the node group and then apply just that node.
@jbeda, @luxas, @andrewrynhard: can https://github.com/kubernetes/kubernetes/pull/49840 be cherry-picked for 1.8.3? I did not see it in https://groups.google.com/forum/#!topic/kubernetes-dev-announce/EBIEGBxXhX4
@tamalsaha That PR will be backported to v1.8.x I think, yes
Running into this same issue trying to get the vSphere plugin working on a cluster built with kubeadm. Temporarily moving my vSphere config file into the pki directory let the cluster start again after I set --cloud-provider and --cloud-config.
Also running into this issue.
@rayterrill Can you elaborate on how you solved this using the /etc/kubernetes/pki hack mentioned by @alexpekurovsky?
We've tested that suggestion, and kubeadm adds one cloud-provider parameter and two cloud-config parameters: one for the default location and one for the location specified in kubeadm.conf. kubeadm prepends the parameters from kubeadm.conf, and the last occurrence of cloud-config wins, which is the hardcoded (wrong) default value /etc/kubernetes/cloud-config.
Here are the generated manifests with the aforementioned hack:
root@k8-head-ubuntu16:~# grep cloud /etc/kubernetes/manifests/kube-apiserver.yaml
- --cloud-config=/etc/kubernetes/pki/cloud-config
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud-config
root@k8-head-ubuntu16:~# grep cloud /etc/kubernetes/manifests/kube-controller-manager.yaml
- --cloud-config=/etc/kubernetes/pki/cloud-config
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud-config
And here is the output, as expected:
F1212 13:52:52.840889 1 plugins.go:115] Couldn't open cloud provider configuration /etc/kubernetes/cloud-config: &os.PathError{Op:"open", Path:"/etc/kubernetes/cloud-config", Err:0x2}
I copied my vSphere config to /etc/kubernetes/pki (it was initially at /etc/kubernetes), then reloaded my configuration with sudo systemctl daemon-reload; sudo systemctl restart kubelet.service. Everything worked after that.
Here's what I have in my manifests:
grep cloud /etc/kubernetes/manifests/kube-apiserver.yaml
- --cloud-provider=vsphere
- --cloud-config=/etc/kubernetes/pki/vsphere.conf
grep cloud /etc/kubernetes/manifests/kube-controller-manager.yaml
- --cloud-provider=vsphere
- --cloud-config=/etc/kubernetes/pki/vsphere.conf
I'm not exactly sure what you're asking; let me know if I misunderstood. I'm happy to do whatever I can to help track this down. I'm still at n00b status in Kube and learning, so apologies if I missed something.
So, was anyone able to work around this issue?
Or is it impossible to run your own k8s cluster on GCE?
I am having the same issue, even when I place the cloud-config in /etc/kubernetes/pki.
Can anyone also specify what the content of the cloud-config file should be? Maybe I am doing that wrong.
I haven't found any solution except downgrading k8s to 1.8.7.
My API server is now able to read my cloud-config, but I am not sure about its content. This is what I currently have:
[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
I am getting the following error in the kubelet logs:
Unable to register node "ip-x-x-x-x" with API server: nodes "ip-x-x-x-x" is forbidden: node "master" cannot modify node "ip-x-x-x-x"
Anyone please help!
My cloud config was to get vSphere integration working.
In my case it was /etc/kubernetes/pki/vsphere.conf, and the file contained the following:
[Global]
vm-name = "kuben1"
For anyone who has a duplicated --cloud-config in the apiserver: check whether the file /etc/kubernetes/cloud-config exists on the host OS. If it does, delete it. There is a kind of discovery in kubeadm (or wherever it is): if that file exists, you automatically get --cloud-config=/etc/kubernetes/cloud-config.
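As a quick sketch, the check and cleanup on the host would be:
# kubeadm auto-adds --cloud-config=/etc/kubernetes/cloud-config if this file exists;
# remove it if you are pointing the flag somewhere else (e.g. under pki/)
if [ -f /etc/kubernetes/cloud-config ]; then
  sudo rm /etc/kubernetes/cloud-config
fi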
What's the correct way to do this for kubeadm 1.8.7?
Is the solution to use /etc/kubernetes/cloud-config/vsphere.conf, or still the workaround /etc/kubernetes/pki/vsphere.conf? And what about the other flag, --cloud-provider=vsphere?
@dkirrane For 1.8 you have to use the /etc/kubernetes/pki hack and pass the extra arguments to the apiserver and controller manager. You also need to check that the original location, /etc/kubernetes/cloud-config, does not exist.
Starting from 1.9 you can add extra volumes to the apiserver and controller manager to mount /etc/kubernetes/cloud-config and use the default location; see the sketch below.
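As a sketch of the 1.9 approach (field names as I understand the v1alpha1 MasterConfiguration, so verify against your kubeadm version; the openstack value is just an example):
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: openstack
# mount the host's cloud-config path into both control plane components
apiServerExtraVolumes:
- name: cloud-config
  hostPath: /etc/kubernetes/cloud-config
  mountPath: /etc/kubernetes/cloud-config
controllerManagerExtraVolumes:
- name: cloud-config
  hostPath: /etc/kubernetes/cloud-config
  mountPath: /etc/kubernetes/cloud-config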
@alexpekurovsky just moved from 1.8 to 1.9.5. Is this the correct setup?
Add the following to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=vsphere --cloud-config=/etc/kubernetes/cloud-config/vsphere.conf"
Update /etc/kubernetes/manifests/kube-apiserver.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml with:
- --cloud-config=/etc/kubernetes/cloud-config/vsphere.conf
- --cloud-provider=vsphere
and add a new volumeMount (plus the matching volume; see the note after the snippet):
volumeMounts:
- mountPath: /etc/kubernetes/cloud-config
name: cloud-config
readOnly: true
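Note that a volumeMount needs a matching entry under volumes in the same manifest; assuming a plain hostPath directory, something like:
volumes:
- name: cloud-config
  hostPath:
    path: /etc/kubernetes/cloud-config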