I've been following the "Deploy on OpenStack" section using CentOS base images for the Kubernetes master and nodes, but I've run into the error below. Is there some config I'm missing, or somewhere I ought to look to find out why the file isn't created?
TASK [network_plugin/flannel : Flannel | Write flannel configuration] **********
ok: [kubernetes-node-2]
ok: [kubernetes-master]
ok: [kubernetes-node-1]
TASK [network_plugin/flannel : Flannel | Create flannel pod manifest] **********
ok: [kubernetes-node-2]
ok: [kubernetes-master]
ok: [kubernetes-node-1]
TASK [network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence] ***
fatal: [kubernetes-node-1]: FAILED! => {"changed": false, "elapsed": 600, "failed": true, "msg": "Timeout when waiting for file /run/flannel/subnet.env"}
fatal: [kubernetes-node-2]: FAILED! => {"changed": false, "elapsed": 600, "failed": true, "msg": "Timeout when waiting for file /run/flannel/subnet.env"}
fatal: [kubernetes-master]: FAILED! => {"changed": false, "elapsed": 600, "failed": true, "msg": "Timeout when waiting for file /run/flannel/subnet.env"}
You're not missing any config, it should work. cc @Smana
Have you tried to re-run the script?
Thanks for the quick reply, yeah I've tried re-running the script but hit the same error.
Just in case it helps, I'll explain a little more about what I've done. I've created 4 CentOS VMs in OpenStack:
- kubernetes-master
- kubernetes-node-1
- kubernetes-node-2
- a fourth called kargo, used to access all the machines above over the OpenStack private network

I'm low on floating IPs, so I only use one, which will eventually go on kubernetes-master so I can access my Kubernetes services via a router of some sort. Thought I'd mention that in case it has an impact, as other Ansible scripts I've seen seem to expect each machine to have a public IP address too.
For example, here's my inventory; the kargo VM has root SSH access to all other machines. All IPs below are on my OpenStack private network.
[kube-master]
kubernetes-master ansible_ssh_host=192.168.100.101
[etcd]
kubernetes-node-1 ansible_ssh_host=192.168.100.102
kubernetes-node-2 ansible_ssh_host=192.168.100.103
[kube-node]
kubernetes-node-1 ansible_ssh_host=192.168.100.102
kubernetes-node-2 ansible_ssh_host=192.168.100.103
[k8s-cluster:children]
kube-node
kube-master
Hi @rawlingsj, actually I didn't deploy a cluster on OpenStack yet.
This is a recent contrib from @TeutoNet.
CentOS + Flannel works fine on AWS, GCE, and bare metal.
You need to check the following things:
- systemctl status kubelet
- docker ps
- journalctl -ae -u kubelet
- docker images

For info, the subnet.env file is generated by a container.
Ah ok - yeah I can see the error after running journalctl -ae -u kubelet on the master
could not init cloud provider "openstack"
I did wonder if it was me, because originally I used..
cloud_provider: openstack
but I updated to
cloud_provider: 'openstack'
re-ran the playbook, and got the same error:
Apr 20 14:16:26 kubernetes-master systemd[1]: Started Kubernetes Kubelet Server.
Apr 20 14:16:26 kubernetes-master systemd[1]: Starting Kubernetes Kubelet Server...
Apr 20 14:16:27 kubernetes-master kubelet[23730]: E0420 14:16:27.393513 23730 server.go:270] Failed running kubelet: could not init cloud provider "openstack": Expected HTTP response code [200 203] when accessing [POST http://10.34.112.27:5000/v2.0/toke
Apr 20 14:16:27 kubernetes-master kubelet[23730]: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
Apr 20 14:16:27 kubernetes-master kubelet[23730]: could not init cloud provider "openstack": Expected HTTP response code [200 203] when accessing [POST http://10.34.112.27:5000/v2.0/tokens], but got 401 instead
Apr 20 14:16:27 kubernetes-master kubelet[23730]: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
Apr 20 14:16:27 kubernetes-master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Apr 20 14:16:27 kubernetes-master systemd[1]: Unit kubelet.service entered failed state.
Apr 20 14:16:27 kubernetes-master systemd[1]: kubelet.service failed.
Apr 20 14:16:27 kubernetes-master systemd[1]: kubelet.service holdoff time over, scheduling restart.
Apr 20 14:16:27 kubernetes-master systemd[1]: start request repeated too quickly for kubelet.service
Apr 20 14:16:27 kubernetes-master systemd[1]: Failed to start Kubernetes Kubelet Server.
Apr 20 14:16:27 kubernetes-master systemd[1]: Unit kubelet.service entered failed state.
Apr 20 14:16:27 kubernetes-master systemd[1]: kubelet.service failed.
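As a sketch, the failing request can be rebuilt by hand. The block below just constructs the Keystone v2.0 token body that the kubelet POSTs to `$OS_AUTH_URL/tokens` (all credentials here are placeholders), and the commented-out curl line would send it to reproduce the 401 directly:

```shell
# Build the Keystone v2.0 token request body (placeholder credentials;
# tenantId is whatever OS_TENANT_ID is set to in the environment).
OS_USERNAME="${OS_USERNAME:-admin}"
OS_TENANT_ID="${OS_TENANT_ID:-admin}"
OS_PASSWORD="${OS_PASSWORD:-secret}"
body=$(printf '{"auth": {"tenantId": "%s", "passwordCredentials": {"username": "%s", "password": "%s"}}}' \
    "$OS_TENANT_ID" "$OS_USERNAME" "$OS_PASSWORD")
echo "$body"
# To actually send it (needs network access to the Keystone endpoint):
#   curl -s -X POST "$OS_AUTH_URL/tokens" -H 'Content-Type: application/json' -d "$body"
```

If the credentials or tenant are wrong, the endpoint returns the same 401 "The request you have made requires authentication" body seen in the kubelet logs.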
I'm not familiar with OpenStack yet, but maybe you're facing this issue: http://stackoverflow.com/questions/32714756/kublet-doesnt-start-with-cloud-provider-openstack ?
Ah sorry I can see that it is actually a 401 Unauthorized.
Thanks @Smana, that post has a list of attributes that the file I'm sourcing doesn't have. I suspect it's down to different OpenStack providers. I'll try these and report back.
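(For anyone hitting this later: those attributes end up in the cloud config file the kubelet reads when started with `--cloud-provider=openstack`. A rough sketch of what that file looks like, with all values illustrative, and whether you need tenant-id or tenant-name depending on your provider:)

```ini
[Global]
auth-url=http://10.34.112.27:5000/v2.0
username=admin
password=****
tenant-id=abcdefg12345678
region=RegionOne
```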
Yes and please let us know if something's missing in our doc :)
I've still not ruled out this being an issue on my side, but to update, here are the OpenStack env vars I'm setting (these are clearly being picked up, as the OS_AUTH_URL appears in the logs above):
export OS_USERNAME=admin
export OS_PASSWORD=****
export OS_AUTH_URL=http://10.34.112.27:5000/v2.0
export OS_TENANT_ID=admin
export OS_REGION_NAME=RegionOne
Hi @TeutoNet, would you be able to share which env vars are sourced in your OpenStack environment? I wonder if they're different from the set I've got above.
When I first ran the playbook there was a validation error complaining that no tenant_id was set; I had tenant_name (which is what's used in my OpenStack env). I wonder if that's it? Maybe the validation should accept tenant_id || tenant_name. I'll see if I can test it here.
Ah it was totally me.. as you probably would have guessed :)
My OpenStack provider set OS_TENANT_NAME, not OS_TENANT_ID, so rather than the value 'admin' I had to look up the ID using openstack project list.
I guess if you were going to add more docs, then the list of env vars would probably be enough, e.g.:
export OS_USERNAME=mycooluser
export OS_PASSWORD=****
export OS_AUTH_URL=http://openstackapi:5000/v2.0
export OS_TENANT_ID=abcdefg12345678
export OS_REGION_NAME=RegionOne
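A quick sanity check along these lines before running the playbook would have caught my mistake. This is just a sketch (the variable names are the ones from this thread, and `check_os_env` is a made-up helper name):

```shell
# Fail fast if any OpenStack credential variable the playbook needs is unset.
check_os_env() {
  local missing=""
  local var
  for var in OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_TENANT_ID OS_REGION_NAME; do
    # ${!var} is bash indirect expansion: the value of the variable named by $var
    [ -z "${!var}" ] && missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing" >&2
    return 1
  fi
  echo "all OpenStack variables are set"
}
# Run `check_os_env` before ansible-playbook; it returns non-zero and
# names the unset variables on stderr.
```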
Anyways the ansible script just finished successfully!
Many thanks @Smana and @ant31
Great :)
let us know if you have any other issue/questions