Kubespray: Check for dnsmasq port fails

Created on 11 May 2017 · 10 Comments · Source: kubernetes-sigs/kubespray

BUG REPORT
Using Kargo playbooks on CoreOS machines on OpenStack.
Machines are in a private network, with floating IPs assigned.

This Ansible task is failing:

TASK [dnsmasq : Check for dnsmasq port (pulling image and running container)]
skipping: [k8s-worker-02]
skipping: [k8s-master-01]
skipping: [k8s-worker-03]
skipping: [k8s-master-02]
skipping: [k8s-master-03]
fatal: [k8s-worker-01]: FAILED! => {"changed": false, "elapsed": 181, "failed": true, "msg": "Timeout when waiting for 10.233.0.2:53"}
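
The timed-out check can be reproduced by hand from the failing node. A minimal sketch in Python of what the `wait_for` task is effectively doing (the `port_open` helper is hypothetical, not part of Kargo):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Roughly what Ansible's wait_for does: attempt a plain TCP connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused and timeouts
        return False

# 10.233.0.2:53 is the cluster DNS VIP from the error above; on the
# failing node this stays False until dnsmasq actually answers there.
print(port_open("10.233.0.2", 53, timeout=1.0))
```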

Environment:

  • Cloud provider or hardware configuration:
    OpenStack
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Container Linux by CoreOS stable (1353.7.0)

  • Version of Ansible (ansible --version):
    ansible 2.3.0.0

Kargo version (commit) (git rev-parse --short HEAD):
8eb60f5

Network plugin used:
flannel

Copy of your inventory file:
[kube-master]
k8s-master-01 ansible_ssh_host=x.x.x.37
k8s-master-02 ansible_ssh_host=x.x.x.38
k8s-master-03 ansible_ssh_host=x.x.x.36

[etcd]
k8s-master-01 ansible_ssh_host=x.x.x.37
k8s-master-02 ansible_ssh_host=x.x.x.38
k8s-master-03 ansible_ssh_host=x.x.x.36

[kube-node]
k8s-worker-01 ansible_ssh_host=x.x.x.35
k8s-worker-02 ansible_ssh_host=x.x.x.39
k8s-worker-03 ansible_ssh_host=x.x.x.34

[k8s-cluster:children]
kube-node
kube-master

Command used to invoke ansible:
ansible-playbook -u core -b kargo/cluster.yml

Anything else we need to know:
I deployed the CoreOS machines on OpenStack using Terraform, and used Kargo for the Ansible playbooks.

All 10 comments

Could be because I did not include "access_ip" in the inventory.
Running again now with updated inventory, will update with results.

Not related to access_ip var.
docs/vars.md:* dns_server - Cluster IP for dnsmasq (default is 10.233.0.2)

{{dns_server}} is used in roles/dnsmasq
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
kube_service_addresses: 10.233.0.0/18
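
So the DNS VIP is derived from the service CIDR. The same computation can be sketched with Python's stdlib `ipaddress` module (mirroring what the Jinja `ipaddr` filters above do):

```python
import ipaddress

# Mirrors the Jinja expression:
# kube_service_addresses | ipaddr('net') | ipaddr(2) | ipaddr('address')
kube_service_addresses = "10.233.0.0/18"
net = ipaddress.ip_network(kube_service_addresses)
dns_server = str(net[2])  # address at index 2 within the network
print(dns_server)  # 10.233.0.2
```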

Possibly the same issue as #1268.

I hit the same problem. Any update on resolving this issue?

Have the same issue on OpenStack VMs (CoreOS and Ubuntu 16.04).
Redeployed with flannel instead of calico and all is working.

You should all see this "resolved", in that it shouldn't happen if you deploy from the current master branch. We've switched from the dnsmasq-based deployment to simply deploying kubedns as the default.

Works with dns_mode set to kubedns.

Just deployed on CoreOS and it still does not work for me.
It looks like I have a different DNS-related problem (not with the deployment itself):
after an app is deployed, its containers have an intermittent DNS problem resolving external hosts.

I will spend more time debugging it later.

I also experienced the same issue; DNS resolution does not seem to be working properly. dns_mode was set to kubedns by default. I am trying other combinations and will update with the results.

dnsmasq_kubedns + calico = didn't work
dnsmasq_kubedns + flannel = didn't work
kubedns + calico = didn't work
kubedns + flannel = WORKS :)
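
For anyone wanting to reproduce the working combination, these are the relevant variables (the file path and the usual Kargo group_vars layout are an assumption; adjust to your inventory):

```yaml
# inventory/group_vars/k8s-cluster.yml (location may vary by Kargo version)
kube_network_plugin: flannel
dns_mode: kubedns
```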
