Kubeadm: how to run kubeadm on a network interface other than the default one?

Created on 16 Jan 2017 · 6 comments · Source: kubernetes/kubeadm

And no, --api-advertise-addresses is only a small part of the answer.

A month ago it was enough to route the Kubernetes service CIDR subnet to the non-default interface.

On Ubuntu: ip route add 10.96.0.0/12 dev other_interface

Now this no longer works; the kube-dns service becomes unreachable, probably because of the routing.

I am using a tinc VPN to connect various nodes over the internet, and this setup was working a month ago.

Does someone have an idea how I can make it work again? Thanks.

kind/support priority/backlog

All 6 comments

Ok, I fixed this one. It was tested by creating a kubeadm cluster between different cloud providers (Vultr, Scaleway, DigitalOcean, Hetzner, AWS, OVH) over a tinc network. All hosts run Ubuntu 16.04, with Docker installed from apt-get install docker.io.

  1. Clean up /etc/resolv.conf on all of those servers, install dnsmasq, and create new nameserver entries: one for localhost, the other for the docker interface (this is probably a hack). Also lock /etc/resolv.conf so it cannot be rewritten again.

  2. Add the Google DNS servers to dnsmasq:

rm /etc/resolv.conf
echo "nameserver 127.0.0.1
nameserver 172.17.0.1" > /etc/resolv.conf

chattr +i /etc/resolv.conf

echo "server=8.8.8.8
server=8.8.4.4" > /etc/dnsmasq.conf

/etc/init.d/dnsmasq restart

Then, on the master, with tinc running and after installing the kubeadm packages, I initialized the cluster with --api-advertise-addresses pointing to the tinc address:

kubeadm init --api-advertise-addresses=10.187.216.232 --pod-network-cidr 10.32.0.0/12 --service-cidr 10.96.0.0/12

WARNING: make sure that hostname -i returns the tinc address before running kubeadm.
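One way to check and, if needed, pin this is sketched below. This is a hypothetical example, not something from the thread: the address is the example tinc IP from the init command, and the /etc/hosts workaround is an assumption.

```shell
# Hypothetical sketch: verify that `hostname -i` resolves to the tinc address.
# 10.187.216.232 is the example tinc IP used in the `kubeadm init` command above.
TINC_IP=10.187.216.232
if [ "$(hostname -i)" != "$TINC_IP" ]; then
    # Map the hostname to the tinc address in /etc/hosts (assumed workaround).
    echo "$TINC_IP $(hostname)" >> /etc/hosts
fi
hostname -i   # should now print the tinc address
```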

Repeat the process on the nodes; additionally, I added a route for the service CIDR via the tinc interface:

ip route add 10.96.0.0/12 dev tzk

Then I ran the kubeadm join command. So far everything is perfect: the DNS service is working, and Weave Net too, across all the nodes on different providers.
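A few sanity checks can confirm the state described here. This is a hedged sketch that assumes kubectl is configured on the master; the pod name dnstest is made up for the example.

```shell
# Every node joined over the VPN should report Ready, showing its tinc IP:
kubectl get nodes -o wide
# kube-dns and the weave-net pods should all be Running:
kubectl -n kube-system get pods
# Quick in-cluster DNS test using a throwaway busybox pod:
kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup kubernetes.default
```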

I hope this can help other people.

I am automating all of those steps here: https://github.com/NebTex/tzk. The tinc VPN network can be launched in just minutes; it uses Consul with ACL tokens to coordinate and share public keys, and Caddy (a proxy with automatic Let's Encrypt) provides some security for the Consul communications over the internet.

Guys, please reopen this.

The biggest issues are kube-proxy and kube-dns.

kube-proxy just can't do its job in this setup.

I discovered yesterday that I can make all the docker containers routable over the VPN just by using static routes in tinc, so I added the ability for the daemon to pull this info from Consul and create those routes automatically.
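A sketch of what that can look like in tinc's own configuration. The interface name tzk matches the route command earlier in the thread, but the subnet values and host file path are assumptions; this also assumes each node's docker daemon is given a distinct bridge subnet (e.g. via --bip) so the subnets don't collide.

```shell
# Hypothetical tinc host file entry, e.g. /etc/tinc/tzk/hosts/node1:
#   Subnet = 10.187.216.232/32   # node1's tinc address (example value)
#   Subnet = 172.17.1.0/24       # node1's docker bridge subnet (example value)
#
# tinc then forwards traffic for 172.17.1.0/24 to node1 because of the
# Subnet declaration. Each peer also needs a kernel route sending that
# docker subnet into the tunnel interface:
ip route add 172.17.1.0/24 dev tzk
```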

So next I will try to launch Kubernetes without kubeadm in this setup, and probably without the need for a CNI network. Meanwhile I will leave this issue open and will post any updates soon.

@criloz Does this happen in v1.6 as well?

@luxas No.

I ended up fixing my issues by:

  1. running kube-dns as a DaemonSet on each node
  2. disabling the CNI plugin with a systemd drop-in config file
  3. running the kube-controller-manager with systemd instead of as a docker container (the default kubeadm installation), in order to use Ceph

That is what I did. With it I could successfully run kubeadm 1.5 with Ceph and add/delete nodes from any cloud provider on demand, with all the communication going through the VPN. I have not yet had the opportunity to test it with 1.6.
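Step 2 above can be sketched as a kubelet drop-in. This is a hypothetical example: the kubeadm packaging of that era set the CNI flags through a KUBELET_NETWORK_ARGS variable in its unit file, so overriding that variable to empty disables the plugin; check your installed 10-kubeadm.conf before relying on the exact variable name.

```shell
# Hypothetical drop-in that blanks out the kubelet's CNI arguments:
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/20-no-cni.conf <<'EOF'
[Service]
Environment="KUBELET_NETWORK_ARGS="
EOF
systemctl daemon-reload
systemctl restart kubelet
```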

@criloz I'm not exactly sure what kubeadm can do here, because it seems that your networking requirements are very specific. What do you expect kubeadm to do differently?

@jamiehannaford Not much, really. I created this issue when I was starting with Kubernetes and did that using kubeadm, without really understanding how Kubernetes works internally. I was really frustrated at that time, until I learned Kubernetes in more depth and decided to hack the kubeadm installation to fit my needs.

Something that could probably help:

  • allow the user to easily choose whether or not to use a CNI networking plugin
  • allow the user to pick systemd or docker for running the core services

Closing the issue.
