Kubespray: kube-proxy is not created by Kubespray despite having configuration in kubeadm-config

Created on 21 Apr 2019 · 10 comments · Source: kubernetes-sigs/kubespray

Environment:

  • Cloud provider or hardware configuration: Virtual Machine
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Ubuntu 16.04
  • Version of Ansible (ansible --version): 2.7.10

Kubespray version (commit) (git rev-parse --short HEAD): 6f919e5

Network plugin used: flannel

Copy of your inventory file:
[all]
master ansible_host=161.X.X.X ip=161.X.X.X ansible_user=raman ansible_sudo=yes
worker ansible_host=161.X.X.X ip=161.X.X.X ansible_user=raman ansible_sudo=yes

[kube-master]
master

[etcd]
master

[kube-node]
worker

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]

Command used to invoke ansible:
ansible-playbook --flush-cache -i inventory/mycluster/inventory.ini cluster.yml --ask-pass --become --ask-become-pass

Output of ansible run:
TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] ****************************************
Sunday 21 April 2019 20:09:50 +0530 (0:00:00.278) 0:02:51.331 ***
fatal: [master]: FAILED! => {"changed": true, "cmd": "/opt/bin/kubectl --kubeconfig /etc/kubernetes/admin.conf get ds kube-proxy --namespace=kube-system -o jsonpath='{.spec.template.spec.nodeSelector.beta.kubernetes.io/os}'", "delta": "0:00:00.082602", "end": "2019-04-21 20:09:50.532693", "msg": "non-zero return code", "rc": 1, "start": "2019-04-21 20:09:50.450091", "stderr": "Error from server (NotFound): daemonsets.extensions \"kube-proxy\" not found", "stderr_lines": ["Error from server (NotFound): daemonsets.extensions \"kube-proxy\" not found"], "stdout": "", "stdout_lines": []}

Anything else do we need to know:
The kube-proxy DaemonSet is not present on the master node, and I have no clue why kubeadm did not create it despite the KubeProxyConfiguration being there.

Please help me here.
This is my kubeadm-config.yaml; it contains a KubeProxyConfiguration, but kube-proxy is still not created.
root@master:~/kubespray# cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 161.X.X.X
  bindPort: 6443
nodeRegistration:
  name: master
  taints:
  - effect: NoSchedule
    key: node.kubernetes.io/master
  criSocket: /var/run/dockershim.sock
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: cluster.local
etcd:
  external:
    endpoints:
    - https://161.X.X.X:2379
    caFile: /etc/ssl/etcd/ssl/ca.pem
    certFile: /etc/ssl/etcd/ssl/node-master.pem
    keyFile: /etc/ssl/etcd/ssl/node-master-key.pem
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.233.0.0/18
  podSubnet: 10.233.64.0/18
kubernetesVersion: v1.14.0
controlPlaneEndpoint: 161.X.X.X:6443
certificatesDir: /etc/kubernetes/ssl
imageRepository: gcr.io/google-containers
useHyperKubeImage: false
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
    bind-address: 0.0.0.0
    insecure-port: "0"
    apiserver-count: "1"
    endpoint-reconciler-type: lease
    service-node-port-range: 30000-32767
    kubelet-preferred-address-types: "InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP"
    storage-backend: etcd3
    runtime-config: admissionregistration.k8s.io/v1alpha1
    allow-privileged: "true"
    audit-log-path: "/var/log/audit/kube-apiserver-audit.log"
    audit-log-maxage: "30"
    audit-log-maxbackup: "1"
    audit-log-maxsize: "100"
    audit-policy-file: /etc/kubernetes/audit-policy/apiserver-audit-policy.yaml
  extraVolumes:
  - name: audit-policy
    hostPath: /etc/kubernetes/audit-policy
    mountPath: /etc/kubernetes/audit-policy
  - name: audit-logs
    hostPath: /var/log/kubernetes/audit
    mountPath: /var/log/audit
    readOnly: false
  - name: usr-share-ca-certificates
    hostPath: /usr/share/ca-certificates
    mountPath: /usr/share/ca-certificates
    readOnly: true
  certSANs:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 10.233.0.1
  - localhost
  - 127.0.0.1
  - master
  - 161.X.X.X
  timeoutForControlPlane: 5m0s
controllerManager:
  extraArgs:
    node-monitor-grace-period: 40s
    node-monitor-period: 5s
    pod-eviction-timeout: 5m0s
    node-cidr-mask-size: "24"
    bind-address: 0.0.0.0
    configure-cloud-routes: "false"
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
  extraVolumes:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes:
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig:
  qps:
clusterCIDR: 10.233.64.0/18
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: False
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: master
iptables:
  masqueradeAll: False
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: rr
  syncPeriod: 30s
metricsBindAddress: 127.0.0.1:10249
mode: ipvs
nodePortAddresses: []
oomScoreAdj: -999
portRange:
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
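For reference, kubeadm's addon phase normally creates both a kube-proxy ConfigMap and a kube-proxy DaemonSet in kube-system. A quick way to see what, if anything, was created on this cluster (a sketch, reusing the kubectl and kubeconfig paths from the error output above):

/opt/bin/kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get configmap kube-proxy
/opt/bin/kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get daemonset kube-proxy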

kind/bug

Most helpful comment

How did this get closed? It does not seem to have been fixed. I just ran into this tonight with kubespray and have yet to figure out what the problem actually is. Manually configuring kube-proxy seems like a nasty hack workaround at best.

All 10 comments

Kubespray didn't create the kube-proxy DaemonSet, so I created it manually in /etc/kubernetes.
root@master:/etc/kubernetes# cat kube-proxy-daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    component: kube-proxy
    k8s-app: kube-proxy
    kubernetes.io/cluster-service: "true"
    name: kube-proxy
    tier: node
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: kube-proxy
      k8s-app: kube-proxy
      kubernetes.io/cluster-service: "true"
      name: kube-proxy
      tier: node
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/affinity: '{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"beta.kubernetes.io/arch","operator":"In","values":["amd64"]}]}]}}}'
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated","value":"master","effect":"NoSchedule"}]'
      labels:
        component: kube-proxy
        k8s-app: kube-proxy
        kubernetes.io/cluster-service: "true"
        name: kube-proxy
        tier: node
    spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: gcr.io/google_containers/kube-proxy-amd64:v1.14.1
        imagePullPolicy: IfNotPresent
        command:
        - kube-proxy
        - --kubeconfig=/run/kubeconfig
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/run/dbus
          name: dbus
        - mountPath: /run/kubeconfig
          name: kubeconfig
      volumes:
      - hostPath:
          path: /etc/kubernetes/kubelet.conf
        name: kubeconfig
      - hostPath:
          path: /var/run/dbus
        name: dbus

root@master:/etc/kubernetes# cat kube-proxy-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gce:podsecuritypolicy:kube-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gce:podsecuritypolicy:privileged
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system

root@master:/etc/kubernetes# cat kube-proxy-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: system:kube-proxy
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io

root@master:/etc/kubernetes# cat kube-proxy-user-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:kube-proxy-user-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- kind: User
  name: system:node:master
  namespace: kube-system

Adding these files with these values worked for me, and the kube-proxy pods are now running as expected.
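The comment does not show how the files were applied; a minimal sketch, assuming the manifests sit in /etc/kubernetes on the master and kubectl is pointed at the admin kubeconfig that kubeadm writes there, would be:

cd /etc/kubernetes
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f kube-proxy-rbac.yaml
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f kube-proxy-binding.yaml
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f kube-proxy-user-clusterrolebinding.yaml
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f kube-proxy-daemonset.yaml
kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get pods -l k8s-app=kube-proxy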

@RamanPndy
I am getting the error below; could you please let me know which steps I should take and what changes to make?

TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] ****************************************
Tuesday 30 April 2019 08:27:34 +0000 (0:00:00.470) 0:05:55.572 **
fatal: [master-1]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl --kubeconfig /etc/kubernetes/admin.conf get ds kube-proxy --namespace=kube-system -o jsonpath='{.spec.template.spec.nodeSelector.beta.kubernetes.io/os}'", "delta": "0:00:00.123455", "end": "2019-04-30 08:27:34.348016", "msg": "non-zero return code", "rc": 1, "start": "2019-04-30 08:27:34.224561", "stderr": "Error from server (NotFound): daemonsets.extensions \"kube-proxy\" not found", "stderr_lines": ["Error from server (NotFound): daemonsets.extensions \"kube-proxy\" not found"], "stdout": "", "stdout_lines": []}

@RamanPndy Can you provide the files kube-proxy-user-clusterrolebinding.yaml kube-proxy-rbac.yaml kube-proxy-binding.yaml

Please see my comment above: https://github.com/kubernetes-sigs/kubespray/issues/4600#issuecomment-486134395

It has kube-proxy-user-clusterrolebinding.yaml, kube-proxy-daemonset.yaml and kube-proxy-binding.yaml. For kube-proxy-rbac.yaml, try the upstream manifest:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/kube-proxy/kube-proxy-rbac.yaml
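A sketch of one way to fetch and apply that upstream RBAC manifest on the master (assumptions: the raw.githubusercontent.com mirror of the linked file, and the admin kubeconfig path used elsewhere in this issue):

curl -fsSLo kube-proxy-rbac.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/kube-proxy/kube-proxy-rbac.yaml
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f kube-proxy-rbac.yaml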

@RamanPndy Why is kube-proxy installed on the master node but not on the worker node?
After the installation:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   56m   v1.14.1
[root@master ~]#
Do I still need to use kubeadm join to add the worker nodes to the cluster?

How did this get closed? It does not seem to have been fixed. I just ran into this tonight with kubespray and have yet to figure out what the problem actually is. Manually configuring kube-proxy seems like a nasty hack workaround at best.

(quoting @RamanPndy's earlier comment with the kube-proxy-daemonset.yaml, kube-proxy-binding.yaml, kube-proxy-rbac.yaml and kube-proxy-user-clusterrolebinding.yaml manifests)

@RamanPndy, how do we apply these files manually? Do we need to apply them on the master only, or on all (master + worker) nodes?

I did try applying the files manually:
kubectl apply -f kube-proxy-daemonset.yaml
but got this error:
error: unable to recognize "kube-proxy-daemonset.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
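For reference, the localhost:8080 error above is what kubectl prints when it has no kubeconfig and falls back to its insecure default endpoint; a likely fix (a sketch, assuming the admin kubeconfig kubeadm writes on the master) is to point kubectl at it explicitly:

kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f kube-proxy-daemonset.yaml
# or equivalently:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f kube-proxy-daemonset.yaml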

I recently faced this issue. I looked into why kubectl get ds kube-proxy was failing, and after turning on high-verbosity logging on the kubectl command I found that it talks to the kube-apiserver to fetch the DaemonSet. The failure usually means the kube-apiserver is not set up properly, which leads to the kube-proxy DaemonSet not being found.
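For reference, a sketch of that kind of verbose call (assuming the standard admin kubeconfig path); -v=8 makes kubectl print the HTTP requests it sends to the kube-apiserver:

kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get ds kube-proxy -v=8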

If a clean installation goes through, this issue should not occur. I just did a reset with reset.yml, then ran cluster.yml again, and the installation went through smoothly.
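A sketch of that reset-and-reinstall sequence, reusing the inventory path and flags from this issue (depending on the Kubespray version, reset.yml may prompt for confirmation or accept -e reset_confirmation=yes):

ansible-playbook -i inventory/mycluster/inventory.ini reset.yml --ask-pass --become --ask-become-pass
ansible-playbook --flush-cache -i inventory/mycluster/inventory.ini cluster.yml --ask-pass --become --ask-become-pass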

@sirsikar-lalit Thanks! I followed the same approach and the issue was resolved. :)

I think the clean way to get around this is to reset kubeadm so that it reinstalls kube-proxy.
SSH into your server and run kubeadm reset as root.
This is better than resetting the whole cluster and losing all your data and pods...
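A sketch of that suggestion against the master from this issue's inventory; kubeadm reset removes the static pod manifests, certificates and kubeconfigs under /etc/kubernetes on that node, so cluster.yml (or kubeadm init) has to be run again afterwards. On kubeadm 1.14+ there is also a narrower alternative, not mentioned in this thread, that only re-deploys the kube-proxy addon against a still-running control plane:

ssh raman@161.X.X.X                 # the master host from the inventory above
sudo kubeadm reset                  # tears down kubeadm state on this node only
# ...then re-run cluster.yml so kubeadm init recreates the control plane, including the kube-proxy addon.

# Narrower alternative (no reset): re-deploy just the kube-proxy addon from the existing config.
sudo kubeadm init phase addon kube-proxy --config /etc/kubernetes/kubeadm-config.yaml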
