What happened?
I set clusterDNS in cluster.yaml as below and applied it. --cluster-dns accepts a comma-separated list of DNS server IP addresses, so I assumed that setting multiple IP addresses in clusterDNS in cluster.yaml would be acceptable:
nodeGroups:
- name: nodegroup1
clusterDNS: 169.254.20.10,172.20.0.10
However, kubelet logged the following warning:
kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
[ec2-user@ip-xx-xx-xx-xx ~]$ cat /etc/eksctl/kubelet.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/eksctl/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
clusterDNS:
- 169.254.20.10,172.20.0.10
clusterDomain: cluster.local
featureGates:
RotateKubeletServerCertificate: true
kind: KubeletConfiguration
serverTLSBootstrap: true
/etc/resolv.conf in a pod on the node is:
[ec2-user@ip-xx-xx-xx-xx ~]$ kubectl exec podname cat /etc/resolv.conf
nameserver 10.0.0.2
search ap-northeast-1.compute.internal
options timeout:2 attempts:5
What you expected to happen?
I expected the clusterDNS setting to be rendered as clusterDNS: ["169.254.20.10","172.20.0.10"], so that /etc/eksctl/kubelet.yaml looks like this:
[ec2-user@ip-xx-xx-xx-xx ~]$ cat /etc/eksctl/kubelet.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/eksctl/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
clusterDNS: ["169.254.20.10","172.20.0.10"]
clusterDomain: cluster.local
featureGates:
RotateKubeletServerCertificate: true
kind: KubeletConfiguration
serverTLSBootstrap: true
Then I restart kubelet (systemctl restart kubelet) and deploy a pod, whose /etc/resolv.conf is as follows. This is what I expected:
[ec2-user@ip-xx-xx-xx-xx ~]$ kubectl exec podname cat /etc/resolv.conf
nameserver 169.254.20.10
nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ap-northeast-1.compute.internal
options ndots:5
How to reproduce it?
Run eksctl create cluster -f cluster.yaml (or eksctl create nodegroup -f cluster.yaml) with the following cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: dev
region: ap-northeast-1
version: "1.13"
vpc:
id: "vpc-xxxxx"
cidr: "10.0.0.0/16"
subnets:
private:
ap-northeast-1a:
id: "subnet-xxxxx"
cidr: "10.0.144.0/24"
ap-northeast-1c:
id: "subnet-xxxxx"
cidr: "10.0.145.0/24"
ap-northeast-1d:
id: "subnet-xxxxx"
cidr: "10.0.146.0/24"
nodeGroups:
- name: ng1
clusterDNS: 169.254.20.10,172.20.0.10
labels: {role: workers}
tags: {Stack: development, Site: ikyucom, Role: eks-node, k8s.io/cluster-autoscaler/dev: owned, k8s.io/cluster-autoscaler/enabled: "true"}
instanceType: c5.xlarge
desiredCapacity: 4
maxSize: 5
privateNetworking: true
securityGroups:
attachIDs: [sg-xxxxx]
withShared: true
ssh:
allow: true
publicKeyPath: xxxxx
Anything else we need to know?
Versions
$ eksctl version
[ℹ] version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.38"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-eks-c57ff8", GitCommit:"c57ff8e35590932c652433fab07988da79265d5b", GitTreeState:"clean", BuildDate:"2019-06-07T20:43:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
It may be possible to set it via kubeletExtraConfig; would you mind trying?
@errordeveloper I tried, but it did not work.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: dev
region: ap-northeast-1
version: "1.13"
vpc:
id: "vpc-xxxxxx"
cidr: "10.0.0.0/16"
subnets:
private:
ap-northeast-1a:
id: "subnet-xxxxx"
cidr: "10.0.144.0/24"
ap-northeast-1c:
id: "subnet-xxxxx"
cidr: "10.0.145.0/24"
ap-northeast-1d:
id: "subnet-xxxxx"
cidr: "10.0.146.0/24"
nodeGroups:
- name: ng1-ExtraConfig
labels: {role: workers}
tags: {Stack: development, Site: ikyucom, Role: eks-node, k8s.io/cluster-autoscaler/dev: owned, k8s.io/cluster-autoscaler/enabled: "true"}
instanceType: c5.xlarge
desiredCapacity: 1
maxSize: 5
privateNetworking: true
securityGroups:
attachIDs: [sg-xxxxx]
withShared: true
ssh:
allow: true
publicKeyPath: xxxxx
kubeletExtraConfig:
clusterDNS: ["169.254.20.10","172.20.0.10"]
[ec2-user@ip-xx-xx-xx-xx ~]$ cat /etc/eksctl/kubelet.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/eksctl/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
clusterDNS:
- 169.254.20.10
- 172.20.0.10
clusterDomain: cluster.local
featureGates:
RotateKubeletServerCertificate: true
kind: KubeletConfiguration
serverTLSBootstrap: true
/etc/resolv.conf in a pod on the node is:
[ec2-user@ip-xx-xx-xx-xx ~]$ docker exec -it k8s_kube-proxy_kube-proxy-xxxx cat /etc/resolv.conf
nameserver 10.0.0.2
search ap-northeast-1.compute.internal
options timeout:2 attempts:5
I've not used this mode myself, so I'm not sure how it's supposed to work. I believe it's a relatively recent feature. Did you check the kubelet logs to see if they say anything about this?
[ec2-user@ip-10-0-146-236 ~]$ journalctl -u kubelet
-- Logs begin at Fri 2019-07-05 10:51:19 UTC, end at Fri 2019-07-05 10:56:51 UTC. --
Jul 05 10:51:40 ip-10-0-146-236.ap-northeast-1.compute.internal systemd[1]: Starting Kubernetes Kubelet...
Jul 05 10:51:40 ip-10-0-146-236.ap-northeast-1.compute.internal systemd[1]: Started Kubernetes Kubelet.
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: Flag --allow-privileged has been deprecated, will be removed in a future version
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: Flag --allow-privileged has been deprecated, will be removed in a future version
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.559915 3728 server.go:407] Version: v1.13.7-eks-c57ff8
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.560111 3728 plugins.go:118] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a f
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.561786 3728 aws.go:1041] Building AWS cloudprovider
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.561857 3728 aws.go:1007] Zone not specified in configuration file; querying AWS metadata service
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.765183 3728 tags.go:77] AWS cloud filtering on ClusterID: dev
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.790975 3728 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.791651 3728 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.791675 3728 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: K
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.791880 3728 container_manager_linux.go:272] Creating device plugin manager: true
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.792112 3728 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.795128 3728 kubelet.go:306] Watching apiserver
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.795328 3728 kubelet.go:476] Invalid clusterDNS ip '"169.254.20.10,172.20.0.10"'
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.797488 3728 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.798275 3728 client.go:104] Start docker client with request timeout=2m0s
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.800024 3728 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.800043 3728 docker_service.go:236] Hairpin mode set to "hairpin-veth"
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.800169 3728 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.802225 3728 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.802265 3728 docker_service.go:251] Docker cri networking managed by cni
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.811563 3728 docker_service.go:256] Docker Info: &{ID:4WCK:V5ST:MBA6:E65C:G55O:FKG7:JPIR:ZR6I:6JVN:OLBD:JZI4:WTTE Containers:0 ContainersRunni
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.811719 3728 docker_service.go:269] Setting cgroupDriver to cgroupfs
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.830833 3728 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.06.1-ce, apiVersion: 1.38.0
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.832125 3728 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.833943 3728 server.go:999] Started kubelet
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.835210 3728 server.go:137] Starting to listen on 0.0.0.0:10250
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.835709 3728 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.835734 3728 status_manager.go:152] Starting to sync pod status with apiserver
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.835745 3728 kubelet.go:1829] Starting kubelet main sync loop.
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.835764 3728 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: p
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.836739 3728 server.go:333] Adding debug handlers to kubelet server.
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:41.837689 3728 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs inf
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.838059 3728 volume_manager.go:248] Starting Kubelet Volume Manager
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.839424 3728 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.840091 3728 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:41.841278 3728 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plu
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.929553 3728 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.929580 3728 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=c5.xlarge
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.929589 3728 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=ap-northeast-1d
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.929597 3728 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=ap-northeast-1
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.932274 3728 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.932291 3728 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.932301 3728 policy_none.go:42] [cpumanager] none policy: Start
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.942396 3728 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.942435 3728 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.942447 3728 kubelet_node_status.go:327] Adding node label from cloud provider: beta.kubernetes.io/instance-type=c5.xlarge
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.942455 3728 kubelet_node_status.go:338] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=ap-northeast-1d
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.942463 3728 kubelet_node_status.go:342] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=ap-northeast-1
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:41.943497 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:41.944658 3728 kubelet_node_status.go:72] Attempting to register node ip-10-0-146-236.ap-northeast-1.compute.internal
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:41.945563 3728 manager.go:537] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Jul 05 10:51:41 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:41.946176 3728 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "ip-10-0-146-236.ap-northea
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.043679 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.143871 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.244050 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.344247 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.444384 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.544595 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.645062 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:42.745621 3728 kubelet.go:2266] node "ip-10-0-146-236.ap-northeast-1.compute.internal" not found
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.809701 3728 kubelet_node_status.go:75] Successfully registered node ip-10-0-146-236.ap-northeast-1.compute.internal
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.845772 3728 reconciler.go:154] Reconciler: start to sync state
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.946288 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/c
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.946368 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/confi
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.946390 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xtxsv" (UniqueName: "kub
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.946406 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.946422 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "dockersock" (UniqueName: "kubernetes.io/h
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.947412 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "aws-node-token-rf4mc" (UniqueName: "kuber
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.947741 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.947990 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.948210 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "log-dir" (UniqueName: "kubernetes.io/host
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.948438 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io
Jul 05 10:51:42 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:42.948665 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/
Jul 05 10:51:43 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:43.932970 3728 pod_container_deletor.go:75] Container "b465f0308cdf1b0fc62fec09e7f8cf487c6b05100ade6e920785c2a509790206" not found in pod's cont
Jul 05 10:51:43 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:43.936297 3728 pod_container_deletor.go:75] Container "dc61faa6df59e25364b6c721aaea1c30ec6aa3595605098c2b92747d216cf980" not found in pod's cont
Jul 05 10:51:46 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:51:46.946821 3728 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 05 10:51:46 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: E0705 10:51:46.947301 3728 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plu
Jul 05 10:51:52 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:51:52.967366 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube2iam-token-lgnd9" (UniqueName: "kuber
Jul 05 10:54:00 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:54:00.814301 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "heapster-heapster-token-57h8g" (UniqueNam
Jul 05 10:54:01 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:54:01.631725 3728 pod_container_deletor.go:75] Container "be47e4fb7fb6384c2b197d4801ea0a6f0465a64b727c3f778a5c3a9afe2298dc" not found in pod's cont
Jul 05 10:54:15 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: I0705 10:54:15.933553 3728 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-2mm9q" (UniqueName: "kubern
Jul 05 10:54:16 ip-10-0-146-236.ap-northeast-1.compute.internal kubelet[3728]: W0705 10:54:16.721190 3728 pod_container_deletor.go:75] Container "68bd5b794d80319f1eb445635c74a514a7d52d9198a5a0e0d1243cf78a8eb700" not found in pod's cont
In the journalctl -u kubelet output above, line 15 shows kubelet.go:476] Invalid clusterDNS ip '"169.254.20.10,172.20.0.10"'. ip-10-0-146-236.ap-northeast-1.compute.internal is this node. Does my answer make sense?
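For context on that log line: kubelet validates each clusterDNS entry as a single IP address, so the whole comma-joined string fails validation as one (invalid) address. The following is just a rough Python illustration of why the string form is rejected (kubelet itself does this in Go, not with this code):

```python
import ipaddress

def is_valid_cluster_dns_entry(entry: str) -> bool:
    """Check whether a single clusterDNS entry parses as an IP address,
    mimicking the per-entry validation kubelet applies."""
    try:
        ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False

# Each address on its own is valid...
print(is_valid_cluster_dns_entry("169.254.20.10"))              # True
print(is_valid_cluster_dns_entry("172.20.0.10"))                # True
# ...but the comma-joined string is treated as one IP, which is invalid.
print(is_valid_cluster_dns_entry("169.254.20.10,172.20.0.10"))  # False
```

This is why clusterDNS must be a YAML list of addresses rather than one comma-separated string.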
Hi @s-tokutake, I just tried this nodegroup configuration:
nodeGroups:
- name: ng-1
instanceType: t3.medium
desiredCapacity: 1
kubeletExtraConfig:
clusterDNS: ["169.254.20.10","172.20.0.10"]
And got the following in the kubelet.yaml:
clusterDNS:
- 169.254.20.10
- 172.20.0.10
With this I get the expected dns:
$ kubectl exec busybox cat /etc/resolv.conf
nameserver 169.254.20.10
nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local us-west-2.compute.internal
options ndots:5
Did you restart your kubelet after changing the kubelet.yaml file?
Also, nodegroup-level configuration can only be applied by eksctl when you create a new nodegroup, as nodegroups are immutable.
On Fri, 5 Jul 2019, 12:15 pm Martina Iglesias, notifications@github.com
wrote:
Hi @s-tokutake (https://github.com/s-tokutake), I just tried the last bit from your kubelet.yaml generated from the kubeletExtraConfig:
clusterDNS:
- 169.254.20.10
- 172.20.0.10
And after restarting the kubelet this seems to work. Did you restart it?
@errordeveloper @martina-if thank you for your reply.
I retried and got what I expected👍
I recreated the nodegroup with kubeletExtraConfig and deployed a new pod to the node (kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml). Its /etc/resolv.conf is:
[ec2-user@ip-xx-xx-xx-xx ~]$ kubectl exec podname cat /etc/resolv.conf
nameserver 169.254.20.10
nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ap-northeast-1.compute.internal
options ndots:5
I have another question. In this nodegroup, the /etc/resolv.conf of the kube-proxy and aws-node pods is as below, which does not contain the clusterDNS IP addresses:
nameserver 10.0.0.2
search ap-northeast-1.compute.internal
options timeout:2 attempts:5
Do you think this is correct behavior?
@s-tokutake I think the clusterDNS is set in the kubelet, and maybe that's why it's not affecting the pods in the kube-system namespace?
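For what it's worth, kube-proxy and aws-node run as host-network DaemonSets, and a pod with hostNetwork: true inherits the node's /etc/resolv.conf unless its dnsPolicy is ClusterFirstWithHostNet. A sketch of the relevant pod-spec fields (field names are from the core Pod API; the exact DaemonSet manifests on your cluster may differ):

```yaml
# Excerpt of a host-network pod spec (e.g. kube-proxy / aws-node).
spec:
  hostNetwork: true        # pod shares the node's network namespace
  # With hostNetwork: true, a plain ClusterFirst policy behaves like
  # Default, so the pod gets the node's resolver (10.0.0.2 here).
  dnsPolicy: ClusterFirst
  # To make a host-network pod use the cluster DNS instead:
  # dnsPolicy: ClusterFirstWithHostNet
```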
@martina-if I understand. thanks.