Version:
v1.17.1-rc1+k3s1
```shell
./k3s.sh server --no-deploy=coredns
```
Describe the bug
Adding nodes while the coredns manifest is disabled prevents coredns from being deployed later. This appears to be because the NodeHosts ConfigMap key is only created or updated when nodes are added. If the coredns manifest is added after all nodes have joined, the NodeHosts key is never created and coredns fails to start.
https://github.com/rancher/k3s/blob/master/pkg/node/controller.go#L40
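A loose shell sketch of the behavior the linked controller implements (the node names and IPs below are made up for illustration, not real cluster data): the NodeHosts content is only (re)written when a node event fires, so a coredns manifest applied after the last node has joined never sees the key.

```shell
# Stand-in for the node controller: NodeHosts is appended to only when a
# node-add event fires. Data here is hypothetical.
update_nodehosts() {
  # append an /etc/hosts-style "IP hostname" line for the node that joined
  printf '%s %s\n' "$1" "$2" >> NodeHosts
}

: > NodeHosts                        # fresh cluster, no entries yet
update_nodehosts 10.0.1.20 node-a    # fires when node-a joins
update_nodehosts 10.0.1.21 node-b    # fires when node-b joins
cat NodeHosts
# If the coredns manifest is only applied after these events, no further
# node joins occur and the key is never written into the ConfigMap.
```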
To Reproduce
Expected behavior
coredns receives configuration and starts as usual
Actual behavior
```
E0120 18:01:29.201824 32534 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/5eaea202-9251-42c4-b527-03a78086831b-config-volume\" (\"5eaea202-9251-42c4-b527-03a78086831b\")" failed. No retries permitted until 2020-01-20 18:03:31.201771173 -0800 PST m=+337.323654190 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5eaea202-9251-42c4-b527-03a78086831b-config-volume\") pod \"coredns-d798c9dd-dtjnm\" (UID: \"5eaea202-9251-42c4-b527-03a78086831b\") : configmap references non-existent config key: NodeHosts"
```
Additional context
I was doing this to customize my coredns configuration, since the current k3s server overwrites files on startup.
Won't https://github.com/rancher/k3s/pull/1345 make this worse? Now if I ask to skip coredns, the NodeHosts ConfigMap key will no longer be dynamically updated as I add nodes. Is there any way I can keep the existing NodeHosts behavior while still being able to customize the CoreDNS Corefile without it getting overwritten on startup?
The proper thing to do is probably restart k3s without the --no-deploy=coredns flag.
It sounds like the workflow for modifying the coredns manifest (or manifests in general) is a different issue.
I'm hitting the same problem. How can I solve it?
@daniel198609 if you're using the stock k3s coredns yaml, you need to set the NodeHosts configmap key manually. It should contain an /etc/hosts style list of IPs and hostnames for all k3s nodes:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  NodeHosts: |
    10.0.1.20 seago.khaus
    10.0.1.21 maersk.khaus
    10.0.1.22 sealand.khaus
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . 10.0.1.1:53
        cache 30
        loop
        reload
        loadbalance
    }
```
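If editing the full manifest by hand is awkward, the missing key can also be added with a JSON merge patch. A sketch, reusing the example node data from the manifest above; the `kubectl patch` line is commented out because it assumes a reachable cluster.

```shell
# Build a JSON merge patch that sets only the NodeHosts key, from
# "IP hostname" pairs (example data, substitute your own nodes).
hosts=""
while read -r ip name; do
  hosts="${hosts}${ip} ${name}\n"   # \n stays literal: it is the JSON escape
done <<'EOF'
10.0.1.20 seago.khaus
10.0.1.21 maersk.khaus
10.0.1.22 sealand.khaus
EOF
patch="{\"data\":{\"NodeHosts\":\"${hosts}\"}}"
printf '%s\n' "$patch"
# Against a live cluster, something like:
# kubectl -n kube-system patch configmap coredns --type merge -p "$patch"
```

A merge patch leaves the Corefile key in the ConfigMap untouched, which matters here since the whole point is a customized Corefile.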
I think I am hitting the issue as well.
I disabled the coredns deployment using --no-deploy=coredns, copied the stock coredns.yml file, substituted the %{CLUSTER_DOMAIN}% variables, and the important change for me, switched to a DaemonSet.
My containers won't start; the pods get stuck at:
MountVolume.SetUp failed for volume "config-volume" : configmap references non-existent config key: NodeHosts
If I re-enable the default coredns deployment, I end up having a deployment and my daemonset working, which is somewhat of a workaround.
@m4rcu5 see the comment directly above yours. You have to include a NodeHosts entry in the configmap, which doesn't exist in the on-disk manifest since it's created and updated on-demand by k3s when nodes are added to or removed from the cluster.
@brandond I was hoping for a more integrated way, so we do not have to change the configmap by hand when adding or removing hosts.
Are there any plans to allow for customizations (maybe Kustomize) the manifests deployed by K3s?
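Until something more integrated exists, one stopgap is to regenerate the "IP hostname" pairs from the API server rather than typing them. A sketch: the jsonpath query is commented out because it assumes a live cluster, so equivalent output is simulated below for illustration.

```shell
# Against a real cluster, harvest InternalIP + name for every node:
# kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{" "}{.metadata.name}{"\n"}{end}'
# Stand-in for that output (hypothetical nodes):
nodes="$(printf '%s\n' '10.0.1.20 seago.khaus' '10.0.1.21 maersk.khaus')"
printf '%s\n' "$nodes"
```

The resulting pairs can then be patched back into the `NodeHosts` key of the coredns ConfigMap whenever the node set changes.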