K3s: More extensibility to CoreDNS configmap

Created on 9 May 2019 · 17 Comments · Source: k3s-io/k3s

Is your feature request related to a problem? Please describe.
We want to point requests for a specific domain (one that doesn't line up with the current cluster domain scheme) at the ingress, in order to route requests from a web head pod to another pod on the same (single) node. For instance:

(webhead) web.dev.be.lan has an env var pointing to (other pod on same node) search.dev.be.lan.

We currently modify the rendered/written manifest at

/var/lib/rancher/k3s/server/manifests/coredns.yaml

To resemble:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name regex [a-zA-Z.]+\.dev0\.be\.lan traefik.kube-system.svc.cluster.local  # <-- added
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

The problem is, on k3s server start, this file gets overwritten:
https://github.com/rancher/k3s/blob/master/pkg/server/server.go#L133

Describe the solution you'd like
Ideally, there would be a way to augment the config map in a clean way so that when the file is updated/overwritten on server start, the changes persist.

Describe alternatives you've considered
Another option I considered was copying the CoreDNS manifest into our repo, adding the line, and specifying --no-deploy coredns. But I hadn't realized there are template variables in the file (%{CLUSTER_DOMAIN}% and %{CLUSTER_DNS}%), so I'm reluctant to go that route, since cluster DNS is very likely to not be consistent between environments/nodes.
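Vendoring the manifest would mean rendering those template variables yourself before deploying. A minimal sketch of doing that by hand; the template content and substituted values below are assumptions for illustration, not the real k3s manifest:

```shell
# Render the %{...}% template variables k3s uses, so a vendored copy of the
# CoreDNS manifest could be deployed with --no-deploy coredns instead.
cat > coredns-template.yaml <<'EOF'
data:
  Corefile: |
    .:53 {
        kubernetes %{CLUSTER_DOMAIN}% in-addr.arpa ip6.arpa
    }
---
spec:
  clusterIP: %{CLUSTER_DNS}%
EOF

CLUSTER_DOMAIN="cluster.local"  # assumed per-environment value
CLUSTER_DNS="10.43.0.10"        # assumed cluster DNS service IP

# Substitute both variables and write the rendered manifest.
sed -e "s/%{CLUSTER_DOMAIN}%/${CLUSTER_DOMAIN}/g" \
    -e "s/%{CLUSTER_DNS}%/${CLUSTER_DNS}/g" \
    coredns-template.yaml > coredns.yaml
```

This sidesteps the inconsistency concern only if the per-environment values are supplied at render time (e.g. from the node's own config).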

I'm not sure if there's another route I'm missing to either overwrite the config or reference it in another spot to survive restarts, but I'm definitely open to testing any solutions and experimenting a bit. Thank you for all the hard work on this project, it's great having a lightweight solution like k3s!!

Labels: kind/enhancement

Most helpful comment

Turns out there's a (slightly hacky) way to have custom CoreDNS configs: the manifests in /var/lib/rancher/k3s/server/manifests are applied in order, sorted by filename. So, to override the default CoreDNS config, you can simply create a new file, e.g. named d_coredns-config.yaml, and override the coredns ConfigMap there.

Question for Rancher Labs: can we rely on this behaviour, and what is your vision of the optimal way to do this?

All 17 comments

Maybe it would be possible to allow some usage of https://coredns.io/plugins/import/ with a CLI arg?
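For reference, the import plugin pulls extra snippets into a Corefile from a file glob. A hypothetical Corefile using it might look like the following (the drop-in path is an assumption, not anything k3s provides today):

```
.:53 {
    errors
    health
    import /etc/coredns/custom/*.override   # hypothetical drop-in directory
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    proxy . /etc/resolv.conf
    cache 30
}
```

Custom rules (such as the rewrite above) could then live in separate files that survive the packaged manifest being rewritten.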

Or maybe the system could allow defining a custom name for the CoreDNS ConfigMap? This way the default configmap could stay in the system and if custom configs are needed, the admin could just create a new ConfigMap and point the configuration to load the Corefile from there?


I was going to say, that might not leave us in much of a better spot, but since it's _just_ the CoreDNS ConfigMap, I suppose that could work. At least for the time being.
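Concretely, the workaround quoted above amounts to dropping a later-sorted manifest next to the rendered one. A sketch, with the filename and rewrite rule taken from this thread (the ordering behaviour is observed, not documented):

```yaml
# /var/lib/rancher/k3s/server/manifests/d_coredns-config.yaml
# Sorts after coredns.yaml, so it is applied later and wins.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name regex [a-zA-Z.]+\.dev0\.be\.lan traefik.kube-system.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```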

We could probably use the import plugin to achieve this. I'm doing something similar with hosts on k3s, where the current coredns ConfigMap looks like the following:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          reload 1s
          fallthrough
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  NodeHosts: |
    10.135.135.100 k3s

But in general, the manifest handling could probably be improved; that would depend on a properly named patch.

The main issue with import is that it doesn't really support graceful reloading, so it would require restarting CoreDNS or updating the Corefile.

Related: coredns/coredns/issues/2633

@jait I don't believe it's reliable behavior. I've got an Ansible task that exports the config from the k8s resource via kubectl and writes it to override_coredns.yaml, then deletes the coredns ConfigMap and recreates it. It seems to work right after it runs, but while the file survives restarts, the functionality doesn't. I might be missing something, but it doesn't seem to work reliably.

@erikwilson have you made any progress on your PR here? https://github.com/erikwilson/coredns/pull/1

Is there any news on this? The workaround mentioned above seems to work (for me) right now, but apparently it's a rather unstable situation.

I opened a PR with coredns, please give it a thumbs up if it looks good: https://github.com/coredns/coredns/pull/3068, might help to move it along.

@erikwilson looks like your PR was merged, what do we need to do to get that work leveraged here? I'm willing to help with whatever I can.

Another note on this. Even if you override the configmap, when you join a node to the cluster it overwrites the configmap with a node list. Open to any suggestions here.

Does anyone have a better solution?

We are having the same issue here. Does anyone know what causes the coredns manifest file to be overwritten on restart?

All packaged components are re-deployed whenever a server node is restarted. If you want to replace coredns or any other component with your own, you can --disable it.
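In config-file terms, the --disable approach might look like this (assumes a k3s release with config-file support; older releases used the --no-deploy flag on the command line instead):

```yaml
# /etc/rancher/k3s/config.yaml -- skip deploying the packaged CoreDNS,
# so your own CoreDNS manifest can be applied without being overwritten.
disable:
  - coredns
```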
