Enhancements: Support configurable pod resolv.conf

Created on 28 Oct 2017  ·  46 comments  ·  Source: kubernetes/enhancements

Feature Description

  • One-line feature description (can be used as a release note): Support configurable pod resolv.conf.
  • Primary contact (assignee): @mrhohn
  • Responsible SIGs: sig-network
  • Design proposal link (community repo): https://github.com/kubernetes/community/pull/1276
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @bowei
  • Approver (likely from SIG/area to which feature belongs): @thockin
  • Feature target (which target equals to which milestone):

    • Alpha release target (1.9)

    • Beta release target (1.10)

    • Stable release target (1.14)

kind/feature sig/network stage/stable tracked/no

Most helpful comment

@lledru Thanks for checking in. There are no open K/K PRs that need to merge. (I do need to send a PR to add testing plans to the KEP, but that is not K/K.)

All 46 comments

/sig network

/assign

@MrHohn the feature is still labeled as alpha, but your plans for 1.10 were beta. What is the actual status of the feature for 1.10?

@idvoretskyi Thanks for checking in. This feature is targeting beta for 1.10.

/stage beta

Seems like I don't have the label privilege though. @idvoretskyi Would be great to get your help :)

@MrHohn not sure if we have everything automated here :)

Added, thanks!

I'd like to understand whether this API is meant to help achieve identical DNS resolution on all kube nodes and in all pods started with dnsPolicy: ClusterFirst.

Currently I run into the following limitation in Kubernetes 1.8 (it might be specific to my config):

  • kubelet is run with: --resolv-conf=/etc/resolv.conf --cluster-dns=10.233.0.3 [...]
  • the kube node's /etc/resolv.conf has the "kube-dns" service IP as its first entry ("nameserver 10.233.0.3"); the second entry is 8.8.8.8
  • kube-dns pods are launched with "dnsPolicy: Default" (the default from kubespray) and inherit the node's /etc/resolv.conf

    • As a result, the dnsmasq container in the kube-dns pod uses 10.233.0.3 as its default resolver, which results in intermittent resolution failures

Regarding this issue, can I "replace", not "append", nameservers for a pod, or is there a way to "exclude" certain nameservers from being used in a pod's /etc/resolv.conf?

Regarding this issue, can I "replace", not "append", nameservers for a pod, or is there a way to "exclude" certain nameservers from being used in a pod's /etc/resolv.conf?

@paravz "replace" sounds like an interesting use case. The closest choice for now would be using dnsPolicy: None, and specify your own resolver options in dnsPolicy field (https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-dns-policy). Would that work for you?

Or is "replace just a portion of the options but keep the remains" what you want?

I want to exclude; that way I would only carry what I don't want in the service podspec (kube-dns in this case), versus carrying the whole cluster DNS config in the podspec.

Thanks for mentioning dnsPolicy: None - this might also be used.

PS: i have also commented a possibly related bug here: https://github.com/kubernetes/kubernetes/issues/30073#issuecomment-361410912

@paravz What do you think about changing the semantics for nameservers from "append" to "replace" (as it doesn't really make sense to append nameservers)? That way you could still inherit searches and options from the node while replacing the nameservers.

I'm not sure there are strong enough use cases behind the "exclude" model (which it seems can be achieved with "replace" as well). Besides excluding a specific nameserver, do you have other use cases regarding options and search paths?

@MrHohn my only "exclude" scenario is that a pod with "dnsPolicy: Default" might still end up with cluster-dns as its first nameserver (via the host's /etc/resolv.conf). This "exclude" scenario can probably be solved in kubelet by filtering out cluster-dns explicitly when dnsPolicy != ClusterFirst (or when dnsPolicy: Default).

"replace" instead of "append" seems to make more sense, but i don't have enough scenarios to argue it one way or another. "replace" might introduce more unpredictably broken configurations, ie when podspec would not include cluster-dns.

my only "exclude" scenario is that a pod with "dnsPolicy: Default" might still end up with cluster-dns as it's first nameserver (via host's /etc/resolv.conf)

@paravz In fact, it seems wrong that host's /etc/resolv.conf contains "kube-dns" service ip. Could you explain the reasoning behind that?


To add more context, kube-dns pods are configured to use "dnsPolicy: Default" so that they inherit the resolv.conf from the node and thus pick up the upstream nameservers for external DNS resolution.

A typical DNS query from a pod flows like this:

pod with "dnsPolicy: ClusterFirst" or "dnsPolicy: ClusterFirstWithHostNet"
    | (via "kube-dns" service ip)
    V
kube-dns pod
    | (when not resolvable locally)
    V
upstream nameserver

Having "kube-dns" service ip in host's /etc/resolv.conf may break the kube-dns pod -> upstream path.

@paravz In fact, it seems wrong that host's /etc/resolv.conf contains "kube-dns" service ip. Could you explain the reasoning behind that?

@MrHohn I wanted the kube hosts (the master, really) to be able to resolve kube resources, just like pods with "dnsPolicy: ClusterFirst".
I enabled this via the kubespray option host_resolvconf.

Having "kube-dns" service ip in host's /etc/resolv.conf may break the kube-dns pod -> upstream path.

Certainly, one way to avoid this is to exclude the "--cluster-dns=" address from kube-dns's /etc/resolv.conf.

I think your DNS use case will be covered by dnsPolicy: None.

BTW, having multiple nameserver entries will not yield portable results across *libc variants. For example, musl (which is used by Alpine Linux) will send requests to all nameservers and use the first response it receives.
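
As a sketch of why that matters (reusing the placeholder addresses from earlier in this thread):

    # /etc/resolv.conf with multiple nameservers -- behavior differs by libc
    # glibc: tries 10.233.0.3 first and only falls through on timeout or error
    # musl (Alpine): queries both in parallel and uses the first answer, so names
    #   that only the cluster DNS knows about may intermittently fail to resolve
    nameserver 10.233.0.3
    nameserver 8.8.8.8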

@bowei I cannot test dnsPolicy: None for kube-dns since I don't have Kubernetes 1.9, but this would require adding nameservers manually to the kube-dns podspec.
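
For example, a hypothetical fragment of the kube-dns pod template (relevant fields only, not the full upstream manifest; the upstream address is a placeholder):

    dnsPolicy: "None"
    dnsConfig:
      nameservers:
      - 8.8.8.8      # upstream resolver(s) only; the cluster DNS service IP is
                     # deliberately omitted so kube-dns's own upstream queries
                     # cannot loop back to itself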

@MrHohn doc updates for 1.10, please? Also please update the feature tracking spreadsheet

@Bradamant3 Sorry I missed that, on it now.

@MrHohn
Any plans for this in 1.11?

If so, can you please ensure the feature is up-to-date with the appropriate:

  • Description
  • Milestone
  • Assignee(s)
  • Labels:

    • stage/{alpha,beta,stable}

    • sig/*

    • kind/feature

cc @idvoretskyi

@justaugustus Thanks for checking in. There is no plan for this feature in 1.11.

Thanks for the update!

This feature currently has no milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.

If so, please ensure that this issue is up-to-date with ALL of the following information:

  • One-line feature description (can be used as a release note):
  • Primary contact (assignee):
  • Responsible SIGs:
  • Design proposal link (community repo):
  • Link to e2e and/or unit tests:
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred:
  • Approver (likely from SIG/area to which feature belongs):
  • Feature target (which target equals to which milestone):

    • Alpha release target (x.y)

    • Beta release target (x.y)

    • Stable release target (x.y)

Set the following:

  • Description
  • Assignee(s)
  • Labels:

    • stage/{alpha,beta,stable}

    • sig/*

    • kind/feature

Once this feature is appropriately updated, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.


Please note that Features Freeze is tomorrow, July 31st, after which any incomplete Feature issues will require an Exception request to be accepted into the milestone.

In addition, please be aware of the following relevant deadlines:

  • Docs deadline (open placeholder PRs): 8/21
  • Test case freeze: 8/28

Please make sure all PRs for features have relevant release notes included as well.

Happy shipping!

P.S. This was sent via automation

Maybe this will be a good option: https://github.com/kubernetes/kubernetes/issues/59031#issuecomment-422517587

Hi @MrHohn
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.13. This release is targeted to be more ‘stable’ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:
Docs (open placeholder PRs): 11/8
Code Slush: 11/9
Code Freeze Begins: 11/15
Docs Complete and Reviewed: 11/27

Please take a moment to update the milestones on your original post for future tracking and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet

Thanks!

@kacole2 Thanks for checking in. There is currently no plan for graduating this in 1.13.

@MrHohn Are we waiting for anything to be able to mark this GA? Do we expect any changes here?

@thockin Nope, with this being supported for Windows (https://github.com/kubernetes/kubernetes/pull/67435) I think we are ready to mark it GA in the next milestone.

Ref https://github.com/kubernetes/kubernetes/issues/72651, I will see if we can get this to GA in 1.14.

@MrHohn Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th, and I want to remind you that all enhancements must have a KEP. I don't see a KEP for this issue; can you please share a link to the KEP? Thanks.

@claurence There was no KEP process when this feature was first proposed, hence I don't have a link for that. The original design proposal is at https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/pod-resolv-conf.md.

Is it mandatory to have a KEP for every enhancement (even old ones)? Do we need to convert the design proposal into KEP format retrospectively?

@MrHohn yes, we would like a KEP for every issue; please convert the old design to a KEP.

Additionally are there open PRs for this issue for 1.14?

@claurence Sure, will make a KEP for this. There is one open PR at the moment: https://github.com/kubernetes/kubernetes/pull/72832.

@claurence Thanks for handling the milestone and labels. To confirm, we need to merge the KEP PR (https://github.com/kubernetes/enhancements/pull/700) before the Enhancements freeze, right?

@MrHohn we'd like the KEP in an implementable state by enhancements freeze.

@MrHohn Hello - enhancements lead here - looking at the KEP, it is still marked as "provisional". What more is needed for it to move to "implementable"?

Additionally are there any open PRs that should be tracked for this issue for the 1.14 release?

@claurence Thanks for checking in. The KEP should be implementable; I will update that.

One more open PR is https://github.com/kubernetes/website/pull/12514 for the doc change.

@MrHohn looking over the KEP I don't see any testing plans - can someone help submit a PR adding testing plans for this enhancement? This information is helpful for knowing the readiness of this feature for the release and is specifically useful for CI Signal.

If we don't have testing plans, this enhancement's inclusion in the 1.14 release will be at risk.

@claurence We implemented comprehensive e2e tests a while back, and that task was tracked by https://github.com/kubernetes/kubernetes/issues/56521. I can retrospectively add testing plans to the KEP if that is preferable.

@MrHohn if you can add them to the KEP that would be great!

Hello @MrHohn, 1.14 enhancement shadow here. Code Freeze is March 7th, and all PRs related to your issue must be merged by then to make the 1.14 release. What open K/K PRs do you still have that need to merge? Thanks

@lledru Thanks for checking in. There are no open K/K PRs that need to merge. (I do need to send a PR to add testing plans to the KEP, but that is not K/K.)

Hi @MrHohn, when do you think you could merge PR #867? I just checked with my enhancements teammates; this information is helpful for knowing the readiness of this feature for the release and is specifically useful for CI Signal.
If we don't have testing plans, this enhancement's inclusion in the 1.14 release will be at risk. Best regards

@lledru We will make sure to get that in before 1.14 code freeze (hopefully next Monday-ish).

/remove-stage beta
/stage stable

Closing this as it is GA with 1.14.
