Kops: It should be possible to have separate public/internal DNS zones

Created on 16 Feb 2017 · 17 comments · Source: kubernetes/kops

We're using --topology private, so we want most of our DNS names to be in a private hosted zone. We'd like masterPublicName to be an actual public name. Currently you can't have both, because kops operates on a single DNS zone.
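For context, here's a rough sketch of the relevant spec fields in our setup (cluster and zone names are placeholders); masterPublicName currently lands in the same hosted zone as everything else:

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.internal
spec:
  masterPublicName: api.example.com   # we'd like this one to resolve publicly
  dnsZone: example.internal           # the single zone kops operates on today
  topology:
    masters: private
    nodes: private
    dns:
      type: Private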

Labels: area/DNS, lifecycle/rotten

All 17 comments

I think this is "just" a bug fix; I don't know of any reason why we couldn't do "split" DNS. The biggest challenge is probably that we have a dnsZone field in the spec, which is likely the root of the "one zone" assumption.

I tried with and without it specified. I assume it's inferring the zone from the clusterDNSDomain currently? Or is it the cluster name?

I would be keen on this feature also.

@justinsb This is the split dns issue I was talking about during today's office hours.

If it's "just a bug fix" then that would be awesome - but we'll look at some of the alternatives in the meantime; I haven't yet looked through the code that touches this area.

/assign @justinsb

@justinsb I'm thinking of taking a stab at this. How hard should I try to keep backwards compat? Ideally we could specify:

dnsZone:
  public: <zone name or ID>
  private: <zone name or ID>

And I assume we'd want to pick a default for each one depending on topology? So for a private topology the default would be private: $clusterName, and vice versa for public.

Should it just convert existing dnsZone: keys to this scheme? Or should I use a new key like:

dns:
  privateZone: <>
  publicZone: <>

And populate the new structure with the old dnsZone: if it's specified?
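To make the second option concrete (the keys below are just this proposal, not an existing kops field; zone names are placeholders), with the legacy key feeding the new structure it might look like:

# legacy key, kept for backwards compat; for a private topology it would
# seed dns.privateZone, for a public topology dns.publicZone
dnsZone: example.com

# proposed structure
dns:
  publicZone: example.com             # e.g. for masterPublicName
  privateZone: internal.example.com   # everything else in a private topology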

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

/remove-lifecycle rotten

This is still needed; we are using a nasty hack to work around it.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
