Hi all!
We're running a cluster in AWS, and within that cluster we have an internal Docker registry that serves most of the containers we use. We reference containers via the cluster-internal DNS (e.g., docker.ns.svc.cluster.local/foo/bar:baz). This almost works out of the box, except that we have to manually modify the DHCP options set for the VPC that kops created so that it adds the kube-dns service's cluster IP address as a DNS server, in addition to AmazonProvidedDNS. (Otherwise, the Docker daemon on the hosts can't resolve the registry.) And of course, every time we run kops update cluster, it wants to change the DHCP options set back to the one it created when we spun up the cluster. That means we have to be careful when making changes to the cluster spec, or newly spawned nodes end up unable to resolve the registry (since they wouldn't know about kube-dns as a DNS server for .svc.cluster.local addresses).
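For reference, our manual fix looks roughly like this (a sketch; the kube-dns cluster IP and the dopt-/vpc- IDs below are placeholders for your own values):

```sh
# Look up the kube-dns service's cluster IP
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'

# Create a DHCP options set that keeps AmazonProvidedDNS and adds kube-dns
aws ec2 create-dhcp-options \
  --dhcp-configurations 'Key=domain-name-servers,Values=AmazonProvidedDNS,100.64.0.10'

# Associate it with the VPC that kops created
aws ec2 associate-dhcp-options \
  --dhcp-options-id dopt-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0
```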
It'd be great if there were a way to either specify extraDNSServers somewhere in the cluster spec, or simply to disable the VPC DHCP options check when running kops update cluster.
Cheers!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
This issue is maddening for use cases where you need a custom DHCP options set.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
In my use case, I need to add an additional DNS server from LDAP to allow resolution of private IP ranges in a peered VPC. The reliable and fast way to do that is via the VPC's DHCP options.
Good news for everyone affected by this issue, there's actually a workaround!
After your cluster is created, copy the ID of the VPC (i.e., vpc-xxxxxxxx) into the networkID field of your ClusterSpec. You'll have to add it, as it won't be there by default. Once that's there, kops will treat the VPC it created as a shared VPC and no longer attempt to make the DHCP options match what it wants, so you can point your VPC at a DHCP options set with whatever setup you want. Finally, force a rolling update so all your masters/nodes renew their DHCP lease and pick up the correct DNS servers.
Since kops doesn't (currently) make any changes to the VPC definition (it only creates/modifies subnets), and doesn't do anything with the DNS servers in the DHCP options set besides the default AmazonProvidedDNS, you don't lose any functionality with this hack either.
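To make that concrete, here's a minimal sketch (the cluster name and VPC ID are placeholders):

```sh
# Add the ID of the kops-created VPC to the cluster spec:
#   spec:
#     networkID: vpc-0123456789abcdef0
kops edit cluster mycluster.example.com

# Apply the change, then roll the nodes so they renew their
# DHCP lease and pick up the new DNS servers
kops update cluster mycluster.example.com --yes
kops rolling-update cluster mycluster.example.com --yes
```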
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.