Route53 resources?? More details please.
How did you generate the TF config(s)? What was your kops command?
DNS is usually done by the DNS controller so this ticket confuses me :P
Whoa responses! AHHHH!
Y'all are right, I needed to do a better job describing this initially and that's on me.
So here's what I'm seeing. I create a cluster using `kops create cluster --name a.domain.i.own.io --zones=us-west-2a` and kops generates the cluster config as expected. Then, as noted in the docs, I do a `kops update cluster --target=terraform` and I get a complete TF config dropped in `out/terraform/`. Sweet.
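Roughly, the full sequence looks like this (cluster name and state store bucket are placeholders for my real values):

```bash
# Point kops at the S3 state store (placeholder bucket)
export KOPS_STATE_STORE=s3://my-kops-state-store

# Generate the cluster config
kops create cluster --name a.domain.i.own.io --zones=us-west-2a

# Emit Terraform instead of applying directly; kops drops it in out/terraform/
kops update cluster --name a.domain.i.own.io --target=terraform

# Apply the generated config
cd out/terraform
terraform plan
terraform apply
```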
Here's what I feel is missing. When you create and update a cluster using just Kops, the application does a great job of creating the whole cluster _and_ DNS records in the hosted zone you specify (api, internal, etc). However, when you generate the corresponding terraform, it generates everything except the TF resources to create/manage those Route53 records.
I feel that if Kops itself can create and manage those records, it should also generate the TF to create and manage them, just as it does for all the other resources.
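You can see the gap directly in the generated config (file name per kops' default output; the hosted zone ID below is a placeholder):

```bash
# Instances, security groups, etc. are all represented...
grep 'resource "aws_' out/terraform/kubernetes.tf | sort | uniq -c

# ...but no Route53 records are emitted
grep 'aws_route53_record' out/terraform/kubernetes.tf || echo "no Route53 records in TF"

# Compare against what a kops-only cluster actually creates
# (Z123EXAMPLE is a placeholder hosted zone ID)
aws route53 list-resource-record-sets --hosted-zone-id Z123EXAMPLE \
  --query "ResourceRecordSets[?Type=='A'].Name"
```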
@bbriggs in kops 1.4, all DNS entries are generated by the DNS controller, which runs on the master.
Neither kops nor terraform does it directly, AFAIK...
kops is doing something TF is not, for sure.
These A records get created when doing a create. They're not represented in TF (although all the other resources, like security groups, EC2 instances, etc., are).

I'm using TF, and those records are created by dns-controller/protokube once the master finishes bootstrapping...
here is the corresponding log entry in protokube when updating etcd DNS records:

```
aws_dns.go:194] Updating DNS record "etcd-us-east-1a.internal.production.k8s.mydomain.com."
```
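If you want to look for that line yourself, something like this on the master should work (assuming protokube runs under Docker; container naming may vary by version):

```bash
# Find the protokube container and grep its logs for DNS updates
sudo docker ps | grep protokube
sudo docker logs $(sudo docker ps -q --filter name=protokube) 2>&1 \
  | grep 'Updating DNS record'
```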
How long should I wait? I created two clusters, one using just Kops and the other using TF. The records showed up right away when using Kops only, but never showed up when using TF.
I did try to give TF some time to bake, but 6 hours seems a bit excessive :/
Right, 6 hours is way longer than it should be :)
I suggest logging in to the master and looking at the protokube/kubelet logs for any errors.
Will do. So how are DNS records propagating to route53 then, if not via TF or Kops? It seems like kubes wouldn't know how to authenticate to the AWS API with my creds and do this.
I am actually having this issue also. This didn't seem to be a problem when running kops 1.4alpha1. I just upgraded and launched a couple of clusters, and it seems I am having this issue now. Also, previously launched clusters were Kubernetes ~1.3.
> it seems like kubes wouldn't know how to authenticate to the AWS API with my creds and do this.
kops generates IAM roles for the master which include authorization to Route53.
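You can confirm that from the generated role itself; in my clusters the names follow kops' `masters.<cluster-name>` convention (adjust for yours):

```bash
# List inline policies attached to the master role (cluster name is a placeholder)
aws iam list-role-policies --role-name masters.a.domain.i.own.io

# Dump the policy and look for the route53 statements
aws iam get-role-policy \
  --role-name masters.a.domain.i.own.io \
  --policy-name masters.a.domain.i.own.io
```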
@bbriggs are you sure that your kubelet is launching properly? I am having an issue referenced over here #767
I have had some instances where the DNS records were not created by protokube, and this was always due to me being sneaky and editing the terraform output before applying it and screwing it up.
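If you do hand-edit the output, it's worth diffing your copy against a fresh render before applying (a sketch; the `--out` path here is arbitrary):

```bash
# Re-render to a scratch dir and diff against the edited copy
kops update cluster --name a.domain.i.own.io --target=terraform --out=/tmp/tf-fresh
diff -ru /tmp/tf-fresh out/terraform
```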
My normal debug steps (a rough sketch of the commands is below):

- `sudo docker ps` to see if anything is running under Docker. If nothing is, check the syslog and you will probably find the error there.
- `docker logs [container-id]` and look for any errors.

Separately: a Route53 record is being created for the bastion, but not for the API ELB. I am not certain what the best pattern for this is. Thoughts?
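Concretely, something like this on the master (CONTAINER_ID is a placeholder from `docker ps`; the syslog path may vary by image):

```bash
# Is anything running under Docker at all?
sudo docker ps

# If not, the error is usually in syslog
sudo tail -n 100 /var/log/syslog

# Otherwise, pull the logs for the suspect container and scan for errors
sudo docker logs CONTAINER_ID 2>&1 | grep -i error
```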
Protokube and dns-controller update a lot of the other stuff when we launch.
This is working correctly now, in master. Closing
Hi guys,
I am facing the same issue explained above. I am on kops 1.11 and kubectl 1.13.