Kops: Terraform Doesn't Work With Private Hosted Zones

Created on 13 Feb 2017 · 13 Comments · Source: kubernetes/kops

I am attempting to use kops to create a private topology cluster with private DNS, but I receive the message Route53 private hosted zones are not supported for terraform. It is unclear whether this means I need to create the Route53 entries manually before exporting to Terraform with kops, or whether kops is simply unable to run this command at all.
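If the zone does have to exist beforehand, one workaround is to pre-create the private hosted zone with the AWS CLI before running kops. This is a minimal sketch; the VPC ID, region, and zone name below are placeholder assumptions, not values from this issue:

```shell
# Pre-create a private Route53 hosted zone associated with a VPC.
# vpc-0abc123 and eu-west-1 are placeholders -- substitute your own.
aws route53 create-hosted-zone \
  --name domain.local \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0abc123 \
  --caller-reference "kops-$(date +%s)"
```

kops should then detect and re-use the existing zone instead of trying to create one.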


All 13 comments

I have the same issue: with both private and public Route53 zones I'm receiving the same error, Route53 private hosted zones are not supported for terraform.
I'm executing this command:
kops create cluster --vpc vpc-dc26b8 k8s-test.internal --zones=eu-west-1a,eu-west-1b,eu-west-1c --dns private --node-count 2 --master-zones eu-west-1a --node-size t2.small --master-size t2.medium --out=. --target=terraform --state s3://s3.k8s.test

Kops Version 1.5.1 (git-01deca8)

The same command without --out=. --target=terraform works.

I'm getting the same issue on kops 1.5.1:

kops create cluster --name=dev.pdg.io --cloud=aws --target=terraform --out=. --state=s3://pdg-kube-aws --zones=eu-central-1a --dns-zone=dev.*****.io --node-size=t2.small --master-size=t2.small --dns=private

@justinsb any ideas?

Any updates here? I am having the same issue

I believe this is a duplicate of #1848. There is a concern around how to "manage" or "acquire" information about that private hosted zone with Terraform in a repeatable and safe way.

I ended up setting up my VPC, subnets, route tables, private DNS zone, etc. manually with Terraform.
I tagged my route tables, IGW, and subnets with the following tag:
KubernetesCluster = "${var.kops_cluster_name}"
Obviously, var.kops_cluster_name is the name of my cluster.
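For reference, the same tag can also be applied outside Terraform with the AWS CLI. This is a sketch under assumptions: the route-table and gateway IDs and the cluster name are placeholders (only subnet-e43xxxxx appears in this thread):

```shell
# Apply the KubernetesCluster tag that kops looks for on shared resources.
# Resource IDs and the cluster name are placeholders.
CLUSTER_NAME="k8s.domain.local"
aws ec2 create-tags \
  --resources subnet-e43xxxxx rtb-0abc123 igw-0abc123 \
  --tags Key=KubernetesCluster,Value="$CLUSTER_NAME"
```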

In my kops kops create cluster command I specified my VPC, network and private dns zone created by terraform. I didn't specify terraform output here.

After the creation, I updated the cluster configuration so that the subnets match the ones I've created:

  • cidr: 10.200.32.0/19
    id: subnet-e43xxxxx
    name: eu-central-1a
    type: Private
    zone: eu-central-1a

After that I applied the changes.
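The edit-and-apply step above maps to the following kops commands; the state bucket and cluster name are placeholders, not values from this comment:

```shell
# Edit the cluster spec to point the subnets at the pre-created ones,
# then apply the changes (state bucket and name are placeholders).
kops edit cluster --state=s3://bucket --name=k8s.domain.local
kops update cluster --state=s3://bucket --name=k8s.domain.local --yes
```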

At this point I can manage my kubernetes resources with kops and my other resources with terraform.

I found most of the required info to get here in this post https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md

I believe this has been addressed in https://github.com/kubernetes/kops/pull/2297

This issue is still happening in 1.6.1

Just tried with:
--dns private
.. and it worked
kops v1.7.0

Closing. Please use kops 1.7.1, as it includes a CVE patch.

Running the simple
kops create cluster --name=sample.domain.com --zones=eu-west-1a --target=terraform --out=./kops/sample --dns=private --dns-zone=domain.com
fails with output:

...
I1208 11:05:58.879563   28104 dnszone.go:242] Check for existing route53 zone to re-use with name "domain.com"
W1208 11:05:59.001850   28104 executor.go:109] error running task "DNSZone/domain.com" (9m57s remaining to succeed): Creation of Route53 hosted zones is not supported for terraform
I1208 11:05:59.001887   28104 executor.go:124] No progress made, sleeping before retrying 1 failed task(s)
...

I am using kops 1.8.0.
Has it been fixed in 1.8.0 as well?

Same with kops 1.8.1:
terraform export isn't working for private topology.

kops create cluster \
  --state=s3://bucket \
  --name=k8s.domain.local \
  --dns=private \
  --dns-zone=domain.local \
  --master-size=t2.medium \
  --node-size=t2.medium \
  --zones=eu-west-1a \
  --master-zones=eu-west-1a \
  --node-count=3 \
  --master-count=1 \
  --image=ami-a61464df \
  --master-volume-size=50 \
  --node-volume-size=50 \
  --topology=private \
  --networking=calico \
  --ssh-public-key=~/.ssh/development-kubernetes.pub \
  --api-loadbalancer-type=internal \
  --kubernetes-version=1.9.3 \
  --network-cidr=10.253.0.0/16 \
  --out=../../../../../terraform/k8s \
  --target=terraform

Output:

I0225 19:41:15.567792   30890 dnszone.go:242] Check for existing route53 zone to re-use with name "domain.local"
W0225 19:41:17.100616   30890 executor.go:109] error running task "DNSZone/domain.local" (9m14s remaining to succeed): Creation of Route53 hosted zones is not supported for terraform
I0225 19:41:17.100707   30890 executor.go:124] No progress made, sleeping before retrying 1 failed task(s)

The zone exists.

Had the same issue before I paid attention to the following statement:

The only requirement to trigger this is to have the cluster name end with .k8s.local
