I'm following this tutorial https://github.com/kubernetes/kops/blob/master/docs/aws.md and trying to use private topology with a bastion, and I got the error below:
$ kops version
Version 1.6.2 (git-98ae12a)
$ kops create cluster --topology private --networking calico --bastion="true" kpc-test.k8s.local
I0707 15:03:09.874526 30282 create_cluster.go:655] Inferred --cloud=aws from zone "ap-southeast-1a"
I0707 15:03:09.874687 30282 create_cluster.go:841] Using SSH public key: /home/winggundamth/.ssh/id_rsa.pub
I0707 15:03:10.103989 30282 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet ap-southeast-1a
I0707 15:03:10.104042 30282 subnets.go:183] Assigned CIDR 172.20.64.0/19 to subnet ap-southeast-1b
I0707 15:03:10.104076 30282 subnets.go:197] Assigned CIDR 172.20.0.0/22 to subnet utility-ap-southeast-1a
I0707 15:03:10.104107 30282 subnets.go:197] Assigned CIDR 172.20.4.0/22 to subnet utility-ap-southeast-1b
Previewing changes that will be made:
I0707 15:03:12.931898 30282 apply_cluster.go:396] Gossip DNS: skipping DNS validation
W0707 15:03:12.935014 30282 firewall.go:195] Opening etcd port on masters for access from the nodes, for calico. This is unsafe in untrusted environments.
I0707 15:03:12.936986 30282 loader.go:220] Known tasks:
I0707 15:03:12.937046 30282 loader.go:222] AutoscalingGroup/bastions.kpc-test.k8s.local
I0707 15:03:12.937053 30282 loader.go:222] AutoscalingGroup/master-ap-southeast-1a.masters.kpc-test.k8s.local
I0707 15:03:12.937060 30282 loader.go:222] AutoscalingGroup/nodes.kpc-test.k8s.local
I0707 15:03:12.937066 30282 loader.go:222] DHCPOptions/kpc-test.k8s.local
I0707 15:03:12.937073 30282 loader.go:222] DNSName/bastion.kpc-test.k8s.local
I0707 15:03:12.937079 30282 loader.go:222] EBSVolume/a.etcd-events.kpc-test.k8s.local
I0707 15:03:12.937085 30282 loader.go:222] EBSVolume/a.etcd-main.kpc-test.k8s.local
I0707 15:03:12.937092 30282 loader.go:222] ElasticIP/ap-southeast-1a.kpc-test.k8s.local
I0707 15:03:12.937098 30282 loader.go:222] ElasticIP/ap-southeast-1b.kpc-test.k8s.local
I0707 15:03:12.937103 30282 loader.go:222] IAMInstanceProfile/bastions.kpc-test.k8s.local
I0707 15:03:12.937109 30282 loader.go:222] IAMInstanceProfile/masters.kpc-test.k8s.local
I0707 15:03:12.937115 30282 loader.go:222] IAMInstanceProfile/nodes.kpc-test.k8s.local
I0707 15:03:12.937124 30282 loader.go:222] IAMInstanceProfileRole/bastions.kpc-test.k8s.local
I0707 15:03:12.937133 30282 loader.go:222] IAMInstanceProfileRole/masters.kpc-test.k8s.local
I0707 15:03:12.937143 30282 loader.go:222] IAMInstanceProfileRole/nodes.kpc-test.k8s.local
I0707 15:03:12.937151 30282 loader.go:222] IAMRole/bastions.kpc-test.k8s.local
I0707 15:03:12.937161 30282 loader.go:222] IAMRole/masters.kpc-test.k8s.local
I0707 15:03:12.937170 30282 loader.go:222] IAMRole/nodes.kpc-test.k8s.local
I0707 15:03:12.937179 30282 loader.go:222] IAMRolePolicy/additional.bastions.kpc-test.k8s.local
I0707 15:03:12.937189 30282 loader.go:222] IAMRolePolicy/additional.masters.kpc-test.k8s.local
I0707 15:03:12.937198 30282 loader.go:222] IAMRolePolicy/additional.nodes.kpc-test.k8s.local
I0707 15:03:12.937206 30282 loader.go:222] IAMRolePolicy/bastions.kpc-test.k8s.local
I0707 15:03:12.937215 30282 loader.go:222] IAMRolePolicy/masters.kpc-test.k8s.local
I0707 15:03:12.937224 30282 loader.go:222] IAMRolePolicy/nodes.kpc-test.k8s.local
I0707 15:03:12.937233 30282 loader.go:222] InternetGateway/kpc-test.k8s.local
I0707 15:03:12.937272 30282 loader.go:222] Keypair/kops
I0707 15:03:12.937280 30282 loader.go:222] Keypair/kube-controller-manager
I0707 15:03:12.937289 30282 loader.go:222] Keypair/kube-proxy
I0707 15:03:12.937297 30282 loader.go:222] Keypair/kube-scheduler
I0707 15:03:12.937304 30282 loader.go:222] Keypair/kubecfg
I0707 15:03:12.937314 30282 loader.go:222] Keypair/kubelet
I0707 15:03:12.937321 30282 loader.go:222] Keypair/master
I0707 15:03:12.937330 30282 loader.go:222] LaunchConfiguration/bastions.kpc-test.k8s.local
I0707 15:03:12.937339 30282 loader.go:222] LaunchConfiguration/master-ap-southeast-1a.masters.kpc-test.k8s.local
I0707 15:03:12.937347 30282 loader.go:222] LaunchConfiguration/nodes.kpc-test.k8s.local
I0707 15:03:12.937356 30282 loader.go:222] LoadBalancer/api.kpc-test.k8s.local
I0707 15:03:12.937366 30282 loader.go:222] LoadBalancer/bastion.kpc-test.k8s.local
I0707 15:03:12.937374 30282 loader.go:222] LoadBalancerAttachment/api-master-ap-southeast-1a
I0707 15:03:12.937381 30282 loader.go:222] LoadBalancerAttachment/bastion-elb-attachment
I0707 15:03:12.937390 30282 loader.go:222] NatGateway/ap-southeast-1a.kpc-test.k8s.local
I0707 15:03:12.937399 30282 loader.go:222] NatGateway/ap-southeast-1b.kpc-test.k8s.local
I0707 15:03:12.937407 30282 loader.go:222] Route/0.0.0.0/0
I0707 15:03:12.937415 30282 loader.go:222] Route/private-ap-southeast-1a-0.0.0.0/0
I0707 15:03:12.937423 30282 loader.go:222] Route/private-ap-southeast-1b-0.0.0.0/0
I0707 15:03:12.937432 30282 loader.go:222] RouteTable/kpc-test.k8s.local
I0707 15:03:12.937441 30282 loader.go:222] RouteTable/private-ap-southeast-1a.kpc-test.k8s.local
I0707 15:03:12.937450 30282 loader.go:222] RouteTable/private-ap-southeast-1b.kpc-test.k8s.local
I0707 15:03:12.937457 30282 loader.go:222] RouteTableAssociation/private-ap-southeast-1a.kpc-test.k8s.local
I0707 15:03:12.937466 30282 loader.go:222] RouteTableAssociation/private-ap-southeast-1b.kpc-test.k8s.local
I0707 15:03:12.937476 30282 loader.go:222] RouteTableAssociation/utility-ap-southeast-1a.kpc-test.k8s.local
I0707 15:03:12.937484 30282 loader.go:222] RouteTableAssociation/utility-ap-southeast-1b.kpc-test.k8s.local
I0707 15:03:12.937493 30282 loader.go:222] SSHKey/kubernetes.kpc-test.k8s.local-c5:9a:fb:74:d4:91:8e:d8:c8:2e:c7:c4:b3:cb:32:b5
I0707 15:03:12.937503 30282 loader.go:222] SecurityGroup/api-elb.kpc-test.k8s.local
I0707 15:03:12.937512 30282 loader.go:222] SecurityGroup/bastion-elb.kpc-test.k8s.local
I0707 15:03:12.937521 30282 loader.go:222] SecurityGroup/bastion.kpc-test.k8s.local
I0707 15:03:12.937531 30282 loader.go:222] SecurityGroup/masters.kpc-test.k8s.local
I0707 15:03:12.937566 30282 loader.go:222] SecurityGroup/nodes.kpc-test.k8s.local
I0707 15:03:12.937577 30282 loader.go:222] SecurityGroupRule/all-master-to-master
I0707 15:03:12.937587 30282 loader.go:222] SecurityGroupRule/all-master-to-node
I0707 15:03:12.937595 30282 loader.go:222] SecurityGroupRule/all-node-to-node
I0707 15:03:12.937604 30282 loader.go:222] SecurityGroupRule/api-elb-egress
I0707 15:03:12.937612 30282 loader.go:222] SecurityGroupRule/bastion-egress
I0707 15:03:12.937621 30282 loader.go:222] SecurityGroupRule/bastion-elb-egress
I0707 15:03:12.937630 30282 loader.go:222] SecurityGroupRule/bastion-to-master-ssh
I0707 15:03:12.937638 30282 loader.go:222] SecurityGroupRule/bastion-to-node-ssh
I0707 15:03:12.937646 30282 loader.go:222] SecurityGroupRule/https-api-elb-0.0.0.0/0
I0707 15:03:12.937656 30282 loader.go:222] SecurityGroupRule/https-elb-to-master
I0707 15:03:12.937665 30282 loader.go:222] SecurityGroupRule/master-egress
I0707 15:03:12.937675 30282 loader.go:222] SecurityGroupRule/node-egress
I0707 15:03:12.937684 30282 loader.go:222] SecurityGroupRule/node-to-master-protocol-ipip
I0707 15:03:12.937692 30282 loader.go:222] SecurityGroupRule/node-to-master-tcp-1-4001
I0707 15:03:12.937700 30282 loader.go:222] SecurityGroupRule/node-to-master-tcp-4003-65535
I0707 15:03:12.937710 30282 loader.go:222] SecurityGroupRule/node-to-master-udp-1-65535
I0707 15:03:12.937717 30282 loader.go:222] SecurityGroupRule/ssh-elb-to-bastion
I0707 15:03:12.937725 30282 loader.go:222] SecurityGroupRule/ssh-external-to-bastion-elb-0.0.0.0/0
I0707 15:03:12.937736 30282 loader.go:222] Subnet/ap-southeast-1a.kpc-test.k8s.local
I0707 15:03:12.937746 30282 loader.go:222] Subnet/ap-southeast-1b.kpc-test.k8s.local
I0707 15:03:12.937754 30282 loader.go:222] Subnet/utility-ap-southeast-1a.kpc-test.k8s.local
I0707 15:03:12.937766 30282 loader.go:222] Subnet/utility-ap-southeast-1b.kpc-test.k8s.local
I0707 15:03:12.937773 30282 loader.go:222] VPC/kpc-test.k8s.local
I0707 15:03:12.937781 30282 loader.go:222] VPCDHCPOptionsAssociation/kpc-test.k8s.local
I0707 15:03:12.937794 30282 loader.go:222] kpc-test.k8s.local-addons-bootstrap
I0707 15:03:12.937803 30282 loader.go:222] kpc-test.k8s.local-addons-core.addons.k8s.io
I0707 15:03:12.937812 30282 loader.go:222] kpc-test.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.6
I0707 15:03:12.937822 30282 loader.go:222] kpc-test.k8s.local-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
I0707 15:03:12.937832 30282 loader.go:222] kpc-test.k8s.local-addons-kube-dns.addons.k8s.io-k8s-1.6
I0707 15:03:12.937842 30282 loader.go:222] kpc-test.k8s.local-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
I0707 15:03:12.937852 30282 loader.go:222] kpc-test.k8s.local-addons-limit-range.addons.k8s.io
I0707 15:03:12.937863 30282 loader.go:222] kpc-test.k8s.local-addons-networking.projectcalico.org-k8s-1.6
I0707 15:03:12.937874 30282 loader.go:222] kpc-test.k8s.local-addons-networking.projectcalico.org-pre-k8s-1.6
I0707 15:03:12.937885 30282 loader.go:222] kpc-test.k8s.local-addons-storage-aws.addons.k8s.io
I0707 15:03:12.937896 30282 loader.go:222] secret/admin
I0707 15:03:12.937906 30282 loader.go:222] secret/kube
I0707 15:03:12.937917 30282 loader.go:222] secret/kube-proxy
I0707 15:03:12.937927 30282 loader.go:222] secret/kubelet
I0707 15:03:12.937937 30282 loader.go:222] secret/system-controller_manager
I0707 15:03:12.937947 30282 loader.go:222] secret/system-dns
I0707 15:03:12.937958 30282 loader.go:222] secret/system-logging
I0707 15:03:12.937968 30282 loader.go:222] secret/system-monitoring
I0707 15:03:12.937980 30282 loader.go:222] secret/system-scheduler
error building tasks: unexpected error resolving task "DNSName/bastion.kpc-test.k8s.local": Unable to find task "DNSZone/", referenced from DNSName/bastion.kpc-test.k8s.local:.Zone
Does anyone know how to fix this?
Sorry I overlooked this issue. I submitted #3053. Because it isn't a totally trivial fix, I think this will likely have to wait till 1.7.1, which we'll do shortly after 1.7.0 (which itself is imminent).
The explanation:
Hi @justinsb,
Just wanted to say I was running into the same problem when using a bastion node with a gossip cluster, and building kops from source with your branch worked great. Thanks for the fix and I'm hoping it can get merged soon!
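In case it helps anyone else who wants to try a fix before it lands in a release, here is a rough sketch of building kops from source; it assumes a standard Go workspace with Go and make installed, and FIX_BRANCH is a placeholder for whichever branch actually carries the fix.
# Rough sketch of building kops from source to test a fix branch (assumptions above).
mkdir -p "$GOPATH/src/k8s.io"
cd "$GOPATH/src/k8s.io"
git clone https://github.com/kubernetes/kops.git
cd kops
git checkout FIX_BRANCH   # placeholder: check out the fix branch here
make                      # build the kops binary with the default target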
I'm running into a problem when I try to set up a cluster with private topology. Here is my call:
kops create cluster \
--node-size t2.micro \
--master-size t2.micro \
--zones eu-central-1b \
--master-zones eu-central-1b \
--dns-zone ${ZONE} \
--ssh-public-key="~/.ssh/id_rsa.pub" \
--topology private \
--networking calico \
--bastion \
${NAME}
and I'm getting the error
error building tasks: unexpected error resolving task "DNSName/bastion.mycustomer-kops-cluster.k8s.local": Unable to find task "DNSZone/itest-uuid.com", referenced from DNSName/bastion.mycustomer-kops-cluster.k8s.local:.Zone
Could this be the same problem?
I got the same problem. My command is:
kops create cluster \
--node-count 3 \
--topology private \
--zones us-east-1a,us-east-1b,us-east-1c \
--master-size t2.medium \
--node-size t2.medium \
--networking calico \
--bastion \
${NAME}
For those of you still searching for a solution: you can create the cluster with private networking first (without the bastion), and then add the bastion afterwards with this.
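For reference, a rough sketch of that two-step workaround (the zone, networking option, and cluster name are just example values; the subnet passed to the instance group must be one of the utility subnets kops actually created):
# 1. Create the cluster with private topology, but without --bastion
kops create cluster \
--topology private \
--networking calico \
--zones us-east-1a \
${NAME}
# 2. Add a bastion instance group afterwards, placed in a utility subnet
kops create instancegroup bastions --role Bastion --subnet utility-us-east-1a
# 3. Apply the changes
kops update cluster ${NAME} --yes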
It appears the above suggestion does not work in my case. I think kops 1.8.0 is not creating the utility subnet?
If the utility subnet is used during the create, we get a Go panic. If the utility- prefix is not used, the create opens the editor, and after the editor is closed a readable message is produced saying that the utility subnet cannot be found. See below:
$ kops update cluster --name example.cluster.k8s.local
I0108 10:05:50.133498 22038 apply_cluster.go:450] Gossip DNS: skipping DNS validation
error building tasks: could not find utility subnet in zone: "us-west-2a"
$ kops create instancegroup bastions --role Bastion --subnet utility-us-west-2a
Using cluster from kubectl context: example.cluster.k8s.local
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1e0 pc=0x26c154f]
goroutine 1 [running]:
main.RunCreateInstanceGroup(0xc42079f3e0, 0xc4202f6d80, 0xc420a22140, 0x1, 0x5, 0x4a706e0, 0xc42000c018, 0xc420763e50, 0x0, 0x0)
/go/src/k8s.io/kops/cmd/kops/create_ig.go:160 +0x2ff
main.NewCmdCreateInstanceGroup.func1(0xc4202f6d80, 0xc420a22140, 0x1, 0x5)
/go/src/k8s.io/kops/cmd/kops/create_ig.go:87 +0x77
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).execute(0xc4202f6d80, 0xc420763f90, 0x5, 0x5, 0xc4202f6d80, 0xc420763f90)
/go/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:603 +0x22b
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x4c60000, 0x29fc900, 0x0, 0x0)
/go/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:689 +0x339
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).Execute(0x4c60000, 0x4c90710, 0x0)
/go/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:648 +0x2b
main.Execute()
/go/src/k8s.io/kops/cmd/kops/root.go:95 +0x9b
main.main()
/go/src/k8s.io/kops/cmd/kops/main.go:25 +0x20
@karlmutch You probably have a cluster with public topology; bastions only work with private topology.
To see the difference, try:
kops --state=$KOPS_STATE_STORE --name=$CLUSTER_NAME create cluster --cloud=aws --zones=us-east-1a --dry-run=true --output=yaml
vs
kops --state=$KOPS_STATE_STORE --name=$CLUSTER_NAME create cluster --cloud=aws --zones=us-east-1a --dry-run=true --output=yaml --bastion --topology=private --networking=weave
Notice the utility subnets when using --topology=private in the second example.
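(A quick way to eyeball just that difference, using the same dry-run flags as above; the grep simply narrows the YAML output and nothing is created:)
# With --topology=private the output should include extra utility-<zone> subnets
# of type Utility alongside the type Private node/master subnets.
kops --state=$KOPS_STATE_STORE --name=$CLUSTER_NAME create cluster \
--cloud=aws --zones=us-east-1a --dry-run=true --output=yaml \
--bastion --topology=private --networking=weave | grep -A 10 'subnets:'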
Regardless, the panic is not good ;(
This behavior still happens without the fix mentioned by @duboisf.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I am still having this issue. /remove-lifecycle stale
I have tried not including a dnsZone configuration, passing a hosted zone name, and passing a hosted zone id.
kops create cluster \
--name=foobar.k8s.local \
--cloud=aws \
--kubernetes-version=1.11.0 \
--state=s3://kops-cluster1 \
--topology=private \
--bastion \
--networking=canal \
--node-count=3 \
--master-count=1 \
--zones=us-east-1b,us-east-1c \
--master-zones=us-east-1a \
--node-size=t2.micro \
--master-size=t2.small \
--dns-zone=marshallford.me \
--dry-run --output yaml | tee foobar.k8s.local.yaml
Thanks.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I have also hit this issue, and I suspect that in my case the cluster name ending with k8s.local was the problem. After I changed it to mycluster.mydomain.com, kops create ... was successful.
This is still an issue:
I0830 13:28:46.399938 40508 loader.go:293] prod.k8s.local-addons-storage-aws.addons.k8s.io-v1.7.0
error building tasks: unexpected error resolving task "DNSName/bastion.prod.k8s.local": Unable to find task "DNSZone/", referenced from DNSName/bastion.prod.k8s.local:.Zone
I understand the reason: the bastion args attempt to set up an R53 entry for bastion.${NAME}, and since the hosted zone prod.k8s.local does not exist in R53, it fails.
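(A quick way to confirm that, assuming the AWS CLI is configured for the same account; a gossip name like prod.k8s.local will have no matching hosted zone, which is why the DNSZone task for the bastion record cannot be resolved:)
aws route53 list-hosted-zones-by-name --dns-name prod.k8s.local --max-items 1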
I've resorted to deploying without the bastion, and then updating the cluster to include the bastion config.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@abhyuditjain: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@justinsb: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.