kops version: 1.7.0
kubernetes version: 1.7.6
If you try to set --api-loadbalancer-type to internal with --topology public, kops complains about subnets not being specified for the load balancer. My assumption is that kops is looking for private subnets and not finding any. While that should be the default, in a public topology it should fall back to using the public subnets.
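For reference, a command along these lines triggers the complaint (the cluster name, state store, and zones below are placeholders, not taken from the original report):
kops create cluster --name=cluster.example.com \
  --state=s3://example-kops-state \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --topology=public \
  --api-loadbalancer-type=internal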
I've confirmed this is an issue in the latest code from master as well.
W0925 16:19:59.275176 27827 executor.go:109] error running task "LoadBalancer/api.foo.com" (9m58s remaining to succeed): Field is required: Subnets
Also ran into this just now, kops 1.7.0 / Kubernetes 1.6.7.
Just ran into this too. kops 1.7.1, kubernetes 1.7.9.
Don't we need a private subnet for an internal ELB?
You do not. An internal ELB simply isn't given a public IP. So companies that have a Direct Connect/VPN solution in place will typically run everything except user-facing ELBs on private IPs only.
Yeah, we definitely only add subnets of the same type to ELBs:
The error also exists with the following config:
The same config works fine in a new vpc managed by kops
kops: 1.8.0
K8s: 1.8.4
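The config from that comment wasn't included, so purely as a hedged sketch of the shared-VPC case being contrasted with a kops-managed VPC: the difference is usually just pointing kops at an existing VPC (the VPC ID and CIDR below are placeholders):
# existing/shared VPC: the case where the internal-LB error is reported
kops create cluster --name=cluster.example.com \
  --state=s3://example-kops-state \
  --vpc=vpc-0123456789abcdef0 \
  --network-cidr=10.0.0.0/16 \
  --zones=eu-west-1a \
  --topology=public \
  --api-loadbalancer-type=internal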
So if it is OK to run an internal load balancer on a public subnet, can support for this be added simply by removing the check mentioned by @mikesplain?
@Sprinkle Could be worth a test. I'll give it a shot.
I've opened up a PR with a potential fix. That said, I may be forgetting an edge case that explains why the removed check was needed.
Feel free to give it a test if you have a kops dev environment set up.
cc @Sprinkle @arthur-c @joshgarnett @braedon @elblivion
Quick test worked for my scenario, thanks @mikesplain
Is there any update on this? I'm still struggling with it, and we need Kubernetes 1.9.x for StorageClass support (we're attempting to use EFS for persistent storage). Thoughts?
My situation is as follows:
I've tried the following combinations:
kops 1.8.0/1.8.1
kubernetes 1.8.6, 1.8.9
kubernetes 1.9.x (1.9.0, 1.9.3, 1.9.4)
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
+1
+1 kops
I'm running into this issue following the instructions at https://github.com/kubernetes/kops/blob/release-1.8/docs/topology.md#changing-topology-of-the-api-server. We use a VPN, but the applications running on the pods need to connect to the internet. As I understand it, we do not have the option of making the topology private.
Our workaround has been to use:
kubectl config set-cluster REDACTED --insecure-skip-tls-verify=true --server=https://{Private IP}
Where {Private IP} is the private IP of the Kubernetes master node.
Using this we can use kubectl normally and access the API at https://{Private IP}/ui
This is not ideal but it works.
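For illustration only (the cluster name and IP below are hypothetical), the workaround filled in looks like:
kubectl config set-cluster redacted.example.com \
  --insecure-skip-tls-verify=true \
  --server=https://10.0.12.34
# with the current context pointing at this cluster entry, verify access:
kubectl get nodes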
My question is: would this approach still work if we scaled to 3 Kubernetes master nodes and used the IP address of just one of them, or would we need to manually put an internal load balancer in place?
It seems the new TLS certificate did not roll out to the masters after changing the API LB from Internal to Public; I get an error message saying the API endpoint is only accepting the previously created certificate. I need to use the insecure-skip-tls-verify option to skip the check.
Any progress on this?
@mikesplain is there any progress on this? From what I understood https://github.com/kubernetes/kops/pull/4488 was supposed to fix this but got closed without merging.
I was also waiting for this but found another way.
I'm applying kops with Terraform only, so I have a Terraform override file (e.g. override.tf) in the directory where kops writes the Terraform files:
resource "aws_elb" "cluster_name" {
internal = true
}
This makes it internal and I can commit the definitions to a repo.
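For anyone following the same route, a hedged sketch of the flow (the cluster name and state store are placeholders): kops renders the Terraform files, Terraform merges in the override file, and the apply picks up internal = true:
# render Terraform from kops into the current directory
kops update cluster cluster.example.com --state=s3://example-kops-state --target=terraform --out=.
# Terraform automatically merges files named override.tf or *_override.tf
terraform init && terraform plan
terraform apply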
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.