Kops: NodePort inaccessible on Public AWS cluster default network

Created on 5 Aug 2017 · 10 comments · Source: kubernetes/kops

Hi, I am having difficulties with a NodePort service:

Goal

  • set up the ALB Ingress Controller so that I can use WebSockets and HTTP/2
  • set up a NodePort service as required by that controller

Steps taken

  • Previously a kops (version 1.6.2) cluster was created on AWS eu-west-1. The kops addons for nginx ingress and kube-lego were added. ELB ingress was working fine.
  • Set up the ALB Ingress Controller with custom AWS keys using the IAM profile specified by that project.
  • Changed the service type from LoadBalancer to NodePort using kubectl replace --force (sketched just after this list)
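For reference, the conversion step looked roughly like this (a sketch only; the real manifest is not included here, and the names and ports are taken from the diagnostics further down):

# hypothetical manifest mirroring the service described later in this issue
cat <<'EOF' | kubectl replace --force -f -
apiVersion: v1
kind: Service
metadata:
  name: my-node-port-service
  namespace: default
spec:
  type: NodePort            # was LoadBalancer
  selector:
    service: my-selector
  ports:
    - port: 80              # service port
      targetPort: 3000      # container port of the backing pod
EOF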

Results

  • Curl via the ALB hangs
  • Curl to <public IP address of each node>:<node port for service> hangs

Expected
Curl via the ALB and directly to <node public IP>:<node port> should both return 200 "OK" (the service's HTTP response to the root path).
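In other words, a check like the following should succeed (the node IP is a placeholder; 30176 is the NodePort shown in the diagnostics below):

$ curl -i http://<node-public-ip>:30176/    # expected: HTTP/1.1 200 OK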

area/documentation lifecycle/stale

Most helpful comment

Just stumbled upon this problem. Is it now documented somewhere else than StackOverflow?

All 10 comments

Update
I've been following an issue in the main Kubernetes GitHub repo about hostPort problems:

https://github.com/kubernetes/kubernetes/issues/23920

Here are some additional diagnostics:

> kubectl describe svc my-node-port-service
Name:                   my-node-port-service
Namespace:              default
Labels:                 <none>
Selector:               service=my-selector
Type:                   NodePort
IP:                     100.71.211.249
Port:                   <unset> 80/TCP
NodePort:               <unset> 30176/TCP
Endpoints:              100.96.2.11:3000
Session Affinity:       None
Events:                 <none>

> kubectl describe pods my-nodeport-pod
Name:           my-nodeport-pod
Node:           <ip>.eu-west-1.compute.internal/<ip>
Labels:         service=my-selector
Status:         Running
IP:             100.96.2.11
Containers:
  update-center:
    Port:               3000/TCP
    Ready:              True
    Restart Count:      0
(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6       0      0 :::30176                :::*                    LISTEN      2093/kube-proxy
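For completeness, another check on the node would be whether kube-proxy (in iptables mode) has programmed NAT rules for this NodePort; something like the following should show KUBE-NODEPORTS entries:

$ sudo iptables-save -t nat | grep 30176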

Following suggestions in that ticket:

  • I have upgraded from Kubernetes 1.6.3 to 1.7.3 (and kops to version 1.7.0); no change in behavior

  • changed the pod's hostNetwork to true; no change (see the sketch after this list)
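The hostNetwork change was along these lines (a sketch pieced together from the pod description above; the image name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: my-nodeport-pod
  labels:
    service: my-selector
spec:
  hostNetwork: true          # run the pod in the node's network namespace
  containers:
    - name: update-center
      image: <image>         # not shown in this issue
      ports:
        - containerPort: 3000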

I am using the default kubenet networking and the public AWS topology with kops.

This question has also been posted to Stackoverflow: https://stackoverflow.com/questions/45543694/kubernetes-cluster-on-aws-with-kops-nodeport-service-unavailable

Curl from one of the nodes to the node's internal IP on the node port works.
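That is, from a node itself something like this returns the expected response (values taken from the diagnostics above):

$ curl -s -o /dev/null -w '%{http_code}\n' http://<node-internal-ip>:30176/    # prints 200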

How do I bridge this internal node IP with the public instance address?

I've also posted this issue over to the alb-ingress-controller repo (https://github.com/coreos/alb-ingress-controller/issues/169) on the off-chance that they may be able to help resolve it. However, I really think this has something to do with the way my cluster is set up in AWS, which kops has thankfully largely taken care of for me.

The question remains how to open up the instances so that the configured NodePort is reachable from outside.

Update: I've tailed the kube-proxy logs and there doesn't seem to be anything interesting.

I've found the solution: my problem was resolved by editing the node and master security groups to allow ingress on these ports.

Fuller answer is here: https://stackoverflow.com/questions/45543694/kubernetes-cluster-on-aws-with-kops-nodeport-service-unavailable/45561848#45561848
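In short, the fix boils down to a security group rule along these lines (the group ID is a placeholder, and a tighter source than 0.0.0.0/0, e.g. the ALB's security group, is probably wiser):

# allow the Kubernetes NodePort range into the nodes' security group
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 30000-32767 \
    --cidr 0.0.0.0/0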

I suggest that kops provide some documentation in the networking doc explaining that this security group rule must be added, unless it would make more sense to change the defaults here?

This is the first time I've really had to dig around in the AWS config of the instances that kops set up, and I feel that specifying the nodePort should be sufficient to get it exposed. After all, NodePorts are mentioned in the documentation in the section on exposing your services.

@jwickens yes, it would be a good item to document; admittedly I am not comfortable opening NodePorts up by default. For instance, I believe it would then open the Weave port by default. Marking this as a documentation issue. You can add security groups to instance groups in order to enable this via kops functionality (see the sketch below).
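A sketch of that kops route (the security group ID and the instance group name "nodes" are assumptions, not taken from this issue): create or reuse a security group that allows the NodePort range, then attach it to the instance group:

$ kops edit ig nodes --name $CLUSTER_NAME
# in the editor, add the group under spec:
#   additionalSecurityGroups:
#     - sg-xxxxxxxx
$ kops update cluster --name $CLUSTER_NAME --yes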

For instance, I believe it would then open the Weave port by default.

@chrislovecnm can I drill into what you mean here? The solution linked says to open ports 30,000-32,767, which would not include any ports Weave Net is listening on by default.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Just stumbled upon this problem. Is it now documented somewhere else than StackOverflow?

@chrislovecnm you said "You can add security groups to instance groups in order to enable this via kops functionality." However, the answer linked on Stack Overflow edits a security group using the Amazon CLI. Is there a way to do it directly from kops? If so, could you please share it?

I am in this situation too. Is there any solution or workaround for this on AWS?

However, I am not sure whether the problem should be addressed at kubectl service creation; that is, should creating a NodePort service with kubectl change the cluster's security group so that the port becomes accessible externally?

Any thoughts?
