Here's the use case:
I have a Kubernetes cluster on AWS, built on an existing VPC with an existing load balancer. I'd like to simply be able to forward to hub using an existing load balancer without creating a new one.
What is the best way to do this? What node should be exposed?
Originally, I changed the hub service to a NodePort and thought that worked. However, recently I see a lot of redirection issues: /user -> /hub/user -> /user, and /service -> /hub/service -> 404. So either there is an issue with my network or this approach doesn't work.
Appreciate any suggestions for what to try next.
I think the right approach is to change proxy-public from LoadBalancer to NodePort, since the JupyterHub deployment expects traffic to flow through that service.
For posterity, you can make this change with the config file you use with Helm. Here's my relevant section:
```yaml
proxy:
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
```
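A quick way to confirm the ports took effect after `helm upgrade` (the `jhub` namespace below is an example; use your own):
```sh
kubectl get service proxy-public --namespace jhub
# TYPE should be NodePort, with PORT(S) like 80:30080/TCP,443:30443/TCP
```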
Hi @minrk,
@summerswallow met with us in San Diego last week. This was one of the items that he mentioned in his presentation.
Hey @summerswallow - I tried this on my z2jh deployment with Ambassador, but now the pods just hang in "reconnecting". Did you ever do any further testing with this using NodePort?
@josibake As I recall, we did get it working as described on an AWS cluster in the days prior to EKS. However, due to our security guy's concerns over K8s, we moved away from z2jh and instead homegrew our own "hub" to use directly with Jupyter Notebook and JupyterLab.
So I'm afraid I can't be of more help here.
Thanks for following up! I did end up getting it working with Ambassador and ClusterIP.
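In case it's useful to anyone else, the rough shape of the Ambassador side is a `Mapping` that sends everything to proxy-public. This is only a sketch, assuming Ambassador's v1 CRDs, a `jhub` namespace, and the chart's default service name; adjust names and versions to your install:
```yaml
# Route all traffic through Ambassador to JupyterHub's proxy-public Service.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: jupyterhub    # illustrative name
  namespace: jhub     # illustrative namespace
spec:
  prefix: /
  service: proxy-public.jhub:80
  use_websocket: true   # notebook kernels need websocket support
  timeout_ms: 60000     # kernel connections can be long-lived
```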
> For posterity, you can make this change with the config file you use with Helm. Here's my relevant section:
> ```yaml
> proxy:
>   service:
>     type: NodePort
>     nodePorts:
>       http: 30080
>       https: 30443
> ```
In this case, are you assuming SSL to be offloaded at proxy-public? If you are in AWS and offloading SSL at an ALB, why would you need an HTTPS node port?
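If SSL is offloaded at the ALB, I'd expect the config to only need the HTTP port, something like this sketch (`proxy.https.enabled` is a chart option, but check whether your chart version supports it):
```yaml
proxy:
  https:
    enabled: false     # TLS is terminated at the ALB
  service:
    type: NodePort
    nodePorts:
      http: 30080      # point the ALB target group at this node port
```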
> I have a Kubernetes cluster on AWS, built on an existing VPC with an existing load balancer. I'd like to simply be able to forward to hub using an existing load balancer without creating a new one.

If you have a load balancer already, you should:
- Make your-jupyterhub-domain.com point to it
- Assuming your load balancer can reach your cluster-local addresses, configure proxy-public like this:
  ```yaml
  proxy:
    service:
      type: ClusterIP
  ```
- Make your load balancer redirect traffic to http://proxy-public.<namespace>/

If you cannot assume your load balancer can reach your cluster-local addresses, you need an entrypoint to the cluster somehow, and then perhaps you need to make the service have an externalIP... hmmm.
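For what it's worth, here's a rough sketch of that externalIP idea as a raw Service patch. `spec.externalIPs` is a standard Kubernetes field, but the address below is a placeholder, and whether you'd apply it via `kubectl patch` or via chart values depends on your setup:
```yaml
# Hypothetical patch for the proxy-public Service.
# 203.0.113.10 stands in for an address your load balancer can reach.
spec:
  externalIPs:
    - 203.0.113.10
```
Traffic arriving at that IP on the service port would then be routed by kube-proxy to proxy-public's endpoints.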
There seems to be an issue when trying to use this value:
```
Error: UPGRADE FAILED: Service "proxy-public" is invalid: [spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP', spec.ports[1].nodePort: Forbidden: may not be used when `type` is 'ClusterIP']
```
I have tried setting `nodePorts.{http,https}` to nothing, `""`, and `false`; all of them result in the same issue.
All our Kubernetes traffic goes through the ingress controller; we are trying to disable proxy-public to avoid exposing it.
The template has some conditions around it, but they don't seem to have any effect:
```yaml
{{- if .Values.proxy.service.nodePorts.https }}
nodePort: {{ .Values.proxy.service.nodePorts.https }}
{{- end }}
```
Chart version: 0.9.1
Helm Version: 2.14.3
Kubernetes Version: 1.15 (EKS)
Ingress controller: traefik (with https)
@sortigoza I believe your issue is a result of Helm having trouble reconciling the old configuration with the new. It will probably require a `kubectl delete service proxy-public` followed by a fresh `helm upgrade`. Also, whenever you use `helm upgrade`, I strongly recommend `--cleanup-on-fail` to avoid other issues.
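Concretely, that would look something like this; the release name `jhub`, namespace `jhub`, and `config.yaml` are placeholders for your own values:
```sh
# Delete the stuck Service so Helm can recreate it from scratch.
kubectl delete service proxy-public --namespace jhub

# Re-run the upgrade; --cleanup-on-fail removes resources created during a
# failed upgrade instead of leaving them behind (available in Helm >= 2.14).
helm upgrade jhub jupyterhub/jupyterhub \
  --namespace jhub \
  --version 0.9.1 \
  --values config.yaml \
  --cleanup-on-fail
```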