I am facing this issue when connecting to a WebSocket service through the ingress:
failed: Error during WebSocket handshake: Unexpected response code: 400
_Ingress YAML_
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: websocket-producer-cdph
spec:
  rules:
_Service YAML_
kind: Service
apiVersion: v1
metadata:
  name: websocket-producer-cdph
spec:
  ports:
When I try to listen on ws://some.domain.com/ws it shows:
Error during WebSocket handshake: Unexpected response code: 400
But if I change the spec type of the service to LoadBalancer, it generates an IP (192.168.1.17:8183), and listening on ws://192.168.1.17:8183/ws works. However, I need to expose the URL through the ingress so it can be used outside the network.
I am using the following image for the ingress controller:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
How can I create an ingress for a WebSocket service?
I checked nginx.conf and this is also present:
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
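For reference, a minimal sketch of what the full manifests for this setup might look like, assuming the port 8183, path /ws, and host from the question; the timeout annotations and the pod selector label are assumptions, added so long-lived WebSocket connections are not dropped by nginx's default 60s proxy timeout:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: websocket-producer-cdph
  annotations:
    kubernetes.io/ingress.class: nginx
    # keep idle WebSocket connections open longer than the 60s default
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: some.domain.com
      http:
        paths:
          - path: /ws
            backend:
              serviceName: websocket-producer-cdph
              servicePort: 8183
---
kind: Service
apiVersion: v1
metadata:
  name: websocket-producer-cdph
spec:
  selector:
    app: websocket-producer-cdph   # assumed pod label
  ports:
    - port: 8183
      targetPort: 8183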
Should work fine if you've set the backend protocol on the LoadBalancer to TCP:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
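A minimal sketch of where that annotation goes, assuming an AWS LoadBalancer Service sitting in front of the nginx ingress controller (the name, selector, and ports are illustrative):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  annotations:
    # terminate at L4/TCP so the Upgrade/Connection headers pass through untouched
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx   # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443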
Try adding these annotations to your ingress:
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_token"
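In context these sit under the Ingress metadata, roughly like this (service name taken from the question; the upstream-hash-by key is only needed if you want clients pinned to one backend pod based on a token query argument):

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: websocket-producer-cdph
  annotations:
    kubernetes.io/ingress.class: nginx
    # hash upstream selection on the ?token= query argument for sticky sessions
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_token"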
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
For others looking to solve their WebSockets/Ingress issues, I created a checklist here: https://gist.github.com/jsdevtom/7045c03c021ce46b08cb3f41db0d76da#file-ingress-service-yaml
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
Does this work for bare metal?
@prakasa-tkpd: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
https://gist.github.com/jsdevtom/7045c03c021ce46b08cb3f41db0d76da#file-ingress-service-yaml
doesn't work for me on a bare-metal private VPS.
WebSocket connections through the ingress always get a 400 response; going directly to the service, everything works fine. What is wrong with my configuration here:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/http2-push-preload: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
    nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
  name: nginx-ingress
spec:
  rules:
    - host: www.terpusat.com
      http:
        paths:
          - path: /
            backend:
              serviceName: anakin
              servicePort: 3000
One more question: how can I forward the headers below to my backend for a specific path only?
more_set_headers "Sec-WebSocket-Key: $http_sec_websocket_key";
more_set_headers "Sec-WebSocket-Version: $http_sec_websocket_version";
more_set_headers "Sec-WebSocket-Protocol: $http_sec_websocket_protocol";
more_set_headers "Sec-WebSocket-Extensions: $http_sec_websocket_extensions";