Hey,
Not sure if this is an issue on your side or on mine, but let me explain the situation:
I'm using ingress-nginx with an AWS ALB at L4 (TCP) because I need WebSocket support. At the same time, I want to terminate TLS on that ALB because the certificates are issued by AWS. So my Service looks like this:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    # certificate from the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<redacted>"
    # the backend instances are HTTP/HTTPS/TCP, so let nginx do that
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    # Map port 443
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http
Now I also want to force HTTPS (wss) on the nginx side, so I use nginx.ingress.kubernetes.io/force-ssl-redirect: "true". However, that causes the initial upgrade request to be answered with a 308 even though it is already initiated with wss://, because $redirect_to_https is set with:
map "$scheme:$pass_access_scheme" $redirect_to_https {
    default 0;
    "http:http" 1;
    "https:http" 1;
}
Since TLS is terminated at the ELB and forwarded to nginx as plain TCP, nginx sees the connection as http on both sides, so "http:http" matches and the redirect fires. Do you have a suggestion for how to solve this? I just want to make sure nobody connects over an insecure connection (http/ws) and that they always get redirected to the secure one.
If I don't use force-ssl-redirect, everything works fine (both ws and wss).
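For reference, one possible workaround (a sketch only, untested here) would be to drop force-ssl-redirect and decide based on which load-balancer port the traffic arrived on: point the Service's port 80 at a dedicated plain-HTTP server that only returns the redirect, defined via the controller's http-snippet ConfigMap option, while port 443 keeps pointing at nginx's normal HTTP port with ssl-redirect disabled. The ConfigMap name and the 2443 port below are illustrative, not taken from this thread.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; use whatever ConfigMap your controller reads
  namespace: ingress-nginx
data:
  # TLS is already terminated at the ELB, so nginx must not redirect the
  # (plain-HTTP) traffic that arrives from the port-443 listener.
  ssl-redirect: "false"
  # Dedicated server that only redirects; the Service's port 80 targetPort
  # (and a matching containerPort on the controller pod) would point here.
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }

With something like that in place, anything hitting the ELB on port 80 gets the 308, while wss:// connections (TLS terminated at the ELB, forwarded to the http port) are never redirected.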
Thx
+1
Having the same issue.
@m1schka How are you creating an ALB when using ingress-nginx? I've had to switch to using alb-ingress in order to get an ALB and not the classic ELB. I haven't got the redirects working just yet, though.
@himeshladva-ni There is no support for ALB. If you want to use an ALB, please check https://github.com/coreos/alb-ingress-controller
@aledbf Thanks, that's what I've ended up using. I was just intrigued because @m1schka mentioned he's using an ALB, but that doesn't quite line up with what the supplied YAML file would do (create an ELB).
Sorry for the confusion; by ALB I meant the classic load balancer.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.