Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
helm version
Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Which chart:
stable/oauth2-proxy
What happened:
Ingress times out while connecting to oauth2-proxy. This is in an Azure AKS cluster with stable/nginx-ingress installed. My ingress setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kuard-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: https://my.cloudapp.azure.com/oauth2/auth
    nginx.ingress.kubernetes.io/auth-signin: https://my.cloudapp.azure.com/oauth2/start
spec:
  tls:
  - hosts:
    - my.cloudapp.azure.com
    secretName: my-cloudapp-prod-tls
  rules:
  - host: my.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kuard-service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
spec:
  rules:
  - host: my.cloudapp.azure.com
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 80
My values.yaml used during helm install:
config:
  clientID: <private>
  clientSecret: <private>
  cookieSecret: <private>
extraArgs: {"provider":"github", "cookie-secure":false, "redirect-url":"https://my.cloudapp.azure.com/"}
image:
  repository: "a5huynh/oauth2_proxy"
  tag: "2.2"
  pullPolicy: "IfNotPresent"
When navigating to my app I receive a 500 error. The logs reveal:
kubectl logs nginx-ingress-controller-79598bb4f6-ctf5r --namespace kube-system --tail 10
10.10.0.4 - [10.10.0.4] - - [05/Jun/2018:20:45:08 +0000] "GET / HTTP/2.0" 504 0 "-" "Mozilla/5.0 (X11; SunOS sun4u; rv:54.0) Gecko/20100101 Firefox/54.0" 0 60.075 [default-kuard-service-80] 40.117.152.139:443 0 60.000 504
10.10.0.4 - [10.10.0.4] - - [05/Jun/2018:20:45:08 +0000] "GET / HTTP/2.0" 500 194 "-" "Mozilla/5.0 (X11; SunOS sun4u; rv:54.0) Gecko/20100101 Firefox/54.0" 257 60.075 [default-kuard-service-80] - - - -
2018/06/05 20:45:08 [error] 5754#5754: *30791 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.10.0.4, server: my.cloudapp.azure.com, request: "GET / HTTP/2.0", subrequest: "/_external-auth-Lw", upstream: "https://my-public-ip:443/oauth2/auth", host: "my.cloudapp.azure.com"
2018/06/05 20:45:08 [error] 5754#5754: *30791 auth request unexpected status: 504 while sending to client, client: 10.10.0.4, server: my.cloudapp.azure.com, request: "GET / HTTP/2.0", host: "my.cloudapp.azure.com"
What you expected to happen:
Navigating directly to https://my.cloudapp.azure.com/oauth2/ping works.
Also, navigating to https://my.cloudapp.azure.com/ works when the auth-url and auth-signin annotations are removed.
Hi @tdebiasio, same issue for me. Did you happen to find the answer?
I ended up moving away from the helm chart and deploying a5huynh/oauth2_proxy manually. I did have a few issues with my ingress setup, though, starting with this line:
nginx.ingress.kubernetes.io/auth-url: https://my.cloudapp.azure.com/oauth2/auth
which should have been:
http://oauth2-proxy.kube-system.svc.cluster.local:4180/oauth2/auth
If you are using Azure AD, my other issue was that the reply URL needed was /oauth2/callback, not .../oauth2/signin-oidc.
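A minimal sketch of the corrected annotations on the protected ingress, assuming the proxy's Service is named oauth2-proxy in kube-system and listens on 4180 (adjust to wherever your deployment actually runs):
metadata:
  name: kuard-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # auth-url is fetched by the ingress controller itself, so it must point at
    # the in-cluster Service rather than back out through the public hostname
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.kube-system.svc.cluster.local:4180/oauth2/auth
    # auth-signin is followed by the browser, so it stays on the public hostname
    nginx.ingress.kubernetes.io/auth-signin: https://my.cloudapp.azure.com/oauth2/start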
Thanks @tdebiasio, http://oauth2-proxy.kube-system.svc.cluster.local:4180/oauth2/auth works for me even with the helm chart.
Thanks @tdebiasio @olegchorny. What ended up working for me with the helm chart was:
OAUTH_CLUSTER_SERVICE=oauth2-proxy
OAUTH_CLUSTER_NAMESPACE=cluster-svc
http://$OAUTH_CLUSTER_SERVICE.$OAUTH_CLUSTER_NAMESPACE.svc.cluster.local:80/oauth2/auth
E.g.
nginx.ingress.kubernetes.io/auth-url: "http://kibana-ingress-oauth2-proxy.cluster-svc.svc.cluster.local:80/oauth2/auth"
More concretely, when installing oauth2-proxy as a child chart of the chart that owns the ingress of interest, this is what works for me:
kind: Ingress
metadata:
  name: kibana
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "http://{{ .Release.Name }}-oauth2-proxy.{{ .Release.Namespace }}.svc.cluster.local:80/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
Port 4180 didn't work for me.
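Whether 80 or 4180 works seems to come down to what the oauth2-proxy Service exposes: the auth-url has to use the Service port, and the chart's Service maps that onto the container's 4180. A minimal sketch of such a Service, assuming the chart's defaults (name, namespace, and labels are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: kibana-ingress-oauth2-proxy
  namespace: cluster-svc
spec:
  selector:
    app: oauth2-proxy
  ports:
  - name: http
    port: 80          # the port used in the auth-url annotation
    targetPort: 4180  # the port oauth2_proxy listens on inside the pod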
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.