NGINX Ingress controller version:
0.9.0-beta.18
0.9.0-beta.19
Kubernetes version (use kubectl version):
v1.8.4
Environment:
Kernel (uname -a): 4.13.16-coreos-r1
Install: kubectl apply -f addons/nginx-ingress/google
What happened:
Nginx Ingress controller pods never become Running and healthy because they are unable to bind :443.
ingress nginx-ingress-controller-d7c456dbf-gtxcn 0/1 Running 0 2m
ingress nginx-ingress-controller-d7c456dbf-lrt2r 0/1 Running 0 2m
ingress nginx-ingress-controller-d7c456dbf-x4d76 0/1 Running 0 2m
2017/12/02 21:44:06 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:06 [emerg] 14#14: still could not bind()
I1202 21:44:09.397326 5 controller.go:211] backend reload required
I1202 21:44:09.491448 5 controller.go:220] ingress backend successfully reloaded...
2017/12/02 21:44:09 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/02 21:44:09 [emerg] 14#14: still could not bind()
Is this contending with itself? The attempts are close together.
Running netstat or lsof on the host shows :443 either unbound or bound by nginx-ingress-controller itself. I've also tried deleting the old nginx ingress controller and starting with a fresh cluster, to be sure no previous replicas are somehow holding onto the port.
The telltale symptom hinting this might be a bug is that rolling back to 0.9.0-beta.17 immediately resolves the issue. I don't see any clear red flags in the 17 -> 18 commits, but I need to read through further.
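For anyone unfamiliar with the errno in those logs: 98 is EADDRINUSE. A minimal, self-contained sketch of what nginx is hitting when some other process (or another thread of the same controller) already holds the port:

```python
import errno
import socket

# Bind one listener to an ephemeral port, then show that a second bind to
# the same address fails with EADDRINUSE -- the same "(98: Address already
# in use)" that nginx logs above when it cannot take :443.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
first.listen()
addr = first.getsockname()

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(addr)
    conflict = None
except OSError as exc:
    conflict = exc.errno
finally:
    second.close()
    first.close()

print(conflict == errno.EADDRINUSE)  # True: the address is already taken
```

This is only an illustration of the symptom, not of which process is the culprit here.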
What you expected to happen:
Nginx Ingress controller binds :443 successfully.
How to reproduce it (as minimally and precisely as possible):
https://github.com/poseidon/typhoon/blob/master/addons/nginx-ingress/google-cloud/deployment.yaml
Anything else we need to know:
@dghubble two things: you are using hostNetwork: true scaled to two. How many nodes do you have? Could it be that you have just one node and this error is triggered in just one of the pods?
If you want to use hostNetwork, a DaemonSet is a better option.
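For illustration, a DaemonSet variant guarantees at most one controller pod per node, so hostNetwork replicas can never contend for the same host ports. A sketch only — the image tag and default-backend name are carried over from the manifests in this thread:

```yaml
# Sketch: hostNetwork ingress controller as a DaemonSet (one pod per node).
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true
      containers:
      - name: nginx-ingress
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=default/default-http-backend
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
```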
I am seeing this same behavior on CoreOS/OpenStack when comparing 0.9.0 with 0.9.0-beta.17 using the following template (only changing the image tag here):
apiVersion: v1
data:
  server-name-hash-bucket-size: "512"
  ssl-protocols: "TLSv1.2"
  proxy-read-timeout: "300"
  proxy-send-timeout: "300"
  custom-http-errors: "404,502,503"
  body-size: 50m
kind: ConfigMap
metadata:
  name: nginx-ingress-conf
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ilb-rc
  labels:
    app: nginx-ilb
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=default/default-http-backend
        - --enable-ssl-passthrough
        - --healthz-port=9999
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-conf
      hostNetwork: true
NOTE: hostNetwork, only 1 replica
Stepping through the new releases in this repo, I see this same problem happening with all versions after 0.9.0-beta.17.
0.9.0:
core@my_vm01 ~ $ kubectl logs -f nginx-ilb-rc-r5m81
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.9.0
Build: git-6816630
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
...
I1204 17:52:00.722956 5 controller.go:220] ingress backend successfully reloaded...
2017/12/04 17:51:58 [emerg] 14#14: still could not bind()
2017/12/04 17:52:01 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 17:52:01 [emerg] 14#14: still could not bind()
0.9.0-beta.19:
core@my_vm01 ~/ndslabs-startup $ kubectl logs -f nginx-ilb-rc-993tj
I1204 20:26:54.650422 5 main.go:227] Creating API client for https://10.0.0.1:443
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.9.0-beta.19
Build: git-37a230c
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
...
I1204 20:26:55.400493 5 controller.go:220] ingress backend successfully reloaded...
2017/12/04 20:26:55 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:26:55 [emerg] 14#14: still could not bind()
0.9.0-beta.18:
core@my_vm01 ~/ndslabs-startup $ kubectl logs -f nginx-ilb-rc-00ffb
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.9.0-beta.18
Build: git-8896412
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
...
I1204 20:28:19.779541 5 controller.go:220] ingress backend successfully reloaded...
2017/12/04 20:28:19 [emerg] 13#13: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to 0.0.0.0:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: bind() to [::]:443 failed (98: Address already in use)
2017/12/04 20:28:19 [emerg] 13#13: still could not bind()
As you can see, everything after 0.9.0-beta.17 appears to be failing with the same message, but 0.9.0-beta.17 and prior appear to work just fine:
core@my_vm01 ~/ndslabs-startup $ kubectl logs -f nginx-ilb-rc-25qrv
I1204 20:36:12.336918 6 launch.go:128]
Name: NGINX
Release: 0.9.0-beta.17
Build: git-baa6bcb0
Repository: https://github.com/kubernetes/ingress-nginx
I1204 20:36:12.337039 6 launch.go:131] Watching for ingress class: nginx
I1204 20:36:12.337195 6 launch.go:307] Creating API client for https://10.0.0.1:443
I1204 20:36:12.361495 6 launch.go:319] Running in Kubernetes Cluster version v1.5 (v1.5.2) - git (clean) commit 08e099554f3c31f6e6f07b448ab3ed78d0520507 - platform linux/amd64
I1204 20:36:12.363071 6 launch.go:155] validated default/default-http-backend as the default backend
I1204 20:36:12.367310 6 nginx.go:174] starting NGINX process...
I1204 20:36:12.368471 6 controller.go:1262] starting Ingress controller
I1204 20:36:12.380898 6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"noflo-ingress", UID:"d52a1ee8-c590-11e7-a966-08002767665f", APIVersion:"extensions", ResourceVersion:"549668", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/noflo-ingress
I1204 20:36:12.383829 6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"lambert8", Name:"s1mwng-cloudcmd-ingress", UID:"426936ce-d70c-11e7-9a31-08002767665f", APIVersion:"extensions", ResourceVersion:"549672", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress lambert8/s1mwng-cloudcmd-ingress
I1204 20:36:12.383927 6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"lambert8", Name:"sp2vjm-cloud9rails-ingress", UID:"505534df-d7f5-11e7-83d0-08002767665f", APIVersion:"extensions", ResourceVersion:"549673", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress lambert8/sp2vjm-cloud9rails-ingress
I1204 20:36:12.383969 6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"cis-ingress", UID:"d5444b70-c590-11e7-a966-08002767665f", APIVersion:"extensions", ResourceVersion:"549670", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/cis-ingress
I1204 20:36:12.384009 6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"cloud9", UID:"d559f99b-c590-11e7-a966-08002767665f", APIVersion:"extensions", ResourceVersion:"565945", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/cloud9
I1204 20:36:12.384039 6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ndslabs-auth", UID:"184ddc45-d911-11e7-88d1-08002767665f", APIVersion:"extensions", ResourceVersion:"553414", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/ndslabs-auth
I1204 20:36:12.384065 6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ndslabs-open", UID:"1859c876-d911-11e7-88d1-08002767665f", APIVersion:"extensions", ResourceVersion:"565656", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/ndslabs-open
I1204 20:36:18.266097 6 controller.go:1270] running initial sync of secrets
I1204 20:36:18.267262 6 backend_ssl.go:64] adding secret default/ndslabs-tls-secret to the local store
I1204 20:36:18.268959 6 backend_ssl.go:64] adding secret default/cis-tls-secret to the local store
I1204 20:36:18.271337 6 backend_ssl.go:64] adding secret lambert8/lambert8-tls-secret to the local store
W1204 20:36:18.279857 6 controller.go:869] service lambert8/s1mwng-cloudcmd does not have any active endpoints
W1204 20:36:18.280013 6 controller.go:869] service lambert8/sp2vjm-cloud9rails does not have any active endpoints
W1204 20:36:18.280052 6 controller.go:869] service lambert8/sp2vjm-cloud9rails does not have any active endpoints
I1204 20:36:18.281022 6 controller.go:307] backend reload required
I1204 20:36:18.281192 6 metrics.go:34] changing prometheus collector from to default
I1204 20:36:18.285441 6 leaderelection.go:174] attempting to acquire leader lease...
I1204 20:36:18.289649 6 status.go:199] new leader elected: nginx-ilb-rc-00ffb
I1204 20:36:18.411241 6 controller.go:316] ingress backend successfully reloaded...
127.0.0.1 - [127.0.0.1] - - [04/Dec/2017:20:36:28 +0000] "GET /dashboard/home HTTP/2.0" 200 2194 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36" 276 0.065 [default-ndslabs-webui-80] 172.17.0.4:3000 2211 0.065 200
127.0.0.1 - [127.0.0.1] - - [04/Dec/2017:20:36:28 +0000] "GET /shared/footer/FooterController.js HTTP/2.0" 200 479 "https://www.local.ndslabs.org/dashboard/home" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36" 42 0.379 [default-ndslabs-webui-80] 172.17.0.4:3000 960 0.379 200
Is there perhaps a breaking change that is missing from the changelog? I did see the notes about the default SSL ciphers and the expected format of the ingress annotations, but these seemed inconsequential in my case.
Is there something that I might be missing here?
EDIT: it seems that removing the --enable-ssl-passthrough flag allows the ILB to start as expected... so maybe I'm just misunderstanding the behavior that this flag introduces?
I still see this with 0.9.0.
@aledbf There are plenty of nodes, so I don't think that's the issue. A Deployment is fine here; a DaemonSet would be overkill since it doesn't need to run on the masters or on every worker.
it seems that removing the --enable-ssl-passthrough flag allows...
Aha! GCE is a red herring. It so happens my GCE clusters are the only ones with apps needing pass-through enabled on the controller. Removing the flag lets Nginx start (but breaks apps needing the feature). Thanks @bodom0015
So, something is fishy about --enable-ssl-passthrough starting in 0.9.0-beta.18.
My example to reproduce is incorrect (it actually works fine); it is missing the --enable-ssl-passthrough flag that surfaces the issue. Sorry about that.
The binding regression with --enable-ssl-passthrough was introduced somewhere between:
https://github.com/kubernetes/ingress-nginx/commit/1e9e2e0718c68cbb2f4fc33120e4320d0e3a851c (0.9.0-beta.17, good)
https://github.com/kubernetes/ingress-nginx/commit/fdd231816c3c80a3f5bbef16fbbe6e82aab55a6a (bad)
Git bisect is a bit tricky because of unrelated build issues in commits between releases (using make sub-container-amd64): nginx.tmpl:116:14: executing "nginx.tmpl" at <$cfg.UseBrotli>: can't evaluate field UseBrotli in type config.Configuration.
@dghubble we are going to remove the go proxy for the ssl passthrough feature and just use nginx.
Can you elaborate a bit? Is the current behavior expected, part of some deprecation I missed, or do you mean it's being fixed in an upcoming implementation switch?
No deprecation, just a switch of implementations.
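For background on what "just use nginx" could mean (my reading, not confirmed here): stock nginx can route TLS by SNI without terminating it, via the stream module's ssl_preread (available since nginx 1.11.5). A sketch with placeholder upstream addresses:

```nginx
stream {
    # Route on the SNI name from the ClientHello without decrypting.
    map $ssl_preread_server_name $backend {
        hm.admin.site.com.br  10.0.0.5:443;   # placeholder upstream
        default               127.0.0.1:8443; # placeholder fallback
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```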
I'm getting the same problem:
I0101 00:09:32.806807 5 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"wb-ingress", UID:"bb16d517-ee84-11e7-84c1-42010a9e0004", APIVersion:"extensions", ResourceVersion:"14288372", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/wb-ingress
I0101 00:09:32.888711 5 controller.go:220] ingress backend successfully reloaded...
2018/01/01 00:09:32 [emerg] 12#12: bind() to 0.0.0.0:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to [::]:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to 0.0.0.0:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to [::]:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to 0.0.0.0:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to [::]:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to 0.0.0.0:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to [::]:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to 0.0.0.0:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: bind() to [::]:443 failed (98: Address already in use)
2018/01/01 00:09:32 [emerg] 12#12: still could not bind()
I'm using ssl-passthrough (also added --enable-ssl-passthrough) and my ingress configuration is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: wb-ingress
spec:
  rules:
  - host: hm.admin.site.com.br
    http:
      paths:
      - backend:
          serviceName: vs-hm-admin-ing
          servicePort: 443
        path: /
And my service:
kind: Service
apiVersion: v1
metadata:
  name: vs-hm-admin-ing
spec:
  selector:
    app: vs-hm-admin
  ports:
  - protocol: TCP
    port: 443
    targetPort: 3001
I've tried a lot of things and I'm lost on this problem. I'm running on Google Kubernetes Engine.
My ingress-nginx-controller yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "5"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-ingress-controller","namespace":"ingress-nginx"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"ingress-nginx"}},"template":{"metadata":{"labels":{"app":"ingress-nginx"}},"spec":{"containers":[{"args":["/nginx-ingress-controller","--default-backend-service=$(POD_NAMESPACE)/default-http-backend","--configmap=$(POD_NAMESPACE)/nginx-configuration","--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services","--udp-services-configmap=$(POD_NAMESPACE)/udp-services","--publish-service=$(POD_NAMESPACE)/ingress-nginx","--annotations-prefix=nginx.ingress.kubernetes.io"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0","name":"nginx-ingress-controller","ports":[{"containerPort":80,"name":"http"},{"containerPort":443,"name":"https"}]}]}}}}
  creationTimestamp: 2017-12-31T23:38:58Z
  generation: 5
  labels:
    app: ingress-nginx
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: "14285763"
  selfLink: /apis/extensions/v1beta1/namespaces/ingress-nginx/deployments/nginx-ingress-controller
  uid: c603251e-ee83-11e7-84c1-42010a9e0004
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --enable-ssl-passthrough
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        imagePullPolicy: IfNotPresent
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2017-12-31T23:38:58Z
    lastUpdateTime: 2017-12-31T23:38:58Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 5
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
I am running on bare metal (Kubernetes 1.8.4) and have the same issue.
Also running into this issue, and I _think_ that the value of --enable-ssl-passthrough isn't propagated correctly to the template configuration.
If I run the controller with --enable-ssl-passthrough and --v=3, the configuration that's dumped to stdout has the following: {...,"IsSSLPassthroughEnabled":false,...}, and I also noticed that the configuration file looks different from 0.9.0-beta.10 (which is what we were running previously):
# 0.9.0-beta.10
# map port 442 to 443 for header X-Forwarded-Port
map $pass_server_port $pass_port {
    442 443;
    default $pass_server_port;
}

vs

# 0.9.0
map $pass_server_port $pass_port {
    443 443;
    default $pass_server_port;
}
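One speculative reading of that diff: in the working builds, the controller's Go passthrough proxy owns :443 and hands non-passthrough connections to nginx on :442, which is why the map rewrites 442 back to 443 for X-Forwarded-Port. If the broken templates instead render nginx listening on :443 while the Go proxy also binds :443, the bind conflict above follows directly. The beta.17-era layout would look roughly like this (the proxy_protocol detail is an assumption on my part):

```nginx
# Sketch of the split in the working builds (ports from the diff above):
# the Go passthrough proxy binds 0.0.0.0:443, peeks at SNI, and either
# forwards raw TLS to the backend or hands the connection to nginx here.
server {
    # nginx's internal HTTPS listener sits behind the proxy on :442;
    # $pass_server_port is then mapped 442 -> 443 for X-Forwarded-Port.
    listen 442 proxy_protocol;
    server_name _;
}
```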
@aledbf Would you consider a minor release of 0.9 to fix SSL passthrough? The feature is essentially broken right now, and since there are no breaking changes I'm not sure it is viable to just wait for the 0.10.0 release.
@jeremyzahner we are going to release 0.10.0 before the end of the week