NGINX Ingress controller version:
Release: 0.30.0
Build: git-7e65b90c4
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.8
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Environment:
uname -a: Linux 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
What happened:
The NGINX Ingress controller crashes when I add the ingress below, and it stays in an error state until I delete that ingress, after which the controller runs normally again.
W0306 16:39:05.723067 6 flags.go:260] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0306 16:39:05.723149 6 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0306 16:39:05.723378 6 main.go:193] Creating API client for https://10.96.0.1:443
I0306 16:39:05.730621 6 main.go:237] Running in Kubernetes cluster version v1.17 (v1.17.3) - git (clean) commit 06ad960bfd03b39c8310aaf92d1e7c12ce618213 - platform linux/amd64
I0306 16:39:06.112296 6 main.go:102] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
I0306 16:39:06.180460 6 nginx.go:263] Starting NGINX Ingress controller
I0306 16:39:06.208221 6 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"7e7560ab-c4b1-4ab5-9283-108d4f61a6a0", APIVersion:"v1", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0306 16:39:06.211197 6 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"db51ad8d-bbf1-43e1-ae8f-1b31d7580cd3", APIVersion:"v1", ResourceVersion:"1377", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0306 16:39:06.219682 6 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"29079ede-4943-4863-9313-0402b8d9894f", APIVersion:"v1", ResourceVersion:"1380", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0306 16:39:07.287198 6 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard1", UID:"8091c66e-534f-45f9-b554-5045c9ae309b", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"346622", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress kubernetes-dashboard/kubernetes-dashboard1
I0306 16:39:07.287257 6 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"ee88f247-b00d-4ab5-a136-ef5c903d3944", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"343566", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress kubernetes-dashboard/kubernetes-dashboard
I0306 16:39:07.287739 6 backend_ssl.go:66] Adding Secret "kubernetes-dashboard/dashboard-admin-sa-token-kfxvc" to the local store
I0306 16:39:07.287816 6 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard2", UID:"73884b1d-8b92-4800-9ddd-70804c4e4a4f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"351445", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress kubernetes-dashboard/kubernetes-dashboard2
I0306 16:39:07.383495 6 nginx.go:760] Starting TLS proxy for SSL Passthrough
I0306 16:39:07.383594 6 nginx.go:307] Starting NGINX process
I0306 16:39:07.383597 6 leaderelection.go:242] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
E0306 16:39:07.387342 6 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 155 [running]:
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x163bb00, 0x276b5b0)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x163bb00, 0x276b5b0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/runtime/panic.go:679 +0x1b2
crypto/x509.(*Certificate).hasSANExtension(...)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/x509.go:787
crypto/x509.(*Certificate).commonNameAsHostname(0x0, 0x1c)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:932 +0x82
crypto/x509.(*Certificate).VerifyHostname(0x0, 0xc0006dd5c0, 0x1c, 0xc0004c85a0, 0x0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:1015 +0x22b
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).createServers(0xc0002f4000, 0xc00087e4c0, 0x3, 0x4, 0xc000880540, 0xc0006f5a20, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1110 +0x173a
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getBackendServers(0xc0002f4000, 0xc00087e4c0, 0x3, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:457 +0xe4
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getConfiguration(0xc0002f4000, 0xc00087e4c0, 0x3, 0x4, 0x4, 0x20, 0x16b7760, 0xc000029a01, 0xc00087e3e0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:405 +0x80
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).syncIngress(0xc0002f4000, 0x16dd5a0, 0xc000875300, 0xc0170f7355, 0x102b82e8a55)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:125 +0xcc
k8s.io/ingress-nginx/internal/task.(*Queue).worker(0xc000546480)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:129 +0x30c
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00068cfa8)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00089ffa8, 0x3b9aca00, 0x0, 0xb65d01, 0xc000088180)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/ingress-nginx/internal/task.(*Queue).Run(0xc000546480, 0x3b9aca00, 0xc000088180)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61 +0x6b
created by k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).Start
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:310 +0x48e
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x2e8 pc=0x700ec2]
goroutine 155 [running]:
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x163bb00, 0x276b5b0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/runtime/panic.go:679 +0x1b2
crypto/x509.(*Certificate).hasSANExtension(...)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/x509.go:787
crypto/x509.(*Certificate).commonNameAsHostname(0x0, 0x1c)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:932 +0x82
crypto/x509.(*Certificate).VerifyHostname(0x0, 0xc0006dd5c0, 0x1c, 0xc0004c85a0, 0x0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:1015 +0x22b
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).createServers(0xc0002f4000, 0xc00087e4c0, 0x3, 0x4, 0xc000880540, 0xc0006f5a20, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1110 +0x173a
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getBackendServers(0xc0002f4000, 0xc00087e4c0, 0x3, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:457 +0xe4
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getConfiguration(0xc0002f4000, 0xc00087e4c0, 0x3, 0x4, 0x4, 0x20, 0x16b7760, 0xc000029a01, 0xc00087e3e0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:405 +0x80
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).syncIngress(0xc0002f4000, 0x16dd5a0, 0xc000875300, 0xc0170f7355, 0x102b82e8a55)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:125 +0xcc
k8s.io/ingress-nginx/internal/task.(*Queue).worker(0xc000546480)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:129 +0x30c
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00068cfa8)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00089ffa8, 0x3b9aca00, 0x0, 0xb65d01, 0xc000088180)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/ingress-nginx/internal/task.(*Queue).Run(0xc000546480, 0x3b9aca00, 0xc000088180)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61 +0x6b
created by k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).Start
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:310 +0x48e
What you expected to happen:
The controller should either accept the ingress or report a validation error if it is not correct, instead of crashing.
How to reproduce it:
echo "
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-passthrough: 'true'
    nginx.org/ssl-backends: 'kubernetes-dashboard'
    kubernetes.io/ingress.allow-http: 'false'
    nginx.ingress.kubernetes.io/backend-protocol: 'HTTPS'
  name: kubernetes-dashboard2
  namespace: kubernetes-dashboard
spec:
  tls:
  - hosts:
    - 123.com
    secretName: kubernetes-dashboard-certs
  rules:
  - host: 123.com
    http:
      paths:
      - path: /dashboard1
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
" | kubectl apply -f -
/kind bug
@imranrazakhan I am sorry, I cannot reproduce this issue
echo "
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  name: kubernetes-dashboard2
  namespace: kubernetes-dashboard
spec:
  tls:
  - hosts:
    - 123.com
    secretName: kubernetes-dashboard-certs
  rules:
  - host: 123.com
    http:
      paths:
      - path: /dashboard1
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
" | kubectl apply -f -
ingress.kubernetes.io/ssl-passthrough: "true" -> nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.org/ssl-backends: "kubernetes-dashboard" -> invalid annotation
kubernetes.io/ingress.allow-http: "false" -> invalid annotation
@aledbf If it was due to an invalid annotation, the controller should report a proper error rather than panic.
@imranrazakhan No, it is not about that. Those annotations make no difference; I only meant that they are not valid/supported.
@imranrazakhan how are you generating the SSL certificate?
I was using self-signed certificates.
$ openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
$ openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
$ rm dashboard.pass.key
$ openssl req -new -key dashboard.key -out dashboard.csr
...
$ openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
After these failed attempts, I have for the time being switched strategy and am using basic-auth with the --enable-skip-login option, but I will try again and update accordingly.
My updated ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ing-basic
  annotations:
    nginx.ingress.kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - Dashboard'
    nginx.ingress.kubernetes.io/configuration-snippet: rewrite ^(/dashboard)$ $1/ permanent;
spec:
  rules:
  - host: 123.com
    http:
      paths:
      - path: /dashboard(/|$)(.*)
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
@imranrazakhan what version of openssl? (still unable to reproduce the issue)
# openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017
Still unable to reproduce
@aledbf I am able to reproduce it again: if I create the ingress below with the tls section it throws the panic, and if I recreate it without tls it works fine. (Sorry, I tried many different options.)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard2
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - 123.com
    secretName: dashboard-admin-sa-token-kfxvc
  rules:
  - host: 123.com
    http:
      paths:
      - path: /dashboard2
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
The secret below is only for testing:
# kubectl get secret dashboard-admin-sa-token-kfxvc -o yaml -n kubernetes-dashboard
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXdOREl5TWpnd09Gb1hEVE13TURNd01qSXlNamd3T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT2hWCkM0NHl1bjZDL1VZY1dSdXBsQUVZYWttcUtNQm8ySVlpMHIyNXRaL1dUOVZWM0h2a1FHYTJyY1lWSytRWDI0OWgKVjlwT2MvQW9IbVlRZVhUR2NuQTNQSDhOMExUSHBVNnZwYVE3eGpuS0JOUTl4eDlGQVVnUjVBcmRDZWRaYzRaTgpkOEpucjdjaVBvTCtyQlBDdG9hQmsyemdXMEN0TGNsQ2RmT3Z4Yzd2dENMWUdJK2FmRDRhcGhkS1NpTThtMEtMCjZjbERTMm5nZFcyeFR3SVJIclUrY2ZWcFZGK25qQzVuZFVDLzBsd0VvTnYwUjg3NUl5Zi9UWUIvRmNVSldZNWcKY25BYTR0Qm9BZTFXcG9QN2ttS2M1dW5kWWFaYS9qWngvVlo5UzdWK1FyNExpbXJBVEpXRW9uNm9GYXdTT1NzWgo5ZFdQc3dib0RrR0dMcEh5dEpNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEeGg0SVF3d0Vvd0kxYS92aHlhUURhM3lRUnQKd2tKMjl2eGxKWmpUb1orQkJDNk83YURlbytRMnBmdmhhVUV0bWNLSVFXbUpHbFJNbng5MzNxcm5BNHBqNmgxUApGRGJCM0QvbDlrQmgzVE9pR3hwUmNJNlRwTEVWM2JXZDgwVXNxcExROUFWTFd6N1lxUXhUK1JjTi8rSmNvbW9yCkJEK1JWcDF2R0tmbVpsQ3RoN0ZWNi9Zbk9VVjVHTHY0b2pwZTNtZDltdlRsZmg5VmdDc2h1K3g0VTRjbmZqMHQKREZnWmZwNDhuOG84cnlrOFMzQWpXL1E1Qm1Bcmp6YVkwdStnbmFBQ3BhT1FBSTlzdW9pUGRzTi9CSFZYeG02Ngp2VzRCeDRad0l0K2ZyNjFJa00rN3NtbGJzdHBGS1JCS0tRdEI3K2JzZWFrRTc0RVBBQTNXakl3MUhmUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  namespace: a3ViZXJuZXRlcy1kYXNoYm9hcmQ=
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqazBVVzgzYUdWdFZVRm9lRmR3YjBKRk0zSTVkR1o1Tm1ZNWJsQnFWV3hoUlRGTVkwTjRlSFJKYTFFaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpyZFdKbGNtNWxkR1Z6TFdSaGMyaGliMkZ5WkNJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZqY21WMExtNWhiV1VpT2lKa1lYTm9ZbTloY21RdFlXUnRhVzR0YzJFdGRHOXJaVzR0YTJaNGRtTWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzV1WVcxbElqb2laR0Z6YUdKdllYSmtMV0ZrYldsdUxYTmhJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpYSjJhV05sTFdGalkyOTFiblF1ZFdsa0lqb2lPRE5sTmpNd016Z3ROVFEwWXkwMFpqSTBMV0ZpWmpZdFlURXpPVEEyTldGaFpUWTJJaXdpYzNWaUlqb2ljM2x6ZEdWdE9uTmxjblpwWTJWaFkyTnZkVzUwT210MVltVnlibVYwWlhNdFpHRnphR0p2WVhKa09tUmhjMmhpYjJGeVpDMWhaRzFwYmkxellTSjkuaTVBQUVmcWpHQzNDNHJTdklWR1U2QVJWT05FVkVPVGxFei1QaERqcFpjWDQ2cmRla3RFV0JjSHhfbUI3X292QUJQTFU1dmpSLVlPRDRKZzRLRkc0c01HMWpPcTljY01zaV81TUxzb25mVm5DbmdOdkZIcFV2S0E5cE92MFVpVDRzR19teS1HTzV2b3ZvR1JVZmtpTUVYR1luSFZVXzg0NVVQaC1WRWVYUWNqUGF0ZDNuNmdZdjZrOHE5VEswclhiemdHM3dzNU5UbF9NWkU0ZFk3RlhDdG1ieC02S0pzaEtBRld2a1JKTUVheEVIa3BkM2N0eWtLMU9mOVRFMXlKaUJ2ekxNaDBhNVBMNVhmUEdqOHU0eEpFRjlwZmVYZ3RxaUZ0b191MHNqSzkxbUdnczE0YmZxcTRVOGJ2UkhUOUdXYjBqazhJbW5zbmQ2OVZzRjFZQk5B
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: dashboard-admin-sa
    kubernetes.io/service-account.uid: 83e63038-544c-4f24-abf6-a139065aae66
  creationTimestamp: "2020-03-06T09:06:16Z"
  name: dashboard-admin-sa-token-kfxvc
  namespace: kubernetes-dashboard
type: kubernetes.io/service-account-token
Ok, that is not a valid secret for SSL. What command are you using to generate the secret?
@aledbf Yes, I just used the secret generated by the command below, but the controller should still report a proper error saying the secret is invalid.
kubectl create serviceaccount dashboard-admin-sa -n kubernetes-dashboard
Yes, I just used the secret generated by the command below, but the controller should still report a proper error saying the secret is invalid.
I am aware this is a bug. From the report, until your last comment, I assumed you were using an SSL certificate, not a service account. I can reproduce the issue with this new detail.
Please use quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:dev
This image contains #5225
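A guard along these lines is enough to turn the panic into a regular error (a hypothetical sketch only; safeVerifyHostname is an invented name, not the actual #5225 change):

```go
package main

import (
	"crypto/x509"
	"errors"
	"fmt"
)

// safeVerifyHostname refuses a nil certificate instead of letting
// VerifyHostname dereference a nil receiver and panic.
func safeVerifyHostname(cert *x509.Certificate, host string) error {
	if cert == nil {
		return errors.New("secret does not contain a valid certificate")
	}
	return cert.VerifyHostname(host)
}

func main() {
	// With the secret from this issue, no certificate is parsed,
	// so the guard reports an error instead of crashing.
	fmt.Println(safeVerifyHostname(nil, "123.com"))
}
```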
NGINX Ingress controller
Release: 0.30.0
Build: git-7e65b90c4
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.8
W0315 18:48:17.509616 6 flags.go:260] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0315 18:48:17.509687 6 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0315 18:48:17.509826 6 main.go:193] Creating API client for https://10.96.0.1:443
I0315 18:48:17.516899 6 main.go:237] Running in Kubernetes cluster version v1.17 (v1.17.4) - git (clean) commit 8d8aa39598534325ad77120c120a22b3a990b5ea - platform linux/amd64
I0315 18:48:17.676256 6 main.go:102] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
I0315 18:48:17.695832 6 nginx.go:263] Starting NGINX Ingress controller
I0315 18:48:17.702773 6 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"81a8ba30-6e3b-401e-9ee7-21078aaaba73", APIVersion:"v1", ResourceVersion:"32494", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0315 18:48:17.707108 6 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"45f61b38-fd9e-4cba-af18-be5462f94846", APIVersion:"v1", ResourceVersion:"32496", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0315 18:48:17.707139 6 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"e098077b-4e35-4466-87ec-8cd8f984c2fc", APIVersion:"v1", ResourceVersion:"32498", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0315 18:48:18.798352 6 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"60da8499-4e59-45d8-8023-38037a94688b", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"27384", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress kubernetes-dashboard/dashboard-ingress
I0315 18:48:18.798523 6 backend_ssl.go:66] Adding Secret "kubernetes-dashboard/admin-user-token-r7hpd" to the local store
I0315 18:48:18.896662 6 nginx.go:307] Starting NGINX process
I0315 18:48:18.896755 6 leaderelection.go:242] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
E0315 18:48:18.897182 6 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 159 [running]:
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x163bb00, 0x276b5b0)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x163bb00, 0x276b5b0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/runtime/panic.go:679 +0x1b2
crypto/x509.(*Certificate).hasSANExtension(...)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/x509.go:787
crypto/x509.(*Certificate).commonNameAsHostname(0x0, 0x1a)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:932 +0x82
crypto/x509.(*Certificate).VerifyHostname(0x0, 0xc0000be880, 0x1a, 0xc0008b40f0, 0x0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:1015 +0x22b
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).createServers(0xc00043bb20, 0xc00000e460, 0x1, 0x1, 0xc0003ec9f0, 0xc0000e6c60, 0xc0006482c0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1110 +0x173a
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getBackendServers(0xc00043bb20, 0xc00000e460, 0x1, 0x1, 0xc0003f9080, 0xc000106bc0, 0xc00044e1b0, 0x7f45f196c008, 0x203000, 0xc0003f7c00)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:457 +0xe4
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getConfiguration(0xc00043bb20, 0xc00000e460, 0x1, 0x1, 0x1, 0x20, 0x16b7760, 0xc000537a01, 0xc00038d220)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:405 +0x80
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).syncIngress(0xc00043bb20, 0x16dd5a0, 0xc0001075e0, 0xc035774165, 0x2a3e6a314390)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:125 +0xcc
k8s.io/ingress-nginx/internal/task.(*Queue).worker(0xc00027ef60)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:129 +0x30c
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00064dfa8)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000677fa8, 0x3b9aca00, 0x0, 0x1, 0xc000402240)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/ingress-nginx/internal/task.(*Queue).Run(0xc00027ef60, 0x3b9aca00, 0xc000402240)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61 +0x6b
created by k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).Start
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:310 +0x48e
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:310 +0x48e
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x2e8 pc=0x700ec2]
goroutine 159 [running]:
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x163bb00, 0x276b5b0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/runtime/panic.go:679 +0x1b2
crypto/x509.(*Certificate).hasSANExtension(...)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/x509.go:787
crypto/x509.(*Certificate).commonNameAsHostname(0x0, 0x1a)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:932 +0x82
crypto/x509.(*Certificate).VerifyHostname(0x0, 0xc0000be880, 0x1a, 0xc0008b40f0, 0x0)
/home/ubuntu/.gimme/versions/go1.13.8.linux.amd64/src/crypto/x509/verify.go:1015 +0x22b
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).createServers(0xc00043bb20, 0xc00000e460, 0x1, 0x1, 0xc0003ec9f0, 0xc0000e6c60, 0xc0006482c0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:1110 +0x173a
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getBackendServers(0xc00043bb20, 0xc00000e460, 0x1, 0x1, 0xc0003f9080, 0xc000106bc0, 0xc00044e1b0, 0x7f45f196c008, 0x203000, 0xc0003f7c00)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:457 +0xe4
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getConfiguration(0xc00043bb20, 0xc00000e460, 0x1, 0x1, 0x1, 0x20, 0x16b7760, 0xc000537a01, 0xc00038d220)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:405 +0x80
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).syncIngress(0xc00043bb20, 0x16dd5a0, 0xc0001075e0, 0xc035774165, 0x2a3e6a314390)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:125 +0xcc
k8s.io/ingress-nginx/internal/task.(*Queue).worker(0xc00027ef60)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:129 +0x30c
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00064dfa8)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000677fa8, 0x3b9aca00, 0x0, 0x1, 0xc000402240)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/tmp/go/src/k8s.io/ingress-nginx/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/ingress-nginx/internal/task.(*Queue).Run(0xc00027ef60, 0x3b9aca00, 0xc000402240)
/tmp/go/src/k8s.io/ingress-nginx/internal/task/queue.go:61 +0x6b
created by k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).Start
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:310 +0x48e
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:310 +0x48e
@zouchengli Please use quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:dev
This image contains #5225
OK, I'm trying it now.
Thank you.
@aledbf I am waiting for this fix. When will you release the next version? Do you have an expected date?
@Hokwang End of April. In the meantime, you can use the dev tag posted in a previous comment.