Dashboard version: the image I'm using is k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Kubernetes version: 1.9.3
Operating system: CoreOS
I have followed the recommended setup: created my own certs and chain and everything else that I believe is required.
I can access the dashboard over HTTPS using the config below, with the certs coming from a secret and mounted in the usual way (a sketch of the secret mount follows the snippet).
spec:
  containers:
  - args:
    - --tls-cert-file=tls.crt
    - --tls-key-file=tls.key
    - --disable-skip
    image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    [...]
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 8443
        scheme: HTTPS
    [...]
    ports:
    - containerPort: 8443
      protocol: TCP
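The secret mount looks roughly like this (a minimal sketch, assuming the secret is named kubernetes-dashboard-certs and is mounted at the dashboard's default cert directory /certs; the names in your manifest may differ):

    volumeMounts:
    - name: kubernetes-dashboard-certs
      mountPath: /certs          # tls.crt / tls.key referenced by the args above are read from here
      readOnly: true
  volumes:
  - name: kubernetes-dashboard-certs
    secret:
      secretName: kubernetes-dashboard-certs   # secret holding tls.crt and tls.key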
I expose this deployment via a service on 443 -> 8443 (a sketch of such a service is below), and I can indeed see the dashboard and log in via token.
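For reference, the service is essentially this (a minimal sketch, assuming the standard k8s-app: kubernetes-dashboard labels and the kube-system namespace; the type and selector may differ in your setup):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer            # assumption: exposed externally via a cloud load balancer
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443                   # service port
    targetPort: 8443            # dashboard container port
    protocol: TCP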
But I've noticed the pod log showing:
2018/11/15 22:04:24 Starting overwatch
2018/11/15 22:04:24 Using in-cluster config to connect to apiserver
2018/11/15 22:04:24 Using service account token for csrf signing
2018/11/15 22:04:24 No request provided. Skipping authorization
2018/11/15 22:04:24 Successful initial request to the apiserver, version: v1.9.3
2018/11/15 22:04:24 Generating JWE encryption key
2018/11/15 22:04:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/11/15 22:04:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/11/15 22:04:24 Initializing JWE encryption key from synchronized object
2018/11/15 22:04:24 Creating in-cluster Heapster client
2018/11/15 22:04:24 Serving securely on HTTPS port: 8443
2018/11/15 22:04:24 Successful request to heapster
2018/11/15 22:04:31 http: TLS handshake error from 192.168.100.0:2456: EOF
2018/11/15 22:04:35 http: TLS handshake error from 192.168.101.0:60602: EOF
2018/11/15 22:04:35 http: TLS handshake error from 192.168.102.0:22088: EOF
and the same three lines looping, where 192.168.100|101|102 are within the range of my podCIDR.
I cannot find any reference to those messages in the code/GitHub. My guess is that whatever is connecting is coming in via HTTP instead of HTTPS, but I can't work out who or what it is, or how to stop these log entries.
Thanks.
Just to add: the incoming IP does not seem to match any of the pods I have running, at least according to the output of kubectl get pods --all-namespaces filtered on .status.podIP.
In fact, what I should have said is that the source addresses actually end in .0, for example 192.168.1.0.
I'll update the issue with this new IP info.
After finding similar issues with the same message for other Kubernetes-related apps (in my case, this one: https://github.com/dexidp/dex/issues/992), I was led to look at the health check performed by my ELB, which was indeed initially set to TCP. Changing it to SSL made the log messages stop. Closing this issue!
Sorry for the confusion!