Inside the Docker container for the dashboard I have the following logs
docker logs 274e6c5548d8
2017/10/06 09:43:07 Using in-cluster config to connect to apiserver
2017/10/06 09:43:07 Using service account token for csrf signing
2017/10/06 09:43:07 No request provided. Skipping authorization
2017/10/06 09:43:07 Starting overwatch
2017/10/06 09:43:37 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
I just did a check using curl to access
curl https://10.96.0.1:443/version
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
curl https://10.96.0.1:443/version --insecure
{
"major": "1",
"minor": "7",
"gitVersion": "v1.7.4+0c1a5fc",
"gitCommit": "0c1a5fc302a1f849e245e64f3ec0d71fb54df8a7",
"gitTreeState": "not a git tree",
"buildDate": "2017-09-06T14:42:45Z",
"goVersion": "go1.8.3",
"compiler": "gc",
"platform": "linux/amd64"
Because I'm new to Kubernetes, can somebody guide me in finding the cause?
Regards
Walter
On which host are you running curl? This IP needs to be accessible from inside the cluster. You would need to exec into a pod to run curl.
The Docker container running the dashboard always stops
docker logs 274e6c5548d8
2017/10/06 09:43:07 Using in-cluster config to connect to apiserver
2017/10/06 09:43:07 Using service account token for csrf signing
2017/10/06 09:43:07 No request provided. Skipping authorization
2017/10/06 09:43:07 Starting overwatch
2017/10/06 09:43:37 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information:
So I'm not able to run curl in that one, is that what you mean?
No, you need to run some other container inside the cluster. Not dashboard. Exec into it and then run curl. Networking inside the cluster is different from the one outside.
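For example, something along these lines should do it (the image name and pod name are just examples; any image that ships curl will work, and -k only skips certificate verification for this connectivity test):
kubectl run curl-test --image=centos --restart=Never --command -- sleep 3600
kubectl exec -it curl-test -- curl -k https://10.96.0.1:443/version
kubectl delete pod curl-test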
Started a container and checked
[root@c70427cb7f76 /]# ping 10.96.0.1
PING 10.96.0.1 (10.96.0.1) 56(84) bytes of data.
From 150.45.87.248 icmp_seq=1 Time to live exceeded
From 150.45.87.248 icmp_seq=2 Time to live exceeded
...
[root@c70427cb7f76 /]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
^]
Connection closed by foreign host.
[root@c70427cb7f76 /]# curl -v https://10.96.0.1:443/version
> CONNECT 10.96.0.1:443 HTTP/1.1
> Host: 10.96.0.1:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
< HTTP/1.1 503 Service Unavailable
< Cache-Control: no-cache
< Pragma: no-cache
< Content-Type: text/html; charset=utf-8
< Proxy-Connection: close
< Connection: close
< Content-Length: 1172
<
* Received HTTP code 503 from proxy after CONNECT
* Connection #0 to host 150.45.87.133 left intact
curl: (56) Received HTTP code 503 from proxy after CONNECT
Seems to use the company proxy.
Disabled that and tried again:
[root@c70427cb7f76 /]# curl -v https://10.96.0.1:443/version
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Seems to be some certificate issue
If networking inside the cluster works, then the Dashboard will also be able to connect to the API server.
Network-wise I think we are fine, because I see that the certificate for the API server is requested but not found, which may be normal in this test container.
But inside the dashboard one I still see
2017/10/06 09:43:37 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
I was able to get inside the container before it gets stopped
docker exec -it bbe6eb3bef79 sh
HOME='/root'
HOSTNAME='kubernetes-dashboard-4056215011-zj025'
IFS='
'
KUBERNETES_DASHBOARD_PORT='tcp://10.107.126.68:443'
KUBERNETES_DASHBOARD_PORT_443_TCP='tcp://10.107.126.68:443'
KUBERNETES_DASHBOARD_PORT_443_TCP_ADDR='10.107.126.68'
KUBERNETES_DASHBOARD_PORT_443_TCP_PORT='443'
KUBERNETES_DASHBOARD_PORT_443_TCP_PROTO='tcp'
KUBERNETES_DASHBOARD_SERVICE_HOST='10.107.126.68'
KUBERNETES_DASHBOARD_SERVICE_PORT='443'
KUBERNETES_PORT='tcp://10.96.0.1:443'
KUBERNETES_PORT_443_TCP='tcp://10.96.0.1:443'
KUBERNETES_PORT_443_TCP_ADDR='10.96.0.1'
KUBERNETES_PORT_443_TCP_PORT='443'
KUBERNETES_PORT_443_TCP_PROTO='tcp'
KUBERNETES_SERVICE_HOST='10.96.0.1'
KUBERNETES_SERVICE_PORT='443'
KUBERNETES_SERVICE_PORT_HTTPS='443'
KUBE_DNS_PORT='udp://10.96.0.10:53'
KUBE_DNS_PORT_53_TCP='tcp://10.96.0.10:53'
KUBE_DNS_PORT_53_TCP_ADDR='10.96.0.10'
KUBE_DNS_PORT_53_TCP_PORT='53'
KUBE_DNS_PORT_53_TCP_PROTO='tcp'
KUBE_DNS_PORT_53_UDP='udp://10.96.0.10:53'
KUBE_DNS_PORT_53_UDP_ADDR='10.96.0.10'
KUBE_DNS_PORT_53_UDP_PORT='53'
KUBE_DNS_PORT_53_UDP_PROTO='udp'
KUBE_DNS_SERVICE_HOST='10.96.0.10'
KUBE_DNS_SERVICE_PORT='53'
KUBE_DNS_SERVICE_PORT_DNS='53'
KUBE_DNS_SERVICE_PORT_DNS_TCP='53'
OPTIND='1'
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
PPID='0'
PS1='# '
PS2='> '
PS4='+ '
PWD='/'
TERM='xterm'
curl: try 'curl --help' or 'curl --manual' for more information
curl: try 'curl --help' or 'curl --manual' for more information
Seems to be blocked; it stays trying the connection until the timeout occurs:
Trying 10.96.0.1...
You cannot exec into the Dashboard container as it was built from a scratch image. There is no shell in it. How are you starting Dashboard?
I just followed this procedure:
_master lxdocapt13 (Oracle Linux v7.3)
worker lxdocapt14 (Oracle Linux v7.3)_
On master
**curl --proxy http://150.45.87.133:8080 --output /tmp/kubernetes-dashboard.yaml https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f /tmp/kubernetes-dashboard.yaml**
This deploys the dashboard.
On the worker I see the Docker container for the dashboard being started from time to time; I am not doing this manually.
_e156ea00ee9d gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:327cfef378e88ffbc327f98dd24adacf6c9363c042db78e922d050f2bdcf6f78 "/dashboard --inse..." 5 seconds ago Up Less than a second k8s_kubernetes-dashboard_kubernetes-dashboard-4056215011-zj025_kube-system_05511968-aa6c-11e7-b661-005056800b9d_82_
Then after some time
docker logs e156ea00ee9d
2017/10/06 12:36:05 Starting overwatch
2017/10/06 12:36:05 Using in-cluster config to connect to apiserver
2017/10/06 12:36:05 Using service account token for csrf signing
2017/10/06 12:36:05 No request provided. Skipping authorization
2017/10/06 12:36:35 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
Ignore my previous check inside the container, because I did it inside the wrong one :-)
There has to be some issue with networking then. Dashboard tries to access API server through kubernetes service but fails: Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout. From inside the container traffic is somehow blocked.
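One thing worth checking on the master (just the usual suspects, not a definitive diagnosis) is whether the kubernetes service and its endpoint look sane:
kubectl get svc kubernetes -o wide
kubectl get endpoints kubernetes
If the endpoint points at the real apiserver address, the block is usually in kube-proxy or the CNI plugin on the worker node.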
Is it normal that kubernetes-dashboard-certs has no files inside?
They are generated by the init container.
So, this can't be the issue then?
The issue is pretty obvious. The container logged the error Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout. If it was a certificates issue, then the error message would be different.
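If you want to double-check that from inside some other pod, a rough sketch using the service-account files that every pod gets mounted would be:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443/version
A certificate problem would show up there as an SSL error; an i/o timeout points at networking instead.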
OK, then we need to see why the connection cannot be established. Is there no way to get inside the dashboard container?
Nope, no way. You can only check it from inside some other container that runs in Kubernetes.
Tested with a container started via kubectl
kubectl run test-jetty --image=dockerdtrtest.toyota-europe.com/toyota/jdk-8-jetty-9.3-appserver:1.0
The container is running; then:
docker exec -it 630adb1e19ad sh
sh-4.2$ set
BASH=/usr/bin/sh
BASHOPTS=cmdhist:expand_aliases:extquote:force_fignore:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="4" [1]="2" [2]="46" [3]="2" [4]="release" [5]="x86_64-redhat-linux-gnu")
BASH_VERSION='4.2.46(2)-release'
COLUMNS=237
DIRSTACK=()
EUID=216
GROUPS=()
HISTFILE=/home/jetty/.bash_history
HISTFILESIZE=500
HISTSIZE=500
HOME=/home/jetty
HOSTNAME=test-jetty-2802385339-j4dwl
HOSTTYPE=x86_64
IFS='
'
JAVA_HOME=/opt/jdk8
JETTY_BASE=/opt/jetty-toyota
JETTY_GPG_KEYS='AED5EE6C45D0FE8D5D1B164F27DED4BF6216DB8F 2A684B57436A81FA8706B53C61C3351A438A3B7D 5989BAF76217B843D66BE55B2D0E1FB8FE4B68B4 B59B67FD7904984367F931800818D9D68FB67BAC BFBB21C246D7776836287A48A04E0C74ABB35FEA 8B096546B1A8F02656B15D3B1677D141BCF3584D FBA2B18D238AB852DF95745C76157BDF03D0DCD6 5C9579B3DB2E506429319AAEF33B071B29559E1E'
JETTY_HOME=/opt/jetty
JETTY_TGZ_URL=https://repo1.maven.org/maven2/org/eclipse/jetty/jetty-distribution/9.3.20.v20170531/jetty-distribution-9.3.20.v20170531.tar.gz
JETTY_VERSION=9.3.20.v20170531
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
LINES=71
MACHTYPE=x86_64-redhat-linux-gnu
MAILCHECK=60
OLDPWD=/opt/jetty-toyota
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/opt/jetty/bin:/opt/jdk8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PIPESTATUS=([0]="2")
POSIXLY_CORRECT=y
PPID=0
PS1='\s-\v\$ '
PS2='> '
PS4='+ '
PWD=/opt/jetty-toyota/logs
REFRESHED_AT=2017-09-04
SHELL=/bin/bash
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor:posix
SHLVL=1
TERM=xterm
TMPDIR=/tmp/jetty
UID=216
_=10.96.0.1
http_proxy=150.45.87.133:8080
https_proxy=150.45.87.133:8080
sh-4.2$ telnet 10.96.0.1 443
Trying 10.96.0.1...
Not able to connect
I have now deployed one of our business applications inside a pod, and this one is also not able to access some resources.
telnet aaa.bbb.ccc.ddd 3202 from the host machine works; it connects to our DB2 system.
Executing exactly the same from the pod (Docker container) is NOT working; it gets no reply, so it is probably blocked :-(
What may be the possible cause of that? It looks to me like some missing configuration. It looks to be exactly the same as what I have for the dashboard installation.
inspect of the container
docker inspect dd6e7226ba75
[
{
"Id": "dd6e7226ba757f310051645129bf2650bb537da219abba16113492143d3ba637",
"Created": "2017-10-09T12:55:53.512971109Z",
"Path": "/docker-entrypoint.bash",
"Args": [
"java",
"-jar",
"/opt/jetty/start.jar"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 15345,
"ExitCode": 0,
"Error": "",
"StartedAt": "2017-10-09T12:55:55.589943142Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:f937f9fd70a140d8c9a1d806a998a5444d9ac50ba659f00c4a5e16502ced8c95",
"ResolvConfPath": "/var/lib/docker/containers/8bdcbe05986ac06eff4c70720cd252706acd744b4c6e6e289007abf5d6104e07/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/8bdcbe05986ac06eff4c70720cd252706acd744b4c6e6e289007abf5d6104e07/hostname",
"HostsPath": "/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/etc-hosts",
"LogPath": "/var/lib/docker/containers/dd6e7226ba757f310051645129bf2650bb537da219abba16113492143d3ba637/dd6e7226ba757f310051645129bf2650bb537da219abba16113492143d3ba637-json.log",
"Name": "/k8s_npaqit-service_npaqit-service-2193241535-8f3x6_default_2e3e59dd-acf1-11e7-9138-005056800b9d_0",
"RestartCount": 0,
"Driver": "devicemapper",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/opt/apps-logs:/opt/apps-logs",
"/opt/data/npaa:/opt/data/npaa",
"/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/volumes/kubernetes.io~secret/admin-confidential:/opt/secrets:ro",
"/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/volumes/kubernetes.io~secret/default-token-5mzdh:/var/run/secrets/kubernetes.io/serviceaccount:ro",
"/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/etc-hosts:/etc/hosts",
"/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/containers/npaqit-service/8dec9878:/dev/termination-log"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "container:8bdcbe05986ac06eff4c70720cd252706acd744b4c6e6e289007abf5d6104e07",
"PortBindings": null,
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "container:8bdcbe05986ac06eff4c70720cd252706acd744b4c6e6e289007abf5d6104e07",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 1000,
"PidMode": "container:8bdcbe05986ac06eff4c70720cd252706acd744b4c6e6e289007abf5d6104e07",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined"
],
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 2,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "/kubepods/besteffort/pod2e3e59dd-acf1-11e7-9138-005056800b9d",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Name": "devicemapper",
"Data": {
"DeviceId": "8103",
"DeviceName": "docker-251:0-264590-5b4f9f315a9a70957ea2468813b7c97d249f44d86281593b6551e3b5b6ac13ba",
"DeviceSize": "10737418240"
}
},
"Mounts": [
{
"Type": "bind",
"Source": "/opt/data/npaa",
"Destination": "/opt/data/npaa",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/volumes/kubernetes.io~secret/admin-confidential",
"Destination": "/opt/secrets",
"Mode": "ro",
"RW": false,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/volumes/kubernetes.io~secret/default-token-5mzdh",
"Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
"Mode": "ro",
"RW": false,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/etc-hosts",
"Destination": "/etc/hosts",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/containers/npaqit-service/8dec9878",
"Destination": "/dev/termination-log",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/opt/apps-logs",
"Destination": "/opt/apps-logs",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "npaqit-service-2193241535-8f3x6",
"Domainname": "",
"User": "jetty",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"ENVIRONMENT=uat",
"SERVICE_NAME=npaqit",
"KUBERNETES_SERVICE_PORT=443",
"KUBERNETES_SERVICE_PORT_HTTPS=443",
"KUBERNETES_PORT=tcp://10.96.0.1:443",
"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443",
"KUBERNETES_PORT_443_TCP_PROTO=tcp",
"KUBERNETES_PORT_443_TCP_PORT=443",
"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
"KUBERNETES_SERVICE_HOST=10.96.0.1",
"PATH=/opt/jetty/bin:/opt/jdk8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"REFRESHED_AT=2017-09-04",
"http_proxy=150.45.87.133:8080",
"https_proxy=150.45.87.133:8080",
"JAVA_HOME=/opt/jdk8",
"JETTY_HOME=/opt/jetty",
"JETTY_VERSION=9.3.21.v20170918",
"JETTY_TGZ_URL=https://repo1.maven.org/maven2/org/eclipse/jetty/jetty-distribution/9.3.21.v20170918/jetty-distribution-9.3.21.v20170918.tar.gz",
"JETTY_GPG_KEYS=AED5EE6C45D0FE8D5D1B164F27DED4BF6216DB8F \t2A684B57436A81FA8706B53C61C3351A438A3B7D \t5989BAF76217B843D66BE55B2D0E1FB8FE4B68B4 \tB59B67FD7904984367F931800818D9D68FB67BAC \tBFBB21C246D7776836287A48A04E0C74ABB35FEA \t8B096546B1A8F02656B15D3B1677D141BCF3584D \tFBA2B18D238AB852DF95745C76157BDF03D0DCD6 \t5C9579B3DB2E506429319AAEF33B071B29559E1E",
"JETTY_BASE=/opt/jetty-toyota",
"TMPDIR=/tmp/jetty"
],
"Cmd": [
"java",
"-jar",
"/opt/jetty/start.jar"
],
"ArgsEscaped": true,
"Image": "dockerdtrtest.toyota-europe.com/toyota/npaqit@sha256:d216922feb0a625e282b50bc2a5582bb6f4336262c45eebb57142eae97d1794a",
"Volumes": null,
"WorkingDir": "/opt/jetty-toyota",
"Entrypoint": [
"/docker-entrypoint.bash"
],
"OnBuild": null,
"Labels": {
"annotation.io.kubernetes.container.hash": "3aae2497",
"annotation.io.kubernetes.container.restartCount": "0",
"annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
"annotation.io.kubernetes.container.terminationMessagePolicy": "File",
"annotation.io.kubernetes.pod.terminationGracePeriod": "30",
"io.kubernetes.container.logpath": "/var/log/pods/2e3e59dd-acf1-11e7-9138-005056800b9d/npaqit-service_0.log",
"io.kubernetes.container.name": "npaqit-service",
"io.kubernetes.docker.type": "container",
"io.kubernetes.pod.name": "npaqit-service-2193241535-8f3x6",
"io.kubernetes.pod.namespace": "default",
"io.kubernetes.pod.uid": "2e3e59dd-acf1-11e7-9138-005056800b9d",
"io.kubernetes.sandbox.id": "8bdcbe05986ac06eff4c70720cd252706acd744b4c6e6e289007abf5d6104e07"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": null,
"SandboxKey": "",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
}
]
kubectl describe pod npaqit-service-2193241535-8f3x6
Name: npaqit-service-2193241535-8f3x6
Namespace: default
Node: lxdocapt14/150.45.89.109
Start Time: Mon, 09 Oct 2017 14:55:52 +0200
Labels: io.kompose.service=npaqit-service
pod-template-hash=2193241535
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"npaqit-service-2193241535","uid":"2e3d351c-acf1-11e7-9138-005056...
Status: Running
IP: 10.244.3.30
Created By: ReplicaSet/npaqit-service-2193241535
Controlled By: ReplicaSet/npaqit-service-2193241535
Containers:
npaqit-service:
Container ID: docker://dd6e7226ba757f310051645129bf2650bb537da219abba16113492143d3ba637
Image: dockerdtrtest.toyota-europe.com/toyota/npaqit:2.5
Image ID: docker-pullable://dockerdtrtest.toyota-europe.com/toyota/npaqit@sha256:d216922feb0a625e282b50bc2a5582bb6f4336262c45eebb57142eae97d1794a
Port:
State: Running
Started: Mon, 09 Oct 2017 14:55:55 +0200
Ready: True
Restart Count: 0
Environment:
ENVIRONMENT: uat
SERVICE_NAME: npaqit
Mounts:
/opt/apps-logs from volume-appslogs (rw)
/opt/data/npaa from volume-npaa (rw)
/opt/secrets from admin-confidential (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5mzdh (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
volume-appslogs:
Type: HostPath (bare host directory volume)
Path: /opt/apps-logs
volume-npaa:
Type: HostPath (bare host directory volume)
Path: /opt/data/npaa
admin-confidential:
Type: Secret (a volume populated by a Secret)
SecretName: npaqit-admin-confidential
Optional: false
default-token-5mzdh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5mzdh
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
22m 22m 1 default-scheduler Normal Scheduled Successfully assigned npaqit-service-2193241535-8f3x6 to lxdocapt14
22m 22m 1 kubelet, lxdocapt14 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "volume-npaa"
22m 22m 1 kubelet, lxdocapt14 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "volume-appslogs"
22m 22m 1 kubelet, lxdocapt14 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "admin-confidential"
22m 22m 1 kubelet, lxdocapt14 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-5mzdh"
22m 22m 1 kubelet, lxdocapt14 spec.containers{npaqit-service} Normal Pulled Container image "dockerdtrtest.toyota-europe.com/toyota/npaqit:2.5" already present on machine
22m 22m 1 kubelet, lxdocapt14 spec.containers{npaqit-service} Normal Created Created container
22m 22m 1 kubelet, lxdocapt14 spec.containers{npaqit-service} Normal Started Started container
For Kubernetes-related issues, try to seek help on the core repository. This repo is for dashboard-related issues only. My guess is that it has something to do with the company proxy.
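One quick way to rule the proxy in or out from inside your test pod (the IPs below are the ones from your own environment dump; the .cluster.local entry is just an example) would be:
unset http_proxy https_proxy
curl -k https://10.96.0.1:443/version
# or keep the proxy but exclude in-cluster destinations:
export no_proxy=10.96.0.1,10.96.0.10,.cluster.local
curl -k https://10.96.0.1:443/version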
I just passed the information along because it looks to be exactly the same issue as I currently have with the dashboard. I will open a ticket in the core repository for this too.
I don't see why it would be related to the company proxy; telnet xxx.xxx.xxx.xxx 3202 is not using the proxy at that time.
Could you link the ticket in the core repository here? Would be really helpful. Thanks.
I am also facing the issue in a v1.10.0 cluster. I was able to use the dashboard in the previous version, v1.5. I am using Flannel. I tested with Heapster, which connects to the API server from its pod, and there is no issue there. It is happening only with the dashboard.
Is there any resolution for this problem?
We had the same issue, and after many hours we realized that the networking and the dnsPolicy had to be updated to make it work. It never worked for us in BRIDGE mode but did work in host mode.
The change was to add this to the spec of the kubernetes-dashboard Deployment config:
hostNetwork: true
dnsPolicy: Default
Hope this helps. If you are able to make it work in the BRIDGE mode, we would like to know how.
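For an already deployed dashboard, the same change can also be applied with a patch (the deployment name and namespace below are the ones used by the recommended manifest; adjust if yours differ):
kubectl -n kube-system patch deployment kubernetes-dashboard -p '{"spec":{"template":{"spec":{"hostNetwork":true,"dnsPolicy":"Default"}}}}'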
When I changed the CNI plugin to Calico, it started working fine. I was facing the issue whenever I used Flannel.
I followed the kubernetes.io/docs guide on configuring Kubernetes on a CentOS master node. Everything went OK, but when I went for the UI I had nothing and couldn't install it as described in the kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ guide, so I created and installed it with
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
which installed fine, but now when I go to
https://
I get
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/healthz",
"/healthz/ping",
"/logs/",
"/metrics",
"/resetMetrics",
"/swaggerapi/",
"/version"
]
}
I don't know how to fix it. Any suggestions?
We had the same issue, and after many hours we realized that the networking and the dnsPolicy had to be updated to make it work. It never worked for us in BRIDGE mode but did work in host mode.
The change was to add this to the spec of the kubernetes-dashboard Deployment config:
hostNetwork: true
dnsPolicy: Default
Hope this helps. If you are able to make it work in the BRIDGE mode, we would like to know how.
This helped me. Running Kubernetes 1.16, with Flannel. Was struggling for some time getting the dashboard to run. Using Kubernetes Dashboard v2.0.0-beta5.
We had the same issue, and after many hours we realized that the networking and the dnsPolicy had to be updated to make it work. It never worked for us in BRIDGE mode but did work in host mode.
The change was to add this to the spec of the kubernetes-dashboard Deployment config:
hostNetwork: true
dnsPolicy: Default
Hope this helps. If you are able to make it work in the BRIDGE mode, we would like to know how.
Works for me; I had to update my CNI plugin. The dashboard pod was looking for the API server and routing to 10.96.0.1, but my API server is at 192.168.8.14 (hostIP: 192.168.8.14, podIP: 192.168.8.14), so it got an i/o timeout.
It is totally a DNS problem.