Kubeadm: Forbidden error when retrieving logs from non-master node's pods

Created on 28 Mar 2017 · 25 comments · Source: kubernetes/kubeadm

What keywords did you search in kubeadm issues before filing this one?

kubectl logs
logs forbidden curl insecure

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):

kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.5", GitCommit:"894ff23729bbc0055907dd3a496afb725396adda", GitTreeState:"clean", BuildDate:"2017-03-23T16:14:24Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: Vsphere
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): Linux 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

Kubernetes cluster consists of a single master node and minion node, joined together by kubeadm.

What happened?

From a remote machine (that is not the master or the minion), when doing a kubectl logs on any pod that lives on the minion node, the following error occurs:

Error from server: Get https://<minion_ip>:10250/containerLogs/default/critics-1347287238-wdssk/critics: Forbidden

When doing a kubectl logs on any of the pods that live on the master node, no error occurs and the logs can be retrieved as expected.

When curling the URL returned in the error above with --insecure, I am able to pull the logs from the affected node.
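For reference, the check looked roughly like this; --insecure only skips verification of the kubelet's serving certificate:

$ curl --insecure https://<minion_ip>:10250/containerLogs/default/critics-1347287238-wdssk/critics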

What you expected to happen?

I should be able to retrieve the logs of a pod running on a non-master node.

Anything else we need to know?

Labels: documentation/content-gap, priority/backlog

Most helpful comment

I found the reason.
It's the no_proxy environment variable that must be set to include all node IPs;
otherwise the request goes through the proxy, and it's the proxy that answers Forbidden.

All 25 comments

I suspect the minion is not being given serving certs that the master apiserver trusts, and is simply generating its own.

Same issue here.
Cluster installation was done with kubeadm.

$ kubeadm version

kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

List of running pods:

ubuntu@master-01:~$ kubectl get pods -n kube-system -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
etcd-master-01                      1/1       Running   4          6d        10.100.0.98   master-01
kube-apiserver-master-01            1/1       Running   4          6d        10.100.0.98   master-01
kube-controller-manager-master-01   1/1       Running   5          6d        10.100.0.98   master-01
kube-dns-3913472980-tx9gk           3/3       Running   9          6d        10.44.0.1     master-01
kube-proxy-5lfr4                    1/1       Running   3          6d        10.100.0.91   node-06
kube-proxy-7gk91                    1/1       Running   3          6d        10.100.0.94   node-03
kube-proxy-7kkd3                    1/1       Running   3          6d        10.100.0.93   node-01
kube-proxy-994v3                    1/1       Running   3          6d        10.100.0.95   node-05
kube-proxy-bbmkp                    1/1       Running   3          6d        10.100.0.97   node-02
kube-proxy-g593h                    1/1       Running   4          6d        10.100.0.98   master-01
kube-proxy-lft8f                    1/1       Running   3          6d        10.100.0.96   node-04
kube-scheduler-master-01            1/1       Running   4          6d        10.100.0.98   master-01
weave-net-1948p                     2/2       Running   9          6d        10.100.0.91   node-06
weave-net-2632r                     2/2       Running   9          6d        10.100.0.93   node-01
weave-net-394xl                     2/2       Running   9          6d        10.100.0.94   node-03
weave-net-ffl0r                     2/2       Running   9          6d        10.100.0.96   node-04
weave-net-j1d9d                     2/2       Running   9          6d        10.100.0.95   node-05
weave-net-lcf3c                     2/2       Running   11         6d        10.100.0.97   node-02
weave-net-pmss7                     2/2       Running   13         6d        10.100.0.98   master-01

Logs from a pod running on the master:

ubuntu@master-01:~$ kubectl -n kube-system logs kube-proxy-g593h
I0510 07:40:35.618186       1 server.go:225] Using iptables Proxier.
W0510 07:40:35.695824       1 server.go:469] Failed to retrieve node info: User "system:serviceaccount:kube-system:kube-proxy" cannot get nodes at the cluster scope. (get nodes master-01)
W0510 07:40:35.696192       1 proxier.go:293] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W0510 07:40:35.696260       1 proxier.go:298] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0510 07:40:35.696312       1 server.go:249] Tearing down userspace rules.
E0510 07:40:35.731250       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:49: Failed to list *api.Endpoints: User "system:serviceaccount:kube-system:kube-proxy" cannot list endpoints at the cluster scope. (get endpoints)
E0510 07:40:35.731407       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:46: Failed to list *api.Service: User "system:serviceaccount:kube-system:kube-proxy" cannot list services at the cluster scope. (get services)
E0510 07:40:36.749827       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:49: Failed to list *api.Endpoints: User "system:serviceaccount:kube-system:kube-proxy" cannot list endpoints at the cluster scope. (get endpoints)
E0510 07:40:36.751095       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:46: Failed to list *api.Service: User "system:serviceaccount:kube-system:kube-proxy" cannot list services at the cluster scope. (get services)
I0510 07:40:37.829246       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0510 07:40:37.829987       1 conntrack.go:66] Setting conntrack hashsize to 32768
I0510 07:40:37.830401       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0510 07:40:37.830423       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600

Trying to get logs for a pod running on a worker node:

ubuntu@master-01:~$ kubectl logs -n kube-system kube-proxy-lft8f
Error from server: Get https://10.100.0.96:10250/containerLogs/kube-system/kube-proxy-lft8f/kube-proxy: Forbidden

I found the reason.
It's the no_proxy environment variable that must be set to include all node IPs;
otherwise the request goes through the proxy, and it's the proxy that answers Forbidden.

@gousse Could you document that on the kubeadm reference page, please?

I'm hitting this at the moment; a workaround would be great!

I spent a while trying to use no_proxy both with * and with the IP addresses of all the nodes, but it still did not resolve the problem. Any specific guidance would be really useful.

@gousse So setting export NO_PROXY=$no_proxy,<node1-ip>,<node2-ip>,... solved the issue for you?

@jamiehannaford @tomdee
Yes, in my case no_proxy had to be set before the k8s cluster was set up.
That solved the Forbidden error.
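For illustration, a minimal sketch of that setup, using the node IPs from the pod listing earlier in this thread; the variables must be present in the environment that kubeadm init (and the control-plane components it launches) inherit:

# on every cluster machine, before running kubeadm init
$ export no_proxy=localhost,127.0.0.1,10.100.0.91,10.100.0.93,10.100.0.94,10.100.0.95,10.100.0.96,10.100.0.97,10.100.0.98
$ export NO_PROXY=$no_proxy
$ sudo -E kubeadm init   # -E keeps the proxy variables across sudo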

Which components are involved in the kubectl logs command? Does only the master node need to have the worker nodes in its no_proxy? Does "master node" mean the apiserver or some other controller?

kubectl > apiserver > node hosting the pod
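For reference, kubectl logs is roughly equivalent to this raw request against the apiserver, which the apiserver then forwards to the kubelet on port 10250 of the node hosting the pod (pod name taken from the session above):

$ kubectl get --raw /api/v1/namespaces/kube-system/pods/kube-proxy-lft8f/log

So the proxy environment matters on the machine running kube-apiserver, not only on the machine running kubectl.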

thx

Is there a way to make sure kubectl logs uses DNS names instead of IPs? Autoscaling and IPs don't work well together.

Nodes report their network addresses in their Node API object status.

The apiserver contacts nodes using the preferred address type as determined by the --kubelet-preferred-address-types flag:

List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])

Not all kubelet cloud providers report DNS addresses currently.
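On a kubeadm cluster, a rough sketch of changing that preference to favor DNS names (the flag lives in the apiserver's static pod manifest; the exact order below is illustrative):

$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# add or edit this flag on the kube-apiserver command line; the kubelet
# restarts the apiserver pod automatically when the manifest changes:
#   --kubelet-preferred-address-types=InternalDNS,ExternalDNS,Hostname,InternalIP,ExternalIP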

\o/ Awesome. Saved my day thx!
Worked great in AWS.

@jamiehannaford you're working on the troubleshooting doc. Could you add this to the list?
(If kubectl logs doesn't work, check the proxy settings; see the sketch below.)
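A quick sketch of how to check that, on the machine running kube-apiserver; the second command inspects the environment the apiserver process actually inherited:

$ env | grep -i _proxy
$ sudo cat /proc/$(pidof kube-apiserver)/environ | tr '\0' '\n' | grep -i proxy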

@yanhongwang I'm hitting the same proxy issue. My cluster runs well so far, but I can't retrieve logs. The no_proxy IPs are set. Do I really need to recreate my cluster, or is there any other way to get this running?

Hi @Snipes999

My environment:
Ubuntu: 16.04 LTS
Kubernetes: 1.7.8-00
Deployment: Ansible

http_proxy and https_proxy were set to some default values in my network environment.

So I added the master IP and the minion IPs to the no_proxy environment variable on every Kubernetes cluster machine. That way all the machines can talk to each other without passing through the proxy server, in case the proxy server blocks some Kubernetes ports.

Because I don't know exactly what "kubeadm init" does to the system, I destroyed the machines and set no_proxy before running "kubeadm init".

I use Ansible to deploy the machines automatically, so it is not difficult in my case.

Otherwise, you can probably do "kubeadm reset" and then try again (see the sketch after this comment).

Hope this can help.

Hong
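For anyone taking the reset route Hong describes, the sequence would look roughly like this (a sketch; kubeadm reset wipes the cluster state on that machine, and the placeholder IPs must be filled in):

$ sudo kubeadm reset
$ echo 'no_proxy=localhost,127.0.0.1,<master-ip>,<node-ips>' | sudo tee -a /etc/environment
$ # log out and back in (or re-export the variables), then:
$ sudo -E kubeadm init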

I'm using Ubuntu 17.04 and Kubernetes 1.8.1.
It seems to work now. I tried a couple of things, but I think the resolution was to change the no_proxy settings in the yaml files (/etc/kubernetes/manifests/...yaml) to match the current environment settings in /etc/environment, as sketched below.
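In other words, a rough sketch of that fix; the kubelet recreates the control-plane pods when the manifests change:

$ grep -i proxy /etc/environment
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # add matching http_proxy/https_proxy/no_proxy env entries to the pod spec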

@Snipes999 I'll close this issue as solved then. Thank you!

@luxas I don't think this is solved. Unless I'm not understanding this correctly...

  1. A user creates a kubernetes cluster with kubeadm
  2. At some point they try to use kubectl logs ...
  3. They find it doesn't work and, if they are lucky, they find this issue or some troubleshooting doc with advice that they need to manually edit some files and then destroy and recreate their cluster!

Shouldn't this be actually fixed so that kubectl logs just works?

That can only happen under certain conditions when you're behind proxies.
The umbrella issue for making detection of front proxies better in k8s/kubeadm is https://github.com/kubernetes/kubeadm/issues/324, and @kad owns that area. I think it's getting better all the time.

@tomdee I'm constantly hitting issues where something doesn't work when a person is in an isolated network behind proxies, and I'm trying to fix as much as I can. We have several patches that are already merged into 1.9, and some even backported to 1.8.x, to make this better. Some PRs are still under review, but hopefully they will soon be merged into 1.9. If you hit something, please feel free to open an issue and assign it to me or CC me.

@luxas @kad Thanks for the replies. I think I'm always hitting this with kubeadm running under Vagrant. I don't think a proxy is being used, so maybe I'm hitting a different issue?

@tomdee Please open a support issue with details about your environment (Vagrantfile, network connectivity, distro, Vagrant plugins installed, etc.), and we will see what might be the issue.
