Kind: failed to lock config file: open /var/run/kubernetes/admin.kubeconfig.lock

Created on 15 Jan 2021 · 3 Comments · Source: kubernetes-sigs/kind

What happened:

uname -a
Linux ubuntu-20-64 5.4.0-60-generic #67-Ubuntu SMP Tue Jan 5 18:31:36 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

ERROR: failed to create cluster: failed to lock config file: open /var/run/kubernetes/admin.kubeconfig.lock: permission denied

What you expected to happen:
kind create cluster succeeds.

How to reproduce it (as minimally and precisely as possible):

curl -Lo ./kind "https://kind.sigs.k8s.io/dl/v0.9.0/kind-$(uname)-amd64"
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
kind create cluster

kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾

ERROR: failed to create cluster: failed to lock config file: open /var/run/kubernetes/admin.kubeconfig.lock: permission denied

Anything else we need to know?:
sudo chown -Rv $USER:docker /var/run/kubernetes fixed the issue.

Is this a known issue? Should something be added to the docs meanwhile?
Haven't seen this issue on CentOS 7, Fedora 32, or macOS.

Environment:

  • kind version:
    kind v0.9.0 go1.15.2 linux/amd64

  • Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version:
Client: Docker Engine - Community
 Version:           20.10.2
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        2291f61
 Built:             Mon Dec 28 16:17:43 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.2
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8891c58
  Built:            Mon Dec 28 16:15:19 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  • OS (e.g. from /etc/os-release):
cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Labels: kind/support

All 3 comments

This is most likely because your KUBECONFIG is set to point to that file, but your current user does not have write permission on that location. It is not a bug; it is your environment. The same issue will happen with any kubectl command that modifies the kubeconfig, e.g. setting the current context.

The KUBECONFIG locking behavior comes from the official Kubernetes client; it is needed to prevent races when modifying the file to add the new cluster.
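
A quick way to confirm this is the cause is to check what KUBECONFIG points to and who owns that location (illustrative commands; the path is the one from the error message):

echo "$KUBECONFIG"
ls -ld /var/run/kubernetes
ls -l /var/run/kubernetes/admin.kubeconfig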

You should do one of the following (example commands are sketched after the list):

  • run as a user that owns the KUBECONFIG location if set
  • set a different KUBECONFIG location when running kind
  • leave KUBECONFIG unset and use the Kubernetes default location, $HOME/.kube/config
  • pass the --kubeconfig flag to tell kind where you'd like it for this specific invocation
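
For example (illustrative only; the exact paths are placeholders, not from the original report):

# point KUBECONFIG at a location your user owns
export KUBECONFIG="${HOME}/.kube/config"
kind create cluster

# or override the location for this one invocation only
kind create cluster --kubeconfig="${HOME}/.kube/config"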

If I had to guess more specifically:
/var/run/kubernetes/admin.kubeconfig is typically where you'd find a kubeconfig from local-up-cluster.sh, and is not a normal location for a kubeconfig.

An additional option: if you still want the /var/run/kubernetes/admin.kubeconfig cluster details available to read-only tools while creating and deleting kind clusters, you can do:

export KUBECONFIG="${HOME}/.kube/config:/var/run/kubernetes/admin.kubeconfig"
kind create cluster --kubeconfig="${HOME}/.kube/config"
# do stuff
# now when you delete remember to just pass it again
kind delete cluster --kubeconfig="${HOME}/.kube/config"

This is a highly unusual need, though. The reason you would need to do it this way is that kubeconfig clients that may write are expected to lock all files in the list before reading and then writing (again, to prevent read => write races). Since most commands only read, you can set KUBECONFIG to point to both files and explicitly tell the commands that perform writes which kubeconfig to use.
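
For example, with KUBECONFIG merged as above, read-only commands can see both clusters without needing a write lock on the root-owned file (the kind-kind context name assumes the default cluster name):

kubectl config get-contexts            # read-only: merges both files in KUBECONFIG
kubectl get nodes --context kind-kind  # read-only query against the kind cluster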

It might be worth a "known issues" entry that explains this error, but it's not a bug; it's intended behavior specified upstream, and we haven't seen anyone else bring up a similar issue yet. Usually KUBECONFIG is just in the default location, and if not, it's normally somewhere writable by $USER; otherwise you run into issues with other tools.

Yep, this happened after a couple of local-up-cluster.sh runs.
Thanks for the thorough explanation - makes sense.

Feel free to close the "issue".
