kubeadm version (use kubeadm version): 1.15.0
Environment:
kubectl version: 1.15.0
uname -a: 4.15.0-51-generic

I've created a single node, deleted it, and recreated it on a fresh Ubuntu machine. When I try to recreate the cluster I get this error:
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
I would expect to get no errors when I recreate the cluster after kubeadm reset.
I installed kubeadm on a single node and did the following:
# created a single node
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
curl https://docs.projectcalico.org/v3.7/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
# reset a single node
sudo kubeadm reset
rm -fr .kube/
# recreated a single node
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
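For reference, a quick way to confirm the leftover state that trips the preflight check (a minimal sketch; /var/lib/etcd is kubeadm's default etcd data directory):
# after kubeadm reset this directory should be empty, but data from the old cluster is still there
sudo ls -la /var/lib/etcd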
You can ask me if you need more details.
Thanks for reporting it @staticdev
It is a bug in kubeadm reset.
It has been addressed by PRs kubernetes/kubernetes#79331 and kubernetes/kubernetes#79326.
/assign
@SataQiu what should I do for now? Skip preflight checks? Remove this etcd folder (/var/lib/etcd)?
It will be fixed in 1.15.1: https://github.com/kubernetes/sig-release/blob/3a3c9f92ef484656f0cb4867f32491777d629952/releases/patch-releases.md#115
@staticdev
please re-open if 1.15.1 does not fix this for you (once it's out).
you have to delete /var/lib/etcd
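A minimal sketch of that workaround on a single-node setup (paths and init flags are the ones used earlier in this thread):
sudo kubeadm reset
# kubeadm reset left the etcd data dir behind, so remove it by hand
sudo rm -rf /var/lib/etcd
# then re-run init as before
sudo kubeadm init --pod-network-cidr=192.168.0.0/16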
@sabour Deleting it is the workaround for now. I will try 1.15.1 ASAP, @neolit123.
@SataQiu @neolit123 the problem does not occur in 1.15.1 =)
I get the same problem in 1.15.3
could you please share your logs and kubeadm version output?
I too get the same error
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
kubeadm reset should be cleaning this directory for you.
if it cannot clean it, try removing it manually.
I hit the same problem in v1.17.3
after rm -rf /var/lib/etcd, it works fine
@neolit123 I think this issue should be reopened; it was never corrected. People still have to delete /var/lib/etcd manually =(
@staticdev you did mention in your comment here that the bug on the kubeadm side was resolved in 1.15.1:
https://github.com/kubernetes/kubeadm/issues/1642#issuecomment-513492385
i cannot reproduce the issue here with 1.17*
what conditions prevent kubeadm reset from deleting the directory?
kubeadm reset is a best effort command and if for example the folder is protected or locked, kubeadm will not succeed.
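If it gets stuck like that, a few generic checks can show what is holding the directory (a sketch; these are common causes, not specific to this report):
# anything still mounted on or under the data dir?
mount | grep /var/lib/etcd
# an etcd container still running and holding files open?
sudo crictl ps 2>/dev/null | grep etcd
sudo lsof +D /var/lib/etcd
# directory marked immutable?
sudo lsattr -d /var/lib/etcd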
@neolit123 you are right. @inceabdullah what did you do exactly?
I'm getting this same error with 1.19.0 on Ubuntu 18. You say the workaround is to just delete /var/lib/etcd, but there are subdirectories in there.
This needs to be fixed.
if you are creating a new cluster and the directory exists and has contents, kubeadm will fail, which is by design.
you can skip that check with --ignore-preflight-errors=DirAvailable--var-lib-etcd but no guarantees etcd will work if you do that!
better to delete the directory contents if kubeadm reset for some reason could not do that.
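For reference, with the init command from earlier in the thread the skip would look like this (a sketch; only use it if you really cannot clean the directory):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=DirAvailable--var-lib-etcd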
It must be harmless to delete this directory, then. I hope so since I've already deleted it.