Kubespray: Backup etcd

Created on 13 Jan 2017 · 7 comments · Source: kubernetes-sigs/kubespray

BUG REPORT:
How do you back up etcd (on the master)?

I tried:

ssh -i ~/.ssh/id_rsa_k core@master1
sudo docker exec -it etcd1 sh
mkdir /var/lib/etcd/backup
sudo etcdctl backup --data-dir=/var/lib/etcd --backup-dir=var/lib/etcd/backup

If I run the backup a second time, I get:

2017-01-13 10:29:08.105348 I | failed creating backup snapshot dir var/lib/etcd/backup/member/snap: expected "var/lib/etcd/backup/member/snap" to be empty, got ["0000000000000011-00000000001e5e3b.snap"]

I can see that the folder /var/lib/etcd/member/snap on the master host already contains some data. Can I just save this folder somewhere?

Etcd is running on master:

  • in docker: etcd --version 3.0.6
  • on host: etcd --version 0.4.9 and etcd2 --version 2.3.7

I was thinking of running a backup, then encrypting it and exporting it to another server or to S3.
Any good recommendations on how to do that?

Thank you for your help and this great tool!
Greg.
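For what it's worth, the backup-encrypt-export idea above can be sketched roughly as follows. This is only a sketch under assumptions: the endpoint, passphrase, and S3 bucket name are placeholders, and each tool (etcdctl, gpg, the aws CLI) is invoked only if it is actually installed:

```shell
# Hedged sketch of "backup, encrypt, export to another server or S3".
# Endpoint, passphrase and bucket below are placeholders, not real values.
SNAPSHOT=/tmp/etcd-snapshot-$(date +%Y%m%d%H%M%S).db

# 1. Take a v3 snapshot if etcdctl is available on this host.
if command -v etcdctl >/dev/null 2>&1; then
  ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 snapshot save "$SNAPSHOT" || true
fi
# Fall back to an empty placeholder file when no live cluster is reachable,
# so the rest of the pipeline can still be exercised.
[ -f "$SNAPSHOT" ] || : > "$SNAPSHOT"

# 2. Encrypt the snapshot symmetrically with gpg (AES-256).
if command -v gpg >/dev/null 2>&1; then
  gpg --batch --yes --pinentry-mode loopback --passphrase "changeme" \
      --symmetric --cipher-algo AES256 --output "$SNAPSHOT.gpg" "$SNAPSHOT"
fi

# 3. Ship the encrypted copy off-host, e.g. to a (hypothetical) S3 bucket.
if command -v aws >/dev/null 2>&1; then
  aws s3 cp "$SNAPSHOT.gpg" s3://my-etcd-backups/
fi
```

Running this from cron on one master would give periodic, encrypted, off-host snapshots.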

feature

All 7 comments

@gregbkr @bogdando are there not any workarounds for this? Are there plans to address this?

@gregbkr

I have a similar setup, and the way I do the backup is to spawn a short-lived container that just creates the snapshot on the host:

docker run --rm --net=host -v /tmp:/etcd_backup -e ETCDCTL_API=3 quay.io/coreos/etcd:v3.0.17 etcdctl --endpoints=1.1.1.1:2379,2.2.2.2:2379,3.3.3.3:2379 snapshot save etcd_backup/snapshot.db

You will find your snapshot.db under /tmp
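One extra step that may be worth adding (my suggestion, not part of the original workaround): etcdctl's v3 API has a `snapshot status` subcommand that reports the hash, revision, and total key count of a snapshot file, so you can sanity-check the backup before shipping it anywhere. A sketch using the same image and mount as above, skipped when docker or the file is absent:

```shell
# Sanity-check the snapshot produced by the command above.
# /tmp/snapshot.db matches the save path used earlier; the check is skipped
# when docker or the snapshot file itself is missing on this machine.
SNAPSHOT_FILE=/tmp/snapshot.db
if command -v docker >/dev/null 2>&1 && [ -f "$SNAPSHOT_FILE" ]; then
  docker run --rm -v /tmp:/etcd_backup -e ETCDCTL_API=3 \
    quay.io/coreos/etcd:v3.0.17 \
    etcdctl snapshot status etcd_backup/snapshot.db
fi
```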

@alexiacobws thanks for sharing, gonna have to use that :)


@alexiacobws do you also have a working etcd restore procedure ?

This restore method also interests me.

I tried to restore with the following method, but unfortunately did not succeed.

Restore:
docker run --rm --net=host -v /tmp/etcd_bak:/etcd_backup -e ETCDCTL_API=3 registry:5000/quay.io/coreos/etcd:v3.1.5 etcdctl snapshot restore etcd_backup/snapshot.db --name etcd0 --initial-cluster etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380 --initial-cluster-token etcd-cluster-1 --initial-advertise-peer-urls http://etcd0:2380

Result:
2018-06-04 09:25:52.314747 I | etcdserver/membership: added member 7ff5c9c6942f82e [http://etcd0:2380] to cluster 5d1b637f4b7740d5
2018-06-04 09:25:52.314940 I | etcdserver/membership: added member 91b417e7701c2eeb [http://etcd2:2380] to cluster 5d1b637f4b7740d5
2018-06-04 09:25:52.315096 I | etcdserver/membership: added member faeb78734ee4a93d [http://etcd1:2380] to cluster 5d1b637f4b7740d5

The snapshot file ends up in the appropriate folder inside the etcd docker container, but I cannot see the old keys.
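A likely explanation (my reading of etcd's restore semantics, not confirmed in the thread): `etcdctl snapshot restore` never loads data into a running cluster. It only writes a brand-new data directory to disk, and each member then has to be restarted with its data dir pointing at that directory before the old keys become visible. A hedged sketch with illustrative paths and the member names from the command above:

```shell
# `etcdctl snapshot restore` only materializes a fresh data directory on disk;
# the restored keys become visible only after etcd is restarted on it.
# Paths, member names and URLs mirror the command above and are illustrative.
RESTORE_DATA_DIR=/tmp/etcd-restored

if command -v etcdctl >/dev/null 2>&1; then
  ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd_bak/snapshot.db \
    --name etcd0 \
    --initial-cluster etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380 \
    --initial-cluster-token etcd-cluster-1 \
    --initial-advertise-peer-urls http://etcd0:2380 \
    --data-dir "$RESTORE_DATA_DIR" || true
fi

# Then restart the member against the restored directory, e.g.:
#   etcd --name etcd0 --data-dir /tmp/etcd-restored ...
```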

To avoid the failure, clear the destination backup directory first, or create a new empty directory, e.g.:

[root@SCSP01539 data]# ls
docker  etcd-backups  log  test 
[root@SCSP01539 data]# rm -rf test/
[root@SCSP01539 data]# ls
docker  etcd-backups  log  zhiyun
[root@SCSP01539 data]# mkdir test
[root@SCSP01539 data]# etcdctl backup --data-dir=/var/etcd/data --backup-dir /data/test 
2019-01-21 19:40:47.872672 I | wal: segmented wal file /data/test/member/wal/0000000000000001-000000009893d067.wal is created
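The prep step above can be scripted so the "expected ... to be empty" failure from earlier in the thread cannot recur. A minimal sketch with example paths; the backup command itself runs only where etcdctl is installed:

```shell
# Recreate the destination directory empty before every v2-style backup,
# so `etcdctl backup` never finds leftover snapshots in it. Paths are examples.
BACKUP_DIR=/tmp/etcd-v2-backup

rm -rf "$BACKUP_DIR"     # discard any previous backup contents
mkdir -p "$BACKUP_DIR"   # recreate it empty

# The v2-era backup command itself (runs only where etcdctl is installed):
if command -v etcdctl >/dev/null 2>&1; then
  etcdctl backup --data-dir=/var/etcd/data --backup-dir="$BACKUP_DIR" || true
fi
```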

