etcdctl snapshot with multiple endpoints for automated backup scripts

Created on 14 Mar 2020 · 2 comments · Source: etcd-io/etcd

The snapshot behavior changed in v3.3: etcdctl snapshot save now accepts only a single endpoint. This breaks automated backup scripts, because if the script targets a single node, that node might be down at that moment and the backup fails. If the command instead accepted the full list of cluster endpoints, with etcdctl finding the leader itself and taking the snapshot from it, the command would be usable in an automated script with no prior knowledge of endpoint availability.

stale

Most helpful comment

I have the same issue; it's really frustrating that you have to lock onto just one of the endpoints! The "d" in etcd stands for "distributed", yet with this new bug-as-a-feature we have to check each node ourselves to see whether the snapshot succeeded. Checking the status of the nodes should be a separate task, not part of the snapshot command.

Before this change, the snapshot script could look like this:

export ETCDCTL_API=3
export ETCDCTL_KEY=key
export ETCDCTL_CERT=cert
export ENDPOINTS=$(echo 192.168.10.13{1..3}:2379 | tr ' ' ',')
export CMD="etcdctl --insecure-skip-tls-verify --endpoints=${ENDPOINTS}"
export ext=$(date +%F_%H-%M-%S)
export dir="/opt/nfs/etcd/snapshots"

$CMD member list
$CMD endpoint health
$CMD snapshot save "$dir/$ext"

Now, without multiple endpoints in the snapshot command, the script has to look like this:

export ETCDCTL_API=3
export ETCDCTL_KEY=key
export ETCDCTL_CERT=cert

export ext=$(date +%F_%H-%M-%S)
export dir="/opt/backups/etcd"

for endpoint in 192.168.10.13{1..3}:2379
do
  etcdctl --insecure-skip-tls-verify --endpoints="$endpoint" snapshot save "$dir/$ext" &&
    echo "snapshot was taken successfully from $endpoint" &&
    break ||
    echo "taking snapshot from $endpoint failed" >&2
done
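The fallback loop can also be hardened by verifying each saved file before declaring success. A minimal sketch under stated assumptions: snapshot_any is a hypothetical helper (not part of etcdctl), the endpoints and paths are placeholders, and the etcdctl binary is taken from an overridable ETCDCTL variable so the retry logic can be exercised without a live cluster:

```shell
#!/bin/sh
# Sketch: try each candidate endpoint in turn; after saving, run
# "etcdctl snapshot status" to confirm the file is a readable snapshot
# before declaring success. snapshot_any is a hypothetical helper and the
# ETCDCTL override exists only so the loop can be tested without a cluster.
ETCDCTL="${ETCDCTL:-etcdctl}"

snapshot_any() {
    dest="$1"; shift    # $1 = destination file; remaining args = endpoints
    for endpoint in "$@"; do
        if "$ETCDCTL" --insecure-skip-tls-verify --endpoints="$endpoint" \
                snapshot save "$dest" &&
           "$ETCDCTL" snapshot status "$dest" >/dev/null 2>&1; then
            echo "snapshot was taken successfully from $endpoint"
            return 0
        fi
        echo "taking snapshot from $endpoint failed" >&2
    done
    return 1
}
```

Usage would then be, for example, snapshot_any "/opt/backups/etcd/$(date +%F_%H-%M-%S)" 192.168.10.131:2379 192.168.10.132:2379 192.168.10.133:2379 — the function returns non-zero only if every endpoint fails, which is the single exit status a cron job needs to check.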

I see no reason to remove multiple-endpoint support from the snapshot command. Or is there another way to take snapshots with multiple endpoints?

All 2 comments


This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.
