What steps did you take and what happened:
Velero's backup log reports "No volume named stolon-data found in pod flowable/fsdb-flowable-stolon-keeper-0, skipping", even though the stateful set is annotated and the volume is present.
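One quick way to confirm what the annotation currently says (a hypothetical one-off check, not output from the original report) is to dump the pod's annotations:

```bash
# Print all annotations on the pod; Velero's restic integration reads its list
# of volumes to back up from the backup.velero.io/backup-volumes annotation.
kubectl -n flowable get pod fsdb-flowable-stolon-keeper-0 -o jsonpath='{.metadata.annotations}'
```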
[root@atl-k8-maintenance stage]# kubectl -n flowable exec -it fsdb-flowable-stolon-keeper-0 -- /bin/bash -c "df -h"
Filesystem                                     Size  Used  Avail  Use%  Mounted on
overlay                                         44G   24G    21G   54%  /
tmpfs                                           32G     0    32G    0%  /dev
tmpfs                                           32G     0    32G    0%  /sys/fs/cgroup
/dev/mapper/3624a93706cc4276aa9b041390036a9a6  4.0T  2.4T   1.7T   59%  /stolon-data
/dev/mapper/centos-root                         44G   24G    21G   54%  /etc/hosts
shm                                             64M  100K    64M    1%  /dev/shm
tmpfs                                           32G  8.0K    32G    1%  /etc/secrets/stolon
tmpfs                                           32G   12K    32G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs                                           32G     0    32G    0%  /proc/acpi
tmpfs                                           32G     0    32G    0%  /proc/scsi
tmpfs                                           32G     0    32G    0%  /sys/firmware
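The PV is clearly mounted at /stolon-data, but Velero matches the annotation value against the volume names in the pod spec, not against mount paths. A minimal sketch of a check that makes that mapping visible (a one-off command added here for illustration, not part of the original report):

```bash
# Print "mountPath -> volumeName" for every container volume mount; the mount
# path (/stolon-data) and the Kubernetes volume name do not have to match.
kubectl -n flowable get pod fsdb-flowable-stolon-keeper-0 \
  -o jsonpath='{range .spec.containers[*].volumeMounts[*]}{.mountPath}{" -> "}{.name}{"\n"}{end}'
```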
What did you expect to happen:
Velero should be able to back up this volume; the storage and stateful set are consistent with the 115 other PVs that we back up with Velero on a daily basis.
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)
velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml:

apiVersion: velero.io/v1
kind: Backup
metadata:
  creationTimestamp: "2020-01-06T20:00:09Z"
  generation: 3
  labels:
    velero.io/storage-location: default
  name: flowable
  namespace: velero
  resourceVersion: "158312924"
  selfLink: /apis/velero.io/v1/namespaces/velero/backups/flowable
  uid: 24237bfc-30bf-11ea-bc8c-20677cd9f3d4
spec:
  hooks: {}
  includedNamespaces:

velero backup logs <backupname>

Environment:
velero version: 1.2
velero client config get features:
kubectl version: 1.14.6

Hey @never1701 can you please give me the output for kubectl -n flowable get po fsdb-flowable-stolon-keeper-0 -o yaml?
@never1701 it looks like per https://gist.github.com/never1701/71c3bf72aa6b80bcef7fa3c525ead57f#file-stolon-keeper-0-pod-yaml-L119 your volume is named "data", not "stolon-data", so you'll need to set the annotation value to match :)
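For reference, a minimal sketch of applying that fix, assuming the pod belongs to a StatefulSet named fsdb-flowable-stolon-keeper (inferred from the pod name, not confirmed in this thread) and that the volume name is data as shown in the gist:

```bash
# Set the restic annotation on the StatefulSet's pod template so new pods carry
# backup.velero.io/backup-volumes=data (the volume name, not the mount path).
# Note: patching the pod template rolls the StatefulSet's pods.
kubectl -n flowable patch statefulset fsdb-flowable-stolon-keeper --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"data"}}}}}'

# For a one-off test on the running pod without restarting it, the annotation
# can also be set directly on the pod itself.
kubectl -n flowable annotate pod fsdb-flowable-stolon-keeper-0 \
  backup.velero.io/backup-volumes=data --overwrite
```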
@skriss Thanks for catching that, I wondered about that when I pulled the yaml yesterday. I won't be able to test for a couple of days as the set is in production, but I'm sure this is the correct fix. Appreciate the help.
I'll close this out as resolved, but feel free to reach out again if needed!
@skriss confirmed that this fixed our issue, thanks for catching the typo.