Velero: rook-ceph restic backups can not be restored

Created on 7 Aug 2019 · 7 comments · Source: vmware-tanzu/velero

What steps did you take and what happened:
I issued a velero restore command for a postgresql backup that uses rook-ceph block storage. The PVCs and PVs are re-created, but I do not see the rook-ceph image created in the replica pool. The pods fail to start because they cannot mount the image from replicapool.

What did you expect to happen:
The restic backup to be restored, re-creating the PVCs, PVs, and the rook-ceph image in the replicapool.

Environment:

  • Velero version (use velero version): v1.0.0
  • Kubernetes version (use kubectl version): v1.15.0

Most helpful comment

Since moving to latest beta release this has not been an issue, thanks!

All 7 comments

This is the error I get from the dashboard when viewing the postgresql pods after issuing the velero restore command:

MountVolume.SetUp failed for volume "pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume replicapool/pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4: failed to map image replicapool/pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4 cluster rook-ceph. failed to map image replicapool/pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4: Failed to complete 'rbd': exit status 2. . output: rbd: sysfs write failed In some cases useful info is found in syslog - try "dmesg | tail". rbd: map failed: (2) No such file or directory
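The mount failure suggests the RBD image backing the PV was never re-created in the pool. One way to confirm this, assuming the standard rook-ceph toolbox deployment (`rook-ceph-tools`) is running in the `rook-ceph` namespace:

```shell
# List RBD images in the pool; the pvc-... image from the error above
# should appear here if it was created.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd ls replicapool

# Inspect the specific image named in the mount error.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
  rbd info replicapool/pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4
```

If `rbd info` reports that the image does not exist, the "No such file or directory" from `rbd map` is consistent with the restore having re-created the PV object without its backing image.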

Logs of restore action:

velero restore logs postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:33Z" level=info msg="starting restore" logSource="pkg/controller/restore_controller.go:440" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:33Z" level=info msg="Not including resource" groupResource=events logSource="pkg/restore/restore.go:128" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:33Z" level=info msg="Not including resource" groupResource=nodes logSource="pkg/restore/restore.go:128" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:33Z" level=info msg="Not including resource" groupResource=events.events.k8s.io logSource="pkg/restore/restore.go:128" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:33Z" level=info msg="Not including resource" groupResource=backups.velero.io logSource="pkg/restore/restore.go:128" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:33Z" level=info msg="Not including resource" groupResource=resticrepositories.velero.io logSource="pkg/restore/restore.go:128" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:33Z" level=info msg="Not including resource" groupResource=restores.velero.io logSource="pkg/restore/restore.go:128" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Starting restore of backup velero/postgresql-backup-20190804000006" logSource="pkg/restore/restore.go:364" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restoring cluster level resource 'persistentvolumes' from: /tmp/069433601/resources/persistentvolumes/cluster" logSource="pkg/restore/restore.go:718" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Getting client for /v1, Kind=PersistentVolume" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="No snapshot found for persistent volume" logSource="pkg/restore/pv_restorer.go:81" persistentVolume=pvc-08ce1908-746d-11e9-841c-782bcb25840a restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolume: pvc-08ce1908-746d-11e9-841c-782bcb25840a" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="No snapshot found for persistent volume" logSource="pkg/restore/pv_restorer.go:81" persistentVolume=pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolume: pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="No snapshot found for persistent volume" logSource="pkg/restore/pv_restorer.go:81" persistentVolume=pvc-97b5d4fc-9170-4e2a-8e89-7e4c5b607d37 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolume: pvc-97b5d4fc-9170-4e2a-8e89-7e4c5b607d37" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="No snapshot found for persistent volume" logSource="pkg/restore/pv_restorer.go:81" persistentVolume=pvc-aeb3ff9b-14ff-41fc-ad84-74c249b5d954 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolume: pvc-aeb3ff9b-14ff-41fc-ad84-74c249b5d954" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restoring resource 'persistentvolumeclaims' into namespace 'postgresql' from: /tmp/069433601/resources/persistentvolumeclaims/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Getting client for /v1, Kind=PersistentVolumeClaim" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVFromPVCAction" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PV pvc-aeb3ff9b-14ff-41fc-ad84-74c249b5d954 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumes/pvc-aeb3ff9b-14ff-41fc-ad84-74c249b5d954 because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolumeClaim: data-postgresql-master-0" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVFromPVCAction" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PV pvc-97b5d4fc-9170-4e2a-8e89-7e4c5b607d37 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumes/pvc-97b5d4fc-9170-4e2a-8e89-7e4c5b607d37 because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolumeClaim: data-postgresql-slave-0" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVFromPVCAction" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PV pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumes/pvc-1c8cd809-66b9-4d38-b9f6-f9a9b1acd9f4 because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolumeClaim: data-postgresql-slave-1" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for persistentvolumeclaims" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVFromPVCAction" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PV pvc-08ce1908-746d-11e9-841c-782bcb25840a as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumes/pvc-08ce1908-746d-11e9-841c-782bcb25840a because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore PersistentVolumeClaim: data-postgresql-slave-2" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restoring resource 'secrets' into namespace 'postgresql' from: /tmp/069433601/resources/secrets/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Getting client for /v1, Kind=Secret" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore Secret: darksystems-ca-tls" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of Secret: darksystems-ca-tls because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore Secret: default-token-zqzzl" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of Secret: default-token-zqzzl because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restoring resource 'configmaps' into namespace 'postgresql' from: /tmp/069433601/resources/configmaps/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Getting client for /v1, Kind=ConfigMap" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ConfigMap: postgresql-configs" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ConfigMap: postgresql-configs because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restoring resource 'serviceaccounts' into namespace 'postgresql' from: /tmp/069433601/resources/serviceaccounts/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Getting client for /v1, Kind=ServiceAccount" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for serviceaccounts" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing ServiceAccountAction" cmd=/velero logSource="pkg/restore/service_account_action.go:47" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Done executing ServiceAccountAction" cmd=/velero logSource="pkg/restore/service_account_action.go:78" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ServiceAccount: default" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restoring resource 'pods' into namespace 'postgresql' from: /tmp/069433601/resources/pods/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Getting client for /v1, Kind=Pod" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVCFromPodAction" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PVC postgresql/data-postgresql-master-0 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:58" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumeclaims/postgresql/data-postgresql-master-0 because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:59" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restic snapshot ID annotations found" cmd=/velero logSource="pkg/restore/restic_restore_action.go:75" pluginName=velero pod=postgresql/postgresql-master-0 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Using image \"gcr.io/heptio-images/velero-restic-restore-helper:v1.0.0\"" cmd=/velero logSource="pkg/restore/restic_restore_action.go:86" pluginName=velero pod=postgresql/postgresql-master-0 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Done executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:131" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore Pod: postgresql-master-0" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVCFromPodAction" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PVC postgresql/data-postgresql-slave-0 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:58" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumeclaims/postgresql/data-postgresql-slave-0 because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:59" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restic snapshot ID annotations found" cmd=/velero logSource="pkg/restore/restic_restore_action.go:75" pluginName=velero pod=postgresql/postgresql-slave-0 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=error msg="unable to successfully complete restic restores of pod's volumes" error="restic repository is not ready: error running command=restic check --repo=s3:http://independent-magpie-minio.minio:9000/velero/restic/postgresql --password-file=/tmp/velero-restic-credentials-postgresql692395645 --cache-dir=/scratch/.cache/restic, stdout=using temporary cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreated new cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreate exclusive lock for repository\n, stderr=Fatal: unable to create lock in backend: repository is already locked by PID 1915 on restic-wnsc2 by  (UID 0, GID 0)\nlock was created at 2019-07-30 00:14:49 (199h21m42.330718684s ago)\nstorage ID 545c85a2\n: exit status 1" logSource="pkg/restore/restore.go:1060" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Using image \"gcr.io/heptio-images/velero-restic-restore-helper:v1.0.0\"" cmd=/velero logSource="pkg/restore/restic_restore_action.go:86" pluginName=velero pod=postgresql/postgresql-slave-0 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Done executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:131" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore Pod: postgresql-slave-0" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=error msg="unable to successfully complete restic restores of pod's volumes" error="restic repository is not ready: error running command=restic check --repo=s3:http://independent-magpie-minio.minio:9000/velero/restic/postgresql --password-file=/tmp/velero-restic-credentials-postgresql692395645 --cache-dir=/scratch/.cache/restic, stdout=using temporary cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreated new cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreate exclusive lock for repository\n, stderr=Fatal: unable to create lock in backend: repository is already locked by PID 1915 on restic-wnsc2 by  (UID 0, GID 0)\nlock was created at 2019-07-30 00:14:49 (199h21m42.330718684s ago)\nstorage ID 545c85a2\n: exit status 1" logSource="pkg/restore/restore.go:1060" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVCFromPodAction" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PVC postgresql/data-postgresql-slave-1 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:58" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumeclaims/postgresql/data-postgresql-slave-1 because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:59" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restic snapshot ID annotations found" cmd=/velero logSource="pkg/restore/restic_restore_action.go:75" pluginName=velero pod=postgresql/postgresql-slave-1 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Using image \"gcr.io/heptio-images/velero-restic-restore-helper:v1.0.0\"" cmd=/velero logSource="pkg/restore/restic_restore_action.go:86" pluginName=velero pod=postgresql/postgresql-slave-1 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Done executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:131" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore Pod: postgresql-slave-1" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=error msg="unable to successfully complete restic restores of pod's volumes" error="restic repository is not ready: error running command=restic check --repo=s3:http://independent-magpie-minio.minio:9000/velero/restic/postgresql --password-file=/tmp/velero-restic-credentials-postgresql692395645 --cache-dir=/scratch/.cache/restic, stdout=using temporary cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreated new cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreate exclusive lock for repository\n, stderr=Fatal: unable to create lock in backend: repository is already locked by PID 1915 on restic-wnsc2 by  (UID 0, GID 0)\nlock was created at 2019-07-30 00:14:49 (199h21m42.330718684s ago)\nstorage ID 545c85a2\n: exit status 1" logSource="pkg/restore/restore.go:1060" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing AddPVCFromPodAction" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:44" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Adding PVC postgresql/data-postgresql-slave-2 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pvc_from_pod_action.go:58" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping persistentvolumeclaims/postgresql/data-postgresql-slave-2 because it's already been restored." logSource="pkg/restore/restore.go:815" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing item action for pods" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:59" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restic snapshot ID annotations found" cmd=/velero logSource="pkg/restore/restic_restore_action.go:75" pluginName=velero pod=postgresql/postgresql-slave-2 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Using image \"gcr.io/heptio-images/velero-restic-restore-helper:v1.0.0\"" cmd=/velero logSource="pkg/restore/restic_restore_action.go:86" pluginName=velero pod=postgresql/postgresql-slave-2 restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Done executing ResticRestoreAction" cmd=/velero logSource="pkg/restore/restic_restore_action.go:131" pluginName=velero restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore Pod: postgresql-slave-2" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Restoring resource 'controllerrevisions.apps' into namespace 'postgresql' from: /tmp/069433601/resources/controllerrevisions.apps/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=error msg="unable to successfully complete restic restores of pod's volumes" error="restic repository is not ready: error running command=restic check --repo=s3:http://independent-magpie-minio.minio:9000/velero/restic/postgresql --password-file=/tmp/velero-restic-credentials-postgresql692395645 --cache-dir=/scratch/.cache/restic, stdout=using temporary cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreated new cache in /scratch/.cache/restic/restic-check-cache-108665301\ncreate exclusive lock for repository\n, stderr=Fatal: unable to create lock in backend: repository is already locked by PID 1915 on restic-wnsc2 by  (UID 0, GID 0)\nlock was created at 2019-07-30 00:14:49 (199h21m42.330718684s ago)\nstorage ID 545c85a2\n: exit status 1" logSource="pkg/restore/restore.go:1060" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Getting client for apps/v1, Kind=ControllerRevision" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-54f4c8d9b8" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-54f4c8d9b8 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-5f84b99f54" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-5f84b99f54 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-5f9b75946" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-5f9b75946 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-685b4dd6ff" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-685b4dd6ff because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-6d9885ccdb" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-6d9885ccdb because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-75cfc96977" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-75cfc96977 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-79bff458c8" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-79bff458c8 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-7b56bf4ccb" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-7b56bf4ccb because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-874c65dc4" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-874c65dc4 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:34Z" level=info msg="Attempting to restore ControllerRevision: postgresql-master-b9db795cb" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-master-b9db795cb because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-5979799748" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-5979799748 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-59986cc5dd" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-59986cc5dd because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-5cc95cfc84" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-5cc95cfc84 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-68fc7dcf76" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-68fc7dcf76 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-6fd7876d9d" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-6fd7876d9d because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-76d697c844" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-76d697c844 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-7f5bdb84c4" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-7f5bdb84c4 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-85ff7ff7c4" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-85ff7ff7c4 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-86f48d88f6" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-86f48d88f6 because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:35Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-985b457db" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-985b457db because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore ControllerRevision: postgresql-slave-f96c9cdbb" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Skipping restore of ControllerRevision: postgresql-slave-f96c9cdbb because it already exists in the cluster and is unchanged from the backed up version" logSource="pkg/restore/restore.go:1037" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Restoring resource 'endpoints' into namespace 'postgresql' from: /tmp/069433601/resources/endpoints/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Getting client for /v1, Kind=Endpoints" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore Endpoints: postgresql-headless" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore Endpoints: postgresql-read" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore Endpoints: postgresql" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Restoring resource 'jobs.batch' into namespace 'postgresql' from: /tmp/069433601/resources/jobs.batch/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="postgresql/postgresql-backup-manual-9cj is complete - skipping" logSource="pkg/restore/restore.go:802" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Restoring resource 'services' into namespace 'postgresql' from: /tmp/069433601/resources/services/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Getting client for /v1, Kind=Service" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore Service: postgresql-headless" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore Service: postgresql-read" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:882" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore Service: postgresql" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Restoring resource 'statefulsets.apps' into namespace 'postgresql' from: /tmp/069433601/resources/statefulsets.apps/namespaces/postgresql" logSource="pkg/restore/restore.go:716" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Getting client for apps/v1, Kind=StatefulSet" logSource="pkg/restore/restore.go:764" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore StatefulSet: postgresql-master" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="Attempting to restore StatefulSet: postgresql-slave" logSource="pkg/restore/restore.go:980" restore=velero/postgresql-backup-20190804000006-20190807003733
time="2019-08-07T07:37:36Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:442" restore=velero/postgresql-backup-20190804000006-20190807003733

I can confirm the latest beta resolves the issue, but only if the backup was taken with the beta version; backups taken with v1.0.0 do not restore with the beta version.

@echel0n does your PV have a reclaim policy of Retain? Just wondering if this is related to #1713?

@prydonius yes, all of them have a reclaim policy of Retain. I do not have issues with backups taken with the latest beta; I only had issues restoring backups created with v1.0.0 using the latest beta. I've since rebuilt using other means and will continue testing the beta until it moves to stable.
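For anyone else triaging this, the Retain reclaim policy and the presence of the RBD image in the Ceph pool can be checked with commands like the following. This is an illustrative sketch, not from the thread: the pool name `replicapool` and namespace `rook-ceph` match this report but may differ in your cluster, and the commands require a live cluster.

```shell
# List PVs with their reclaim policy to confirm Retain is set
kubectl get pv -o custom-columns=NAME:.metadata.name,POLICY:.spec.persistentVolumeReclaimPolicy

# From inside the rook-ceph toolbox pod, list images in the pool to see
# whether the restored PV's backing image actually exists
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd ls replicapool
```

If a restored PV's image name (e.g. `pvc-...`) is missing from the `rbd ls` output, pods mounting that volume will fail with the "failed to map image" error shown above.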

👍 What you're seeing makes sense then: we're using a new annotation to track and properly restore PVCs backed up with restic that have a Retain reclaim policy, so that will only work for new backups.

Since moving to the latest beta release this has not been an issue, thanks!
