What steps did you take and what happened:
I created an Ark backup of an app with Persistent Volume snapshots. I could see in the AWS console that these snapshots were created correctly. However, the output of `ark backup describe` always shows:
Persistent Volumes: none included
The issue seems to be in ark/pkg/cmd/util/output/backup_describer.go and appears to have been introduced by commit f014cab1fe9cfe413fc1f642dc57ef9f187125f9.
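For context, here is a much-simplified sketch (hypothetical, not the actual backup_describer.go code) of how a describer can end up printing `<none included>` whenever the snapshot information is missing from what the client reads, regardless of what was actually snapshotted:

```go
package main

import "fmt"

// describePVs is a hypothetical, simplified stand-in for the describer
// logic: if the client never receives (or cannot locate) the server's
// volume snapshot info, it falls back to "<none included>".
func describePVs(snapshots []string) string {
	if len(snapshots) == 0 {
		// An older client talking to a newer server may land here even
		// when snapshots exist, because it looks for the data in the
		// wrong place.
		return "Persistent Volumes: <none included>"
	}
	return fmt.Sprintf("Persistent Volumes: %d included", len(snapshots))
}

func main() {
	fmt.Println(describePVs(nil))
	fmt.Println(describePVs([]string{"snap-0abc123"}))
}
```

This matches the mismatch theory discussed below: the snapshots exist on the provider side, but the describe path never sees them.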
What did you expect to happen:
Correct information about Persistent Volumes
The output of the following commands will help us better understand what's going on:
ark backup describe nginx-backup
Name: nginx-backup
Namespace: heptio-ark
Labels: ark.heptio.com/storage-location=default
Annotations:
Phase: Completed
Namespaces:
Included: *
Excluded:
Resources:
Included: *
Excluded:
Cluster-scoped: auto
Label selector: app=nginx
Snapshot PVs: auto
TTL: 720h0m0s
Hooks:
Backup Format Version: 1
Started: 2018-11-26 13:33:24 +0000 UTC
Completed: 2018-11-26 13:33:28 +0000 UTC
Expiration: 2018-12-26 13:33:24 +0000 UTC
Validation errors:
Persistent Volumes:
Environment:
ark version: 0.10
kubectl version: 1.9.6
/etc/os-release: ubuntu 18.04 LTS

Thanks for the report, @egagala. Would you be able to provide the output of `kubectl get backup/nginx-backup -n heptio-ark -o yaml`?
@nrb I no longer have an Ark 0.10.0 setup. Unfortunately, I decided to switch back to Ark 0.9.11, as it seems more stable for now. However, this issue should be easy to reproduce with the steps I provided earlier.
It's possible this would be observed if using a pre-v0.10 client with a v0.10 server. This has worked as expected when using a matching client/server, though.
Closing as inactive; reopen if needed. Thanks!
kubectl version output:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
then I do:
velero --kubeconfig ./config/kubernetes/cluster-kubeconfig.yaml \
backup create nginx-backup-4 --selector app=nginx
Backup request "nginx-backup-4" submitted successfully.
Run `velero backup describe nginx-backup-4` or `velero backup logs nginx-backup-4` for more details.
velero --kubeconfig ./config/kubernetes/cluster-kubeconfig.yaml \
backup describe nginx-backup-4
Name: nginx-backup-4
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: <none>
Phase: Completed
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: app=nginx
Storage Location: default
Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2019-03-08 23:43:41 -0500 CST
Completed: 2019-03-08 23:43:51 -0500 CST
Expiration: 2019-04-08 00:43:41 -0400 CDT
Validation errors: <none>
Persistent Volumes: <none included>
but on AWS: [screenshot: the snapshots are visible in the console]
Yeah, there is a bug in the CLI. I just tried a full restore and everything worked fine, but I always see:
Persistent Volumes: <none included>
velero version output
Client:
Version: 0.11.0
Git commit: -
Server:
Version: v0.11.0
@phil-lgr can you show the output of `velero backup get nginx-backup-4 -o yaml`, and also the content of `nginx-backup-4-volumesnapshots.json.gz`? Thanks.
@skriss I spoke too soon when I said that the backup sequence was working; I was using a DigitalOcean Kubernetes cluster with the Ark plugin, and that plugin is outdated.
Now I've switched over to AWS EKS and everything worked the first time, so I'm pretty sure this can be closed again.
I'd like DO support from the DO team, so I opened a ticket with them. Thanks again; I'm new to Kubernetes and was glad to discover Velero!
OK great. I'll re-close this, but feel free to reopen or file new issues if needed.
So, I was facing the same issue on DO Kubernetes. To get it working, I needed to remove the token from the secret, leaving it like this:
apiVersion: v1
kind: Secret
stringData:
  digitalocean_token: ""
type: Opaque
then patch the secret with that empty value, and after that reinsert the token:
apiVersion: v1
kind: Secret
stringData:
  digitalocean_token: "<YOUR_TOKEN>"
type: Opaque
and patch it again.
Testing further, I found that in the Snapshot Configuration part, doing step 3 before step 2 makes it work.