Velero: GCP: Ability to restore snapshots from different projects

Created on 10 Mar 2019 · 18 Comments · Source: vmware-tanzu/velero

Describe the solution you'd like
I'm trying to migrate a cluster from one Google Cloud project to another, but the restore fails. It looks like Velero derives the project ID for the disk snapshots from the generated service credentials, which reference the project the cluster is in rather than the source project.

Ideally, there would be a way to specify the source project ID, either in the VolumeSnapshotLocation resource or as a CLI flag.

Anything else you would like to add:

Environment:

  • Velero version (use velero version): 0.11.0
  • Kubernetes version (use kubectl version): 1.12
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): Container-Optimized OS
Labels: Area/Cloud/GCP, Enhancement

All 18 comments

@yoitsro what kind of IAM setup do you need to be able to handle this case? Can a service account in Project B directly restore a disk from a snapshot in Project A?

It can as long as the permissions are set correctly. In this case, as long as the snapshots created in Project A allow read access to a service account in Project B, you're good to go.
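For example, the cross-project grant described above could be set up with gcloud roughly as follows (a sketch only; the project names and service-account email are placeholders, and your organization may prefer a narrower custom role):

```shell
# Placeholder names -- substitute your own projects and service account.
SOURCE_PROJECT=project-a        # where the snapshots live
DEST_PROJECT=project-b          # where the new cluster and Velero run
VELERO_SA=velero@${DEST_PROJECT}.iam.gserviceaccount.com

# Grant the destination cluster's Velero service account access to
# compute resources (including snapshots) in the source project.
# (A narrower custom role containing compute.snapshots.useReadOnly
# may also be sufficient for restore-only access.)
gcloud projects add-iam-policy-binding ${SOURCE_PROJECT} \
  --member "serviceAccount:${VELERO_SA}" \
  --role roles/compute.storageAdmin
```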

OK - then sounds like this wouldn't be too bad to implement. Like you said, we can add an optional project key to the VolumeSnapshotLocation config, and use that as the place to store/retrieve snapshots if it's specified. If it's not specified, we'll still get the project from the credentials file. Sound right?

Sounds good to me! Thank you!

Please could you tag me in the working branch for this when you do your initial commit? I tried to figure out how I could add the project key myself, but my knowledge of Go is very limited, so I'd love to see how it _should_ be done!

Will do!

Just ran into this issue as well. Thanks for picking this one up @skriss!

Any update about this? I would also be interested :)

I wrote code that I think will enable this:

https://github.com/skriss/velero/tree/gcp-project

So you'd edit your VolumeSnapshotLocation YAML to look something like:

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: gcp-default
  namespace: velero
spec:
  provider: gcp
  config:
    project: <SNAPSHOT_PROJECT_ID>
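You'd then apply that YAML as usual; recent velero CLI versions can also create the location directly (a sketch, with the location name chosen arbitrarily and flag support possibly varying by version):

```shell
# Option 1: apply the VolumeSnapshotLocation manifest shown above.
kubectl apply -f volumesnapshotlocation.yaml

# Option 2: create it via the velero CLI, passing the snapshot
# project through the --config key.
velero snapshot-location create gcp-source \
  --provider gcp \
  --config project=<SNAPSHOT_PROJECT_ID>
```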

I pushed a test container image, steveheptio/velero:v1.0.0-alpha.2-gcp-project, that contains this fix. If someone could test it, that'd be awesome!

I pushed an update to this and rebuilt the image, if anyone can take a look!

@skriss I should be able to try it out this week...hopefully within a day or two. Thanks a lot for that.

This works for me - thanks!

Thanks for testing @vxnick!

would be great if we could get one more verification on this before merging.

Can you share the required steps? I want to back up and restore a k8s cluster from a dev project to a prod project.

@xUmaRix when you configure your VolumeSnapshotLocation in the prod project, you just need to specify the project config key and give it the name of your dev project. This will enable it to restore snapshots from there. Your service account will need to have permission to access the dev project's snapshots as well.

Once you start taking new backups in your prod project, you'll probably want a separate VSL that doesn't specify the project key, since you'll (presumably) want those snapshots to stay in the prod project.
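That two-location setup could be sketched with the velero CLI like this (location names and the project ID are placeholders, and exact flag support may vary by version):

```shell
# VSL pointing at the dev project, used when restoring migrated snapshots.
velero snapshot-location create dev-project \
  --provider gcp \
  --config project=<DEV_PROJECT_ID>

# Default VSL with no project key: new snapshots stay in the prod project
# (the project is taken from the service credentials).
velero snapshot-location create prod-default \
  --provider gcp

# New backups taken in prod can pin the prod location explicitly.
velero backup create nightly --volume-snapshot-locations prod-default
```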

Thank you @skriss

Turns out my issue was related to service account permissions.

I tried to get this working, but the restore shows the following error:

  Cluster:  error restoring persistentvolumes/pvc-XXXXXXX: persistentvolumes "pvc-XXXXXXX" is forbidden: error querying GCE PD volume restore-XXXXXX: GCE persistent disk not found: diskName="restore-XXXXXX" zone="us-east1-b"

Perhaps it's a permissions issue. I'm looking at the Sharing Images and Snapshots documentation from GCP.

For example, assume that Project A wants to create managed instance groups using images owned by Project B. The owner of Project B must grant the Google APIs service account of Project A the compute.imageUser role on Project B. This grants the account the ability to use the images from Project B to create managed instance groups in Project A.

This part was the most useful. I was using the wrong service account. I needed to create a service account in Project A, grant it the snapshot role and bucket permissions on Project B, and use that service account on the Velero instance running in Project A.
Don't forget to apply the new SA credentials YAML and restart your Velero instance.
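The steps above might look roughly like this (a sketch with placeholder names throughout; the secret name and key follow the standard Velero GCP install docs, and the pod label may differ by version):

```shell
PROJECT_A=dest-project          # cluster + Velero run here
PROJECT_B=source-project        # snapshots and backup bucket live here
SA_NAME=velero
SA_EMAIL=${SA_NAME}@${PROJECT_A}.iam.gserviceaccount.com

# 1. Create the service account in Project A.
gcloud iam service-accounts create ${SA_NAME} --project ${PROJECT_A}

# 2. Grant it snapshot access in Project B.
gcloud projects add-iam-policy-binding ${PROJECT_B} \
  --member "serviceAccount:${SA_EMAIL}" \
  --role roles/compute.storageAdmin

# 3. Grant it access to Project B's backup bucket.
gsutil iam ch serviceAccount:${SA_EMAIL}:objectAdmin gs://<BACKUP_BUCKET>

# 4. Re-create the credentials secret and restart Velero to pick it up.
gcloud iam service-accounts keys create credentials-velero \
  --iam-account ${SA_EMAIL}
kubectl -n velero delete secret cloud-credentials
kubectl -n velero create secret generic cloud-credentials \
  --from-file cloud=credentials-velero
kubectl -n velero delete pods -l deploy=velero   # label may differ by version
```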

After getting it right, it just works!
