Velero: Add `velero install` command

Created on 18 Aug 2017 · 12 comments · Source: vmware-tanzu/velero

It would be great to have E2E scripts for each cloud provider that go through a full install of Ark, including prerequisites and server installation. We could have the user fill in the necessary options at the beginning. The docs are great for understanding what's going on step by step, but it takes a lot of copy-pasting to get a working install.

Enhancement · User · P1 - Important

All 12 comments

Ksonnet might help for this. I can look into this, actually (unless anyone has dibs), because it will simplify my doc-writing life so, so much ><

@skriss moved to v0.5.0

Some thoughts, so I don't forget them, for what `ark install` might look like:

  • specify objectstore name (e.g. aws): --object-store aws
  • specify objectstore bucket: --object-store-bucket BUCKET
  • specify objectstore config: --object-store-config region=foo,color=green (comma-separated KV pairs)
  • specify blockstore name (e.g. aws): --block-store aws
  • specify blockstore config: --block-store-config region=foo,color=green (comma-separated KV pairs)
  • optional: specify cloud provider credentials file: --cloud-credentials FILE. if specified, add a volume/volumeMount. otherwise, don't.
  • optional: --backup-sync-period DURATION, default to ?
  • optional: --gc-sync-period DURATION, default to ?
  • optional: --schedule-sync-period DURATION, default to 1m
  • optional: --restore-only-mode
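Strung together, an invocation using the flags sketched above might look like this (purely illustrative: `ark install` did not exist yet, and the bucket, region, and credentials values are placeholders):

```shell
# Hypothetical invocation built from the proposed flags; the command is
# stored in a variable rather than run, since the CLI doesn't exist yet.
ark_install_cmd="ark install \
  --object-store aws \
  --object-store-bucket my-ark-backups \
  --object-store-config region=us-east-1 \
  --block-store aws \
  --block-store-config region=us-east-1 \
  --cloud-credentials ./credentials-ark \
  --schedule-sync-period 1m \
  --dry-run -o yaml"
echo "$ark_install_cmd"
```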

This would be a great usability improvement, I think, especially if we have a --dry-run -o yaml option that can be used to spit out files to be checked into CI/CD.

It seems like some of the work from #437 #506 may be usable as a basis for this.

Parroting Ross' suggestion, having a dry-run to render those out would be highly desirable for us as well.

I am so on board with this. I was thinking of coding my own toy CLI to point at a directory of YAML files and run those, as well as everything else needed to boot it up.

Here's a first pass at what the required and optional arguments may be on the ark install command.

  • specify backupstoragelocation name (e.g. aws): --backup-location-name aws
  • specify backupstoragelocation bucket: --backup-location-bucket BUCKET
  • specify backupstoragelocation objectstore config: --backup-location-config region=foo,color=green (comma-separated KV pairs)
  • specify volumesnapshotlocation name (e.g. aws): --snapshot-location-name aws
  • specify volumesnapshotlocation provider (e.g. aws): --snapshot-location-provider=aws
  • specify volumesnapshotlocation config: --snapshot-location-config region=foo,color=green (comma-separated KV pairs)
  • optional: specify cloud provider credentials file: --cloud-credentials FILE. if specified, add a volume/volumeMount. otherwise, don't.
  • optional: --dry-run, outputs generated YAML of all CRDs/instances + deployment
  • optional: -o DIR, where to save YAML from --dry-run
  • optional: --namespace, where CRDs and locations will be created. Also provided to the ark server command in the deployment

The rest are just sent straight to `ark server`, using its defaults:

  • optional: --backup-sync-period DURATION, default to 1m
  • optional: --schedule-sync-period DURATION, default to 1m
  • optional: --restore-only-mode
  • optional: --client-burst
  • optional: --client-qps
  • optional: --log-level
  • optional: --metrics-address
  • optional: --pprof-address
  • optional: --restic-timeout
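For reference, the comma-separated key=value format used by the *-config flags above is straightforward to parse. Here's a sketch in Go (Ark's implementation language); this is illustrative only, not Ark's actual flag-parsing code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseConfigFlag turns a flag value like "region=foo,color=green"
// into a map. Illustrative sketch; the real CLI may parse differently.
func parseConfigFlag(s string) (map[string]string, error) {
	out := map[string]string{}
	if s == "" {
		return out, nil
	}
	for _, pair := range strings.Split(s, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 || kv[0] == "" {
			return nil, fmt.Errorf("invalid key/value pair %q", pair)
		}
		out[kv[0]] = kv[1]
	}
	return out, nil
}

func main() {
	cfg, err := parseConfigFlag("region=foo,color=green")
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg["region"], cfg["color"])
}
```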

Assumptions:

  • only allow 1 backupstoragelocation and volumesnapshotlocation to be created with install. Anything after must be added with the relevant location command
  • plugin-dir isn't exposed since it's largely a local development option

Questions:

  • can the backupstoragelocation name be optional? We currently assume that there will be at least one location, named default.
  • should we handle installing plugins, or state that they must be added after installation?

@heptio/ark-team what do you think?

I think having the backupstoragelocation name be optional is fine, since there's a default name.

I think we should install the plugins too as part of the install.

Lastly, do you think it'd be useful to alternatively have a config file that can be loaded during install?

We'll get the in-tree plugins for AWS, GCP, and Azure already, but installing something like Portworx would mean running (as an example) velero plugin add portworx/velero-plugin:0.6 after the fact, or specifying a list of plugin images with the install command.

I think I'll defer on the plugin install for now, partially because of the issue you raise: there are so many options there already that a config file might be warranted.

I'll think about that some more, though. In the case where someone is making heavy modifications to the YAML produced by -o, it seems a little odd to have a config file that 'just' spits out YAML, but it would also be a head start.

I think the starting point should be the config file, and the flags would override those values.
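That precedence (config file supplies the baseline, explicitly set flags win) is the conventional pattern. A rough Go sketch, with all names invented for illustration:

```go
package main

import "fmt"

// mergeOptions overlays flag values that were explicitly set on top of
// values loaded from a config file. Names are illustrative only; this
// is not Ark's actual option handling.
func mergeOptions(fromFile, fromFlags map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range fromFile {
		merged[k] = v
	}
	for k, v := range fromFlags { // flags override the file
		merged[k] = v
	}
	return merged
}

func main() {
	file := map[string]string{"backup-sync-period": "1m", "log-level": "info"}
	flags := map[string]string{"log-level": "debug"}
	fmt.Println(mergeOptions(file, flags))
}
```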

What format do you have in mind? Part of why we wanted to go to an install command was to remove most or all of the example YAML from the repo, and a config file, while certainly smaller, wouldn't necessarily alleviate that issue. By the same token, a long command copy/pasted out of the docs every time would be burdensome, too.

I would prefer YAML. It would be an improvement over the current setup because it would all be in the same place, and the install would choose the order of execution. And you could make it so the user could override the config using the command flags, without having to mess with the file for quick one-offs.
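A config file along those lines might look something like this (purely illustrative: neither the file format nor these key names were settled in this thread; keys simply mirror the proposed flags):

```yaml
# Hypothetical install config sketch; all keys are assumptions.
backupStorageLocation:
  name: default
  provider: aws
  bucket: my-ark-backups
  config:
    region: us-east-1
volumeSnapshotLocation:
  name: default
  provider: aws
  config:
    region: us-east-1
server:
  backupSyncPeriod: 1m
  scheduleSyncPeriod: 1m
  logLevel: info
```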
