While installing Velero on OpenShift 3.11 using AWS as the provider, I'm getting the following error with release v1.3.2:
CustomResourceDefinition/backups.velero.io: attempting to create resource
An error occurred:
Error installing Velero. Use kubectl logs deploy/velero -n velero to check the deploy logs: Error creating resource CustomResourceDefinition/backups.velero.io: CustomResourceDefinition.apiextensions.k8s.io "backups.velero.io" is invalid: [spec.validation.openAPIV3Schema.properties[spec].properties[labelSelector].properties[matchLabels].additionalProperties: Forbidden: additionalProperties cannot be set to false, spec.validation.openAPIV3Schema.properties[spec].properties[hooks].properties[resources].items.properties[labelSelector].properties[matchLabels].additionalProperties: Forbidden: additionalProperties cannot be set to false]
CustomResourceDefinition.apiextensions.k8s.io "backups.velero.io" is invalid: this is the issue. @skriss, can you throw some light on this please?
Here is the velero version, and from a dry run my YAML partly looks as below:
Client:
Version: v1.4.0
Git commit: 5963650c9d64643daaf510ef93ac4a36b6483392
apiVersion: v1
items:
- apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    annotations:
      controller-gen.kubebuilder.io/version: v0.2.4
    creationTimestamp: null
    labels:
      component: velero
    name: backups.velero.io
  spec:
    group: velero.io
    names:
      kind: Backup
      listKind: BackupList
      plural: backups
      singular: backup
    preserveUnknownFields: false
    scope: Namespaced
    validation:
      openAPIV3Schema:
        description: Backup is a Velero resource that respresents the capture of Kubernetes
          cluster state at a point in time (API objects and associated volume state).
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: BackupSpec defines the specification for a Velero backup.
            properties:
Any lead on the above please, @skriss @nrb?
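In case it helps, the schema fields the API server is objecting to can be confirmed by grepping the dry-run output (a rough sketch; <FLAGS> stands for the install flags I used):
velero install <FLAGS> --dry-run -o yaml | grep -n additionalProperties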
I believe since the version of Kubernetes is pretty old, you'll need to use kubectl's --validate=false option. See https://velero.io/docs/v1.4/customize-installation/#generate-yaml-only for more info.
You can generate and apply the CRD definitions like this:
velero install --crds-only --dry-run -o yaml | kubectl apply --validate=false -f -
If that works successfully, you could try running your original full velero install command again (it'll just skip over the CRDs if they already exist in the cluster), or if you run into further errors, you could try:
velero install <FLAGS> --dry-run -o yaml | kubectl apply --validate=false -f -
Let us know if those suggestions work for you!
Thanks for the reply, but I ended up with the same error as below:
The CustomResourceDefinition "backups.velero.io" is invalid:
Current Version:
velero version
Client:
Version: v1.3.2
Git commit: 55a9914a3e4719fb1578529c45430a8c11c28145
Server:
Version: v1.3.2
On OpenShift using AWS S3.
oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
openshift v3.10.0+d4ca19f-164
kubernetes v1.10.0+b81c8f8
Hmm, OK. I tried this on vanilla Kubernetes 1.11.x and it worked OK for me. Can you clarify which version of Kubernetes you're running? I see both 1.11 and 1.10 listed above
I am running it on OpenShift 3.11, with the kubectl client version being 1.18.
I think you'll probably need to use an older version of Velero (probably https://github.com/vmware-tanzu/velero/releases/tag/v1.1.0, since we added CRD structural schemas in v1.2 and that looks like the thing you're having an issue with), since we're not attempting to maintain backwards-compatibility with Kubernetes versions that far back in newer versions of Velero.
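If it helps, grabbing the older client is just a matter of downloading that release tarball (a rough sketch, assuming a Linux amd64 machine; the asset name is what I'd expect for the v1.1.0 release, so verify it on the releases page):
# download and unpack the v1.1.0 client
wget https://github.com/vmware-tanzu/velero/releases/download/v1.1.0/velero-v1.1.0-linux-amd64.tar.gz
tar -xzf velero-v1.1.0-linux-amd64.tar.gz
# the tarball should contain the client binary in a directory named after the release
sudo mv velero-v1.1.0-linux-amd64/velero /usr/local/bin/velero
velero version --client-only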
Yes, I tried that too, but since I am using AWS as the storage provider, v1.1.0 is unable to back up PVs. The backup partially fails; on inspecting the logs I get the following error:
Error getting volume snapshotter for volume snapshot location" backup=velero/sampv2 error="rpc error: code = Unknown desc = missing region in aws configuration" error.file="/go/src/github.com/vmware-tanzu/velero-plugin-for-aws/velero-plugin-for-aws/volume_snapshotter.go:78" error.function="main.(*VolumeSnapshotter).Init" group=v1 logSource="pkg/backup/item_backupper.go:444" name=pvc-a305a1e9-a567-11ea-8df0-0252c91c821f namespace= persistentVolume=pvc-a305a1e9-a567-11ea-8df0-0252c91c821f resource=persistentvolumes volumeSnapshotLocation=default
I have not used any AWS config as such; I went with the default settings and followed the AWS plugin document: https://github.com/vmware-tanzu/velero-plugin-for-aws#setup
I made sure the plugin is v1.0.0, since Velero is v1.1.0.
For v1.1 you don't need to use an external AWS plugin; it was still in-tree. Please make sure you're following the correct docs for the Velero version you're using: https://velero.io/docs/v1.1.0/aws-config/
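Roughly, a v1.1-style install with the in-tree AWS provider looks like this (bucket, region, and credentials file below are placeholders; the linked docs have the authoritative flags):
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --backup-location-config region=<REGION> \
    --snapshot-location-config region=<REGION>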
OK fine, will redo and check. But as I am using AWS S3 for storage, could you please tell me whether the use of restic is mandatory for v1.1.0 to take PV backups?
If you're using EBS volumes, then you can get EBS snapshots of them and you don't need to use restic.
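One quick way to confirm the volumes are EBS-backed is to print each PV's EBS volume ID (a minimal sketch):
# PVs that are not EBS-backed will show <none> in the second column
kubectl get pv -o custom-columns=NAME:.metadata.name,EBS_VOLUME:.spec.awsElasticBlockStore.volumeID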
OK, thank you for the inputs; I will try that and update here.
I'm going to close this out for now, but feel free to reach out again as needed.
@skriss I downgraded the Velero version to v1.1.0, deleted the existing velero project, and installed Velero v1.1.0. The pod is running without errors, but I get this:
Client:
Version: v1.1.0
Git commit: a357f21aec6b39a8244dd23e469cc4519f1fe608
status.plugins in body must be of type array: "null"
status.processedTimestamp in body must be of type string: "null"
when I create a backup:
An error occurred: DownloadRequest.velero.io "new1-20200604090001" is invalid: []: Invalid value: map[string]interface {}{"kind":"DownloadRequest", "apiVersion":"velero.io/v1", "metadata":map[string]interface {}{"name":"new1-20200604090001", "namespace":"velero", "creationTimestamp":"2020-06-04T09:00:21Z", "generation":1, "uid":"d20d073a-a641-11ea-8df0-0252c91c821f", "selfLink":"", "clusterName":""}, "spec":map[string]interface {}{"target":map[string]interface {}{"kind":"BackupLog", "name":"new1"}}, "status":map[string]interface {}{"expiration":interface {}(nil), "phase":"", "downloadURL":""}}: validation failure list:
status.phase in body should be one of [New Processed]
status.expiration in body must be of type string: "null"
Hi @skriss, any advice on the above please!
did you delete the velero CRDs (see the second command in https://velero.io/docs/v1.4/uninstalling/) before reinstalling with v1.1? It looks like you probably still have newer versions of the CRDs installed in cluster that are incompatible with v1.1
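For reference, the commands from that uninstalling page are along these lines; the second one is the CRD cleanup:
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero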
Yes @skriss, it worked, thank you. But the backup creation has the following error:
velero backup logs sample | grep error
time="2020-06-04T17:39:08Z" level=info msg="1 errors encountered backup up item" backup=velero/sample group=v1 logSource="pkg/backup/resource_backupper.go:260" name=pvc-mysql namespace=sample resource=persistentvolumeclaims
time="2020-06-04T17:39:08Z" level=error msg="Error backing up item" backup=velero/sample error="error getting volume info: rpc error: code = Unknown desc = InvalidVolume.NotFound: The volume 'vol-0febe2431f523486d' does not exist.\ntstatus code: 400, request id: 95dc8f5e-1bed-45f4-a94f-2458bfc14a0d" group=v1 logSource="pkg/backup/resource_backupper.go:264" name=pvc-mysql namespace=sample resource=persistentvolumeclaims
OK, that's a different error that means the AWS volume snapshotter can't find the EBS volume (vol-0febe2431f523486d) referenced by the pvc-mysql PVC and underlying PV. It could be that (a) that volume truly doesn't exist in AWS; or (b) the Velero IAM account doesn't have the right permissions to see it; or (c) the BackupStorageLocation is configured to point to the wrong region so it can't find it; or (d) something else.
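One way to rule (a) and (c) in or out is to ask AWS directly for that volume in each region you use (a minimal sketch with the AWS CLI; the volume ID is the one from your log):
# run once per candidate region; an InvalidVolume.NotFound response means the volume isn't in that region
aws ec2 describe-volumes --volume-ids vol-0febe2431f523486d --region <REGION>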
OK, I have created an IAM user (unit53) and assigned a policy to access the objects:
{
    "Effect": "Allow",
    "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
    ],
    "Resource": "*"
},
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
    ],
    "Resource": [
        "arn:aws:s3:::unit53/*"
    ]
},
{
    "Effect": "Allow",
    "Action": [
        "s3:ListBucket"
    ],
    "Resource": [
        "arn:aws:s3:::unit53"
    ]
}
But I will investigate it further.
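A policy like the above can be attached to the user with the AWS CLI along these lines (policy name and file name here are illustrative):
# velero-policy.json would contain the JSON statements shown above
aws iam put-user-policy \
    --user-name unit53 \
    --policy-name velero-policy \
    --policy-document file://velero-policy.json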
@skriss, probably the info below would help clear this up. Please let me know if this gives the root cause for the PV and PVC not backing up:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
  creationTimestamp: 2020-06-03T06:58:32Z
  finalizers:
------ snapshot location yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  creationTimestamp: 2020-06-04T17:38:02Z
  generation: 1
  labels:
    component: velero
  name: default
  namespace: velero
  resourceVersion: "1719782"
  selfLink: /apis/velero.io/v1/namespaces/velero/volumesnapshotlocations/default
  uid: 9df0-0252c91c821f
spec:
  config:
    region: us-east-2
  provider: aws
status: {}
Looks like the PV is in us-east-1 but your VolumeSnapshotLocation has a spec.config.region of us-east-2. I think you need to update your VolumeSnapshotLocation to be us-east-1.
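One way to make that change in place is a merge patch on the VolumeSnapshotLocation (a minimal sketch; editing the object directly works too):
kubectl -n velero patch volumesnapshotlocation default \
    --type merge -p '{"spec":{"config":{"region":"us-east-1"}}}'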
Thanks a lot @skriss. It is supposed to be working now, but when the PV is described I still get this, as in the above PV YAML:
labels:
  failure-domain.beta.kubernetes.io/region: us-east-1
  failure-domain.beta.kubernetes.io/zone: us-east-1a
If the PV is in us-east-1 as the labels indicate, then you need to change your velero VolumeSnapshotLocation's config to also have a region of us-east-1.
@skriss, thanks a lot, the issue is now resolved after ensuring that the PV's volume and the VolumeSnapshotLocation region are the same.