I am trying to figure out how I can have Ark send backups to a Minio endpoint with our own SSL CA. Right now I am getting an x509 error. I am hoping there is either an ignoreSSL option or a way to provide a cert bundle to Ark.
To support this, we will have to add the ability to use a custom CA bundle, and then construct a custom *http.Client to use for AWS calls.
@rosskukulinski
@ncdc ack, thanks for adding to Needs Product backlog
I am setting up Velero and Minio and have the same problem, "x509: certificate signed by unknown authority". Is there an option in the works to point to a CA certificate?
@ranjujohn we're not actively working on this - if you're interested in working on it, let us know!
@skriss Oh yes, we are very interested in this feature. We are behind a corporate proxy, so that part works with env vars on the deployment, but we use an S3-compatible storage based on Ceph and we hit the same x509 certificate problem.
The only solution we found was an init container that updates the CA certs.
I wouldn't recommend insecureSkipVerify - would you consider supporting custom CA bundles instead?
I will try to explain why this is important in our case.
We have 20 different clients with 20 different DNS setups in 20 different networks.
We have 20 clusters, each in a network configured with the client's messy DNS.
We have one S3 endpoint. All networks are open to this S3, and in theory every DNS can resolve its address.
The reality is quite different, and if half of them are correctly configured it's not so bad.
So yes, the real solution is to fix the DNS. Many wars have been shorter...
In the meantime that doesn't solve it, so we have to use hardcoded IPs, and therefore the certificate can't be validated (it has no SAN with the IP).
The workarounds we use:
So for our sad use case, this would be nice to have.
In addition, the networks are isolated and under our control, so the security impact is limited.
This must be super frustrating. We are concerned that adding that flag could potentially create an unintended security issue for all users of Velero. And adding the flag to then remove it would cause the Velero interface to break, so we would rather not go there for this purpose.
We will work on defining how to implement this feature using a CA bundle so it's as safe as possible for everyone.
@Acronys1 @s12chung and others - how do you see this working in an ideal world? Would you want to provide your CA root cert in a secret that the Velero pod consumes? Or provide it some other way? What about on the client (I believe you have the same need for the velero client for e.g. velero backup logs but I'm not 100% sure)?
I think providing a secret to the pod works. The client would also need these certs though, which is what I wanted to avoid, to do things like velero backup logs and velero backup download.
I understand that you'd prefer not to have to do this, but is it an option for you, or is this not viable at all?
Thinking it over, if the UX for downloading these certs were super simple:
velero certs download # download certs from the secret and save it in a default location
velero backup download --local-certs # use the certs from the default location
it'd be great.
Self-signed certs are a temporary solution for us, though. So to clarify, I wanted to avoid scope creep for our work more than avoid downloading certs. Sorry, I used the wrong words.
Hi @Acronys1 could you please share your solution ?
The only solution we found was an init container that updates the CA certs.
https://github.com/vmware-tanzu/velero/issues/1027#issuecomment-491264046
Just a heads up for this thread: #1793 is merged now and should handle the self-signed cert error, but this issue remains open to support custom CA bundles for object storage connections.
@vishnoisuresh @Acronys1 was the init solution basically to copy a cert into /etc/ssl/certs and then run update-ca-certificates?
Any news on this topic? Have the same problem here. All ssl traffic to S3 is inspected (ssl decrypt) and no way to import our certificate or skip validation :-(
We are going to talk about it on our community meeting tomorrow. If anyone would like to join, info is here: https://velero.io/community/.
As per the community meeting discussion, the AWS SDK we use for Minio communication supports the AWS_CA_BUNDLE environment variable for pointing at a custom CA bundle. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
@dymurray has mentioned that his team may have some bandwidth to help on this feature, as well.
I think a design doc would be useful for this before implementation so we can get an idea of UX for server and client side.
Ok, with this information I got velero running with our certificate :-).
I created a new ca-certificates-mycompany.crt, based on the "default" ca-certificates.crt and added our certificates. Then I created a secret from ca-certificates-mycompany.crt and modified the velero deployment:
- name: AWS_CA_BUNDLE
  value: /etc/ssl/certs/ca-certificates-mycompany.crt
@heikocane That's great news!
I'm assuming that there are still issues with the client side with commands such as velero download, velero backup logs and velero restore logs, though - have you tested those? I would also assume they could be fixed by adding the custom certs to your local machine in a similar manner.
My current thinking is that for short term, we can document this environment variable, but longer term perhaps we can think about designing some convenience commands around it. What do others think?
Agreed this is great news! However, I think there is still more to be done on the Velero side. IMO a BackupStorageLocation should contain all of the relevant information to talk to the S3 endpoint. Since it is possible for me to create multiple BSLs on a single Velero instance, it means that I would need to include all of the relevant certs in a single file to be processed by AWS_CA_BUNDLE.
If the BSL contains a secret ref which contains the certs then it can be consumed by Restic+Velero without having to depend on an env var. Thoughts?
@dymurray Yeah, I agree that that sounds like a sensible approach. I think that means expanding the BackupStorageLocation definition to include the secret ref, then expanding the AWS plugin (since S3 is the most-used API for on-premises deployments) and restic support to be able to use that secret.
I don't think restic uses the AWS_CA_BUNDLE environment variable, but it does have a command-line flag (--cacert) for pointing to a certificate file.
hey @dymurray - just checking in to see if you had been able to make any progress on this - I know you mentioned that you might be able to find someone to work on it. thanks!
hey @skriss yup sorry I meant to bring it up in the community meeting this week and unfortunately got caught up in other meetings. I have begun digging into it along with another engineer on my team. We have gotten Noobaa working with custom certs in a PoC, I just need to circle back and get a design submitted for review here. I will assign this issue to myself for now and will get a design put together.
Tracking this issue via smaller issues that break down the work:
I'm going to close this parent issue out as all the sub-issues are done; we still have one remaining issue for documentation (#2396)