It looks like it is possible to do this manually for the node/master EBS boot volumes by passing an encrypted image when you create the cluster.
Steps I followed (a verification sketch follows the list):
1. Create my own private copy of the base k8s image (282335181503/k8s-1.2-debian-jessie-amd64-hvm-2016-03-16-ebs).
2. aws ec2 copy-image --source-region=us-east-1 --source-image-id=<my-private-ami> --name=foobar --encrypted
3. kops create cluster --name=blah.meh.com --image=<encrypted-ami-id> --zones=us-east-1a --state=<state>
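A quick sanity check worth running between steps 2 and 3 (just a sketch; <encrypted-ami-id> stands for the ImageId returned by copy-image): wait for the copy to finish and confirm its root device mapping really is encrypted before handing it to kops.

```
# Wait for the encrypted copy to become available, then check the Encrypted flag
# on its EBS block device mapping.
aws ec2 wait image-available --image-ids <encrypted-ami-id>
aws ec2 describe-images --image-ids <encrypted-ami-id> \
  --query 'Images[].BlockDeviceMappings[].Ebs.Encrypted'
```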
It would be nice if this could be automatically done as part of the kops process though.
Additionally, the etcd volumes are still unencrypted with this approach.
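One way to see that gap (a sketch, assuming kops tags its EBS volumes with KubernetesCluster=<cluster-name>; the exact tags can vary by kops version) is to list the cluster's volumes with their encryption state:

```
# List the cluster's EBS volumes (including the etcd volumes) and whether each is encrypted.
aws ec2 describe-volumes \
  --filters Name=tag:KubernetesCluster,Values=blah.meh.com \
  --query 'Volumes[].{Id:VolumeId,Encrypted:Encrypted}' \
  --output table
```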
I think we can do this in imagebuilder when we build the image, or at least document how to enable encryption.
@yissachar do you know of any downsides to doing this? Also, can we just encrypt the "master" AMI, or does every user need to create their own copy?
The downside is that building an image is slow, so it will add time to cluster turn-up. However, as long as this is opt-in and we document that enabling it will make the process take longer, it should be fine.
We can't just encrypt the AMI once; it is a per-account process.
By the way, I think there is something not set up with the base k8s image that forces the extra work of creating my own copy of it (the first step above). According to this AWS blog post:
The process of creating an encrypted EBS boot volume begins with an existing AMI (either Linux or Windows). If you own the AMI, or if it is both public and free you can use it directly. Otherwise, you will need to launch the AMI, create an image from it, and then use that image to create the encrypted EBS boot volume (this applies, for example, to Windows AMIs). The resulting encrypted AMI will be private; you cannot share it with another AWS account.
Since the k8s image appears to be both public and free, I expected to be able to copy it directly. But when I tried to do that, I got this error:
A client error (InvalidRequest) occurred when calling the CopyImage operation: You do not have permission to access the storage of this ami
So I'm wondering if there is something about the k8s image that is not "public and free".
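For reference, the image's public/launch-permission state can be inspected like this (a sketch; <k8s-ami-id> is a placeholder for the AMI in question):

```
# Check whether the AMI is marked public and who it is shared with.
aws ec2 describe-images --image-ids <k8s-ami-id> --query 'Images[].Public'
aws ec2 describe-image-attribute --image-id <k8s-ami-id> --attribute launchPermission
```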
From: https://support.bioconductor.org/p/88223/
The owner of the account must grant read permissions on the storage that backs the AMI, whether it is an associated EBS snapshot (for an Amazon EBS-backed AMI) or an associated Amazon S3 bucket (for an instance-store-backed AMI). To allow other accounts to copy your AMIs, you must grant read permissions on your associated snapshot or bucket using the Amazon EBS or Amazon S3 access management tools.
When an AMI is copied, the owner of the source AMI is charged standard Amazon EBS or Amazon S3 transfer fees, and the owner of the target AMI is charged for storage in the destination region.
This will need to be done before support can be added to kops.
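Roughly, that grant on the AMI owner's side would look like this (a sketch with placeholder IDs; the snapshot permission is what CopyImage needs in order to read the backing storage):

```
# Find the snapshot that backs the AMI, then grant read (createVolumePermission) access to it.
aws ec2 describe-images --image-ids <k8s-ami-id> \
  --query 'Images[].BlockDeviceMappings[].Ebs.SnapshotId'
aws ec2 modify-snapshot-attribute \
  --snapshot-id <backing-snapshot-id> \
  --attribute createVolumePermission \
  --operation-type add \
  --group-names all
```

Granting to the "all" group makes the snapshot readable by any account; --user-ids could be used instead to scope it to specific accounts.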
@druidsbane Thanks for pointing this out. That would work for images that kops has control over, but we want to support a wide range of images that kops doesn't control, and we can't assume those have this set up.
At this point I think it's probably best to leave this step outside of kops and have kops just deal with the passed-in image; it would be up to the user to encrypt it manually first if they want to. We could add a docs page outlining the process (the steps I listed above).
@justinsb Thoughts?
Oh, and regardless of what we decide to do here, whoever has control over the k8s AMI should probably grant read permissions as @druidsbane pointed out.
@yissachar I mostly agree, but one of the draws of kops and Kubernetes is that it "just works". Having a manual step of creating the images, or of creating a machine, snapshotting it, and creating an image in each region, would be a huge hassle. If this could be integrated with a flag that just says "create an encrypted AMI from this image", that would be ideal.
Can someone propose a design?
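One possible shape, purely as a discussion starter and not an existing kops option: an opt-in flag that makes kops run the encrypted copy-image step itself before bringing the cluster up, with the documented caveat that turn-up gets slower.

```
# Hypothetical flag, NOT implemented today: kops would internally run
# "aws ec2 copy-image --encrypted" on the supplied image (per account/region)
# and then build the cluster from the encrypted copy.
kops create cluster \
  --name=blah.meh.com \
  --zones=us-east-1a \
  --state=<state> \
  --image=282335181503/k8s-1.2-debian-jessie-amd64-hvm-2016-03-16-ebs \
  --encrypt-boot-volume
```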
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Is there any update on this issue?
/remove-lifecycle rotten
/reopen
@qrevel: you can't re-open an issue/PR unless you authored it or you are assigned to it.
In response to this:
/remove-lifecycle rotten
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@justinsb is there any plan for this issue?
Can we now get a flag to turn on encryption?
/reopen
@ryan-dyer: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@mikesplain: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/reopen
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Not stale
/remove-lifecycle stale
This would be really helpful.
This would be great
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale