Kops: Support real SSL certs for API server

Created on 28 Sep 2016 · 29 comments · Source: kubernetes/kops

It would be nice to be able to provide real certificates for the api server.

Another option would be to support letsencrypt.

area/security lifecycle/rotten

Most helpful comment

This issue is too important to be left open this long.
Also too important for hacky workarounds.

It is especially annoying because the dashboard is proxied through the k8s API URL too.

All 29 comments

@feniix we are discussing the addition of various plugins within kops. Will let you know.

I mentioned this on slack, but reposting here:

The way that kube-up and kops have set up k8s to date is with a single CA certificate, which then signs everything, including the API server certificate used for https. So it's not trivial to just swap out a single keypair.

But....
I was able to run with an ingress in front of the API server, using a kube-lego (letsencrypt) certificate - so any certificate would work. The trick was that the ingress re-encrypts the traffic - i.e. it talks https to the API server.

This does _not_ work with client certificate authentication, but it will work with token auth or basic auth. We do actually enable basic auth today (you can delete the client cert from your kubeconfig and it'll still work). And external auth systems tend to use token auth or JWT, I believe, both of which should also work.
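To make that concrete, here is a minimal kubeconfig sketch for going through such an ingress with basic auth and no client cert (hostnames reuse the example domain from this thread; the admin password is a placeholder for whatever credential your cluster uses):

apiVersion: v1
kind: Config
clusters:
- cluster:
    # no certificate-authority-data: the letsencrypt cert on the ingress
    # validates against the system trust store
    server: https://api.cluster.example.com
  name: cluster.example.com
contexts:
- context:
    cluster: cluster.example.com
    user: cluster.example.com-basic-auth
  name: cluster.example.com
current-context: cluster.example.com
users:
- name: cluster.example.com-basic-auth
  user:
    # no client-certificate-data / client-key-data
    username: admin
    password: <admin-password>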

Here's the ingress I used:

apiVersion: extensions/v1beta1   # Ingress API group/version at the time
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/tls-acme: "true"
  name: kubernetes
  namespace: default
spec:
  rules:
  - host: api.cluster.example.com
    http:
      paths:
      - backend:
          serviceName: kubernetes
          servicePort: 443
        path: /
  tls:
  - hosts:
    - api.cluster.example.com
    secretName: tls-cluster.example.com

The Ingress approach worked for me. The only issue was that I had to name my ingress something other than *api*.cluster.example.com, because kops had already created an ELB and Route 53 alias for that domain, and the auto-created ELB only forwards port 443, which was preventing kube-lego's self-check that looks for a response on port 80. I am now using *api-external*.cluster.example.com with its own ELB (redirecting 80 to 443) and a corresponding Route 53 record, and kube-lego was able to issue a certificate.
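For reference, only a handful of fields in the ingress above change for that setup - roughly this fragment (the api-external hostname and secret name are just the placeholders used in this comment):

metadata:
  name: kubernetes-external   # any name that doesn't collide with the kops-managed api record
spec:
  rules:
  - host: api-external.cluster.example.com
    # http: section unchanged from the example above
  tls:
  - hosts:
    - api-external.cluster.example.com
    secretName: tls-api-external.cluster.example.com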

@anurag @justinsb any idea how we could do this using a non-Let's Encrypt cert? For example, using an AWS Certificate Manager cert, or a non-AWS cert?

To me it sounds reasonable to add optional support for a user-provided CA cert on cluster create to solve this. Not sure if there is a reason not to allow that.

@anurag I'm relatively new to kops - would you mind describing the details of how to use kops with kube-lego? Do I still use the auto-generated ELB from kops?

@mad01 that would be useful for how I'm using kops and I'd like to help make it happen

@rklhui I ran into some other issues with kube-lego, so I switched to AWS Certificate Manager with SSL termination at the ELB.

@anurag I had an issue with the ELB not supporting the WebSocket upgrade required to, for example, exec a shell inside containers (and probably to stream logs as well). Is that working now?

Yeah, I can tail logs and run kubectl exec for k8s clusters behind an ELB.

Great! I'm going to try again, which k8s version are you on @anurag?

1.6.4

@anurag do you have any code / templates to show your approach?

Our approach was to:

  1. Add an AWS-provided cert to the kops-provisioned Kube API ELB.
  2. Set said ELB's listeners to SSL mode on both the front- and back-ends (this gets around the issue with kubectl exec needing to upgrade the connection).
  3. Comment out the cert/key data in kubeconfig (if you don't, you'll get "certificate signed by unknown authority" errors).

And that's all working fine for us. Would still like to see more kops support for it so that we wouldn't need to make a special setup note regarding that third point.
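To illustrate that third point, the parts of the kubeconfig that change look roughly like this (a fragment using the example domain from this thread; the essential bit is that the CA and client cert/key fields are gone, so the ELB's ACM cert is validated against the system trust store and basic auth is used instead):

clusters:
- cluster:
    # certificate-authority-data commented out: the ACM cert chains to a public CA
    server: https://api.cluster.example.com
  name: cluster.example.com
users:
- name: cluster.example.com
  user:
    # client-certificate-data / client-key-data commented out
    username: admin
    password: <admin-password>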

Hey @woodlee,

I've been trying it your way, but unfortunately now all my non-master nodes are in an Unknown state when I run kubectl get nodes. The masters are okay. Anything I might have missed?

@hrzbrg Sorry, nothing I'm aware of. We never encountered such an issue and I'm not sure what could be causing it since the process I describe should really only affect access to the master nodes. When we set that up we were using kops/k8s 1.6 ... possibly something has changed in later versions?

This issue is too important to be left open this long.
Also too important for hacky workarounds.

It is especially annoying because the dashboard is proxied through the k8s API URL too.

When we scanned a test cluster whilst evaluating kops, we came up with this:

Medium Strength Ciphers (> 64-bit and < 112-bit key, or 3DES)

ECDHE-RSA-DES-CBC3-SHA Kx=ECDH Au=RSA Enc=3DES-CBC(168) Mac=SHA1
DES-CBC3-SHA Kx=RSA Au=RSA Enc=3DES-CBC(168) Mac=SHA1 

The certificate was also self-signed. We'd love to be able to use kops and provide our own certificate.

I can verify that the approach @woodlee took seems to work. However, it appears that the ELB Listeners are not currently configurable in the _cluster.spec_, so there's no way to make this change in a way that kops is aware of.

Any updates on supporting something like this?

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

/remove-lifecycle rotten

This is still unresolved.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

I believe https://github.com/kubernetes/kops/pull/5414 addresses this for AWS.
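For anyone landing here later: if that PR works the way it reads, an ACM certificate can be attached to the API ELB straight from the cluster spec, roughly like this (field names taken from that PR - double-check against the kops release you run; the ARN is a placeholder):

spec:
  api:
    loadBalancer:
      type: Public
      sslCertificate: arn:aws:acm:us-east-1:123456789012:certificate/00000000-0000-0000-0000-000000000000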

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
