K3s: Secrets should be encrypted at rest while using experimental --secrets-encryption flag

Created on 20 Mar 2020  ·  8 comments  ·  Source: k3s-io/k3s

Version:
k3s -v
k3s version v1.17.4-alpha1+k3s1 (e5e7617a)

K3s arguments:
--secrets-encryption

Describe the bug
Secrets should have been encrypted

To Reproduce
Install k3s, passing the --secrets-encryption flag.
List the secrets and decode them with base64:

kubectl get secrets --all-namespaces -o json

Confirm whether the values are encrypted.
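The decode step works because base64 is an encoding, not encryption: anyone holding the API output can reverse it. A minimal sketch (the kubectl line is shown as a comment; the secret name and key in it are illustrative):

```shell
# From a live cluster you would run something like:
#   kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 -d
# The round trip below shows why that recovers the plaintext:
encoded=$(printf 'hunter2' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints "hunter2"
```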

Expected behavior
Secrets should have been encrypted

Actual behavior
Secrets are not encrypted

Label: kind/bug

All 8 comments

Note that this is an experimental flag, and the CLI help indicates so. We plan to resolve this issue after the v1.17.4+k3s1 release.

kubectl decrypts the secret when it fetches it from the API; verifying the secret directly in the database shows that it has been encrypted correctly:

sqlite> select value from kine where name like "/registry/secrets/default/db-user-pass";
k8s:enc:aescbc:v1:aescbckey:�q�=x����51R"�O(,�-���d������A֚�����DĐ#A�}�Z�t
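Whether a stored row is protected can be read straight off that prefix; a small sketch of the check (the sample value mirrors the dump above):

```shell
# Rows in the kine table carry a provider envelope prefix when
# encryption is active; plaintext rows are raw protobuf.
is_encrypted() {
  case "$1" in
    k8s:enc:*) return 0 ;;   # aescbc/kms envelope
    *)         return 1 ;;   # plaintext
  esac
}
is_encrypted 'k8s:enc:aescbc:v1:aescbckey:...' && echo encrypted
```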

Does k3s support secret encryption with an external KMS?
I checked the RKE documentation and confirmed that secrets encryption with an external KMS is supported, but I am not sure about k3s. Please confirm. Thanks!

@amitkatyal no, it uses a static encryption key
https://github.com/rancher/k3s/blob/master/pkg/daemons/control/server.go#L1033
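For reference, the configuration k3s generates with --secrets-encryption is roughly the following EncryptionConfiguration (a sketch; the key name matches the aescbckey prefix visible in the database dump above, and the key value is a placeholder, since the real one is generated per cluster and stored in clear on disk):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: aescbckey
              secret: <base64-encoded 32-byte key>   # static, on disk
      - identity: {}
```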

@brandond , my requirement is to use "Custom configuration for at-rest data encryption" so that the encryption key is not stored in the clear on disk. Is there a way I can do this with k3s? Or is it planned for a future release? Or can I modify the encryption configuration file generated by k3s?

Kubernetes (kube-apiserver) supports integration with an external KMS when given a configuration like the one below.
Say I provide that configuration file using --kube-apiserver-arg:
will secrets encryption with an external KMS work? I am asking because no changes have been made to the kube-apiserver itself.

services:
  kube-api:
    extra_binds:
      - "/var/run/kmsplugin/:/var/run/kmsplugin/"
    secrets_encryption_config:
      enabled: true
      custom_config:
        apiVersion: apiserver.config.k8s.io/v1
        kind: EncryptionConfiguration
        resources:
          - resources:
              - secrets
            providers:
              - kms:
                  name: aws-encryption-provider
                  endpoint: unix:///var/run/kmsplugin/socket.sock
                  cachesize: 1000
                  timeout: 3s
              - identity: {}

I haven't tested it and can't make any guarantees, but we haven't removed anything from the apiserver, so what you're describing should work.
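An untested sketch, with the same caveat: k3s does not read RKE's cluster.yml, so the equivalent would be a standalone EncryptionConfiguration handed to the embedded apiserver (the file path and flag value below are illustrative):

```shell
# Write a standalone EncryptionConfiguration with the same KMS provider.
cat > encryption.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: aws-encryption-provider
          endpoint: unix:///var/run/kmsplugin/socket.sock
          cachesize: 1000
          timeout: 3s
      - identity: {}
EOF
# Then start the server pointing at it:
#   k3s server --kube-apiserver-arg=encryption-provider-config=$PWD/encryption.yaml
```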


@brandond , I applied the configuration as above, but after that k3s does not start and throws the error below.

"failed to create connection to unix socket: /var/run/kmsplugin/socket.sock, error: dial unix /var/run/kmsplugin/socket.sock: connect: connection refused"

Does that mean the kube-apiserver expects the KMS plugin to already be up and running? Any idea?

The above issue is fixed if I run aws-encryption-provider separately before starting k3s.
However, k3s still does not start and gets stuck somewhere else. K3s logs below. Any idea?

Oct 16 17:29:53 amitka-VirtualBox k3s[31693]: I1016 17:29:53.202961 31693 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Oct 16 17:30:50 amitka-VirtualBox k3s[31693]: time="2020-10-16T17:30:50.998417505+05:30" level=fatal msg="starting kubernetes: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\n[+]log ok\n[+]etcd ok\n[-]kms-provider-0 failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed\") has prevented the request from succeeding"
Oct 16 17:30:51 amitka-VirtualBox systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE

This is all just upstream Kubernetes stuff - it does look like you are responsible for ensuring that the kms plugin is already running.

https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#deploying-the-kms-plugin
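One way to satisfy that ordering requirement is to gate k3s startup on the plugin's unix socket; a small sketch (paths and the timeout are illustrative):

```shell
# Poll for a unix socket for up to $2 seconds before giving up.
wait_for_socket() {
  path=$1; timeout=${2:-30}; waited=0
  while [ "$waited" -lt "$timeout" ]; do
    [ -S "$path" ] && return 0
    sleep 1
    waited=$((waited + 1))
  done
  [ -S "$path" ]
}
# Gate the server on the KMS plugin being ready, e.g.:
#   wait_for_socket /var/run/kmsplugin/socket.sock 30 && k3s server ...
```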
