K3s: Help! k3s v0.7.0: user key generated with the CA fails to authenticate

Created on 28 Jul 2019  ·  10 Comments  ·  Source: k3s-io/k3s

I generated a user key signed by the CA, with cluster-admin RBAC permission.

But I got this:

kc --kubeconfig=koper.kubeconfig get pod
error: You must be logged in to the server (Unauthorized)

The k3s server's log:

server_1    | E0728 19:15:27.324625       8 authentication.go:65] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority

My key-generation script:

ws=/opt/sec-rbac
day=3650

clus_name="t1.k3s"
clus_ns="default"
user="koper"
clus_url="https://10.200.100.183:7442"
ca_path=$ws

ctx=gen && mkdir -p $ws/$ctx/{kube,keys} && cd $ws/$ctx
#############
generate="keys/u-"$user
echo -e "\033[32m#>>GEN-KEY\033[0m"
openssl genrsa -out $generate.key 2048
#openssl ecparam -name prime256v1 -genkey -noout -out $generate.key
openssl req -new -key $generate.key -out $generate.csr -subj "/CN=${user}@${clus_name}/O=key-gen"
openssl x509 -req -in $generate.csr -CA $ca_path/ca.crt -CAkey $ca_path/ca.key -CAcreateserial -out $generate.crt -days $day

#-----------
ctx2="$user@$clus_name"
config="kube/$user.kubeconfig"
echo -e "\033[32m#>>KUBE-CONFIG\033[0m" 
kubectl --kubeconfig=$config config set-cluster $clus_name --embed-certs=true --server=$clus_url --certificate-authority=$ca_path/ca.crt
kubectl --kubeconfig=$config config set-credentials $user --embed-certs=true --client-certificate=$generate.crt  --client-key=$generate.key
kubectl --kubeconfig=$config config set-context $ctx2 --cluster=$clus_name --namespace=$clus_ns --user=$user
kubectl --kubeconfig=$config config set current-context $ctx2
kubectl --kubeconfig=$config --context=$ctx2 get pods


All 10 comments

The generated kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUyTkRNd05qWXlNekFlRncweE9UQTNNamd3T1RNM01ETmFGdzB5T1RBM01qVXdPVE0zTUROYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUyTkRNd05qWXlNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkpkQkZ2a0dtR1dmYXJhZndJOGVRWDdneWRGdWxYaGMyQU1NbHdLYm5XOXIKZ0k5NFdPbmN2OUdKQzNzTitiK3dtWDJOTEJNWXFLNGZWNjUvam1aZkZRK2pJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDZjlSaFJ4cXhHCnZVdGpsOHhIcDFyQ1MrbS96WW5HNnlnQVZ2ZzZpU2djVHdJZ1BUVzhGY0ZKUWdpdU9kN0c4N1I3Zi9HTmdZV24KQWhGdDlTVUZ5OEdHVmFFPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.200.100.183:7442
  name: t1.k3s
contexts:
- context:
    cluster: t1.k3s
    namespace: default
    user: koper
  name: koper@t1.k3s
current-context: koper@t1.k3s
kind: Config
preferences: {}
users:
- name: koper
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNCakNDQWEwQ0NRRDl4QVMrUVJZbERUQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXQKYzJWeWRtVnlMV05oUURFMU5qUXpNRFkyTWpNd0hoY05NVGt3TnpJNE1USXdNRE01V2hjTk1qa3dOekkxTVRJdwpNRE01V2pBcE1SVXdFd1lEVlFRRERBeHJiM0JsY2tCME1TNXJNM014RURBT0JnTlZCQW9NQjJ0bGVTMW5aVzR3CmdnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUQ3UmdhMmpDUjBrRUlIS21jN1pNMGYKVUlBYnNaVURsTTZob3ROQkNQK29YWFFHeFBwYXNGaWgrTzhLc0JnZHFrTkFUL2s4bUkrTHM3SEhBelVkSnFFbgpKeFAvdFR0WG5LZjQvNno5OFlJa0JsK2k1OUdQS0VuNCtNK1RIYy92eGRnT21PVCtaZ05qWW4zMWMwN21VOVdtCkVhS0xlOHM2RlB5UzZiOCtDVTMrUWsyNHVPMlZ3dTA5SUJJTnc1L3o3V2daSHplM1RJS3Jkd0hKVHVZQThZNnoKL2VnVWRKb01JcFJkMGZxTDVCRSsxa0JqT2t2d2thVFhoY0lCMmV6bk5RLy82K2JsbUMwbjdBYm1tSTJJblRHQwprV25naEZjakE2d29lUWw4bUZwTml0b3R1aWMvaGhGRnI0LzNKbWZFSUpJaG80MEl2bWZ0ak1Sdk45aEo3ZUhUCkFnTUJBQUV3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnRnMreHFXYzNzVjg1dmVpSlhpTjJmaTNmRm9xQ3QyUm4KbngwNzlKNmhWRTBDSUhFU2M3dS8yeHlGZzhqc0VIWGwyMlhGTmdWUmx5bW5STW1wM0tLK1lUaDcKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBKzBZR3Rvd2tkSkJDQnlwbk8yVE5IMUNBRzdHVkE1VE9vYUxUUVFqL3FGMTBCc1Q2CldyQllvZmp2Q3JBWUhhcERRRS81UEppUGk3T3h4d00xSFNhaEp5Y1QvN1U3VjV5bitQK3MvZkdDSkFaZm91ZlIKanloSitQalBreDNQNzhYWURwamsvbVlEWTJKOTlYTk81bFBWcGhHaWkzdkxPaFQ4a3VtL1BnbE4va0pOdUxqdApsY0x0UFNBU0RjT2Y4KzFvR1I4M3QweUNxM2NCeVU3bUFQR09zLzNvRkhTYURDS1VYZEg2aStRUlB0WkFZenBMCjhKR2sxNFhDQWRuczV6VVAvK3ZtNVpndEord0c1cGlOaUoweGdwRnA0SVJYSXdPc0tIa0pmSmhhVFlyYUxib24KUDRZUlJhK1A5eVpueENDU0lhT05DTDVuN1l6RWJ6ZllTZTNoMHdJREFRQUJBb0lCQVFEVUFrR254SmI5d3JuegpVZFBJU1VUSkp5THdPdVdBSUE0NFV5bnJ0YXdBWXRtQzNMQmYxR3IwUHhWeDd5SnA1VDdaQktGR2YzS2ViUCtTCjZ5SGxkcktDVm5hSlNtREhpMll1c1l0RXVJRVY1RXJOS011bi9sWnJ1NE5vbmI3VWtCbThOMFQvWVJONng1OS8KZWNzWWk2TzRleWlxaDhqeE9NUGpNVllyQWE3TTEzaTlkTi9OT1BSa2F0aG5rL2txeGNMaFYxK3diODVGNVNPbAp5dVJ5WU03OGtYakxHUGkxY3dKcHhScU9KcWNxMDAzQjFrN3U5VG9uSjdKOFR6MjJyc2tUdFRSN3o1eGR2VDRvCkFLSWVCNnpmMko0K0o5N3piN3lYUVFxV0ZrZk9EaTJSSlY2SXVZbFFQSkFUZVcwaVJ2RjR1dnJhTGtYZllnU1cKbmtRaGRFQUJBb0dCQVA1MzgyVTB6YUx4MVcxT2xhbWVod3dqV2JES3k4ZEN6VEVwR0ZmZGlhUFVOaUhqVXBOMgpGaExndS9LS05SWmNPZlVBbnhMSTNjWUg1UWQ1ZUlDZnh5UGtlek1RdzQ3TnRwbUgvN3VXenhtbXFodHBCdVN1ClVLQ1QwOGV0SlJEY3JET3lNMHJkQXM5U1htcW9mVGZqT2VwQnJSMnBCWEtkbi9iQWJGcTl1V1RIQW9HQkFQekoKSnl6UVFEN21IQ1BGeWZWelZLcVlCVG5adzdOMDNxamZHOWRXREZzSTBMWmJIKzhjcVVqRFJnTGhZYkkyTVJITwpTbzJUUFNsWXFBUndxenA1VENWSDVJUXljUE41eXpsWVlWMm5pWmtvVXBNYUlONHVhTCt5U1QraEUzSGRnam5rCm45TklVOFJHakkwN3AyZ0w2MDRaUFFMVTNZeno0M09ranpmcldmYVZBb0dCQUtEVWxVdjQ5S013NzdDM1ExWkMKTUo2V1ZTQ3MrK0NEc3dhSUw2K1JBR1pBUUxwb1g0OTl5ZlBDZ0dlSnZJWFdZbmNjSG00VDhEOHlUQ25PTjBBcwpQQVBPYTZOWnpBK2NxdlVjaEtBK2I4U0psdWZlR0pJK0xnMWZnVEdwbUV5dy9GRnNKb2tCYUw0NkZCeWJReEVvCmx6a2NxMXFjc2ltL3dCT0hpTFJOUnppUEFvR0JBUHJ6SXYzOUc5cVZqSmdDMGZUbThzV010NXR2MFRXRnIwb00KZTlJeHJZQnVadXl4MkNrRDFoYlRMTnpOTExURDBjRHdmOWkrdERnb3VGdjRFalN4bUdObVZMamNibjkzaU1XOApOS1RLSHZLNk1nZXhKN0lLZHBqZ0FKRzNjZHRYWU9IaVVyeG9rQ2hKTlYwOFBId3hZUDhlVlJCTGpFcFRFSm1NClkxWExRbnRsQW9HQVhGNU5ianJZNkl
CMmc2cDJvNkNjNVNmbnN2S2JrZm9XNDFQaUFlZkxPUjhmU2E5TkZ1UnEKcnNHT3FpWGdzYnBPTHQ0b1VqbmNmYXRJUXBnSG5tQ1QvZmZLUEdObEE4NUt5VVBhOGtpVDEzbXZjT28xT0JaRApQL0JELzcvNGY3allGK3RqUExYZXA0NzNvLzlSYVQyWGMzUXZZdTBlZExTQm5qb2szV3UrSTk4PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
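
When authentication fails like this, it helps to decode what is actually embedded in certificate-authority-data, since with a split-CA setup the wrong CA can end up in that field. A minimal sketch using a throwaway CA (the real data is cluster-specific, so the names here are illustrative):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
# Stand-in for the real CA: generate a throwaway self-signed CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1 2>/dev/null
# Embed it the way `kubectl config set-cluster --embed-certs=true` does.
printf 'certificate-authority-data: %s\n' "$(base64 < ca.crt | tr -d '\n')" > kc.yaml
# Decode the field and print the subject of whatever CA is really in there.
awk '/certificate-authority-data/{print $2}' kc.yaml | base64 -d | \
  openssl x509 -noout -subject
```

Running the same awk/base64/openssl pipeline against a real kubeconfig shows immediately whether the embedded CA is the server CA or the client CA (GNU coreutils `base64 -d` assumed).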

Then I changed the user credential to the admin username and password, and that works fine:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUyTkRNd05qWXlNekFlRncweE9UQTNNamd3T1RNM01ETmFGdzB5T1RBM01qVXdPVE0zTUROYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUyTkRNd05qWXlNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkpkQkZ2a0dtR1dmYXJhZndJOGVRWDdneWRGdWxYaGMyQU1NbHdLYm5XOXIKZ0k5NFdPbmN2OUdKQzNzTitiK3dtWDJOTEJNWXFLNGZWNjUvam1aZkZRK2pJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDZjlSaFJ4cXhHCnZVdGpsOHhIcDFyQ1MrbS96WW5HNnlnQVZ2ZzZpU2djVHdJZ1BUVzhGY0ZKUWdpdU9kN0c4N1I3Zi9HTmdZV24KQWhGdDlTVUZ5OEdHVmFFPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.200.100.183:7442
  name: t1.k3s
contexts:
- context:
    cluster: t1.k3s
    namespace: default
    user: koper
  name: koper@t1.k3s
current-context: koper@t1.k3s
kind: Config
preferences: {}
users:
- name: koper
  user:
    password: 3f43d55928f3d7fc7277d28d519b4ec8
    username: admin

Can you confirm that your ca.crt is consistent with the apiserver's --client-ca-file?

We are using separate server-ca and client-ca certificates. It may be that the cluster's certificate-authority-data needs to be the server-ca, while the user's client-certificate-data/key need to be a cert/key pair signed by the client-ca cert/key.
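
One way to confirm which CA signed a client cert is `openssl verify` against each candidate CA: only the signing CA reports OK. A self-contained sketch reproducing the split-CA setup with throwaway certs (the file names mirror k3s's client-ca/server-ca but are generated locally, not the real files):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
# Two distinct CAs, mirroring k3s's client-ca / server-ca split.
for ca in client-ca server-ca; do
  openssl req -x509 -newkey rsa:2048 -nodes -keyout $ca.key -out $ca.crt \
    -subj "/CN=demo-$ca" -days 1 2>/dev/null
done
# User cert signed by the client CA only, same commands as the script above.
openssl ecparam -name prime256v1 -genkey -noout -out u-koper.key
openssl req -new -key u-koper.key -out u-koper.csr -subj "/CN=koper@t1.k3s/O=key-gen"
openssl x509 -req -in u-koper.csr -CA client-ca.crt -CAkey client-ca.key \
  -CAcreateserial -out u-koper.crt -days 1 2>/dev/null
# Succeeds against client-ca, fails against server-ca.
openssl verify -CAfile client-ca.crt u-koper.crt
openssl verify -CAfile server-ca.crt u-koper.crt || true
```

The same check against the real /var/lib/rancher/k3s/server/tls/*-ca.crt files would have shown the original kubeconfig was pairing a client cert with the wrong CA.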


Thanks, it works for me now:

[root@(⎈ |default:default) kube]$ kc --kubeconfig=koper.kubeconfig get pod -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-b7464766c-g27sh         1/1     Running   2          9m57s
kube-system   rbac-manager-79bdb8757d-9p2f6   1/1     Running   0          100s
[root@(⎈ |default:default) sec-rbac]$ rbac-lookup k3s
SUBJECT         SCOPE          ROLE
koper@t1.k3s    cluster-wide   ClusterRole/cluster-admin
[root@(⎈ |default:default) kube]$ cat koper.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/sec-rbac/server/tls/server-ca.crt
    server: https://server:6443
  name: t1.k3s
contexts:
- context:
    cluster: t1.k3s
    namespace: default
    user: koper
  name: koper@t1.k3s
current-context: koper@t1.k3s
kind: Config
preferences: {}
users:
- name: koper
  user:
    client-certificate: /opt/sec-rbac/gen/keys/u-koper.crt
    client-key: /opt/sec-rbac/gen/keys/u-koper.key
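
Another quick sanity check before pointing kubectl at files like these: make sure client-certificate and client-key actually belong together by comparing their public keys. A sketch with throwaway files standing in for u-koper.crt/u-koper.key:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
# Throwaway EC key plus a self-signed cert derived from it.
openssl ecparam -name prime256v1 -genkey -noout -out u-koper.key
openssl req -new -x509 -key u-koper.key -out u-koper.crt \
  -subj "/CN=koper@t1.k3s/O=key-gen" -days 1
# The public key extracted from each side must match exactly.
pub_crt=$(openssl x509 -in u-koper.crt -noout -pubkey)
pub_key=$(openssl pkey -in u-koper.key -pubout)
[ "$pub_crt" = "$pub_key" ] && echo "cert and key match"
```

A mismatch here produces the same opaque "Unauthorized" from the apiserver as a wrong CA, so it is worth ruling out first.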

The script and environment I currently use:

[root@(⎈ |default:default) sec-rbac]$ tree
.
├── gen
│   ├── keys
│   │   ├── u-koper.crt
│   │   ├── u-koper.csr
│   │   └── u-koper.key
│   └── kube
│       └── koper.kubeconfig
├── server
│   ├── cred
│   │   ├── admin.kubeconfig
│   │   ├── api-server.kubeconfig
│   │   ├── controller.kubeconfig
│   │   ├── node-passwd
│   │   ├── passwd
│   │   └── scheduler.kubeconfig
│   ├── db
│   │   ├── state.db
│   │   ├── state.db-shm
│   │   └── state.db-wal
│   ├── manifests
│   │   ├── coredns.yaml
│   │   └── rolebindings.yaml
│   ├── node-token
│   ├── static
│   │   └── charts
│   │       └── traefik-1.64.0.tgz
│   └── tls
│       ├── client-admin.crt
│       ├── client-admin.key
│       ├── client-auth-proxy.crt
│       ├── client-auth-proxy.key
│       ├── client-ca.crt
│       ├── client-ca.key
│       ├── client-ca.srl
│       ├── client-controller.crt
│       ├── client-controller.key
│       ├── client-kube-apiserver.crt
│       ├── client-kube-apiserver.key
│       ├── client-kube-proxy.crt
│       ├── client-kube-proxy.key
│       ├── client-kubelet.key
│       ├── client-scheduler.crt
│       ├── client-scheduler.key
│       ├── request-header-ca.crt
│       ├── request-header-ca.key
│       ├── server-ca.crt
│       ├── server-ca.key
│       ├── service.key
│       ├── serving-kube-apiserver.crt
│       ├── serving-kube-apiserver.key
│       ├── serving-kubelet.key
│       └── temporary-certs
│           ├── apiserver-loopback-client__.crt
│           └── apiserver-loopback-client__.key
└── t2.sh

11 directories, 44 files
[root@(⎈ |default:default) sec-rbac]$ cat t2.sh 
ws=/opt/sec-rbac
day=3650

clus_name="t1.k3s"
clus_ns="default"
user="koper"
#clus_url="https://10.200.100.183:7442"
clus_url="https://server:6443"  ##
ca_path=$ws/server/tls
rm -f $ca_path/*-ca.srl

ctx=gen && mkdir -p $ws/$ctx/{kube,keys} && cd $ws/$ctx
#############
ca1=client-ca
generate="keys/u-"$user
echo -e "\033[32m#>>GEN-KEY\033[0m"
#openssl genrsa -out $generate.key 2048
openssl ecparam -name prime256v1 -genkey -noout -out $generate.key
openssl req -new -key $generate.key -out $generate.csr -subj "/CN=${user}@${clus_name}/O=key-gen"
openssl x509 -req -in $generate.csr -CA $ca_path/$ca1.crt -CAkey $ca_path/$ca1.key -CAcreateserial -out $generate.crt -days $day

#-----------
#generate=$ca_path/client-admin  ##test
ca2=server-ca
embed=false
ctx2="$user@$clus_name"
config="kube/$user.kubeconfig"
echo -e "\033[32m#>>KUBE-CONFIG\033[0m" 
kubectl --kubeconfig=$config config set-cluster $clus_name --embed-certs=$embed --server=$clus_url --certificate-authority=$ca_path/$ca2.crt
kubectl --kubeconfig=$config config set-credentials $user --embed-certs=$embed --client-certificate=$generate.crt  --client-key=$generate.key
kubectl --kubeconfig=$config config set-context $ctx2 --cluster=$clus_name --namespace=$clus_ns --user=$user
kubectl --kubeconfig=$config config set current-context $ctx2
kubectl --kubeconfig=$config --context=$ctx2 get pods

ca1=client-ca
ca2=server-ca

Can you confirm that your ca.crt is consistent with the apiserver's --client-ca-file?

Thanks, resolved.

Same problem; I created a script to help create a new client cert.

The problem is that the CA that should sign the certificates for the user is not the same one the server uses for its API. What needs to be done is to copy two files from the server/tls directory and use them to sign:

 # mkdir ~/tls && cd ~/tls
 # cp /var/lib/rancher/k3s/server/tls/client-ca.{crt,key} ~/tls

Generate the key and certificate signing request:

 # openssl genrsa -out user.key 4096
 # openssl req -new -key user.key -out user.csr -subj "/CN=user@default/O=admins"

Sign the certificate:

 # openssl x509 -req -in user.csr -CA client-ca.crt -CAkey client-ca.key -CAcreateserial -out user.crt -days 3650

Now your user.kubeconfig should look like this:

 apiVersion: v1
 clusters:     
 - cluster:
     certificate-authority: /root/server-ca.crt
     server: https://server:6443
   name: default
 contexts:
 - context:
     cluster: default
     namespace: ns01
     user: user
   name: user@default
 current-context: user@default
 kind: Config
 preferences: {}
 users:
 - name: user
   user:
     client-certificate: /root/tls/user.crt
     client-key: /root/tls/user.key

Thank you for the explanation! Two questions:

  • Does the certificate's subject need to contain specific values, or can it be anything?
  • How do you map the client certificate to an API server role?
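
For what it's worth (this is general Kubernetes x509 client-auth behavior, not something stated in this thread): the apiserver takes the username from the certificate's CN and the group memberships from its O fields, so the subject does matter, and RBAC bindings then reference that username or group. A sketch showing which identity a given subject would map to, using an illustrative group name:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
# CN becomes the Kubernetes username; each O entry becomes a group.
openssl ecparam -name prime256v1 -genkey -noout -out user.key
openssl req -new -x509 -key user.key -out user.crt -days 1 \
  -subj "/CN=koper@t1.k3s/O=dev-team"
openssl x509 -in user.crt -noout -subject
# An RBAC binding for this cert would then name either:
#   kind: User,  name: "koper@t1.k3s"
#   kind: Group, name: "dev-team"      (dev-team is a made-up example group)
```

This is why the thread's rbac-lookup output shows the subject koper@t1.k3s bound to ClusterRole/cluster-admin: the binding matches the certificate's CN.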