I generated a user key signed by the CA and granted it cluster-admin RBAC permissions, but I get this:
kc --kubeconfig=koper.kubeconfig get pod
error: You must be logged in to the server (Unauthorized)
k3s server's log:
server_1 | E0728 19:15:27.324625 8 authentication.go:65] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
My user-key generation script:
ws=/opt/sec-rbac
day=3650
clus_name="t1.k3s"
clus_ns="default"
user="koper"
clus_url="https://10.200.100.183:7442"
ca_path=$ws
ctx=gen && mkdir -p $ws/$ctx/{kube,keys} && cd $ws/$ctx
#############
generate="keys/u-"$user
echo -e "\033[32m#>>GEN-KEY\033[0m"
openssl genrsa -out $generate.key 2048
#openssl ecparam -name prime256v1 -genkey -noout -out $generate.key
openssl req -new -key $generate.key -out $generate.csr -subj "/CN=${user}@${clus_name}/O=key-gen"
openssl x509 -req -in $generate.csr -CA $ca_path/ca.crt -CAkey $ca_path/ca.key -CAcreateserial -out $generate.crt -days $day
#-----------
ctx2="$user@$clus_name"
config="kube/$user.kubeconfig"
echo -e "\033[32m#>>KUBE-CONFIG\033[0m"
kubectl --kubeconfig=$config config set-cluster $clus_name --embed-certs=true --server=$clus_url --certificate-authority=$ca_path/ca.crt
kubectl --kubeconfig=$config config set-credentials $user --embed-certs=true --client-certificate=$generate.crt --client-key=$generate.key
kubectl --kubeconfig=$config config set-context $ctx2 --cluster=$clus_name --namespace=$clus_ns --user=$user
kubectl --kubeconfig=$config config set current-context $ctx2
kubectl --kubeconfig=$config --context=$ctx2 get pods
The generated kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUyTkRNd05qWXlNekFlRncweE9UQTNNamd3T1RNM01ETmFGdzB5T1RBM01qVXdPVE0zTUROYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUyTkRNd05qWXlNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkpkQkZ2a0dtR1dmYXJhZndJOGVRWDdneWRGdWxYaGMyQU1NbHdLYm5XOXIKZ0k5NFdPbmN2OUdKQzNzTitiK3dtWDJOTEJNWXFLNGZWNjUvam1aZkZRK2pJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDZjlSaFJ4cXhHCnZVdGpsOHhIcDFyQ1MrbS96WW5HNnlnQVZ2ZzZpU2djVHdJZ1BUVzhGY0ZKUWdpdU9kN0c4N1I3Zi9HTmdZV24KQWhGdDlTVUZ5OEdHVmFFPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.200.100.183:7442
  name: t1.k3s
contexts:
- context:
    cluster: t1.k3s
    namespace: default
    user: koper
  name: koper@t1.k3s
current-context: koper@t1.k3s
kind: Config
preferences: {}
users:
- name: koper
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNCakNDQWEwQ0NRRDl4QVMrUVJZbERUQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXQKYzJWeWRtVnlMV05oUURFMU5qUXpNRFkyTWpNd0hoY05NVGt3TnpJNE1USXdNRE01V2hjTk1qa3dOekkxTVRJdwpNRE01V2pBcE1SVXdFd1lEVlFRRERBeHJiM0JsY2tCME1TNXJNM014RURBT0JnTlZCQW9NQjJ0bGVTMW5aVzR3CmdnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUQ3UmdhMmpDUjBrRUlIS21jN1pNMGYKVUlBYnNaVURsTTZob3ROQkNQK29YWFFHeFBwYXNGaWgrTzhLc0JnZHFrTkFUL2s4bUkrTHM3SEhBelVkSnFFbgpKeFAvdFR0WG5LZjQvNno5OFlJa0JsK2k1OUdQS0VuNCtNK1RIYy92eGRnT21PVCtaZ05qWW4zMWMwN21VOVdtCkVhS0xlOHM2RlB5UzZiOCtDVTMrUWsyNHVPMlZ3dTA5SUJJTnc1L3o3V2daSHplM1RJS3Jkd0hKVHVZQThZNnoKL2VnVWRKb01JcFJkMGZxTDVCRSsxa0JqT2t2d2thVFhoY0lCMmV6bk5RLy82K2JsbUMwbjdBYm1tSTJJblRHQwprV25naEZjakE2d29lUWw4bUZwTml0b3R1aWMvaGhGRnI0LzNKbWZFSUpJaG80MEl2bWZ0ak1Sdk45aEo3ZUhUCkFnTUJBQUV3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnRnMreHFXYzNzVjg1dmVpSlhpTjJmaTNmRm9xQ3QyUm4KbngwNzlKNmhWRTBDSUhFU2M3dS8yeHlGZzhqc0VIWGwyMlhGTmdWUmx5bW5STW1wM0tLK1lUaDcKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBKzBZR3Rvd2tkSkJDQnlwbk8yVE5IMUNBRzdHVkE1VE9vYUxUUVFqL3FGMTBCc1Q2CldyQllvZmp2Q3JBWUhhcERRRS81UEppUGk3T3h4d00xSFNhaEp5Y1QvN1U3VjV5bitQK3MvZkdDSkFaZm91ZlIKanloSitQalBreDNQNzhYWURwamsvbVlEWTJKOTlYTk81bFBWcGhHaWkzdkxPaFQ4a3VtL1BnbE4va0pOdUxqdApsY0x0UFNBU0RjT2Y4KzFvR1I4M3QweUNxM2NCeVU3bUFQR09zLzNvRkhTYURDS1VYZEg2aStRUlB0WkFZenBMCjhKR2sxNFhDQWRuczV6VVAvK3ZtNVpndEord0c1cGlOaUoweGdwRnA0SVJYSXdPc0tIa0pmSmhhVFlyYUxib24KUDRZUlJhK1A5eVpueENDU0lhT05DTDVuN1l6RWJ6ZllTZTNoMHdJREFRQUJBb0lCQVFEVUFrR254SmI5d3JuegpVZFBJU1VUSkp5THdPdVdBSUE0NFV5bnJ0YXdBWXRtQzNMQmYxR3IwUHhWeDd5SnA1VDdaQktGR2YzS2ViUCtTCjZ5SGxkcktDVm5hSlNtREhpMll1c1l0RXVJRVY1RXJOS011bi9sWnJ1NE5vbmI3VWtCbThOMFQvWVJONng1OS8KZWNzWWk2TzRleWlxaDhqeE9NUGpNVllyQWE3TTEzaTlkTi9OT1BSa2F0aG5rL2txeGNMaFYxK3diODVGNVNPbAp5dVJ5WU03OGtYakxHUGkxY3dKcHhScU9KcWNxMDAzQjFrN3U5VG9uSjdKOFR6MjJyc2tUdFRSN3o1eGR2VDRvCkFLSWVCNnpmMko0K0o5N3piN3lYUVFxV0ZrZk9EaTJSSlY2SXVZbFFQSkFUZVcwaVJ2RjR1dnJhTGtYZllnU1cKbmtRaGRFQUJBb0dCQVA1MzgyVTB6YUx4MVcxT2xhbWVod3dqV2JES3k4ZEN6VEVwR0ZmZGlhUFVOaUhqVXBOMgpGaExndS9LS05SWmNPZlVBbnhMSTNjWUg1UWQ1ZUlDZnh5UGtlek1RdzQ3TnRwbUgvN3VXenhtbXFodHBCdVN1ClVLQ1QwOGV0SlJEY3JET3lNMHJkQXM5U1htcW9mVGZqT2VwQnJSMnBCWEtkbi9iQWJGcTl1V1RIQW9HQkFQekoKSnl6UVFEN21IQ1BGeWZWelZLcVlCVG5adzdOMDNxamZHOWRXREZzSTBMWmJIKzhjcVVqRFJnTGhZYkkyTVJITwpTbzJUUFNsWXFBUndxenA1VENWSDVJUXljUE41eXpsWVlWMm5pWmtvVXBNYUlONHVhTCt5U1QraEUzSGRnam5rCm45TklVOFJHakkwN3AyZ0w2MDRaUFFMVTNZeno0M09ranpmcldmYVZBb0dCQUtEVWxVdjQ5S013NzdDM1ExWkMKTUo2V1ZTQ3MrK0NEc3dhSUw2K1JBR1pBUUxwb1g0OTl5ZlBDZ0dlSnZJWFdZbmNjSG00VDhEOHlUQ25PTjBBcwpQQVBPYTZOWnpBK2NxdlVjaEtBK2I4U0psdWZlR0pJK0xnMWZnVEdwbUV5dy9GRnNKb2tCYUw0NkZCeWJReEVvCmx6a2NxMXFjc2ltL3dCT0hpTFJOUnppUEFvR0JBUHJ6SXYzOUc5cVZqSmdDMGZUbThzV010NXR2MFRXRnIwb00KZTlJeHJZQnVadXl4MkNrRDFoYlRMTnpOTExURDBjRHdmOWkrdERnb3VGdjRFalN4bUdObVZMamNibjkzaU1XOApOS1RLSHZLNk1nZXhKN0lLZHBqZ0FKRzNjZHRYWU9IaVVyeG9rQ2hKTlYwOFBId3hZUDhlVlJCTGpFcFRFSm1NClkxWExRbnRsQW9HQVhGNU5ianJZNklCMmc2cDJvNkNjNVNmbnN2S2JrZm9XNDFQaUFlZkxPUjhmU2E5TkZ1UnEKcnNHT3FpWGdzYnBPTHQ0b1VqbmNmYXRJUXBnSG5tQ1QvZmZLUEdObEE4NUt5VVBhOGtpVDEzbXZjT28xT0JaRApQL0JELzcvNGY3allGK3RqUExYZXA0NzNvLzlSYVQyWGMzUXZZdTBlZExTQm5qb2szV3UrSTk4PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
Then I changed the user credentials to the admin username and password; this works fine:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUyTkRNd05qWXlNekFlRncweE9UQTNNamd3T1RNM01ETmFGdzB5T1RBM01qVXdPVE0zTUROYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUyTkRNd05qWXlNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkpkQkZ2a0dtR1dmYXJhZndJOGVRWDdneWRGdWxYaGMyQU1NbHdLYm5XOXIKZ0k5NFdPbmN2OUdKQzNzTitiK3dtWDJOTEJNWXFLNGZWNjUvam1aZkZRK2pJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDZjlSaFJ4cXhHCnZVdGpsOHhIcDFyQ1MrbS96WW5HNnlnQVZ2ZzZpU2djVHdJZ1BUVzhGY0ZKUWdpdU9kN0c4N1I3Zi9HTmdZV24KQWhGdDlTVUZ5OEdHVmFFPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.200.100.183:7442
  name: t1.k3s
contexts:
- context:
    cluster: t1.k3s
    namespace: default
    user: koper
  name: koper@t1.k3s
current-context: koper@t1.k3s
kind: Config
preferences: {}
users:
- name: koper
  user:
    password: 3f43d55928f3d7fc7277d28d519b4ec8
    username: admin
Can you confirm that your ca.crt is consistent with the apiserver's --client-ca-file?
We are using separate server-ca and client-ca certificates. It may be that the cluster's certificate-authority-data needs to be the server-ca, while the user's client-certificate-data/key needs to be a cert/key signed by the client-ca cert/key.
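That separation can be demonstrated with openssl alone. A minimal, self-contained sketch using two throwaway CAs standing in for k3s's client-ca and server-ca (not the actual cluster files):

```shell
#!/bin/sh
# Two throwaway CAs playing the roles of k3s's client-ca and server-ca.
set -e
cd "$(mktemp -d)"

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout client-ca.key -out client-ca.crt -subj "/CN=client-ca"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout server-ca.key -out server-ca.crt -subj "/CN=server-ca"

# User cert signed by the client CA, as the apiserver expects.
openssl genrsa -out user.key 2048
openssl req -new -key user.key -out user.csr -subj "/CN=koper@t1.k3s/O=key-gen"
openssl x509 -req -in user.csr -CA client-ca.crt -CAkey client-ca.key \
  -CAcreateserial -out user.crt -days 1

# Verifies against the CA that signed it...
openssl verify -CAfile client-ca.crt user.crt    # prints: user.crt: OK
# ...but fails against the other CA -- the same mismatch the apiserver
# logs as "certificate signed by unknown authority".
openssl verify -CAfile server-ca.crt user.crt || echo "rejected, as expected"
```

Signing against the wrong CA in the original script produces exactly the second case.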
Thanks, it works for me now:
[root@(⎈ |default:default) kube]$ kc --kubeconfig=koper.kubeconfig get pod -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-b7464766c-g27sh         1/1     Running   2          9m57s
kube-system   rbac-manager-79bdb8757d-9p2f6   1/1     Running   0          100s
[root@(⎈ |default:default) sec-rbac]$ rbac-lookup k3s
SUBJECT        SCOPE          ROLE
koper@t1.k3s   cluster-wide   ClusterRole/cluster-admin
[root@(⎈ |default:default) kube]$ cat koper.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/sec-rbac/server/tls/server-ca.crt
    server: https://server:6443
  name: t1.k3s
contexts:
- context:
    cluster: t1.k3s
    namespace: default
    user: koper
  name: koper@t1.k3s
current-context: koper@t1.k3s
kind: Config
preferences: {}
users:
- name: koper
  user:
    client-certificate: /opt/sec-rbac/gen/keys/u-koper.crt
    client-key: /opt/sec-rbac/gen/keys/u-koper.key
The script and environment I currently use:
[root@(⎈ |default:default) sec-rbac]$ tree
.
├── gen
│   ├── keys
│   │   ├── u-koper.crt
│   │   ├── u-koper.csr
│   │   └── u-koper.key
│   └── kube
│       └── koper.kubeconfig
├── server
│   ├── cred
│   │   ├── admin.kubeconfig
│   │   ├── api-server.kubeconfig
│   │   ├── controller.kubeconfig
│   │   ├── node-passwd
│   │   ├── passwd
│   │   └── scheduler.kubeconfig
│   ├── db
│   │   ├── state.db
│   │   ├── state.db-shm
│   │   └── state.db-wal
│   ├── manifests
│   │   ├── coredns.yaml
│   │   └── rolebindings.yaml
│   ├── node-token
│   ├── static
│   │   └── charts
│   │       └── traefik-1.64.0.tgz
│   └── tls
│       ├── client-admin.crt
│       ├── client-admin.key
│       ├── client-auth-proxy.crt
│       ├── client-auth-proxy.key
│       ├── client-ca.crt
│       ├── client-ca.key
│       ├── client-ca.srl
│       ├── client-controller.crt
│       ├── client-controller.key
│       ├── client-kube-apiserver.crt
│       ├── client-kube-apiserver.key
│       ├── client-kube-proxy.crt
│       ├── client-kube-proxy.key
│       ├── client-kubelet.key
│       ├── client-scheduler.crt
│       ├── client-scheduler.key
│       ├── request-header-ca.crt
│       ├── request-header-ca.key
│       ├── server-ca.crt
│       ├── server-ca.key
│       ├── service.key
│       ├── serving-kube-apiserver.crt
│       ├── serving-kube-apiserver.key
│       ├── serving-kubelet.key
│       └── temporary-certs
│           ├── apiserver-loopback-client__.crt
│           └── apiserver-loopback-client__.key
└── t2.sh

11 directories, 44 files
[root@(⎈ |default:default) sec-rbac]$ cat t2.sh
ws=/opt/sec-rbac
day=3650
clus_name="t1.k3s"
clus_ns="default"
user="koper"
#clus_url="https://10.200.100.183:7442"
clus_url="https://server:6443" ##
ca_path=$ws/server/tls
rm -f $ca_path/*-ca.srl
ctx=gen && mkdir -p $ws/$ctx/{kube,keys} && cd $ws/$ctx
#############
ca1=client-ca
generate="keys/u-"$user
echo -e "\033[32m#>>GEN-KEY\033[0m"
#openssl genrsa -out $generate.key 2048
openssl ecparam -name prime256v1 -genkey -noout -out $generate.key
openssl req -new -key $generate.key -out $generate.csr -subj "/CN=${user}@${clus_name}/O=key-gen"
openssl x509 -req -in $generate.csr -CA $ca_path/$ca1.crt -CAkey $ca_path/$ca1.key -CAcreateserial -out $generate.crt -days $day
#-----------
#generate=$ca_path/client-admin ##test
ca2=server-ca
embed=false
ctx2="$user@$clus_name"
config="kube/$user.kubeconfig"
echo -e "\033[32m#>>KUBE-CONFIG\033[0m"
kubectl --kubeconfig=$config config set-cluster $clus_name --embed-certs=$embed --server=$clus_url --certificate-authority=$ca_path/$ca2.crt
kubectl --kubeconfig=$config config set-credentials $user --embed-certs=$embed --client-certificate=$generate.crt --client-key=$generate.key
kubectl --kubeconfig=$config config set-context $ctx2 --cluster=$clus_name --namespace=$clus_ns --user=$user
kubectl --kubeconfig=$config config set current-context $ctx2
kubectl --kubeconfig=$config --context=$ctx2 get pods
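As a quick sanity check after running the script, the user cert's issuer can be compared with the client CA's subject (this check is my addition, reusing the paths from the script above):

```shell
# Sanity check: client auth only passes if the user cert was issued by the
# CA the apiserver trusts for clients (client-ca here).
ca_path=/opt/sec-rbac/server/tls
iss=$(openssl x509 -noout -issuer  -in keys/u-koper.crt      | sed 's/^issuer=//')
sub=$(openssl x509 -noout -subject -in $ca_path/client-ca.crt | sed 's/^subject=//')
if [ "$iss" = "$sub" ]; then
    echo "u-koper.crt chains to client-ca"
else
    echo "CA mismatch: issuer '$iss' vs subject '$sub'"
fi
openssl verify -CAfile $ca_path/client-ca.crt keys/u-koper.crt
```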
Can you confirm that your ca.crt is consistent with the apiserver's --client-ca-file?

Thanks, resolved.
Same problem; I created a script to help create a new client cert.
The problem is that the CA that should sign the user's certificates is not the same one the server uses for its API.
What needs to be done is to copy two files from the server/tls directory and use them for signing:
# mkdir ~/tls && cd ~/tls
# cp /var/lib/rancher/k3s/server/tls/client-ca.{crt,key} ~/tls
Generate the key and certificate request:
# openssl genrsa -out user.key 4096
# openssl req -new -key user.key -out user.csr -subj "/CN=user@default/O=admins"
Sign the certificate:
# openssl x509 -req -in user.csr -CA client-ca.crt -CAkey client-ca.key -CAcreateserial -out user.crt -days 3650
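Before building the kubeconfig, it may be worth inspecting the signed cert; the CN/O pair is what Kubernetes maps to the username and group (this step is my addition, using the same file names as above):

```shell
# CN becomes the Kubernetes username, O the group -- both were set in the CSR.
openssl x509 -noout -subject -issuer -enddate -in user.crt
# Confirm it chains to the client CA copied from server/tls:
openssl verify -CAfile client-ca.crt user.crt    # prints: user.crt: OK
```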
Now your user01.kubeconfig should look like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/server-ca.crt
    server: https://server:6443
  name: default
contexts:
- context:
    cluster: default
    namespace: ns01
    user: user
  name: user@default
current-context: user@default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /root/tls/user.crt
    client-key: /root/tls/user.key
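One note: the certificate only authenticates; authorization still comes from RBAC. The CSR's CN becomes the username (user@default) and the O becomes a group (admins). A hypothetical binding for that group (name and role chosen for illustration, not taken from this thread):

```yaml
# Hypothetical ClusterRoleBinding: grants cluster-admin to the cert's
# O=admins group (use kind: User with name user@default to match the CN).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admins-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: admins
```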
Thank you for the explanation! Two questions: