Automatic proxy injection is not working at all: the proxy sidecar is not injected when a deployment is created in a namespace with the proper "linkerd.io/inject: enabled" annotation.
Configuration was done as described here -> https://linkerd.io/2/tasks/automating-injection
Admission registration is enabled on the cluster, and Linkerd was installed with the --proxy-auto-inject flag.
The "linkerd.io/inject: enabled" annotation was set on both the namespace and the deployment (also tested multiple times with different configurations: namespace only, deployment only, etc.).
I have tested our apps and also tried the helloworld example from the Linkerd tutorial.
No luck in any case.
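For reference, applying the namespace annotation looks roughly like this (a minimal sketch; the namespace name matches the test namespace shown below):
kubectl annotate namespace crow linkerd.io/inject=enabled   # namespace-level opt-in for auto-injection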
Output of: kubectl -n linkerd get deploy/linkerd-proxy-injector svc/linkerd-proxy-injector
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.extensions/linkerd-proxy-injector 1/1 1 1 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/linkerd-proxy-injector ClusterIP 10.101.230.48 <none> 443/TCP 20m
Proxy injector pod logs:
time="2019-05-24T08:37:12Z" level=info msg="running version stable-2.3.0"
time="2019-05-24T08:37:12Z" level=info msg="deleting existing webhook configuration"
time="2019-05-24T08:37:12Z" level=info msg="created webhook configuration: /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations/linkerd-proxy-injector-webhook-config"
time="2019-05-24T08:37:12Z" level=info msg="waiting for caches to sync"
time="2019-05-24T08:37:12Z" level=info msg="caches synced"
time="2019-05-24T08:37:12Z" level=info msg="starting admin server on :9995"
time="2019-05-24T08:37:12Z" level=info msg="listening at :8443"
Helloworld example pod (notice only one container):
NAME READY STATUS RESTARTS AGE
helloworld-fdb7dc65f-7k2wl 1/1 Running 0 9s
Deployment details:
Name: helloworld
Namespace: crow
CreationTimestamp: Fri, 24 May 2019 09:06:21 +0000
Labels: run=helloworld
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=helloworld
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=helloworld
Containers:
helloworld:
Image: buoyantio/helloworld
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: helloworld-fdb7dc65f (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 13m deployment-controller Scaled up replica set helloworld-fdb7dc65f to 1
Namespace configuration:
Name: crow
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"linkerd.io/inject":"enabled"},"name":"crow"}}
linkerd.io/inject: enabled
Status: Active
No resource quota.
No resource limits.
Output of: kubectl -n crow get po -l run=helloworld -o jsonpath='{.items[0].spec.containers[*].name}'
helloworld
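If injection were working, that same command should also list the sidecar container, i.e. something like:
helloworld linkerd-proxy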
Output of: linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ control plane namespace exists
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles
linkerd-version
---------------
× can determine the latest version
Get https://versioncheck.linkerd.io/version.json?version=stable-2.3.0&uuid=unknown&source=cli: dial tcp: lookup versioncheck.linkerd.io on <IP>:53: no such host
see https://linkerd.io/checks/#l5d-version-latest for hints
‼ cli is up-to-date
unsupported version channel: stable-2.3.0
see https://linkerd.io/checks/#l5d-version-cli for hints
control-plane-version
---------------------
‼ control plane is up-to-date
unsupported version channel: stable-2.3.0
see https://linkerd.io/checks/#l5d-version-control for hints
√ control plane and cli versions match
Status check results are ×
It doesn't look like your control plane workloads are injected with the Linkerd proxy. What is the output of linkerd check --proxy -n linkerd?
Also, what are you using to install the Linkerd control plane? If you happen to be using the Helm charts in the repo, note that those charts don't have the proxies injected. They are read by the CLI to produce the full YAML. You can use linkerd install (without piping it to kubectl) to get the complete YAML of the control plane.
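For example, something like this writes out the full manifest for inspection before applying it (the file name is just illustrative):
linkerd install --proxy-auto-inject > linkerd.yaml   # full control-plane manifest, proxies included
kubectl apply -f linkerd.yaml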
As per the proxy auto-inject tutorial, I am installing Linkerd using the Linux binary, with the following command:
linkerd install --proxy-auto-inject | kubectl apply -f -
Output of: linkerd check --proxy -n linkerd
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ control plane namespace exists
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles
linkerd-version
---------------
× can determine the latest version
Get https://versioncheck.linkerd.io/version.json?version=stable-2.3.0&uuid=unknown&source=cli: dial tcp: lookup versioncheck.linkerd.io on 43.194.55.13:53: no such host
see https://linkerd.io/checks/#l5d-version-latest for hints
‼ cli is up-to-date
unsupported version channel: stable-2.3.0
see https://linkerd.io/checks/#l5d-version-cli for hints
linkerd-data-plane
------------------
√ data plane namespace exists
√ data plane proxies are ready
√ data plane proxy metrics are present in Prometheus
‼ data plane is up-to-date
linkerd/linkerd-web-6b75cb8f77-rjn57: unsupported version channel: stable-2.3.0
see https://linkerd.io/checks/#l5d-data-plane-version for hints
√ data plane and cli versions match
Status check results are ×
List of all pods running in the linkerd namespace:
NAME READY STATUS RESTARTS AGE
linkerd-controller-599b8f8585-tglnz 4/4 Running 0 11h
linkerd-grafana-785f9979b6-h2cdc 2/2 Running 0 11h
linkerd-identity-5c9864d4f7-55pgk 2/2 Running 0 11h
linkerd-prometheus-6bdff98f8b-h9t6t 2/2 Running 0 11h
linkerd-proxy-injector-6b9558b695-tsgpz 2/2 Running 0 11h
linkerd-sp-validator-56599d4d8b-hdmh6 2/2 Running 0 11h
linkerd-web-6b75cb8f77-rjn57 2/2 Running 0 11h
Strange. The tutorial works for me, using stable-2.3. The proxy injector logs usually provide information on whether it receives the workload YAML or not. E.g., on my Minikube instance, I see something like
$ ./linkerd2-cli-stable-2.3.0-linux logs --control-plane-component proxy-injector
linkerd linkerd-proxy-injector-64d5d58d47-rmswm proxy-injector time="2019-05-24T21:34:57Z" level=info msg="received admission review request c6ec563b-7e6b-11e9-b0e7-f8be54ac57b8"
linkerd linkerd-proxy-injector-64d5d58d47-rmswm proxy-injector time="2019-05-24T21:34:57Z" level=info msg="received pod/helloworld-fdb7dc65f-"
linkerd linkerd-proxy-injector-64d5d58d47-rmswm proxy-injector time="2019-05-24T21:34:57Z" level=info msg="patch generated for: pod/helloworld-fdb7dc65f-"
...
linkerd linkerd-proxy-injector-64d5d58d47-rmswm proxy-injector time="2019-05-24T21:33:45Z" level=info msg="received admission review request 9c1fe86a-7e6b-11e9-b0e7-f8be54ac57b8"
linkerd linkerd-proxy-injector-64d5d58d47-rmswm proxy-injector time="2019-05-24T21:33:45Z" level=info msg="received pod/storage-provisioner"
linkerd linkerd-proxy-injector-64d5d58d47-rmswm proxy-injector time="2019-05-24T21:33:45Z" level=info msg="skipped pod/storage-provisioner"
Notice how it picked up both the helloworld and storage-provisioner pods, but generated a JSON patch only for the helloworld workload, because its namespace has the linkerd.io/inject: enabled annotation. Are you seeing similar logs? If not, then it's likely that the k8s API server can't reach your proxy injector via the mutating webhook configuration.
Can you share the output of
kubectl describe mutatingwebhookconfigurations.admissionregistration.k8s.io linkerd-proxy-injector-webhook-config
Are there any webhook TLS-related errors on your k8s API server?
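One way to check on a kubeadm cluster, where the api-server runs as a static pod (the pod name below is illustrative):
kubectl -n kube-system logs kube-apiserver-<master-node> | grep -iE 'x509|webhook'   # look for TLS/webhook errors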
I have deployed the app once again with no luck; below are the logs of the proxy injector (linkerd logs --control-plane-component proxy-injector):
+ linkerd linkerd-proxy-injector-6b9558b695-tsgpz › linkerd-proxy
+ linkerd linkerd-proxy-injector-6b9558b695-tsgpz › proxy-injector
linkerd linkerd-proxy-injector-6b9558b695-tsgpz proxy-injector time="2019-05-24T08:37:12Z" level=info msg="running version stable-2.3.0"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz proxy-injector time="2019-05-24T08:37:12Z" level=info msg="deleting existing webhook configuration"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz proxy-injector time="2019-05-24T08:37:12Z" level=info msg="created webhook configuration: /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations/linkerd-proxy-injector-webhook-config"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz proxy-injector time="2019-05-24T08:37:12Z" level=info msg="waiting for caches to sync"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz proxy-injector time="2019-05-24T08:37:12Z" level=info msg="caches synced"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz proxy-injector time="2019-05-24T08:37:12Z" level=info msg="starting admin server on :9995"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz proxy-injector time="2019-05-24T08:37:12Z" level=info msg="listening at :8443"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy time="2019-05-24T08:37:13Z" level=info msg="running version dev-undefined"
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 0.002359s] linkerd2_proxy::app::main using destination service at Some(ControlAddr { addr: Name(NameAddr { name: DNSName("linkerd-destination.linkerd.svc.cluster.local"), port: 8086 }), identity: Some(DNSName("linkerd-controller.linkerd.serviceaccount.identity.linkerd.cluster.local")) })
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 0.002423s] linkerd2_proxy::app::main using identity service at Name(NameAddr { name: DNSName("linkerd-identity.linkerd.svc.cluster.local"), port: 8080 })
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 0.002433s] linkerd2_proxy::app::main routing on V4(127.0.0.1:4140)
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 0.002441s] linkerd2_proxy::app::main proxying on V4(0.0.0.0:4143) to None
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 0.002446s] linkerd2_proxy::app::main serving admin endpoint metrics on V4(0.0.0.0:4191)
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 0.002450s] linkerd2_proxy::app::main protocol detection disabled for inbound ports {25, 3306}
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 0.002459s] linkerd2_proxy::app::main protocol detection disabled for outbound ports {25, 3306}
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy WARN [ 0.515751s] linkerd-identity.linkerd.svc.cluster.local:8080 linkerd2_proxy::proxy::reconnect connect error to ControlAddr { addr: Name(NameAddr { name: DNSName("linkerd-identity.linkerd.svc.cluster.local"), port: 8080 }), identity: Some(DNSName("linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local")) }: operation timed out after 500ms
linkerd linkerd-proxy-injector-6b9558b695-tsgpz linkerd-proxy INFO [ 7.575958s] linkerd2_proxy::app::main Certified identity: linkerd-proxy-injector.linkerd.serviceaccount.identity.linkerd.cluster.local
Webhook details:
Name: linkerd-proxy-injector-webhook-config
Namespace:
Labels: <none>
Annotations: <none>
API Version: admissionregistration.k8s.io/v1beta1
Kind: MutatingWebhookConfiguration
Metadata:
Creation Timestamp: 2019-05-24T08:37:12Z
Generation: 1
Resource Version: 10719227
Self Link: /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations/linkerd-proxy-injector-webhook-config
UID: 20987808-7dff-11e9-bc47-00505698025b
Webhooks:
Admission Review Versions:
v1beta1
Client Config:
Ca Bundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkRENDQVJtZ0F3SUJBZ0lCQVRBS0JnZ3Foa2pPUFFRREFqQWhNUjh3SFFZRFZRUURFeFpzYVc1clpYSmsKTFhCeWIzaDVMV2x1YW1WamRHOXlNQjRYRFRFNU1EVXlOREE0TXpjd01sb1hEVEl3TURVeU16QTRNemN5TWxvdwpJVEVmTUIwR0ExVUVBeE1XYkdsdWEyVnlaQzF3Y205NGVTMXBibXBsWTNSdmNqQlpNQk1HQnlxR1NNNDlBZ0VHCkNDcUdTTTQ5QXdFSEEwSUFCRUhEVXVwZ1Nxc1JnRnMzdEdMaHAraFo5T0MyTWUzWnNVUU9Hb3lnWm9rcGJRVWYKUmN0YzFtbGMzN29hYy8zY2cxYldKZTRXczJYTFlHemp5bWFoRFRLalFqQkFNQTRHQTFVZER3RUIvd1FFQXdJQgpCakFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFIL0JBVXdBd0VCCi96QUtCZ2dxaGtqT1BRUURBZ05KQURCR0FpRUEraEM0V3NEciswZmduQy9zZnVQQ215UzN3TytsWVdIc0dhUi8KWlVFN2pxTUNJUUNHMXhZQ0JnVkMvVUQ5NXpWK1pnNE9teU1WeVBMb2VCYXVSN1RDdjlUUW1nPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
Service:
Name: linkerd-proxy-injector
Namespace: linkerd
Path: /
Failure Policy: Ignore
Name: linkerd-proxy-injector.linkerd.io
Namespace Selector:
Rules:
API Groups:
API Versions:
v1
Operations:
CREATE
UPDATE
Resources:
pods
Scope: *
Side Effects: Unknown
Timeout Seconds: 30
Events: <none>
My api-server logs are flooded every second with:
OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
It's hard to search for other errors, but I didn't notice any TLS-related ones.
I will destroy and rebuild the cluster with the most recent 1.14.2 anyway.
Got it, certificate issue.
W0525 12:38:22.461740 1 dispatcher.go:70] Failed calling webhook, failing open linkerd-proxy-injector.linkerd.io: failed calling webhook "linkerd-proxy-injector.linkerd.io": Post https://linkerd-proxy-injector.linkerd.svc:443/?timeout=30s: x509: certificate signed by unknown authority
E0525 12:38:22.461775 1 dispatcher.go:71] failed calling webhook "linkerd-proxy-injector.linkerd.io": Post https://linkerd-proxy-injector.linkerd.svc:443/?timeout=30s: x509: certificate signed by unknown authority
I0525 12:38:22.467141 1 trace.go:81] Trace[759612571]: "Create /api/v1/namespaces/crow/pods" (started: 2019-05-25 12:38:21.867963774 +0000 UTC m=+4299174.359556153) (total time: 599.159502ms):
Trace[759612571]: [593.847043ms] [593.780607ms] About to store object in database
@ihcsim Shouldn't the API server trust the Linkerd CA? Isn't that the purpose of the CA bundle field in the webhook configuration? Should I run "linkerd install --identity-issuer-certificate-file ..." with the custom CA that Kubernetes was built with?
Yes, the k8s API server uses the CA bundle in the webhook configuration to determine whether the self-signed cert generated by your proxy injector pod is trustworthy. Try deleting the proxy injector pod and let it recreate its webhook configuration with a new CA bundle. Is this reproducible on another k8s cluster? I just want to rule out the possibility that the way you use kubeadm to install k8s is a factor.
fwiw, the service profile validator follows the same code path to generate the CA bundle in its webhook configuration. If there are no errors there, then it's likely that your MWC CA bundle is stale.
The --identity-issuer-certificate-file is an mTLS option; it's irrelevant here.
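For the pod-deletion suggestion above, something like this should do it and then show the regenerated bundle (the label selector assumes the standard linkerd.io/control-plane-component label):
kubectl -n linkerd delete pod -l linkerd.io/control-plane-component=proxy-injector   # pod is recreated by its deployment
kubectl get mutatingwebhookconfiguration linkerd-proxy-injector-webhook-config -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d | openssl x509 -noout -subject -dates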
Is it possible that clock skew is at play? We've seen this sort of problem in the past when the host that generated the key has diverged substantially from the hosts running the api server or control plane... This is especially common in virtualized environments like minikube.
We are running on VMware, and we use NTP to sync time on all nodes.
Clock skew doesn't look like an issue here:
euegiskubtsta04v:
Local time: Wed 2019-05-29 08:22:49 UTC
euegiskubtstm01v:
Local time: Wed 2019-05-29 08:22:49 UTC
euegiskubtstm03v:
Local time: Wed 2019-05-29 08:22:49 UTC
euegiskubtsta02v:
Local time: Wed 2019-05-29 08:22:49 UTC
euegiskubtsta03v:
Local time: Wed 2019-05-29 08:22:49 UTC
euegiskubtstm02v:
Local time: Wed 2019-05-29 08:22:49 UTC
euegiskubtsta01v:
Local time: Wed 2019-05-29 08:22:49 UTC
All our clusters are built with the same flags using kubeadm.
The only custom setting we have is LDAP auth with Dex (OIDC flags in the api-server config below).
I have destroyed and rebuilt the cluster with 1.14.2.
I have deleted the injector pod and it was respawned; still no luck:
W0529 08:07:00.802982 1 dispatcher.go:70] Failed calling webhook, failing open linkerd-proxy-injector.linkerd.io: failed calling webhook "linkerd-proxy-injector.linkerd.io": Post https://linkerd-proxy-injector.linkerd.svc:443/?timeout=30s: x509: certificate signed by unknown authority
I have also compared the certificates and they look OK; they should be accepted:
root@dockerapp-6bdfb4759d-jm5dt:/app# openssl s_client -showcerts -connect linkerd-proxy-injector.linkerd.svc:443
CONNECTED(00000003)
depth=1 CN = linkerd-proxy-injector
verify error:num=19:self signed certificate in certificate chain
verify return:0
---
Certificate chain
0 s:/CN=linkerd-proxy-injector.linkerd.svc
i:/CN=linkerd-proxy-injector
-----BEGIN CERTIFICATE-----
MIIBnDCCAUOgAwIBAgIBAjAKBggqhkjOPQQDAjAhMR8wHQYDVQQDExZsaW5rZXJk
LXByb3h5LWluamVjdG9yMB4XDTE5MDUyOTA3NTkzMVoXDTIwMDUyODA3NTk1MVow
LTErMCkGA1UEAxMibGlua2VyZC1wcm94eS1pbmplY3Rvci5saW5rZXJkLnN2YzBZ
MBMGByqGSM49AgEGCCqGSM49AwEHA0IABCV4ndpE6Zl6Vsv5YmwSftyHOXiq40bX
6BczHIcwmipGsUekAKe9THQ6J+C2oDXXxtT8XacGbr+wgz27FUp27UqjYDBeMA4G
A1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwLQYD
VR0RBCYwJIIibGlua2VyZC1wcm94eS1pbmplY3Rvci5saW5rZXJkLnN2YzAKBggq
hkjOPQQDAgNHADBEAiAd/68toxpsxDD0SzZu9FTPO0ChWXyJpkOY0AJ3Ws0u5AIg
PW2Aapa0KPJzkDvgx2t27D8PW7k1lflPY4jzvTZgY/M=
-----END CERTIFICATE-----
1 s:/CN=linkerd-proxy-injector
i:/CN=linkerd-proxy-injector
-----BEGIN CERTIFICATE-----
MIIBdDCCARmgAwIBAgIBATAKBggqhkjOPQQDAjAhMR8wHQYDVQQDExZsaW5rZXJk
LXByb3h5LWluamVjdG9yMB4XDTE5MDUyOTA3NTkzMVoXDTIwMDUyODA3NTk1MVow
ITEfMB0GA1UEAxMWbGlua2VyZC1wcm94eS1pbmplY3RvcjBZMBMGByqGSM49AgEG
CCqGSM49AwEHA0IABFBAl+Ai36ExdxaAbDenU5HZ2Fpd3lKQWb90TlzE4V3Y/qdE
yFT1xN2dq8eNKUhq41RlNNPSloS22qCOQHNDHHqjQjBAMA4GA1UdDwEB/wQEAwIB
BjAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0TAQH/BAUwAwEB
/zAKBggqhkjOPQQDAgNJADBGAiEAvS/KtYUqbLPadSgsS54JXaYqArtsgWX1Jj2B
AzE1UhcCIQCgR9LK1muWgKPaJQ8LvxCpB6QCyH2T1bEXyPBniXSc2A==
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=linkerd-proxy-injector.linkerd.svc
issuer=/CN=linkerd-proxy-injector
---
No client certificate CA names sent
---
SSL handshake has read 1217 bytes and written 415 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-ECDSA-AES128-GCM-SHA256
Server public key is 256 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-ECDSA-AES128-GCM-SHA256
Session-ID: 9444E4649AF8E5FDD50A69E98C428E738CE272F105631B99C075527AAD8CC60B
Session-ID-ctx:
Master-Key: D6FA63F02E5C8E44E91B8566AD2AD98F6DC8DDC185CE054AF67384215544AC962557A6C07BEAD93C5441B02C873B91BE
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket:
0000 - 9b 59 93 57 5a 65 ad 1f-f2 2c b4 69 49 fa 49 5c .Y.WZe...,.iI.I\
0010 - 6f 7f fc b4 23 4e 51 ad-54 98 37 85 81 55 88 2c o...#NQ.T.7..U.,
0020 - 85 3a 48 5b 6e fc fd 5a-85 c4 7d 9d 1c a9 f0 17 .:H[n..Z..}.....
0030 - 94 27 39 62 35 d0 a2 29-f9 1c 30 48 28 ed 89 00 .'9b5..)..0H(...
0040 - b2 5f 19 0c 61 c7 ba 88-64 03 de be 1c b4 f3 ab ._..a...d.......
0050 - 9c 73 02 ce c7 bf 4f ee-07 b0 47 fa 5b 2a c9 6c .s....O...G.[*.l
0060 - d2 78 1d 15 eb d0 37 29-8c 18 6c 01 17 53 83 09 .x....7)..l..S..
0070 - 41 55 2b 86 3f 2a fe c9- AU+.?*..
Start Time: 1559117384
Timeout : 300 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
---
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad Request
closed
The CA Bundle entry in the webhook configuration matches the CA received from the injector service (base64-decoded):
-----BEGIN CERTIFICATE-----
MIIBdDCCARmgAwIBAgIBATAKBggqhkjOPQQDAjAhMR8wHQYDVQQDExZsaW5rZXJk
LXByb3h5LWluamVjdG9yMB4XDTE5MDUyOTA3NTkzMVoXDTIwMDUyODA3NTk1MVow
ITEfMB0GA1UEAxMWbGlua2VyZC1wcm94eS1pbmplY3RvcjBZMBMGByqGSM49AgEG
CCqGSM49AwEHA0IABFBAl+Ai36ExdxaAbDenU5HZ2Fpd3lKQWb90TlzE4V3Y/qdE
yFT1xN2dq8eNKUhq41RlNNPSloS22qCOQHNDHHqjQjBAMA4GA1UdDwEB/wQEAwIB
BjAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0TAQH/BAUwAwEB
/zAKBggqhkjOPQQDAgNJADBGAiEAvS/KtYUqbLPadSgsS54JXaYqArtsgWX1Jj2B
AzE1UhcCIQCgR9LK1muWgKPaJQ8LvxCpB6QCyH2T1bEXyPBniXSc2A==
-----END CERTIFICATE-----
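For reference, that comparison can be reproduced with something along these lines (the jsonpath extraction runs wherever kubectl is available; the s_client verification from a pod inside the cluster, as above):
kubectl get mutatingwebhookconfiguration linkerd-proxy-injector-webhook-config -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d > mwc-ca.pem
openssl s_client -connect linkerd-proxy-injector.linkerd.svc:443 -CAfile mwc-ca.pem </dev/null   # expect "Verify return code: 0 (ok)"
A "Verify return code: 0 (ok)" there would confirm the served chain verifies against the bundled CA.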
api-server config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=<IP>
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --endpoint-reconciler-type=lease
- --etcd-cafile=/etc/kubernetes/pki/ca.pem
- --etcd-certfile=/etc/kubernetes/pki/kubernetes-tst.pem
- --etcd-keyfile=/etc/kubernetes/pki/kubernetes-tst-key.pem
- --etcd-servers=https://<IP1>:2379,https://<IP2>:2379,https://<IP3>:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --oidc-client-id=oidc-auth-client
- --oidc-groups-claim=groups
- --oidc-issuer-url=https://<auth.fqdn.goes.here>/
- --oidc-username-claim=email
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
env:
- name: NO_PROXY
value: 127.0.0.1,10.0.0.0/8,172.16.0.0/12
- name: HTTPS_PROXY
value: http://user:password@proxy-host:10080
- name: HTTP_PROXY
value: http://user:password@proxy-host:10080
image: k8s.gcr.io/kube-apiserver:v1.14.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: <IP>
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
status: {}
Kubelet config:
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --streaming-connection-idle-timeout=5m --keep-terminated-pod-volumes=false --event-qps=0"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authentication-token-webhook=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_ENABLE_DEBUG_HANDLERS=--enable-debugging-handlers=true"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_ENABLE_DEBUG_HANDLERS $KUBELET_EXTRA_ARG
@voki Thanks for the helpful details. Yeah, definitely running out of ideas on this one. It looks to me that the API server is targeting the right proxy injector Service, using the right mutating webhook configuration resource. At least based on the docs, I don't think it is possible to configure the API server to reject a self-signed CA and cert, is it?
Also, since you are already on k8s 1.14, one thing you can try is to leave the CA bundle empty in the mutating webhook configuration object; it will automatically fall back to the API server trust root, per the k8s docs. If this works, then it confirms that somehow the API server doesn't like the self-signed CA bundle.
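E.g., something along these lines should clear the field (a JSON patch removing caBundle from the first webhook entry):
kubectl patch mutatingwebhookconfiguration linkerd-proxy-injector-webhook-config --type=json -p='[{"op":"remove","path":"/webhooks/0/clientConfig/caBundle"}]'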
I did a little research and it doesn't look like there is a flag for disallowing a self-signed CA.
I have removed the CA bundle field by editing the webhook configuration with kubectl edit.
No luck; with or without a restart of the proxy-injector pod, I am still getting the same error.
Looks like k8s is ignoring the CA bundle field.
I wanted to do a full fresh deployment without the CA bundle field, but it looks like the MutatingWebhookConfiguration resource is missing from the output of:
linkerd install --proxy-auto-inject
Is this webhook configuration generated dynamically by the proxy-injector pod?
I have followed the https://banzaicloud.com/blog/k8s-admission-webhooks/ tutorial and got the same results.
What's more, for these test webhooks I am using my own internal CA.
Unfortunately I am also getting: x509: certificate signed by unknown authority.
Looks like I am missing some flag in the api-server, or there is a bug in k8s.
I will double-check all the admission plugin flags, although the docs state that they should be enabled by default in 1.14.
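For the record, a couple of quick checks that should confirm the admission registration API and existing webhook configs are visible (nothing Linkerd-specific here):
kubectl api-versions | grep admissionregistration   # API group should be served
kubectl get mutatingwebhookconfigurations           # registered mutating webhooks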
I have solved this issue; it was caused by proxy env vars passed to the api-server by kubeadm.
Because I am behind a corporate proxy, I have to run kubeadm with the proxy exported.
kubeadm later passes these env vars into the kube-apiserver config as:
env:
- name: NO_PROXY
value: 127.0.0.1,10.0.0.0/8,172.16.0.0/12,<cluster_range>,.xxx.com
- name: HTTPS_PROXY
value: http://user:password@proxy-host:10080
- name: HTTP_PROXY
value: http://user:password@proxy-host:10080
I have removed these entries from the api-server config and it works.
I still have to figure out what else should be added to the NO_PROXY env var to make it work with the proxy.
Probably ".svc".
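Something along these lines is probably the shape the env block needs; the extra NO_PROXY entries are only a sketch based on this cluster's service CIDR and the in-cluster DNS suffixes, not verified values:
env:
- name: NO_PROXY
  # sketch: loopback, pod/node ranges, service CIDR (10.96.0.0/12 per this cluster), and in-cluster DNS suffixes
  value: 127.0.0.1,localhost,10.0.0.0/8,172.16.0.0/12,10.96.0.0/12,.svc,.svc.cluster.local,.cluster.local
- name: HTTPS_PROXY
  value: http://user:password@proxy-host:10080
- name: HTTP_PROXY
  value: http://user:password@proxy-host:10080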
I have a similar issue and the Linkerd checks show no errors. This was working properly until I installed metrics-server on my k8s cluster using the Kops add-on, which disables some TLS checks. Did that cause the issue, and if so, how do I run both metrics-server and Linkerd?
I would recommend looking in the api-server logs for what's going on @ashish235.