The Linkerd proxy returns 503 Service Unavailable, but it should return the response from the external service instead.
apiVersion: v1
kind: Namespace
metadata:
  name: repro
  annotations:
    linkerd.io/inject: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: some-external-svc
  namespace: repro
spec:
  ports:
  - port: 9753
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: some-external-svc
  namespace: repro
subsets:
- addresses:
  - ip: 1.1.1.1
  ports:
  - port: 80
Attach a shell in an Alpine repro pod with the following command: kubectl run repro -n repro --rm -it --image=alpine --restart=Never --generator=run-pod/v1
Install curl in the pod: apk add curl
Request some-external-svc with curl: curl -IXGET some-external-svc:9753
Response from the Linkerd proxy: 503 Service Unavailable
Expected: a response with a status code < 500 from the external server.
linkerd check output:

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √
It is possible to get it to work if I pass a Host header along with the target IP (e.g. Host: 1.1.1.1). However, this would negate the need for a headless service, the point of which is to provide a common cluster-internal name for the external service, so that the other cluster deployments don't need to know about the external IP.
This scenario works just fine without the linkerd proxy.
Thanks for catching this @JohannesEH.
You can use type: ExternalName as a workaround for now.
@grampelberg thanks for the suggestion. You are absolutely right, and in most setups this is probably fairly easy to configure. In the setup I'm working with, it is somewhat cumbersome. So in the spirit of "it just works", I would rather wait until Linkerd supports our kind of setup, which I'm sure it will. :)
@JohannesEH agreed, I wish that the k8s API's handling of service/endpoint combos was a little more robust. Shouldn't be a hard fix if you're interested in taking a look =)
@grampelberg My schedule is pretty full, but I might be able to find some time to look at it on Sunday. However, I must confess I have only a little experience with Go and zero experience with Rust, so I might be a little slow to get started. And I worry whether I can find enough time to really dig in and get the job done.
Do you have any pointers on where I should start looking?
@adleong any pointers?
I think this is caused by logic in the destination service:
The destination service will only return endpoints which are pods. In this case, none of the endpoints are pods and so it returns an empty address set.
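For contrast, here is a sketch of what an Endpoints subset looks like when it is backed by a pod (the pod name and IP are made up for illustration). The manually managed Endpoints above carries only a bare external IP with no targetRef, so there is presumably no pod for the destination service to resolve:

# Illustrative only: an Endpoints subset as created by the endpoints
# controller for a Service with a selector. Pod name, namespace and IP
# are hypothetical placeholders.
subsets:
- addresses:
  - ip: 10.244.1.23
    targetRef:                         # reference back to the pod behind this address
      kind: Pod
      name: my-app-7d9c5b6f4-abcde
      namespace: repro
  ports:
  - port: 80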
Any chance of getting this into the next release? It looks like a blocker for us to use linkerd2.
@StupidScience it won't make the next stable release (2.6), but it should be in an edge release immediately after merging.
In the meantime, could anyone explain how the type: ExternalName workaround mentioned can be configured?
@robertgates55 2.6 went out a while ago. The workaround is to just use an ExternalName service instead of managing Endpoints manually.
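For anyone finding this later, a minimal sketch of that workaround, assuming the external service is reachable via a DNS name rather than a bare IP (external.example.com is a placeholder):

# Minimal sketch of the ExternalName workaround (names are placeholders).
# ExternalName maps the in-cluster service name to an external DNS name
# via a CNAME record, so the target must be a resolvable hostname, not an IP.
apiVersion: v1
kind: Service
metadata:
  name: some-external-svc
  namespace: repro
spec:
  type: ExternalName
  externalName: external.example.com

Other workloads in the cluster can then keep addressing some-external-svc by name, with no manually managed Endpoints object.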