NGINX Ingress controller version:
nginx-ingress-controller:0.17.1
Kubernetes version (use kubectl version):
1.10
Environment:
EKS
Kernel (e.g. uname -a):
What happened:
When sending a GRPC stream request, straight from a mobile client to a grpc backend service, nginx ingress logs show the following
2018/08/29 10:07:44 [error] 6848#6848: *485437 upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: myip, server: mything.com, request: "POST /mything.thing/Mything HTTP/2.0", upstream: "http://myip:30100/Mything.Mything/Mything", host: "myhost:443"
What you expected to happen:
That nginx would not be proxying to an HTTP upstream, i.e. why does the log show an http:// upstream call? This is not expected when the backend-protocol annotation is set to GRPCS.
How to reproduce it (as minimally and precisely as possible):
Make a gRPC call from a client to a backend service behind the ingress.
Anything else we need to know:
The ingress for this resource has the nginx.ingress.kubernetes.io/backend-protocol: "GRPCS" annotation.
The pod has a valid SSL certificate, mounted as a secret and in use by the service (i.e., the service registers that the certificate is available and uses it).
The documentation at https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#grpc-backend-deprecated-since-0180 is not entirely clear about which annotation applies.
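For reference, a minimal sketch of the kind of ingress resource described above (host, names, and secret are hypothetical placeholders), assuming a controller at 0.18.0+ where backend-protocol replaces the older grpc-backend annotation:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mything                        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # GRPCS proxies to the upstream with gRPC over TLS;
    # GRPC would be used for a plaintext HTTP/2 backend.
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
spec:
  tls:
  - hosts:
    - mything.com                      # hypothetical host
    secretName: mything-tls            # TLS secret for the ingress (e.g. from letsencrypt)
  rules:
  - host: mything.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mything         # hypothetical service name
          servicePort: 30100
```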
Got somewhat further after updating to 0.18.0 (perhaps the configuration never reloaded after messing with the ingress resource annotations?).
mything - [mything] - - [29/Aug/2018:11:35:52 +0000] "POST /mything.mything/mything HTTP/2.0" 400 51 "-" "Steamer/1.0 grpc-objc/1.11.0 grpc-c/6.0.0 (ios; chttp2; gorgeous)" 219 0.070 [mything-30100] myIP:30100 73 0.068 400 7c82c2e4fd13563a4745f22ab33cadbe
Reading this, I am still getting a 400 response though, which does not seem right. The backend service is up and in a ready state.
The only option was to use ssl-passthrough: true, without anything else on the ingress resource.
@timm088 I believe this is the same issue raised in a couple other issues:
Yeah, I realise TLS is required; my ingress has TLS enabled with a valid cert from letsencrypt.
The pod has that same cert mounted, and the service uses it as well (double TLS, but that should not really matter).
The only way I can get this to function, as mentioned above, is to use passthrough, which is not ideal if we want to use internally signed certs in services.
@timm088
Could you let me know how you mount your certificate to the ingress pod? I need to do the same.
@jasonwangnanjing https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod is how we do it; however, we also do this via a helm chart.
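For completeness, a minimal sketch of the secrets-as-files approach from that doc (the pod name, image, secret name, and mount path are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mything                        # hypothetical pod
spec:
  containers:
  - name: mything
    image: example/mything:latest      # hypothetical image
    volumeMounts:
    - name: tls
      mountPath: /etc/tls              # the gRPC server reads tls.crt / tls.key from here
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: mything-tls          # the same kubernetes.io/tls secret referenced by the ingress
```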
[nginx-ingress-controller-fd55b8f5-cxh2t] 2018/09/08 22:06:38 [warn] 9309#9309: *411165 a client request body is buffered to a temporary file
When using GRPC or GRPCS I am getting some buffering of the client body to a file. Given this is a streaming gRPC call, that possibly has something to do with why my requests are failing? (In particular, it looks like server reflection is failing, but reflection is supported and working in my service, tested when proxying directly to the pod via kubectl port-forward.) https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#grpc-backend-deprecated-since-0180
When I set nginx.ingress.kubernetes.io/ssl-passthrough: "true" and do not use the backend-protocol annotation, requests succeed with no dramas (and no buffering either), which is expected as this puts nginx into TCP forwarding mode. Working, but not ideal.
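For comparison, the passthrough variant that does work looks roughly like this (names are hypothetical); note that, as far as I understand, the annotation only takes effect if the controller is started with the --enable-ssl-passthrough flag:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mything                        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # SNI-based TCP forwarding: TLS terminates at the pod,
    # so no backend-protocol annotation is set.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: mything.com                  # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: mything
          servicePort: 30100
```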
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I have the same issue. I need to expose my gRPC service through the ingress; even when I remove backend-protocol and set ssl-passthrough to true, it still doesn't work, and I always see logs inside the ingress with "a client request body is buffered to a temporary file". Do you have any suggestions, or can anyone else help with it?
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@JamesYaoh did you find a solution? I'm struggling with the same thing. Have tried a ton of different configurations without any luck. Following the example in the repo, don't seem to work either. I'm on 0.21.0.
Yes I did, just using nginx.ingress.kubernetes.io/ssl-passthrough: "true" instead. I actually tried a lot with backend-protocol: "GRPC" but it did not work, so I switched to TCP transfer with ssl-passthrough: "true". I'm on 0.22.0.
@JamesYaoh Thank you for the quick reply. Then you terminate TLS at the pod instead? I've read in a couple of places that the problem could be due to unsupported HTTP/2 in the classic load balancers on AWS that we currently use. The Argo-cd project seems to use the grpc-web project to handle this by switching to HTTP/1.1 instead, but I can't get their example to work either.
The AWS forums suggest configuring the ELB listener as SSL instead of HTTP/HTTPS. This change still hasn't worked for me: grpcurl responds with transport: loopyWriter.run returning. connection error: desc = "transport is closing" and nginx returns 400. Is it not possible to terminate SSL at a classic ELB with a gRPC backend? Does spec.tls need to be defined in the ingress even if there is a valid cert on the ELB?
I don't use AWS but another virtual cloud. Also, I enabled TLS in the pod's program, and my apiVersion for the ingress is "extensions/v1beta1". Incoming requests (from outside of Kubernetes) were routed directly to the ingress. The ELB may not support HTTP/2; I think you need to contact AWS to figure that out.
Hi people. I am facing this issue on my local Kubernetes cluster, which has a grpc-server service listening on port 50053 and an ingress object with the nginx.ingress.kubernetes.io/backend-protocol: "GRPC" annotation. I am using nginx-ingress-controller with the configuration use-http2: "true". When I try connecting from a gRPC client written in Go to localhost:443, I get the response rpc error: code = Unavailable desc = transport is closing, with ingress logs as follows:
192.168.65.3 - [192.168.65.3] - - [30/Sep/2019:13:43:43 +0000] "PRI * HTTP/2.0" 400 163 "-" "-" 0 0.004 [] [] - - - - ad5422a7b6d023b9257b770d4a3edcee
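For context, the use-http2 setting mentioned above lives in the controller's ConfigMap; a minimal sketch, assuming the common default ConfigMap name and namespace (yours may differ, check the controller's --configmap flag):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration           # common default name for the controller ConfigMap
  namespace: ingress-nginx
data:
  # enables HTTP/2 on the TLS listener, which gRPC through the ingress requires
  use-http2: "true"
```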
For an explanation of what SSL passthrough is and how it works, please refer to: https://avinetworks.com/glossary/ssl-passthrough/