Ingress-nginx: Support gRPC keep-alive server parameters

Created on 6 Aug 2019 · 5 comments · Source: kubernetes/ingress-nginx

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

NGINX Ingress controller version:

rancher/nginx-ingress-controller:0.21.0-rancher3

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:
AWS, Rancher

What happened:

  1. I have a gRPC backend written in Go and a mobile client written in Swift that uses swift-grpc.
     On the Go backend I have this keep-alive policy:
keepalivePolicy = keepalive.EnforcementPolicy{
    MinTime:             5 * time.Second, // If a client pings more than once every x duration, terminate the connection.
    PermitWithoutStream: false,           // Do not allow pings when there are no active streams.
}

keepaliveParams = keepalive.ServerParameters{
    MaxConnectionIdle:     1 * time.Hour,    // If a client is idle for given duration, send a GOAWAY.
    MaxConnectionAge:      1 * time.Hour,    // If any connection is alive for more than given duration, send a GOAWAY.
    MaxConnectionAgeGrace: 10 * time.Second, // Allow given duration for pending RPCs to complete before forcibly closing connections
    Time:                  10 * time.Second, // Ping the client if it is idle for given duration to ensure the connection is still active.
    Timeout:               5 * time.Second,  // Wait given duration for the ping ack before assuming the connection is dead.
}
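For context, these two option structs are passed to the server at construction time. A minimal sketch of that wiring, assuming the standard google.golang.org/grpc API (the listen address is a placeholder, and service registration is omitted):

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}

	srv := grpc.NewServer(
		// Terminate clients that ping more often than every 5s.
		grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
			MinTime:             5 * time.Second,
			PermitWithoutStream: false,
		}),
		// Ping idle clients every 10s; drop the connection 5s after a missed ack.
		grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionIdle:     1 * time.Hour,
			MaxConnectionAge:      1 * time.Hour,
			MaxConnectionAgeGrace: 10 * time.Second,
			Time:                  10 * time.Second,
			Timeout:               5 * time.Second,
		}),
	)

	// Register gRPC services on srv here, then serve.
	log.Fatal(srv.Serve(lis))
}
```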
  2. NGINX ingress is used to load balance and terminate TLS traffic.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 600s;
      grpc_send_timeout 600s;
      client_body_timeout 600s;
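For reference, a complete manifest under this setup might look like the following; the name, host, secret, and backend service (`grpc-backend`, port 50051) are placeholders, not taken from the original report:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 600s;
      grpc_send_timeout 600s;
      client_body_timeout 600s;
spec:
  tls:
    - hosts:
        - grpc.example.com
      secretName: grpc-tls
  rules:
    - host: grpc.example.com
      http:
        paths:
          - backend:
              serviceName: grpc-backend
              servicePort: 50051
```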

My goal is to have a long-lived bidirectional streaming RPC so the client can receive incoming updates from the backend. Also, if the client disconnects (say, its internet connection drops), I want the server to detect this as fast as possible (ideally within 10 seconds). Currently my gRPC server sends a keep-alive ping every 10 seconds, and the NGINX proxy acks the ping, but NGINX itself is not pinging the client.

What you expected to happen:
I expect the NGINX ingress to expose settings that allow configuring a gRPC keep-alive policy, something like

grpc_keepalive_time 10s;
grpc_keepalive_timeout 5s;

Or, even better, to simply forward gRPC ping frames to the client.

I found a similar issue on the Envoy proxy, but it seems to be fixed there now: https://github.com/envoyproxy/envoy/issues/2086

Labels: kind/feature, lifecycle/rotten


All 5 comments

nginx.ingress.kubernetes.io/server-snippet: |
  grpc_read_timeout 600s;
  grpc_send_timeout 600s;
  client_body_timeout 600s;

This saved my day, thank you!

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

