What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
What happened:
We have a single application that does not handle nginx's keepalives well. It looks like keepalives can only be disabled globally via the ConfigMap approach.
What you expected to happen:
We expected to be able to set or disable keepalive_timeout via an annotation on each individual Ingress.
Anything else we need to know:
We've got a workaround in place, but it seems like this would be a nice thing to make available.
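For reference, a minimal sketch of the global ConfigMap approach mentioned under "What happened" (the ConfigMap name and namespace are assumptions and depend on how the controller was deployed; the keep-alive key maps to nginx's keepalive_timeout in seconds, so 0 should disable client keepalives for every Ingress served by that controller):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # assumed name; must match the controller's --configmap flag
  namespace: ingress-nginx     # assumed namespace
data:
  keep-alive: "0"              # keepalive_timeout in seconds; 0 disables client keepalives globally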
@gtaylor yes, this makes sense. Included in the TODO list for the next release.
👍
I'm hoping for an annotation variant of "upstream-keepalive-connections",
e.g.
"nginx.ingress.kubernetes.io/upstream-keepalive-connections" : "0"
0.18 was released, but I still see that this is not yet implemented.
@missedone pull requests welcome!
But I would have to sign the CLA, and I'm not sure that's OK with my manager...
I am waiting for this annotation. In the meantime, is there any way to configure the keepalive value (the default is currently 32)? I would like to increase it.
@huychau
if you can't wait for the new annotation, you can try the configuration-snippet annotation, which works for me:
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      keepalive_timeout 600s;
      send_timeout 600s;
...
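If a cluster-wide change is acceptable instead, the 32 default you mention should be adjustable through the controller's ConfigMap rather than a per-Ingress annotation (a sketch; the ConfigMap name and namespace are assumptions that depend on your deployment):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # assumed; must match the controller's --configmap flag
  namespace: ingress-nginx     # assumed namespace
data:
  upstream-keepalive-connections: "64"    # raises the upstream keepalive pool above the 32 default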
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
Hoping someone will pick this up ...
@mindw: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Hoping someone will pick this up ...
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.