Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature request.
I'm using ingress-nginx to proxy TCP services. Everything is working fine, but I'd like to have the option to disable the access logs for one (or all) of the TCP backends.
Nginx is creating lots of log entries like this one: [23/Apr/2018:12:58:49 +0000]TCP200000.001. As they are not very informative, I'd like to remove them completely. That does not seem to be possible at the moment.
The enable-access-log annotation is available for HTTP backends, but not for TCP or UDP backends.
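For context, TCP backends are exposed through the tcp-services ConfigMap, where each entry only maps an external port to a namespace/service:port reference (optionally followed by the PROXY protocol flags), so there is currently no place to attach per-backend settings. A minimal sketch of the existing format, with a placeholder service name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service>:<port>[:PROXY][:PROXY]"
  "9000": "default/example-service:8080"
```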
NGINX Ingress controller version:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
I'm willing to contribute this feature. How would you suggest handling configuration options for L4 services?
I'm not a fan of the :PROXY:PROXY style of option, as it doesn't scale very well with the number of options. Using :PROXY:PROXY:NOLOG / ::NOLOG or something similar to disable access logging would be an easy way to solve the problem, but it's not a very nice one.
Could we use some sort of key/value system for tagging L4 services, like this:
"9000": "foo/bar:80,accessLog=false,upstreamProxy=true,downstreamProxy=false"
Switching the format while keeping backwards compatibility with the existing one should be easy.
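To make the proposal concrete, a hypothetical tcp-services ConfigMap mixing the existing format with the suggested key/value options might look like this (the accessLog, upstreamProxy and downstreamProxy keys are purely illustrative, not an existing API):

```yaml
data:
  # existing format keeps working unchanged
  "8000": "foo/bar:80"
  "8443": "foo/bar:443:PROXY:PROXY"
  # proposed format: optional key/value pairs appended after the service reference
  "9000": "foo/bar:80,accessLog=false,upstreamProxy=true,downstreamProxy=false"
```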
What do you say @aledbf? Do you have a plan for how to handle this in the future?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@aledbf Hey! :) We would appreciate it if you could remove the lifecycle/stale label and give your feedback on this issue, since several people seem to be interested in it :)
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Closing. The TCP and UDP features are being removed in the next release.
I don't understand... This is a feature that is used by a lot of people; why remove it? Is there any place where we can discuss this?
Please check my comment https://github.com/kubernetes/ingress-nginx/pull/3197#issuecomment-427823416
@aledbf: Could we re-open this, since the removal of the TCP and UDP features has been reverted?
I would also like to disable stream logs. I don't have any TCP or UDP services specified, but I still get some of these stream logs (not sure why that happens). They end up uncategorized in my ELK stack. I could always drop them in my Logstash pipeline, but I'd much rather not have them in the logs at all.
@anton-johansson Sure, but someone needs to work on this :)
@aledbf: Of course, but it's a start. :) Seems like a fairly small change, maybe suited for a first contribution?
What about just adding a 2nd setting:
{{ if or $cfg.DisableAccessLog $cfg.DisableStreamAccessLog }}
access_log off;
{{ else }}
# keep the existing stream access_log directive here
{{ end }}
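If a flag like $cfg.DisableStreamAccessLog were added, the corresponding controller ConfigMap entry could look like the sketch below (the disable-stream-access-log key is hypothetical here, mirroring the proposed field name, and the ConfigMap name depends on how the controller is deployed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # controller ConfigMap; name/namespace vary per installation
  namespace: ingress-nginx
data:
  disable-stream-access-log: "true"   # hypothetical key backing $cfg.DisableStreamAccessLog
```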
/remove-lifecycle stale
@zegl I have also experienced problems with the default log format:
[09/Jul/2019:07:13:59 +0000]TCP20063440432430.738
It is absolutely useless and "unsearchable" (in Elasticsearch, for example).
This can be changed with the log-format-stream option in the ConfigMap.
I use the following format (note escaped quotes!):
"log-format-stream": "\"[$time_local] $protocol $status $upstream_addr $upstream_bytes_sent $upstream_bytes_received $upstream_connect_time $upstream_first_byte_time $upstream_session_time\""
This leads to the following output:
[09/Jul/2019:07:17:22 +0000] TCP 200 10.240.0.112:22 12606 3829557 0.000 0.004 0.352
_A possible fix for the default format would be to add quotes to this string._
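For completeness, the same setting written as a YAML ConfigMap data entry; with single-quoted YAML the inner double quotes no longer need escaping (the format string itself is unchanged from the comment above):

```yaml
data:
  log-format-stream: '"[$time_local] $protocol $status $upstream_addr $upstream_bytes_sent $upstream_bytes_received $upstream_connect_time $upstream_first_byte_time $upstream_session_time"'
```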
/remove-lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.