Is this a request for help? Yes, a bug with a possible solution.
What keywords did you search in NGINX Ingress controller issues before filing this one? missing metrics (#3053 looked promising, but it concerns the wrong version; according to the git history, this bug was introduced in 0.20.0)
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version: 0.22.0 (bug introduced in 0.20.0)
Kubernetes version (use kubectl version): 1.7.14
Environment:
uname -a): 4.14.96-coreos
What happened: Ingresses without a specific host do not report metrics.
What you expected to happen: All ingresses should have metrics.
How to reproduce it (as minimally and precisely as possible):
Create an ingress without a host set.
Anything else we need to know:
Hi, we just upgraded from 0.15.0 to 0.22.0 and now have this issue with disappearing metrics.
I looked at the code and found the commit which introduces the problem:
https://github.com/kubernetes/ingress-nginx/commit/9766ad8f4be7432354b30e6be6ade730751d1207
If you have an ingress that does not specify a host, the collector will never find a match and will not increase the metric counter.
So all ingresses without a host are now left without metrics :(
I'm not 100% familiar with the codebase and not quite sure how to solve it, since we cannot know at that point whether the ingress is missing a host. Perhaps the hosts field on SocketCollector could be a

```go
map[ingressName]struct {
	Wildcard bool
	Hosts    sets.String
}
```

In that case we would know when not to skip, because it's a wildcard?
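For illustration, here is a minimal sketch of how the skip decision in handleMessage could use such a map; the type, field, and function names are assumptions for this sketch, not the actual ingress-nginx code:

```go
package sketch

import "k8s.io/apimachinery/pkg/util/sets"

// ingressHosts is an assumed per-ingress entry for the SocketCollector.
type ingressHosts struct {
	Wildcard bool        // the ingress declares no host (or a wildcard host)
	Hosts    sets.String // explicit hosts declared on the ingress
}

// shouldSkip decides whether a metric message should be dropped: only skip
// when the ingress declares explicit hosts and none of them match.
func shouldSkip(byIngress map[string]ingressHosts, ingress, host string) bool {
	entry, known := byIngress[ingress]
	if !known {
		return true // unknown ingress: likely noise, skip it
	}
	if entry.Wildcard {
		return false // no concrete host to compare against, keep the metric
	}
	return !entry.Hosts.Has(host) // only skip hosts the ingress does not serve
}
```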
The msg in handleMessage in SocketCollector looks like this for a request to an ingress that _does not have_ a host set. It looks correct:
[{
"requestLength": 1190,
"ingress": "kafka-http-ingress",
"status": "201",
"service": "kafka-http",
"requestTime": 0.024,
"namespace": "default",
"host": "route-utv.fnox.se",
"method": "POST",
"upstreamResponseTime": 0.024,
"upstreamResponseLength": 4,
"upstreamLatency": 0.014,
"path": "\/internalapi\/kafka",
"responseLength": 213
}]
@jonaz actually this change was intentional. Please check https://github.com/kubernetes/ingress-nginx/issues/3116. How can we differentiate a valid request from a DoS?
@aledbf Perhaps, in addition to the host filtering, we could match the prefix of the request path against the paths defined on the ingress? In that case we would never have more metric series than there are host+path combinations across all ingresses.
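A minimal sketch of that prefix check, assuming the collector had access to the paths declared on the matched ingress (the names here are hypothetical, not existing ingress-nginx code):

```go
package sketch

import "strings"

// matchesKnownPath reports whether the request path starts with any path
// declared on the ingress, so the metric series stay bounded by the
// host+path combinations actually configured in the cluster.
func matchesKnownPath(knownPaths []string, requestPath string) bool {
	for _, p := range knownPaths {
		if strings.HasPrefix(requestPath, p) {
			return true
		}
	}
	return false
}
```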
Right now we have lost the ability to monitor/alert on the request failure rate of the majority of our services... which is really bad for us in the ops department. We use a single domain (for CORS reasons) and just have ingresses routing on path for about 120 microservices.
@aledbf would you accept a PR with the proposed fix, or do we need to maintain a fork or move to another ingress controller?
@jonaz Did you find a solution for this?
@komljen no, still waiting for an answer from @aledbf on whether my proposed solution would be accepted.
I guess we have to go back to Traefik in the meantime.
Just wanted to thank @jonaz for opening this issue, as this was the reason Grafana wasn't showing anything for me.
It might be worth mentioning this in the monitoring section of the docs. I'd imagine others might assume a fan-out without a host would generate metrics.
Would it make sense to change the code here to export all ingress metrics when per-host metrics are disabled though? We're losing a lot of metrics that are essential to monitoring this in production because the metrics are just not passed on.
Here's the flag I'm talking about: https://github.com/kubernetes/ingress-nginx/pull/3594
Here's the code in question I'm suggesting we change: https://github.com/kubernetes/ingress-nginx/blob/ddedd165b2a457607e70e37d3d7ce613d1aa5307/internal/ingress/metric/collectors/socket.go#L224-L227
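A rough sketch of the suggested change, under the assumption that the collector knows whether --metrics-per-host is enabled; the names here are illustrative, not a verbatim copy of socket.go:

```go
package sketch

import "k8s.io/apimachinery/pkg/util/sets"

// keepMetric only applies the host filter when per-host metrics are enabled;
// with --metrics-per-host=false every message is exported, since there is no
// host label and therefore no label-cardinality risk.
func keepMetric(metricsPerHost bool, hosts sets.String, host string) bool {
	if !metricsPerHost {
		return true
	}
	return hosts.Has(host)
}
```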
My solution above would protect against DDoS and also support metrics on all paths+hosts. But no word from the maintainers here yet.
I like your solution in terms of host labels being present on the metrics, as it definitely protects against high cardinality in case of a DDoS. Having said that, when metrics-per-host=false, the benefit IMO is that there's no longer a DDoS risk in terms of metric label cardinality. At least from my point of view, in that case I would like to get _all_ metrics, including those from any potential DDoS; in fact, a DDoS would become identifiable by a big increase in 404s in the metrics with this capability.
My thinking is that maybe we need both: if metrics-per-host=true, do the regex check to determine whether to export or not; if metrics-per-host=false, export everything. Alternatively, when metrics-per-host=true but the request doesn't match a known host, maybe we could remove the host label instead of dropping the metric (this way we get all the metrics in all cases; see the sketch below). Thoughts? Did I understand your suggestion properly?
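One possible shape of that combined behavior, as a hedged sketch (the "unknown" placeholder label and all names are assumptions, not existing ingress-nginx behavior):

```go
package sketch

import "k8s.io/apimachinery/pkg/util/sets"

// hostLabel returns the host label to attach to the metric. No message is
// dropped in either mode.
func hostLabel(metricsPerHost bool, knownHosts sets.String, host string) string {
	if !metricsPerHost {
		return "" // per-host labels disabled: no host label at all
	}
	if knownHosts.Has(host) {
		return host // known host: keep the real label
	}
	// Unknown host: keep the metric, but collapse the label so a DDoS
	// cannot blow up label cardinality.
	return "unknown"
}
```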
It would certainly be useful to get some thoughts from some of the maintainers here; I'm interested in submitting at least one PR to do one or the other approach, but I don't want to do that work if there's no interest in accepting such a change.
Are there any concrete plans or a timeline for this? We also have this issue: we are basically blind in terms of nginx metrics, as we give every customer a custom subdomain (so we use a wildcard domain and hence have no metrics). IMHO it would be a good solution to just record the wildcard host. For example:
We have an ingress for the host *.domain.com. If a customer accesses our system via customer1.domain.com, then nginx could just record this request under *.domain.com, and the same for every other xxx.domain.com. All requests handled by the one wildcard ingress go into one bucket, so there would be no problem with unbounded cardinality in the metrics and therefore no DDoS problem. But we could still distinguish requests for the different ingresses (see the sketch after this comment).
I would try to help improve this, but I have no experience with Go :(
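To illustrate the bucketing idea from the comment above, here is a small sketch; the matching logic and names are assumptions, not existing ingress-nginx behavior:

```go
package sketch

import "strings"

// bucketHost maps a concrete request host to the wildcard pattern of the
// ingress that serves it, so all requests handled by one wildcard ingress
// share a single label value.
func bucketHost(wildcardHosts []string, host string) string {
	for _, w := range wildcardHosts {
		if strings.HasPrefix(w, "*") && strings.HasSuffix(host, w[1:]) {
			return w
		}
	}
	return host // not served by a wildcard ingress: keep the real host
}
```

With this, bucketHost([]string{"*.domain.com"}, "customer1.domain.com") would return "*.domain.com", keeping label cardinality bounded by the number of wildcard ingresses.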
I created PR #4139 to fix this, so that one at least gets metrics when running with --metrics-per-host=false.
I totally agree with @choffmeister - there are quite a few use cases where you have to use wildcards and still need metrics (without tagging them with the host).
There must be a way to get metrics even for an ingress with a wildcard host, without opening up Prometheus etc. to a DoS.
However, I think that this behavior must be controlled via a separate option/flag for several reasons:
- metrics-per-host: changing its behavior is a breaking change

My suggestion would be a flag that disables filtering metrics for ingresses with a wildcard host, and keeps metrics-per-host with its current behavior.
@aledbf What do you think about this approach?
Hi @jonaz Any news with this issue?
It's very important for our use-case...
It's a shame the PR seems to have gone quiet... it's also blocking us from upgrading from 0.18, so I'm looking forward to it being merged.
@jonaz @aledbf any progress on this? We're essentially flying blind with our ingress right now because of this bug...
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@jonaz @aledbf any progress on this? We're essentially flying blind with our ingress right now because of this bug...
@hairyhenderson since no reply from @aledbf regarding my implementation suggestion, we are migrating to traefik 2.0 instead.
@aledbf I also just ran into this issue (today, as a matter of fact). IMO the linked PR (https://github.com/kubernetes/ingress-nginx/pull/4139) is a reasonable way to go. In the case that someone has a wildcard ingress, IMO it makes sense to just exclude the host label -- since then I can still get metrics based on the ingress name (which also has fixed cardinality).
Is there anything remaining to get the PR merged in?
Very unlikely to ever happen. I'd recommend you explicitly set a host on all of your ingresses (this is the approach I took to get around this issue). It isn't that hard, especially when using some form of automation to deploy Kubernetes apps (Helm, or in my case Terraform). Sticky sessions also start working again when doing this :)
Long term, traefik might be the way to go, depending on whether or not ingress-nginx picks up a new maintainer now that aledbf is stepping down.
Disclaimer: this isn't a dig; ingress-nginx is a great piece of work and I highly appreciate the work aledbf has done on it. I'm just trying to point out a workaround and the fact that things are bleak right now, but hopefully someone or some others will step up.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I am also experiencing the same issue: I get no nginx_ingress_controller_requests metrics if I don't define the _host_.
For example, this does not generate metrics for nginx_ingress_controller_requests:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: service-nginx
  name: observability-es
  namespace: capabilities
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: elasticsearch-es-http
          servicePort: 9200
        path: /
```
This generates metrics for nginx_ingress_controller_requests:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: service-nginx
  name: observability-es
  namespace: capabilities
spec:
  rules:
  - host: dev.microservices.test.test
    http:
      paths:
      - backend:
          serviceName: elasticsearch-es-http
          servicePort: 9200
        path: /
```
It would be great if we could get a fix for this.
I even lose my metrics when I make my host a wildcard.
So the host exists; it just changed from e.g. "prod.something.com" to "*.something.com".
I also cannot see metrics for hosts with a wildcard. Is there a workaround?
@DanOfir no, this has effectively stalled, and caused more than a few people to simply switch to a different ingress controller. See https://github.com/kubernetes/ingress-nginx/pull/4139#issuecomment-508585084 for some related conversation.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Closing. Fixed in master by #4139.