The metrics generated by the HTTP source are not tagged with the component metadata.

Vector version: timberio/vector:nightly-2020-11-05-debian
[sources.my_source]
type = "http"
address = "0.0.0.0:80"
[sources.internal_metrics]
type = "internal_metrics"
[sinks.my_sink]
type = "console"
inputs = ["my_source", "internal_metrics"]
encoding.codec = "json"
{
  "counter": {
    "value": 1
  },
  "kind": "absolute",
  "timestamp": "2020-11-06T18:32:56.443154Z",
  "name": "events_processed_total"
}
Compare this to, for example, the stdin source, using this config:
[sources.my_source]
type = "stdin"
[sources.internal_metrics]
type = "internal_metrics"
[sinks.my_sink]
type = "console"
inputs = ["my_source", "internal_metrics"]
encoding.codec = "json"
the generated output shows:
{
  "name": "events_processed_total",
  "tags": {
    "component_name": "my_source",
    "component_kind": "source",
    "component_type": "stdin"
  },
  "counter": {
    "value": 1
  },
  "kind": "absolute",
  "timestamp": "2020-11-06T18:36:55.024678Z"
}
Thanks for reporting @drunkirishcoder, we'll get someone on this.
I'll take a look. At first glance, I think the internal metrics are created but not wired into the source itself.
The internal_metrics source shouldn't generate the events_processed_total counter itself. It just grabs the state and feeds the metric events into the topology. Either way, this is really odd; it looks like a bug with tracing::span or something.
@MOZGIII I am wondering if it is something to do with the metrics being emitted from util/http.rs?
Ooh, right! The HTTP server is probably launched from a different span than the topology unit. The fix should be to pass the nested span context along to the server at boot, so that the request handler is also wrapped in the span chain containing the component labels.
This bug must be common to all the HTTP-based sources.
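The diagnosis above can be sketched with a minimal, std-only Rust example. To be clear, this is not Vector's actual code: `ComponentSpan` and `emit` are hypothetical stand-ins for tracing's span context and Vector's metric emission, but they show why a server task spawned outside the component's span produces untagged metrics, and how moving the captured context into the task at boot fixes it.

```rust
use std::collections::BTreeMap;
use std::thread;

// Hypothetical stand-in for a tracing span carrying component labels.
#[derive(Clone, Debug)]
struct ComponentSpan {
    // e.g. component_name, component_kind, component_type
    tags: BTreeMap<String, String>,
}

// Emit a metric; tags are only attached when a span context is available.
fn emit(name: &str, span: Option<&ComponentSpan>) -> String {
    match span {
        Some(s) => format!("{} tags={:?}", name, s.tags),
        None => format!("{} tags=<none>", name),
    }
}

fn run() -> (String, String) {
    let span = ComponentSpan {
        tags: [
            ("component_name", "my_source"),
            ("component_kind", "source"),
            ("component_type", "http"),
        ]
        .iter()
        .map(|(k, v)| (k.to_string(), v.to_string()))
        .collect(),
    };

    // Buggy: the server task is spawned without the component's span
    // context, so the metric emitted inside the "request handler" has
    // no component tags.
    let buggy = thread::spawn(|| emit("events_processed_total", None))
        .join()
        .unwrap();

    // Fixed: capture the span at boot and move it into the server task,
    // so the handler emits metrics within the component's context.
    let captured = span.clone();
    let fixed = thread::spawn(move || emit("events_processed_total", Some(&captured)))
        .join()
        .unwrap();

    (buggy, fixed)
}

fn main() {
    let (buggy, fixed) = run();
    println!("{}", buggy); // no component tags
    println!("{}", fixed); // tagged with component metadata
}
```

In Vector itself the same idea would presumably mean capturing the current `tracing` span when the HTTP server is booted and entering it (or instrumenting the server future with it) inside the request handler, so all HTTP-based sources emit correctly tagged internal metrics.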