Telegraf: Add network stats in the procstat plugin

Created on 21 Jul 2017 · 12 comments · Source: influxdata/telegraf

Hi, currently the procstat plugin includes a number of useful metrics. However, I would like to track the network bytes read and written for a given process which isn't handled by the plugin. Is it possible to add this feature?

Labels: area/procstat, enhancement

Most helpful comment

it seems that per process network stats are available on gopsutil now.
https://godoc.org/github.com/shirou/gopsutil/net

All 12 comments

I don't believe per process network stats are available via gopsutil, which is the library we are using to get the existing information for procstat, so it wouldn't be a trivial improvement. Having per process network stats would be a nice feature.

it seems that per process network stats are available on gopsutil now.
https://godoc.org/github.com/shirou/gopsutil/net

Is there any development on this? Thanks a lot!

This would be really good to have!

@ytzelf to my knowledge, there hasn't been any implementation of this in development yet

Hello, does anyone need per-NIC process network stats?
If there's a need, I'll make this configurable; if not, I'll report only the sum across all NICs.

There is a PR open that implements this feature #3895

@danielnelson I think PR https://github.com/influxdata/telegraf/pull/3895 will take a long time to merge. Do you think it's better to wait for #3895, or should I open a small new PR that only adds these network stats?

I need these stats fairly soon. I could certainly implement them in my own build of Telegraf, but I don't think that's a good approach.

Yes, it probably makes sense to split out this change. One concern I have, though: when I look at a normal process on my system, the data used by the function is identical to my global network data:

$ cat /proc/net/dev /proc/2988/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
 wlan0: 6850834040 4863691    0    0    0     0          0         0 390624168 2386332    0    0    0     0       0          0
    lo: 4797754   45841    0    0    0     0          0         0  4797754   45841    0    0    0     0       0          0
 bond0: 9170967420 6680114    0   19    0     0          0       103 534043778 3080461    0    0    0     0       0          0
dummy0:       0       0    0    0    0     0          0         0  1057122   20168    0    0    0     0       0          0
virbr0: 102641590  389757    0   86    0     0          0         0 1681561174  510684    0    0    0     0       0          0
 vnet1:  195144    1677    0    0    0     0          0         0   485654    8620    0    0    0     0       0          0
  eth0: 2320133662 1816425    0   19    0     0          0       103 143419898  694131    0    0    0     0       0          0
 vnet0: 107813918  387700    0    0    0     0          0         0 1682074039  521679    0    0    0     0       0          0

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
 wlan0: 6850834040 4863691    0    0    0     0          0         0 390624168 2386332    0    0    0     0       0          0
    lo: 4797754   45841    0    0    0     0          0         0  4797754   45841    0    0    0     0       0          0
 bond0: 9170967420 6680114    0   19    0     0          0       103 534043778 3080461    0    0    0     0       0          0
dummy0:       0       0    0    0    0     0          0         0  1057122   20168    0    0    0     0       0          0
virbr0: 102641590  389757    0   86    0     0          0         0 1681561174  510684    0    0    0     0       0          0
 vnet1:  195144    1677    0    0    0     0          0         0   485654    8620    0    0    0     0       0          0
  eth0: 2320133662 1816425    0   19    0     0          0       103 143419898  694131    0    0    0     0       0          0
 vnet0: 107813918  387700    0    0    0     0          0         0 1682074039  521679    0    0    0     0       0          0
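For reference, the /proc/net/dev format shown above can be parsed with the Go standard library alone. This is a minimal sketch, not Telegraf's actual implementation; the sample is a truncated copy of the dump above, and the field layout (8 receive columns followed by 8 transmit columns) is taken from the header line:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// sample is a truncated /proc/net/dev dump in the two-header-line layout shown above.
const sample = `Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo: 4797754   45841    0    0    0     0          0         0  4797754   45841    0    0    0     0       0          0
  eth0: 2320133662 1816425    0   19    0     0          0       103 143419898  694131    0    0    0     0       0          0`

// netDevBytes parses /proc/net/dev-style content and returns, per interface,
// the receive and transmit byte counters ([0] = rx bytes, [1] = tx bytes).
func netDevBytes(content string) map[string][2]uint64 {
	out := make(map[string][2]uint64)
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := sc.Text()
		idx := strings.Index(line, ":")
		if idx < 0 {
			continue // the two header lines contain no colon
		}
		name := strings.TrimSpace(line[:idx])
		fields := strings.Fields(line[idx+1:])
		if len(fields) < 16 {
			continue // malformed line; 8 rx + 8 tx columns expected
		}
		rx, _ := strconv.ParseUint(fields[0], 10, 64) // first rx column: bytes received
		tx, _ := strconv.ParseUint(fields[8], 10, 64) // first tx column: bytes transmitted
		out[name] = [2]uint64{rx, tx}
	}
	return out
}

func main() {
	for name, b := range netDevBytes(sample) {
		fmt.Printf("%s rx=%d tx=%d\n", name, b[0], b[1])
	}
}
```

Running the same parser against /proc/{pid}/net/dev and /proc/net/dev would make the equality described in this comment easy to check programmatically.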

Is this data only useful for processes run in a network namespace?

@danielnelson I think so. To my knowledge, Linux does not provide real per-process statistics (the amount of data read or written by a specific process); /proc/{pid}/net/dev only contains the interfaces visible in the process's network namespace, and those values are no different from the ones in /proc/net/dev.

So the gathered metrics might give different information than users expect. I think one of the reasons this issue hasn't been implemented is that there's no way to get the information people really want.

I don't have a need for this, but I do need polling to be fast. Based on @danielnelson's and @wingsof's comments, I think it might make sense to be able to turn network stats on or off per monitored process.
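If collection were made optional, the toggle could live in each procstat instance's configuration. A sketch of what that might look like (the `collect_network_stats` option is hypothetical and does not exist in Telegraf; `pattern` is an existing procstat setting):

```toml
[[inputs.procstat]]
  pattern = "nginx"
  ## Hypothetical per-instance toggle; not an existing Telegraf option.
  collect_network_stats = false
```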

Given that this data is essentially the same as the data reported by the net input, perhaps there is a way we could just report the process's network namespace, and then add support for collecting per-network-namespace metrics in the net plugin.
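On Linux, a process's network namespace can be identified by reading the /proc/{pid}/ns/net symlink; processes sharing a namespace resolve to the same identifier. A stdlib-only sketch of how that tag value could be obtained (Linux-specific; this is an illustration, not Telegraf code):

```go
package main

import (
	"fmt"
	"os"
)

// netNamespace returns the network-namespace identifier of a process,
// e.g. "net:[4026531992]", by reading the /proc/<pid>/ns/net symlink.
// Two processes in the same namespace return the same string, so it
// could serve as a join key against per-namespace interface metrics.
func netNamespace(pid string) (string, error) {
	return os.Readlink("/proc/" + pid + "/ns/net")
}

func main() {
	ns, err := netNamespace("self")
	if err != nil {
		fmt.Println("not available on this platform:", err)
		return
	}
	fmt.Println(ns)
}
```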

When using the Flux query language in InfluxDB 1.7 and newer it is possible to join on tags.
