Telegraf: LVM plugin

Created on 5 Nov 2015 · 27 comments · Source: influxdata/telegraf

Is there a LVM plugin available? The disk plugin does not seem to be able to figure out the sizes of LVM volumes correctly.

I am using version 0.2.0

feature request help wanted

Most helpful comment

Hi all

I'm very interested in such a plugin. I'm running Docker CE on CentOS 7 with the devicemapper (LVM-based) storage driver.

What Docker does is use a thin-provisioned LV which is never mounted (so it is not monitored by the disk plugin; basically the same problem as @m00dawg), but from which it then creates block devices (type dm in lsblk output), each 10 GB by default, mounted as an XFS partition for the container to use. This behaviour might explain the outputs from @kotarusv discussed last year.

I'm not particularly interested in monitoring the usage of each block device Docker creates per container; since I use volumes everywhere data is written, that is not very interesting.

BUT I am very interested in monitoring Docker's dedicated thin-provisioned LV (which I can check with lvs). It is currently monitored via syslog, but that only helps once the thin pool reaches a certain usage threshold and an alert is raised. I would like to be able to watch this LV's usage live, even while it is below that threshold.

So it would be nice if the disk plugin were extended to monitor VG and LV usage, or if there were a dedicated plugin for monitoring DM device usage even when the devices are not mounted.
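One way to get these numbers into Telegraf today is to convert `lvs` output to InfluxDB line protocol and feed it through the exec input. This is a sketch, not an official plugin: the function name, the measurement name `lvm`, and the field names are my own choices; the `lvs` report columns (`vg_name`, `lv_name`, `lv_size`, `data_percent`) are real lvm2 fields.

```shell
# Convert `lvs --noheadings --units b --nosuffix --separator ',' \
#   -o vg_name,lv_name,lv_size,data_percent` output (read from stdin)
# into InfluxDB line protocol. data_percent is only reported for thin pools.
lv_to_influx() {
    while IFS=',' read -r vg lv size datapct; do
        vg=$(echo "$vg" | tr -d ' ')    # lvs left-pads the first column
        lv=$(echo "$lv" | tr -d ' ')
        [ -n "$lv" ] || continue
        printf 'lvm,vg=%s,lv=%s size_bytes=%si,data_percent=%s\n' \
            "$vg" "$lv" "$size" "${datapct:-0}"
    done
}

# Real use (requires root for lvs):
#   sudo lvs --noheadings --units b --nosuffix --separator ',' \
#       -o vg_name,lv_name,lv_size,data_percent | lv_to_influx
```

Paired with an `[[inputs.exec]]` block using `data_format = "influx"`, this reports thin-pool usage (the Data% column of lvs) even though the pool is never mounted.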

All 27 comments

There is no LVM plugin, but that would be good to have

Thanks for the info; I'll see whether I can add one.

This is a very important plugin. I desperately need to measure my Docker volume sizes, which are hosted on LVM block storage, per host. I'm wondering how these basic plugins are missing.

Any idea what the current status is?

My logical volumes seem to work fine; can someone expand on the issue?

disk,path=/home,device=mapper/xyzzy-home,fstype=ext4,host=loaner used_percent=41.12732200862653,inodes_total=6553600i,inodes_free=6071308i,inodes_used=482292i,total=105555197952i,free=58972594176i,used=41197121536i 1493325713000000000

SELECT max("total") FROM "disk" WHERE "hostname" =~ /^$hostname$/ AND "device" = 'mapper/docker-8:1-50333733-000df53ce4e5fd58fccdbb7f7c991c5292d49dc2e214d3679f5419d9d6864e36' AND "fstype" = 'xfs' AND $timeFilter GROUP BY time($interval), "fstype" fill(null)

My Docker LVM size is 300 GB, but it is showing just 10 GB.


Can you paste the output of df?

As far as I know, the df command won't show the Docker thin volume.

docker info

Server Version: 1.12.5
Storage Driver: devicemapper
Pool Name: docker--vg-docker--pool
Pool Blocksize: 524.3 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 35.4 GB
Data Space Total: 128.7 GB
Data Space Available: 93.31 GB
Metadata Space Used: 14.2 MB
Metadata Space Total: 323 MB
Metadata Space Available: 308.8 MB
Thin Pool Minimum Free Space: 12.87 GB

$ df -PH
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 11G 7.9G 2.9G 74% /
devtmpfs 34G 4.1k 34G 1% /dev
tmpfs 34G 0 34G 0% /dev/shm
tmpfs 34G 3.5G 31G 11% /run
tmpfs 34G 0 34G 0% /sys/fs/cgroup
/dev/mapper/vg_root-var_tmp_lv 4.3G 37M 4.3G 1% /var/tmp
/dev/mapper/vg_root-users_lv 4.3G 42M 4.3G 1% /users
/dev/mapper/vg_root-tmp_lv 4.3G 35M 4.3G 1% /tmp
17G 24% /auto/usrcisco-noarch
tmpfs 6.8G 0 6.8G 0% /run/user/0
tmpfs 6.8G 0 6.8G 0% /run/user/209969

Okay, I guess the disk plugin doesn't work for the same reason df doesn't, since they both use the same system call AFAIK. Can you add the output of lvs too?
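For context: both df and the disk plugin (via gopsutil) read usage through the statfs/statvfs family of system calls, which only answer for a mounted path; an unmounted thin pool has nothing to query. With GNU coreutils you can see the same raw numbers df works from:

```shell
# statfs the filesystem containing / (what df does under the hood):
# %S = fundamental block size, %b = total blocks,
# %a = blocks available to unprivileged users
stat -f -c 'block_size=%S total_blocks=%b avail_blocks=%a' /
```

Running this against a mount point of an LV (e.g. /var/log above) gives the LV's filesystem usage; running it against the thin pool is impossible, since the pool has no mount point.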

$ sudo lvs
LV          VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
docker-pool docker-vg twi-aot--- 119.88g             27.51  4.39
swap_lv     vg_root   -wi-ao----   7.90g
tmp_lv      vg_root   -wi-ao----   4.00g
users_lv    vg_root   -wi-ao----   4.00g
var_tmp_lv  vg_root   -wi-ao----   4.00g
$ sudo pvs
PV         VG        Fmt  Attr PSize   PFree
/dev/sdc1  docker-vg lvm2 a--  300.00g 179.52g
/dev/sdd   vg_root   lvm2 a--   20.00g  96.00m
$ sudo vgs
VG        #PV #LV #SN Attr   VSize   VFree
docker-vg   1   1   0 wz--n- 300.00g 179.52g
vg_root     1   4   0 wz--n-  20.00g  96.00m

Guys, any progress or update on this? I ran into a similar issue today. /var/log is mounted on our systems as an LVM volume. I can see this volume in the df -PH output (unlike the Docker storage, which I can't see in df output), but I'm still not able to see the LVs in the device list.

df -t xfs

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 10473900 9785120 688780 94% /
/dev/mapper/vg_root-users_lv 4184064 41028 4143036 1% /users
/dev/mapper/vg_root-var_tmp_lv 4184064 35908 4148156 1% /var/tmp
/dev/mapper/vg_root-tmp_lv 4184064 33508 4150556 1% /tmp
/dev/mapper/vg_root-varlog 52403200 1737572 50665628 4% /var/log

df -PH
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 11G 11G 704M 94% /
devtmpfs 68G 4.1k 68G 1% /dev
tmpfs 68G 0 68G 0% /dev/shm
tmpfs 68G 4.5G 64G 7% /run
tmpfs 68G 0 68G 0% /sys/fs/cgroup
/dev/mapper/vg_root-users_lv 4.3G 43M 4.3G 1% /users
/dev/mapper/vg_root-var_tmp_lv 4.3G 37M 4.3G 1% /var/tmp
/dev/mapper/vg_root-tmp_lv 4.3G 35M 4.3G 1% /tmp
tmpfs 14G 0 14G 0% /run/user/0
/dev/mapper/vg_root-varlog 54G 1.8G 52G 4% /var/log
tmpfs 14G 0 14G 0% /run/user/351308
tmpfs 14G 0 14G 0% /run/user/209969

I'm able to view only the / filesystem: the fstype is xfs, but only for the root filesystem, and the device is only sda1. I'm unable to view any info about the other volumes.

Any idea? Is this a bug, or intentional?

Srinivas Kotaru

telegraf -config /etc/telegraf/telegraf.conf -input-filter disk -test

  • Plugin: inputs.disk, Collection 1
    > disk,device=rootfs,fstype=rootfs,datacenter=rcdn,cluster=cae-ga-rcdn,path=/ inodes_total=5238784i,inodes_free=5224757i,inodes_used=14027i,total=10718543872i,free=10437115904i,used=281427968i,used_percent=2.6256175405987086 1497297498000000000
    > disk,datacenter=rcdn,path=/,device=mapper/docker-8:1-34210753-cd03da496c4b7b6dfceca4a0d1c84b266f52806ad63639cb39c73d4626dc8b5f,fstype=xfs,cluster=cae-ga-rcdn free=10437115904i,used=281427968i,used_percent=2.6256175405987086,inodes_total=5238784i,inodes_free=5224757i,inodes_used=14027i,total=10718543872i 1497297498000000000
    > disk,path=/dev/termination-log,device=sda1,fstype=xfs,datacenter=rcdn,cluster=cae-ga-rcdn inodes_free=10368060i,inodes_used=116036i,total=10725273600i,free=2659074048i,used=8066199552i,used_percent=75.20740125454701,inodes_total=10484096i 1497297498000000000
    > disk,device=sda1,fstype=xfs,datacenter=rcdn,cluster=cae-ga-rcdn,,path=/run/secrets inodes_free=10368060i,inodes_used=116036i,total=10725273600i,free=2659074048i,used=8066199552i,used_percent=75.20740125454701,inodes_total=10484096i 1497297498000000000
    > disk,datacenter=rcdn,cluster=cae-ga-rcdn,,path=/etc/resolv.conf,device=sda1,fstype=xfs inodes_free=10368060i,inodes_used=116036i,total=10725273600i,free=2659074048i,used=8066199552i,used_percent=75.20740125454701,inodes_total=10484096i 1497297498000000000
    > disk,device=sda1,fstype=xfs,datacenter=rcdn,cluster=cae-ga-rcdn,,path=/etc/hostname inodes_free=10368060i,inodes_used=116036i,total=10725273600i,free=2659074048i,used=8066199552i,used_percent=75.20740125454701,inodes_total=10484096i 1497297498000000000
    > disk,path=/etc/hosts,device=sda1,fstype=xfs,cluster=cae-ga-rcdn,,datacenter=rcdn free=2659074048i,used=8066199552i,used_percent=75.20740125454701,inodes_total=10484096i,inodes_free=10368060i,inodes_used=116036i,total=10725273600i 1497297498000000000
    > disk,path=/etc/telegraf/telegraf.conf,device=sda1,fstype=xfs,datacenter=rcdn,cluster=cae-ga-rcdn,used_percent=75.20740125454701,inodes_total=10484096i,inodes_free=10368060i,inodes_used=116036i,total=10725273600i,free=2659074048i,used=8066199552i 1497297498000000000

If you look at the test command output, it is clearly giving info about only / (the root filesystem).

@danielnelson : Can you comment on this behaviour?

Did you edit that output? I'm asking because I see ,, in the tags section. Also can you add the disk plugin config?

I just removed the hostname, which would identify us.

Okay, just double-checking that there isn't an issue with the output. Can you paste your disk input config?

[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false

[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs"]

[[inputs.diskio]]

[[inputs.kernel]]

[[inputs.mem]]

[[inputs.processes]]

[[inputs.swap]]

[[inputs.system]]

[[inputs.net]]

[[inputs.netstat]]

[[inputs.docker]]
endpoint = "unix:///var/run/docker.sock"
timeout = "30s"
perdevice = true
total = true
docker_label_exclude = ["*"]

[[inputs.procstat]]
exe = "dockerd-current"
prefix = "docker"

[[inputs.procstat]]
exe = "openshift"
prefix = "openshift"

Thanks. The thing I was most curious about was the path tags such as path=/etc/resolv.conf; this is pretty weird, as I would expect these to be mount points.


Is it because my Telegraf is running as a container and can only see the container's filesystem? But it is running as a privileged container. If it is getting other metrics like CPU and disk, then it should get these metrics too. I'm not sure why they are missing.

FYI

In the Telegraf Dockerfile I am not mounting /proc, /usr, /dev, or /lib/modules from the host filesystem. I am only exporting the Docker socket inside the Telegraf container, and Telegraf has read-only access to the Docker socket.

Since every container also has its own /proc, /usr, /dev, and /lib/modules, I didn't export them for security reasons; I don't want the Telegraf container to crash the host, since it is running as a privileged container.

The list of paths comes from /etc/mtab; can you add the contents of this file?

This is from the Telegraf container. On the host where the container is running, I am seeing the correct mounts, including the LVs.

cat /etc/mtab

rootfs / rootfs rw 0 0
/dev/mapper/docker-8:1-34210753-cd03da496c4b7b6dfceca4a0d1c84b266f52806ad63639cb39c73d4626dc8b5f / xfs rw,seclabel,relatime,nouuid,attr2,inode64,sunit=1024,swidth=1024,noquota 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev tmpfs rw,seclabel,nosuid,mode=755 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666 0 0
sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
tmpfs /sys/fs/cgroup tmpfs rw,seclabel,nosuid,nodev,noexec,relatime,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/net_prio,net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/cpuacct,cpu cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
mqueue /dev/mqueue mqueue rw,seclabel,nosuid,nodev,noexec,relatime 0 0
/dev/sda1 /dev/termination-log xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
shm /dev/shm tmpfs rw,seclabel,nosuid,nodev,noexec,relatime,size=65536k 0 0
/dev/sda1 /run/secrets xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/resolv.conf xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/hostname xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/hosts xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/telegraf/telegraf.conf xfs ro,seclabel,relatime,attr2,inode64,noquota 0 0
tmpfs /run/docker.sock tmpfs ro,seclabel,relatime,mode=755 0 0
tmpfs /run/secrets/kubernetes.io/serviceaccount tmpfs ro,rootcontext=system_u:object_r:svirt_sandbox_file_t:s0,seclabel,relatime 0 0
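The path tags in the plugin output map one-to-one to the mount points in this file. As a sketch, the lookup amounts to reading field 2 of each mtab line (the fields are: device, mount point, fstype, options, dump, pass) and skipping virtual filesystems; the fstype skip list below is illustrative and mirrors the ignore_fs setting:

```shell
# Print "device mountpoint" for each non-virtual filesystem given
# mtab/mounts-format input (whitespace-separated fields) on stdin.
list_mounts() {
    awk '$3 != "proc" && $3 != "sysfs" && $3 != "tmpfs" &&
         $3 != "devpts" && $3 != "cgroup" && $3 != "mqueue" {print $1, $2}'
}

# e.g.  list_mounts < /etc/mtab
```

Run against the container's mtab above, this yields exactly the sda1 bind mounts and the container's devicemapper root, which is why the host LVs never appear.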

I guess the weird paths are the config files you are binding into the container.

/dev/sda1 /run/secrets xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/resolv.conf xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/hostname xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/hosts xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /etc/telegraf/telegraf.conf xfs ro,seclabel,relatime,attr2,inode64,noquota 0 0

Okay, that's just a distraction, we are trying to figure out why the varlog partition is not found.

It seems like you would need the host's mtab file to be reachable for it to know about them. Honestly I'm not sure exactly how this is supposed to work, but the HOST_ETC environment variable can be used to change the path to /etc: https://github.com/shirou/gopsutil/blob/master/internal/common/common.go#L316. This is discussed in more depth over on #1544
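For reference, the resolution in gopsutil is simply "use $HOST_ETC if set, otherwise /etc", so bind-mounting the host filesystem and pointing the variable at it is enough. A sketch (the /hostfs mount point is an arbitrary name chosen here):

```shell
# Mimic gopsutil's HostEtc() path resolution: $HOST_ETC overrides /etc.
mtab_path() { echo "${HOST_ETC:-/etc}/mtab"; }

mtab_path                         # -> /etc/mtab in a plain container
HOST_ETC=/hostfs/etc mtab_path    # -> /hostfs/etc/mtab with the host bind-mounted

# Corresponding container start (host root mounted read-only at /hostfs):
#   docker run -v /:/hostfs:ro -e HOST_ETC=/hostfs/etc \
#       -e HOST_PROC=/hostfs/proc -e HOST_SYS=/hostfs/sys telegraf
```

With the host's /etc/mtab visible this way, the disk plugin enumerates the host's mounts (including the LV mount points) instead of the container's.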

Most of the container-specific agents (Datadog, Sysdig, etc.) mount host filesystems inside the container so they can distinguish between container filesystems and host filesystems.

https://stackoverflow.com/questions/33397382/host-monitoring-from-a-docker-container

Can we use Telegraf like that? I know Telegraf can be used as a container or as a standalone agent. If we use it as a container but still want to monitor the host it is running on, we need to export the host filesystems into the container and have them interpreted as host data.

I believe this is the subject of the discussion on the issue I linked, I would try following these steps https://github.com/influxdata/telegraf/issues/1544#issuecomment-278955406

Just to add my particular situation: I want to monitor LVM volume group usage for an OpenStack Cinder server/node. Since none of the logical volumes are even mounted on the Cinder node itself, I don't have a way to look at the space used via Telegraf directly (as far as I am aware).


