Add Ceph Cluster Performance Statistics
The ceph plugin only covers the small set of data available from the admin socket.
Commands like `ceph status`, `ceph df`, and `ceph osd pg stat` give a far richer set of performance metrics, e.g. placement group states and IOPS/reads/writes on a global and per-pool basis.
We rely on this for roadmap capacity planning, maintenance window scheduling, performance profiling, spotting errors, etc.
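For illustration, a minimal sketch of what consuming that richer data could look like: parse `ceph df --format json` output into flat metrics. The field names (`stats.total_bytes`, per-pool `stats.bytes_used`/`objects`) are assumptions based on recent Ceph releases; older releases report kilobyte fields instead.

```python
import json

def df_to_metrics(df_json):
    """Flatten `ceph df --format json` output into a dict of global and
    per-pool metrics (assumed field names; adjust for your Ceph release)."""
    data = json.loads(df_json) if isinstance(df_json, str) else df_json
    metrics = {}
    stats = data.get("stats", {})
    # Global cluster usage
    metrics["cluster.total_bytes"] = stats.get("total_bytes", 0)
    metrics["cluster.total_used_bytes"] = stats.get("total_used_bytes", 0)
    metrics["cluster.total_avail_bytes"] = stats.get("total_avail_bytes", 0)
    # Per-pool usage
    for pool in data.get("pools", []):
        prefix = "pool.%s" % pool["name"]
        pstats = pool.get("stats", {})
        metrics[prefix + ".bytes_used"] = pstats.get("bytes_used", 0)
        metrics[prefix + ".objects"] = pstats.get("objects", 0)
    return metrics

sample = {
    "stats": {"total_bytes": 100, "total_used_bytes": 40, "total_avail_bytes": 60},
    "pools": [{"name": "rbd", "stats": {"bytes_used": 40, "objects": 10}}],
}
print(df_to_metrics(sample)["pool.rbd.objects"])  # -> 10
```

In production the `sample` dict would of course come from running the actual command and capturing its stdout.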

FYI, that image was a live capture of a real CRUSH map update, and exactly what we want to see!
I made a similar thing here if it is of use: https://github.com/Buhrietoe/ceph-metrics
It just uses the exec plugin. We run it in production in a container. It can easily be extended and supports multiple clusters (although serially): drop mycluster.conf and mycluster.keyring into /etc/ceph/clusters/
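For anyone wanting to wire a script like that in, a hedged sketch of an exec-plugin stanza (this assumes Telegraf's exec input; the command path and `--cluster` flag are purely illustrative, not part of the linked project):

```toml
# Illustrative exec input configuration (Telegraf assumed)
[[inputs.exec]]
  # Hypothetical wrapper script emitting metrics for one cluster
  commands = ["/usr/local/bin/ceph-metrics --cluster mycluster"]
  data_format = "influx"
  interval = "30s"
```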
Using the officially supported Ceph Python library makes the implementation side a lot easier. It would be nice if someone could do a pure Go implementation, but @Buhrietoe's solution is completely functional in the meantime.
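As a sketch of that approach, the `rados` bindings shipped with Ceph expose global usage directly via `get_cluster_stats()`. The conffile path, measurement name, and `collect` helper below are illustrative assumptions, not code from the linked project:

```python
def stats_to_line_protocol(stats, cluster="ceph"):
    """Render a stats dict (e.g. from get_cluster_stats()) as a single
    InfluxDB line-protocol record with integer fields."""
    fields = ",".join("%s=%di" % (k, v) for k, v in sorted(stats.items()))
    return "ceph_usage,cluster=%s %s" % (cluster, fields)

def collect(conffile="/etc/ceph/ceph.conf", cluster="ceph"):
    """Connect via librados and return one line-protocol record.
    Requires the python-rados package and a reachable cluster."""
    import rados
    c = rados.Rados(conffile=conffile)
    c.connect()
    try:
        # get_cluster_stats() returns global usage: kb, kb_used, kb_avail, num_objects
        return stats_to_line_protocol(c.get_cluster_stats(), cluster)
    finally:
        c.shutdown()

# Formatting can be exercised without a cluster:
print(stats_to_line_protocol({"kb": 100, "kb_used": 40, "kb_avail": 60}, "demo"))
# -> ceph_usage,cluster=demo kb=100i,kb_avail=60i,kb_used=40i
```

Keeping the formatting separate from the librados call makes the output easy to check without a live cluster.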
@spjmurray Very nice plugin. Could you share your Grafana dashboard?
@aderumier Sorry this took so long!
https://gist.github.com/spjmurray/41a6cf650a725ae21729af9e9b12697e