__System info:__ InfluxDB 1.4.3 (latest Docker image on arm)
__Steps to reproduce:__
Hello, I am seeing continuously high CPU usage while running this container on a Raspberry Pi 3.

Also, the memory usage is a bit odd
I am hosting Grafana, InfluxDB and Home Assistant on this machine. I noticed this when I started monitoring these containers with Telegraf.

I have tried stopping Grafana and Home Assistant, but the CPU usage spikes continue. What can I do?
__Expected behavior:__ Normal CPU usage, and some CPU spikes.
__Actual behavior:__ High CPU usage.
__Additional info:__ [Include gist of relevant config, logs, etc.]
I've now discovered these debug commands, so in a few hours I'll post the results of some of them.
Also, if this is an issue with performance, locking, etc., the following commands are useful to create debug information for the team:
```sh
curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"
curl -o vars.txt "http://localhost:8086/debug/vars"
iostat -xd 1 30 > iostat.txt
```
I don't have iostat installed on the Raspberry Pi, but I can install it if needed.
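If it's easier to run as a script, here is a small sketch in Go that grabs the same two HTTP debug artifacts as the curl commands above. It assumes the default `localhost:8086` endpoint shown there (iostat still has to be run separately):

```go
// fetch_debug.go - a minimal sketch that saves the same debug artifacts as the
// curl commands above; assumes InfluxDB is reachable on localhost:8086.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// save downloads url and writes the response body to path.
func save(url, path string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	targets := map[string]string{
		"profiles.tar.gz": "http://localhost:8086/debug/pprof/all?cpu=true",
		"vars.txt":        "http://localhost:8086/debug/vars",
	}
	for path, url := range targets {
		if err := save(url, path); err != nil {
			fmt.Fprintf(os.Stderr, "failed to fetch %s: %v\n", url, err)
			os.Exit(1)
		}
		fmt.Println("wrote", path)
	}
}
```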
@alexdrl could you retest this with the latest 1.5.1?
I am using version 1.5.1, from the influxdb:latest Docker image, and I am seeing the same behaviour... Do you want me to re-upload some logs?
@alexdrl I checked internally and we have at least one person running 1.4.2 on an RPi 3 and seeing ~6% cpu usage but he's not running in docker. Could you test InfluxDB outside of docker?
@dgnorton this weekend I'll check that. I'll install the .deb package. Thanks!
@alexdrl two other ideas brought up during our internal discussion were:
Building influxd with Go 1.10 might help. One of the team looked at the profile you posted (thanks for that, by the way) and noticed it was spending a lot of time in HLL Count. Go 1.10 added support for more ARM instructions, which may improve perf on the RPi.

The other idea was to disable the internal monitor, because that appears, from the profile, to be creating a lot of load. The monitor can be disabled in the config:

```toml
[monitor]
store-enabled = false
```
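Since you're running the official Docker image: if I'm reading the image docs right, it also supports overriding config settings through environment variables of the form `INFLUXDB_<SECTION>_<NAME>`, so setting `INFLUXDB_MONITOR_STORE_ENABLED=false` on the container should have the same effect without editing the config file.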
Wow, this has changed a steady 100% CPU usage into a steady 0% with occasional spikes :D
What does that component do? I'll post another graph of the CPU and memory utilisation after a few more hours of logging with Telegraf.
P.S.: Improving the general speed of some ARM instructions via Go 1.10 also seems like a good idea.
> Wow, this has changed a steady 100% CPU usage into a steady 0% with occasional spikes :D
That's great news!
> What does that component do?
Monitoring keeps track of InfluxDB's internal stats, like the number of points written, an estimate of the number of series, etc. It's sometimes useful for diagnosing problems. Some of these internal stats can be expensive to compute with 100% accuracy, and that level of accuracy isn't always needed. For example, if influxd were OOMing, it might be helpful to know roughly how many series have been written; if the estimate says there are 200M series but there are really only 195M, that's close enough, since either of those is likely to cause a problem with an in-memory index.

As mentioned in my earlier comment, InfluxDB uses HLL to compute that estimate. It's efficient on Intel hardware but hasn't had much testing on ARM. It will be interesting to see if building influxd with Go 1.10 improves ARM performance for internal monitoring.
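If it helps to picture what that estimator is doing, here is a deliberately simplified HyperLogLog-style sketch in Go: hash each series key, remember the longest run of leading zero bits seen per register, and derive an estimate from those maxima. This is only an illustration of the technique, not InfluxDB's actual implementation, and the register count and constants are just example values.

```go
// hll_sketch.go - a simplified HyperLogLog-style cardinality estimator,
// for illustration only (NOT InfluxDB's implementation).
package main

import (
	"fmt"
	"hash/fnv"
	"math"
	"math/bits"
)

const (
	b = 10     // 2^10 = 1024 registers (example value)
	m = 1 << b // number of registers
)

type hll struct {
	reg [m]uint8
}

// Add hashes one item and updates the register it maps to.
func (h *hll) Add(s string) {
	hasher := fnv.New64a()
	hasher.Write([]byte(s))
	x := hasher.Sum64()

	idx := x >> (64 - b)                       // first b bits pick a register
	rank := uint8(bits.LeadingZeros64(x<<b) + 1) // position of first 1-bit in the rest
	if rank > h.reg[idx] {
		h.reg[idx] = rank
	}
}

// Count returns the estimated number of distinct items added so far.
func (h *hll) Count() float64 {
	sum, zeros := 0.0, 0
	for _, r := range h.reg {
		sum += math.Pow(2, -float64(r))
		if r == 0 {
			zeros++
		}
	}
	alpha := 0.7213 / (1 + 1.079/float64(m))
	est := alpha * m * m / sum
	// Small-range correction: fall back to linear counting.
	if est <= 2.5*m && zeros > 0 {
		est = float64(m) * math.Log(float64(m)/float64(zeros))
	}
	return est
}

func main() {
	var h hll
	for i := 0; i < 200000; i++ {
		h.Add(fmt.Sprintf("cpu,host=server%06d", i)) // fake series keys
	}
	fmt.Printf("true=200000 estimated=%.0f\n", h.Count())
}
```

With 1024 registers the estimate is typically within a few percent of the true count, which is the trade-off described above: a small, fixed amount of state instead of exact tracking of every series.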
Thanks for reporting. I'm going to close this issue since that config change seems to have fixed your problem. If you feel there's still an issue with this, we can reopen the issue.
Thank you for looking into and solving the issue, and for the thorough explanation. Ping me if you want me to test a Go 1.10-compiled build with that option enabled.
Not sure if the bug should be re-opened, but I experienced the same behaviour on a Raspberry Pi Zero. I am running an InfluxDB instance with only 2 writes every 11 seconds. I saw lots of empty values, so I started investigating, and it turned out InfluxDB was running at close to full CPU most of the time.
The config change above seems to have partly fixed my problem: I am now seeing 4-6% CPU utilisation on writes, rather than 97% continuously.
I am not a developer, but this smells buggy to me? Thanks a lot for the solution though! =)
For some strange reason my solution was to stop the service, start influxd by hand and let it run for a while, and the high CPU load went away. :D