Influxdb: InfluxDB refusing connections

Created on 4 Oct 2017  ·  36 Comments  ·  Source: influxdata/influxdb

Bug report

__System info:__ [Include InfluxDB version, operating system name, and other relevant details]
InfluxDB 1.3.5 on Ubuntu 16.04.3 LTS

__Steps to reproduce:__

  1. sudo influxd run -config /etc/influxdb/influxdb.conf
  2. influx

__Expected behavior:__ [What you expected to happen]
I can connect to InfluxDB.

__Actual behavior:__ [What actually happened]

Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: getsockopt
Please check your connection settings and ensure 'influxd' is running.

__Additional info:__
This issue crops up seemingly at random. It usually happens after I write a large set of data, the write fails partway through, and I restart the database. Deleting the databases doesn't fix it. The only solution I've found that consistently works so far is reinstalling InfluxDB.

When I run sudo service influxdb start then check sudo service influxdb status, it checks out as active (running). No error messages show up in the output, just typical database reads.
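A quick way to confirm whether influxd is actually bound to the port, despite systemd reporting the service as active, is to list the listening sockets (a minimal check, assuming the default HTTP port 8086):

sudo ss -ltnp | grep -E '8086|influxd'

If nothing shows up, the HTTP listener was never opened even though the service is considered running.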

curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"

curl: (7) Failed to connect to localhost port 8086: Connection refused
curl -o vars.txt "http://localhost:8086/debug/vars"

curl: (7) Failed to connect to localhost port 8086: Connection refused
iostat -xd 1 30 > iostat.txt
Linux 4.4.0-72-generic (bcaa-maas-rc)   10/03/2017      _x86_64_        (48 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.39     3.64    1.34    6.16    33.31   382.12   110.75     0.02    3.24    6.63    2.50   0.47   0.35

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    2.00   42.00     8.00   192.00     9.09     0.01    0.18    4.00    0.00   0.18   0.80

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    3.00   74.00    12.00   316.00     8.52     0.02    0.21    5.33    0.00   0.21   1.60

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    3.00   47.00    20.00   224.00     9.76     0.00    0.08    1.33    0.00   0.08   0.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    18.00    1.00   57.00     4.00   336.00    11.72     0.14    2.48    0.00    2.53   0.07   0.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    3.00   87.00   224.00   360.00    12.98     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00   25.00    0.00  2484.00     0.00   198.72     0.08    3.04    3.04    0.00   2.56   6.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    11.00    0.00    2.00     0.00    52.00    52.00     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    5.00   79.00   488.00   368.00    20.38     0.09    1.14    0.00    1.22   0.10   0.80

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    16.00  101.00  167.00   676.00   760.00    10.72     0.74    2.76    5.78    0.93   0.30   8.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    44.00   32.00   29.00   440.00  5796.00   204.46     0.22    3.67    5.12    2.07   0.66   4.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     3.00  110.00  122.00   724.00   624.00    11.62     0.58    2.47    5.13    0.07   0.83  19.20

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    20.00  182.00  135.00  1796.00   648.00    15.42     1.85    5.87   10.22    0.00   0.92  29.20

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    15.00   24.00   44.00   120.00   828.00    27.88     0.10    1.53    4.17    0.09   0.76   5.20

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00   49.00     0.00   204.00     8.33     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00   22.00     0.00   108.00     9.82     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    1.00   43.00     4.00   200.00     9.27     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00   80.00     0.00   352.00     8.80     0.06    0.70    0.00    0.70   0.05   0.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    23.00    0.00   76.00     0.00   412.00    10.84     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00  122.00     0.00   520.00     8.52     0.11    0.92    0.00    0.92   0.07   0.80

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00   97.00     0.00   408.00     8.41     0.02    0.21    0.00    0.21   0.04   0.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00   65.00     0.00   296.00     9.11     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00    19.00    1.00   88.00     4.00   976.00    22.02     0.03    0.31   16.00    0.14   0.22   2.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     1.00    0.00  108.00     0.00   464.00     8.59     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00  157.00     0.00   708.00     9.02     0.12    0.76    0.00    0.76   0.08   1.20

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00   18.00     0.00    92.00    10.22     0.00    0.00    0.00    0.00   0.00   0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00  144.00     0.00   636.00     8.83     0.00    0.00    0.00    0.00   0.00   0.00

Labels: 1.x, area/performance, kind/bug, wontfix

Most helpful comment

@MarcVanOevelen I spent a few hours on this issue and found that, besides the approach you described, you can try the command below before removing the directory:
sudo chown -R influxdb:influxdb /var/lib/influxdb

All 36 comments

Quick update: this happened again on 1.3.6. I uninstalled and reinstalled, but that didn't fix it either. I had to uninstall and then install the latest nightly.

@natejgardner have you checked influxd.log?
When InfluxDB starts, it needs to read all shards before it can open the HTTP listener.
In my case (over 200 large shards), this takes more than 3 minutes.
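One way to tell whether the server is still in that startup phase is to watch the journal while polling the /ping endpoint; a rough sketch, assuming the service logs to the systemd journal and listens on the default port:

journalctl -u influxdb -f &
until curl -sf http://localhost:8086/ping; do sleep 5; done; echo "8086 is up"

The loop exits as soon as the HTTP listener opens; until then the journal should show shard and cacheloader activity.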

@natejgardner at the moment this just looks like Influx is starting up. Port 8086 is one of the last things to be opened, and then the database is ready for querying and writing. Can you provide the logs from this period so we can check them for you?

The database wasn't very large (~10 million records) and it remained in this condition indefinitely (it was offline for more than one day). I wonder if it's hanging when trying to start? I can't find any logs. /var/log/influxdb/ is an empty directory and I could not locate any file called influxd.log on my system.

@natejgardner would you be willing to send us your database via secure and private means? You could probably zip it up and email it (or say a dropbox link). Or we could provide you with access to our SFTP site. I think it will be hard to investigate this further without being able to reproduce it.

Oh, I just noticed:

Quick update: this happened again on 1.3.6. I uninstalled and reinstalled, but that didn't fix it either. I had to uninstall and then install the latest nightly.

Can you confirm this has been fixed with a nightly?

Hi,
I have exactly the same issue. I am using 1.3.7 and the nightly build, with no luck either.
CPU stays high, around 80~99%, and I can't run influx because the connection is refused.
By the way, how can I delete a database in this condition? Or do you need my database for further
analysis?

Having a similar issue as well after upgrading to 1.3.7. We are restarting Influx 1-2 times a day at the moment. After roughly 12 hours, Influx starts refusing connections on :8086 and backing up connections; a restart is the only way to resolve it. The logs do not show anything.

I am getting something similar on a Raspberry Pi Zero W with Jessie and a small Python script that writes points. On a hard boot it seems OK, but on the early-morning soft reboot that I hoped would cure this, it gives:
Failed to establish a new connection [Errno 111] Connection refused after Max retries and then hangs.

I've put a 60-second sleep before the connection in the Python script, but I may increase this to give the database time to start. Logging is off at the moment, because everything is stripped down so it can potentially run for months.
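Instead of a fixed sleep, a small wrapper that waits for the /ping endpoint before starting the writer might be more robust; a sketch, assuming the default port (the script path is hypothetical):

until curl -s -o /dev/null http://localhost:8086/ping; do sleep 5; done
python /home/pi/writer.py   # hypothetical path to the write script

curl exits non-zero while the connection is refused, so the loop keeps waiting until influxd actually accepts connections.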

I'm curious about sockets: could it be that they are exhausted because too many of them are stuck in the CLOSE_WAIT state, for example?
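For what it's worth, one quick way to check that theory is to count connection states on the InfluxDB port; a sketch, assuming the default 8086:

ss -tan | grep 8086 | awk '{print $1}' | sort | uniq -c

A large number of CLOSE-WAIT entries would support the socket-exhaustion idea; if the count is low, the refusal is more likely because the listener is not open at all.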

I have the same problem with influx 1.5.1 on a Raspberry Pi 2 with Jessie. For a few days now I have not been able to write to the DB.

Telegraf complains: E! Error writing to output [influxdb]: Could not write to any InfluxDB server in cluster InfluxDB Output Error: Post http://127.0.0.1:8086/write?consistency=any&db=telegraf: dial tcp 127.0.0.1:8086: getsockopt: connection refused

My own writer program outputs: SOCKET_ERROR : [111] Connection refused

Same issue here. Influx 1.5.2 on Raspberry Pi 3 with Jessie running Openhabian.

Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: getsockopt: connection refused
Please check your connection settings and ensure 'influxd' is running.

[22:37:03] openhabian@openHABianPi:/var/log$ ps ax | grep influx
29129 ? Ssl 0:03 /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Log dir is empty and no log at messages

The only way to make influxdb work again was to move /var/lib/influxdb/data out of the way. Re-installing influxdb without removing the data dir did not make it work.

I now have the problem that the old measurement values are in an inaccessible file format in the backup of the old data dir.

@RREE @saback when this happens (and influxd is not responding), could you please get us profile information using the following command (you may need to change the host).

curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"

You can either attach the archive here, or you can email it to me. My address is edd, the domain is the name of the database .com.

Hi @e-dard, if I understand correctly, it will not create the archive, since it can't connect.

Anyway, the command response is below:

curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to localhost port 8086: Connection refused

@saback when the process is hanging could you SIGQUIT the process, which will emit a stack trace.

You can either send the kill signal manually (kill -s SIGQUIT <pid>) or you can hit ctrl-\ on the process if it's in the foreground.
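For reference, when influxd runs under systemd the SIGQUIT stack trace goes to stderr, which normally ends up in the journal rather than on a terminal; a sketch for capturing it, assuming pidof finds the daemon:

sudo kill -s SIGQUIT $(pidof influxd)
journalctl -u influxdb --since "10 minutes ago" > influxd-stacktrace.txt

The resulting file can be attached here when the pprof endpoint is unreachable.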

Hi @e-dard, sent the profiles.tar.gz via email. The kill -s SIGQUIT didn't provide any output but allowed me to run the curl command.

Same problem here after upgrading to Influx 1.5.2 on a raspi 3 with jessie

$ sudo service influxdb status
● influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/lib/systemd/system/influxdb.service; enabled)
   Active: active (running) since Tue 2018-05-01 12:17:49 UTC; 6min ago

$ influx
Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: getsockopt: connection refused
Please check your connection settings and ensure 'influxd' is running.

Here is my little dirty hack... I don't know if it will help you, but running killall influxd and then /usr/bin/influxd -config /etc/influxdb/influxdb.conf does the trick, and influxd starts listening on 8086 as it should.
So I put this in crontab so it runs on reboot. It works for now, though it hasn't been tested long term. We will see in future updates whether it's fixed.

@reboot killall influxd
@reboot  /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Influx 1.5.2 on a Raspberry Pi 3 with Stretch 9.4, Linux kernel 4.14.39-v7+

Had the same problem when installing influxdb 1.6.2 on Ubuntu 16.04.
The problem seems to be caused by the permissions of the influxdb user:

When starting influxd as root, it starts listening on port 8086 as it should.
However, when it is started as the influxdb user (which happens when starting it from systemd),
the HTTP listener does not start!
i.e.
# sudo -u influxdb /usr/bin/influxd => no listener on 8086
# /usr/bin/influxd => works fine

Found the issue: some files in /var/lib/influxdb were owned by root instead of influxdb.
This was probably caused by leftovers of a previous installation and by having started influxd as
the root user.
Fix: clean up before installing
$ sudo rm -r /var/lib/influxdb
$ sudo apt-get remove influxdb
$ sudo apt-get install influxdb

Now when influxd is started via systemd it is correctly started and listening on port 8086 :-)

@MarcVanOevelen I spent a few hours on this issue and found that, besides the approach you described, you can try the command below before removing the directory:
sudo chown -R influxdb:influxdb /var/lib/influxdb
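To check whether this is the situation before deleting anything, listing files not owned by the influxdb user may help; a sketch, assuming the default data location:

sudo find /var/lib/influxdb ! -user influxdb -ls

If it prints anything, the chown above followed by a service restart should be enough; if it prints nothing, the silent startup failure probably has another cause.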

I have installed Oracle VirtualBox and have CentOS 7 running in it. I have installed InfluxDB on CentOS. Now I am trying to access InfluxDB from a Java web service (hosted on a local Tomcat server).
I am getting this error:
org.apache.http.conn.HttpHostConnectException: Connect to localhost:8086 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:158)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)

@MarcVanOevelen I spent a few hours on this issue and found that, besides the approach you described, you can try the command below before removing the directory:
sudo chown -R influxdb:influxdb /var/lib/influxdb

This worked well for me, thanks!

Is it possible to make InfluxDB check for and log this permissions issue rather than silently failing?

Unfortunately the chown solution doesn't work for me. Running InfluxDB 1.7 on RHEL now. Influx writes to /var/log/messages that it's opening all the shards and opening / reading the tsm1 files. It takes a long time to do this, but eventually it stops writing any messages. It doesn't write any errors to the log but still refuses connections. The database was working fine for a week before suddenly ceasing to function, just like before.

If I attempt to create a backup, I get:

2019/09/10 17:42:13 Download shard 0 failed copy backup to file: err=read tcp 127.0.0.1:19364->127.0.0.1:8088: read: connection reset by peer, n=0.  Waiting 2s and retrying (2)...

I started from scratch; three days later, this behavior started again when someone ran a long-running query. InfluxDB stopped responding, and on restart I get the same behavior. The only solution is to delete the data and start over. Is anyone else still dealing with this issue regularly?

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

This issue has been automatically closed because it has not had recent activity. Please reopen if this issue is still important to you. Thank you for your contributions.

Seeing a similar issue; below is the output of

service influxdb status
Redirecting to /bin/systemctl status influxdb.service
* influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/usr/lib/systemd/system/influxdb.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-03-26 16:41:17 IST; 2ms ago
     Docs: https://docs.influxdata.com/influxdb/
 Main PID: 98407 ((influxd))
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/influxdb.service
           `-98407 (influxd)
curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:01:45 --:--:--     0
curl: (56) Recv failure: Connection reset by peer

Seeing the error below in the journalctl log

Mar 26 16:48:51  systemd[1]: influxdb.service: main process exited, code=exited, status=1/FAILURE
Mar 26 16:48:51  systemd[1]: Unit influxdb.service entered failed state.
Mar 26 16:48:51  systemd[1]: influxdb.service failed.
Mar 26 16:48:51  systemd[1]: influxdb.service holdoff time over, scheduling restart.
Mar 26 16:48:51  systemd[1]: Stopped InfluxDB is an open-source, distributed, time series database.
Mar 26 16:48:51  systemd[1]: Started InfluxDB is an open-source, distributed, time series database.
Mar 26 16:48:54  influxd[108075]: ts=2020-03-26T11:18:54.347700Z lvl=info msg="InfluxDB starting" log_id=0Lmdub
Mar 26 16:48:54  influxd[108075]: ts=2020-03-26T11:18:54.347731Z lvl=info msg="Go runtime" log_id=0LmdubYl000 v
Mar 26 16:48:54  influxd[108075]: run: open server: listen: listen tcp 127.0.0.1:8088: bind: address already in
Mar 26 16:48:54  systemd[1]: influxdb.service: main process exited, code=exited, status=1/FAILURE
Mar 26 16:48:54  systemd[1]: Unit influxdb.service entered failed state.
Mar 26 16:48:54  systemd[1]: influxdb.service failed.
Mar 26 16:48:54  systemd[1]: influxdb.service holdoff time over, scheduling restart.
Mar 26 16:48:54  systemd[1]: Stopped InfluxDB is an open-source, distributed, time series database.
Mar 26 16:48:54  systemd[1]: Started InfluxDB is an open-source, distributed, time series database.
Mar 26 16:48:57  influxd[108126]: ts=2020-03-26T11:18:57.609233Z lvl=info msg="InfluxDB starting" log_id=0Lmduo
Mar 26 16:48:57  influxd[108126]: ts=2020-03-26T11:18:57.609263Z lvl=info msg="Go runtime" log_id=0LmduoIG000 v
Mar 26 16:48:57  influxd[108126]: run: open server: listen: listen tcp 127.0.0.1:8088: bind: address already in
Mar 26 16:48:57  systemd[1]: influxdb.service: main process exited, code=exited, status=1/FAILURE
Mar 26 16:48:57  systemd[1]: Unit influxdb.service entered failed state.
Mar 26 16:48:57  systemd[1]: influxdb.service failed.
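The journal above shows influxd exiting because 127.0.0.1:8088 (the backup/RPC bind-address) is already taken, so systemd keeps restarting it into the same failure. A quick way to see what is holding that port, as a sketch:

sudo ss -ltnp | grep 8088

If the owner turns out to be a leftover influxd process, stopping the service and killing that stray process before starting it again should break the restart loop.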

pi@raspberrypi:~ $ influx
Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection refused
Please check your connection settings and ensure 'influxd' is running.

This database has been functioning fine for months and months. I have been using Grafana to display 4 or 5 fields and recently increased that to about 20, and this problem occurred. I removed and re-installed and had the influx command connecting, then shortly after it was back to this... has anyone actually found the cause and the fix?

@awwbaker could you share the output of this cmd

curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"

You can also check how other users provided details to reproduce above.

Thank you for reaching out....
pi@raspberrypi:/ $ curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to localhost port 8086: Connection refused
pi@raspberrypi:/ $ ^C

Omg this is great

After updating the system I see the exact same issue with influx 1.8.0:
[screenshot]

All databases worked beforehand ... after the update more and more connection issues developed and now even after a fresh reboot it will almost instantly fail like this.

Can anybody please come to the rescue?
Thanks

We are having the same problem. This is a set of Docker containers including influxdb, kapacitor, telegraf, and chronograf. We can connect to influxdb and chronograf through their web interfaces, but nothing else seems to be able to communicate with influxdb.

[screenshot]
Influx is starting and serving its interface on :9999; here we can see one OK response for Chronograf but nothing else, and Kapacitor having its connection rejected.
[screenshot]
Here, calling /ping from Postman fails.

We are attempting to connect from Elixir and have tried several client libraries; all fail with variations of :closed, :socket_closed_remotely, and :econnrefused.

If anyone has been able to resolve this, please let us know at info at unozerocode dotcom and reference this issue.

Thanks.

We are having the same problem. I fixed it as follows:

# systemctl stop influxd
# /usr/bin/influxd -config /etc/influxdb/influxdb.conf &> /tmp/influxdb.log

I saw:

...
ts=2020-11-13T11:33:14.183571Z lvl=info msg="Reading file" log_id=0QSLvqlW000 engine=tsm1 service=cacheloader path=/var/lib/influxdb/wal/_internal/monitor/196/_00093.wal size=10726643
ts=2020-11-13T11:33:15.391503Z lvl=info msg="Reading file" log_id=0QSLvqlW000 engine=tsm1 service=cacheloader path=/var/lib/influxdb/wal/_internal/monitor/196/_00094.wal size=10730412
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x16a58ad, 0x16)
    /usr/local/go/src/runtime/panic.go:774 +0x72
runtime.sysMap(0xc02c000000, 0x4000000, 0x3613fb8)
    /usr/local/go/src/runtime/mem_linux.go:169 +0xc5
runtime.(*mheap).sysAlloc(0x35fafa0, 0x2000, 0x2000, 0x41690f)
    /usr/local/go/src/runtime/malloc.go:701 +0x1cd
runtime.(*mheap).grow(0x35fafa0, 0x1, 0xffffffff)
    /usr/local/go/src/runtime/mheap.go:1255 +0xa3
runtime.(*mheap).allocSpanLocked(0x35fafa0, 0x1, 0x3613fc8, 0xc00004b320)
    /usr/local/go/src/runtime/mheap.go:1170 +0x266
runtime.(*mheap).alloc_m(0x35fafa0, 0x1, 0x7f0ff8ff0021, 0x45d0fa)
    /usr/local/go/src/runtime/mheap.go:1022 +0xc2
runtime.(*mheap).alloc.func1()
    /usr/local/go/src/runtime/mheap.go:1093 +0x4c
runtime.systemstack(0xc000000d80)
    /usr/local/go/src/runtime/asm_amd64.s:370 +0x66
runtime.mstart()
    /usr/local/go/src/runtime/proc.go:1146
...

Then I increased the RAM and the problem went away.
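If adding RAM is not an option, it may be worth comparing how much WAL the cacheloader has to replay with the memory that is available; a sketch, assuming the default data location:

du -sh /var/lib/influxdb/wal    # WAL that must be loaded into the cache on startup
free -h                         # available RAM and swap

Lowering cache-snapshot-memory-size (and cache-max-memory-size) in the [data] section of /etc/influxdb/influxdb.conf makes snapshots happen sooner and keeps the WAL smaller, which should reduce the memory needed on the next restart; the exact value format depends on the 1.x version.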
