Parity-ethereum: RAM and open files very high

Created on 13 Sep 2017 · 10 comments · Source: openethereum/parity-ethereum

I'm running:

  • Parity version: 1.7.0
  • Operating system: Xubuntu 17.04
  • Installed via the *.deb from the parity.io website

I'm running with command:
parity --auto-update all --base-path /mnt/ssd-drive --no-ui --no-ws --no-jsonrpc --no-dapps --db-compaction ssd --geth --log-file /var/log/parity.log

My RAM usage increases over a day to 12+ GB, and number of open files clocked in at 792204 (after increasing the default open file limit, of course).

I've tried various settings, such as --cache-size 1024, to limit the RAM use; nothing changed.

I'm doing lots of contract reads, if that's related.


I'm having some other problems, so I'll include them just in case it helps identify a cluster of issues:

  • peers trend to 0 over time and stay there. The only way I can stay synced is with a hardcoded geth peer on my local network (NTP shows me at 50-100ms offset)
  • UPnP isn't working. My geth client already reserved 30303, but I expect it to still work and just broadcast a different external port.

I'd really rather use parity because that initial sync is so fast. This and the peers->0 issue are holding me back.

F3-annoyance 💩 M4-core ⛓ P5-sometimesoon 🌲 Z0-unconfirmed 🤔

All 10 comments

Interesting, lsof -p $MY_PARITY_PID shows that the vast majority of the open handles are: $BASE/jsonrpc.ipc.

I'm making a lot of requests over IPC, and each one does open a connection, but AFAICT the code is closing all of them: https://github.com/pipermerriam/web3.py/blob/master/web3/providers/ipc.py#L34

Maybe parity is not letting go of the open file handle when the client closes the stream?
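
For reference, the per-request pattern boils down to something like this (a minimal sketch of the open-send-close flow, not web3.py's actual code; names are illustrative):

import json
import socket

def ipc_request(ipc_path, method, params):
    # Open a fresh Unix socket for a single JSON-RPC call, then close it.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.connect(ipc_path)
        request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        sock.sendall(json.dumps(request).encode("utf-8"))
        raw = sock.recv(4096)  # naive: assumes the whole response arrives in one read
        return json.loads(raw.decode("utf-8"))
    finally:
        sock.close()  # client side is closed; parity should release its end too

So every call closes its socket on the client side; if handles still pile up, the server must be holding on to its end.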

I'm doing lots of contract reads, if that's related.

What exactly are you doing? 12 GB of memory is either a memory leak (unlikely) or related to whatever you are doing there :)

Same could apply to the number of open files, but please try to share more details.

peers trend to 0 over time and stay there. The only way I can stay synced is with a hardcoded geth peer on my local network (NTP shows me at 50-100ms offset)

How long have you been running this node? Can you try again after removing ~/.local/share/io.parity.ethereum/chains/ethereum/network/nodes.json?

UPnP isn't working. My geth client already reserved 30303, but I expect it to still work and just broadcast a different external port.

Have you seen the convenience options? Try --ports-shift 1:

Convenience Options:
  -c --config CONFIG           Specify a filename containing a configuration file.
                               (default: $BASE/config.toml)
  --ports-shift SHIFT          Add SHIFT to all port numbers Parity is listening on.
                               Includes network port and all servers (RPC, WebSockets, UI, IPFS, SecretStore).
                               (default: 0)

What exactly are you doing?

I'm doing a bunch of mapping lookups back to back. So maybe 10-20k reads, pause a bit, then run it again.
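
For concreteness, the reader looks roughly like this (a hypothetical sketch: the contract address, ABI, and the `balances` mapping name are placeholders, and the call syntax shown is web3.py v4-style):

from web3 import Web3, IPCProvider

w3 = Web3(IPCProvider("/mnt/ssd-drive/jsonrpc.ipc"))

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
CONTRACT_ABI = [{  # minimal ABI for a public mapping(uint256 => uint256) getter
    "constant": True, "name": "balances", "payable": False,
    "inputs": [{"name": "", "type": "uint256"}],
    "outputs": [{"name": "", "type": "uint256"}],
    "type": "function",
}]
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

for key in range(20000):  # ~10-20k reads, back to back
    contract.functions.balances(key).call()  # one eth_call over IPC per read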

It sure looks like it's keeping an open file handle for every request to the IPC. I sampled this while my reader wasn't running:

$ lsof -p $(pgrep parity) | awk '{$3=""; $4=""; $8=""; print}' | sort | uniq -c | sort -n | tail -n 4
      3 parity 21011   CHR 136,3 0t0  /dev/pts/3
     12 parity 21011   FIFO 0,10 0t0  pipe
     46 parity 21011   sock 0,8 0t0  protocol: TCP
  86370 parity 21011   unix 0x0000000000000000 0t0  $BASE/jsonrpc.ipc type=STREAM

Then ran the reader again until completion. Then counted the open files again:

$ lsof -p $(pgrep parity) | awk '{$3=""; $4=""; $8=""; print}' | sort | uniq -c | sort -n | tail -n 4
      3 parity 21011   CHR 136,3 0t0  /dev/pts/3
     12 parity 21011   FIFO 0,10 0t0  pipe
     43 parity 21011   sock 0,8 0t0  protocol: TCP
 104069 parity 21011   unix 0x0000000000000000 0t0  $BASE/jsonrpc.ipc type=STREAM

~18k new file handles opened, permanently.
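
A quick way to watch the descriptor count between lsof runs (a Linux-only sketch; it assumes you can read parity's /proc entry, i.e. you run it as the same user or as root):

import os
import subprocess
import time

# Newest process whose name matches "parity"; adjust if you run several.
pid = subprocess.check_output(["pgrep", "-n", "parity"]).decode().strip()

while True:
    fd_count = len(os.listdir(f"/proc/{pid}/fd"))  # total open descriptors
    print(time.strftime("%H:%M:%S"), fd_count)
    time.sleep(60)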


peers trend to 0 over time and stay there

How long have you been running this node?

Maybe a week. I'm pretty sure this is just a side effect of hitting the open file limit, because I have plenty of peers for the first few hours. Let's punt on this until the file limit issue is resolved.


Have you seen the convenience options? Try --ports-shift 1

Excellent, thanks. I tried it, but my node ID still shows my local network IP, and my router doesn't show a UPnP entry (it does for geth):

Public node URL: enode://a<snip>@<snip>:30304

The UPnP issue seems unrelated, so I'm happy to open a separate issue.

Thanks for sharing the details. I'll try to reproduce the IPC/open-files issue without web3.py to confirm this is a parity problem soon.
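
Something along these lines should reproduce it without web3.py: open and cleanly close the IPC socket in a tight loop, then compare parity's handle count before and after (an untested sketch):

import json
import socket

IPC_PATH = "/mnt/ssd-drive/jsonrpc.ipc"
REQUEST = json.dumps({"jsonrpc": "2.0", "id": 1,
                      "method": "eth_blockNumber", "params": []}).encode("utf-8")

for _ in range(10000):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(IPC_PATH)
    sock.sendall(REQUEST)
    sock.recv(4096)  # read (the start of) the response
    sock.close()     # the client closes every connection cleanly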

Running various parity versions in Kubernetes. Memory usage grows with the version number: 1.6.8 uses 2.5-3.5 GB, 1.7.0 uses 6-8.2 GB, 1.7.2 uses 8-11.7 GB. These nodes are barely loaded (at most 3 connections to the JSON-RPC). What's happening?

@5chdn any luck reproducing?

hey @carver sorry for not getting back to you yet.

Could you look into #6575? Could you try reusing the IPC connection instead of creating new ones?

If you like:
https://github.com/iFA88/web3.py/blob/master/web3/providers/ipc.py

It's slightly reworked to keep the IPC socket open; with that you can do 1000+ requests/s.
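
For reference, the essence of that change is opening the socket once and reusing it for every request (a rough sketch of the approach, not the actual code from that fork):

import itertools
import json
import socket

class PersistentIPC:
    def __init__(self, ipc_path):
        self._ids = itertools.count(1)
        self._sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self._sock.connect(ipc_path)  # one long-lived connection for all calls

    def request(self, method, params):
        payload = {"jsonrpc": "2.0", "id": next(self._ids),
                   "method": method, "params": params}
        self._sock.sendall(json.dumps(payload).encode("utf-8"))
        raw = self._sock.recv(4096)  # naive: assumes one recv per response
        return json.loads(raw.decode("utf-8"))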

thanks for sharing!

FYI, persistent IPC connections were released in web3.py v3.16.3 and v4.0.0-beta.1
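
With those versions the IPCProvider keeps a single socket open, so a read loop like the one above no longer accumulates server-side handles (a hedged example against the v4 API):

from web3 import Web3, IPCProvider

w3 = Web3(IPCProvider("/mnt/ssd-drive/jsonrpc.ipc"))

for _ in range(1000):
    w3.eth.blockNumber  # every call reuses the same persistent IPC connection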
