Parity-ethereum: Clarify jsonrpc threads parameters

Created on 31 Jul 2018  ·  7 Comments  ·  Source: openethereum/parity-ethereum

I'm running:

  • Which Parity version?: 1.11.8
  • Which operating system?: Linux
  • How installed?: -
  • Are you fully synchronized?: -
  • Which network are you connected to?: -
  • Did you try to restart the node?: -

In the documentation at https://wiki.parity.io/Configuring-Parity-Ethereum.html, please clarify what the
--jsonrpc-threads and --jsonrpc-server-threads parameters do:

  1. What is the difference between them? They both seem to be related to RPC performance, but it's not clear what each one does.
  2. Is there a preferred or limiting value to set, such as the number of CPU cores?
  3. What are their corresponding names in config.toml?
M2-config 📂 Z1-question 🙋‍♀️

All 7 comments

--jsonrpc-server-threads configures a multi-threaded HTTP server (it can handle multiple incoming requests at the same time).
--jsonrpc-threads sets the size of a cpupool that all RPC requests are dispatched to (shared between all transports), so heavy calls are dispatched in parallel onto different threads.

WebSockets and IPC are single-threaded transports that dispatch deserialized requests onto multiple threads so they are processed in parallel: deserialization happens sequentially, while processing is done in parallel.

HTTP can use multiple threads, so requests can be deserialized in parallel and then processed in parallel as well.

Fine-tuning performance may involve changing both parameters, depending on which transports you use and the behaviour you observe (e.g. timeouts).

For instance, you may run with --jsonrpc-server-threads 8 --jsonrpc-threads 0, which can handle 8 parallel requests over HTTP (deserialization + processing), but while those 8 requests are being processed the HTTP server is blocked (it won't accept more connections).
Running with --jsonrpc-server-threads 3 --jsonrpc-threads 4, on the other hand, lets you process 4 requests in parallel while deserializing 3 incoming requests in parallel. While the processing threads are occupied you still accept incoming HTTP requests, but the responses may take long.
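
For concreteness, here are those two configurations as command lines. The flag values come straight from the explanation above; the bare parity invocation itself is just a sketch (your binary path and other flags will differ):

```sh
# 8 HTTP server threads, no separate processing pool: up to 8 requests are
# deserialized and processed in parallel, but while all 8 threads are busy
# the server accepts no new connections.
parity --jsonrpc-server-threads 8 --jsonrpc-threads 0

# 3 server threads deserialize incoming HTTP requests while a shared pool of
# 4 threads does the processing; new connections are still accepted while the
# pool is busy, at the cost of slower responses.
parity --jsonrpc-server-threads 3 --jsonrpc-threads 4
```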

Credit to @tomusdrw for those explanations.

@phahulin PRs are always welcome if you feel you can come up with a concise, better explanation for those flags: https://github.com/paritytech/parity/blob/master/parity/cli/mod.rs

@Tbaut thank you. I'd like to clarify a few more things:

  1. Since jsonrpc-threads is a cpupool, it doesn't make sense to set this value higher than the number of CPU cores, right?
  2. What are their corresponding names in config.toml?
  1. Right.
  2. They are called server_threads and processing_threads (see here: https://github.com/paritytech/parity-ethereum/blob/1b1941a896c8485a05fa3e9ffe8b251b498eb541/parity/cli/mod.rs#L486).
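
In config.toml that would look roughly like the sketch below; the option names are from the answer above, while the [rpc] section name is an assumption based on where the other HTTP JSON-RPC options live:

```toml
# Hypothetical config.toml snippet; section name assumed to be [rpc],
# which holds the HTTP JSON-RPC options.
[rpc]
server_threads = 3      # --jsonrpc-server-threads
processing_threads = 4  # --jsonrpc-threads
```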

Running with --jsonrpc-server-threads 3 --jsonrpc-threads 4, on the other hand, lets you process 4 requests in parallel while deserializing 3 incoming requests in parallel. While the processing threads are occupied you still accept incoming HTTP requests, but the responses may take long.

Does this mean there are 7 threads in total: 3 server threads handling HTTP requests and deserializing the JSON, then passing them on to the 4 processing threads, which do all of the processing and pass the response back to the HTTP server thread? I.e., does the server thread not do any processing in this case? And if you have more server threads than processing threads, do requests queue up for processing?

@dtran320 Correct, the server threads just handle incoming connections and do (de)serialization (no processing); they dispatch the request to the cpupool, await the future's completion, and once the response is ready they are responsible for replying.
Processing requests may obviously queue up in the pool.

Please note, though, that this behaviour has changed on latest master after #9657: currently processing_threads is not used at all. I hope to look into either removing that parameter or restoring the cpupool after conducting some performance testing.

Hey guys. I am trying to send a lot of requests to my private Parity network. Each node has 2 cores (docker containers), and I am trying to achieve thousands of JSON-RPC tx/s spread over the network. I am using a workload generator (tung).

I observed that my network performs well up to 300 tx/s; after that, Parity starts to ignore RPC requests and throughput decreases. I tried setting server_threads = 2 (since I only have 2 cores per node) in the config.toml of each node, and that improved performance a bit. Could you please confirm that by increasing the cores, and thus the server threads, I will be able to handle more requests? Are the node cores my bottleneck?
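
For reference, the per-node setting described above would look like this in config.toml (same assumption as before about the [rpc] section name):

```toml
# Sketch of the per-node setting described in this comment; [rpc] assumed.
[rpc]
server_threads = 2  # one per CPU core on a 2-core node
```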

What is the default value for server_threads?

