I'm running:
- Which Parity version?: 1.11.8-stable-c754a028e-20180725
- Which operating system?: Linux
- How installed?: binaries
- Are you fully synchronized?: yes
- Which network are you connected to?: ethereum
- Did you try to restart the node?: yes
Some time ago I migrated from geth to Parity (to use it with my pool), and now I'm seeing some strange behavior: after a few hours of operation Parity gets stuck and keeps returning past work, while the import of new blocks continues.
In the latest example, Parity got stuck on block 6080556 and eth_getWork kept returning work for that block, but in the console I saw blocks 6080646 and beyond being imported. Only restarting Parity solves the problem, and after several hours it occurs again.
PS: I tried both the WebSocket and HTTP RPC, with the same result.

@tomusdrw do you have any guess what may be causing the problem? To me it looks a bit like a problem with UsingQueue.
With 2.0.1-beta I have another strange issue: when I poll with getWork requests, Parity may skip some blocks. It looks like I get work for blocks 1, 2, 3, 4, 7, 10 but no work for blocks 5, 6, 8, 9, even though I can see those blocks being imported in the Parity console...
I don't know how I can use Parity in my ETH pool.
I tried ellisium - no problem there.
@debris This condition seems fishy: https://github.com/paritytech/parity-ethereum/blob/92776e4acf7b9c462ef8b01861facce5d4806f90/ethcore/src/miner/miner.rs#L686
We don't compare the hashes here, so it may evaluate to true when we have stored a stale block. Normally a new block should be prepared in fn chain_new_blocks when --force-sealing is on, but maybe there is some kind of race condition there that causes the stall in the prepare_pending_block function.
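To make that suspicion easier to follow, here is a rough illustration of the suspected failure mode, written as a minimal Python sketch rather than Parity's actual Rust code; the names (PendingCache, prepare_new, etc.) are hypothetical. The point is only this: if the getWork path asks "do we have a pending block?" instead of "do we have a pending block built on the current best block?", stale work can keep being served until something else forces a reseal.

```python
# Illustrative sketch only -- NOT Parity's code. It models the suspected bug:
# a cached pending block is treated as fresh merely because it exists, without
# comparing it against the current chain head.

class PendingCache:
    def __init__(self):
        self.pending = None  # (parent_hash, work) of the last prepared block

    def get_work_buggy(self, best_block_hash, prepare_new):
        # Buggy check: only asks whether *some* pending block is stored.
        if self.pending is not None:
            return self.pending[1]  # may be stale work from an old head
        self.pending = (best_block_hash, prepare_new(best_block_hash))
        return self.pending[1]

    def get_work_fixed(self, best_block_hash, prepare_new):
        # Fixed check: invalidate the cache whenever the chain head moves.
        if self.pending is None or self.pending[0] != best_block_hash:
            self.pending = (best_block_hash, prepare_new(best_block_hash))
        return self.pending[1]
```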
My Parity gets stuck many times a day, and I even had to write a script to restart it automatically.
The strange thing is that the stall only happens on the ETH mainnet (foundation). The same Parity configuration running the ETC mainnet (classic) has never gotten stuck.
I have tried multiple versions, from v1.10.8 to v1.11.7 to v2.0.3-beta; all of them have the same problem.
I described the same problem in issue https://github.com/paritytech/parity-ethereum/issues/7787
Here is my script: getwork-monitor-eth-parity.sh
And a part of its logs (a sketch of the comparison it performs follows the log excerpt):
[2018-09-01 00:01:01] getwork stuck: blockNumber: 6247431, getworkNumber: 6247425, diff: 6
[2018-09-01 00:02:02] getwork stuck: blockNumber: 6247438, getworkNumber: 6247425, diff: 13
[2018-09-01 01:41:01] getwork stuck: blockNumber: 6247845, getworkNumber: 6247838, diff: 7
[2018-09-01 01:42:01] getwork stuck: blockNumber: 6247848, getworkNumber: 6247838, diff: 10
[2018-09-01 01:52:01] getwork stuck: blockNumber: 6247878, getworkNumber: 6247873, diff: 5
[2018-09-01 02:08:01] getwork stuck: blockNumber: 6247949, getworkNumber: 6247940, diff: 9
[2018-09-01 02:22:01] getwork stuck: blockNumber: 6248014, getworkNumber: 6248008, diff: 6
[2018-09-01 02:32:01] getwork stuck: blockNumber: 6248053, getworkNumber: 6248044, diff: 9
[2018-09-01 05:28:01] getwork stuck: blockNumber: 6248769, getworkNumber: 6248763, diff: 6
[2018-09-01 06:17:01] getwork stuck: blockNumber: 6248962, getworkNumber: 6248956, diff: 6
[2018-09-01 06:18:01] getwork stuck: blockNumber: 6248963, getworkNumber: 6248956, diff: 7
[2018-09-01 06:43:01] getwork stuck: blockNumber: 6249062, getworkNumber: 6249057, diff: 5
[2018-09-01 06:44:01] getwork stuck: blockNumber: 6249066, getworkNumber: 6249057, diff: 9
[2018-09-01 06:48:01] getwork stuck: blockNumber: 6249082, getworkNumber: 6249077, diff: 5
[2018-09-01 07:40:01] getwork stuck: blockNumber: 6249311, getworkNumber: 6249303, diff: 8
[2018-09-01 08:29:01] getwork stuck: blockNumber: 6249520, getworkNumber: 6249514, diff: 6
[2018-09-01 08:30:01] getwork stuck: blockNumber: 6249525, getworkNumber: 6249514, diff: 11
[2018-09-01 08:34:01] getwork stuck: blockNumber: 6249550, getworkNumber: 6249540, diff: 10
[2018-09-01 12:44:01] getwork stuck: blockNumber: 6250570, getworkNumber: 6250565, diff: 5
[2018-09-01 12:53:01] getwork stuck: blockNumber: 6250611, getworkNumber: 6250602, diff: 9
[2018-09-01 12:54:01] getwork stuck: blockNumber: 6250616, getworkNumber: 6250602, diff: 14
[2018-09-01 13:26:02] getwork stuck: blockNumber: 6250747, getworkNumber: 6250741, diff: 6
[2018-09-01 13:56:01] getwork stuck: blockNumber: 6250856, getworkNumber: 6250849, diff: 7
[2018-09-01 15:25:01] getwork stuck: blockNumber: 6251208, getworkNumber: 6251202, diff: 6
[2018-09-01 15:26:01] getwork stuck: blockNumber: 6251213, getworkNumber: 6251202, diff: 11
[2018-09-01 15:47:01] getwork stuck: blockNumber: 6251292, getworkNumber: 6251283, diff: 9
[2018-09-01 15:48:01] getwork stuck: blockNumber: 6251297, getworkNumber: 6251283, diff: 14
[2018-09-01 17:14:01] getwork stuck: blockNumber: 6251638, getworkNumber: 6251631, diff: 7
[2018-09-01 18:49:02] getwork stuck: blockNumber: 6252012, getworkNumber: 6252005, diff: 7
[2018-09-01 18:50:02] getwork stuck: blockNumber: 6252019, getworkNumber: 6252005, diff: 14
[2018-09-01 18:56:01] getwork stuck: blockNumber: 6252046, getworkNumber: 6252039, diff: 7
[2018-09-01 19:44:01] getwork stuck: blockNumber: 6252271, getworkNumber: 6252266, diff: 5
[2018-09-01 20:09:01] getwork stuck: blockNumber: 6252370, getworkNumber: 6252365, diff: 5
[2018-09-01 20:10:01] getwork stuck: blockNumber: 6252374, getworkNumber: 6252365, diff: 9
[2018-09-01 21:02:01] getwork stuck: blockNumber: 6252595, getworkNumber: 6252590, diff: 5
[2018-09-01 22:15:01] getwork stuck: blockNumber: 6252874, getworkNumber: 6252866, diff: 8
[2018-09-01 22:16:01] getwork stuck: blockNumber: 6252879, getworkNumber: 6252866, diff: 13
[2018-09-01 23:17:01] getwork stuck: blockNumber: 6253125, getworkNumber: 6253119, diff: 6
[2018-09-01 23:18:02] getwork stuck: blockNumber: 6253128, getworkNumber: 6253119, diff: 9
[2018-09-02 00:59:01] getwork stuck: blockNumber: 6253533, getworkNumber: 6253528, diff: 5
[2018-09-02 01:00:01] getwork stuck: blockNumber: 6253534, getworkNumber: 6253528, diff: 6
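For anyone who wants to run a similar check without the original shell script, here is a minimal sketch of the comparison the log lines above imply: poll eth_blockNumber and eth_getWork, and report a stall when the two drift apart. It assumes Parity's four-element eth_getWork response whose last element is the block number in hex (which the getworkNumber field suggests the script relies on), a JSON-RPC endpoint at http://127.0.0.1:8545, and a threshold of 5 blocks guessed from the logged values.

```python
#!/usr/bin/env python3
# Minimal sketch of a getWork stall check -- not the original shell script.
import json
import time
import urllib.request

RPC = "http://127.0.0.1:8545"  # assumed Parity JSON-RPC endpoint


def rpc(method, params=None):
    payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": method, "params": params or []}).encode()
    req = urllib.request.Request(RPC, data=payload,
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req, timeout=10).read())


while True:
    block = int(rpc("eth_blockNumber")["result"], 16)
    work = rpc("eth_getWork")
    if "result" in work and len(work["result"]) >= 4:
        work_block = int(work["result"][3], 16)  # Parity appends the block number
        diff = block - work_block
        if diff >= 5:  # threshold guessed from the log above; tune as needed
            print(f"getwork stuck: blockNumber: {block}, "
                  f"getworkNumber: {work_block}, diff: {diff}")
    time.sleep(60)  # the timestamps above suggest roughly one check per minute
```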
In addition, here is a method that reproduces the problem with high probability:
Synchronize a new Parity node and keep calling eth_getWork throughout the synchronization process (once per second).
Although the call returns a "Syncing" error, do not stop calling.
When the synchronization is complete, Parity's eth_getWork return value will be stuck at a past block number.
All 8 nodes I deployed have reproduced the problem in this way.
Also, if you keep calling eth_getWork once per second on the ETH mainnet (and only on the ETH mainnet), you can expect several chances to reproduce the problem every day, as I do.
I originally called eth_getWork every 0.5 seconds; I have since changed to every 3 seconds, but Parity still gets stuck several times a day.
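As a concrete form of the reproduction recipe above, the loop below simply polls eth_getWork once per second and ignores the "Syncing" error while the node catches up. The endpoint and interval are assumptions on my part, not part of the original report; this is a sketch, not the exact procedure used on those 8 nodes.

```python
#!/usr/bin/env python3
# Sketch of the reproduction recipe: keep polling eth_getWork while syncing.
import json
import time
import urllib.request

RPC = "http://127.0.0.1:8545"  # assumed Parity JSON-RPC endpoint


def get_work():
    payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": "eth_getWork", "params": []}).encode()
    req = urllib.request.Request(RPC, data=payload,
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req, timeout=10).read())


while True:
    try:
        reply = get_work()
        # While syncing, Parity answers with an error (e.g. "Syncing");
        # the report says to keep polling anyway and check again after sync.
        print(reply.get("result") or reply.get("error"))
    except OSError as exc:  # node not up yet, connection refused, etc.
        print("rpc unavailable:", exc)
    time.sleep(1)  # one call per second, as in the report
```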
Maintaining helper scripts like that is unacceptable to me;
I need a stable node, so for now I'm going back to geth.
I set force-sealing = true and the issue seems gone. @Rom1kz
My current config.toml:
# Parity Config Generator
# https://paritytech.github.io/parity-config-generator/
#
# This config should be placed in following path:
# ~/.local/share/io.parity.ethereum/config.toml
[parity]
# Ethereum Main Network
chain = "foundation"
# Parity continuously syncs the chain
mode = "active"
# Disables auto downloading of new releases. Not recommended.
no_download = true
[ipc]
# You won't be able to use IPC to interact with Parity.
disable = true
[dapps]
# You won't be able to access any web Dapps.
disable = true
[rpc]
# JSON-RPC will be listening for connections on IP 0.0.0.0.
interface = "0.0.0.0"
# Allows Cross-Origin Requests from domain '*'.
cors = ["*"]
[mining]
# Account address to receive reward when block is mined.
author = "xxxxxxxxx"
# Blocks that you mine will have '/BTC.COM/' in extra data field.
extra_data = "xxxxxxxxx"
# Prepare a block to seal even when there are no miners connected.
force_sealing = true
# Force the node to author new blocks when a new uncle block is imported.
reseal_on_uncle = true
# New pending block will be created for all transactions (both local and external).
reseal_on_txs = "all"
# New pending block will be created only once per 4000 milliseconds.
reseal_min_period = 4000
# Parity will keep/relay at most 8192 transactions in queue.
tx_queue_size = 8192
tx_queue_per_sender = 128
[network]
# Parity will sync by downloading the latest state first. The node will be operational in a couple of minutes.
warp = true
# Specify a path to a file with peers' enodes to be always connected to.
reserved_peers = "/root/.local/share/io.parity.ethereum/peer.list"
# Parity will try to maintain connection to at least 50 peers.
min_peers = 50
# Parity will maintain at most 200 peers.
max_peers = 200
[misc]
logging = "own_tx=trace,sync=info,chain=info,network=info,miner=trace"
log_file = "/root/.local/share/io.parity.ethereum/parity.log"
[footprint]
# Prune old state data. Maintains journal overlay - fast but extra 50MB of memory used.
pruning = "fast"
# If defined, Parity will never use more than 1024MB for all caches. (Overrides other cache settings.)
cache_size = 1024
[snapshots]
# Disables automatic periodic snapshots.
disable_periodic = true
And a similar issue in geth (not strictly verified or reproduced) :sweat_smile:
https://github.com/ethereum/go-ethereum/issues/16211
This is a very serious issue for pool owners. Please fix it.
From our experience, this problem occurs more often when the server is under high load (load average above 1 per core, CPU usage >70%, regardless of free memory).
The reproduction method suggested by @YihaoPeng works very well: the more often you call getWork, the sooner the problem appears.
Have you tried 1.11.11 or 2.0.4? It seems like the fix (#9484) was backported to these versions.
@ordian #9484 doesn't look like a fix for this issue. The problem is not that "Syncing..." is returned, but that old work is returned. I can't understand how the code in #9484 affects this issue.
But I will test and provide feedback on 2.0.4.
I can confirm this. In their current state, newer Parity releases are unusable for mining pools, as the stale getWork result means the miners work for nothing.
Any update on this?
Any fix for this?
I'm slightly confused. This issue bears the P0-dropeverything label, yet there's release after release without anything done about it.
Although this is annoying, when several high-priority issues compete there has to be some prioritization between them as well. Please be patient, or contribute if you can.
This is a very serious issue for pool owners. Please fix it.
Don't expect an update before the fork. They sold out to mega pools. That's just the way shtcoins roll. Buy bitcoin.
(I'd say that the reason this is still open is that it's proving hard to diagnose the problem [we'd certainly appreciate more eyes], if in fact it is still an issue. It's been months since this issue was reported, and the code where the bug might live has had several changes and fixes made to it; more confirmation of the issue would be very much appreciated, perhaps with comparisons to other clients or to versions of Parity that don't have the problem.
they sold out to mega pools.
Pools are impacted most by this issue, so I have genuinely no idea what that is supposed to mean?)
We've switched to geth. Problem solved.
@cheme this issue should now be closed, right?
Yes, it should be in the next backports (beta 2.2.1 and stable 2.1.6); it addresses the case where force_sealing = true is needed.
@oliverw if the force_sealing = true fix mentioned above didn't work for you, and this latest release still doesn't work for you, please reopen.
We still run into this issue often.
It's really easy to reproduce with periodic getWork requests.
@korjavin what version are you running? The issue should be addressed with #9876