Expected behavior: a websocket subscription to newBlockHeaders should return both the forked blocks and the canonical blocks.
Actual behavior: the subscription does not return the canonical block for roughly 1 in every ~50 blocks.
I have a local Parity node and am using web3.js to subscribe to newBlockHeaders; below is the code I use to receive new blocks.
```javascript
const Web3 = require('web3');
const fs = require('fs');

const web3 = new Web3();
web3.setProvider('ws://localhost:8546');

const subscription = web3.eth.subscribe('newBlockHeaders', function (error, result) {
    if (!error) {
        console.log(result.hash);
        return;
    }
    console.error(error);
}).on("data", function (blockHeader) {
    console.log(blockHeader.number);
    const strData = "data, " + blockHeader.hash + ', ' + blockHeader.number + ', ' + blockHeader.parentHash + "\n";
    fs.appendFile('test_pubsub.txt', strData, function (err) {
        if (err) throw err;
    });
}).on("changed", function (blockHeader) {
    console.log(blockHeader.number);
    const strData = "change, " + blockHeader.hash + ', ' + blockHeader.number + ', ' + blockHeader.parentHash + "\n";
    fs.appendFile('test_pubsub.txt', strData, function (err) {
        if (err) throw err;
    });
}).on("error", console.error);
```
The subscription service regularly misses blocks. In the example below we receive blocks 6630351 and 6630353, but not block 6630352 in between:
```
0x9f3204da53f748aa52db99811bef51378ca303629211a5420f9b40aa1782cf2a, 6630351, 0x95e806a60dab8d9611668a485d1d98fc4e66525657a97ff75788e4252ddf6c71
0x532180a19001eff004853b625fa9e537db7a1174291a0229332dd5d94f6cbd97, 6630353, 0x784fb94564c6803334fa51f22fdc426c2186b73fe11a1dd14a50ca2e159b4c95
```
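One way to confirm the gap, assuming headers arrive in order, is to check that each header's parentHash points at the previously received header's hash. A hypothetical helper (not part of web3.js) could look like this:

```javascript
// Verify that a sequence of received headers forms an unbroken
// parent-hash chain. A gap (or reorg) shows up as a parentHash
// that does not match the previously received header's hash.
function isContiguous(headers) {
  for (let i = 1; i < headers.length; i++) {
    if (headers[i].parentHash !== headers[i - 1].hash) {
      return false;
    }
  }
  return true;
}

// With the two headers from the sample above (hashes abbreviated),
// 6630353's parentHash does not match 6630351's hash:
isContiguous([
  { hash: '0x9f32…cf2a', parentHash: '0x95e8…6c71' },
  { hash: '0x5321…bd97', parentHash: '0x784f…4c95' },
]); // → false: a block is missing in between
```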
This has been documented before: https://github.com/ethereum/web3.js/issues/1375
Is there a reliable method to get a stream of new block headers?
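In the meantime, a common workaround is not to rely solely on the push subscription but to poll the head number and backfill every block up to it. A minimal sketch, with the node calls injected as parameters so the loop is testable; `getBlock` and `getLatestNumber` are assumed stand-ins for calls like `web3.eth.getBlock` and `web3.eth.getBlockNumber`:

```javascript
// Poll-and-backfill loop: fetch every block number from `from` up to the
// current head, so canonical blocks skipped by the subscription are not
// lost. Returns the next number to resume from on the following poll.
async function backfill(getBlock, getLatestNumber, from, onBlock) {
  let next = from;
  const latest = await getLatestNumber();
  while (next <= latest) {
    onBlock(await getBlock(next)); // delivered exactly once, in order
    next++;
  }
  return next;
}
```

Running this on a timer (or triggered by each subscription event) trades a little latency for completeness.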
Probably related to the other issue you reported (#9858)
This is an important issue; I believe many companies rely on this. It would also be a major differentiator if it worked in Parity, since Geth is unreliable here as well.
@joshua-mir why is it expected behavior?
> Probably related to the other issue you reported (#9858)
@joshua-mir : Yes both are related, we were trying to get a reliable stream of canonical blocks.
@jpzk sorry, you're right about this issue being a bug. The other one is "expected behavior" as newBlockFilter returns new blocks, not reorgs of known blocks. (it won't return past blocks that are now canonical, as you may have noticed.)
@joshua-mir to clarify, the other issue is also not "expected behavior":
for roughly 1 out of every 50 blocks, newBlockFilter never returns the canonical block in the eth_getFilterChanges history; it only returns the forked block.
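The miss rate is easy to quantify from the received block numbers: any jump larger than one between consecutive headers marks blocks that never arrived. A hypothetical helper, assuming numbers arrive in ascending order:

```javascript
// List every block number that never appeared between consecutive
// received headers (assumes the input is sorted ascending).
function findMissedBlocks(receivedNumbers) {
  const missed = [];
  for (let i = 1; i < receivedNumbers.length; i++) {
    for (let n = receivedNumbers[i - 1] + 1; n < receivedNumbers[i]; n++) {
      missed.push(n);
    }
  }
  return missed;
}

findMissedBlocks([6630351, 6630353]); // → [6630352]
```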
Isn't that more important than "Sometime soon"?
This is actually a really important issue for a lot of people. In fact, there are open source projects such as:
that have sprung up solely to mitigate this type of behavior from the node clients.
The labels are purely cosmetic; milestones are more important. In reality, importance is mostly decided by activity, because more active issues are constantly pinging our inboxes 😅 It is mostly up to devs familiar with the affected parts of the codebase to decide their own priorities and pick up issues themselves.
Any updates on this one?
Not yet assigned. @joshua-mir, do you by chance know who worked on that component?
@tomusdrw is likely familiar with it and @sorpaas worked on https://github.com/paritytech/parity-ethereum/pull/8524 which looks like it addresses a related issue
@ankitchiplunkar Please provide Parity logs from when it happens. The subscription doesn't return blocks when the node goes into "syncing" mode (which usually indicates a peering / performance issue).
hi @tomusdrw
Here are two sample Parity logs from when blocks go missing on the websocket.
Output from websocket subscription
```
data, 0xf80a48809f824a84fc54d0983cc25684be12ac7b0e693023cd43d5d461a978f5, 6646729, 0xe7af8361337dab403884a8ad5fa2ce5e17b8fcc31ee14670a731f6f4d89e4bb8
data, 0x2c0d8be25767516dd7f0f1b18bdbc333728b722d46423de1d153ae19da9842c2, 6646731, 0xb2fa96795a61d1e7823dfdcb5bafeb479d62af4237c9faa81012012b5518c24e
```
Corresponding Parity logs
```
2018-11-05 07:40:10 UTC Imported #6646727 0x911c…a257 (165 txs, 7.99 Mgas, 512 ms, 23.56 KiB)
2018-11-05 07:40:15 UTC Imported #6646728 0xe7af…4bb8 (371 txs, 7.98 Mgas, 365 ms, 41.57 KiB)
2018-11-05 07:40:22 UTC Imported #6646729 0xf80a…78f5 (69 txs, 2.33 Mgas, 144 ms, 10.19 KiB)
2018-11-05 07:40:33 UTC Imported #6646730 0x6496…1004 (129 txs, 8.00 Mgas, 256 ms, 20.74 KiB)
2018-11-05 07:40:38 UTC Imported #6646731 0x2c0d…42c2 (164 txs, 7.98 Mgas, 496 ms, 22.77 KiB)
2018-11-05 07:40:45 UTC 498/512 peers 740 MiB chain 858 MiB db 0 bytes queue 489 KiB sync RPC: 7 conn, 3 req/s, 52 µs
2018-11-05 07:41:14 UTC Imported #6646732 0x2ce2…b9fd (62 txs, 1.53 Mgas, 142 ms, 8.29 KiB)
2018-11-05 07:41:20 UTC 499/512 peers 741 MiB chain 858 MiB db 0 bytes queue 489 KiB sync RPC: 7 conn, 3 req/s, 56 µs
2018-11-05 07:41:43 UTC Imported #6646733 0x5dcf…1fd6 (88 txs, 3.09 Mgas, 178 ms, 14.16 KiB) + another 1 block(s) containing 87 tx(s)
2018-11-05 07:41:55 UTC 500/512 peers 742 MiB chain 858 MiB db 0 bytes queue 489 KiB sync RPC: 7 conn, 3 req/s, 59 µs
2018-11-05 07:41:59 UTC Imported #6646734 0x4e84…2a8e (103 txs, 7.83 Mgas, 513 ms, 14.02 KiB)
```
Output from websocket subscription
```
data, 0xd7e5f9438eb5dcfc9a2f182099dc021e7239ef8c47051d8aed9dd4e539be4cc2, 6646737, 0x776ce15a00b011f8012ad4c0bc59036661b1e2d77234e79c44ce370e6268148c
data, 0xa111e895d3781450ef998fdae41dbcef3d0e76a37f77a7e76bcdca1b3bd611a2, 6646739, 0x6ad8bceedcf5007daf268b35d3d3e6388f792dda5df310fee89d241bb4c84534
```
Corresponding Parity logs
```
2018-11-05 07:42:16 UTC Imported #6646736 0x776c…148c (167 txs, 7.99 Mgas, 650 ms, 25.06 KiB)
2018-11-05 07:42:19 UTC Imported #6646737 0xd7e5…4cc2 (142 txs, 7.99 Mgas, 408 ms, 23.11 KiB)
2018-11-05 07:42:21 UTC Imported #6646737 0x6765…e9e8 (272 txs, 6.97 Mgas, 526 ms, 35.22 KiB)
2018-11-05 07:42:30 UTC 502/512 peers 744 MiB chain 859 MiB db 0 bytes queue 489 KiB sync RPC: 7 conn, 3 req/s, 56 µs
2018-11-05 07:43:05 UTC 502/512 peers 745 MiB chain 859 MiB db 0 bytes queue 489 KiB sync RPC: 7 conn, 3 req/s, 58 µs
2018-11-05 07:43:08 UTC Imported #6646738 0x79f0…072f (101 txs, 8.00 Mgas, 294 ms, 17.41 KiB)
2018-11-05 07:43:11 UTC Imported #6646739 0xa111…11a2 (140 txs, 7.99 Mgas, 419 ms, 24.81 KiB)
```
@ankitchiplunkar Thank you! It seems that we don't trigger a notification if we already have another block queued up for verification. @mattrutherford is investigating the issue and will provide a fix when ready.
Correct - I'm just testing a fix for #9858. Hopefully that will solve some problems; this one will follow.
Thanks for fixing. We appreciate it a lot! 👍
Thanks. Parity <3