Block sync keeps failing
Error: invalid merkle root (remote: 57cc91ee8b91b956592a27b14386abc2aba723b5f4f9e5d3181ace6b5d3cd433 local: 1f9ee59bfa683a25c7a15b626995a3ad7c58c571b40df96eea31e5c5eed9732d)
ERROR[11-11|15:08:37.772]
Chain config: {ChainID: 1 Homestead: 1150000 DAO: 1920000 DAOSupport: true EIP150: 2463000 EIP155: 2675000 EIP158: 2675000 Byzantium: 4370000 Constantinople: 7280000 Petersburg: 7280000 Istanbul: 9069000, Muir Glacier: 9200000, Engine: ethash}
Number: 11234873
Hash: 0xd307c642087f1e143e0c7c766e47f77af13e496c8267a55b644bfc86b6f184c7
0: cumulative: 56209 gas: 56209 contract: 0x0000000000000000000000000000000000000000 status: 1 tx: 0x413e13facc3c6e287746616a34deec0ba55356beb8f3f28f8cc4e3522c3a76c5 logs: [0xc0244c00b0] bloom: 000000000000000000000000000400000000020000000000000000000000000000000000000000000000000200000100000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000001000000000000001000000100000000000020000000800000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
state:

Geth
Version: 1.9.23-stable
Git Commit: 8c2f271528f9cccf541c6ea1c022e98407f26872
Git Commit Date: 20201015
Architecture: amd64
Protocol Versions: [65 64 63]
Go Version: go1.15.3
Operating System: linux
GOPATH=
GOROOT=go
Please provide the full message (preferably as text instead of an image), and double-check the version number
If you’ve just upgraded to the latest release, try running debug.setHead(11234872) or debug.setHead('0xAB6E38')
See https://twitter.com/nikzh/status/1326465592841351168
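A quick sanity check that the two setHead targets above refer to the same block; this needs no geth, just a plain shell:

```shell
# 0xAB6E38 is simply block 11234872 written in hex: the parent of the
# failing block 11234873, so debug.setHead accepts either form.
printf '%d\n' 0xAB6E38   # prints 11234872
```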
Tried this; it keeps stopping at block 11234899. Any other ideas?
@peterlayke Make sure you are on the latest geth version, we had this happen too on the same block on one of our older nodes even though no releases have been marked as mandatory since then.
Make sure you are on the latest geth version
^ this
Thanks all, the problem was that after using setHead I had to restart the client. The first time I had just run the command and let it keep going.
Edit: nope stuck in a loop now, node just rewinds blocks constantly now
Thanks, the problem was that after setting with setHead I had to restart the client. I had just used the command and let it run the first time.
After restarting, I can’t get the latest block number
eth.syncing
{
currentBlock: 11236515,
highestBlock: 11236588,
knownStates: 410761626,
pulledStates: 410761233,
startingBlock: 11234872
}
eth.blockNumber
0
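The eth.blockNumber of 0 above is expected during fast sync: the head only advances once the pivot block's state has been fully downloaded. The gap between knownStates and pulledStates gives a rough idea of what remains; sketched with the numbers above:

```shell
# Remaining state entries at the moment of the eth.syncing snapshot above.
# Only a lower bound: knownStates keeps growing while the sync runs.
echo $((410761626 - 410761233))   # prints 393
```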
Upgraded to 1.9.23-stable, rolled back blocks. Now I get to the state below and my node just resets and rewinds blocks again.
This fork has caused a lot of damage. How do I fix this without throwing the whole chain away and re-syncing for days?
{
currentBlock: 11234913,
highestBlock: 11238641,
knownStates: 411108293,
pulledStates: 411108293,
startingBlock: 11238340
}
https://blog.infura.io/infura-mainnet-outage-post-mortem-2020-11-11/
When can we get this patch publicly? My production workloads are down until there is a resolution.

My two nodes running geth v1.9.16 hit this issue today. I applied the debug.setHead(xxx) trick and restarted the geth node, but it did not work at all, so I had to upgrade to the latest v1.9.23, apply the magic "debug.setHead(xxx)" trick again, and restart the geth nodes.
Now one node has finished syncing and works again; the other shows the following weird behavior:
eth.syncing
{
currentBlock: 11238944,
highestBlock: 11239046,
knownStates: 364999353,
pulledStates: 364881536,
startingBlock: 11236671
}
> eth.blockNumber
0
The syncing messages are:
INFO [11-11|22:39:50.893] Imported new state entries count=384 elapsed="3.079µs" processed=364881152 pending=117354 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [11-11|22:39:51.349] Initializing fast sync bloom items=1319333915 errorrate=0.009 elapsed=1h17m15.792s
INFO [11-11|22:39:59.349] Initializing fast sync bloom items=1321994903 errorrate=0.009 elapsed=1h17m23.792s
INFO [11-11|22:40:04.648] Imported new state entries count=384 elapsed="3.3µs" processed=364881536 pending=117817 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [11-11|22:40:07.354] Initializing fast sync bloom items=1325379313 errorrate=0.009 elapsed=1h17m31.796s
INFO [11-11|22:40:15.354] Initializing fast sync bloom items=1328175729 errorrate=0.009 elapsed=1h17m39.796s
INFO [11-11|22:40:16.847] Imported new block headers count=1 elapsed=6.888ms number=11239047 hash="8d4f9e…9fc8f1" age=1m9s
INFO [11-11|22:40:17.733] Imported new state entries count=384 elapsed="55.74µs" processed=364881920 pending=118287 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [11-11|22:40:19.882] Imported new block headers count=2 elapsed=10.675ms number=11239049 hash="9e2bb0…0293e7"
INFO [11-11|22:40:23.354] Initializing fast sync bloom items=1331746356 errorrate=0.009 elapsed=1h17m47.796s
INFO [11-11|22:40:25.937] Imported new state entries count=288 elapsed="736.253µs" processed=364882208 pending=118623 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [11-11|22:40:28.954] Imported new block headers count=1 elapsed=7.441ms number=11239050 hash="885a53…1a593d"
INFO [11-11|22:40:29.973] Downloader queue stats receiptTasks=0 blockTasks=0 itemSize=164.75KiB throttle=398
INFO [11-11|22:40:31.354] Initializing fast sync bloom items=1334752284 errorrate=0.009 elapsed=1h17m55.796s
INFO [11-11|22:40:36.381] Imported new state entries count=384 elapsed="129.972µs" processed=364882592 pending=119062 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [11-11|22:40:38.069] Imported new block headers count=1 elapsed=27.943ms number=11239051 hash="c74e9d…4af702"
INFO [11-11|22:40:39.357] Initializing fast sync bloom items=1337630821 errorrate=0.009 elapsed=1h18m3.799s
INFO [11-11|22:40:41.096] Imported new block headers count=1 elapsed=6.639ms number=11239052 hash="3c9210…f1c63b"
INFO [11-11|22:40:47.357] Initializing fast sync bloom items=1341269535 errorrate=0.009 elapsed=1h18m11.799s
INFO [11-11|22:40:52.734] Imported new state entries count=493 elapsed="220.964µs" processed=364883085 pending=119628 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [11-11|22:40:55.358] Initializing fast sync bloom items=1344203501 errorrate=0.009 elapsed=1h18m19.801s
INFO [11-11|22:40:56.222] Imported new block headers count=1 elapsed=10.524ms number=11239053 hash="970b84…26b4ec"
INFO [11-11|22:41:02.118] Imported new state entries count=384 elapsed="134.762µs" processed=364883469 pending=120074 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [11-11|22:41:03.362] Initializing fast sync bloom items=1347454987 errorrate=0.009 elapsed=1h18m27.805s
INFO [11-11|22:41:08.304] Imported new block headers count=1 elapsed=6.700ms number=11239054 hash="eabd85…046b3c"
INFO [11-11|22:41:11.363] Initializing fast sync bloom items=1351124513 errorrate=0.009 elapsed=1h18m35.805s
INFO [11-11|22:41:12.373] Imported new state entries count=384 elapsed="139.643µs" processed=364883853 pending=120526 trieretry=0 coderetry=0 duplicate=0 unexpected=0
I will run it for a while to see if it can finish syncing by itself; most likely its blockchain data integrity is already corrupted.
My problem is the same as yours. The block number is always 0.
Can you share the machine that works for you?
The bad node with eth.blockNumber == 0 still cannot sync after many hours; there must be a bug somewhere, or the blockchain data is already broken. Fortunately I don't have to resync the whole blockchain: making a copy of the block data from the good node should work.
My problem is the same as yours. The block number is always 0.
Can you share the machine that works for you?
Sorry, my geth nodes are private and only reserved for internal api services.
making a copy of block data from the good node should work.
Yes, if you have two machines, it's perfectly valid to (when both nodes are shut down) copy the chaindata from one machine to the other. "Sync via scp".
The _bad_ node with eth.blockNumber == 0 still cannot sync after many hours; there must be a bug somewhere, or the blockchain data is already broken. Fortunately I don't have to resync the whole blockchain: making a copy of the block data from the good node should work.
The good news is: the _bad_ node healed itself before I gave up:
> eth.syncing
false
> eth.blockNumber
11241586
>
So we must be __patient__ when dealing with the eth blockchain ;-)
making a copy of block data from the good node should work.
Yes, if you have two machines, it's perfectly valid to (when both nodes are shut down) copy the chaindata from one machine to the other. "Sync via scp".
Before rsyncing or scping the _good_ data over the _bad_ data, please make sure to back up the nodekey file (geth/nodekey); it should be unique for each eth node, otherwise you will end up with twins sharing the same identity.
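The "sync via scp" plus nodekey precaution above can be sketched as follows. The paths are hypothetical stand-ins for the two datadirs, and the directories are simulated so the steps run end to end; in reality both geth processes must be stopped first, and across machines the copy would go over scp or rsync.

```shell
set -eu
GOOD=/tmp/good-node/geth     # healthy node's data dir (hypothetical path)
BAD=/tmp/bad-node/geth       # stuck node's data dir (hypothetical path)

# -- simulate the two datadirs purely for illustration --
mkdir -p "$GOOD/chaindata" "$BAD/chaindata"
echo good-chain   > "$GOOD/chaindata/000001.ldb"
echo bad-identity > "$BAD/nodekey"

# 1. back up the destination's unique identity before touching anything
cp "$BAD/nodekey" "$BAD/nodekey.bak"
# 2. replace the broken chaindata with the healthy node's copy
rm -rf "$BAD/chaindata"
cp -a "$GOOD/chaindata" "$BAD/chaindata"
# 3. restore the nodekey so the two nodes keep distinct identities
cp "$BAD/nodekey.bak" "$BAD/nodekey"
```

Across machines the copy step becomes e.g. `rsync -a` over ssh; the nodekey handling stays the same.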
Having this exact problem.
Confirming that upgrading to geth v1.9.24-stable, running debug.setHead('0xAB6E38') in the console, and then restarting geth fixes the issue.
If anything else goes wrong I'll update here 👇
Edit: Another node which I did not upgrade or touch at all seemed to fix itself.. not entirely sure though 🤔
I am currently upgraded to geth 1.9.24-stable-cc05b050 because of this issue. Also upgraded to an M.2 SSD, without success, even though the sync is much faster.
eth.blockNumber
0
eth.syncing
{
currentBlock: 11269232,
highestBlock: 11281225,
knownStates: 404426203,
pulledStates: 404420180,
startingBlock: 11234872
}
Some warnings I observed related to that:
Nov 18 10:25:06 geth-main geth[202068]: WARN [11-18|10:25:06.348] Header broke chain ancestry peer=6b45efdf39775dc4 number=11278776 hash="0d73de…fde582"
Nov 18 10:25:11 geth-main geth[202068]: WARN [11-18|10:25:11.184] Rewinding blockchain target=11276712
Nov 18 10:25:11 geth-main geth[202068]: WARN [11-18|10:25:11.309] Rolled back chain segment header=11278761->11276712 fast=11276508->11276508 block=0->0 reason="syncing canceled (requested)"
Nov 18 10:25:11 geth-main geth[202068]: WARN [11-18|10:25:11.309] Synchronisation failed, dropping peer peer=46776dff2667d129 err=timeout
Nov 18 10:25:11 geth-main geth[202068]: WARN [11-18|10:25:11.415] Synchronisation failed, dropping peer peer=6b45efdf39775dc4 err="action from bad peer ignored: no pivot included along head header"
Nov 18 10:25:21 geth-main geth[202068]: WARN [11-18|10:25:21.590] Invalid header encountered number=11280112 hash="a7c055…44e42a" parent="eba9c9…12e0a2" err="invalid mix digest"
Nov 18 10:25:21 geth-main geth[202068]: WARN [11-18|10:25:21.591] Rewinding blockchain target=11276763
Nov 18 10:25:21 geth-main geth[202068]: WARN [11-18|10:25:21.717] Rolled back chain segment header=11278812->11276763 fast=11276526->11276526 block=0->0 reason="invalid mix digest"
Nov 18 10:25:21 geth-main geth[202068]: WARN [11-18|10:25:21.717] Synchronisation failed, dropping peer peer=0106da03098626f5 err="retrieved hash chain is invalid: invalid mix digest"
Tried setHead here after adding debug to the http.api argument:
debug.setHead('0xAB6E38')
Error: Post "http://172.31.0.213:8545": EOF
at web3.js:6347:37(47)
at web3.js:5081:62(37)
at:1:14(4)
This returned null after several tries. After restarting the geth service, it deleted all chaindata and started syncing again from here:
eth.syncing
{
currentBlock: 11238323,
highestBlock: 11281717,
knownStates: 405881488,
pulledStates: 405831636,
startingBlock: 11237319
}
// Another print after some hours. Any chance this will get synced?
eth.syncing
{
currentBlock: 11276886,
highestBlock: 11282281,
knownStates: 412371141,
pulledStates: 412370776,
startingBlock: 11271990
}
// Current warnings:
-- Logs begin at Thu 2020-11-05 09:19:13 CET. --
Nov 18 17:23:34 geth-main geth[218410]: WARN [11-18|17:23:34.698] Rolled back chain segment header=11279749->11277700 fast=11275973->11275973 block=0->0 reason="invalid mix digest"
Nov 18 17:23:34 geth-main geth[218410]: WARN [11-18|17:23:34.700] Synchronisation failed, dropping peer peer=86418f27c1491f23 err="retrieved hash chain is invalid: invalid mix digest"
Nov 18 17:23:39 geth-main geth[218410]: WARN [11-18|17:23:39.851] Header broke chain ancestry peer=965b131451c585d6 number=11279486 hash="224013…e0a1a9"
Nov 18 17:23:43 geth-main geth[218410]: WARN [11-18|17:23:43.457] Invalid header encountered number=11280137 hash="b2911a…24b110" parent="7e71e7…93453f" err="invalid mix digest"
Nov 18 17:23:43 geth-main geth[218410]: WARN [11-18|17:23:43.457] Rewinding blockchain target=11276230
Nov 18 17:23:44 geth-main geth[218410]: WARN [11-18|17:23:44.022] Rolled back chain segment header=11278279->11276230 fast=11275975->11275975 block=0->0 reason="invalid mix digest"
Nov 18 17:23:44 geth-main geth[218410]: WARN [11-18|17:23:44.022] Synchronisation failed, dropping peer peer=44c1f68430c13ea4 err="retrieved hash chain is invalid: invalid mix digest"
Nov 18 17:23:50 geth-main geth[218410]: WARN [11-18|17:23:50.023] Synchronisation failed, dropping peer peer=50ada560c7e462a6 err=timeout
Nov 18 17:23:50 geth-main geth[218410]: WARN [11-18|17:23:50.474] Dropping unsynced node during fast sync id=8ee972c5e107920a conn=inbound addr=5.188.124.12:56888 type=geth/v1.9.23-stable-...
Nov 18 17:23:53 geth-main geth[218410]: WARN [11-18|17:23:53.004] Dropping unsynced node during fast sync id=c36e43b49c47ed50 conn=inbound addr=101.95.9.242:42148 type=Geth/v1.9.25-unstabl...
Nov 18 17:23:56 geth-main geth[218410]: WARN [11-18|17:23:56.720] Dropping unsynced node during fast sync id=220616472e27daaf conn=inbound addr=116.202.210.165:34134 type=geth/v1.9.23-stable-...
Nov 18 17:23:57 geth-main geth[218410]: WARN [11-18|17:23:57.087] Header broke chain ancestry peer=965b131451c585d6 number=11279658 hash="788c8d…afacef"
Nov 18 17:23:57 geth-main geth[218410]: WARN [11-18|17:23:57.700] Dropping unsynced node during fast sync id=45b493773600f24f conn=inbound addr=34.91.247.204:39258 type=OpenEthereum/v3.1.0-...
// Deleted the ethash folder and restarted the service. The invalid mix digest errors are gone so far
// After deleting ethash and waiting some time, I am NOW finally synced!
eth.syncing
false
eth.blockNumber
11285313
Afaict this can now be closed?
I reproduced this issue by reverting my chaindata. After deleting ethash again, the sync made progress. So the ethash cache is sometimes corrupted and does not match the chaindata.
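A sketch of the ethash cleanup described above. The datadir location is an assumption (geth defaults to ~/.ethereum on Linux, with verification caches under geth/ethash); a throwaway directory stands in for it here. Stop geth before deleting; it regenerates the caches on the next run.

```shell
set -eu
DATADIR=/tmp/demo-ethereum        # hypothetical; the real default is ~/.ethereum
mkdir -p "$DATADIR/geth/ethash"   # simulate a stale verification cache
# remove the possibly-corrupted ethash caches; geth rebuilds them on restart
rm -rf "$DATADIR/geth/ethash"
```

Note that full mining DAGs may additionally live under ~/.ethash by default; check your setup before deleting anything there.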
I had the same problem. Updated to 1.9.24, cleared the ethash folder & restarted. Seems to work now.