I'm trying to understand the problem, but the only conclusion I can arrive at is that there aren't enough peers to seed from. Can someone confirm whether that's the problem?
DEBUG[12-26|04:34:59] Peer discarded announcement peer=980dc5b107632a26 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:34:59] Peer discarded announcement peer=21b13874df3cb141 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:34:59] Peer discarded announcement peer=24032d4a4be035e5 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:00] Discarded propagated block, too far away peer=24032d4a4be035e5 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:00] Discarded propagated block, too far away peer=407dfc207efdad3d number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:00] Peer discarded announcement peer=18f60a977445d999 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:00] Discarded propagated block, too far away peer=2f40ff0e1dee36f7 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:00] Peer discarded announcement peer=915406cdb1a18374 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:00] Peer discarded announcement peer=8fff43f96e7749e8 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:00] Peer discarded announcement peer=7e7afff8d43b14bb number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:03] Peer discarded announcement peer=677d93e7ae2b2e94 number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:04] Peer discarded announcement peer=f05ce2525cf4f00e number=4800554 hash=a9587a…d753eb distance=2460633
DEBUG[12-26|04:35:05] Recalculated downloader QoS values rtt=19.999999997s confidence=1.000 ttl=1m0s
> eth.syncing
{
currentBlock: 2339921,
highestBlock: 4793922,
knownStates: 0,
pulledStates: 0,
startingBlock: 0
}
> DEBUG[12-26|04:35:20] Peer discarded announcement peer=18f60a977445d999 number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:20] Peer discarded announcement peer=21b13874df3cb141 number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:21] Peer discarded announcement peer=24032d4a4be035e5 number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:21] Discarded propagated block, too far away peer=915406cdb1a18374 number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:21] Peer discarded announcement peer=980dc5b107632a26 number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:21] Discarded propagated block, too far away peer=2f40ff0e1dee36f7 number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:22] Peer discarded announcement peer=407dfc207efdad3d number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:22] Peer discarded announcement peer=7e7afff8d43b14bb number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:22] Peer discarded announcement peer=f05ce2525cf4f00e number=4800555 hash=b8d7b5…4c9504 distance=2460634
DEBUG[12-26|04:35:25] Recalculated downloader QoS values rtt=19.999999997s confidence=1.000 ttl=1m0s
DEBUG[12-26|04:35:25] Peer discarded announcement peer=18f60a977445d999 number=4800556 hash=a7ec82…8421f6 distance=2460635
DEBUG[12-26|04:35:25] Peer discarded announcement peer=21b13874df3cb141 number=4800556 hash=a7ec82…8421f6 distance=2460635
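For what it's worth, the distance in those messages looks like it is simply the gap between the announced block and my local head, so announcements and propagated blocks are being discarded because the node is still millions of blocks behind. A quick sanity check in the console (illustrative arithmetic only, not geth's internal logic):

> eth.syncing.currentBlock
2339921
> 4800554 - eth.syncing.currentBlock
2460633

2460633 is exactly the distance reported in the first batch of log lines, and 4800555 - 2339921 gives the 2460634 in the later ones.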
It's been having a really tough time the past couple of days. If it would help, I could set up a few peers to seed the blockchain exclusively... but I want to understand this first.
All I'd like to know is what is at fault here.
Same issue. I have a ton of ignored blocks and unexpected state entries, and then it eventually drops all the peers.
Seems related to https://github.com/ethereum/go-ethereum/issues/15712
I'm not so sure it is; it seems more related to https://github.com/ethereum/go-ethereum/issues/15649
Same here. Fast sync downloads the chain until 123 blocks are left, then starts downloading a few million state entries, and then nothing else happens. Every peer is discarded.
After a restart of the program the few million state entries are gone; it starts again from zero state entries, with the chain downloaded up to head minus 123 blocks. I've been trying to sync for 3 weeks now: multiple restarts, chain deletes, and a few days of sitting undisturbed in front of the thing watching what happens at verbosity 4. One notable entry is "FS scan time 0s" printed every second; once that appears, all normal activity such as successfully contacting peers stops. In that situation the program can also no longer be interrupted cleanly with Ctrl+C.
I see two possibilities here: the program is bugged, or a major part of the Ethereum network consists of faulty/malevolent clients, or both. There is a reason I say that: the last time syncs were this faulty for me, I could see that this behaviour only occurred when 10 peers connected at the same time. With 1-5 clients connected, everything was OK; when 10 joined at once, sync broke. That led me to suspect intent, but I lacked the tools to check and correlate the IPs. Additionally, I just want to use the program, not debug the environment.
All peers show the same distance (4572546) when the sync stalls.
The machine and client 1.7.2 were running OK for months. Then it lost sync because the machine was off for a day. It never synced again; I switched to 1.7.3, same behaviour. It had 2.5 million state entries downloaded; after a restart of geth it starts with 0 state entries and 256 missing blocks, downloading ~1000 state entries every 10 minutes, so roughly 50000 minutes for the 50 million. The block download does not change at all.
The start of the sync is wonderful: 4-6 hours to reach block ~4,080,000. Then totally random behaviour occurs (last block changing, state entry count changing up and down) until block ~4,200,000 / 4,300,000; then it is wonderful again but never finishes.
Last remark: in a good sync, state and blocks are downloaded in parallel. In the current bad sync, blocks are downloaded along with a few thousand states; then the block download stops and only states are downloaded.
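As a rough check on whether anything is still moving during the bad phase, I compare two snapshots of eth.syncing taken a minute or two apart in the console (nothing special here beyond eth.syncing and net.peerCount; the pause is arbitrary):

> var before = eth.syncing
> // wait a minute or two, then:
> var after = eth.syncing
> after.currentBlock - before.currentBlock
> after.pulledStates - before.pulledStates
> net.peerCount

If the block delta is zero but pulledStates keeps climbing, that matches the "blocks stop, only states download" pattern above; if both deltas are zero and the peer count drops, everything has stalled.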
Maybe it is a firewall/IDS/AV issue where certain traffic is blocked on a few peers and no one notices it.
{
currentBlock: 1101600,
highestBlock: 4872702,
knownStates: 1668568,
pulledStates: 1661993,
startingBlock: 0
}
eth.syncing
Error: EOF
at web3.js:3143:20
at web3.js:6347:15
at get (web3.js:6247:38)
at
That is the error I get when it stalls.
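If I'm reading those numbers right, the node was still about 3.77 million blocks behind and had only a few thousand of its currently known state entries left to pull when it stalled, and the Error: EOF seems to just mean the console lost its connection to the node. Purely for illustration, the gaps can be computed in a session that is still responding (note that knownStates only counts entries discovered so far, so it keeps growing as sync proceeds):

> eth.syncing.highestBlock - eth.syncing.currentBlock
3771102
> eth.syncing.knownStates - eth.syncing.pulledStates
6575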
bump
Even after a restart the program does not load; it just prints FS scan time list=0s every second for hours/days.
@philoctetes409bc Your OP lacks version information and steps to reproduce (why did you clear the issue template?).
Same situation as @JimdiGriz2 here; any news?
Well, I don't know what the programs do internally. I switched to Parity on the same virtual machine; it was also not syncing. The machine was on a 9-disk RAID 5 over iSCSI. After moving the machine and its virtual disk onto an M.2 SSD, both programs synced successfully in 1-2 weeks. I still have trouble understanding why it needs an SSD to sync successfully; 100-200 GB on an SSD is a bit of a cost factor.
Why did I clear the issue template? Because it is a simple issue: geth not syncing. Google that term; the only solution seems to be an SSD. And it happened with multiple versions and multiple machines.
I was unable to get it to sync even on a modern SSD, so I think the issue runs deeper.
unusable!
Sorry, closing this because the report isn't actionable. There is no single bug in geth that causes sync failures. We are aware that sync may sometimes fail for networking reasons.
A change to move the fast sync pivot block as the chain advances was implemented in mid-2018. This removed many of the fast sync issues people had. Numerous other sync issues have been fixed since this issue was created.