Go-ethereum: Swarm doesn't respond on POST to `/bzz-raw:`

Created on 21 Aug 2018 · 10 comments · Source: ethereum/go-ethereum

System information

Geth
Version: 1.8.13-stable
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.10.3
Operating System: darwin
Swarm
Version: 0.3.1-stable
Go Version: go1.10.3
OS: darwin

Swarm was started with the following command:

swarm --bzzaccount 1b70c52e735460c80184da2af751cf696c509dd3 --password /Users/emizzle/Code/__Github/embark_demo/config/privatenet/password --corsdomain http://localhost:8000,http://localhost:8080,http://localhost:8545,ws://localhost:8546,http://localhost:3000 --maxpeers 0 --nodiscover

Startup log:

INFO [08-21|10:23:47.771] Maximum peer count                       ETH=0 LES=0 total=0
INFO [08-21|10:23:49.905] Starting peer-to-peer node               instance=swarm/v0.3.1/darwin-amd64/go1.10.3
INFO [08-21|10:23:49.921] Starting P2P networking
WARN [08-21|10:23:49.921] Updated bzz local addr                   oaddr=9578c155186687664a60591866e4bdd37b72b8f3e7a83d8ddff965c82cd4cd35 uaddr="enode://d0da0c7ce096eb62371be329acdb7a7cf29a606411bda3cf46da3f4fba43725f02459bd9fe8c3cd705fbbdd890ec33ed3108a05aeaefe8c86bb82d24fa3f9ece@[::]:30399?discport=0"
WARN [08-21|10:23:49.921] Starting Swarm service
INFO [08-21|10:23:49.921] 9578c155 hive starting
INFO [08-21|10:23:49.921] detected an existing store. trying to load peers
INFO [08-21|10:23:49.922] RLPx listener up                         self="enode://d0da0c7ce096eb62371be329acdb7a7cf29a606411bda3cf46da3f4fba43725f02459bd9fe8c3cd705fbbdd890ec33ed3108a05aeaefe8c86bb82d24fa3f9ece@[::]:30399?discport=0"
INFO [08-21|10:23:49.922] hive 9578c155: peers loaded
INFO [08-21|10:23:49.923] Swarm network started on bzz address: 9578c155186687664a60591866e4bdd37b72b8f3e7a83d8ddff965c82cd4cd35
INFO [08-21|10:23:49.923] Pss started
INFO [08-21|10:23:49.923] Streamer started
INFO [08-21|10:23:49.925] IPC endpoint opened                      url=/Users/emizzle/Library/Ethereum/bzzd.ipc
INFO [08-21|10:23:52.410] Mapped network port                      proto=tcp extport=30399 intport=30399 interface="UPNP IGDv1-IP1"
WARN [08-21|10:26:13.945] 9578c155: handshake failed with remote peer 2338c1a1: Handshake error: Invalid message (RLP error): <= msg #0 (202 bytes): rlp: input list has too many elements for network.HandshakeMsg

On POST to /bzz-raw:/, the request is logged but no response is ever served:

INFO [08-21|10:27:03.360] serving request                          ruid=a5c664b9 method=POST url=/bzz-raw:/
WARN [08-21|10:27:15.449] 9578c155: handshake failed with remote peer 2338c1a1: Handshake error: Invalid message (RLP error): <= msg #0 (202 bytes): rlp: input list has too many elements for network.HandshakeMsg

After terminating the node with Ctrl+C, the shutdown log shows the hung request finally being served (ruid=a5c664b9, code=200, below) amid a flood of SendOfferedHashes warnings:

INFO [08-21|10:29:13.400] IPC endpoint closed                      endpoint=/Users/emizzle/Library/Ethereum/bzzd.ipc
INFO [08-21|10:29:13.400] pss shutting down
WARN [08-21|10:29:13.400] SendOfferedHashes dropping peer          err="leveldb: closed"
(previous line repeated 14 more times)
INFO [08-21|10:29:13.401] 9578c155 hive stopping, saving peers
WARN [08-21|10:29:13.401] SendOfferedHashes dropping peer          err="leveldb: closed"
(previous line repeated 27 more times; one final repetition is cut off mid-line by interleaved output)
INFO [08-21|10:29:13.403] 9578c155 hive stopped, dropping peers
INFO [08-21|10:29:13.403] 9578c155 dropping peer ef84860d
INFO [08-21|10:29:13.403] 9578c155 dropping peer eb807086
INFO [08-21|10:29:13.403] 9578c155 dropping peer f4d06010
INFO [08-21|10:29:13.403] 9578c155 dropping peer fe3eb64e
INFO [08-21|10:29:13.403] 9578c155 dropping peer fe3a0995
WARN [08-21|10:29:13.403] SendOfferedHashes dropping peer          err="leveldb: closed"
(previous line repeated 10 more times)
INFO [08-21|10:29:13.403] 9578c155 dropping peer f878603d
INFO [08-21|10:29:13.404] 9578c155 dropping peer d1ac3ef6
INFO [08-21|10:29:13.404] 9578c155 dropping peer dad2de2e
INFO [08-21|10:29:13.404] 9578c155 dropping peer dba1c948
INFO [08-21|10:29:13.404] 9578c155 dropping peer dec2a07b
INFO [08-21|10:29:13.404] 9578c155 dropping peer 1d6c297e
INFO [08-21|10:29:13.404] 9578c155 all peers dropped
WARN [08-21|10:29:13.404] SendOfferedHashes dropping peer          err="leveldb: closed"
INFO [08-21|10:29:13.404] served response                          ruid=a5c664b9 code=200
WARN [08-21|10:29:13.404] SendOfferedHashes dropping peer          err="leveldb: closed"
(previous line repeated 22 more times)
Labels: steps-to-reproduce, bug

All 10 comments

@emizzle thank you for reporting this issue.
Could you please also specify the exact command you used to POST to /bzz-raw:/? Otherwise this would be a bit hard to trace.

For what it's worth, the handshake error is independent of the POST to /bzz-raw:/ issue. You are getting the handshake error because the handshake protocol changed, and some nodes are running a different version.

> Could you please also specify the exact command you used to POST to /bzz-raw:/? Otherwise this would be a bit hard to trace.

@justelad, I've tried with curl, using `curl -d test -X POST http://localhost:8500/bzz-raw:/`, and also with a plain raw-body POST from Postman (as well as via our updates to a forked swarmgw, but that's out of scope for this issue). [screenshot: Postman POST]
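
For reference, the same upload can be reproduced from a short Go program. This is a minimal sketch, assuming the default local HTTP gateway address http://localhost:8500 (the address used in the curl commands above); on success the response body is the hash of the uploaded content:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

func main() {
	// POST raw bytes to the local Swarm HTTP gateway.
	resp, err := http.Post("http://localhost:8500/bzz-raw:/", "text/plain",
		strings.NewReader("test"))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// On success the response body contains the hash of the upload.
	hash, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%d hash=%s\n", resp.StatusCode, hash)
}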

> For what it's worth, the handshake error is independent of the POST to /bzz-raw:/ issue. You are getting the handshake error because the handshake protocol changed, and some nodes are running a different version.

@nonsense, makes sense, thanks for the info. I had assumed something like that was the cause, but wanted to include the info just in case.

@emizzle I unfortunately cannot reproduce with `curl -d test -X POST http://localhost:8500/bzz-raw:/`. I tried both the latest unstable and 0.3.1-stable (both built from source).
Could you please specify whether you built from source or installed from binaries?

Also, instead of terminating with Ctrl+C, could you please use `kill -3 <swarm_pid>` (SIGQUIT, which makes the Go runtime print the stack of every goroutine) and paste the thread dump here?

@justelad I built from source, commit 225171a4bfcc16bd12a1906b1e0d43d0b18c353b

Setup for dump:

  1. Started swarm using the same command as before:
swarm --bzzaccount 1b70c52e735460c80184da2af751cf696c509dd3 --password /Users/emizzle/Code/__Github/embark_demo/config/privatenet/password --corsdomain http://localhost:8000,http://localhost:8080,http://localhost:8545,ws://localhost:8546,http://localhost:3000 --maxpeers 0 --nodiscover
  2. Here's the output from the above command:
swarm --bzzaccount 1b70c52e735460c80184da2af751cf696c509dd3 --password /Users/emizzle/Code/__Github/embark_demo/config/privatenet/password --corsdomain http://localhost:8000,http://localhost:8080,http://localhost:8545,ws://localhost:8546,http://localhost:3000 --maxpeers 0 --nodiscover
INFO [08-23|13:17:42.719] Maximum peer count                       ETH=0 LES=0 total=0
INFO [08-23|13:17:43.754] Starting peer-to-peer node               instance=swarm/v0.3.1/darwin-amd64/go1.10.3
INFO [08-23|13:17:43.767] Starting P2P networking
INFO [08-23|13:17:43.768] RLPx listener up                         self="enode://d0da0c7ce096eb62371be329acdb7a7cf29a606411bda3cf46da3f4fba43725f02459bd9fe8c3cd705fbbdd890ec33ed3108a05aeaefe8c86bb82d24fa3f9ece@[::]:30399?discport=0"
WARN [08-23|13:17:43.768] Updated bzz local addr                   oaddr=9578c155186687664a60591866e4bdd37b72b8f3e7a83d8ddff965c82cd4cd35 uaddr="enode://d0da0c7ce096eb62371be329acdb7a7cf29a606411bda3cf46da3f4fba43725f02459bd9fe8c3cd705fbbdd890ec33ed3108a05aeaefe8c86bb82d24fa3f9ece@[::]:30399?discport=0"
WARN [08-23|13:17:43.768] Starting Swarm service
INFO [08-23|13:17:43.768] 9578c155 hive starting
INFO [08-23|13:17:43.768] detected an existing store. trying to load peers
INFO [08-23|13:17:43.769] hive 9578c155: peers loaded
INFO [08-23|13:17:43.769] Swarm network started on bzz address: 9578c155186687664a60591866e4bdd37b72b8f3e7a83d8ddff965c82cd4cd35
INFO [08-23|13:17:43.769] Pss started
INFO [08-23|13:17:43.769] Streamer started
INFO [08-23|13:17:43.771] IPC endpoint opened                      url=/Users/emizzle/Library/Ethereum/bzzd.ipc
INFO [08-23|13:17:46.281] Mapped network port                      proto=tcp extport=30399 intport=30399 interface="UPNP IGDv1-IP1"
  3. Ran curl -XPOST http://localhost:8500/bzz-raw:/ --data testtesttesttesttestest-dafdsa
  4. Got the successful response b512f24c5b15886fb3d41ee9298f996195477d7ca0140c579d919ec4b1c94458% (the trailing % is most likely the shell's missing-newline marker)
  5. Ran curl -XPOST http://localhost:8500/bzz-raw:/ --data testtesttesttesttestest (notice how I changed the data)
  6. Swarm freezes.
  7. Swarm output shows the first successful request/response, then the hang:
INFO [08-23|13:17:49.279] serving request                          ruid=082b4bda method=POST url=/bzz-raw:/
INFO [08-23|13:17:49.280] served response                          ruid=082b4bda code=200
INFO [08-23|13:17:53.679] serving request                          ruid=764acae2 method=POST url=/bzz-raw:/
  8. Ran kill -3 <swarm pid>
  9. Dump of kill -3:

@emizzle I unfortunately cannot reproduce this, but it might suggest a bug somewhere else.

@zelig @janos the thread dump suggests this might have to do with a WaitGroup that doesn't resolve properly when there's just one data chunk in the pyramid chunker:


goroutine 137762 [semacquire]:
sync.runtime_Semacquire(0xc420bce34c)
    /usr/local/Cellar/go/1.10.3/libexec/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc420bce340)
    /usr/local/Cellar/go/1.10.3/libexec/src/sync/waitgroup.go:129 +0x72
github.com/ethereum/go-ethereum/swarm/storage.(*PyramidChunker).prepareChunks(0xc4201ed110, 0x0)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/pyramid.go:498 +0x9e5
github.com/ethereum/go-ethereum/swarm/storage.(*PyramidChunker).Split(0xc4201ed110, 0x4c66480, 0xc420bcae70, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/pyramid.go:211 +0xb8
github.com/ethereum/go-ethereum/swarm/storage.PyramidSplit(0x4c66480, 0xc420bcae70, 0x6b80dc0, 0xc421241700, 0x4c66900, 0xc421241840, 0x4c58940, 0xc421241840, 0x9, 0x4, ...)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/pyramid.go:104 +0xba
github.com/ethereum/go-ethereum/swarm/storage.(*FileStore).Store(0xc42034c260, 0x4c66480, 0xc420bcae70, 0x6b80dc0, 0xc421241700, 0x17, 0x4a6f600, 0xc420bcadb0, 0x4b49fbf, 0xe, ...)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/filestore.go:93 +0xb2
github.com/ethereum/go-ethereum/swarm/api.(*API).Store(0xc42034c320, 0x4c66480, 0xc420bcae70, 0x6b80dc0, 0xc421241700, 0x17, 0x4c6ca00, 0x5561e38, 0xc420210fa0, 0x0, ...)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/api/api.go:257 +0x12f

where swarm/storage/pyramid.go:498 is the chunkWG.Wait() in:

// only one data chunk .. so dont add any parent chunk
if parent.branchCount <= 1 {
    chunkWG.Wait()
...

Then there's also this lock acquisition:

goroutine 137763 [semacquire]:
sync.runtime_SemacquireMutex(0xc4203fc06c, 0xc420336a00)
    /usr/local/Cellar/go/1.10.3/libexec/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc4203fc068)
    /usr/local/Cellar/go/1.10.3/libexec/src/sync/mutex.go:134 +0x108
sync.(*RWMutex).Lock(0xc4203fc068)
    /usr/local/Cellar/go/1.10.3/libexec/src/sync/rwmutex.go:93 +0x2d
github.com/ethereum/go-ethereum/swarm/storage.(*LDBStore).Put(0xc4203fc000, 0x4c66400, 0xc4200b2030, 0xc421230c80)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/ldbstore.go:511 +0x230
github.com/ethereum/go-ethereum/swarm/storage.(*LocalStore).Put(0xc420444bd0, 0x4c66400, 0xc4200b2030, 0xc421230c80)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/localstore.go:140 +0x323
github.com/ethereum/go-ethereum/swarm/storage.(*NetStore).Put(0xc4200a9e10, 0x4c66400, 0xc4200b2030, 0xc421230c80)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/netstore.go:192 +0x4c
github.com/ethereum/go-ethereum/swarm/storage.(*hasherStore).storeChunk(0xc421241840, 0x4c66400, 0xc4200b2030, 0xc421230c80)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/hasherstore.go:216 +0x9c
github.com/ethereum/go-ethereum/swarm/storage.(*hasherStore).Put(0xc421241840, 0x4c66400, 0xc4200b2030, 0xc420c04600, 0x1f, 0x1008, 0xc420501d80, 0x4684fb4, 0xc4200be1b0, 0x1, ...)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/hasherstore.go:90 +0xd5
github.com/ethereum/go-ethereum/swarm/storage.(*PyramidChunker).processChunk(0xc4201ed110, 0x1, 0xc421241880)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/pyramid.go:290 +0x12e
github.com/ethereum/go-ethereum/swarm/storage.(*PyramidChunker).processor(0xc4201ed110, 0x1)
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/pyramid.go:280 +0x81
created by github.com/ethereum/go-ethereum/swarm/storage.(*PyramidChunker).prepareChunks
    /Users/emizzle/go/src/github.com/ethereum/go-ethereum/swarm/storage/pyramid.go:395 +0x157

where swarm/storage/ldbstore.go:511 is the s.lock.Lock() in:

func (s *LDBStore) Put(ctx context.Context, chunk *Chunk) {
    metrics.GetOrRegisterCounter("ldbstore.put", nil).Inc(1)
    log.Trace("ldbstore.put", "key", chunk.Addr)

    ikey := getIndexKey(chunk.Addr)
    var index dpaDBIndex

    po := s.po(chunk.Addr)
    s.lock.Lock()
...
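
Putting the two stacks together, the hang looks like a classic WaitGroup/Mutex deadlock: prepareChunks blocks in chunkWG.Wait() while the single processChunk worker that would call Done() is itself blocked on the store lock. A minimal, self-contained sketch of that pattern (illustration only, not go-ethereum code; all names here are made up):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var chunkWG sync.WaitGroup
	var storeLock sync.Mutex // stands in for the LDBStore's s.lock

	// Assume some other writer holds the store lock and never releases it.
	storeLock.Lock()

	chunkWG.Add(1)
	go func() { // stands in for the single processChunk worker
		defer chunkWG.Done()
		storeLock.Lock() // blocks forever, like LDBStore.Put at ldbstore.go:511
		storeLock.Unlock()
	}()

	done := make(chan struct{})
	go func() {
		chunkWG.Wait() // like prepareChunks at pyramid.go:498
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("no deadlock")
	case <-time.After(2 * time.Second):
		fmt.Println("deadlock: Wait() never returns while the store lock is held")
	}
}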

@emizzle can you also please let us know whether the problem still persists on master@HEAD?

This was solved for me by running geth alongside swarm. After re-reading the documentation, this doesn't seem to be an obvious requirement at all. Does it have something to do with sharing a datadir, maybe? Either way, it works when geth is running, and the above happens when geth is not running.

@emizzle there's no need to run geth alongside swarm. The only exception is that geth is needed when you want ENS name resolution (in which case you have to specify the endpoint with --ens-api when starting swarm). I'd leave this issue open for now.
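
As a sketch, enabling ENS resolution would look like the following; the IPC path here is an assumption based on the default macOS data directory seen in the logs above, so adjust it to wherever your geth exposes its IPC endpoint:

swarm --bzzaccount 1b70c52e735460c80184da2af751cf696c509dd3 \
      --ens-api /Users/emizzle/Library/Ethereum/geth.ipc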

My guess is that if you clear out your datadir you might be able to restore normal operation.
But if you do so, please make a complete backup of your current one (swarm and geth included) so we can get back to finding out why this is happening.

Closing due to inability to reproduce.
