Go-ipfs: ERROR: parse.go:200 - could not guess encoding from content type

Created on 31 Jan 2020 · 10 comments · Source: ipfs/go-ipfs

Version information:

go-ipfs version: 0.4.23-6ce9a355f
Repo version: 7
System version: amd64/linux
Golang version: go1.13.7

Description:

On `ipfs files rm` I occasionally get this error directly in the console, while the return code of the command is 0. Any idea why this is happening? Is this a sign of data corruption?

```
ERROR cmds/http: could not guess encoding from content type "" parse.go:200
```

Might be tied to this output in the log of the daemon, or completely independent:

https://github.com/ipfs/go-ipfs/issues/6860

dif/easy help wanted kind/bug topic/http-api

All 10 comments

Can you positively link this error message to `ipfs files rm`?

It seems this would only show if ipfs returned a response without the Content-Type header set, but I see it always set to application/json on files/rm. I also don't see why this should happen only occasionally. Is there anything special in your setup?

The parse.go file is at https://github.com/ipfs/go-ipfs-cmds/blob/master/http/parse.go#L200.

If you can reproduce this reliably, I would enable debug logging for cmds, cmds/cli and cmds/http (`ipfs log level cmds/http debug`) and post the output (it might offer a bit more detail). Alternatively, testing with a modified go-ipfs-cmds version that prints the actual request when the issue happens might help.
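
As a rough illustration of why an empty Content-Type triggers the message, here is a simplified sketch of the idea behind the check at parse.go:200. It is not the actual go-ipfs-cmds code; the `guessEncoding` helper and its JSON fallback are made up for the example.

```go
package main

import (
	"log"
	"mime"
	"net/http"
	"net/http/httptest"
)

// guessEncoding mimics, in very simplified form, the kind of check done around
// parse.go:200 in go-ipfs-cmds: derive the response decoder from the
// Content-Type header and complain when the header is missing or unparseable.
// Illustrative sketch only, not the real implementation.
func guessEncoding(resp *http.Response) string {
	ct := resp.Header.Get("Content-Type")
	mediaType, _, err := mime.ParseMediaType(ct)
	if err != nil {
		// An empty Content-Type ends up here and yields a log line very much
		// like the one reported in this issue.
		log.Printf("could not guess encoding from content type %q", ct)
		return "json" // assumed fallback, for illustration only
	}
	if mediaType == "application/json" {
		return "json"
	}
	return "text"
}

func main() {
	// Fake API endpoint that, unlike a healthy files/rm response, never sets a
	// Content-Type header.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // headers flushed without a Content-Type
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("decoding response as:", guessEncoding(resp))
}
```

Running it prints the same `could not guess encoding from content type ""` complaint, because the test server flushes its headers without ever setting Content-Type.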

> Can you positively link this error message to `ipfs files rm`?

Yes, that was the immediate response to an `ipfs files rm` command on the console.

> Is there anything special in your setup?

I don't know what a 'normal' setup would look like; this is my first setup with ipfs. :)

I'm running a cluster on this node as well, and the commands do manipulate the same data. So content added via a cluster command gets placed into a folder via ipfs. If my source deletes a file, I set a 2-month expire-in time via the cluster command and delete the file from the folder via ipfs.

But there _should_ be days between a file add and a file delete. While I was setting this up and testing my script, it _might_ have been a race condition, because I had added the file via the cluster command very recently.

But I never run commands concurrently on this node.

> If you can reproduce this reliably, I would enable debug logging for cmds, cmds/cli and cmds/http (`ipfs log level cmds/http debug`) and post the output (it might offer a bit more detail). Alternatively, testing with a modified go-ipfs-cmds version that prints the actual request when the issue happens might help.

It doesn't happen anymore for me. Not sure what changed, though.

Does this happen at the same moments as the #6860 panic?

@hsanjuan that's quite likely. I didn't check the timestamps back then, but both happened roughly at the same time.

No panics and no error messages since then. Both stopped after I removed all pins from the cluster and re-added them.

Ok, it should be easy to check whether a panic on the server side causes this on the client side. It is probably a harmless message anyway.

> Ok, it should be easy to check whether a panic on the server side causes this on the client side. It is probably a harmless message anyway.

Alright, do you need anything additional from me? 🤔

Testing whether the error message pops up at the same moment a panic is triggered would be nice. You can manually panic in core/commands/version.go and then run ipfs version, for example.
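
The client side of that check can be observed with a small sketch like the one below, assuming the daemon's API listens on the default 127.0.0.1:5001. The /api/v0/version endpoint matches the suggestion above to inject a test panic into the version command.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// Quick client-side check to correlate the two symptoms: call the daemon's
// HTTP API directly and print the response status and Content-Type header,
// so a missing Content-Type can be matched against a panic in the daemon log.
func main() {
	resp, err := http.Post("http://127.0.0.1:5001/api/v0/version", "", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println("status:      ", resp.Status)
	fmt.Println("content-type:", resp.Header.Get("Content-Type"))
}
```

With the daemon running normally, content-type should come back as application/json; if it is empty at the moment the injected panic fires in the daemon log, that would confirm the link.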

@Stebalien I guess this also explains these log messages on the console running the `ipfs files rm` command, right?

If you were seeing that log message when you saw that stacktrace? Probably.

@Stebalien yeah, it was at least in the same run of the script I was writing and testing.

I think it's fine to close it. :)
