$ ipfs version --all
go-ipfs version: 0.4.13-
Repo version: 6
System version: amd64/linux
Golang version: go1.9.2
Bug
Critical (possibly related to an experimental feature?)
This error is preventing my node from starting up or from otherwise doing anything. Aside from temporarily running out of disk space a couple of days ago (I don't remember whether IPFS automatically started after the next boot), there aren't any relevant changes that I'm aware of.
In case it's relevant, this node is using the badger datastore.
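For reference, I believe I enabled badger via the experimental profile when initializing (or converted an existing repo with ipfs-ds-convert); from memory, the commands look roughly like this:
$ ipfs init --profile=badgerds   # new repo backed by the badger datastore
$ ipfs-ds-convert convert        # or: convert an existing repo in place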
$ ipfs daemon
Initializing daemon...
Adjusting current ulimit to 2048...
Successfully raised file descriptor limit to 2048.
12:59:26.950 ERROR cmd/ipfs: error from node construction: reading varint: buffer too small daemon.go:320
Error: reading varint: buffer too small
$ ipfs repo stat
Error: reading varint: buffer too small
@leerspace thanks for reporting. What sorts of things were you doing with your node before it crashed? (That will help us home in on where the problem might be.)
cc @kevina
That error means that there's a corrupted CID somewhere. Whatever you do, please do not delete that repo (we'll need it to debug this issue).
Would you feel comfortable sharing it with us?
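For anyone curious where that error comes from: records in the datastore are length-prefixed with a varint, and if a record gets truncated the prefix can't be decoded. Here's a minimal Go sketch of that failure mode (illustrative only; not the actual go-ipfs code path):

package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// readUvarint mimics the kind of varint parsing done when decoding
// length-prefixed records (such as CIDs) from the datastore.
func readUvarint(buf []byte) (uint64, int, error) {
	v, n := binary.Uvarint(buf)
	if n == 0 {
		// binary.Uvarint returns n == 0 when the buffer ends before
		// the varint does -- the "buffer too small" case.
		return 0, 0, errors.New("reading varint: buffer too small")
	}
	if n < 0 {
		return 0, 0, errors.New("reading varint: value overflows uint64")
	}
	return v, n, nil
}

func main() {
	// A well-formed record: varint length prefix followed by payload.
	ok := []byte{0x03, 'a', 'b', 'c'}
	if v, _, err := readUvarint(ok); err == nil {
		fmt.Println("length prefix:", v) // length prefix: 3
	}

	// A truncated record, as a corrupted datastore entry might look:
	// the varint is cut off mid-way (continuation bit set, but no
	// following byte), so decoding fails.
	bad := []byte{0x80}
	if _, _, err := readUvarint(bad); err != nil {
		fmt.Println(err) // reading varint: buffer too small
	}
}

So the error itself only tells us that a stored record is truncated or garbled; it doesn't say how it got that way.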
Oh, totally missed that you were using badger. I think that bit is definitely relevant.
cc @magik6k
If IPFS was running when you ran out of space, it's quite likely that badger got corrupted, since IPFS writes to the datastore quite frequently even when it isn't being used actively (mostly provider records, IIRC).
If that's the case we may want to report it to the badger team, though it may be something they aren't able to fix.
@magik6k a database _should not_ get corrupted due to running out of space (it should just fail to update); I consider that a bug.
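For what it's worth, the usual way to get that fail-cleanly behavior is to never overwrite data in place; the classic pattern is write-to-temp-then-rename. A rough Go sketch of the idea (a generic illustration, not how badger actually works internally):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// atomicWrite writes to a temporary file first, then renames it over
// the destination. If the disk fills up mid-write, the write or sync
// fails, the temp file is discarded, and the original data is never
// touched -- the update fails instead of corrupting anything.
func atomicWrite(path string, data []byte) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // harmless no-op once the rename succeeds

	if _, err := tmp.Write(data); err != nil { // ENOSPC surfaces here on a full disk
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // ...or here, when the bytes hit disk
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Rename is atomic on POSIX filesystems when source and destination
	// are on the same filesystem (which is why the temp file lives in
	// the destination's directory).
	return os.Rename(tmp.Name(), path)
}

func main() {
	if err := atomicWrite("value.dat", []byte("new value")); err != nil {
		fmt.Println("update failed, old value intact:", err)
	}
}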
I don't remember if it actually crashed; I ended up rebooting shortly after the issue happened. This was on my laptop, where IPFS is set to start automatically on boot, and my main way of seeing that it's running is the IPFS Companion Firefox plugin (I didn't notice the problem until today, though).
If it was the event where my disk filled up, the cause had nothing to do with IPFS: I was downloading too many torrents that tried to preallocate too much space.
I don't know everything that's in my repo, but it's probably a mix of various archives.ipfs.io content and dtube videos. In any case, here's the repo: /ipfs/QmSeJDCE9AjigoamTBgV5CC8WpnfEPriobnBTwU9bubVTB.
Thanks @leerspace, I'll take a look at the repo to check whether the error is related to badger. The out-of-space scenario should also be tested more thoroughly to see whether it leaves the DB in a corrupted (unusable) state.
@schomatis Sadly, I think that the node I pinned the corrupted repo to eventually got corrupted itself due to a hardware failure and I had to re-initialize it. Unless someone else grabbed the repo in my previous post, I think the corrupted repo might be lost since I can't seem to fully resolve my previous hash anymore.
Thanks @leerspace, the file is still up. (IPFS to the rescue :)
We haven't seen any similar reports in a while and have updated badger several times since. Closing on the assumption that it's fixed.