OS: Arch Linux & macOS
~ % ipfs version
ipfs version 0.4.10
~ % ipfs init
initializing IPFS node at /Users/user/.ipfs
generating 2048-bit RSA keypair...done
peer identity: Qme3fGyQWP4mf3J9Ln3EjofWyYhiGgVCZZ41jgVA9o78u7
to get started, enter:
ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme
~ % ipfs daemon
Initializing daemon...
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/192.168.0.102/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready
Let's try adding a video:
ipfs add raw_video.mkv
added Qmf6vier2j9rtG7hjA8Bf8ohzT5VYNmGfRntSc4zXsQyPL raw_video.mkv
About 15-20 seconds later, the ipfs daemon crashes:
https://gist.github.com/Netherdrake/4da51b24da82fe25ae476cffeb09cc31
This issue is intermittent: sometimes the add succeeds and I can access the file at localhost:8080/ipfs/HASH/raw_video.mkv without issues, but most of the time the daemon crashes.
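When the add does succeed, a quick way to confirm the file is served by the read-only gateway (a minimal check, assuming curl is installed; substitute the hash printed by ipfs add) is:
% curl -o check.mkv http://127.0.0.1:8080/ipfs/Qmf6vier2j9rtG7hjA8Bf8ohzT5VYNmGfRntSc4zXsQyPL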
You can redirect the error output with ipfs daemon 2>stderr.log
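A minimal sketch of that, assuming a POSIX shell, so the crash trace is captured while still being visible:
% ipfs daemon 2>stderr.log &
% tail -f stderr.log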
I am also unable to reproduce it; I have tried adding about ten 60 MiB files.
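For anyone else trying to reproduce, a sketch of generating ten ~60 MiB files and adding them (assuming GNU coreutils dd and /dev/urandom are available):
% for i in $(seq 1 10); do dd if=/dev/urandom of=test_$i.bin bs=1M count=60; done
% ipfs add test_*.bin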
It turns out ipfs crashes when adding any large file, for example the official Ubuntu ISO.
On macOS it crashes:
ipfs add ubuntu-16.04.2-desktop-amd64.iso
488.00 MB / 1.45 GB [=============================>------------------------------------------------------------] 32.92% 20s17:54:27.509 ERROR commands/h: unexpected EOF client.go:247
Error: unexpected EOF
On Linux it errors out, but the daemon process seems to keep running:
~/Downloads % ipfs add ubuntu-16.04.2-desktop-amd64.iso
32.00 MB / 1.45 GB [=>----------------------------------------------------------------------------] 2.16% 30s17:53:01.534 ERROR commands/h: open /home/user/.ipfs/blocks/SF/put-579906520: too many open files client.go:247
Error: open /home/user/.ipfs/blocks/SF/put-579906520: too many open files
% ipfs daemon
Initializing daemon...
Adjusting current ulimit to 2048...
Successfully raised file descriptor limit to 2048.
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/172.17.0.1/tcp/4001
Swarm listening on /ip4/172.18.0.1/tcp/4001
Swarm listening on /ip4/172.19.0.1/tcp/4001
Swarm listening on /ip4/172.20.0.1/tcp/4001
Swarm listening on /ip4/172.21.0.1/tcp/4001
Swarm listening on /ip4/172.22.0.1/tcp/4001
Swarm listening on /ip4/172.23.0.1/tcp/4001
Swarm listening on /ip4/172.24.0.1/tcp/4001
Swarm listening on /ip4/192.168.1.107/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready
17:53:01.534 ERROR commands/h: err: open /home/user/.ipfs/blocks/SF/put-579906520: too many open files handler.go:285
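The "too many open files" errors suggest the daemon exhausts its file descriptor limit even after the automatic bump to 2048. A possible workaround sketch, assuming the hard limit allows it and that this build honors the IPFS_FD_MAX environment variable (an assumption for 0.4.10), is to raise the limit before starting the daemon:
% ulimit -n 8192
% IPFS_FD_MAX=8192 ipfs daemon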
I have tried downgrading to previous versions, and the only version that doesn't crash is 0.4.6:
~/Downloads % ipfs add ubuntu-16.04.2-desktop-amd64.iso
added QmTc9mzzoEChP2Wyc4uGWGkkifC99y8o4KmxwVyg18MP76 ubuntu-16.04.2-desktop-amd64.iso
~/Downloads % ipfs version
ipfs version 0.4.6
~/Downloads % pacman -Q go-ipfs
go-ipfs 0.4.6-1
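For anyone on Arch wanting the same downgrade, one sketch, assuming the older package is still in the pacman cache (the exact filename and architecture will differ per machine):
% sudo pacman -U /var/cache/pacman/pkg/go-ipfs-0.4.6-1-x86_64.pkg.tar.xz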
I am unable to reproduce this on either macOS 10.12.5 or Ubuntu 16.04 using ipfs 0.4.10.
$ truncate -s 500M testfile
$ ipfs add testfile
added QmV7q5aTmvZtGWja4wpodiUTEpBVWYFkQGRQ8PmJMDPG62 testfile
$
Was your build of ipfs from source, from your package manager, or the binary from the web? Maybe try it with a clean ~/.ipfs directory to see if the problem persists.
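A non-destructive version of that test, assuming nothing valuable is pinned that you cannot re-add, is to move the repo aside instead of deleting it:
% mv ~/.ipfs ~/.ipfs.bak
% ipfs init
% ipfs daemon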
They are all from package managers. Every output in the messages above started with rm -rf ~/.ipfs && ipfs init.
@Netherdrake if you run your daemon with the --routing=none option, does it still fail in the same way?
With --routing=none it works fine.
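For anyone following along, that amounts to starting the daemon with content routing disabled (a sketch, otherwise assuming the default config):
% ipfs daemon --routing=none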
Okay, so this is a case of adding a file causing the DHT's provider subsystem to connect to wayyyy too many peers. If you remove the --routing=none flag and use ipfs add --local, things should also work fine.
I'm working on a fix for this; we will hopefully have something in the next release.
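In other words, the suggested workaround is to run the daemon normally and skip announcing the new blocks during the add, roughly (a sketch, reusing the ISO from above):
% ipfs daemon
# in another shell:
% ipfs add --local ubuntu-16.04.2-desktop-amd64.iso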
Oh, I also realize this issue is on the wrong repo. In the future, use ipfs/go-ipfs to report issues like this.