I'm seeing this error in my production Lucky app pretty often. Prior to Crystal 0.34 I would see it maybe a handful of times a day, but after upgrading to Crystal 0.34, we noticed we were seeing it on almost every request coming in.
As per @waj's comment, the easiest way to get a similar error is:
$ crystal eval 'puts "hi"' | echo
Unhandled exception: Error writing file: Broken pipe (IO::Error)
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/crystal/system/unix/file_descriptor.cr:82:13 in 'unbuffered_write'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/io/buffered.cr:218:5 in 'flush'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/io/buffered.cr:179:7 in 'write_byte'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/char.cr:775:9 in 'to_s'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/io.cr:175:5 in '<<'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/io.cr:189:5 in 'print'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/io.cr:244:5 in 'puts'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/io.cr:219:5 in 'puts'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/kernel.cr:377:3 in 'puts'
from /eval:1:1 in '__crystal_main'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/crystal/main.cr:105:5 in 'main_user_code'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/crystal/main.cr:91:7 in 'main'
from /Users/jeremywoertink/.asdf/installs/crystal/0.34.0/src/crystal/main.cr:114:3 in 'main'
Though there are several other ways to get a similar error, as shown in this thread:
crystal eval "1.upto(1000) { |i| puts i }" | head
Or just putting those lines into a separate Crystal file and running it normally.
Duplicate of #2713.
Well... apparently I keep running in circles 😆
https://github.com/crystal-lang/crystal/issues/7810#issuecomment-585362217
@jwoertink note that this also happens in Ruby; it's just that you need to write more items to trigger it:
$ ruby -e "1.upto(100000) { |i| puts i }" | head -n 1
1
-e:1:in `write': Broken pipe @ io_write - <STDOUT> (Errno::EPIPE)
from -e:1:in `puts'
from -e:1:in `puts'
from -e:1:in `block in <main>'
from -e:1:in `upto'
from -e:1:in `<main>'
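For what it's worth, the usual fix on the Ruby side is to rescue Errno::EPIPE yourself. Here's a minimal self-contained sketch (my own example, not from the thread; it uses an in-process pipe with the read end closed to stand in for `head` exiting early):

```ruby
# Simulate the reader (e.g. `head`) going away: open an in-process
# pipe and close the read end before writing to it.
r, w = IO.pipe
r.close

handled = false
begin
  # With no reader left, write(2) fails with EPIPE, which Ruby
  # surfaces as Errno::EPIPE (Ruby ignores SIGPIPE by default).
  loop { w.write("some output\n") }
rescue Errno::EPIPE
  # A CLI tool can treat a vanished consumer as normal termination.
  handled = true
  puts "broken pipe handled"
end
```

This prints "broken pipe handled" instead of crashing with a backtrace.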
Ah, interesting. So Crystal bails out earlier, where Ruby can handle a little bit more, it seems. I wonder what Node does differently...
This takes a while to run, but it finishes.
node -e "for(let x = 1; x < 1000000; x++) { console.log(x) }" | head
Now, I'm not actually doing any command-line piping in Lucky. I am currently using a slightly older version of Dexter, which just wraps the old logger. There is a chance that the new logger might fix the issue, but we saw several hours of downtime on our app, and there's a worry that pushing this up could take the app down again.
You're talking about this error, right? https://forum.crystal-lang.org/t/unhandled-exception-error-writing-file-broken-pipe-errno/1396/4
In that case, it's unrelated, because there it's the TCP socket that is closed. Maybe the client already moved to another page? Any idea what triggers those? Client disconnections should be handled (and ignored) by the HTTP::Server.
Yes, that error. And actually, in that case, that was me booting my app locally and hitting the home page. I wish I could just say "here's my app!" 😂 Is there a good way I can extract some of this out? I guess I'll try and make a mini http server and logger and see what I can get to happen.
Ok, so I just checked bugsnag; here's the last error we got before dropping back down to 0.33:
Errno Error writing to socket: Broken pipe
/usr/share/crystal/src/io/evented.cr:156:7 in 'unbuffered_write' <file>
/usr/share/crystal/src/http/server/response.cr:209:11 in 'unbuffered_write' <file>
src/handlers/route_handler.cr:19:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:26:7 in 'call_next' <file>
src/handlers/cdn_validator_handler.cr:5:5 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:26:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:26:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:26:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:26:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:26:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:26:7 in 'call' <file>
/usr/share/crystal/src/http/server/request_processor.cr:48:11 in 'process' <file>
/usr/share/crystal/src/fiber.cr:255:3 in 'run' <file>
??? <file>
In case that helps any. I think I'm going to give the new logger a try and see if that makes a difference. But as for the original post, this issue is still valid in the sense that the compiler's crash output should be handled, even though the error itself is technically valid, right?
Could it be that the server is writing the response but the client already closed the connection? I'll try to reproduce that scenario...
Yes, the error from the compiler should be handled somehow to avoid that nasty output. But it's totally unrelated despite the message similarity.
Regarding your error with the HTTP server, I can see in that backtrace that the exception is an Errno. That doesn't exist anymore in 0.34.
I just tried with this simple server, doing a curl and immediately cancelling the request. You can see that broken pipe errors are raised, but they are handled by the HTTP server.
require "http/server"

server = HTTP::Server.new do |context|
  context.response.content_type = "text/plain"
  10000000.times do |i|
    context.response.print "Hello world! #{i}\n"
  end
rescue e
  puts e.inspect
  raise e
end

address = server.bind_tcp 8080
puts "Listening on http://#{address}"
server.listen
Oh yes, you're right. I have several different errors in here with the same error message. This is the right one:
IO::Error Error writing to socket: Broken pipe
/usr/share/crystal/src/io/evented.cr:156:7 in 'unbuffered_write' <file>
/usr/share/crystal/src/http/server/response.cr:216:11 in 'unbuffered_write' <file>
src/handlers/route_handler.cr:19:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next' <file>
/usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next' <file>
lib/lucky/src/lucky/error_handler.cr:15:5 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:28:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:28:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:28:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:28:7 in 'call' <file>
/usr/share/crystal/src/http/server/handler.cr:28:7 in 'call' <file>
/usr/share/crystal/src/http/server/request_processor.cr:48:11 in 'process' <file>
/usr/share/crystal/src/fiber.cr:255:3 in 'run' <file>
??? <file>
@waj are you able to see the errors? See #9067
@asterite you're right, I thought HTTP::Server was already silently ignoring client disconnections but it wasn't logging anything at all. Now with the new logger we could test that more effectively.
I appreciate you guys helping me through all this ❤️
See https://pmhahn.github.io/SIGPIPE/
We want to handle EPIPE where the write(2) call failed, without having to deal with a global SIGPIPE that interrupts the running thread and then having to detect which file descriptor failed, and so on. We also must ignore or handle SIGPIPE, otherwise the program will abort with "Program received and didn't handle signal PIPE", for example. See:
Signal::PIPE.reset
loop { print "foo\n" }
Nobody wants an HTTP server to exit because a client closed a connection. But we're so used to simple command-line tools that merely trap SIGPIPE to _exit_ the program ASAP that we assume a failed read/write on STDIN or STDOUT isn't important... but it's still an error and it must be handled.
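The default kill-on-SIGPIPE behavior described above can be seen with any standard Unix tools; nothing here is Crystal-specific (the inner shell is bash so that PIPESTATUS is available):

```shell
# `yes` floods stdout; `head` reads one line and closes the pipe.
# On its next write, `yes` receives SIGPIPE (signal 13) and dies,
# which the shell reports as exit status 128 + 13 = 141.
bash -c 'yes | head -n 1; echo "yes exited with ${PIPESTATUS[0]}"'
```

This is exactly the behavior most CLI users expect, and exactly what a server must not do.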
Now, the exit on SIGPIPE behavior can be achieved in Crystal with the following:
LibC.signal(LibC::SIGPIPE, ->(signal : Int32) { LibC._exit(0) })
loop { print "foo\n" }
We _can't_ use Signal::PIPE.trap { exit } because we delay signal handlers so that the Crystal handler runs on a safe Crystal stack; yet we'd still have installed a handler for SIGPIPE, which means the signal counts as handled, and write(2) will return EPIPE _before_ the signal handler is given a chance to run and exit the program. We thus rely on raw C calls to exit immediately (calling exit would be unsafe).
@ysbaddaden would a Signal::PIPE.trap_and_exit work in that case? For simple CLI tools it might be useful. Of course, that would exit the process ungracefully, with no chance to clean up resources, so I'm not sure it's good practice anyway. Also, it would be desirable to check which fd was actually the one that failed and react only to STDOUT or any other specified fd.
Maybe something on IO would fit better? Like IO.exit_on_broken_pipe. It's IO related after all; Signal is just a means to convey the broken pipe. Or maybe STDIN#read, STDOUT#write and STDERR#write could check for EPIPE and exit or raise.
Maybe we should have a special exception for EPIPE from STDIN/STDOUT/STDERR which is caught in main? User code could still catch it earlier to override the default behavior of just exiting. Or maybe main could catch this generally and try to identify it as coming from STDIN/STDOUT/STDERR; I guess the exception would need to carry the fd for that.
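To make that proposal concrete, user code under such a scheme might look like the following sketch. Everything here is hypothetical: neither an IO::BrokenPipe exception class nor an fd accessor on it exists in Crystal's stdlib; the names are made up for illustration only.

```crystal
# HYPOTHETICAL sketch of the proposal above; IO::BrokenPipe and
# e.fd do not exist in Crystal's standard library.
begin
  1.upto(1_000_000) { |i| puts i }
rescue e : IO::BrokenPipe # hypothetical exception class
  # main would rescue this by default and exit silently; user code
  # can rescue it earlier to override that behavior, e.g.:
  STDERR.puts "stdout reader went away (fd #{e.fd})" if e.fd == STDOUT.fd
  exit 0
end
```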
Is there an easy way to repro this failing (and not failing in 0.33)?
Not really, because it happened in 0.33 too (intermittently); something just changed in 0.34 that made it more apparent. I do see it occasionally locally when running Lucky through overmind, but it's very rare.
On a side note, there was an issue in Lucky with the old logger that seems to be fixed with the new logger. I'm going to deploy today to see if, magically, this issue is also fixed 🤷‍♂️
I had the same issue, which rarely occurred. I don't know how to reproduce it, but I am pretty sure it happens from version 0.34 and later. Maybe it's related to the changes in src/io.cr and src/io/*.cr.
Exception: Error while writing data to the client (HTTP::Server::ClientError)
from ../../../../../usr/share/crystal/src/http/server/response.cr:215:9 in 'unbuffered_write'
from ../../../../../usr/share/crystal/src/io/buffered.cr:217:5 in 'flush'
from ../../../../../usr/share/crystal/src/io/buffered.cr:156:7 in 'write'
from ../../../../../usr/share/crystal/src/http/server/response.cr:86:7 in 'write'
from ../../../../../usr/share/crystal/src/io.cr:1120:7 in 'copy'
from lib/kemal/src/kemal/helpers/helpers.cr:155:7 in 'send_file'
from lib/kemal/src/kemal/static_file_handler.cr:66:9 in 'call'
from ../../../../../usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next'
from lib/kemal/src/kemal/exception_handler.cr:8:7 in 'call'
from ../../../../../usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next'
from lib/kemal/src/kemal/log_handler.cr:8:37 in 'call'
from ../../../../../usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next'
from lib/kemal/src/kemal/init_handler.cr:12:7 in 'call'
from ../../../../../usr/share/crystal/src/http/server/request_processor.cr:50:11 in 'process'
from ../../../../../usr/share/crystal/src/http/server.cr:513:5 in 'handle_client'
from ../../../../../usr/share/crystal/src/http/server.cr:468:13 in '->'
from ../../../../../usr/share/crystal/src/primitives.cr:255:3 in 'run'
from ../../../../../usr/share/crystal/src/fiber.cr:92:34 in '->'
from ???
Caused by: Error writing to socket: Broken pipe (IO::Error)
from ../../../../../usr/share/crystal/src/io/evented.cr:82:13 in 'unbuffered_write'
from ../../../../../usr/share/crystal/src/io/buffered.cr:152:14 in 'write'
from ../../../../../usr/share/crystal/src/http/server/response.cr:211:11 in 'unbuffered_write'
from ../../../../../usr/share/crystal/src/io/buffered.cr:217:5 in 'flush'
from ../../../../../usr/share/crystal/src/io/buffered.cr:156:7 in 'write'
from ../../../../../usr/share/crystal/src/http/server/response.cr:86:7 in 'write'
from ../../../../../usr/share/crystal/src/io.cr:1120:7 in 'copy'
from lib/kemal/src/kemal/helpers/helpers.cr:155:7 in 'send_file'
from lib/kemal/src/kemal/static_file_handler.cr:66:9 in 'call'
from ../../../../../usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next'
from lib/kemal/src/kemal/exception_handler.cr:8:7 in 'call'
from ../../../../../usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next'
from lib/kemal/src/kemal/log_handler.cr:8:37 in 'call'
from ../../../../../usr/share/crystal/src/http/server/handler.cr:28:7 in 'call_next'
from lib/kemal/src/kemal/init_handler.cr:12:7 in 'call'
from ../../../../../usr/share/crystal/src/http/server/request_processor.cr:50:11 in 'process'
from ../../../../../usr/share/crystal/src/http/server.cr:513:5 in 'handle_client'
from ../../../../../usr/share/crystal/src/http/server.cr:468:13 in '->'
from ../../../../../usr/share/crystal/src/primitives.cr:255:3 in 'run'
from ../../../../../usr/share/crystal/src/fiber.cr:92:34 in '->'
from ???
Yeah, I still see the issue in 0.35.1. It's just never consistent enough to really understand what part is causing it. My guess is maybe something with a connection pool. I see you're using Kemal, @Dan-Do: are you doing any database stuff? Or just straight web requests?
Or just straight web requests?
I am using ArangoDB, but I don't think that's the cause. This issue only occurred with static resources.
I wonder if other functions were not adapted to the following pull request:
https://github.com/crystal-lang/crystal/pull/8707
Not sure if that'll be any help, but I also see this often on close's flush on our Invidious instance:
Oct 2 08:07:33 invidious invidious[11745]: Unhandled exception in spawn: Error while flushing data to the client (HTTP::Server::ClientError)
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/http/server/response.cr:258:9 in 'unbuffered_flush'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/io/buffered.cr:219:5 in 'close'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/http/server/request_processor.cr:62:11 in 'process'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/http/server.cr:498:5 in '->'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/primitives.cr:255:3 in 'run'
Oct 2 08:07:33 invidious invidious[11745]: from ???
Oct 2 08:07:33 invidious invidious[11745]: Caused by: Error writing to socket: Broken pipe (IO::Error)
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/io/evented.cr:156:7 in 'unbuffered_write'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/io/buffered.cr:219:5 in 'unbuffered_flush'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/io/buffered.cr:219:5 in 'close'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/http/server/request_processor.cr:62:11 in 'process'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/http/server.cr:498:5 in '->'
Oct 2 08:07:33 invidious invidious[11745]: from ../../../usr/share/crystal/src/primitives.cr:255:3 in 'run'
Oct 2 08:07:33 invidious invidious[11745]: from ???
Then, if it happens enough, we somehow run out of fds and also get:
Oct 2 17:00:01 invidious invidious[28308]: 2020-10-02T15:00:01.766671Z ERROR - http.server: Error while connecting a new socket
Oct 2 17:00:01 invidious invidious[28308]: accept: Too many open files (Socket::Error)
Oct 2 17:00:01 invidious invidious[28308]: from ../../../usr/share/crystal/src/socket.cr:276:17 in 'accept?'
Oct 2 17:00:01 invidious invidious[28308]: from ../../../usr/share/crystal/src/http/server.cr:459:13 in '->'
Oct 2 17:00:01 invidious invidious[28308]: from ../../../usr/share/crystal/src/primitives.cr:255:3 in 'run'
Oct 2 17:00:01 invidious invidious[28308]: from ???
As a workaround we restart Invidious from a crontab. It's a bit weird: it normally doesn't happen, but when it does happen we get a lot of them. E.g. with a restart every 10 minutes we get:
root@invidious:/var/log# zcat syslog.1.gz | grep -E '^Oct 2 17' | sed -ne 's/.*\(Broken\|Too many\)/\1/p' | uniq -c
1609 Too many open files (Socket::Error)
1 Oct 2 17:00:01 invidious systemd[1]: Started Invidious (An alternative YouTube front-end).
1715 Broken pipe (IO::Error)
36404 Too many open files (Socket::Error)
1 Oct 2 17:10:01 invidious systemd[1]: Started Invidious (An alternative YouTube front-end).
1698 Broken pipe (IO::Error)
136532 Too many open files (Socket::Error)
1 Oct 2 17:20:02 invidious systemd[1]: Started Invidious (An alternative YouTube front-end).
893 Broken pipe (IO::Error)
327483 Too many open files (Socket::Error)
1 Oct 2 17:30:01 invidious systemd[1]: Started Invidious (An alternative YouTube front-end).
2010 Broken pipe (IO::Error)
1 Oct 2 17:40:01 invidious systemd[1]: Started Invidious (An alternative YouTube front-end).
1488 Broken pipe (IO::Error)
140047 Too many open files (Socket::Error)
1 Oct 2 17:50:01 invidious systemd[1]: Started Invidious (An alternative YouTube front-end).
1688 Broken pipe (IO::Error)
The numbers make me think it's an attack of sorts (I, err, don't think we'd normally get 300k pages opened in 10 minutes for our small instance that keeps breaking down; but it got worse after invidio.us stopped). I've just enabled access logs, so I'll be able to confirm that soon... It's just too weird that we get almost none for a while, then a burst of them, and restarting doesn't make it stop.
I've tried reproducing, and I can semi-reliably get the broken pipe to happen if I load a page and press escape before it's done loading, so nginx closes the socket while Invidious is still trying to flush data, and we get pretty much what I'd expect. I'd say these messages could just be silenced, as they likely used to be; that part is an artefact of the logging changes.
OTOH, the too-many-open-files error makes me think we're leaking a number of fds when this happens, and I'd like to understand why. In HTTP::Server the 'close' method just sets a bool, so I assume this is supposed to be closed by garbage collection once the server is no longer referenced?
I'm not familiar with the internals, but I don't see anything done much differently in the error case vs. a normal close. It could be somewhere else, though; I'll try to reproduce more reliably once I get the access pattern from the access logs (so in a few days...).
Thanks!
(The open files issue is likely unrelated: https://github.com/iv-org/invidious/issues/1375#issuecomment-697563637 -- so as far as I'm concerned these are "normal" errors we can probably just ignore, although I'd still like to understand why we get bursts.)
EDIT: never mind, that doesn't apply to us.
btw, after a recent Crystal update, I also started getting many 'Too many open files' errors when running specs. I just increased the local ulimit for that, but maybe there is some bug with files not being closed quickly (there seems to be no global leak, because production with a big ulimit is also fine).
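For reference, the ulimit workaround looks like this (4096 is just an example value I picked; a soft limit can only be raised up to the hard limit):

```shell
# Show the current soft limit on open file descriptors
ulimit -Sn
# Show the hard ceiling the soft limit may be raised to
ulimit -Hn
# Raise the soft limit for this shell session only (example value):
# ulimit -Sn 4096
```

Note this only affects the current shell session and its children; a permanent change goes through /etc/security/limits.conf or the service manager's config.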