What version of Caddy are you running (caddy -version)? 0.9.2
I use Caddy with php-fpm, which is started via the startup command. I am using the Lumen framework. My Caddyfile:
:80 {
root /app/public
fastcgi / 127.0.0.1:9000 php
rewrite {
to {path} {path}/ /index.php?{query}
}
cors
gzip
log stdout
prometheus 0.0.0.0:9180
startup /usr/sbin/php-fpm7.0 -D -O
}
Caddy is dockerized.
I send a GET request with curl, and it only works half of the time: one request succeeds, the next one fails, the next succeeds, and so on.
curl -v 'localhost/_health'
When it works, I get a JSON response with a 200 status code, and this entry in the Caddy log:
172.19.0.1 - [23/Sep/2016:11:42:19 +0200] "GET /index.php HTTP/1.1" 200 188
When it fails, I get a 502 error and these entries in the Caddy log:
172.19.0.1 - [23/Sep/2016:11:42:21 +0200] "GET /index.php HTTP/1.1" 502 16
23/Sep/2016:11:42:21 +0200 [ERROR 502 /index.php] fcgi: invalid header version
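The alternating behaviour is easy to see with a quick loop (just a sketch; /_health is the same endpoint as above):
# Hit the health endpoint ten times and print only the HTTP status codes;
# roughly every other request comes back 502 instead of 200.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w '%{http_code}\n' 'localhost/_health'
done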
I can't share my source code.
I'm having the same problem with version 0.9.2. Every second request fails with a 502 and this log entry: 23/Sep/2016:13:04:12 +0200 [ERROR 502 /] fcgi: invalid header version
The problem can be reproduced with a fresh Laravel installation if you have php7.0-fpm and Composer installed.
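On a Debian/Ubuntu-style system the prerequisites can presumably be installed along these lines (package names are an assumption and vary by distribution):
# Install PHP-FPM 7.0 and Composer (Debian/Ubuntu package names; adjust for your distro)
sudo apt-get install php7.0-fpm composer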
cd /tmp
composer create-project --prefer-dist laravel/laravel test
Use the following Caddyfile:
localhost:80 {
tls off
root /tmp/test/public
errors visible
fastcgi / /var/run/php/php7.0-fpm.sock php {
index index.php
}
rewrite {
to {path} {path}/ /index.php?{query}
}
}
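Before starting Caddy, it may help to confirm that php7.0-fpm is actually listening on the socket path referenced above (both the socket path and the service name are distribution-dependent):
# Check that the FastCGI socket from the Caddyfile exists and that php-fpm is running
ls -l /var/run/php/php7.0-fpm.sock
systemctl status php7.0-fpm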
Then start Caddy:
➜ /tmp/test caddy -version
Caddy 0.9.2
➜ /tmp/test caddy -conf /tmp/test/Caddyfile
Then:
➜ ~ curl localhost
<!DOCTYPE html>...
➜ ~ curl localhost
23/Sep/2016:13:10:59 +0200 [ERROR 502 /] fcgi: invalid header version
➜ ~ curl localhost
<!DOCTYPE html>...
➜ ~ curl localhost
23/Sep/2016:13:10:59 +0200 [ERROR 502 /] fcgi: invalid header version
The cause is probably related to this https://github.com/mholt/caddy/issues/1123
I don't see any TCP connection-related logs.
FastCGI is currently reusing connections (since 0.9.2), so this can be due to writing the headers on an existing connection.
Thanks for the explanation @abiosoft
Just started using Caddy and found it very odd that it would drop every second request! Is there a workaround, or should I just roll back to 0.9.1 for now?
@timothyallan The fix for this was just merged in #1129 -- so if you can (or want to) build Caddy from source, you can begin using it immediately. But I plan to roll out a patch release within the week.
Thank you @abiosoft!
@timothyallan you can roll back to 0.9.1 or build from source.
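For reference, building from source is the usual GOPATH workflow; a sketch, assuming Go is installed:
# Fetch and build the current Caddy master into $GOPATH/bin (pre-modules GOPATH workflow)
go get -u github.com/mholt/caddy/caddy
# Run the freshly built binary against the test Caddyfile
$GOPATH/bin/caddy -conf /tmp/test/Caddyfile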
I have built the master and everything works fine. Thanks @abiosoft @Echsecutor and @mholt !
This bug is indeed due to reusing connections. More precisely, because of my quick-and-dirty serialization instead of proper FastCGI demuxing, it can happen that an empty header (or body) is received, and an empty header is invalid.
The current master does not use persistent connections any more, hence the bug is gone, but so is the feature. ;) I guess 'we are working on it'. ;)
@Echsecutor But persistent connections can be turned on with the pool parameter now, right? Does the bug still surface when pool 1 is used?
The feature is not gone; rather, it is now opt-in. So yes, using pool n where n > 0 enables it.
And no, the bug doesn't surface anymore, whether or not persistent connections are used.
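For reference, opting back in to persistent connections should then look something like this in the fastcgi block (a sketch based on this discussion; the pool size of 5 is arbitrary):
fastcgi / /var/run/php/php7.0-fpm.sock php {
index index.php
pool 5
}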
Yep, I see that this is how it's intended. I am just saying that, even with pool n > 0, the current master does not actually use persistent connections. Hence @commarla's comment (that things are working fine now) reflects the fact that persistent connections are currently not in use. Whether they work as intended still needs testing once a minor problem in the current master is fixed: https://github.com/mholt/caddy/pull/1129#issuecomment-249391582
Ah, I meant this comment: https://github.com/mholt/caddy/pull/1129#pullrequestreview-1443068. Anyway, in PR https://github.com/mholt/caddy/pull/1134 pooling is working and I cannot reproduce this bug any more (it was also showing up in my setup). At least demuxing errors for the body should now also be caught by the tests.