caddy -version
Caddy 0.10.11 (+5552dcb Wed Mar 07 16:41:48 UTC 2018) (unofficial)
I am using Caddy as a reverse proxy to a webserver I am running (hosted via docker-compose, exposed on localhost:8080). The server is a control panel I am building for my computer science department to easily allocate specified disk space to any developer through a secure endpoint (with a randomly generated secret link as part of the upload/delete links).
I enforce the quotas by wrapping the request body with http.MaxBytesReader, as the following Go code from my upload POST handler illustrates:
availableBytes := int64(10) * 1073741824 // 10 GB in bytes; MaxBytesReader takes an int64
r.Body = http.MaxBytesReader(w, r.Body, availableBytes)
file, header, err := r.FormFile("file")
if err != nil {
    log.Println("Client", publiclink, "attempted to upload a file that is over the allotted disk quota by the app.")
    json.NewEncoder(w).Encode(response{Status: false, Message: "You attempted to upload a file that is over the allotted disk quota by the app. If you need more space, please contact a BigDisk Admin"})
    return
}
defer file.Close()
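To illustrate the behavior I am relying on, here is a standalone sketch (not my actual handler; the 16-byte cap and `readWithCap` helper are just for demonstration) showing that once the body is wrapped in http.MaxBytesReader, any read past the limit fails with an error:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// readWithCap runs body through a handler whose request body is wrapped
// in http.MaxBytesReader with the given limit, and reports the read
// error (nil if the body fit under the cap).
func readWithCap(limit int64, body string) error {
	var readErr error
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, limit)
		_, readErr = io.ReadAll(r.Body)
	})
	h.ServeHTTP(httptest.NewRecorder(),
		httptest.NewRequest("POST", "/upload", strings.NewReader(body)))
	return readErr
}

func main() {
	fmt.Println(readWithCap(16, "tiny"))                                   // <nil>
	fmt.Println(readWithCap(16, "this body is well over sixteen bytes")) // http: request body too large
}
```

In my handler the same error surfaces from r.FormFile, which is why the quota message above is returned from the err != nil branch.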
bigdisk.gilgameshskytrooper.io {
    errors stdout
    log stdout
    gzip
    proxy / localhost:8080 {
        transparent
    }
    limits {
        body /upload 10gb
    }
    timeouts {
        read none
        write none
        header none
        idle none
    }
    tls [email protected] {
        dns namecheap
    }
}
caddy -conf /etc/caddy/Caddyfile
Uploading the file video_host.mov, which is 215.9MB:
curl -F 'file=@video_host.mov' https://bigdisk.gilgameshskytrooper.io/upload/first/9tJerC0sIDGXLIfB1MxmRrnNHKq9dPuMuyOxe0bq
When I upload a file which is relatively small (I have not checked exactly how small it has to be), the file uploads fine.
(IMG_1635.JPG is 1.5MB)
curl -F 'file=@IMG_1635.JPG' https://bigdisk.gilgameshskytrooper.io/upload/first/9tJerC0sIDGXLIfB1MxmRrnNHKq9dPuMuyOxe0bq
returns the correct JSON object containing the success status (bool) and Message.
Hence, I was expecting similar success when uploading a larger file. Instead, the endpoint's response became non-deterministic, even when uploading the same file.
When uploading said large file, I would get one of two errors:
{"Status":false,"Message":"You attempted to upload a file that is over the allotted disk quota by the app. If you need more space, please contact a BigDisk Admin"}
curl: (55) SSLWrite() returned error -36
Despite having logs and errors turned on in my Caddyfile, no helpful errors show up. However, in case I'm missing something, here's the dump of the logs I get from starting Caddy:
caddy -conf /etc/caddy/Caddyfile
Activating privacy features... done.
https://andrewshinsuke.me
https://customer.pause.pizza
https://kitchen.pause.pizza
https://clutchmemes.gilgameshskytrooper.io
https://chat.gilgameshskytrooper.io
https://chatbot.gilgameshskytrooper.io
https://prometheus.gilgameshskytrooper.io
https://adm.gilgameshskytrooper.io
https://hotel.gilgameshskytrooper.io
https://bigdisk.gilgameshskytrooper.io
https://clutchmemes.com
http://andrewshinsuke.me
http://customer.pause.pizza
http://kitchen.pause.pizza
http://clutchmemes.gilgameshskytrooper.io
http://chat.gilgameshskytrooper.io
http://chatbot.gilgameshskytrooper.io
http://prometheus.gilgameshskytrooper.io
http://adm.gilgameshskytrooper.io
http://hotel.gilgameshskytrooper.io
http://bigdisk.gilgameshskytrooper.io
http://clutchmemes.com
130.71.237.91 - - [02/Apr/2018:21:29:39 -0500] "POST /upload/first/9tJerC0sIDGXLIfB1MxmRrnNHKq9dPuMuyOxe0bq HTTP/1.1" 200 164
130.71.237.91 - - [02/Apr/2018:21:30:31 -0500] "POST /upload/first/9tJerC0sIDGXLIfB1MxmRrnNHKq9dPuMuyOxe0bq HTTP/1.1" 200 55
131.220.6.27 - - [02/Apr/2018:21:30:51 -0500] "GET / HTTP/1.1" 200 637
66.249.75.91 - - [02/Apr/2018:21:31:37 -0500] "GET / HTTP/1.1" 200 4287
130.71.237.91 - - [02/Apr/2018:21:34:22 -0500] "POST /upload/first/9tJerC0sIDGXLIfB1MxmRrnNHKq9dPuMuyOxe0bq HTTP/1.1" 200 258
130.71.237.91 - - [02/Apr/2018:21:35:32 -0500] "POST /upload/first/9tJerC0sIDGXLIfB1MxmRrnNHKq9dPuMuyOxe0bq HTTP/1.1" 200 164
git clone https://github.com/gilgameshskytrooper/bigdisk.git
cd bigdisk
cp docker-compose.yaml docker-compose-actual.yaml
Then edit docker-compose-actual.yaml. BIGDISKSUPERADMINEMAIL should be an email you can access. BIGDISKURL will be the URL defined in the Caddyfile (in my case, https://bigdisk.gilgameshskytrooper.io). Replace BIGDISKEMAILUSERNAME and BIGDISKEMAILPASSWORD with the login credentials for a Gmail account with secure apps turned off (if you want to verify where this is used, look at email/email.go). BIGDISKSUPERADMINPASSWORD is whatever password you would like for the superadmin account on the website. Then run ./build (it creates the necessary folders and launches docker-compose up, not in detached mode, so that you can watch the calls as they are logged; I use this instead of a plain docker-compose up so that I can add docker-compose-actual.yaml to my .gitignore and keep private login information from being exposed publicly). Finally, proxy a given URL to localhost:8080 using Caddy (if you didn't modify service.bigdisk.ports, you should be able to use my Caddyfile verbatim).
I suspect this is an issue with Caddy rather than my program, because if I run the program directly on port 8080 and expose that port at the OS level, all of my upload-limiting functionality works just fine.
However, if there is something you can tell from my code that is the problem, please let me know. Thanks!
Thanks, that's a nice-looking issue right there. :+1:
I have only read the issue and haven't gone through your instructions (I don't actually have Docker either -- so maybe somebody else can try it if they feel like helping?) but I want to speculate first anyway. :smile: You say the error happens _sometimes_ when uploading larger -- but not huge -- files, like ~200MB. Does the error ever happen on the first uploads after the server (Caddy, or the backend, or both) is started? My first thought is to wonder if connections are being pooled/kept alive and for some reason that might be accumulating the size counted by the LimitReader? In other words, if you shut down the server and start it between each file upload or request, does the error still happen?
Also, you can replace your whole big timeouts block with just timeouts none. :wink:
My bad. I realized that the problem is the difference between how Go stores upload data inside a Docker container versus on bare metal. It turns out it's trying to write to an area of storage called /tmp/multipart-[multipart-id], which doesn't exist on the system.
It's ALWAYS Docker. :wink:
For future reference, it turns out the only thing I needed to do was define the environment variable TMPDIR and create that directory in the Docker scratch container at creation. But ya, Docker is always a wildcard. Thanks again tho, loving Caddy
FWIW I recommend using alpine as the base image most of the time, because it gets rid of some of those potential issues by actually having a properly configured environment, requiring less hacks to work around. See https://github.com/abiosoft/caddy-docker for the most popular Caddy docker image
Thanks a bunch. Part of my goal for this project is to convince my C.S. Professor and my Boss that teaching Go in our advanced web development classes is the right choice by showing how you can run a Go program in a scratch container due to the lack of external dependencies.
Good luck, those CS departments can be tough cookies. (Source: I work in one right now.)