As discussed on Reddit, it seems that reverse proxying some applications (in this case Plex) can lead to extensive error-log spam. Requesting that this log level be changed from Error to Debug.
I'd submit a PR, but I'm not sure what to change other than `h.logger.Error` to `h.logger.Debug`, and I'm sure there's a lot more to it than that.
Actually that should be all there is to it (Error -> Debug) :smile:
The `zap.Error(err)` part doesn't need to be altered or anything?
No, that just encodes the actual `err` into something zap (the logging lib we use) will know how to serialize.
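For illustration, here's a minimal standalone sketch (not the actual Caddy source) showing that only the level method changes and `zap.Error(err)` stays as-is:

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	// zap.Error(err) just attaches err under the "error" key in the
	// structured output; it's independent of the log level, so only
	// the method name (Error vs Debug) needs to change.
	logger, _ := zap.NewProduction() // emits Info and above by default
	defer logger.Sync()

	err := errors.New("write tcp: broken pipe")

	// Current behavior: surfaced at error level.
	logger.Error("aborting with incomplete response", zap.Error(err))

	// Proposed: demoted to debug level; suppressed unless debug
	// logging is enabled.
	logger.Debug("aborting with incomplete response", zap.Error(err))
}
```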
Well I can submit a super ugly, GitHub-created PR if you're ok with that. 😄 (I've found some devs really hate them.)
Sure, if you do that, you can then grab the CI artifact generated from the PR to test it out and confirm that it works as you expect.
The _only_ thing I would mention here is that we don't _know_ whether it's a server error or not; I think usually they are client errors (or errors caused by context cancellation, for example an admin-invoked config reload that exceeded the grace period), which probably aren't "error-worthy". If there are any situations that could be genuine server errors, though, then switching this to debug level means we'll never see those errors (on the server) without enabling debug logging.
I think it's probably a safe change, but just be aware of the potential tradeoff that could be irritating down the road.
In that case: before submitting a messy, GitHub-produced PR, I'll monitor the logs for a few more days and see whether the same log spam as before occurs consistently during remote streams. If not, I'll close out this issue without submitting a PR.
When you say "spam" how many are we talking about, and in what timeframe? Any reason you can't just filter out those logs? Caddy's logs are structured which makes this very easy to automate.
Also, note that clients have control over whether errors happen here; for example if they close the connection before reading the response, I believe that is one case where you get a log/error. (This is one reason why having them as debug level would probably be preferable; but again, I'm not sure whether that's the exclusive cause.)
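If you want to see that mechanism in isolation, here's a small self-contained sketch using plain net/http (not Caddy): the server keeps streaming, the client hangs up mid-response, and the server's next write fails with the same kind of error as in those log messages.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// Server streams an endless response, flushing so writes hit the socket.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		chunk := make([]byte, 32*1024)
		for {
			if _, err := w.Write(chunk); err != nil {
				// This is the write error a proxy would log,
				// e.g. "broken pipe" or "connection reset by peer".
				fmt.Println("server write failed:", err)
				return
			}
			w.(http.Flusher).Flush()
		}
	}))
	defer srv.Close()

	// Client reads a little of the response, then hangs up mid-stream.
	conn, err := net.Dial("tcp", srv.Listener.Addr().String())
	if err != nil {
		panic(err)
	}
	fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: example\r\n\r\n")
	io.ReadFull(conn, make([]byte, 4096))
	conn.Close()

	time.Sleep(time.Second) // give the handler time to hit the failed write
}
```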
When you say "spam" how many are we talking about, and in what timeframe?
It was consistently sending ~5-10 error messages per second.
> Any reason you can't just filter out those logs? Caddy's logs are structured which makes this very easy to automate.
I haven't looked into this at all, honestly. Is there a page other than this I should be looking at?
> It was consistently sending ~5-10 error messages per second.
For how long? How many clients, etc?
> I haven't looked into this at all, honestly. Is there a page other than this I should be looking at?
Eh, I'd probably just read this: https://caddyserver.com/docs/logging - and then use whatever log tooling you prefer. Log emission is totally separate from log consumption.
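If you don't have any log tooling handy, even a tiny pass-through filter will do. Here's a sketch in Go (the `logger` and `msg` values below are the reverse proxy's "aborting with incomplete response" entries; the field names match Caddy's JSON log output):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Reads Caddy's JSON logs on stdin and drops the noisy reverse-proxy
// abort messages, passing everything else through unchanged.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long log lines
	for sc.Scan() {
		line := sc.Bytes()
		var entry struct {
			Logger string `json:"logger"`
			Msg    string `json:"msg"`
		}
		// Malformed lines are passed through untouched.
		if err := json.Unmarshal(line, &entry); err == nil &&
			entry.Logger == "http.handlers.reverse_proxy" &&
			entry.Msg == "aborting with incomplete response" {
			continue
		}
		fmt.Println(string(line))
	}
}
```

Then something like `caddy run 2>&1 | go run filter.go` (or the equivalent `jq` select) keeps the rest of the stream intact.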
Per https://caddy.community/t/context-cancelled-when-webpack-hmr-sends-updates-fix/9850/ my particular case is basically solved. I'll leave it up to y'all as to whether this issue/change is still worth anything.
No need to re-open this, but just as a general note for future readers: while the linked changes did not completely remove those entries from my logs, they significantly reduced the quantity.
Edit: and I'll occasionally get bursts of stuff like this:
{"level":"error","ts":1600709913.9482598,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->108.162.237.33:41748: write: connection reset by peer"}
{"level":"error","ts":1600709915.3028126,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.186.239:18096: write: broken pipe"}
{"level":"error","ts":1600709915.3081138,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.186.239:22760: write: broken pipe"}
{"level":"error","ts":1600709928.825919,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23346: write: connection reset by peer"}
{"level":"error","ts":1600709929.1461644,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23510: write: connection reset by peer"}
{"level":"error","ts":1600709929.3285327,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"context canceled"}
{"level":"error","ts":1600709929.389466,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23484: write: broken pipe"}
{"level":"error","ts":1600709929.6223059,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23476: write: broken pipe"}
{"level":"error","ts":1600709929.792621,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23722: write: connection reset by peer"}
{"level":"error","ts":1600709929.9248655,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"context canceled"}
{"level":"error","ts":1600709929.9959579,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23466: write: connection reset by peer"}
{"level":"error","ts":1600709930.219334,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23486: write: connection reset by peer"}
{"level":"error","ts":1600709930.5161674,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:42956: write: broken pipe"}
{"level":"error","ts":1600709930.698858,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23440: write: connection reset by peer"}
{"level":"error","ts":1600709930.8915508,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23246: write: connection reset by peer"}
{"level":"error","ts":1600709931.1235616,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23508: write: broken pipe"}
{"level":"error","ts":1600709931.3563228,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:23566: write: connection reset by peer"}
{"level":"error","ts":1600709931.5276394,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"write tcp 172.28.0.2:443->162.158.187.244:20116: write: broken pipe"}
But that appears to be Cloudflare (the CDN I'm behind), probably probing services to make sure they're up or something similar. Not a big deal.
Cool, thanks for following up. And yep, those logs are not a problem: it just means the client connected, made a request, and was like "Actually I don't need this response" and then hung up. (So... I dunno, maybe they could be debug level. Ughhhh I'm so on the fence about this. It's probably not even that big of a deal.)