Prometheus: Error: http: superfluous response.WriteHeader call

Created on 15 Oct 2019 · 50 comments · Source: prometheus/prometheus

Bug Report

What did you see?

Multiple http: superfluous response.WriteHeader call errors in the logs during normal operation.

level=info ts=2019-10-14T11:01:57.535Z caller=head.go:666 component=tsdb msg="WAL checkpoint complete" first=90900 last=90995 duration=58.045866315s
level=error ts=2019-10-14T11:02:03.505Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-10-14T11:02:06.599Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-10-14T11:02:07.848Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
[...]

Environment

  • System information:

    busybox Docker image running the Prometheus binary.

  • Prometheus version:

    level=info ts=2019-10-09T12:24:15.591Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"

Labels: hacktoberfest, help wanted, kind/bug, priority/P2

All 50 comments

This is strange. Are you able to pinpoint a particular type of HTTP request that would trigger it?

I've noticed the same error on many of my own servers. I'll see if I can narrow down the trigger as well.

This could be caused by a bug in prometheus/client_golang addressed here: https://github.com/prometheus/client_golang/pull/634
It is fixed in v1.2.0 (released yesterday), but Prometheus v2.13.0 only uses v1.1.0.

This is a plausible chain of events:

If an endpoint is instrumented with the promhttp middleware, the wrapping response writer implements Flusher, and Flush is actually called, then the underlying response writer will call WriteHeader, but the delegator implemented by promhttp does not track that. The delegator will later call WriteHeader on its own when another call requires it. (The fix mentioned above adds that tracking to the delegated Flush call. The reason the delegator calls WriteHeader itself is that the instrumentation needs the status code.)
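
To make that concrete, here is a minimal sketch of such a wrapping response writer (illustrative names only, not the actual promhttp code), with the buggy Flush and, commented out, the fixed variant that the client_golang v1.2.0 change conceptually applies:

    package sketch

    import "net/http"

    // delegator wraps a ResponseWriter and records whether headers were written,
    // which the instrumentation needs in order to report a status code.
    type delegator struct {
        http.ResponseWriter
        wroteHeader bool
    }

    func (d *delegator) WriteHeader(code int) {
        d.wroteHeader = true
        // If the underlying writer already sent headers (e.g. via Flush),
        // this second call produces the "superfluous WriteHeader" warning.
        d.ResponseWriter.WriteHeader(code)
    }

    func (d *delegator) Write(b []byte) (int, error) {
        if !d.wroteHeader {
            d.WriteHeader(http.StatusOK)
        }
        return d.ResponseWriter.Write(b)
    }

    // Buggy Flush: the underlying writer implicitly sends a 200 header,
    // but wroteHeader stays false, so a later Write triggers WriteHeader again.
    func (d *delegator) Flush() {
        d.ResponseWriter.(http.Flusher).Flush()
    }

    // Fixed Flush (conceptually what the v1.2.0 change does):
    // func (d *delegator) Flush() {
    //     if !d.wroteHeader {
    //         d.WriteHeader(http.StatusOK)
    //     }
    //     d.ResponseWriter.(http.Flusher).Flush()
    // }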

This issue should be closed now as it's fixed in v1.2.0

We weren't quite sure if client_golang 1.2.0 was actually addressing this particular issue.
Let's optimistically close this and re-open if people report it to be still ongoing. :grin:

Hi,
I'm getting these messages in version 2.13.1

level=error ts=2019-11-20T09:35:55.341Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-11-20T09:35:55.341Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-11-20T09:35:55.341Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-11-20T09:39:49.328Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

prometheus, version 2.13.1 (branch: HEAD, revision: 6f92ce56053866194ae5937012c1bec40f1dd1d9) build user: root@88e419aa1676 build date: 20191017-13:15:01 go version: go1.13.1

Thanks for the stack trace. This might now be in a different codepath, but the issue title still makes sense. Let's re-use it.

@cameronkerrnz is the version number you've given (1.2.1) for Prometheus?

I'm pretty sure he is referring to client_golang v1.2.1.

For context: https://github.com/prometheus/client_golang/releases/tag/v1.2.0 contained a fix for one possible way of superfluous response.WriteHeader calls.

However, what is reported here now must happen in a different way. prometheus/client_golang might or might not be involved.

I noticed the same error, and Prometheus exited because of it.

I had the same problem.

level=error ts=2020-01-08T09:35:48.028Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T09:37:19.845Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T09:54:55.364Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:05:59.829Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:05:59.831Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:19:29.069Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:22:25.500Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

Prometheus version info

level=info ts=2020-01-08T12:08:54.625Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.14.0, branch=HEAD, revision=edeb7a44cbf745f1d8be4ea6f215e79e651bfe19)"
level=info ts=2020-01-08T12:08:54.625Z caller=main.go:333 build_context="(go=go1.13.4, user=root@df2327081015, date=20191111-14:27:12)"

level=info ts=2020-01-08T12:08:54.625Z caller=main.go:334 host_details="(Linux 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 BXPL-K8S-80-18 (none))"

do you know how to reproduce it?

I am seeing these errors when running a combination of Prometheus v2.13.1 and Thanos v0.8.1. Using a query of max(node_cpu_seconds_total) I can recreate it pretty quickly, and no data is returned.

{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:06.795Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(
statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:06.880Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:09.170Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(
statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:09.184Z"}

@chjohnst, I'm curious: when this happens, are you seeing high I/O wait or CPU/performance issues on the instance running Prometheus and the Thanos sidecar?

level=error ts=2020-02-05T21:25:14.544Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:24" msg=)
goroutine 89434 [running]:
runtime/debug.Stack(0xc000566000, 0x4d, 0x0)
        /home/roidelapluie/godist/go/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
        /home/roidelapluie/godist/go/src/runtime/debug/stack.go:16 +0x22
github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader(0xc00e63ee70, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/opentracing-contrib/go-stdlib/nethttp/status-code-tracker.go:25 +0x95
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader(0xc00e63ef30, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:58 +0x45
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader(0xc00e63ef60, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:58 +0x45
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader(0xc00e63f110, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:58 +0x45
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).Write(0xc00e63f110, 0xc01190d000, 0x1ad, 0x1000, 0x0, 0x0, 0x0)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:68 +0x9e
bufio.(*Writer).Flush(0xc00dace200, 0xc00903afd8, 0xc00903af28)
        /home/roidelapluie/godist/go/src/bufio/bufio.go:593 +0x75
github.com/prometheus/common/expfmt.MetricFamilyToText.func1(0xc00dace200, 0xc00903b0b0)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/text_create.go:85 +0x2f
github.com/prometheus/common/expfmt.MetricFamilyToText(0x7f2e7d96d970, 0xc00ffcfe00, 0xc00c4a19f0, 0x1ad, 0x0, 0x0)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/text_create.go:269 +0x11c0
github.com/prometheus/common/expfmt.NewEncoder.func4(0xc00c4a19f0, 0x1, 0xc005619e20)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/encode.go:83 +0x3d
github.com/prometheus/common/expfmt.encoder.Encode(0xc00b3992a0, 0xc00c4a19f0, 0x25722c8, 0x28)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/encode.go:36 +0x30

I reproduce it by interrupting a /metrics call in the middle with a TCP reset.
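
If it helps anyone trying to reproduce this, here is a rough sketch of that kind of interruption (the address, port, and request are assumptions for a default local Prometheus; adjust as needed). Closing a connection with SO_LINGER set to 0 aborts the transfer with a RST while the server is still writing the metrics response:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Assumes a local Prometheus listening on the default port 9090.
        conn, err := net.Dial("tcp", "localhost:9090")
        if err != nil {
            panic(err)
        }
        fmt.Fprintf(conn, "GET /metrics HTTP/1.1\r\nHost: localhost\r\n\r\n")

        // Read only a small part of the (usually large) metrics response.
        buf := make([]byte, 1024)
        conn.Read(buf)

        // SetLinger(0) makes Close send a TCP RST instead of a normal FIN,
        // cutting the connection while the handler is still writing.
        if tc, ok := conn.(*net.TCPConn); ok {
            tc.SetLinger(0)
        }
        conn.Close()
    }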

I may have found the issue, though I still need to confirm: when doing a really large query, my Thanos backend instance hits an OOM condition, which I think is causing the error.

https://github.com/prometheus/client_golang/pull/724 might help with this, even if this is a 3rd party bug and not in any Prometheus code.

On the other hand, masking a 3rd party bug might not be desired. See discussion on https://github.com/prometheus/client_golang/pull/724 , and feel free to chime in.

Sooo, see https://github.com/prometheus/client_golang/pull/726 for more context and results. prometheus/client_golang v1.5.1 has been released now, and once we upgrade prometheus/prometheus to use it, this bug should be fixed. At least that's the theory.

@roidelapluie what do you think? We could still do that update for 2.17.0-rc.1.

Yes, let's do this!

The error message still appears after upgrading to 2.17.1, and it seems to occur at a higher rate.

@motyla could you share the exact circumstances (ideally how to reproduce, but every little hint helps) and stack traces?

Experiencing the same issue - bunches of

level=error ts=... caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/prometheus/prometheus/vendor/github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

lines with version 2.17.1.

(I'll try to find out more info...)

@beorn7 Is this now fixed or should it still be investigated?

I assume it's still happening (as nothing was changed in this regard since 2.17.1), but it's unclear in which code path this is triggered.

We fixed one of those bugs in client_golang. There might be more in client_golang, or the bug might be elsewhere. Ideally, somebody encountering these messages would hunt it down while they can reproduce it.
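
If someone can reproduce it, one way to hunt it down (roughly how the stack trace earlier in this thread appears to have been obtained) is to temporarily wrap the response writer in the suspect code path and dump a stack trace on the second WriteHeader call. A sketch, with an illustrative wrapper type rather than the actual vendored code:

    package sketch

    import (
        "net/http"
        "runtime/debug"
    )

    // tracingWriter counts WriteHeader calls and prints a stack trace for the
    // superfluous (second and later) ones, so the offending caller shows up in the logs.
    type tracingWriter struct {
        http.ResponseWriter
        calls int
    }

    func (w *tracingWriter) WriteHeader(code int) {
        w.calls++
        if w.calls > 1 {
            debug.PrintStack()
        }
        w.ResponseWriter.WriteHeader(code)
    }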

Hello, I've also experienced this problem, but a restart of the Thanos sidecar resolved it. However, is there a metric related to this error?

I have also encountered the same issue with my HA/redundant Prometheus deployment, where I have two identically configured Prometheus instances federated via Thanos. Interestingly this error occurred on both Prometheus instances at the same time (within a couple of seconds).

As said, it would be great if someone who is actually seeing this issue and has the needed debugging skills could get to the bottom of it. I haven't seen this issue since one of the ways of triggering it was fixed in client_golang.

We also experience this issue (also using Thanos). The only thing I can add to this conversation is that Thanos Query cannot even find the Prometheus instance around the time we see this error message.
Other than that, we are also still debugging...

We also experience this issue (also using Thanos, and we use federation as well).
Each time this bug triggers, our Prometheus auto-restarts, but our WAL directory is rather big, so Prometheus OOMs on restart.
It loops nonstop in a restart-OOM-restart cycle until we delete the WAL directory.

How do we fix client_golang? Is there a version we can refer to, or that is marked as fixing this issue?

We had even changed client_golang to the latest version, but no luck: the "http: superfluous response.WriteHeader call" messages have not stopped. They keep recurring, and they also reference a couple of OpenTracing libraries each time we encounter the error.

I'm also seeing this log. In my case it happens when Prometheus is under heavy load; it begins to fail liveness checks shortly after.

+1
also seeing this with thanos

+1, also using Prometheus & Thanos

Had this on a mux HandlerFunc when I was using json.NewEncoder(w).Encode(...) and then w.WriteHeader. Fixed by changing the order: first WriteHeader, then the JSON encoding.
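
For illustration, a hypothetical handler (not Prometheus code) showing the same mistake and the fix; the first Write performed by the JSON encoder implicitly sends a 200 header, so a WriteHeader afterwards is superfluous:

    package sketch

    import (
        "encoding/json"
        "net/http"
    )

    func badHandler(w http.ResponseWriter, r *http.Request) {
        json.NewEncoder(w).Encode(map[string]string{"status": "ok"}) // implicitly writes a 200 header
        w.WriteHeader(http.StatusOK)                                 // superfluous: headers already sent
    }

    func goodHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusOK)                                  // status first...
        json.NewEncoder(w).Encode(map[string]string{"status": "ok"}) // ...then the body
    }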

Just curious: has the superfluous-WriteHeader issue been fixed in any of the latest versions of Prometheus?

We tracked down one cause of this today.

Not long after bringing up the prometheus process it would hang and sometimes flood with the error message in this thread.

Running the thanos sidecar seemed to accelerate the time to hanging.

However, when we looked at the targets (before it hung), we noticed several jobs where thousands of targets were timing out.

We commented those jobs out, and that prom cluster is now running fine.

I'll tag this for Hacktoberfest, to get more eyes on it. But please note that this won't be easy to figure out.

I am also facing the same issue,

level=error ts=2020-11-11T08:22:00.001Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-11-11T08:22:00.006Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-11-11T08:22:00.010Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

prometheus version: v2.19.1

Is there any mitigation for this issue? I am also getting this error in v2.22.1.

We are still looking for a proper way to reproduce, if you have one, that would be welcome.

I get the same error

Client: Docker Engine - Community
 Cloud integration: 1.0.2
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 16:58:31 2020
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:07:04 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

I'm using skaffold

I resolved this by going to Docker resources and bumping the memory from the default 1 GB to 5 GB.

Did you change from 1 to 5 directly, or did you increase it step by step until it worked?

@dubcl I tried 6, then moved to 5, but it could possibly work with less as well; Docker crashes while compiling gRPC on 1 GB.
