A Handler has no way of changing the underlying connection Deadline, since it has no access to the net.Conn (except by maintaining a map from RemoteAddr to net.Conn via Server.ConnState, but that's more than anyone should need to do). Moreover, it can't implement a timeout itself because the Close method of the ResponseWriter implementation is not documented to unblock concurrent Writes.
This means that if the server has a WriteTimeout, the connection has a definite lifespan, and streaming is impossible. So servers with any streaming endpoints are forced not to implement timeouts at all on the entire Server.
A possible solution might be to expose the net.Conn in the Context. Another could be to allow interface upgrades to the SetDeadline methods on ResponseWriter. Yet another would be to make (*response).Close unblock (*response).Write.
Given that we store the *http.Server in the request context, making net.Conn available in the context via (e.g.) ConnContextKey could be an option. This could be opt-in via a field on the http.Server, as stuffing the request context with things by default is not ideal.
I think this can be done by providing a custom net.Listener to (*http.Server).Serve: embed a *net.TCPListener and override the Accept method so it returns a custom net.Conn. The custom net.Conn embeds a *net.TCPConn and overrides the Write method.
The overridden Write method could reset the write deadline on every write, or use an atomic counter to reset the write deadline after some number of writes or bytes. But for truly on-demand write deadline resetting, one still needs some way to do that from the higher-level handler side.
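For illustration, here's a minimal sketch of that wrapping approach (the type names and the fixed 30-second window are made up; this is not an existing net/http API):

```go
type deadlineListener struct {
	*net.TCPListener
	timeout time.Duration
}

func (l deadlineListener) Accept() (net.Conn, error) {
	c, err := l.TCPListener.AcceptTCP()
	if err != nil {
		return nil, err
	}
	return &deadlineConn{TCPConn: c, timeout: l.timeout}, nil
}

type deadlineConn struct {
	*net.TCPConn
	timeout time.Duration
}

// Write pushes the write deadline forward before every write, so the
// connection only times out when a single Write stalls for `timeout`.
func (c *deadlineConn) Write(p []byte) (int, error) {
	if err := c.SetWriteDeadline(time.Now().Add(c.timeout)); err != nil {
		return 0, err
	}
	return c.TCPConn.Write(p)
}
```

It would then be passed to the server with something like srv.Serve(deadlineListener{TCPListener: ln.(*net.TCPListener), timeout: 30 * time.Second}).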
Since an HTTP/2 connection can multiplex, it would be helpful to have a set of timeouts for individual streams. When a stream hangs on the client side, we could use those timeouts to release the resources associated with that stream; this is not possible by setting deadlines on the lower-level underlying connection.
@FiloSottile, we won't be exposing net.Conn to handlers, or let users explicitly set deadlines on conns. All public APIs need to consider both HTTP/1 and HTTP/2.
You propose many solutions, but I'd like to get a clear statement of the problem first. Even the title of this bug seems like a description of a lack of solution, rather than a problem that's not solvable.
I agree that the WriteTimeout is ill-specified. See also my comment about ReadTimeout here: https://github.com/golang/go/issues/16958#issuecomment-244239789
You allude to your original problem here:
So servers with any streaming endpoints are forced not to implement timeouts at all on the entire Server.
So you have an infinite stream, and you want to forcibly abort that stream if the user isn't reading fast enough? In HTTP/1, that means closing the TCP connection. In HTTP/2, that means sending a RST_STREAM.
I guess we need to define WriteTimeout before we make progress on this bug.
What do you think WriteTimeout should mean?
Why not provide a timeout version of Read/Write? Is there any reason?

```
Request.Body.TimedRead(p []byte, timeout time.Duration)
Request.Body.TimedWrite(p []byte, timeout time.Duration)
```
My concrete case is: the client sends data via a POST API provided by the server, but if the network disconnects while the server is reading the request data via request.Body.Read, the server will be blocked at Body.Read forever.
The same applies when the server writes data back to the client.
So I need a timeout mechanism so the server can be unblocked from this Read/Write.
This is a very intuitive requirement; maybe there is another solution already?
Thanks.
What if the mechanism to abort blocked read or write I/O is to just exit from the ServeHTTP function?
So then handlers can do their own timers before reads/writes and cancel or extend them as necessary. That implies that all such reads & writes would need to be done in a separate goroutine. And then if they're too slow and the timer fires before the handler's I/O goroutine completes, just return from ServeHTTP and we'd either kill the TCP connection (for http1) or do a RST_STREAM (for http2).
I suppose the problem then is that there's a small window where the I/O might have completed between the timer firing and the ServeHTTP code exiting; once the http package sees ServeHTTP is done, it may not see any calls still in Read or Write, and might then not fail the connection as intended.
That suggests we need some sort of explicit means of aborting a response. http1 has that with Hijacker.
I've recommended in the past that people just panic, since the http package has that weird feature where it recovers panics.
Would that work? Do all your I/O in separate goroutines, and panic if they take too long?
We might get data races with http1 and/or http2 with that, but I could probably make it work if you like the idea.
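A rough sketch of what that pattern would look like in a handler (hypothetical code; it assumes the server's panic recovery then tears down the connection or stream, and the abandoned goroutine may stay blocked):

```go
func handler(w http.ResponseWriter, r *http.Request) {
	done := make(chan error, 1)
	go func() {
		// The potentially slow I/O happens in its own goroutine.
		_, err := io.Copy(ioutil.Discard, r.Body)
		done <- err
	}()
	select {
	case err := <-done:
		if err != nil {
			http.Error(w, "read failed", http.StatusBadRequest)
		}
	case <-time.After(10 * time.Second):
		// Too slow: give up and panic so the http package aborts the
		// connection (http1) or resets the stream (http2).
		panic("request body read timed out")
	}
}
```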
CL https://golang.org/cl/32024 mentions this issue.
Deferring to Go 1.9 due to lack of reply.
@bradfitz I faced a similar problem when I started with Go. I had a service in Node using HTTP 1.1 with chunked transfer encoding; it implemented bidirectional streaming over HTTP 1.1. On each write or read, the deadline/timeout would get reset in order for the connection to remain open. It worked on Node.js, which uses libuv. This predated websockets and avoided long polling. It is a little-known technique but is somewhat mentioned in https://tools.ietf.org/rfc/rfc6202.txt (section 3). I also remember Twitter's streaming API using it. Anyway, migrating my Node service to Go was not possible because of the use of absolute timeouts/deadlines.
So, I guess @FiloSottile is referring to a similar case. At least in my particular scenario, what I wanted to do was to be able to reset the connection's write and read timeout/deadline so that the connection remained open but still got closed if it became idle.
A timeout that restarts on each write or read would be useful for my use case too, and if it were implemented I could rewrite a legacy C++ application in Go.
Being able to set the timeout per request, and not only globally, would be a good plus.
@bradfitz Sorry for not following up on this one, it turned out to be more nuanced than I thought (HTTP/2, brain, HTTP/2 exists) and didn't have the time to look at it further.
Doing I/O in a goroutine instead of the timer sounds upside-down, but I don't have specific points to make against it. What would make more sense to me would be a way to cancel the whole thing, which would make Body.Read/ResponseWriter.Write return an error that would then be handled normally.
Question: without such a mechanism, how is the user supposed to time out a read when using ReadHeaderTimeout?
```go
// ReadHeaderTimeout is the amount of time allowed to read
// request headers. The connection's read deadline is reset
// after reading the headers and the Handler can decide what
// is considered too slow for the body.
```
One possible solution would be for Request.Body to be documented to allow Close concurrent with Read, and not to panic on double Close, and for ResponseWriter to be upgradeable to io.Closer.
For what it's worth ... I think we have a related use case (only for reading requests, not for writing).
We have some slow clients doing large PUT requests - sometimes on unstable connections that die.
But we would like to allow these PUT requests as long as there is actually progress and Read() returns data - preferably only on the endpoints where that should be allowed.
Currently we provide a net.Listener/net.Conn which sets a Deadline on every Read/Write ... but that doesn't seem like a viable solution since it would interfere with the timeouts set by net/http.Server.
Would that work? Do all your I/O in separate goroutines, and panic if they take too long?
The problem is defining what is "too long".
Often what you want is not to kill long running IO in absolute terms, but to kill "too slow" connections.
I don't want to kill a long running PUT request as long as it's actually transferring data. But I would kill a request only sending 1 byte/second.
These demands for IO activity should (for HTTP) only be enforced during ConnState stateActive (and stateNew). It would be OK for a connection to not have any IO activity in stateIdle.
I've been doing some experiments setting deadlines on connections - which is defeated by tls.Conn hiding the underlying connection object from external access.
Also trying to have a reaper go-routine Close connections with no IO activity. ... which becomes equally messy, although not impossible. (**)
Being able to set demands for IO activity during HTTP stateActive - or a more general way to do this for a net.Conn, without overwriting deadlines set by other APIs and regardless of whether the conn object is wrapped in TLS - would be nice :)
** PS: ...much of the complexity comes from not being able to access the underlying connection in a crypto/tls.Conn object. I can understand not exposing the connection to a Handler, but in the ConnState callback, is there any reason not to allow getting the underlying connection of a tls.Conn?
Just to throw in my 2¢, I also have personally hit the need to set timeouts on infinite connections, and have met someone at a go meetup who independently brought up the same problem.
In terms of API, why can't the underlying ResponseWriter implement Set{Read,Write}Deadline? It seems to me that they could happily do the right thing of sending RST_STREAM for http2. Just because it happens to implement the same API you'd use to time out TCP connections doesn't mean it has to time out the TCP connection itself if there is another layer that can be used.
I agree with @FiloSottile's comment about the separate goroutine seeming upside-down. Though I sort of like it, it uses go's primitives in a clever non-obvious way which makes me feel nervous and seems like it would be hard to document.
Having ResponseWriter implement Set*Deadline() would only solve the OP's problem of writing data to the client - not the problem of stopping a long-running PUT which doesn't really transfer data at any satisfying rate.
How about (*http.Request) or (*http.Request).Body implementing Set*Deadline?
If that can be implemented without interfering with other deadlines, like ReadTimeout, then it would be interesting to try.
@bradfitz Is it expected with the current master of the http2 pkg that when panicking out of a handler with a left-behind goroutine blocked on a body read or response writer write, a panic occurs that is internal to the http2 pkg? (The existing TimeoutHandler would have the same issue.) Or is the behavior you're referring to something that would be made safe if that is the chosen route, but doesn't exist today?
We ran into this issue on our project: https://github.com/upspin/upspin/issues/313
We had our timeouts set quite low, as per @FiloSottile's recommendations but our users regularly send 1MB POST requests, which not all connections can deliver in a short time.
We would like to be able to set the timeout after reading the request headers. This would let us increase the timeouts for authenticated users only (we don't mind if our own users want to DoS us; at least we'll know who they are), but give unauthenticated users a very short timeout.
cc @robpike
I think we managed to make a reasonably simple solution where we have each connection monitored by a reaper goroutine which will close it if there's been no traffic for a while. We can turn on the "reaping" in the ConnState callback on StateActive and turn it off on StateIdle.
This, of course, doesn't solve anything if the client is just slow, but still sends a few bytes now and then. ... and it doesn't solve the problem of wanting to control timeouts in a specific http.Handler.
But it does clean up connections without a living peer.
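A simplified sketch of that wiring, assuming activity is approximated by connection state transitions and a made-up one-minute limit (a fuller version would also wrap net.Conn to track actual bytes):

```go
func newReapingServer(handler http.Handler) *http.Server {
	var mu sync.Mutex
	reapers := map[net.Conn]*time.Timer{} // per-connection reap timers

	return &http.Server{
		Handler: handler,
		ConnState: func(c net.Conn, state http.ConnState) {
			mu.Lock()
			defer mu.Unlock()
			switch state {
			case http.StateNew, http.StateActive:
				// Arm (or re-arm) the reaper: if this connection doesn't move
				// past the active state within a minute, close it.
				if t, ok := reapers[c]; ok {
					t.Reset(time.Minute)
				} else {
					reapers[c] = time.AfterFunc(time.Minute, func() { c.Close() })
				}
			case http.StateIdle:
				// Idle keep-alive connections are left alone.
				if t, ok := reapers[c]; ok {
					t.Stop()
				}
			case http.StateClosed, http.StateHijacked:
				if t, ok := reapers[c]; ok {
					t.Stop()
					delete(reapers, c)
				}
			}
		},
	}
}
```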
I guess this didn't happen for Go 1.9. We still need a plan.
/cc @tombergan
It took me an awfully long time, but it finally clicked what didn't sit right with the idea of making a return from ServeHTTP the way to break I/O: it overloads the http.Handler interface with a behaviour that won't be guaranteed by any other user of the interface. Anything from tests to frameworks will not replicate the behaviour, and there will be no way for the code to know (which is the point of interface upgrades). This makes the weird inversion of control where you do the main work in a child goroutine especially bad, because if you start the I/O goroutine, then return from ServeHTTP expecting that to unblock the I/O goroutine, and it actually doesn't, you leaked a goroutine.
And anyway, I don't think we should ever encourage starting a goroutine and losing track of it before knowing it returned.
I think we should go for some sort of interface upgrade to io.Closer, possibly in a way that works, or is easy to support, with ServeHTTP composition (aka middlewares).
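For concreteness, the kind of handler code such an upgrade would enable (hypothetical: net/http's ResponseWriter does not currently implement io.Closer):

```go
func handler(w http.ResponseWriter, r *http.Request) {
	if c, ok := w.(io.Closer); ok {
		// Abort the response if we haven't finished within 30 seconds;
		// subsequent Writes would then return an error handled normally.
		t := time.AfterFunc(30*time.Second, func() { c.Close() })
		defer t.Stop()
	}
	// ... stream the response ...
}
```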
This bug is jumping around a bit, so I've tried to summarize the use cases below. Please respond if I missed something important. I will keep this comment updated with the list of use cases.
For any request:
For any response:
CL https://golang.org/cl/47390 mentions this issue.
Hi guys. I tried to implement a patch for this issue.
I made 2 modifications to net/http in order to time out slow Request.Body.Read() and ResponseWriter.Write().
This is for timing out slow uploads.
For HTTP/2, requestBody.Close() is already able to abort Read(), so I made HTTP/1.1 work like that.
This is for slow downloads.
Now we can unblock in-flight Read()/Write()s by calling Close() concurrently on the body.
I think that with those modifications, we can easily implement timeouts on body Read()/Write() like below.
What do you think about this approach?
```go
// timeoutReadCloser wraps an io.ReadCloser and closes it when a single
// Read takes longer than timeout.
type timeoutReadCloser struct {
	io.ReadCloser
	timeout time.Duration
}

// Read reads from the underlying ReadCloser and automatically calls Close()
// if the Read() takes longer than timeout.
func (r *timeoutReadCloser) Read(p []byte) (int, error) {
	t := time.AfterFunc(r.timeout, func() { r.Close() })
	defer t.Stop()
	return r.ReadCloser.Read(p)
}

func handler(w http.ResponseWriter, r *http.Request) {
	n, err := io.Copy(ioutil.Discard, &timeoutReadCloser{r.Body, 10 * time.Second})
	// …
}
```

I'm not yet sure if I like using Close in that way. It's certainly an expedient solution, but is it the right API?
If we were doing everything from scratch, body.Read and rw.Write would take a context, which would naturally allow the caller to specify deadlines on each call. I would like to explicitly reject that approach before using @matope's solution. Otherwise, we could end up in a situation like we're in for Request cancellation, where there are three ways to cancel a request, two of which are deprecated (Transport.CancelRequest, Request.Cancel).
@tombergan Thank you for your feedback!

It's certainly an expedient solution, but is it the right API?

The reason I thought using Close() would be a good way to do this is that I noticed x/net/http2.requestBody.Close() already aborts pending Read()s. See this.
I thought these semantics were good, and that aborting Read()/Write()s of all h1/h2 Request/Response bodies in the same way would be a reasonable idea.
I think this additional semantic is low-level enough that it lets us handle timeouts in custom high-level ways that fit our own apps, like I demonstrated with timeoutReader. Of course you can implement a context-based one. Please see the following example.
If we're going to provide some high-level API for timeouts, I think it might be deprecated someday. But IMO these additional semantics are reasonably small and low-level, so they will not go stale (I hope).
An example of context-based cancellation (I didn't test it; sorry if this does not work):
```go
func handler(w http.ResponseWriter, r *http.Request) {
	// Close the body if it hasn't been fully read within 10 seconds.
	ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)

	closed := make(chan struct{})
	go func() {
		<-ctx.Done()
		r.Body.Close() // We can do this if concurrent Read() and Close() are allowed :)
		close(closed)
	}()
	defer func() {
		cancel() // release the timer and unblock the goroutine above
		<-closed
	}()

	n, err := io.Copy(ioutil.Discard, r.Body)
	// …
}
```
CL https://golang.org/cl/47870 mentions this issue.
The reason why I thought using Close() would be a good way to do this is I noticed that x/net/http2.requestBody.Close() already abort pending Read().
That's an argument for why Close is an expedient solution, but not an argument for why Close is the right API. The Close solution forces callers to write little adapters, such as your timeoutReadCloser adapter or your context adapter. Why should callers have to go through so much trouble?
The contexts solution reuses a standard construct, contexts, which Go programmers should already be familiar with. There exist standard ways of manipulating contexts -- no custom adapters needed. We may even want to define interfaces in io, such as io.ContextReader, as well as standard adapters that convert a pair of (context.Context, io.ContextReader) to an io.Reader. For example, the response writers for HTTP/1 and HTTP/2 might implement io.ContextReader as defined below:
```go
package io

type ContextReader interface {
	ReadContext(ctx context.Context, buf []byte) (int, error)
}

type readerUsingContext struct {
	ctx context.Context
	r   ContextReader
}

func ReaderUsingContext(ctx context.Context, r ContextReader) Reader {
	return readerUsingContext{ctx, r}
}

func (r readerUsingContext) Read(p []byte) (int, error) {
	return r.r.ReadContext(r.ctx, p)
}
```
The only ugliness (to me) is that callers would need explicit casts, but the Close solution also has this problem. I guess this boils down to whether or not you believe the standard library should be more context-ified. This is a larger question that is worth raising beyond this issue, and I will do so after GopherCon this week.
@tombergan I see what you want to propose.

I guess this boils down to whether or not you believe the standard library should be more context-ified.

I agree with you on this point. At least for now, context is not a standard or familiar (or even efficient) way to abort single Reads, so providing ContextReader in net/http would not be reasonable for now. I think you need to propose it as a standard first (on golang-dev?).
The main motivation of this issue is to implement timeouts based on the blocking time of Read/Write, to shut down slow clients (efficiently). If we provide a context-based API instead of Close(), we'd need to implement another Reader to achieve that goal.
@bradfitz Hi, now that Go 1.9 has been released, I'd like to proceed with this issue.
For now, I've proposed 2 CLs and they're in review. One is for net/http and the other is for x/net/http2.
I think they at least work as a POC to show that we're able to unblock pending I/Os on Body by using Close(). But I'm not 100% sure whether closing TCP connections via Close() is a good design for HTTP/1.1 bodies. (Would it cause unacceptable incompatibilities in its semantics?) I'd be happy if you could give me some feedback, both on the design and on the implementations.
@matope I would like to have the context API explicitly rejected by the wider Go team before moving forward with the io.Closer approach. Otherwise we might some day end up with two APIs for the same problem, which will be ugly. I will try to write up a proposal this week.
Concurrently, perhaps we can short-circuit that conversation: Does anyone have a use case that cannot be solved with ReadContext / WriteContext, but _can_ be solved by making net/http's ResponseWriter implement io.Closer?
I still don't like Close.
I'd rather see a function in the http package that returns a non-interface value (so we can add things to it later) given an *http.Request, with a type name hopefully better than "HandlerHelper" or "HandlerWriter" (since ResponseWriter is taken already).
Then we could put methods on it for Flush (returning an error!), Hijack, bytes-written accessors, close, reset stream, etc.
I understand not wanting to add yet-another-interface that users must cast ResponseWriter to, whether that interface is io.Closer, ctxio.Writer, or whatever. Adding a concrete struct that we can throw new methods on seems like a good idea. But it doesn't directly solve the problem raised by this bug.
How should callers apply per-Read/Write cancellations and deadlines: Should we add a Close or Cancel method as @matope proposed; should we add context methods as I proposed; or something else? i.e., I'd separate the questions:
AFAICT, we haven't settled the first question yet.
I don't have time to read and consider the discussion above. I just wanted to say I'd heavily argue against new optional interfaces. I think the existing ones were probably a mistake and I'd like to stop the bleeding.
@bradfitz Thank you for your feedback!
I think introducing a new helper structure rather than adding new optional interfaces would be a good idea. It would enable us to extend the functionality without breaking backward compatibility or having a lot of function-specific interfaces.
I'm interested in your plan and I'd like to see more detail on it. But I'm not gonna rush you.
I'm gonna stop working on my CLs until a new proposal comes.
Also, if the new struct supports setting per-Read/Write timeouts like below, users won't have to implement their own TimeoutReaders as I described above. (I expect @tombergan would like this API.)
```go
func (r *Request) HandlerHelper() *HandlerHelper {}

// HandlerHelper does some stuff on the HTTP connection and stream.
// I haven't come up with a nice name.
type HandlerHelper struct {
	// unexported fields and methods
}

// Close closes the request connection (on HTTP/1.1) or resets the request stream (on HTTP/2).
func (hh *HandlerHelper) Close() error {}

// SetReadTimeout sets a per-Read timeout for the Request (Body) if d is not 0.
func (hh *HandlerHelper) SetReadTimeout(d time.Duration) {}

// SetWriteTimeout sets a per-Write timeout for the Response (Body) if d is not 0.
func (hh *HandlerHelper) SetWriteTimeout(d time.Duration) {}
```
The current docs for Read[Header]Timeout say:
```go
// ReadTimeout is the maximum duration for reading the entire
// request, including the body.
//
// Because ReadTimeout does not let Handlers make per-request
// decisions on each request body's acceptable deadline or
// upload rate, most users will prefer to use
// ReadHeaderTimeout. It is valid to use them both.
ReadTimeout time.Duration

// ReadHeaderTimeout is the amount of time allowed to read
// request headers. The connection's read deadline is reset
// after reading the headers and the Handler can decide what
// is considered too slow for the body.
ReadHeaderTimeout time.Duration
```
Given that this issue is not fixed yet, this sounds misleading to me. ReadHeaderTimeout does nothing against a malicious attacker, and "the Handler can decide what is considered too slow for the body" is inaccurate, since there is no way to set deadlines.
Am I missing something?
That suggests we need some sort of explicit means of aborting a response.
This seems to be implemented now, as panic(http.ErrAbortHandler), I think.
Is http.ErrAbortHandler enough? Can we close this?
So if I understand correctly the plan is to set a timeout on the main goroutine with a panic at the end, and do your I/O in a new goroutine. https://play.golang.org/p/J8IDixEn9y1
I see a few issues with this:
@FiloSottile, as background, we're asking whether this is enough for now because there's no great place to put new options. We want to stop adding optional interfaces for http.ResponseWriter. They compose poorly.
If we make a "v2" version of net/http, we can fix this (and many other things), but we're wondering if we really need anything more for now.
If you feel strongly we do need this now, then what API do you propose?
I was about to propose a hypothetical API but I realized I wasn't sure whether I had the right bug so I just tried to reread this whole bug to find the problem statement and I'm mostly seeing solutions.
Can somebody summarize the problem we're trying to solve here?
The shortest tl;dr is that there is no way to set timeouts based on the endpoint or on the client, only server-wide, nor is there any way to update timeouts while serving a response based on changing circumstances.
The reason one wants to set timeouts is to protect against clients going away or being intentionally slow. A reason one might want to exempt an endpoint is for example if that handler is serving streaming content or long downloads. A reason one might want to exempt a client is for example following authentication.
https://github.com/golang/go/issues/16100#issuecomment-285573480 is a good practical example. I feel like the first two paragraphs of my original report are also not solution-oriented.
Okay, thanks.
If the client goes away ("laptop lid closes in a coffee shop" case), our default TCP keep-alive detection will kill the connection after some time. So I'm less worried about that.
But having different policies based on authentication is a good example.
I imagine this would be equally about read timeouts in addition to write timeouts.
How about something like this for an API: (ignoring the names)
```go
package http // net/http

func NewHandlerController(w http.ResponseWriter, r *http.Request) *HandlerController { ... }

type HandlerController struct { /* opaque */ }

// SetFoo changes the foo deadline and reports whether it was successfully adjusted.
func (hc *HandlerController) SetFoo(time.Time) bool { ... }
func (hc *HandlerController) SetBar(time.Duration) bool { ... }
```
And then we're largely insulated from the mechanism we use to implement it over time. If we want to do private magic on the ResponseWriter interface, we can. If we want to do magic private context values, we can. But the user gets concrete types.
I'd start with that (for Go 1.12) and implement it for HTTP/1 and HTTP/2 only and not provide a way for others to implement it for their own Servers/ResponseWriters right away, knowing that this API would let us do so later.
I'm not really a fan of the name HandlerController, though; it's just a verbose example here.
LGTM, although I would return an error (Set[Read|Write]Deadline(time.Time) error) so that we can communicate why the feature is not available.
Pulling together a few comments over time here. Given that the ResponseWriter is what currently lets us hijack the underlying connection, would it be sufficient to start having net/http's ResponseWriter implement io.Closer and a new http.DeadlineController with methods:
```go
type DeadlineController interface {
	SetReadDeadline(time.Duration)
	SetWriteDeadline(time.Duration)
}
```
This would be in line with the other interfaces already in use and implemented by the underlying ResponseWriter from the server.
For a user wanting to implement Server-Side Events, they could do the following:
```go
func streamHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	controller, ok := w.(http.DeadlineController)
	if !ok {
		panic("expected w to be a http.DeadlineController")
	}
	for {
		controller.SetWriteDeadline(2 * time.Second)
		w.Write([]byte("data: some piece of data that takes up to 2 seconds to write\n\n"))
		w.(http.Flusher).Flush()
		time.Sleep(time.Second)
	}
}
```
You could mix io.Closer into that example, or use the request Context to signal the handler to stop the stream.
I do also like @tombergan's mention of contextified reads and writes as potentially new methods so as to avoid breaking the library.
With all that said, it feels like the right future approach is to keep this functionality scoped to the ResponseWriter and not at a higher level (handler, router, or server), for the sake of having the most flexibility and an obvious/expected API.
The shortest tl;dr is that there is no way to set timeouts based on the endpoint or on the client, only server-wide, nor is there any way to update timeouts while serving a response based on changing circumstances.
I want to do this. I have an endpoint which exports a large file while others perform operations on smaller sized data. I would like to keep different timeouts for these two different endpoints.
So is it okay for anyone to send a CL for the API @bradfitz described in https://github.com/golang/go/issues/16100#issuecomment-396690586? (And we'll bikeshed the naming there?)
Hello, any news?
Any updates on this?
Hi, any updates?
@FiloSottile (author of this issue) wrote a great blogpost on cloudflare's blog about this. It's not linked anywhere here so I thought I'd include it for any interested readers.
https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/
My interest in this is being able to continuously update the timeout (to a few seconds in the future each time) on SSE server-sent-event streaming handlers while keeping deadlines for other regular non-streaming handlers much shorter. Hope to see some movement on this soon. EDIT: exactly like this: https://github.com/golang/go/issues/16100#issuecomment-405068548
Any update at all?
Maybe we could use the callback in the http.Server structure:
```go
// ConnState specifies an optional callback function that is
// called when a client connection changes state. See the
// ConnState type and associated constants for details.
ConnState func(net.Conn, ConnState)
```
states change like this:
after conn accept -> StateNew
read some bytes -> StateActive // loop on the same connection
ServeHTTP
after handle -> StateIdle
When the first request comes in, we just set a default timeout through ConnState.
After we have handled the request in ServeHTTP, we know the timeout from our rule (such as path or host), and record it in a map such as map[remoteAddr]timeout.
Then, when other requests come in on the same connection, we can set the correct timeout through ConnState.
We cannot know the timeout until we have handled the request in ServeHTTP, so the first request will use the default timeout; other requests on the same connection will be set correctly.
Moreover, we cannot call SetWriteDeadline(time.Now().Add(d)) just after the read, so the write timeout is inaccurate. And this operation also conflicts with the server's WriteTimeout and ReadTimeout unless we don't set those timeouts.
Hmmm... just forget ConnState, it is not a good way.
Now that Server.ConnContext has been added in Go 1.13, the solution of the issue's original author @FiloSottile to expose the net.Conn in the Context is, I think, more viable:
```go
// ConnContext optionally specifies a function that modifies
// the context used for a new connection c. The provided ctx
// is derived from the base context and has a ServerContextKey
// value.
ConnContext func(ctx context.Context, c net.Conn) context.Context // Go 1.13
```
Approach: create a *DeadlineController inside of Server.ConnContext and embed it into the Context.
```go
func NewDeadlineController(c net.Conn, s *http.Server) *DeadlineController {
	return &DeadlineController{
		c: c,
		s: s,
	}
}

type DeadlineController struct {
	c net.Conn
	s *http.Server
}

// ExtendWriteDeadline can be called to reset the write deadline.
// The deadline will be extended as if a new request had come in.
// Can be called repeatedly to extend the deadline forever.
func (dc *DeadlineController) ExtendWriteDeadline() error {
	return dc.c.SetWriteDeadline(time.Now().Add(dc.s.WriteTimeout))
}
```
I'm thinking of using the above code to extend the write deadline (possibly indefinitely) inside of a streaming http.ResponseWriter (which is sending Server-Sent-Events to a browser in my case). The goal is to be able to have short deadlines on other routes for security purposes (see Cloudflare's blog posts linked in this issue for more details on that).
You could do something similar for the read deadline.
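For completeness, a sketch of how this might be wired up (untested; the context key and handler are placeholders):

```go
type deadlineCtxKey struct{} // placeholder key for the controller in the context

func newServer(handler http.Handler) *http.Server {
	srv := &http.Server{
		Handler:      handler,
		WriteTimeout: 5 * time.Second,
	}
	srv.ConnContext = func(ctx context.Context, c net.Conn) context.Context {
		return context.WithValue(ctx, deadlineCtxKey{}, NewDeadlineController(c, srv))
	}
	return srv
}

func sseHandler(w http.ResponseWriter, r *http.Request) {
	dc, _ := r.Context().Value(deadlineCtxKey{}).(*DeadlineController)
	for {
		if dc != nil {
			dc.ExtendWriteDeadline() // keep this streaming response alive
		}
		fmt.Fprint(w, "data: tick\n\n")
		w.(http.Flusher).Flush()
		time.Sleep(time.Second)
	}
}
```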
Request for any comments: Are there any issues with this approach? Will it work for http1 and http2? Is there a better solution we can work towards?
I (again) ran into the issue of specifying handler-specific timeouts.
@mikelnrd AFAICS your approach does not work since the connection you get from ConnContext func(ctx context.Context, c net.Conn) context.Context is the TCP/TLS connection. However, the write timeout closes the (private) HTTP connection. At least I was not able to get this working - either all connections stick around forever or get closed after the WriteTimeout...
However, I may have found a similar workaround:
```go
ConnContext: func(ctx context.Context, c net.Conn) context.Context {
	writeTimeout, cancelWriteTimeout := context.WithTimeout(ctx, 10*time.Second)
	go func() {
		defer cancelWriteTimeout() // Release resources
		<-writeTimeout.Done()      // Wait for the timeout or its cancellation
		if err := writeTimeout.Err(); err == context.DeadlineExceeded {
			c.Close() // Only close the connection if the deadline was exceeded - not on cancellation
		}
	}()
	return context.WithValue(ctx, ctx.Value(http.ServerContextKey), cancelWriteTimeout)
},
```
Here, I create a goroutine that waits for the writeTimeout. If the deadline is exceeded, we close the connection, since that's the purpose of the write timeout.
However, if the timeout is cancelled then we don't close the connection. This allows handler functions that want to opt out of write timeouts to call the cancelWriteTimeout function - which they can access via http.Request.Context().
At the moment this approach has the problem that if a handler function returns (e.g. the request has been handled successfully), the write-timeout goroutine will stick around until the timeout elapses.
(Unfortunately, we can't use a select with <-ctx.Done() here - see the ListenAndServe implementation.)
Therefore, I wrap the server.Handler with:
```go
type cancelWriteTimeoutHandler struct{ *http.ServeMux }

func (h cancelWriteTimeoutHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	defer func() {
		ctx := r.Context()
		f := ctx.Value(ctx.Value(http.ServerContextKey))
		if f, ok := f.(context.CancelFunc); ok {
			f()
		}
	}()
	h.ServeMux.ServeHTTP(w, r)
}
```
This handler cancels the write timeout when the request has been handled. This ensures that the write-timeout goroutine only lives as long as the request.
For a minimal example see: https://play.golang.org/p/ZOXeHvmqznZ
A cleaner solution would be nice but this seems to work :crossed_fingers:
nginx has the send_timeout option: https://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout
Sets a timeout for transmitting a response to the client. The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed.
This can be implemented with a new Server.SendTimeout field.
@perillo Settings for the whole server are explicitly not what this issue asks for. There's a need for handler-specific timeouts.
(Secondly, the idea of a per-write timeout is still open to slowloris abuse.)
This issue needs attention.
There are many reasonable use cases, like different policies depending on authentication or varied types of requests. In my case, a single service is supposed to usually serve small responses, but sometimes, after authenticating and understanding the request, it turns out that the file to be served is huge and might take a legitimate user hours to download.
So you have to balance between:
It is hard to comprehend how there is no way to keep a legitimate connection going, while giving unauthenticated users a proper timeout. I cannot understand how people are able to overlook this and expose Go services to the internet relying on net/http.
Is there any acceptable workaround besides having a reverse proxy babysitting the Go service? Or another package to drop in place of net/http?
My project also needs per-handler timeouts, so that I can do something like
req.SetReadTimeout()
req.SetWriteTimeout()
in the handler after classifying the incoming request.
@bradfitz @fraenkel @FiloSottile could any of you file a specific proposal from the above discussion in a new issue?