Currently the promclient package does not support adding HTTP headers when sending GET requests: https://github.com/thanos-io/thanos/blob/master/pkg/promclient/promclient.go#L100.
The Cortex frontend supports a special Cache-Control HTTP header to control whether results are cached or not: https://github.com/cortexproject/cortex/blob/master/pkg/querier/queryrange/results_cache.go#L203. So it would be nice if promclient supported specifying headers when sending query range requests; then we could test the cache behavior in the E2E test.
Makes sense :+1:
Just found that I was totally wrong :cry: Cortex only checks the HTTP response to decide whether to cache or not, not the HTTP query request. So this is not what we want, and #3003 is not needed I think.
If we want to control the cache behavior, I have one idea:
Cortex's results cache middleware has a `shouldCacheFn` hook, which is called at the beginning of the middleware to decide, based on the request, whether to continue the caching logic or send the request directly to the downstream:

```go
type shouldCacheFn func(r Request) bool
```
With https://github.com/thanos-io/thanos/pull/3018, we can make cache decisions based on request params.
Since ThanosRequest in https://github.com/thanos-io/thanos/pull/3018 doesn't contain HTTP headers, maybe we can add a cache param to control the cache behavior. One example request might be
http://localhost:10902/api/v1/query_range?query=up&start=1597087925&end=1597088025&step=14s&dedup=false&cache=false
This `cache` param is only meaningful to the results cache, and we would check it in `shouldCacheFn`.
Any thoughts about this idea? @bwplotka @pracucci
I just saw https://github.com/cortexproject/cortex/issues/3049, I think it is similar to what we want to do here.
@yeya24 I didn't actually get what blocks us from using the Cache-Control header, as also mentioned in https://github.com/cortexproject/cortex/issues/3049.
Could you elaborate more? IMHO, it'd be best to utilize the headers to control this behaviour.
Let's make sure, @yeya24, that no cache also means the normal query splitting, retries, etc. still happen.
@bwplotka Yes, all the other middlewares still work.
That's one example: if caching is disabled, we just send the request to the next middleware.
```go
func (s resultsCache) Do(ctx context.Context, r Request) (Response, error) {
	userID, err := user.ExtractOrgID(ctx)
	if err != nil {
		return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
	}

	if s.shouldCache != nil && !s.shouldCache(r) {
		return s.next.Do(ctx, r)
	}
	// ... caching logic continues here ...
}
```
> @yeya24 I didn't actually get what blocks us from using the Cache-Control header, as also mentioned in cortexproject/cortex#3049.
> Could you elaborate more? IMHO, it'd be best to utilize the headers to control this behaviour.
The problem is that Cortex only checks this Cache-Control header in the response it gets from the downstream Prometheus.
So if we send a request with this header, it won't be handled. I think what we want is to control the cache behavior before sending the request downstream.
Looks like before AND after is the key part
You are right. I opened a PR on the Cortex side: https://github.com/cortexproject/cortex/pull/3054. It would be great if you could help review.
It is merged in Cortex: https://github.com/cortexproject/cortex/pull/3054. So this issue can be closed.