Ability to cache the response of specific API calls for a specified amount of time.
To implement response caching we need to parse the HTTP caching headers and properly store the cached items in-memory (in an nginx dict) or optionally in the datastore.
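As a rough illustration of that idea, here is a minimal OpenResty-style sketch (not Kong's actual plugin API) that reads `max-age` from the upstream `Cache-Control` header and stores the body in a shared dict. The `response_cache` dict name and the function signature are assumptions; the dict would need a matching `lua_shared_dict` directive in the nginx config.

```lua
-- Minimal sketch, assuming `lua_shared_dict response_cache 10m;` in the nginx
-- config; `cache_response` and its arguments are illustrative names only.
local function cache_response(cache_key, body, headers)
  local cache_control = headers["Cache-Control"]
  if type(cache_control) ~= "string" then
    return false
  end

  -- Skip responses the upstream marked as uncacheable.
  if cache_control:find("no%-store") or cache_control:find("private") then
    return false
  end

  -- Use the advertised max-age as the in-memory TTL.
  local max_age = tonumber(cache_control:match("max%-age=(%d+)"))
  if not max_age or max_age == 0 then
    return false
  end

  local dict = ngx.shared.response_cache
  return dict:set(cache_key, body, max_age)
end
```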
This is also an interesting project that solves the caching problem: https://github.com/pintsized/ledge (the problem is that it uses Redis). We could take some inspiration from it, especially for the HTTP header parsing.
@thefosk It would be nice if you could enable caching based on the path as well, whatever the headers from the upstream are.
like:
/api/user/get = 0 sec
/api/article/get = 10 sec ..
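A per-path TTL configuration of the kind requested above could look roughly like the following Lua sketch; the table and function names are hypothetical, not an existing Kong plugin schema.

```lua
-- Hypothetical per-path TTL table (illustrative, not an actual plugin schema).
local path_ttls = {
  ["/api/user/get"]    = 0,   -- never cache
  ["/api/article/get"] = 10,  -- cache for 10 seconds
}

-- Resolve the TTL for a request path, defaulting to 0 (no caching).
local function ttl_for_path(path)
  return path_ttls[path] or 0
end
```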
@samek if this is going to be a plugin then, like all plugins, it will have settings per: API, endpoint, consumer, organization, path, etc.
Note: Consider using Redis
There is this circular reference and I couldn't get anywhere....
#749 -----> #411 <----|
     <-----|          |
     -----> #412      |
     <-----|          |
                      |------> #830
I agree with @naoko, there seems to be a confusing circle of references to this topic. However, I believe this is the official issue (#411) for the topic of response caching. Others looking for info on this, look no further.
I'm kicking myself for not doing better research before jumping in with Kong, but now that I've got it set up and going I'm sorry to find it doesn't actually offer any response caching/cache invalidation.
Many of the diagrams used to market Kong are misleading (check the "Kong Architecture" diagram on the getkong.org front page, for example). They list "caching" as one of the things Kong can do in front of your APIs, but this appears to be flatly untrue at the moment. Someone please tell me I'm overlooking something!
I am very interested in this feature and wouldn't mind pitching in. I think it's a feature that would definitely help increase adoption rate. Not familiar with the BC label, but chiming in here to add my +1 and bump up the priority.
+1 for response caching.
+1 for caching
+1 for response caching with more control on eviction and request patterns
Def keen on response caching!
+1
@thefosk @sinzone is there any chance we can get an update on if/when this plugin will be developed? My company is making a decision on Kong vs other options, and this would be a fairly important factor. Thanks!
@greenieweinee we don't have any short-term plans for now - but you could leverage the underlying nginx process or use a third-party caching system between Kong and the final API.
+1
+1 ^^
I tried to enable caching using a custom nginx config for Kong (as suggested). It worked, but it disabled all Kong plugins for that API.
What if you put Varnish, or any other HTTP Cache, behind Kong?
That's what I've done (obviously not part of Kong):
Kong <--> Varnish <--> Origin API endpoints
I have started a POC plugin for caching here. It caches responses in Redis based on URL patterns, but could be applied more granularly with the new routing in 0.10. It could be abstracted to use the policies pattern from the rate-limiting plugin for different store implementations.
I plan on including the ability to specify headers and query params for computing the cache key.
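For illustration, computing such a key might look roughly like the sketch below; the `vary_headers` and `vary_query_params` config fields are assumptions, not the POC plugin's actual schema.

```lua
-- Sketch of a cache key derived from the method, path, and a configured subset
-- of headers and query parameters; the `conf` fields are assumed names.
local function build_cache_key(conf)
  local parts = { ngx.var.request_method, ngx.var.uri }

  -- Include only the headers the plugin is configured to vary on.
  local headers = ngx.req.get_headers()
  for _, name in ipairs(conf.vary_headers or {}) do
    parts[#parts + 1] = name .. "=" .. tostring(headers[name] or "")
  end

  -- Include only the configured query parameters.
  local args = ngx.req.get_uri_args()
  for _, name in ipairs(conf.vary_query_params or {}) do
    parts[#parts + 1] = name .. "=" .. tostring(args[name] or "")
  end

  -- Hash the concatenation so the key stays short and uniform.
  return ngx.md5(table.concat(parts, "|"))
end
```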
Nice, @wshirey! If the response pool is not too big, Redis would be overkill though. In-memory or file storage (as nginx does it) would be preferable in this case, I reckon.
Hi @wshirey - nice. It would be nice if the plugin could autonomously follow the HTTP caching headers returned by the API in the response, and then cache the response appropriately, i.e.:
Cache-Control
Last-Modified
ETag
@thefosk Honestly, I'm not sure I would add that myself, since I'm not that familiar with all the caching headers and wouldn't be using caching headers on my upstream APIs.
It would only make sense to widely release such a plugin if it is implemented in a standardized and un-opinionated way. Hence, it is crucial for it to respect the HTTP caching headers, or else it would be unusable for most people out there...
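To make the header-respecting behaviour concrete, a minimal access-phase sketch (not the Enterprise plugin, and reusing the assumed `response_cache` shared dict from above) could serve a stored body on a hit and otherwise fall through to the upstream:

```lua
-- Minimal access-phase sketch: serve a previously cached body on a hit,
-- otherwise let the request proxy upstream. Names are illustrative.
local function serve_from_cache()
  local dict = ngx.shared.response_cache
  local cache_key = ngx.var.request_method .. ":" .. ngx.var.request_uri

  -- Honour a client-side "Cache-Control: no-cache" by skipping the lookup.
  local req_cc = ngx.req.get_headers()["Cache-Control"]
  if type(req_cc) == "string" and req_cc:find("no%-cache") then
    return
  end

  local body = dict:get(cache_key)
  if body then
    ngx.header["X-Cache-Status"] = "HIT"
    ngx.say(body)
    return ngx.exit(ngx.HTTP_OK)
  end

  ngx.header["X-Cache-Status"] = "MISS"
end
```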
This is now available as part of Kong Enterprise! https://getkong.org/plugins/ee-proxy-caching/