Hello,
I tried to figure this out myself and failed :(
Is there an easy way to enable caching for nginx?
Or do I need to create a custom template for this?
Thanks!
@tpolekhin you need to use a custom template
@tpolekhin can you share what are you trying to do?
I am also interested in this. I would like my ingress controller to cache static content for the services being proxied, using proxy_cache_path.
@aledbf right now I use a VARNISH ==> HAPROXY ==> K8S setup.
I thought I could make it simpler by using one component for routing + caching, such as nginx.
As I understand it, nginx stores its cache on disk, not in memory like Varnish does?
I'm a bit concerned about performance if that is true.
@tpolekhin you are right, nginx uses disk to store the cache, but we can use an emptyDir volume mount backed by tmpfs (medium: Memory) if disk is too slow.
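For reference, a minimal sketch of what that could look like on the controller Deployment — the names and the /tmp/cache path here are illustrative, not from any official manifest:

```yaml
# Hypothetical fragment of an ingress controller Deployment spec:
# an emptyDir backed by tmpfs, mounted where proxy_cache_path points.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          volumeMounts:
            - name: nginx-cache
              mountPath: /tmp/cache
      volumes:
        - name: nginx-cache
          emptyDir:
            medium: Memory   # tmpfs; omit this line to keep the cache on node disk
```

Note that a tmpfs-backed emptyDir counts against the container's memory limit, so size the cache accordingly.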
I agree @aledbf. Nginx's proxy_cache_path can be mounted on disk or in memory. There are always other options, such as throwing Varnish or Redis into the stack, but in my case my needs are humble and I would prefer to turn on an already available nginx configuration rather than add more applications to the stack.
@kyv let me see if I can include this feature in the next release
IMO the ingress needs to be a simple, bulletproof service that delegates complex tasks such as caching to other services. With this idea in mind, my ongoing vision for a cache would use a separate Redis service.
Something like this:
```nginx
server {
    location / {
        set $redis_key $uri;
        redis_pass $FRONT_END_REDIS_CACHE:6379;
        default_type text/html;
        error_page 404 = /fallback;
    }

    location = /fallback {
        proxy_pass http://backend;
    }
}
```
I fully agree with @rturk. IMHO the standard nginx template is already very complex. Special, individual features like caching shouldn't be implemented in the standard template.
Just think of users who want to use custom templates and take the standard template as a base. It will take them some time to work out what is essential and what isn't.
Closing. Using the nginx cache makes it impossible to share the cache (and it is also hard to debug). Using Redis requires adding additional external modules, and also running Redis in the cluster.
Edit: this branch contains the cache feature https://github.com/aledbf/ingress-nginx/tree/nginx-cache
@aledbf wow this commit on that branch of yours is exactly what I was looking for. I really need a simple proxy-cache implementation, and for me using Redis or another caching layer isn't an option.
Is there any chance that your work here might ever make it into core? Is the problem that multiple nginx ingress controllers would each have their own cache, which is confusing, and there's no way to share that cache? Might it not be possible to implement the annotation so as to make it obvious that it comes with caveats? In my case I'd much rather have an imperfect nginx cache than none at all.
Is there any other way to hack our own nginx config into the nginx ingress controller without forking the project?
> Is there any other way to hack our own nginx config into the nginx ingress controller without forking the project?
Reuse this template https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl and mount it into /etc/nginx/template/ in your nginx ingress controllers.
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/custom-template.md
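To make the mounting step concrete, here is a rough sketch of how the template override could be wired up — the ConfigMap name and volume name are my own placeholders, assuming the template was loaded into a ConfigMap first (e.g. `kubectl create configmap nginx-template --from-file=nginx.tmpl`):

```yaml
# Hypothetical fragment of the controller Deployment: mount a customized
# nginx.tmpl from a ConfigMap over /etc/nginx/template/.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          volumeMounts:
            - name: nginx-template
              mountPath: /etc/nginx/template
      volumes:
        - name: nginx-template
          configMap:
            name: nginx-template
```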
@pieterlange thanks, that's very helpful. Presumably this will affect upgradeability? If that template ever changes I'd have to remember to update our override template. There's no way to avoid that, is there?
Just sharing my setup as I came across this issue numerous times during my research. I'm using Terraform so the syntax is a little different.
```hcl
resource "kubernetes_config_map" "nginx_configuration" {
  metadata {
    ...
  }

  data = {
    "http-snippet" = <<-EOF
      proxy_cache_path
        /tmp/cache
        levels=1:2
        keys_zone=app_cache:100m
        max_size=100g
        inactive=100y
        use_temp_path=off;
    EOF
  }
}

resource "kubernetes_ingress" "app-ingress" {
  metadata {
    name = "app-ingress"
    annotations = {
      "kubernetes.io/ingress.class"                 = "nginx"
      "nginx.ingress.kubernetes.io/proxy-buffering" = "on"
      "nginx.ingress.kubernetes.io/server-snippet"  = <<-EOF
        proxy_cache app_cache;
        proxy_cache_lock on;
        proxy_ignore_headers Cache-Control; # Just until I get the app setting its headers correctly
        proxy_cache_valid any 30m;
        add_header X-Cache-Status $upstream_cache_status;
      EOF
    }
  }

  spec {
    ...
  }
}
```
@tombh thanks. How do I purge the caches of all these nginx instances?
It's been a while, so I can't remember exactly. I think I would just remove the files directly in the filesystem, so in the example above that would be the files in /tmp/cache.
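A rough sketch of that purge, looped over every controller pod — the namespace, label selector, and /tmp/cache path are assumptions that would need adjusting to match your deployment:

```shell
#!/bin/sh
# Hypothetical purge: delete the on-disk proxy cache inside each
# ingress-nginx pod (assumes the /tmp/cache path from the snippet above).
for pod in $(kubectl get pods -n ingress-nginx \
    -l app.kubernetes.io/name=ingress-nginx -o name); do
  kubectl exec -n ingress-nginx "$pod" -- sh -c 'rm -rf /tmp/cache/*'
done
```

nginx will repopulate the cache from the backends on subsequent requests, so this is safe but causes a temporary hit-rate drop.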