Can you explain more?
Session affinity overrides the load-balancing algorithm by directing all requests in a session to a specific application server. Some applications require session affinity between the client and the application server in order to work correctly.
We want to use Caddy to reverse proxy and load balance multiple stateful services, but none of the current lb_policy types support this. So I think in this case we need to use a cookie to select the upstream server.
Thanks! That seems like a good use case. Can you explain how the session affinity actually works? I.e. what about the cookie defines a session?
Edit: After looking through the PR, I get some idea how it works... but I am not sure if it is a good idea for a load balancing policy to modify the state of the request & response. I feel like it should just read the request and not mutate it. Perhaps assigning a session ID to a client is something better done by a handler than a load balancing module.
I think it is a common solution for a load balancer to use a cookie to identify a session: on the first request, if the specified cookie does not exist, the load balancer uses Set-Cookie in the response to assign one. Maybe we can also add another field to decide whether to set the cookie when it is not present (active mode vs. passive mode).
Yes, maybe it's a better idea to use a handler to assign the session ID and have the LB module only do upstream selection. If so, we wouldn't need to pass the response writer to the selection policies.
Here are some articles about the session affinity features of other load balancers:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html
https://www.haproxy.com/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
https://dev.to/peterj/what-are-sticky-sessions-and-how-to-configure-them-with-istio-1e1a
Very interested in seeing cookie-based session affinity as well! It's the main thing blocking adoption for our app, which requires sticky sessions. One wrinkle with the implementation in the PR is that if the number of hosts changes (say, during a rolling deployment with caddy-docker-proxy) then hostByHashing returns a different host, since it's dependent on the number of hosts available. I don't know if there's an easy way to avoid this beyond keeping a dictionary of which cookie hashes prefer which hosts, which doesn't seem very elegant...
Actually, Traefik seems to solve the wrinkle I mentioned by storing the destination host in the cookie somehow. _"On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy. If not, a new backend will be assigned."_ [[ref](https://docs.traefik.io/v1.7/basics/#sticky-sessions)]
FYI an implementation has been proposed here, but it needs more discussion: https://github.com/caddyserver/caddy/pull/3408
Hello,
I'm also very interested in this feature (I have some legacy stateful applications), so I worked on this on my side: I rebased master onto @utick's branch and tried to address @francislavoie's remarks on https://github.com/caddyserver/caddy/pull/3408.
Could you please have a look at this new merge request: https://github.com/caddyserver/caddy/pull/3809?
Thank you !