(As mentioned in #115, I will open a new issue for this)
What version of Caddy are you using (`caddy -version`)? Caddy 0.9.1 (+e8e5595 Mon Sep 05 04:12:57 UTC 2016)
Set up case-specific proxy configuration between multiple hosts. I have two VMs: backend1 is slow and backend2 is fast. Requests come in on backend1 (slow), which proxies all relevant compute-intensive requests to backend2 (fast). However, backend1 is perfectly capable of servicing (most of) these requests, albeit more slowly :-)
Now I want to configure Caddy to proxy the requests to backend2, unless backend2 is down (maintenance or outage), in which case requests should be handled by backend1 (aka localhost). In Apache I had something like this:
<Proxy balancer://mysite>
    BalancerMember http://_backend1_:20006 status=+H
    BalancerMember http://_backend2_:20006
</Proxy>
This meant that backend1 was a 'hot stand-by', or backup server.
Another use case is what we have at work:
Many, many servers serve many, many sites, all going through a single pair of load balancers. Every site on the load balancer has the same 'backup server' configured, which serves requests when none of the normal servers for the site are available. The backup server doesn't serve the original site but a nicely styled "We are sorry for the inconvenience" page (depending on the original site, this could be worded or styled differently, or be a "planned maintenance" page if that happened to be the case).
I hope my use cases are clear? :-)
Would a more general way to describe this request be a priority for each backend? Maybe this could simply be a load balancing policy that only chooses among the first N hosts unless they are unavailable.
This would be a more general description, enabling more than my two tiers of servers.
Calling it a load balancing strategy might be confusing, though? You would then have multiple strategies combined (one for the priorities, one to pick a server inside a tier).
I also think a load balancing policy can suffice.
I would like to set up a Varnish cache and configure a proxy to it, with the actual backend as a "hot standby". Right now, all of the available load balancing policies would send some traffic to the backend instead of the Varnish cache. I think a load balancing policy would be perfect, even if it was as simple as hot_standby and just tried one host after the other until it got a successful response.
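As a sketch, that Varnish setup could be expressed with the "first" policy that was later merged in mholt/caddy#1513 (the site name and ports are assumptions here; Varnish is assumed to listen on :6081 and the real backend on :8080):

```caddyfile
example.com {
    # Try the Varnish cache first; fall through to the
    # real backend only when Varnish is unavailable.
    proxy / localhost:6081 localhost:8080 {
        policy first
    }
}
```

With this policy, all traffic goes to the first healthy upstream in the list, so the backend on :8080 only receives requests while Varnish is down.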
@jovandeginste does the merged MR fulfill your requirements?
@pbeckmann Which commits should I look at? A quick scan didn't reveal anything relevant...
@jovandeginste I think he's referring to https://github.com/mholt/caddy/pull/1513 -- by using the "first" policy, only the first available host will be used. So if you take down your first host for maintenance, the other ones act as a kind of hot stand-by backend.
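For reference, the Apache example earlier in this thread would map onto that policy along these lines (a sketch, assuming the same placeholder hostnames and port; the site name is made up):

```caddyfile
mysite.example.com {
    # backend2 is listed first, so it receives all traffic
    # while healthy; backend1 is the hot stand-by, mirroring
    # Apache's status=+H BalancerMember.
    proxy / backend2:20006 backend1:20006 {
        policy first
    }
}
```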
I think I should be able to fit my use cases here. I'll try to confirm with working configuration later today.
I'm just gonna close this since I'm going through the issues that (I think) are resolved with this upcoming release. Let us know if you find a problem with the first policy!
Is there an ETA for the release that includes this feature to hit Docker Hub?
@ckeeney Thursday! https://www.facebook.com/events/1413078512092363/
I don't actually control the Docker container, but I'm sure @abiosoft will update it pretty quick. We'll eventually make it official...
Is that a 1.0 launch party? If so, congrats! I look forward to this feature.