Origin: HTTP/2.0 in openshift router

Created on 5 Apr 2017 · 39 comments · Source: openshift/origin

Need support for HTTP/2.0

Version

oc v1.3.0
kubernetes v1.3.0+52492b4
features: Basic-Auth GSSAPI Kerberos SPNEGO

openshift v1.3.0
kubernetes v1.3.0+52492b4

Steps To Reproduce
  1. With a secured passthrough route, HTTP/2.0 works, but the client headers are lost:
    PC (42.76.65.54) -> :443 [openshift] -> secured passthrough route -> :443 [nginx pod]

The nginx container sees HTTP/2.0, but the $http_x_forwarded_for and $remote_addr values do not carry the PC's public IP (only "172.17.0.1", the docker host IP).

  2. With a secured edge route forwarding to :443 on the pod, requests fail with 400 Bad Request:
    PC (42.76.65.54) -> :443 [openshift] -> secured edge route -> :443 [nginx pod]

  3. With a secured edge route forwarding to :80 on the pod, the source IP is unchanged; all client headers arrive, but only HTTP/1.1 is available:
    PC (42.76.65.54) -> :443 [openshift] -> secured edge route -> :80 [nginx pod]
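One way to observe the behavior described above is an nginx log format that records both the negotiated protocol and the forwarded client address. A minimal sketch (the log format name and certificate paths are illustrative, not from the original setup):

```nginx
# $server_protocol shows "HTTP/2.0" when h2 was negotiated;
# $http_x_forwarded_for is whatever the router injected
# (empty on a passthrough route, since the router never sees HTTP).
log_format h2debug '$remote_addr fwd="$http_x_forwarded_for" proto=$server_protocol';

server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/tls/tls.crt;
    ssl_certificate_key /etc/nginx/tls/tls.key;
    access_log /dev/stdout h2debug;
}
```

With this in place, the passthrough case should log `proto=HTTP/2.0` with an empty `fwd`, and the edge-to-:80 case should log `proto=HTTP/1.1` with the client IP filled in.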

Current Result

There is no way to get both HTTP/2.0 and $remote_addr (or any other header) carrying the client PC's source/public IP.

Expected Result

Support HTTP/2.0 in the openshift router, or proxy the client headers through a secured route.

component/routing kind/feature lifecycle/frozen lifecycle/stale priority/P2

Most helpful comment

HAProxy has had support for HTTP/2 since 1.8

All 39 comments

Unfortunately there is no great solution at the moment. We are tracking http/2 support at https://trello.com/c/DdWdrz1o and waiting for haproxy to get it so that we can enable it.

Would love to see support for HTTP/2. Are you still considering Traefik, @knobunc?

Actually, the status is not too bad. We got HTTP/2 working when the public route uses a pass-through TLS termination strategy.

So:

  • H2 between pods works (with TLS)
  • H2C between pods works (H2C with and without TLS)
  • H2 behind a public route requires the route TLS termination to be "pass-through"
  • H2C behind a public route requires the same and SSL enabled on the server side
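The pass-through behavior in the bullets above works because the router routes at the TCP level based on SNI instead of terminating TLS, so the client negotiates h2 via ALPN directly with the pod. A minimal sketch of that mechanism (the backend name, hostname, and pod address are illustrative, not the actual router template):

```haproxy
frontend fe_sni
    bind :443
    mode tcp
    # Wait for the TLS ClientHello so the SNI can be read without decrypting.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend be_tcp_myapp if { req_ssl_sni -i myapp.example.com }

backend be_tcp_myapp
    mode tcp
    # TLS bytes are relayed untouched; ALPN (and thus h2) is end-to-end.
    server pod1 10.128.2.104:8443
```

The trade-off is exactly the one reported in this issue: since haproxy never sees the HTTP layer, it cannot inject X-Forwarded-For or similar headers.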

What do you think of replacing haproxy with nginx with http/2 support?
Would it work better?

From one specific angle: Nginx (free) is, as far as I know, not monitorable to a comparable degree. I can point a Datadog agent at the HAProxy stats page and get all kinds of useful metrics, but I know of no way to do this with the free Nginx.

That being said, H2 support is wanted.

HAProxy has had support for HTTP/2 since 1.8

What version of haproxy is the router running?

@amon-ra Currently 1.5.x is in use.

I have created a docker image based on the ocp origin router with http/2 enabled, but I haven't been able to test it yet.

https://hub.docker.com/r/me2digital/openshift-ocp-router-hap18/

This image is based on this docker file

https://gitlab.com/aleks001/openshift-ocp-router-hap18/blob/master/Dockerfile

and this config in which h2 is enabled.

https://gitlab.com/aleks001/openshift-ocp-router-hap18/blob/master/containerfiles/var/lib/haproxy/conf/haproxy-config.template

https://gitlab.com/aleks001/openshift-ocp-router-hap18/blob/master/containerfiles/var/lib/haproxy/conf/haproxy-config.template#L223
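For context, enabling h2 in an haproxy 1.8 template essentially comes down to advertising it via ALPN on the TLS bind line. A minimal sketch (the certificate path and backend name are illustrative, not copied from the linked template):

```haproxy
frontend public_ssl
    # Advertising h2 alongside http/1.1 via ALPN is what enables
    # HTTP/2 for client connections in haproxy 1.8+.
    bind :443 ssl crt /etc/haproxy/certs/default.pem alpn h2,http/1.1
    mode http
    default_backend be_default
```

Clients that support HTTP/2 pick h2 during the TLS handshake; everything else falls back to http/1.1 on the same port.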

Any feedback is welcome

Update 15.01.2018
This image now includes a Lua script which prints the incoming HTTP headers to the log at info level.
An example of the output is shown in this blog post: https://www.me2digital.com/blog/2018/01/show-headers-in-haproxy/

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

/lifecycle frozen

Unfortunately there is no great solution at the moment. We are tracking http/2 support at https://trello.com/c/DdWdrz1o and waiting for haproxy to get it so that we can enable it.

HAProxy has had H2 support for a bit now. I see the latest update on the Trello board is 11th of Jan,
has there been any progress since then, @knobunc?

@andreasvirkus we need to work out how we want to allow it. https://trello.com/c/qzvlzuyx is tracking the work. Perhaps for 3.11.

Hasn't this been implemented by https://github.com/openshift/origin/pull/19968 ? Perhaps we can close this issue?

On 11/07/18 10:47, Sebastian Łaskawiec wrote:

Hasn't this been implemented by #19968 (https://github.com/openshift/origin/pull/19968)? Perhaps we can close this issue?

+1

--
Cheers

Jean-Frederic

We enabled h2 in our infrastructure (it's possible since 3.11) and it seems quite buggy (e.g. some https redirects are not working).

OpenShift uses HAProxy 1.8.1, and looking at https://www.haproxy.org/download/1.8/src/CHANGELOG you will see a lot of fixes for h2 after the 1.8.1 release.

Technically this issue is fixed, but the h2 feature inside haproxy should only be used once openshift bumps the haproxy version to the latest 1.8 upstream.

Edit:

It may be a problem with the open-source image only:

docker run -u 0 --rm -ti --entrypoint=sh registry.access.redhat.com/openshift3/ose-haproxy-router -c 'rpm -qa | grep hap'
haproxy18-1.8.14-2.el7.x86_64
docker run -u 0 --rm -ti --entrypoint=sh openshift/origin-haproxy-router -c 'rpm -qa | grep hap'
haproxy18-1.8.1-5.el7.x86_64

We enabled h2 in our infrastructure (it's possible since 3.11) and it seems quite buggy (e.g. some https redirects are not working).

@jkroepke, is this still the case? Have the specific issues been reported?

Since we upgraded haproxy manually inside the container (https://github.com/adorsys/dockerhub-origin-haproxy-router), http2 looks almost stable.

Have the specific issues been reported?

No. We don't have a valid subscription and it feels like issues reported here don't get any attention.

it feels like issues reported here don't get any attention.

Exactly.

I haven't seen anywhere (even in the roadmap of OpenShift 4.x releases) that this issue has been given more priority.

Official statement for OCP 3.11
https://access.redhat.com/solutions/2274201

Is it possible to enable HTTP/2 on OpenShift?
Resolution
Update to OpenShift 3.11 to enable HTTP/2 support in the OpenShift router (haproxy). Support for HTTP/2 is only available for inbound connections as the backend does not yet support this.

That being said, for me HTTP/2 is not working well with the default haproxy router in any Openshift release currently available on the market. If an application needs HTTP/2 support, it has to be worked around by deploying an alternative ingress controller (nginx, traefik, .. and maybe also some of the newer haproxy releases).

FWIW, openshift 4.3 (latest GA) still uses haproxy 1.8, but 4.4 and master switched to 2.0:
https://github.com/openshift/router/pull/71.

@frobware do you happen to know if 2.0 improves http/2 support?
Is there a clean way for interested folks outside Red Hat to try out 2.0 and report issues? Did the upgrade depend on any other 4.4 changes, or can it be swapped on an older cluster?

We've started enabling HTTP/2 support in 4.4 builds as of https://github.com/openshift/router/pull/75. Would love to get feedback on how far this support can get you...

Is there a clean way for interested folks outside Red Hat to try out 2.0 and report issues?

I think one way is to try 4.4.x nightly builds: https://openshift-release.svc.ci.openshift.org/

Did the upgrade depend on any other 4.4 changes, or can it be swapped on an older cluster?

I don't believe it did. This was just a replacement of the RPM in the built image: https://github.com/openshift/router/pull/71/commits/087f8dc100d638b3743dccf1d8841ad3702f0b92

Is there a clean way for interested folks outside Red Hat to try out 2.0 and report issues?

Another way is to access the nightly builds here: https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/

You'll still need creds: https://cloud.redhat.com/openshift/install/pull-secret

Hello, has anybody succeeded in creating an openshift route with HTTP/2 on openshift 4.4?
I am currently trying to connect two Thanos Query instances in two different clusters (the top query is on a 3.11 cluster and the "child" is openshift 4.4.10).

These components should communicate over gRPC, but I have not managed to make it work.

Hello, has anybody succeeded to create an openshift route with HTTP/2 on openshift 4.4?

HTTP/2 will be enabled in 4.5:

Procedure
Using the oc annotate command, enable HTTP/2 on an IngressController:

$ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

Procedure
Using the oc annotate command, enable HTTP/2 for all IngressControllers:

$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true

To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.

The connection from HAProxy to the application Pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction comes from the fact that HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the backend. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes.

Maybe I'm just misunderstanding, but the following seems contradictory; "The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated route" seems to imply h2 is supported on both "passthrough" and "re-encrypt". However, "To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate" clearly states a custom cert is needed, which is not possible with passthrough.

Also, if using re-encrypt on a cluster with a single HAProxy ingress controller, I'd think HAProxy would decrypt and inspect each h2 request in real time, then re-encrypt and reroute to a service that is matched by the ingress/route rules (host, path, etc), regardless of the connection. What am I missing here?

What's wrong with h2c? Use h2 up to haproxy, and the connection between haproxy and the backend is cleartext (h2c). Isn't this edge termination?

What's wrong with h2c? Use h2 up to haproxy, and the connection between haproxy and the backend is cleartext (h2c). Isn't this edge termination?

HAProxy requires ALPN, which is a TLS extension, so TLS is required.

However, "To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate" clearly states a custom cert is needed, which is not possible with passthrough.

The wording could be clearer. The backend application needs to have a "custom" serving certificate, but you are right that the certificate is not specified on the route itself. HTTP/2 works with passthrough routes because the client negotiates directly with the backend, which should be using its own "custom" serving certificate. The important thing is that it is not using the default certificate.

Also, if using re-encrypt on a cluster with a single HA proxy ingress controller, I'd think HAProxy would decrypt and inspect each h2 request in real time, then re-encrypt and reroute to a service that is matched by the ingress/route rules (host, path, etc), regardless the connection. What am I missing here ?

Are you asking why connection coalescing is an issue? The problem is on the client side. Some clients assume that two hosts are one and the same if they belong to the same domain, resolve to the same IP address, and use the same certificate. This means that the client could open an HTTP connection for one route and then try to re-use the same connection for another route if both routes use the default certificate. HAProxy does not switch backends for an active HTTP connection, so it forwards the re-used connection to the backend for the first route.

What's wrong with h2c? Use h2 up to haproxy, and the connection between haproxy and the backend is cleartext (h2c). Isn't this edge termination?

HAProxy requires ALPN, which is a TLS extension, so TLS is required.

Right. We are looking at using the AppProtocol API to determine if a backend supports h2c, but right now AppProtocol is alpha, so we don't have a good way to determine if h2c can be used for a given backend.

Right. We are looking at using the AppProtocol API to determine if a backend supports h2c, but right now AppProtocol is alpha, so we don't have a good way to determine if h2c can be used for a given backend.

Is it possible to support it by adding a new annotation like nginx.ingress.kubernetes.io/backend-protocol instead?

Such an annotation would be nice.
It would then also make it possible to use the FastCGI protocol and h2c within haproxy.

for example

....
  server {{$endpoint.ID}} {{$endpoint.IP}}:{{$endpoint.Port}} cookie {{$endpoint.IdHash}} weight {{$weight}} 
          {{- if eq ((index $cfg.Annotations "haproxy.router.openshift.io/backend-protocol") "fcgi") }} proto fcgi 
          {{- else if eq ((index $cfg.Annotations "haproxy.router.openshift.io/backend-protocol") "h2") }} proto h2
          {{- end }}
....

End-to-End HTTP/2

Are you asking why connection coalescing is an issue? The problem is on the client side. Some clients assume that two hosts are one and the same if they belong to the same domain, resolve to the same IP address, and use the same certificate. This means that the client could open an HTTP connection for one route and then try to re-use the same connection for another route if both routes use the default certificate. HAProxy does not switch backends for an active HTTP connection, so it forwards the re-used connection to the backend for the first route.

@Miciah Thanks for the response! I guess my confusion arose from the nature of connections in http2 vs http1. With http1, what you explained would make perfect sense, as each logical request/response will generally be served by one connection. With http2, since requests can be multiplexed in a single connection, I assumed the reverse proxy layer would have to inspect and route each logical request, regardless of the connection. If this were the case, it wouldn't matter which connection the request was served on, assuming it hit the same reverse proxy instance, and that instance could route to the appropriate backend. This is what I was expecting to happen, at least.

I tested H2 out today. I was not able to get passthrough working, but I was successful with re-encrypt. gRPC load balancing does not appear to work as I expected.

Updated - Add generated router config.

# Plain http backend or backend with TLS terminated at the edge or a
# secure backend with re-encryption.
backend be_secure:bd-broadcast-api:bd-broadcast-api
  mode http
  option redispatch
  option forwardfor
  balance leastconn

  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)]
  cookie 0fa4104a957dfddffbf00bb86b97f5a2 insert indirect nocache httponly secure
  server pod:bd-broadcast-api-7b88c6666b-6js7f:bd-broadcast-api:10.128.2.104:8080 10.128.2.104:8080 cookie a08891df1183413aaf4f41e0f42de8a5 weight 256 ssl alpn h2,http/1.1 verify required ca-file /var/lib/haproxy/router/cacerts/bd-broadcast-api:bd-broadcast-api.pem check inter 5000ms
  server pod:bd-broadcast-api-7b88c6666b-gr8pz:bd-broadcast-api:10.131.0.28:8080 10.131.0.28:8080 cookie bb5768e31e709d8d26aebc0c88adb679 weight 256 ssl alpn h2,http/1.1 verify required ca-file /var/lib/haproxy/router/cacerts/bd-broadcast-api:bd-broadcast-api.pem check inter 5000ms

Looks like the router is assigning balance leastconn across all pods selected by the service. Is there any way to round robin?

Looks like the router is assigning balance leastconn across all pods selected by the service. Is there any way to round robin?

Try oc -n bd-broadcast-api annotate routes/bd-broadcast-api haproxy.router.openshift.io/balance=roundrobin.

Is it possible to support it by adding a new annotation like nginx.ingress.kubernetes.io/backend-protocol instead?

We could take that approach to supporting h2c, but using a standard API is preferable if AppProtocol can serve that need.

Hi again. Apologies if this is not the right issue for this question; I am happy to create a new issue if needed.

During my testing of HTTP/2 (h2) and gRPC with the OCP router, I realized that logical requests are not being load balanced, only the h2 connections. My HAProxy configuration for my tests is posted above. I also tried annotating the route, per @Miciah's advice, which did update the configuration to roundrobin, but it only changes the load-balancing behavior at the connection level.

My test is rather simple: two gRPC servers are running in OCP 4.5, and I have a client that makes multiple requests (using the ingress route) in a tight loop. In the first scenario, I create a connection per call, and I can see the requests (and connections) being load balanced across each server. In the second scenario, I reuse the same connection for all the client calls, and all requests are received by a single server. When I run the same tests using Traefik 2.2 as the load balancer, all logical requests are correctly load balanced across each host even when the client uses a single connection.

@Aenima4six2 Well, to route based on gRPC URLs, ACLs are required, as described in the following blog post.
As far as I know this is not yet possible with the default OCP router template.

https://www.haproxy.com/blog/haproxy-1-9-2-adds-grpc-support/ => "HAProxy gRPC Support"

Copied from the blog post:

frontend fe_proxy
    bind :3001 ssl  crt /path/to/cert.pem  alpn h2
    acl is_codename_path path /CodenameCreator/KeepGettingCodenames
    acl is_otherservice_path path /AnotherService/SomeFunction
    use_backend be_codenameservers if is_codename_path
    use_backend be_otherservers if is_otherservice_path
    default_backend be_servers

@git001 I might be misunderstanding you, but I'm not routing by URLs at all; I'm routing using the host header. Requests are correctly flowing from my ingress/route to my pods; it's just the load balancing between replicated pods that is the issue. In short, the load balancing acts as if it's done at the connection level and not at the h2 stream level. See this link for a better description of the issue.

Route YAML for gRPC

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: bd-broadcast-api
  namespace: bd-broadcast-api
  labels:
    app.kubernetes.io/component: api
    app.kubernetes.io/instance: bd-broadcast-api-v1
    app.kubernetes.io/part-of: bd-broadcast-api
    bd/project-name: bd-broadcast-api
    environment: dev
spec:
  host: broadcast-api2.dev.bdreporting.local
  to:
    kind: Service
    name: bd-broadcast-api
    weight: 100
  port:
    targetPort: 8080-tcp
  tls:
    termination: reencrypt
    certificate: |
      -----BEGIN CERTIFICATE-----
      redacted
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
     redacted
      -----END RSA PRIVATE KEY-----
    destinationCACertificate: |
      -----BEGIN CERTIFICATE-----
      redacted
      -----END CERTIFICATE-----
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None

@Aenima4six2 sorry, I misread your comment https://github.com/openshift/origin/issues/13638#issuecomment-672889601.

The behavior you describe works as designed: once an H2 connection is established from one IP and the server is still reachable, all further communication goes to the same server, as @frobware described in comment https://github.com/openshift/origin/issues/13638#issuecomment-655937837

It looks like you have the default cookie-stickiness behavior, which could also be the reason for the behavior you describe.
Try disabling cookie stickiness:
oc -n bd-broadcast-api annotate route bd-broadcast-api haproxy.router.openshift.io/disable_cookies=true

This annotation is described in the doc Route-specific annotations

We are looking at using the AppProtocol API to determine if a backend supports h2c, but right now AppProtocol is alpha, so we don't have a good way to determine if h2c can be used for a given backend.

It's graduated to beta and enabled by default in Kubernetes 1.19, so I guess it can be added to Openshift/OKD 4.6?

ServiceAppProtocol feature gate is now beta and enabled by default, adding new AppProtocol field to Services and Endpoints. (#90023, @robscott) [SIG Apps and Network]
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#api-change-7
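For reference, with AppProtocol available, a backend could advertise h2c on its Service port roughly like this (a sketch: the service name is illustrative, and `kubernetes.io/h2c` is the value some ingress implementations recognize; this thread does not confirm which value the OpenShift router would honor):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service   # illustrative name
spec:
  selector:
    app: my-grpc-app      # illustrative selector
  ports:
    - name: grpc
      port: 8080
      targetPort: 8080
      appProtocol: kubernetes.io/h2c   # declares the backend speaks cleartext HTTP/2
```

An ingress controller that reads this field could then skip ALPN and speak h2c to the backend directly, which is exactly the gap discussed above.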
