Ingress-nginx: traffic split by request header or cookie

Created on 25 Apr 2018 · 22 Comments · Source: kubernetes/ingress-nginx

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

NGINX Ingress controller version:

0.12.0

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:29:47Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

I have two services for the same host domain, but I want to split traffic between them based on keywords in the request header. How can I write the Ingress configuration to achieve this?

What you expected to happen:

Traffic is split between the two services based on keywords in the request header.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

kind/feature lifecycle/stale

Most helpful comment

The feature is published as part of version 0.21.0: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary

I've written a blog post https://www.elvinefendi.com/2018/11/25/canary-deployment-with-ingress-nginx.html showing how to use it with examples.
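
For reference, the canary annotations can be used roughly as follows. This is a minimal sketch: the host, service names, and the `X-Canary` header name are placeholders, and `apiVersion: extensions/v1beta1` matches clusters of that era.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-canary
  annotations:
    # Mark this Ingress as the canary for the same host/path
    nginx.ingress.kubernetes.io/canary: "true"
    # Requests carrying the header "X-Canary: always" are routed
    # to the canary backend below; others go to the main Ingress
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
spec:
  rules:
  - host: mini-echo.io
    http:
      paths:
      - backend:
          serviceName: svcB
          servicePort: 80
```

`canary-by-header-value`, `canary-by-cookie`, and `canary-weight` are the related annotations for matching a specific header value, a cookie, or a percentage split.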

All 22 comments

At first glance it seems like something you would have to implement using a custom nginx.conf template.

Yeah... but it would be better if this could be configured through the Ingress controller. By the way, will Ingress support splitting traffic by request header or cookie in the future?

@aledbf should be able to shed light on that.

To me this sounds like a very specific feature that would require a proper design phase. There are more things to it than just matching _a header_. First of all you need to be able to express a (probably highly customized) routing strategy using annotations.

Maybe you can edit your original post and include more details about the scenario you have in mind?

For instance, there is a running service named svcA, and we have deployed a new service version named svcB. Both serve the same host domain, but we want to route part of the traffic from new clients to svcB based on a version keyword in the header.

In other words, we may call it A/B test scenario.

It seems like content based routing in istio https://istio.io/docs/tasks/traffic-management-v1alpha3/request-routing.html#content-based-routing

I was expecting more details like the content of the header and the routing strategy to infer from it. In other words: how would you do it manually on a standalone nginx instance.

For nginx alone this task seems impossible. We did something similar before by designing a DSL
to describe how a request should be routed, and using nginx with Lua to parse the DSL rules and match them against every request.

It seems like content based routing in istio https://istio.io/docs/tasks/traffic-management-v1alpha3/request-routing.html#content-based-routing

@oilbeater Yeah, the only difference is that we think about north-south traffic routing based on the content of the header, such as cookie or user-agent and so on.

I was expecting more details like the content of the header and the routing strategy to infer from it. In other words: how would you do it manually on a standalone nginx instance.

@antoineco For instance, my partial server conf is:

    ## start server mini-echo.io
    server {
        server_name mini-echo.io ;

        listen 80;

        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {

            port_in_redirect off;

            set $proxy_upstream_name "default-old-nginx-80";

            set $namespace      "default";
            set $ingress_name   "echo";
            set $service_name   "old-nginx";

            client_max_body_size                    "20m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         "off";
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            if ($http_version = 2) {
                proxy_pass http://default-old-nginx-80;
                break;
            }
            # proxy_redirect                        off;
            return 503;
        }

    }
    ## end server mini-echo.io

If I curl http://mini-echo.io with the request header 'Version: 2', the request is proxied to the upstream default-old-nginx-80; without that header, nginx returns a 503 error page.
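
As an aside, the same routing can be sketched on a standalone nginx without `if` around `proxy_pass`, by mapping the header to an upstream name. The upstream names here are illustrative and assume matching `upstream` blocks exist:

```nginx
# http{} context: select a backend from the "Version" request header
map $http_version $echo_backend {
    default  "";                    # no match: no backend selected
    "2"      default-old-nginx-80;  # "Version: 2" -> old service
}

server {
    server_name mini-echo.io;
    listen 80;

    location / {
        # No backend matched the header: fail fast
        if ($echo_backend = "") {
            return 503;
        }
        # $echo_backend resolves against the named upstream block
        proxy_pass http://$echo_backend;
    }
}
```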

Yeah... but it would be better if this could be configured through the Ingress controller. By the way, will Ingress support splitting traffic by request header or cookie in the future?

We really want to add this feature, but it is not clear how to express it (yet).

+1

@aledbf I am working for this feature, and I will fire a PR in the next week.

@chenqz1987 you can raise a [WIP] PR first.

+1

+1

+1

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

+1

What would also be nice is something like "traffic split by custom snippet". Right now we use the configuration snippet in the Ingress to run a bunch of logic that decides when someone is a spammer and sends them to a different backend. It does work, but it feels like it could be integrated much more cleanly alongside these other traffic-shifting strategies.
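
One way to bridge custom detection logic to the canary mechanism is to have that logic set a cookie and let `canary-by-cookie` do the routing. A sketch under that assumption; the cookie name, host, and service name are made up:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spam-backend
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # Requests whose "suspected_spammer" cookie is set to "always"
    # are routed to this backend instead of the main one
    nginx.ingress.kubernetes.io/canary-by-cookie: "suspected_spammer"
spec:
  rules:
  - host: mini-echo.io
    http:
      paths:
      - backend:
          serviceName: spam-sink
          servicePort: 80
```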

