Ingress-nginx: Ingress controller fails to reach a single Nginx service exposing 2 ports

Created on 31 Jan 2018 · 21 Comments · Source: kubernetes/ingress-nginx

Hello,

I have a service which exposes:

  • /foo on port 5611
  • /bar on port 5612

I want the Ingress controller to load-balance both endpoints.

I have tried this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: my-service
          servicePort: 5611
      - path: /bar
        backend:
          serviceName: my-service
          servicePort: 5612

and this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: my-service
          servicePort: 5611
  - host: my.example.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: my-service
          servicePort: 5612

And both fail...

my-service properly exposes /foo on port 5611 and /bar on port 5612, each protected by a different .htaccess file for basic authentication.
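
For context, a multi-port ClusterIP Service for this setup would look roughly like the following (a minimal sketch; the selector and port names are illustrative, not the actual manifest):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app            # illustrative label; the real selector may differ
  ports:
    - name: foo            # serves /foo
      protocol: TCP
      port: 5611
      targetPort: 5611
    - name: bar            # serves /bar
      protocol: TCP
      port: 5612
      targetPort: 5612
  type: ClusterIP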

When I try to access /foo, everything works fine.

When I try to access /bar, the login challenge is the one from /foo.

If I remove basic auth, accessing /foo still works as expected, while accessing /bar through the Ingress URL returns HTTP 404.

If I change the service type of my-service from ClusterIP to NodePort in order to access it directly (without Ingress), both endpoints work as expected (with and without basic auth).
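
For the direct-access test, only the service type changes (again a sketch; the nodePort values are arbitrary examples in the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app            # same illustrative label as above
  ports:
    - name: foo
      protocol: TCP
      port: 5611
      nodePort: 30611      # arbitrary example
    - name: bar
      protocol: TCP
      port: 5612
      nodePort: 30612      # arbitrary example
  type: NodePort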

The problem clearly comes from the Ingress controller.

Shouldn't this be considered a defect in the Ingress controller?

Thanks.

Note: similar issue at https://github.com/kubernetes/ingress-nginx/issues/1655

Most helpful comment

IMHO, there is still a defect in here...

All 21 comments

Hi, I was just reading about #1655 and saw your post.

I suppose you are passing your auth via a header parameter. Have you checked that the nginx behind the Ingress is receiving and forwarding it?

Even before authenticating, the realm shown in the browser's login dialog is not the correct one.

My nginx.conf is as follows:

events {
  worker_connections  1024;
}
http {
  upstream foo-service {
    server foo-service:5001;
  }
  server {
    listen 5611;
    auth_basic              "Log on foo";
    auth_basic_user_file    /etc/nginx/.htaccess.foo;
    server_name             foo-service;
    ssl                     off;

    location ~ (/foo) {
      proxy_pass       http://foo-service;
    }
  }

  upstream bar-service {
    server bar-service:5001;
  }
  server {
    listen 5612;
    auth_basic              "Log on bar";
    auth_basic_user_file    /etc/nginx/.htaccess.bar;
    server_name             bar-service;
    ssl                     off;

    location ~ (/bar) {
      proxy_pass       http://bar-service;
    }
  }
}

If I access /foo, it works fine and the dialog shows "Log on foo".
But if I access /bar, the dialog does not show "Log on bar"; it shows "Log on foo".

So to me, when an Ingress routes to different ports of the same service, the nginx configuration generated on the Ingress side is probably wrong... Looks like a defect.

As a workaround, I have created two distinct Kubernetes Services (one for foo and one for bar), both backed by the same Deployment:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: proxy-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: proxy-app
...
---
apiVersion: v1
kind: Service
metadata:
  name: foo-proxy-service
spec:
  selector:
    app: proxy-app
  ports:
    - protocol: TCP
      port: 5611
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: bar-proxy-service
spec:
  selector:
    app: proxy-app
  ports:
    - protocol: TCP
      port: 5612
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-proxy-service
          servicePort: 5611
      - path: /bar
        backend:
          serviceName: bar-proxy-service
          servicePort: 5612

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

IMHO, there is still a defect in here...

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

/reopen

@cam-cen: You can't reopen an issue/PR unless you authored it or you are a collaborator.

/reopen

@JulienCarnec: Reopened this issue.

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

/reopen

@gWOLF3: You can't reopen an issue/PR unless you authored it or you are a collaborator.

Thank you @aledbf
