Ingress-nginx: Ingress changes original content type

Created on 18 Mar 2020 · 9 comments · Source: kubernetes/ingress-nginx

I have a problem where the Ingress changes the original MIME type sent by my Docker container.
It changes Content-Type: application/javascript to Content-Type: text/html.
This breaks my Angular application.

I installed the stable/nginx-ingress Helm chart on AKS and applied a very simple Ingress definition:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-nginx-ingress-controller
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: eps-backend
          servicePort: 80
        path: /api(/|$)(.*)
      - backend:
          serviceName: eps-frontend
          servicePort: 80
        path: /(.*)

Is there some configuration that I did not find that can turn this behavior off and retain the original content type?
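A likely cause here is not nginx rewriting the header itself: with nginx.ingress.kubernetes.io/rewrite-target: / and regex paths, every matched request is rewritten to /, so a request for a JavaScript bundle is answered with the frontend's index.html, which is served as text/html. A minimal Python sketch of that rewrite behavior (the path regexes are taken from the Ingress above; the simulation itself is an assumption for illustration, not the controller's code):

```python
import re

def rewrite(path_regex, target, uri):
    # Simulates an nginx `rewrite <regex> <target>` for a single URI (sketch).
    # ingress-nginx turns each regex `path` into a location and applies
    # `rewrite-target` as the replacement string.
    return re.sub("^" + path_regex + "$", target, uri)

# "/" contains no capture groups, so every matched URI collapses to "/":
print(rewrite(r"/(.*)", "/", "/main.js"))            # -> "/"
print(rewrite(r"/api(/|$)(.*)", "/", "/api/users"))  # -> "/"
```

Because the frontend answers "/" with index.html, the browser receives text/html for what it requested as a script.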

kind/support lifecycle/rotten

Most helpful comment

I fixed this error by changing my nginx-ingress config file.

I changed my config from this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: client-cluster-ip-service
              servicePort: 3000
          - path: /api(/|$)(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000

to this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: client-cluster-ip-service
              servicePort: 3000
          - path: /api/?(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000

I hope this helps!
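The fix above works because rewrite-target: /$1 re-inserts the captured remainder of the path, so asset URLs survive the rewrite instead of collapsing to /. A small Python sketch of the corrected mapping (again a simulation of the regex behavior, not controller code; Python's replacement syntax uses \1 where nginx uses $1):

```python
import re

def rewrite(path_regex, target, uri):
    # Simulates an nginx rewrite for a single URI (sketch).
    return re.sub("^" + path_regex + "$", target, uri)

print(rewrite(r"/?(.*)", r"/\1", "/main.js"))        # -> "/main.js" (path preserved)
print(rewrite(r"/api/?(.*)", r"/\1", "/api/users"))  # -> "/users" (prefix stripped)
```

The frontend now receives the real file path and serves the bundle with its correct Content-Type, while API calls reach the backend with the /api prefix stripped.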

All 9 comments

We are getting the same thing. Did you figure anything out?

I'm getting the same thing, for a React frontend!

Getting the same thing.

For anyone who experiences this problem: I did not manage to resolve it with this Helm chart, but other charts worked for me. In my case I used bitnami/nginx-ingress-controller, which worked out of the box after a normal helm install.


Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
