Hi,
I'm seeing odd behavior where, when using the rewrite annotation, a trailing slash is appended to the end of the path. Here's my YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /app/webapi
  name: rewrite
  namespace: dev
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /webapi
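For clarity, here's the mapping I expect this rewrite-target annotation to produce (this is my own understanding of the annotation, so the exact paths below are assumptions on my part):

/webapi            -> /app/webapi            (no trailing slash)
/webapi/users      -> /app/webapi/users
/webapi/users?id=1 -> /app/webapi/users?id=1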
Here's what I'm seeing in nginx.conf:
server {
    server_name foo.bar.com;
    listen 80;
    listen [::]:80;
    set $proxy_upstream_name "-";
    location ~* ^/webapi\/?(?<baseuri>.*) {
        set $proxy_upstream_name "upstream-default-backend";
        set $namespace "dev";
        set $ingress_name "rewrite";
        set $service_name "";
        port_in_redirect off;
        client_max_body_size "1m";
        proxy_set_header Host $best_http_host;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_redirect off;
        proxy_buffering off;
        proxy_buffer_size "4k";
        proxy_buffers 4 "4k";
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        rewrite /webapi/(.*) /app/webapi/$1 break;
        proxy_pass http://upstream-default-backend;
    }
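If I'm reading the generated rewrite correctly (this is my hand-trace of nginx rewrite semantics, so please correct me if I've got it wrong), it behaves like this for a few sample URIs:

# rewrite /webapi/(.*) /app/webapi/$1 break;
# /webapi/users  matches, $1 = "users"  -> /app/webapi/users
# /webapi/       matches, $1 = ""       -> /app/webapi/   (trailing slash)
# /webapi        no match (the regex requires the slash), passed through unchanged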
Is this the expected behavior? I was expecting something like location ~* /webapi. I've tried several of the latest nginx controller versions, and they all behave the same way.
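Concretely, here's roughly what I expected the controller to emit (a hand-written sketch, not actual controller output; the split into two locations is my own guess at how to avoid the trailing slash):

# exact match: /webapi -> /app/webapi (no trailing slash appended)
location = /webapi {
    rewrite ^ /app/webapi break;
    proxy_pass http://upstream-default-backend;
}

# prefix match: /webapi/foo -> /app/webapi/foo
location ~* ^/webapi/(.*) {
    rewrite ^/webapi/(.*)$ /app/webapi/$1 break;
    proxy_pass http://upstream-default-backend;
}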
Thanks.
This seems related to https://github.com/kubernetes/ingress/issues/1260.
This issue is still around