I need help configuring NGINX to allow large request entities. We are using SSL with a certificate and everything works fine until we try to send anything greater than 1 MB.
NGINX Ingress controller version:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.19
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Azure
Kubernetes
SSL enabled
What happened:
When sending a PUT to a service endpoint I receive the error "PayloadTooLargeError: request entity too large" with code 413. It appears this is coming from this nginx controller.
What you expected to happen:
Not to see the error "PayloadTooLargeError: request entity too large"
How to reproduce it (as minimally and precisely as possible):
Our annotations on the nginx controller and the ingress:
nginx.ingress.kubernetes.io/ssl-passthrough: false
nginx.ingress.kubernetes.io/ssl-redirect: true
nginx.ingress.kubernetes.io/enable-cors: false
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-body-size: 25m
nginx.conf after deployment
daemon off;
worker_processes 2;
pid /run/nginx.pid;
worker_rlimit_nofile 523264;
worker_shutdown_timeout 10s ;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
real_ip_header X-Forwarded-For;
real_ip_recursive on;
set_real_ip_from 0.0.0.0/0;
geoip_country /etc/nginx/GeoIP.dat;
geoip_city /etc/nginx/GeoLiteCity.dat;
geoip_proxy_recursive on;
sendfile on;
aio threads;
aio_write on;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout 75s;
keepalive_requests 100;
client_header_buffer_size 1k;
client_header_timeout 60s;
large_client_header_buffers 4 8k;
client_body_buffer_size 8k;
client_body_timeout 60s;
http2_max_field_size 4k;
http2_max_header_size 16k;
types_hash_max_size 2048;
server_names_hash_max_size 1024;
server_names_hash_bucket_size 64;
map_hash_bucket_size 64;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
variables_hash_bucket_size 128;
variables_hash_max_size 2048;
underscores_in_headers off;
ignore_invalid_headers on;
include /etc/nginx/mime.types;
default_type text/html;
brotli on;
brotli_comp_level 4;
brotli_types application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
gzip_proxied any;
gzip_vary on;
# Custom headers for response
server_tokens on;
# disable warnings
uninitialized_variable_warn off;
# Additional available variables:
# $namespace
# $ingress_name
# $service_name
log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
map $request_uri $loggable {
default 1;
}
access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log /var/log/nginx/error.log notice;
resolver 10.0.0.10 valid=30s;
# Retain the default nginx handling of requests without a "Connection" header
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
map $http_x_forwarded_for $the_real_ip {
default $remote_addr;
}
# trust http_x_forwarded_proto headers correctly indicate ssl offloading
map $http_x_forwarded_proto $pass_access_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}
map $http_x_forwarded_port $pass_server_port {
default $http_x_forwarded_port;
'' $server_port;
}
map $http_x_forwarded_host $best_http_host {
default $http_x_forwarded_host;
'' $this_host;
}
map $pass_server_port $pass_port {
443 443;
default $pass_server_port;
}
# Obtain best http host
map $http_host $this_host {
default $http_host;
'' $host;
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols TLSv1.2;
# turn on session caching to drastically improve performance
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
# allow configuring ssl session tickets
ssl_session_tickets on;
# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;
# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
ssl_ecdh_curve auto;
proxy_ssl_session_reuse on;
upstream qa-qa-clipart-service-80 {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
keepalive 32;
server 10.244.0.124:3571 max_fails=0 fail_timeout=0;
}
upstream qa-qa-zipkin-80 {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
keepalive 32;
server 10.244.1.97:9411 max_fails=0 fail_timeout=0;
}
upstream upstream-default-backend {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
keepalive 32;
server 10.244.1.9:8080 max_fails=0 fail_timeout=0;
}
upstream qa-qa-design-service-80 {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
keepalive 32;
server 10.244.1.113:3567 max_fails=0 fail_timeout=0;
}
upstream qa-qa-user-asset-service-80 {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
keepalive 32;
server 10.244.0.122:3581 max_fails=0 fail_timeout=0;
}
## start server _
server {
server_name _ ;
listen 80 default_server reuseport backlog=511;
listen [::]:80 default_server reuseport backlog=511;
set $proxy_upstream_name "-";
listen 443 default_server reuseport backlog=511 ssl http2;
listen [::]:443 default_server reuseport backlog=511 ssl http2;
# PEM sha: eec00703d75d09101e8032513c0fe787242365d7
ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem;
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
location / {
set $proxy_upstream_name "upstream-default-backend";
set $namespace "";
set $ingress_name "";
set $service_name "";
port_in_redirect off;
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
proxy_set_header ssl-client-cert "";
proxy_set_header ssl-client-verify "";
proxy_set_header ssl-client-dn "";
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_pass http://upstream-default-backend;
}
# health checks in cloud providers require the use of port 80
location /healthz {
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
allow 127.0.0.1;
allow ::1;
deny all;
access_log off;
stub_status on;
}
}
## end server _
## start server qa.etchdesigner.com
server {
server_name qa.etchdesigner.com ;
listen 80;
listen [::]:80;
set $proxy_upstream_name "-";
listen 443 ssl http2;
listen [::]:443 ssl http2;
# PEM sha: 9a5e640327a36ee85934f2a9a5f3d14ac9cd6218
ssl_certificate /ingress-controller/ssl/qa-etchdesignerssl.pem;
ssl_certificate_key /ingress-controller/ssl/qa-etchdesignerssl.pem;
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
location ~* ^/zipkin\/?(?<baseuri>.*) {
set $proxy_upstream_name "qa-qa-zipkin-80";
set $namespace "qa";
set $ingress_name "qa-common";
set $service_name "";
# enforce ssl on server side
if ($pass_access_scheme = http) {
return 301 https://$best_http_host$request_uri;
}
port_in_redirect off;
client_max_body_size "25m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
proxy_set_header ssl-client-cert "";
proxy_set_header ssl-client-verify "";
proxy_set_header ssl-client-dn "";
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
rewrite /zipkin/(.*) /$1 break;
rewrite /zipkin / break;
proxy_pass http://qa-qa-zipkin-80;
}
location ~* ^/user-asset-service\/?(?<baseuri>.*) {
set $proxy_upstream_name "qa-qa-user-asset-service-80";
set $namespace "qa";
set $ingress_name "qa-common";
set $service_name "";
# enforce ssl on server side
if ($pass_access_scheme = http) {
return 301 https://$best_http_host$request_uri;
}
port_in_redirect off;
client_max_body_size "25m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
proxy_set_header ssl-client-cert "";
proxy_set_header ssl-client-verify "";
proxy_set_header ssl-client-dn "";
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
rewrite /user-asset-service/(.*) /$1 break;
rewrite /user-asset-service / break;
proxy_pass http://qa-qa-user-asset-service-80;
}
location ~* ^/design-service\/?(?<baseuri>.*) {
set $proxy_upstream_name "qa-qa-design-service-80";
set $namespace "qa";
set $ingress_name "qa-common";
set $service_name "";
# enforce ssl on server side
if ($pass_access_scheme = http) {
return 301 https://$best_http_host$request_uri;
}
port_in_redirect off;
client_max_body_size "25m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
proxy_set_header ssl-client-cert "";
proxy_set_header ssl-client-verify "";
proxy_set_header ssl-client-dn "";
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
rewrite /design-service/(.*) /$1 break;
rewrite /design-service / break;
proxy_pass http://qa-qa-design-service-80;
}
location ~* ^/clipart-service\/?(?<baseuri>.*) {
set $proxy_upstream_name "qa-qa-clipart-service-80";
set $namespace "qa";
set $ingress_name "qa-common";
set $service_name "";
# enforce ssl on server side
if ($pass_access_scheme = http) {
return 301 https://$best_http_host$request_uri;
}
port_in_redirect off;
client_max_body_size "25m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
proxy_set_header ssl-client-cert "";
proxy_set_header ssl-client-verify "";
proxy_set_header ssl-client-dn "";
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
rewrite /clipart-service/(.*) /$1 break;
rewrite /clipart-service / break;
proxy_pass http://qa-qa-clipart-service-80;
}
location / {
set $proxy_upstream_name "upstream-default-backend";
set $namespace "";
set $ingress_name "";
set $service_name "";
port_in_redirect off;
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
proxy_set_header ssl-client-cert "";
proxy_set_header ssl-client-verify "";
proxy_set_header ssl-client-dn "";
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_pass http://upstream-default-backend;
}
}
## end server qa.etchdesigner.com
# default server, used for NGINX healthcheck and access to nginx stats
server {
# Use the port 18080 (random value just to avoid known ports) as default port for nginx.
# Changing this value requires a change in:
# https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
listen 18080 default_server reuseport backlog=511;
listen [::]:18080 default_server reuseport backlog=511;
set $proxy_upstream_name "-";
location /healthz {
access_log off;
return 200;
}
location /nginx_status {
set $proxy_upstream_name "internal";
access_log off;
stub_status on;
}
location / {
set $proxy_upstream_name "upstream-default-backend";
proxy_pass http://upstream-default-backend;
}
}
}
stream {
log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
access_log /var/log/nginx/access.log log_stream;
error_log /var/log/nginx/error.log;
# TCP services
# UDP services
}
@Strandedpirate please make sure the location you are requesting has the correct client_max_body_size. From your configuration, you are sending the request to / and that location is configured with 1m.
You can increase the global size using the configuration configmap https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/configmap.md#proxy-body-size
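For reference, a minimal sketch of what that ConfigMap might look like. The name and namespace here are illustrative; they must match the `--configmap` flag your controller is launched with.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Illustrative name/namespace; must match the --configmap flag
  # passed to the nginx-ingress-controller deployment.
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Applies globally to all Ingresses served by this controller;
  # per-Ingress annotations override it.
  proxy-body-size: "25m"
```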
Closing. Please reopen if you have more questions
Hi, I've already set nginx.ingress.kubernetes.io/proxy-body-size: 25m as shown above, and the nginx.conf shows client_max_body_size "25m" for the route I'm hitting, but I still get this error. Am I misunderstanding what you're saying?
@Strandedpirate what I am saying is that you are sending the request to a URL that is not covered by client_max_body_size "25m"
What URL are you trying?
@Strandedpirate please also share the ingress rule you created
Here is the ingress yaml.
The url I'm hitting is https://qa.etchdesigner.com/design-service/design/37S?tenantId=2&referenceUserId=0&modifiedByUserId=0
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/enable-cors: "false"
nginx.ingress.kubernetes.io/proxy-body-size: 25m
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: "false"
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
creationTimestamp: 2017-11-10T21:52:14Z
generation: 18
labels:
app: common
chart: common-0.176.0
environment: qa
heritage: Tiller
project: designer
provider: buildasign
release: qa-common
name: qa-common
namespace: qa
resourceVersion: "5798771"
selfLink: /apis/extensions/v1beta1/namespaces/qa/ingresses/qa-common
uid: 6986dea7-c661-11e7-ac8d-000d3a7212bc
spec:
rules:
- host: qa.etchdesigner.com
http:
paths:
- backend:
serviceName: qa-design-service
servicePort: 80
path: /design-service
- backend:
serviceName: qa-user-asset-service
servicePort: 80
path: /user-asset-service
- backend:
serviceName: qa-clipart-service
servicePort: 80
path: /clipart-service
- backend:
serviceName: qa-zipkin
servicePort: 80
path: /zipkin
tls:
- hosts:
- etchdesigner.com
- qa.etchdesigner.com
- stage.etchdesigner.com
secretName: etchdesignerssl
status:
loadBalancer:
ingress:
- {}
The max size is being restricted to 100 KB, not 1 MB (the lowest value defined in the nginx.conf). I'm confused.
I'm facing the same problem with k8s.gcr.io/nginx-ingress-controller:0.9.0-beta.15. Any news/solutions/workarounds?
Update: It works now
An hour later and it's working... I'm not sure if this was the solution, but I restarted the nginx-controller and the services that had trouble with the upload limit manually. This is what I currently have under annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: 500m
ingress.kubernetes.io/proxy-body-size: 500m
I was stuck on this for a day or so!
It seems that even though nginx.ingress.kubernetes.io/proxy-body-size is the recommended annotation, ingress.kubernetes.io/proxy-body-size is what actually works. Maybe I'll update my cluster and test again.
NGINX Ingress controller
Release: 0.10.2
Build: git-fd7253a
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
I would confirm this too:
This doesn't work: nginx.ingress.kubernetes.io/proxy-body-size: 500m
This does work: ingress.kubernetes.io/proxy-body-size: 500m
But one thing doesn't appear to have a solution right now: what if you do not want a restriction at all? Docker images, for example, can be hard to predict in size. Should I just set some absurdly high value that my server could never support anyway and leave it to the gods to figure out?
@christhomas 0 will turn off any limit
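For illustration, a sketch of the annotation that is supposed to disable the limit (note that annotation values must be quoted strings; whether "0" is honored appears to vary by controller version):

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    # "0" maps to `client_max_body_size 0;` in nginx, which disables
    # the body-size check entirely. Must be a quoted string.
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
```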
No, that isn't happening. I tried 0, "0", 0m, and "0m", and all the attempts ended up with "1m" (the default, as I expected).
I am having the same issue (status code 413) in Google Cloud Platform when my request header is large. I have tried all kinds of configuration changes.
I deployed quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0 with mandatory commands and non-rbac.
EDIT1: Master node version 1.7.12-gke.1 and worker node version 1.7.10-gke.0
EDIT2: Problem solved. Google Cloud had an old load balancer, so I deleted it and re-added the Ingress with the annotations.
We had the same issue in Amazon EKS using quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0.
In the end we actually found the opposite of what @christhomas found above:
nginx.ingress.kubernetes.io/proxy-body-size: 500m did work
ingress.kubernetes.io/proxy-body-size: 500m did not work
We also used kubernetes.io/ingress.class: "nginx" as an annotation on the ingress alongside the proxy-body-size one above, but I'm unsure if that's needed.
I have tested each one alone:
nginx.ingress.kubernetes.io/proxy-body-size: 500m
ingress.kubernetes.io/proxy-body-size: 500m
but it only worked with both annotations; I don't know why.
@JamesLaverack & @christhomas
Generally you have nginx in the pod that serves your app, so I think you have to add
client_max_body_size 500M;
to the nginx configuration in the app pod, or to your ConfigMap.
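To illustrate the suggestion above: if the app pod runs its own nginx in front of the application, the directive would go in that server block. This is a sketch; the listen port and upstream address are illustrative, not taken from the thread.

```nginx
server {
    listen 80;

    # Raise (or disable with 0) the body-size limit for the app pod's
    # own nginx, independently of the ingress controller's limit.
    # Both limits must allow the upload or a 413 is returned.
    client_max_body_size 500m;

    location / {
        proxy_pass http://127.0.0.1:3000;  # illustrative app port
    }
}
```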
I just tested both independently:
nginx.ingress.kubernetes.io/proxy-body-size: 500m --> workedingress.kubernetes.io/proxy-body-size: 500m --> Did not workI believe it all depends on the version of nginx ingress controller installed in one's system. Here is my version quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0. I believe nginx.ingress.kubernetes.io/proxy-body-size: 500m should work for any version => 0.22.0
Neither of the annotations works for me. I'm using version 0.26.1. How can I troubleshoot it?
BTW, I can upload files directly to my API service, so the issue must be with my ingress configuration.
This is my configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: api-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/proxy-body-size: "500m"
spec:
tls:
- secretName: web-tls
rules:
- host: api.example.com
http:
paths:
- path: /v1/?(.*)
backend:
serviceName: api
servicePort: 80
- host: api.example.com
http:
paths:
- path: /?(.*)
backend:
serviceName: legacy-api
servicePort: 80
I am also having this issue. @poyingatsounon did you find any solution?
Make sure you are deploying the helm chart stable/nginx-ingress and not nginx/nginx-ingress as the latter doesn't seem to support the required annotations
That's true. The stable/nginx-ingress chart works, not nginx/nginx-ingress.