Charts: [stable/kong] Bad Request or problem with source IP

Created on 7 Mar 2019 · 14 comments · Source: helm/charts

Is this a request for help?:

Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
probably BUG REPORT

Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.5-eks-6bad6d", GitCommit:"6bad6d9c768dc0864dab48a11653aa53b5a47043", GitTreeState:"clean", BuildDate:"2018-12-06T23:13:14Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
stable/kong

What happened:

helm install stable/kong --name kong --namespace ingress --values kong-values.yaml

values:

ingressController:
  enabled: true
  replicaCount: 1

proxy:
  type: LoadBalancer

admin:
  useTLS: false
  type: ClusterIP


postgresql:
  enabled: false

env:
  database: postgres
  pg_user: kong-staging-user
  pg_password: xxxx
  pg_database: kong
  pg_host: psql-postgresql.ingress.svc.cluster.local
  pg_port: 5432


readinessProbe:
  httpGet:
    scheme: HTTP
livenessProbe:
  httpGet:
    scheme: HTTP


podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8444"

ip-restriction plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: prometheus-ip-restriction
  namespace: monitoring
config:
  whitelist:
  - x.x.x.x
plugin: ip-restriction

prometheus ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    configuration.konghq.com: strip-path
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: prometheus-ip-restriction

What you expected to happen:
When I visit the URL from an IP that is on the ip-restriction whitelist, I still get
{"message":"Your IP address is not allowed"}

From the logs I see:
10.1.39.108 - - [07/Mar/2019:14:24:39 +0000] "GET /favicon.ico HTTP/1.1" 403 44 "http://%URLredacted%/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36"

That is not my IP, but the IP of the node.
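
For reference, the standard Kubernetes mechanism for preserving the client source IP on a LoadBalancer Service is externalTrafficPolicy: Local. A minimal sketch, assuming the Service name this release creates:

kubectl patch svc kong-kong-proxy -n ingress -p '{"spec":{"externalTrafficPolicy":"Local"}}'

On its own this may not be enough here, because the ELB still rewrites the source address unless proxy protocol or the X-Forwarded-For header is used.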

I have tried to add these annotations to the Kong proxy Service (kong-kong-proxy):
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'

But when I do, I get "Bad request".
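
For reference, the same annotations can be set through the chart values instead of editing the Service by hand; a sketch using the proxy.annotations key that the accepted solution further down also uses:

proxy:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'

The "Bad request" is expected in this state: the ELB now prepends a PROXY protocol header, but Kong is not yet configured to read it.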

When I change the ELB port 80 listener from TCP to HTTP and set
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

I again get {"message":"Your IP address is not allowed"}

I don't want to switch the ELB listener to HTTPS, because I want to terminate TLS inside the Kong ingress; when I switch to HTTPS on the ELB, I have to terminate at the ELB.

But I am not able to get IP restriction working.

Can you please help?



All 14 comments

@hbagdi Can I ask you for help?

Thank you.

@hbagdi Thanks.

I have tried:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  labels:
    app: kong
    chart: kong-0.9.2
    heritage: Tiller
    release: kong
  name: kong-kong-proxy
  namespace: ingress
spec:
  externalTrafficPolicy: Local

I also tried service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp

But with tcp (on the ELB and on the Kong ingress) I only get "Bad Request".
I have to set HTTP on the ELB; then requests go through, but there is still no IP whitelisting.

Any ideas?

If you want to terminate SSL at Kong, then you need to do the following:

  • Use TCP as the LB protocol and instance protocol; Kong won't be able to see the client IP with this alone.
  • Enable proxy protocol on the ELB, and then configure Kong to listen with proxy_protocol enabled (https://docs.konghq.com/1.0.x/configuration/#proxy_listen). A values-level sketch follows below.
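
A rough sketch of those two steps as chart values (assumptions: the chart turns entries under env into KONG_* environment variables, as the rest of this thread relies on, and 8000/8443 are the chart's default proxy listen ports):

proxy:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'

env:
  proxy_listen: 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
  real_ip_header: proxy_protocol
  trusted_ips: 0.0.0.0/0,::0

One caveat: anything probing the proxy port directly (for example plain HTTP health checks) would then also need to send a PROXY header, or it will hit the same "broken header" errors seen later in this thread.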

@hbagdi

My goal is to have the source IP inside the cluster. Can you please be more detailed if I want to have the source IP inside Kong (I will terminate SSL on a classic ELB)?

So what do I need to do to get the source IP inside Kong?

ELB set to
HTTP - 80 -> HTTP - 30318
HTTPS - 443 -> HTTP - 30318 - terminate using SSL

or
HTTP - 80 -> HTTP - 30318
HTTPS - 443 -> HTTPS - 32287 - terminate using SSL

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:XXX:certificate/YYYY
  labels:
    app: kong
    chart: kong-0.9.2
    heritage: Tiller
    release: kong
  name: kong-kong-proxy
  namespace: ingress
spec:
  externalTrafficPolicy: Local

I don't know how to enable proxy_protocol using Helm. Is it something like this?

env:
  KONG_NGINX_PROXY_PROXY_LISTEN_PROXY_PROTOCOL=true

I edited nginx-kong.conf manually to:

server {
    server_name kong;
    listen 0.0.0.0:8000 proxy_protocol;
    listen 0.0.0.0:8443 ssl proxy_protocol;

    real_ip_header     proxy_protocol;

Anyway, now I am getting:

2019/03/08 08:08:17 [error] 74#0: *12154 broken header: "<binary TLS handshake bytes>" while reading PROXY protocol, client: 10.1.45.39, server: 0.0.0.0:8443

I have also tried
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

then I received:

2019/03/08 07:55:12 [error] 74#0: *7013 broken header: "GET / HTTP/1.1
host: prometheus.qa.DOMAIN.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,cs-CZ;q=0.8,cs;q=0.7
Cookie: _ga=GA1.2.931025211.1546876625; intercom-id-wlcfd0bk=a5d164dd-8e32-424f-92fb-658e8956cab8; intercom-id-wozldjdl=d4518fc5-9c0d-4161-a1f1-43e713c3b1ba
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36
X-Forwarded-For: <REAL CLIENT IP HERE>
X-Forwarded-Port: 80
X-Forwarded-Proto: http
Connection: keep-alive" while reading PROXY protocol, client: 10.1.46.23, server: 0.0.0.0:8000

I don't know what is wrong now.

Please advise. Thanks.

I have the following setup:
ELB --> Nginx Ingress --> Kong proxy
The Kong proxy is not getting the client source IP, but the internal IP of the nginx ingress controller.

All other pods exposed through nginx work as expected and the client source IP is preserved.
@hbagdi Can you please advise where the problem could be?

@MilanDasek I figured out how to solve it.
Add these under values.env:

trusted_ips: 0.0.0.0/0,::0
real_ip_recursive: "on"
real_ip_header: X-Forwarded-For

@edijsdrezovs Thanks!

So the whole solution is:

proxy:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'http'
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: %CERT_ARN%


env:
  trusted_ips: 0.0.0.0/0,::0
  real_ip_recursive: "on"
  real_ip_header: X-Forwarded-For

And then change the ELB to forward HTTPS (443) traffic to the HTTP NodePort on the EKS nodes.

Seems to be working fine.

Thank you!
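
One way to verify (assumptions: the app=kong label from the Service shown earlier selects the proxy pods, and the proxy container is named kong):

kubectl logs -n ingress -l app=kong -c kong --tail=20

The leading address in each access-log line should now be the real client IP instead of the node/ELB address, and the ip-restriction plugin should stop returning 403 for whitelisted clients.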

@MilanDasek I'd suggest adding an additional annotation to the ingress:

nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
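
That is, on the Ingress object handled by the nginx controller, roughly (a sketch; the ingress class annotation is an assumption for an nginx-fronted setup):

metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"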

I guess you can close this issue @hbagdi

/close

@MilanDasek Glad it worked out for you. Please close this issue.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.
