Istio: Istio 0.8.0 VirtualService mismatches hosts when a port is added

Created on 21 Jun 2018  ·  65 Comments  ·  Source: istio/istio

Describe the bug
When using the IngressGateway and defining a VirtualService, the hosts list distinguishes between <host> and <host>:<port>

Expected behavior
Per RFC 7230 §5.4, the HTTP Host: header may be either the plain <host> or <host>:<port>. The VirtualService should treat the two as the same.
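
For reference, the grammar in RFC 7230 §5.4 makes the port optional:

Host = uri-host [ ":" port ]

so a request may legitimately carry either form.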

Steps to reproduce the bug
I started with the howto at https://istio.io/docs/tasks/traffic-management/ingress/

My domain name is authd.test.run1.k8s.xxx.ca.
I defined a Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: authd-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "authd.test.run1.k8s.xxx.ca"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "authd.test.run1.k8s.xxx.ca"

And a VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-http
  namespace: test
spec:
  gateways:
  - authd-gateway
  hosts:
  - authd.test.run1.k8s.xxx.ca
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: authd-http
        port:
          number: 1080

Then try with curl:

curl -kvs -HHost:authd.test.run1.k8s.xxx.ca https://authd.test.run1.k8s.xxx.ca -I
HTTP/1.1 200 OK

while:

curl -kvs -HHost:authd.test.run1.k8s.xxx.ca:80 https://authd.test.run1.k8s.xxx.ca -I
HTTP/1.1 404 Not Found

In the second case, I see in the logs:

ingressgateway [2018-06-21T15:57:28.956Z] "HEAD / HTTP/1.1" 404 NR 0 0 2 - "10.132.0.5" "curl/7.47.0" "98c6f630-3d4e-93a1-870b-6b620b50504a" "authd.test.run1.k8s.xxx.ca:80" "-"

To make it work I had to change the VirtualService to:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-http
  namespace: test
spec:
  gateways:
  - authd-gateway
  hosts:
  - authd.test.run1.k8s.xxx.ca
  - authd.test.run1.k8s.xxx.ca:80
  - authd.test.run1.k8s.xxx.ca:443
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: authd-http
        port:
          number: 1080

Either there is another way to match the hosts (like using *, as sketched below), or the documentation should warn about this.
Note that Go HTTP/gRPC clients seem to always send the port along with the hostname.
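
For illustration, here is a sketch of the wildcard variant (the Gateway's own hosts list still restricts what reaches this VirtualService, so the match can stay loose):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-http
  namespace: test
spec:
  gateways:
  - authd-gateway
  hosts:
  - "*"   # matches any Host value, with or without a :port suffix
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: authd-http
        port:
          number: 1080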

Version
k8s 1.10.2
Istio 0.8.0

Is Istio Auth enabled or not?
No mTLS active

Environment
GKE


Most helpful comment

I think the bug has not been resolved. @prune998 @vadimeisenbergibm
The two merged PRs https://github.com/istio/istio/pull/7994 and https://github.com/istio/istio/pull/7995 just add the port to the VirtualService domains in Envoy, but I cannot create a VirtualService with the <host>:<port> scheme; the admission webhook denies it. The error is as follows:

admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: domain name "10.110.25.114.xip.io:443" invalid (label "io:443" invalid)

All 65 comments

cc @andraxylia

I met the same problem, but I cannot make it work by adding a "host:port" entry in the VirtualService hosts section. When creating the VirtualService, the error message is:
"Error: configuration is invalid: 2 errors occurred:

  • domain name "xxx.xxx:9090" invalid (label "io:9090" invalid)
  • xxx.xxx:9090 is not a valid IP"

Environment
Kubernetes

@wansuiye please share the full VirtualService manifest.

@prune998
virtual service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consul
spec:
  hosts:
  - "config.xxx.yyy.io"
  - "config.xxx.yyy.io:80"
  gateways:
  - config-gateway
  - mesh
  http:
  - route:
    - destination:
        port:
          number: 9090
        host: config

When creating it, the error message is:

# istioctl create -f config.yml
Error: configuration is invalid: 2 errors occurred:

* domain name "xxx.yyy.io:80" invalid (label "io:80" invalid)
* config.xxx.yyy.io:80 is not a valid IP

env:
kubernetes 1.9.5
istio 0.8.0

It's working with kubectl apply -f config.yml but not with istioctl create -f config.yml.
It may be a bug then... but not related to my issue.
Please open another issue.

Hi everyone, I was also hit by this bug when following the httpbin tutorial... I ended up with the config below, which works for me. (Also, I don't have a load balancer, so I'm accessing the ingress-gateway on the NodePort.)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/citizenstig/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 8000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.no"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  gateways:
    - httpbin-gateway
  hosts:
    - "httpbin.example.no"
    - "httpbin.example.no:31380"
  http:
  - route:
    - destination:
        port:
          number: 8000
        host: httpbin
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: httpbin
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  type: NodePort

Without the extra hosts, I would receive this error in the ingressgateway pod:

[2018-07-11T12:02:34.478Z] "GET / HTTP/1.1" 404 - 0 0 0 - "10.244.1.1" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36" "6825cfdb-2699-9d1f-b6c4-07387fb0d32e" "httpbin.example.no:31380" "-"

(notice the requested hostname "httpbin.example.no:31380")

Edit:
Also, if I modify the Host header with curl, it works without the extra host in the VirtualService:
curl -v -k -H Host:httpbin.example.no http://httpbin.example.no:31380

@andraxylia (as you were tagged on this bug), I'm just testing the latest release, using the release-1.0-latest-daily images.

What I see now is the exact same behaviour as the one described at the top of this issue.
Here are the istio-ingressgateway logs for two tests, first without the port, second with the port:

[2018-07-19T13:57:39.038Z] "GET / HTTP/1.1" 307 - 0 42 3 1 "10.246.0.17" "curl/7.47.0" "d570e950-4d58-9b2e-8de9-44bb57f0a91d" "authd.dev.cluster2.k8s.xx.ca" "10.20.25.27:1080"

[2018-07-19T13:57:44.131Z] "GET / HTTP/1.1" 404 NR 0 0 2 - "10.246.0.4" "curl/7.47.0" "1d6866c5-48a4-95ef-9ebb-165e036aad51" "authd.dev.cluster2.k8s.xx.ca:443" "-"

BUT

I can't add a host that contains a port number anymore (<host>:<port>).

So this does not work anymore:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-vs-ingress
spec:
  hosts:
  - "authd.dev.cluster2.k8s.xxx.ca"
  - "authd.dev.cluster2.k8s.xxx.ca:443"
  gateways:
  - ingress-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 1080
        host: authd-http

The error is now:

Error from server: error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.istio.io/v1alpha3\",\"kind\":\"VirtualService\",\"metadata\":{\"annotations\":{},\"name\":\"authd-vs-ingress\",\"namespace\":\"dev\"},\"spec\":{\"gateways\":[\"ingress-gateway\"],\"hosts\":[\"authd.dev.cluster2.k8s.xxx.ca\",\"authd.dev.cluster2.k8s.xxx.ca:443\"],\"http\":[{\"match\":[{\"uri\":{\"prefix\":\"/\"}}],\"route\":[{\"destination\":{\"host\":\"authd-http\",\"port\":{\"number\":1080}}}]}]}}\n"}},"spec":{"hosts":["authd.dev.cluster2.k8s.xxx.ca","authd.dev.cluster2.k8s.xxx.ca:443"]}}
to:
Resource: "networking.istio.io/v1alpha3, Resource=virtualservices", GroupVersionKind: "networking.istio.io/v1alpha3, Kind=VirtualService"
Name: "authd-vs-ingress", Namespace: "dev"
Object: &{map["apiVersion":"networking.istio.io/v1alpha3" "kind":"VirtualService" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.istio.io/v1alpha3\",\"kind\":\"VirtualService\",\"metadata\":{\"annotations\":{},\"name\":\"authd-vs-ingress\",\"namespace\":\"dev\"},\"spec\":{\"gateways\":[\"ingress-gateway\"],\"hosts\":[\"authd.dev.cluster2.k8s.xxx.ca\"],\"http\":[{\"match\":[{\"uri\":{\"prefix\":\"/\"}}],\"route\":[{\"destination\":{\"host\":\"authd-http\",\"port\":{\"number\":1080}}}]}]}}\n"] "resourceVersion":"89496739" "uid":"7995d76a-8b57-11e8-971f-42010a8e0011" "clusterName":"" "creationTimestamp":"2018-07-19T13:27:25Z" "generation":'\x01' "name":"authd-vs-ingress" "namespace":"dev" "selfLink":"/apis/networking.istio.io/v1alpha3/namespaces/dev/virtualservices/authd-vs-ingress"] "spec":map["http":[map["match":[map["uri":map["prefix":"/"]]] "route":[map["destination":map["host":"authd-http" "port":map["number":'\u0438']]]]]] "gateways":["ingress-gateway"] "hosts":["authd.dev.cluster2.k8s.xxx.ca"]]]}
for: "/tmp/kube_deploy/dev-authd-virtualservice.yml": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: domain name "dev.cluster2.k8s.xx.ca:443" invalid (label "ca:443" invalid)

So:

  • the bug is not resolved
  • the way to circumvent the bug does not work anymore

The conclusion is that you can't use the istio-ingressgateway with gRPC microservices (at least with the Go library).

I tried removing the admission webhook, which allows me to define a host as <hostname>:<port>, but there may be some side effects...
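
For anyone who wants to reproduce that experiment, the removal looked roughly like this (a sketch; istio-galley is the default object name in the Helm chart of that era, so check your cluster first):

# find the webhook configuration that carries pilot.validation.istio.io
kubectl get validatingwebhookconfigurations
# delete it to stop the host:port validation (side effect: no validation at all)
kubectl delete validatingwebhookconfiguration istio-galley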

We really need to investigate this issue

I stumbled upon the same issue when playing around with Istio on my local machine. I am using minikube, so I access my ingress gateway via NodePort 31380.

With this local setup, any client that doesn't support overriding the Host header (e.g. a web browser without an extension like Chrome Header Hacker) cannot be used to access services in my service mesh. I am currently using the workaround mentioned by @prune998, which allows me to set an additional port in my gateway/virtualservice hosts configuration by using kubectl instead of istioctl. A fix would be much appreciated.

@denniseffing can you please provide your K8s and Istio versions?
Did you try any snapshot version?

@prune998 I am currently using Istio 0.8.0 and K8s 1.10

I didn't try any snapshot version because of your comment that the latest snapshot release uses a new admission hook that also disables the currently used workaround with kubectl.

Do you want me to try a current snapshot version regardless?

ok thanks @denniseffing, so there's nothing more I can do to help... waiting for the Istio team (@sakshigoel12 or @andraxylia) to take over :)

In Istio 1.0, setting a host with a port does not work with kubectl either...

Hello @prune998 and @wansuiye

We have tried using Istio 1.0, but host:port is not working with "kubectl" or with "istioctl".

Any Help?

@ronakpandya7 Nothing you can do except waiting for the Istio team to respond to this issue.

Hello @denniseffing ,

So how can we get them to look at this issue?
Can we add a feature label to this issue?

Current milestone is v1.1, so I'd expect a fix in about a month if they keep up the monthly release schedule.

Hello @denniseffing,

Thanks for your help, we have to wait.
And can you please look into issue #7325 if possible?

@ronakpandya7 I replied to your other issue, which from my point of view is not an issue but a misconfiguration.
I would really URGE the Istio devs to look into this, as it just breaks the gRPC workflow with Istio!
I installed the 1.0.0 release yesterday and will give it a try, but 1.0.0-snapshot-2 still had the same issue, and even worse, Galley was blocking the declaration of a host with a :port attached...

I think I got it working with the 1.0.0 release, with a little twist...

I added the Gateway as:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingressgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP2
  - hosts:
    - '*'
    port:
      name: https-default
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

and the VirtualService as:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs-ingress
spec:
  gateways:
  - ingressgateway
  hosts:
  - hello.test.xxx.ca
  - hello.test.xxx.ca:443
  - hello.test.xxx.ca:80
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: greeter-server
        port:
          number: 7788

I'm still not 100% sure it's OK, as my gRPC connection keeps closing... but I have to make sure it's not my code...

OK, it sounds like everything works AS LONG AS you don't deploy the Galley service. Galley is the component enforcing the webhook that denies the <hostname>:<port> scheme.

I'll try adding the ports in the Gateway instead of using *...

I can confirm the <hostname>:<port> scheme works as long as Galley is not installed.

Hello @prune998,
Nice to hear that you found the solution.
So did you create this virtualservice and gateway using kubectl or istioctl?
And are there any side effects if we do not install Galley?

Hello @prune998,
After removing Galley we are able to create gateways and virtualservices with <hostname>:<port>, but using kubectl only.

I am worried that we removed a component (Galley) from the Istio mesh; does it make any difference?

@ronakpandya7 I'm still waiting for an Istio core dev to answer this question... Galley is quite new and seems to only validate the content of your Istio manifests. But I may be wrong...

Note that I install Istio from the Helm chart like this:

helm template install/kubernetes/helm/istio --name istio \
  --set tracing.enabled=false \
  --set ingress.enabled=false \
  --set servicegraph.enabled=false \
  --set prometheus.enabled=false \
  --set grafana.enabled=false \
  --set global.proxy.autoInject=disabled \
  --set global.k8sIngressSelector=ingressgateway \
  --set global.k8sIngressHttps=true \
  --set galley.enabled=false \
  --namespace istio-system > install/kubernetes/generated.yaml
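
To double-check that Galley really is absent after applying the generated manifest (a sketch, assuming the default istio-galley names):

kubectl -n istio-system get deployment istio-galley   # should return NotFound
kubectl get validatingwebhookconfigurations           # the Istio entry should be gone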

@vadimeisenbergibm I can only use version 0.8.0 because of this problem, so I hope it can be solved as soon as possible; we are using Istio in production…

@wansuiye just remove Galley and install 1.0.0

@prune998 thanks! It works well!

In Istio 1.0.1 it seems you still cannot create a VirtualService with host:port using kubectl or istioctl if Galley is installed.
If this issue has been solved, please explain how to configure it...

^^ just remove galley @wansuiye

@prune998 yeah, I saw the issues had been closed, so I thought 1.0.1 had solved it...

What should we do now to avoid this issue? It seems nobody will solve it.

What's the function of galley? Is there a fix for this yet?

I think the bug has not been resolved. @prune998 @vadimeisenbergibm
The two merged PRs https://github.com/istio/istio/pull/7994 and https://github.com/istio/istio/pull/7995 just add the port to the VirtualService domains in Envoy, but I cannot create a VirtualService with the <host>:<port> scheme; the admission webhook denies it. The error is as follows:

admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: domain name "10.110.25.114.xip.io:443" invalid (label "io:443" invalid)

I think the issue is solved in 1.0.1 (but I don't remember).
It is solved for sure in 1.0.2.
Upgrade :)

@prune998 Hi, I found that if you define an egressgateway and direct HTTP traffic through it, and the external service's port is not 80 or 443, the egressgateway mismatches the hosts.
ServiceEntry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: bar.test.io
spec:
  hosts:
  - bar.test.io
  location: MESH_EXTERNAL
  ports:
  - number: 9080
    name: http
    protocol: HTTP
  resolution: DNS

virtualservice:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - bar.test.io
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 9080
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: bar.test.io
        port:
          number: 9080

when request with a host"bar.test.io:9080" such as

curl -v "bar.test.io:9080/health"

the egressgateway returns 404:

GET /health?input=777HTTP/2" 404 NR 0 0 0 - "10.201.106.197" "curl/7.55.0" "1ab9fdd7-8fd9-4b6b-bfc4-6811dbc52431" "bar.test.io:9080" "-" - - 10.201.106.187:80 10.201.106.197:40574

when request with a host "bar.test.io" such as

curl -v "bar.test.io:9080/health" -H"host:bar.test.io"

the egressgateway returns 200:

"GET /health?input=777HTTP/2" 200 - 0 3 3 2 "10.201.106.197" "curl/7.55.0" "ea78ce4c-8810-4b59-8515-cbbd66a3d837" "bar.test.io" "10.201.106.2:10834" outbound|9080||bar.test.io - 10.201.106.187:80 10.201.106.197:60682

@wansuiye you should check other issues or open your own. This one was about ingress, and is now closed.
When reporting your issue, don't forget to add your Istio and Kubernetes versions, along with how you installed Istio.

@wansuiye @dreadbird It seems as though we are all having the same issue. #9656 seems to be the fix we are waiting on.

you should NOT need to add the port anymore (so you should not trigger Galley validation).
Which version are you running, @jbrongtr?
Did you make a clean install or an upgrade? (Or how did you clean up between versions?)

I'm running 1.0.2 in a GKE cluster 1.10.9 and everything is working fine.

Gateway

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gw-useredged.lab
  namespace: lab
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - useredged.lab.xxx.ca
    port:
      name: https-443-useredged.lab
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/cert-useredged.lab.key
      serverCertificate: /etc/istio/ingressgateway-certs/cert-useredged.lab.crt

Virtualservice

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-useredged.lab
  namespace: lab
spec:
  gateways:
  - gw-useredged.lab
  hosts:
  - useredged.lab.xxxx.ca
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: useredged
        port:
          number: 1081

I'm using gRPC and I'm calling the HTTP/2 endpoint useredged.lab.xxxx.ca:443, as seen in the logs:

istio-ingressgateway-6b6d4499d5-zrnqw istio-proxy [2018-11-15T13:25:28.051Z] "POST /ra.useredgesvc.UserEdgeSVC/LoadDynamicTypes HTTP/2" 200 - 5 1912 6 2 "10.2.0.3" "grpc-go/1.14.0" "df2d9954-db4e-965a-8d89-d4bbcae175c8" "useredged.lab.xxx.ca:443" "10.20.4.18:1081"

Hey @prune998,

Currently running kubernetes 1.11 with istio version 1.0.3. This is running on cloud servers that we manage.

Trying to simply allow external web access to Istio's Grafana. See the config below:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "grafana.iddls.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 3000
        host: grafana.istio-system.svc.cluster.local

We don't have a load balancer in front. Using curl, I can get a good response by specifying what the Host header should be. When hitting this service on NodePort 31380 from a web browser, the url:31380 gets passed and is blocked by the ingress because it doesn't match just the url (that's my thinking...).

Let me know your thoughts!

Could you also share the Ingress Gateway logs and the generated Envoy config?
You can do that by forwarding port 15000 of the Ingress Gateway container, browsing http://localhost:15000, and checking the config option (I don't remember the exact name).

Below you will see a curl request with an edited header:
curl -I -HHost:grafana.iddls.com https://grafana.iddls.com:31380

Then I tried to access it from a Chrome browser.

[2018-11-15T14:30:46.229Z] "HEAD /HTTP/1.1" 200 - 0 0 6 4 "172.30.0.0" "curl/7.29.0" "a46ecdc2-1879-96b9-9862-01c1b54bf1b8" "grafana.iddls.com" "172.30.1.149:3000" outbound|3000||grafana.istio-system.svc.cluster.local - 172.30.1.148:80 172.30.0.0:39464
[2018-11-15T14:32:00.844Z] "GET /HTTP/1.1" 404 NR 0 0 2 - "172.30.0.0" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36" "603abc71-3b13-9a1a-9ea4-c72ad460eaed" "grafana.iddls.com:31380" "-" - - 172.30.1.148:80 172.30.0.0:61866

I seem to be having issues getting the envoy config....

Additionally, when I try to apply a config file with host:port, Pilot yells:

admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: domain name "iddls.com:31380" invalid (label "com:31380" invalid)

Well @jbrongtr:

  • due to Galley you can't create a Gateway with a host that includes a port, so your last error is "normal". Don't add the port!
  • it sounds like you're trying to access your server on port 31380 from the outside... but the IngressGateway is bound to port 80 (and/or 443)... so it won't work.

I don't think your problem is due to this port issue.
Try curl -I -HHost:grafana.iddls.com:80 https://grafana.iddls.com versus curl -I -HHost:grafana.iddls.com https://grafana.iddls.com and see what happens.

For the config dump problem:
try kubectl -n istio-system port-forward deployment/istio-ingressgateway 15000
then open your browser at http://localhost:15000/config_dump
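
If you have jq available, a rough one-liner to pull only the matched domains out of the dump (illustrative, not the only way):

kubectl -n istio-system port-forward deployment/istio-ingressgateway 15000 &
curl -s http://localhost:15000/config_dump | jq '.. | .domains? // empty'
# expect per-virtual-host arrays such as ["grafana.iddls.com", "grafana.iddls.com:80"]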

@prune998

  • While it may currently be normal for that error to be thrown, I think the purpose of the aforementioned patch is to let us access services within the mesh when specifying a port. This may be particularly useful for on-prem Kubernetes deployments and in development environments.

  • Port 80 on the istio-ingressgateway is mapped to NodePort 31380, to allow external traffic into the mesh. Port 80 is not opened on the node.

The curl command you suggested didn't work, and I believe that is because -HHost:grafana.iddls.com:80 specifies a port, which doesn't match the host entries in the Istio gateway/virtualservice configuration.

It seems that in Istio 1.1 the problem still exists. It can only be solved by disabling Galley, but Galley is becoming more important (e.g. for MCP).

I have hit the same problem since Istio 0.7; I use NodePort 30080, which is mapped to port 80 on istio-ingressgateway. The only fix I found is to disable Galley, which I tested on every version of Istio.
However, 1.1 does not solve it either.

The Istio developers think this behavior is correct, and the gateway only supports DNS-format hosts, so I think putting a proxy in front of the ingressgateway in your cluster can solve this problem. This can also make your ingressgateway more highly available. You can use keepalived + LVS/HAProxy/nginx/Envoy, and so on.

However, the HTTP Host header may include a port, and adding a proxy in front of the ingressgateway just to rewrite the Host header is rather complicated. The proxy would also need to support multiple protocols.

Your front proxy only needs to support TCP: you can proxy ports 80 and 443 to the ingressgateway's port 30080 (or another port). If your clients call the HTTP service through the proxy and do not carry a port number in the Host header, this will work fine.

@mgxian it's feasible.
However, port 80 is not open on our nodes, and this seems like a very small change to fix in Istio.

@wansuiye suppose your ingressgateway is exposed on NodePort 30080; there are two ways to deploy your proxy (see the sketch after this list):

  1. Deploy it in the k8s cluster, and expose it on the k8s nodes' port 80 by specifying hostNetwork or hostPort in the k8s YAML.
  2. Deploy it outside the k8s cluster, listening on port 80, and proxy TCP traffic to the nodes' port 30080.
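
A minimal sketch of option 1 (untested; it assumes the stock nginx image, whose stream module is compiled in, and hostNetwork so that 127.0.0.1:30080 reaches the local NodePort; all names are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-tcp-proxy
  namespace: istio-system
data:
  nginx.conf: |
    events {}
    stream {
      server {
        listen 80;
        # with hostNetwork, the local kube-proxy answers on the NodePort
        proxy_pass 127.0.0.1:30080;
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-tcp-proxy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: ingress-tcp-proxy
  template:
    metadata:
      labels:
        app: ingress-tcp-proxy
    spec:
      hostNetwork: true
      containers:
      - name: nginx
        image: nginx:1.15
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: conf
        configMap:
          name: ingress-tcp-proxy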

@wansuiye @mgxian This is actually how we got around the issue. We have a DaemonSet proxy deployment whose only job is to accept port 80 and 443 traffic and pass it to the ingress gateway at the specific NodePort. While it would be nice not to have to deploy that extra piece, I guess it's just how it has to be done for on-prem deployments.

It's interesting that the gateways/proxies only support the DNS host format. Have you ever tried to use prometheus-operator in an Istio deployment? We had to make specific regex changes so that Prometheus would scrape using DNS instead of IP, because Istio could not route by IP. But I think this is more of an issue with Prometheus than with Istio.

Hi there.
This issue is closed as it's solved in 1.0.3, and we're now at 1.1. You should clearly upgrade instead of adding workarounds...

@prune998 Can you confirm that you are able to create a VirtualService with hosts that include a port when using Istio 1.1? I am asking because wansuiye said that this is still not possible and mgxian said that this is indeed intended by the Istio team and therefore won't be fixed.

it's not possible, but it's not needed.
Hosts like www.your-domain.com and www.your-domain.com:443 are now treated the same, so you don't need to add the :port suffix.

Thanks for clearing that up! I guess that works fine too.

@prune998 Are ports other than 80 and 443 supported? If I create a virtualservice with host www.your-domain.com referring to a gateway with host www.your-domain.com listening on port 80, but the gateway is actually exposed on NodePort 30080, can I access the service with a curl www.your-domain.com:30080 command?

The Gateway filters on host/port and forwards to the right VirtualService.
The VirtualService does not care about the port part; the fact that it had to was a bug, which is now corrected.

@mgxian your question is more "can I access the gateway using a NodePort". I would say yes, it should work, but I never tried it, so you'd better check for yourself. As long as the Gateway is matched and forwards the request to your VirtualService, you can define the VirtualService without a port in the host matching pattern.

I have confirmed that this is still an issue with non-standard ports (8443) on both 1.0.7 and 1.1.5. It's very easy for me to replicate with the following:

(In this example, we'll just terminate SSL at the ELB)

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-80
      protocol: HTTP
    hosts:
    - api.xxxx.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
  namespace: foo
spec:
  hosts:
  - api.xxxx.com
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: my-service.foo.svc.cluster.local
        subset: release

curl -I -H "Host: api.xxxx.com" https://api.xxxx.com:8443/foo
Returns a 404
curl -I -H "Host: api.xxxx.com" https://api.xxxx.com/foo
Returns a 200

Next, change the "hosts" to "*" on the VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: my-service.foo.svc.cluster.local
        subset: release

curl -I -H "Host: api.xxxx.com" https://api.xxxx.com:8443/foo
Returns a 200
curl -I -H "Host: api.xxxx.com" https://api.xxxx.com/foo
Returns a 404

@blaketastic2 I'm not sure I clearly understand your setup.
If your gateway only listens on port 80, you clearly can't connect to it on port 8443, whatever you define in your VirtualService. Maybe you should show your full Istio setup so we understand it clearly.

As I mentioned, for this example, we're doing TLS termination on the ELB, so 8443 and 443 map to 80.

Here's the config from the helm chart:

gateways:
  istio-ingressgateway:
    enabled: true
    ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 80
    - port: 8443
      name: https-app
      targetPort: 80

I am experiencing the same with Envoy 1.10.0

If I use the following

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "api.xxxx.com"
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  - mesh
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld

Then in my helloworld pod's istio-proxy config I get this entry:

{
     "version_info": "2019-05-25T23:13:09Z/72",
     "route_config": {
      "name": "80",
      "virtual_hosts": [
       {
        "name": "api.xxxx.com:80",
        "domains": [
         "api.xxxx.com",
         "api.xxxx.com:80"
        ],
        "routes": [
         {
          "match": {
           "prefix": "/hello"
          },
          "route": {
           "cluster": "outbound|80||helloworld.default.svc.cluster.local",
           "timeout": "0s",
           "retry_policy": {
            "retry_on": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
            "num_retries": 2,
            "retry_host_predicate": [
             {
              "name": "envoy.retry_host_predicates.previous_hosts"
             }
            ],
            "host_selection_retry_max_attempts": "3",
            "retriable_status_codes": [
             503
            ]
           },
           "max_grpc_timeout": "0s"
          },
          "metadata": {
           "filter_metadata": {
            "istio": {
             "config": "/apis/networking/v1alpha3/namespaces/default/virtual-service/myapp"
            }
           }
          },
          "decorator": {
           "operation": "helloworld.default.svc.cluster.local:80/hello*"
          },
          "per_filter_config": {
           "mixer": {
            "forward_attributes": {
             "attributes": {
              "destination.service.host": {
               "string_value": "helloworld.default.svc.cluster.local"
              },
              "destination.service.uid": {
               "string_value": "istio://default/services/helloworld"
              },
              "destination.service.namespace": {
               "string_value": "default"
              },
              "destination.service.name": {
               "string_value": "helloworld"
              }
             }
            },
            "mixer_attributes": {
             "attributes": {
              "destination.service.namespace": {
               "string_value": "default"
              },
              "destination.service.name": {
               "string_value": "helloworld"
              },
              "destination.service.uid": {
               "string_value": "istio://default/services/helloworld"
              },
              "destination.service.host": {
               "string_value": "helloworld.default.svc.cluster.local"
              }
             }
            },
            "disable_check_calls": true
           }
          }
         }
        ]
       },

But I cannot access the resource

If I use the following

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "api.xxxx.com:8433"
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  - mesh
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld

then no entry is generated in the pod's config.

I am using CloudFlare to proxy requests to my development instance, which is only accessible via port 8433

If I create an entry with kubectl edit gateway -n istio-system

  - hosts:
    - '*'
    port:
      name: https-default
      number: 8443
      protocol: HTTPS
    tls:
      credentialName: ingress-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds

and an entry via kubectl edit svc istio-ingressgateway -n istio-system

  - name: https
    nodePort: 31390
    port: 443
    protocol: TCP
    targetPort: 443
  - name: https2
    nodePort: 31391
    port: 8443
    protocol: TCP
    targetPort: 8443

and use

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "api.xxxx.com"
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld

It still does not work if I use api.xxxx.com:8443

It is a workaround for my configuration use case, i.e. port-forward router:8443 to istio-ingressgateway:443.
