Bug description
I cannot find how to create an additional ingress gateway via istioctl.
It should create an internal load balancer in AWS, so the k8s Service should have an annotation like:
```yaml
serviceAnnotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```
Please advise.
Expected behavior
Created an additional ingress gateway with an internal load balancer in AWS.
Steps to reproduce the bug
Version (include the output of istioctl version --remote, kubectl version, and helm version if you used Helm)
```
$ istioctl version --remote
client version: 1.4.0
control plane version: 1.4.0
data plane version: 1.4.0 (1 proxies)
```
How was Istio installed?
istioctl manifest apply
Environment where bug was observed (cloud vendor, OS, etc)
AWS
Thanks!
@ostromart multiple gateways is a really common requirement, we should make sure we support this case
We currently don't support user-defined gateways yet, but we will for 1.5.
+1. Knative Serving needs to deploy two ingress gateways so this feature is necessary.
+1. We tried to migrate using the new istioctl (to be prepared for future releases) instead of helm charts, but we have multiple ingress gateways and this is a blocking issue for us.
Marking this as P0, this is a must have for 1.5.
@richardwxn is there any way to accomplish this in 1.4? Users are doing this with helm today so this is a loss of functionality
@howardjohn I'm not sure this is possible. I have looked around the installer code; it uses the helm charts "compiled" into istioctl, and those helm charts do not support multiple gateways (ingress/egress) like the normal helm charts do.
One thing that I noticed in the "IstioControlPlaneSpec" API is a field named "installPackagePath"; perhaps it is possible to temporarily patch the "gateways" helm charts and use them? There's not much explanation about this field.
The same for egress gateways, multiple egress gateways are also required.
We will have a temporary solution in place for 1.4.3 for both ingress and egress. In the meantime, you can indeed do a hack with the installPackagePath setting. Create a file with your gateway definition based on the minimal profile:
```yaml
# Customize this to taste
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  profile: minimal
  trafficManagement:
    enabled: false
  gateways:
    enabled: true
    components:
      namespace: my-namespace
```
```sh
rm istio-1.4.0/install/kubernetes/operator/charts/base/templates/*
istioctl manifest generate --set installPackagePath=istio-1.4.0/install/kubernetes/operator/charts -f file-above.yaml
```
This will generate just the gateway YAML for my-namespace which you can kubectl apply.
You need the rm of the base chart files because otherwise istioctl will always dump out all the base resources.
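For example (a sketch, assuming the same istio-1.4.0 paths and the file-above.yaml from the previous step), you can pipe the generated manifest straight into kubectl:
```sh
# Generate only the gateway resources and apply them in one step
istioctl manifest generate \
  --set installPackagePath=istio-1.4.0/install/kubernetes/operator/charts \
  -f file-above.yaml | kubectl apply -f -
```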
@ostromart Thanks!
Perhaps, as a best practice, it would be nice if it were also possible to disable the use of the "base" charts; you can disable the resource creation (service account, roles, etc.) but you can't disable the CRDs.
It seems that the current gateway cannot change its name from istio-ingressgateway. Knative's second gateway is named cluster-local-gateway and is deployed in the same namespace as istio-ingressgateway. The helm values could generate the name for the deployment, svc, etc. If I should create a separate ticket for this, please let me know.
> We tried to migrate using the new istioctl (to be prepared for future releases) instead of helm charts, but we have multiple ingress gateways and this is a blocking issue for us.
+1. We face the same issue; we planned to use the new istioctl with a custom IstioControlPlane manifest and I didn't find a proper workaround.
Regarding the syntax, something like this should work on 1.5?
```yaml
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  profile: default
  gateways:
    components:
      ingressGateway:
        enabled: true
      #
      # Handling second gateway through IstioControlPlane
      #
      ingressGatewayInternal:
        enabled: true
        k8s:
          serviceAnnotations:
            "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
```
Or helm style?
```yaml
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  profile: default
  values:
    gateways:
      istio-ingressgateway:
        enabled: true
      #
      # Handling second gateway through IstioControlPlane
      #
      istio-ingressgateway-internal:
        enabled: true
        serviceAnnotations:
          "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
```
This helm style works in 1.4.2
```yaml
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  values:
    gateways:
      istio-ingressgateway:
        serviceAnnotations:
          "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
```
@chris-free The issue is about handling multiple gateways, not about annotating the default gateway as internal.
@ostromart
> We will have a temporary solution in place for 1.4.3 for both ingress and egress.
Did the temporary solution make it into 1.4.3? Can't find any mention of it?
I managed to create an additional load balancer just by changing the name of the Service and keeping the rest of the parameters the same :).
In my use case, we need both an internal and an external load balancer to expose services both to the internet and internally over different load balancers, but with the same DNS name for URLs.
Happy to provide more details if you need any.
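For reference, a minimal sketch of that approach (the Service name istio-ingressgateway-internal is hypothetical; the selector matches the default gateway's pods, as shown in the describe output further down this thread):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway-internal  # hypothetical name; anything not already taken works
  namespace: istio-system
  annotations:
    # AWS-specific annotation requesting an internal load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # Same selector as the default istio-ingressgateway Service, so both
  # load balancers front the same gateway pods
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
```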
@deveshmehta
Yeah, that worked. The downside is that, from a monitoring perspective, it gives the impression all traffic is coming from a single source (istio-ingressgateway). It would be nice to see the separation between internal and external by actually having separate ingress gateways.
Not a big deal until 1.5 is out though.
Thanks
@deveshmehta
Played with this a bit further, and that solution might actually be a security issue if I've understood you correctly.
I set up 2 services, both pointing to the same ingress gateway: 1 for an internal LB and 1 for an external one.
I was able to access internal services from the external load balancer just by setting a "Host" header with the internal domain.
The only way to solve that would be separate ingress gateways, which can then be assigned in the virtual services?
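To make the concern concrete (a sketch; the hostnames are hypothetical): because both load balancers front the same Envoy pods, routing is decided purely by the Host header once a request reaches the shared gateway:
```sh
# EXTERNAL_LB: hostname of the external (internet-facing) load balancer
# internal.example.com: a host that is only meant to be served internally
curl -H "Host: internal.example.com" "http://$EXTERNAL_LB/"
# The shared gateway matches the Host header against the internal
# VirtualService and serves the internal app over the external LB.
```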
Fixed in https://github.com/istio/operator/pull/713. User ingress and egress gateways are now a first class part of the API, including service annotations.
@ostromart, I still can't find any documentation for the 1.4.3 multiple ingress gateways, is it there already? Can you please share some docs if there are any?
Does anyone have an example of a working IstioControlPlane with multiple ingress-gateways? I've tried:
```yaml
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
  name: istio-control-plane
  namespace: istio-operator
spec:
  profile: default
  gateways:
    enabled: true
  values:
    gateways:
      istio-ingressgateway:
        enabled: true
      istio-ingressgateway-another:
        enabled: true
```
with no luck
EDIT: just found this out and gonna check it out: https://github.com/istio/operator/pull/713/files#diff-f08ee0447ca8f6e9c8edf9f0551023e7R6
@nemo83
There is a "hack" that's described above, but I'm not a fan of that.
What I ended up doing (For now) is that I followed this documentation, especially step 6
(Ignore the cert-manager for now), and it worked for me
But I'm still waiting for a different solution this.
Hi @IbraheemAlSaady, I've tried something similar with the yaml install, but it wasn't working for me.
Did you reuse the same istio-ingressgateway deployment in the istio-system namespace? Or did you have to create a new one? I tried with istio 1.4.5, and the other ingresses weren't _seen_.
@nemo83, so basically the YAML deployment at STEP 6 of the documentation is gonna be your ingress gateway; I personally deployed it in a separate namespace. I also created an istio Gateway object and used the new (internal) gateway with it.
Note that istio: internal-ingressgateway is the label in the service that points to the deployment file.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: internal-gateway
  namespace: internal-ingress
spec:
  selector:
    istio: internal-ingressgateway
  servers:
  - hosts:
    - '*.${CLUSTER_DNS}'
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - '*.${CLUSTER_DNS}'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
```
Thanks @IbraheemAlSaady, I appreciate a lot you answering my questions. Would you mind sharing the Service implementing the custom ingressgateway manifest (with the sensitive data scrambled)? I'm interested in particular in the selectors:
```yaml
spec:
  type: LoadBalancer
  selector:
    app: my-ingressgateway
    istio: my-ingressgateway
```
Can you also please kindly tell me the istio version you're using? Thanks 🙏
@nemo83 nothing special I made; I followed the docs I shared above. It's the same service implementation they have there.
I'm currently using 1.4.4, but I'm gonna use 1.4.5 and run my tests. I can share the details/findings after if you'd like.
Thanks again for coming back to me @IbraheemAlSaady. I eventually managed to implement multiple ingress-gateways, following this: https://github.com/istio/istio/blob/17f6bfc3d7121ad527c2d617ffc27c758d6a7241/install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml#L36. If I just create the Service (LoadBalancer) and the Gateway that points to it, the LoadBalancer doesn't seem to be _talking_ to the ingress-gateway pods (the default ones). Did you create a new Deployment for them?
@nemo83, The load balancer service will point to the deployment file. The deployment file is the one at STEP 6 provided by the Istio documentation I shared earlier.
@IbraheemAlSaady, sorry, I must have missed that step 🤦‍♂️. Thanks.
@richardwxn @ostromart Now that Istio 1.5 is released and this ticket is closed, what is the official process to create multiple ingress gateways? I cannot find any documentation on it in the release notes or change notes of 1.5
I am trying to create multiple ingress gateways in Istio 1.5 and found this test data: https://github.com/istio/istio/blob/master/operator/cmd/mesh/testdata/manifest-generate/input/gateways.yaml.
So with the use of the IstioOperator API, I am able to create multiple ingress gateways and set one of the gateways to use an AWS NLB by applying this manifest:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  components:
    ingressGateways:
    - enabled: true
      name: istio-ingress-1
      namespace: istio-ingress-1
    - enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
      name: istio-ingress-2
      namespace: istio-ingress-2
```
However, it seems to only work if the ingress gateways are in different namespaces, as the k8s pods/deployments/services created are all named istio-ingressgateway.
The gateway name issue is supposed to be fixed by https://github.com/istio/istio/pull/22138 on v1.5.x, I think.
Yes, as @nak3 mentioned, the naming issue will be fixed in 1.5.1, so the gateways will no longer need to be in different namespaces.
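In other words, once the naming fix lands, a manifest like this (a sketch adapting the example above; the gateway names are illustrative) should be able to put both gateways in the same namespace:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  components:
    ingressGateways:
    - enabled: true
      name: istio-ingress-1
      namespace: istio-system  # same namespace for both, relying on the 1.5.1 naming fix
    - enabled: true
      name: istio-ingress-2
      namespace: istio-system
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
```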
> I am trying to create multiple ingress gateway in Istio 1.5 and found this test data: https://github.com/istio/istio/blob/master/operator/cmd/mesh/testdata/manifest-generate/input/gateways.yaml. So with the use of the IstioOperator API, I am able to create multiple ingress gateways and set one of the gateways to use AWS NLB by applying this manifest: [...]
> However, it seems to only work if the ingress gateways are in different namespaces, as the k8s pods/deployments/services created are all named istio-ingressgateway.
Thanks for this solution. It works!
@richardwxn Now that the naming issue is fixed and released in 1.5.1, how do we target specific ingresses?
I am creating two Istio Gateway resources. The first one is for the default Istio ingress, and the second one is for my secondary custom ingress that lives in the same namespace (istio-system):
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  ...
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-ingressgateway-internal
  namespace: istio-system
spec:
  selector:
    app: ??? # What to put here to select the other ingress?
  ...
```
```
$ kubectl -n istio-system describe service istio-ingressgateway
Name:      istio-ingressgateway
Namespace: istio-system
Labels:    app=istio-ingressgateway
           istio=ingressgateway
           operator.istio.io/component=IngressGateways
           operator.istio.io/managed=Reconcile
           operator.istio.io/version=1.5.1
           release=istio
...
Selector:  app=istio-ingressgateway,istio=ingressgateway
...
```
```
$ kubectl -n istio-system describe service istio-ingressgateway-internal
Name:      istio-ingressgateway-internal
Namespace: istio-system
Labels:    app=istio-ingressgateway
           istio=ingressgateway
           operator.istio.io/component=IngressGateways
           operator.istio.io/managed=Reconcile
           operator.istio.io/version=1.5.1
           release=istio
...
Selector:  app=istio-ingressgateway,istio=ingressgateway
...
```
EDIT: Not sure if it matters, but all the ingress deployment resources also have the same labels and selectors.
Yes, I have the same problem. The svc now points to both because of the selector. How do we fix this?
Have you tried using the label map option in the GatewaySpec? Would something like this work?
```yaml
...
ingressGateways:
- enabled: true
  name: foo
  label:
    istio: foo
...
```
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: foobar
  namespace: istio-system
spec:
  selector:
    istio: foo
...
```
@marshallford That does not work, the label does not get applied to the pods
There seems to be no way to accomplish this with the operator.
This issue should not be closed.
Sounds like the only solution at the moment is to have each ingress in its own separate namespace, as pingnamo suggested here.
That's not really a good solution; it is not in line with knative.
As a temporary solution in 1.5.1, you could override names/selectors using k8s overlays.
I didn't fully test it, but it should be a good starting point.
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-control-plane
  namespace: istio-system
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - name: istio-ingressgateway-2
      enabled: true
      k8s:
        overlays:
        - kind: Gateway
          name: ingressgateway
          patches:
          - path: metadata.name
            value: ingressgateway-2
          - path: spec.selector.istio
            value: ingressgateway-2
        - kind: Service
          name: istio-ingressgateway-2
          patches:
          - path: metadata.labels.istio
            value: ingressgateway-2
        - kind: Deployment
          name: istio-ingressgateway-2
          patches:
          - path: spec.selector.matchLabels.istio
            value: ingressgateway-2
          - path: spec.template.metadata.labels.istio
            value: ingressgateway-2
```
It seems the custom gateway is still not selectable. Does anyone have a workable manifest for it?
I took LukaszRacon's solution and expanded upon it. This is the relevant section of my istio operator config:
```yaml
overlays:
- kind: Deployment
  name: istio-ingressgateway-internal
  patches:
  - path: metadata.labels.app
    value: istio-ingressgateway-internal
  - path: spec.selector.matchLabels.app
    value: istio-ingressgateway-internal
  - path: spec.template.metadata.labels.app
    value: istio-ingressgateway-internal
- kind: Service
  name: istio-ingressgateway-internal
  patches:
  - path: metadata.labels.app
    value: istio-ingressgateway-internal
  - path: spec.selector.app
    value: istio-ingressgateway-internal
- kind: Gateway
  name: ingressgateway
  patches:
  - path: metadata.name
    value: ingressgateway-internal
  - path: spec.selector.app
    value: istio-ingressgateway-internal
```
Unlike LukaszRacon's solution, this one patches the app labels and selectors instead of the istio ones, and I also found a few additional places that looked like they should be updated.
I also double-checked the horizontal pod autoscaler, but that one finds the deployment by name, not by label, so there's nothing to patch there.
And here is how I select the right ingress for each gateway:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  ...
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-ingressgateway-internal
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway-internal
  ...
```
I'm running this in my dev cluster right now, and from what I can tell it seems to be working correctly.
One more item that needs to be patched - PodDisruptionBudget:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-control-plane
  namespace: istio-system
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway-2
      enabled: true
      k8s:
        overlays:
        - kind: PodDisruptionBudget
          name: ingressgateway
          patches:
          - path: metadata.name
            value: ingressgateway-2
          - path: spec.selector.matchLabels.istio
            value: ingressgateway-2
```
I extended LukaszRacon's solution on the Service part by adding the selector patches:
```yaml
- kind: Service
  name: istio-ingressgateway-private
  patches:
  - path: metadata.labels.istio
    value: ingressgateway-private
  - path: spec.selector.istio
    value: ingressgateway-private
```
Also following this thread. For those like me who need a complete example, I found this useful blog post: https://www.learncloudnative.com/blog/2020-01-09-deploying_multiple_gateways_with_istio/
Hope it helps.
I compiled all the necessary patches from this thread and the blog post, cross-checking the generated manifest to make sure all the instances were patched properly.
Here's a complete ingressGateways entry, modified from the default one, that's working for me on 1.5.1 for an additional gateway called istio-ingressgateway-internal:
```yaml
- name: istio-ingressgateway-internal
  enabled: true
  k8s:
    env:
    - name: ISTIO_META_ROUTER_MODE
      value: sni-dnat
    hpaSpec:
      maxReplicas: 5
      metrics:
      - resource:
          name: cpu
          targetAverageUtilization: 80
        type: Resource
      minReplicas: 1
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-ingressgateway-internal
    resources:
      limits:
        cpu: 2000m
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 128Mi
    service:
      ports:
      - name: status-port
        port: 15020
        targetPort: 15020
      - name: http2
        port: 80
        targetPort: 80
      - name: https
        port: 443
      - name: kiali
        port: 15029
        targetPort: 15029
      - name: prometheus
        port: 15030
        targetPort: 15030
      - name: grafana
        port: 15031
        targetPort: 15031
      - name: tracing
        port: 15032
        targetPort: 15032
      - name: tls
        port: 15443
        targetPort: 15443
      - name: tcp
        port: 31400
    strategy:
      rollingUpdate:
        maxSurge: 100%
        maxUnavailable: 25%
    overlays:
    - kind: HorizontalPodAutoscaler
      name: istio-ingressgateway-internal
      patches:
      - path: metadata.labels.app
        value: istio-ingressgateway-internal
      - path: metadata.labels.istio
        value: ingressgateway-internal
      - path: spec.scaleTargetRef.name
        value: istio-ingressgateway-internal
    - kind: Deployment
      name: istio-ingressgateway-internal
      patches:
      - path: metadata.labels.app
        value: istio-ingressgateway-internal
      - path: metadata.labels.istio
        value: ingressgateway-internal
      - path: spec.selector.matchLabels.app
        value: istio-ingressgateway-internal
      - path: spec.selector.matchLabels.istio
        value: ingressgateway-internal
      - path: spec.template.metadata.labels.app
        value: istio-ingressgateway-internal
      - path: spec.template.metadata.labels.istio
        value: ingressgateway-internal
    - kind: Service
      name: istio-ingressgateway-internal
      patches:
      - path: metadata.labels.app
        value: istio-ingressgateway-internal
      - path: metadata.labels.istio
        value: ingressgateway-internal
      - path: spec.selector.app
        value: istio-ingressgateway-internal
      - path: spec.selector.istio
        value: ingressgateway-internal
    - kind: Gateway
      name: ingressgateway
      patches:
      - path: metadata.name
        value: ingressgateway-internal
      - path: spec.selector.istio
        value: istio-ingressgateway-internal
    - kind: PodDisruptionBudget
      name: ingressgateway
      patches:
      - path: metadata.name
        value: ingressgateway-internal
      - path: metadata.labels.app
        value: istio-ingressgateway-internal
      - path: metadata.labels.istio
        value: ingressgateway-internal
      - path: spec.selector.matchLabels.app
        value: istio-ingressgateway-internal
      - path: spec.selector.matchLabels.istio
        value: ingressgateway-internal
```
Great job. Just a heads up: the Google internal load balancer supports up to 5 ports, so the above won't work there. Does anyone know which ports are the most critical for multicluster communication? @Foltik
My assumption (?):
```yaml
- name: status-port
  port: 15020
  targetPort: 15020
- name: http2
  port: 80
  targetPort: 80
- name: https
  port: 443
- name: tls
  port: 15443
  targetPort: 15443
```
I would think you'd need tcp as well if you're routing that type of service through the ingress gateway. status-port seems to be only for gateway health checks from what I could find, so if it's your first-layer LB it's probably fine to remove that too, along with all the other monitoring and dashboard ports.
I've taken the work done by @LukaszRacon and @bishtawi and patched all the pieces of each gateway added.
Replace "dev" with your namespace to get an additional, complete, working istio gateway inside your namespace "dev".
I use this to have one istio gateway for each namespace, and it works very well.
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio
spec:
  profile: default
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - name: istio-ingressgateway-dev
      enabled: true
      namespace: dev
      k8s:
        overlays:
        - kind: HorizontalPodAutoscaler
          name: istio-ingressgateway-dev
          patches:
          - path: metadata.labels.app
            value: istio-ingressgateway-dev
          - path: metadata.labels.istio
            value: ingressgateway-dev
        - kind: Deployment
          name: istio-ingressgateway-dev
          patches:
          - path: metadata.labels.app
            value: istio-ingressgateway-dev
          - path: metadata.labels.istio
            value: ingressgateway-dev
          - path: spec.selector.matchLabels.app
            value: istio-ingressgateway-dev
          - path: spec.selector.matchLabels.istio
            value: ingressgateway-dev
          - path: spec.template.metadata.labels.app
            value: istio-ingressgateway-dev
          - path: spec.template.metadata.labels.istio
            value: ingressgateway-dev
        - kind: Gateway
          name: ingressgateway
          patches:
          - path: metadata.name
            value: ingressgateway-dev
          - path: spec.selector.istio
            value: ingressgateway-dev
        - kind: PodDisruptionBudget
          name: ingressgateway
          patches:
          - path: metadata.name
            value: ingressgateway-dev
          - path: metadata.labels.app
            value: istio-ingressgateway-dev
          - path: metadata.labels.istio
            value: ingressgateway-dev
          - path: spec.selector.matchLabels.app
            value: istio-ingressgateway-dev
          - path: spec.selector.matchLabels.istio
            value: ingressgateway-dev
        - kind: Service
          name: istio-ingressgateway-dev
          patches:
          - path: metadata.labels.app
            value: istio-ingressgateway-dev
          - path: metadata.labels.istio
            value: ingressgateway-dev
          - path: spec.selector.app
            value: istio-ingressgateway-dev
          - path: spec.selector.istio
            value: ingressgateway-dev
        - kind: ServiceAccount
          name: istio-ingressgateway-service-account
          patches:
          - path: metadata.labels.app
            value: istio-ingressgateway-dev
          - path: metadata.labels.istio
            value: ingressgateway-dev
```
Then, when I have to expose a service on a specific gateway, I just select the right one:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway-dev # use istio ingressgateway in dev
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```
You're missing a few of the patches, see what I posted earlier. The namespace you put it in doesn't matter.
@Foltik I didn't see your post. The spec.scaleTargetRef.name in the HPA already has the right name; it doesn't need a patch.
I know that the namespace and the other patches don't matter for the functionality, but they matter for cleanliness and business reasons.
If all you need is a custom label, this is supported without overrides: https://istio.io/docs/reference/config/istio.operator.v1alpha1/#GatewaySpec
I just tested using the operator 1.5.1 with knative and it's working as expected:
```yaml
- enabled: true
  name: cluster-local-gateway
  namespace: istio-system
  label:
    istio: cluster-local-gateway
```
@tshak I recommended that solution a while back and @AceHack wasn't able to get it working. Can you expand on what worked for you and what didn't?
@tshak, please print out the resulting manifests and you will see many incorrect selectors.
@AceHack I see what you're saying. I didn't rebuild my cluster, so it was a false positive (the old cluster-local-gateway yaml, which had the label, was still in place). There is a fix in master which will hopefully be back-ported to 1.5 soon: https://github.com/istio/istio/pull/23026. Apologies for the confusion.
> If all you need is a custom label, this is supported without overrides: https://istio.io/docs/reference/config/istio.operator.v1alpha1/#GatewaySpec
> I just tested using the operator 1.5.1 with knative and it's working as expected: [...]
I verified this also works on my istio 1.6.3, and the syntax is much neater than the k8s resource overlay hacks above.
Anyone here tried to use the internal ingress gateway in an ingress object?
I'm doing this:
```yaml
- name: internal-ingressgateway
  enabled: true
  label:
    istio: internal-ingressgateway
    app: internal-istio-ingressgateway
  k8s:
    serviceAnnotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    hpaSpec:
      minReplicas: 2
    overlays:
    - apiVersion: apps/v1
      kind: Deployment
      name: internal-ingressgateway
      patches:
      - path: spec.template.spec.containers[0].args
        value:
        - --ingress-class=istio-internal
        - proxy
        - router
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --proxyLogLevel=warning
        - --proxyComponentLogLevel=misc:error
        - --log_output_level=default:info
        - --serviceCluster
        - internal-ingressgateway
        - --trust-domain=cluster.lo
```
I checked the logs of the internal-ingressgateway pod and I see this:
```yaml
ingressClass: istio
ingressControllerMode: STRICT
ingressService: istio-ingressgateway
```
Nothing really changed. This shouldn't be the case: it should use internal-ingressgateway as the ingressService, and the ingress class should be different.
I also tried it without the overlays, and I get the same log output.
Any help would be appreciated.
I'm using:
UPDATE: Just noticed this config map in the generated manifest:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
  labels:
    istio.io/rev: default
    release: istio
data:
  # Configuration file for the mesh networks to be used by the Split Horizon EDS.
  meshNetworks: |-
    networks: {}
  mesh: |-
    accessLogEncoding: TEXT
    accessLogFile: ""
    accessLogFormat: ""
    defaultConfig:
      concurrency: 2
      configPath: ./etc/istio/proxy
      connectTimeout: 10s
      controlPlaneAuthPolicy: NONE
      discoveryAddress: istiod.istio-system.svc:15012
      drainDuration: 45s
      parentShutdownDuration: 1m0s
      proxyAdminPort: 15000
      proxyMetadata:
        DNS_AGENT: ""
      serviceCluster: istio-proxy
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
    disableMixerHttpReports: true
    disablePolicyChecks: true
    enablePrometheusMerge: false
    ingressClass: istio ## <--
    ingressControllerMode: STRICT
    ingressService: istio-ingressgateway ## <--
    protocolDetectionTimeout: 100ms
    reportBatchMaxEntries: 100
    reportBatchMaxTime: 1s
    sdsUdsPath: unix:/etc/istio/proxy/SDS
    trustDomain: cluster.local
    trustDomainAliases: null
```
It's being used in the ingressgateway pods. I can't seem to find a way to generate one for the internal pod
@IbraheemAlSaady you are changing the settings for the gateway. The Ingress is controlled by pilot/istiod, which is where the config needs to live. You would need to modify the configmap (or, during install, --set meshConfig.ingressClass=istio-internal).
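In the operator API that would look something like the sketch below; the ingressClass and ingressService fields are the same ones visible in the generated ConfigMap above, and internal-ingressgateway assumes the internal gateway's Service name from the earlier config:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    # Ingress objects annotated with this class are handled by istiod
    ingressClass: istio-internal
    # Service whose address is reported in the Ingress status
    ingressService: internal-ingressgateway
```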
@howardjohn thanks for the response. I'm using the Istio Operator. I want to keep the istio class and also have another class, istio-internal; that means, though, that I probably need an istiod in a different namespace, am I correct? How is that possible with the Istio Operator? Or do I need a separate Istio Operator config to achieve this?
In my case I was creating a public gateway. I wanted external-dns to create records for two types of virtual services depending on the gateway. After lots of research and help from this thread I got it working. BTW, I found one thing that needed tweaking in @Foltik's answer.
```yaml
- kind: Gateway
  name: ingressgateway
  patches:
  - path: metadata.name
    value: ingressgateway-internal
  - path: spec.selector.istio
    value: ingressgateway-internal # the prefix istio had to be removed so the gateway can match to the ingress-gateway
```
Anyway, here is my full working setup:
```yaml
ingressGateways:
- name: istio-ingressgateway-public
  enabled: true
  k8s:
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "false"
    env:
    - name: ISTIO_META_ROUTER_MODE
      value: sni-dnat
    hpaSpec:
      maxReplicas: 5
      metrics:
      - resource:
          name: cpu
          targetAverageUtilization: 80
        type: Resource
      minReplicas: 1
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-ingressgateway-public
    resources:
      limits:
        cpu: 2000m
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 128Mi
    service:
      ports:
      - name: status-port
        port: 15020
        targetPort: 15020
      - name: http2
        port: 80
        targetPort: 80
      - name: https
        port: 443
      - name: kiali
        port: 15029
        targetPort: 15029
      - name: prometheus
        port: 15030
        targetPort: 15030
      - name: grafana
        port: 15031
        targetPort: 15031
      - name: tracing
        port: 15032
        targetPort: 15032
      - name: tls
        port: 15443
        targetPort: 15443
      - name: tcp
        port: 31400
    strategy:
      rollingUpdate:
        maxSurge: 100%
        maxUnavailable: 25%
    overlays:
    - kind: HorizontalPodAutoscaler
      name: istio-ingressgateway-public
      patches:
      - path: metadata.labels.app
        value: istio-ingressgateway-public
      - path: metadata.labels.istio
        value: ingressgateway-public
      - path: spec.scaleTargetRef.name
        value: istio-ingressgateway-public
    - kind: Deployment
      name: istio-ingressgateway-public
      patches:
      - path: metadata.labels.app
        value: istio-ingressgateway-public
      - path: metadata.labels.istio
        value: ingressgateway-public
      - path: spec.selector.matchLabels.app
        value: istio-ingressgateway-public
      - path: spec.selector.matchLabels.istio
        value: ingressgateway-public
      - path: spec.template.metadata.labels.app
        value: istio-ingressgateway-public
      - path: spec.template.metadata.labels.istio
        value: ingressgateway-public
    - kind: Service
      name: istio-ingressgateway-public
      patches:
      - path: metadata.labels.app
        value: istio-ingressgateway-public
      - path: metadata.labels.istio
        value: ingressgateway-public
      - path: spec.selector.app
        value: istio-ingressgateway-public
      - path: spec.selector.istio
        value: ingressgateway-public
    - kind: Gateway
      name: ingressgateway
      patches:
      - path: metadata.name
        value: ingressgateway-public
      - path: spec.selector.istio
        value: ingressgateway-public
    - kind: PodDisruptionBudget
      name: ingressgateway
      patches:
      - path: metadata.name
        value: ingressgateway-public
      - path: metadata.labels.app
        value: istio-ingressgateway-public
      - path: metadata.labels.istio
        value: ingressgateway-public
      - path: spec.selector.matchLabels.app
        value: istio-ingressgateway-public
      - path: spec.selector.matchLabels.istio
        value: ingressgateway-public
```
I then add a Gateway with my custom certificate:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-public
spec:
  selector:
    istio: ingressgateway-public
  servers:
```
Example VirtualService:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-vs
spec:
  gateways:
```
@konokimo The spec.selector.matchLabels field is immutable and cannot be patched. Is there another way?
// Ok, found it: just make sure the deployment was not created with the wrong overlay before!
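(A sketch of that cleanup, assuming the gateway deployment is named istio-ingressgateway-2 in istio-system: since a Deployment's selector is immutable, delete the stale object and let the operator recreate it with the patched labels.)
```sh
# The selector is immutable, so remove the old Deployment first;
# the operator reconciles and recreates it with the new overlay applied
kubectl -n istio-system delete deployment istio-ingressgateway-2
```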