Bug description
Gateway does not appear to be routing gRPC requests to the correct endpoint, or is somehow disrupting the request.
I tried using the discussion forums but cannot attach files.
istio-dump.tar.gz
Affected product area (please put an X in all that apply)
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
Expected behavior
I have a Gateway, VirtualService, Service, and Deployment configured, but I am having trouble identifying the source of a routing or communication issue. When I try to send a gRPC request using the grpc_cli client, as shown below, it fails:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export CLUSTER_INGRESS=$INGRESS_HOST:$INGRESS_PORT
echo $CLUSTER_INGRESS
10.0.2.15:31380
grpc_cli ls $CLUSTER_INGRESS dev.cognizant_ai.experiment
Received an error when querying services endpoint.
Reflection request not implemented; is the ServerReflection service enabled?
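One way to rule out the server itself is to bypass the gateway and query the pod directly. A rough sketch of that check (not a verbatim transcript; it assumes the container serves plaintext gRPC with reflection enabled on port 30001):
# Sketch only: bypass Istio by port-forwarding straight to the Deployment
kubectl port-forward deployment/experiment-v1 30001:30001 &
# With reflection enabled on the server, this should list the dev.cognizant_ai.experiment services
grpc_cli ls localhost:30001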
On investigation I find these messages are hitting the istio-ingressgateway as follows:
kubectl logs -n istio-system istio-ingressgateway-859977c87-bnjz9 -
[libprotobuf INFO src/istio/mixerclient/check_cache.cc:160] Add a new Referenced for check cache: Absence-keys: destination.port, destination.service, destination.uid, source.ip, Exact-keys: context.protocol, context.reporter.kind, source.namespace, source.uid,
[2019-06-05T23:44:20.621Z] "POST /grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo HTTP/2" 200 NR 7 0 9 - "10.1.1.1" "grpc-c++/1.21.2 grpc-c/7.0.0 (linux; chttp2; gandalf)" "1d007a71-5d27-9686-abbc-a705a10c8213" "10.0.2.15:31380" "-" - - 10.1.1.131:80 10.1.1.1:34372
So my gRPC request is getting into the system; all well and good.
istioctl proxy-status
Stderr when execute [/usr/local/bin/pilot-discovery request GET /debug/syncz ]: gc 1 @0.014s 10%: 0.38+2.7+1.3 ms clock, 2.3+0.17/1.2/1.8+8.3 ms cpu, 4->4->1 MB, 5 MB goal, 6 P
gc 2 @0.026s 13%: 0.005+1.3+1.8 ms clock, 0.033+0.33/0.97/1.2+11 ms cpu, 4->4->2 MB, 5 MB goal, 6 P
PROXY CDS LDS EDS RDS PILOT VERSION
downstream-v1-5675644554-pgpqg.default SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-5ffcbc484f-d4kc5 1.0.2
experiment-v1-f7bdc86db-s6fgg.default SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-5ffcbc484f-d4kc5 1.0.2
istio-egressgateway-6d5cfb474-pvznt.istio-system SYNCED SYNCED SYNCED (100%) NOT SENT istio-pilot-5ffcbc484f-d4kc5 1.0.2
istio-ingressgateway-859977c87-bnjz9.istio-system SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-5ffcbc484f-d4kc5 1.0.2
sleep-c76799b7-r4cj4.default SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-5ffcbc484f-d4kc5 1.0.2
However, the request should be answered with the reflection results from my Deployment container, but instead returns an error, so I am assuming the requests are not terminating on the service I have configured. I looked at the istio-proxy sidecar for the service and no messages are seen entering the pod, although I can see my standard outbound traffic to other services in the mesh being responded to successfully.
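Roughly, those checks looked like the following (pod names are the ones shown in the proxy-status output above; the exact istioctl syntax may differ between Istio versions):
# Tail the istio-proxy sidecar of the experiment pod to see whether inbound requests arrive
kubectl logs -n default experiment-v1-f7bdc86db-s6fgg -c istio-proxy
# Dump the routes Pilot has pushed to the ingress gateway (syntax approximate for this version)
istioctl proxy-config routes istio-ingressgateway-859977c87-bnjz9.istio-system -o json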
The routes etc. also look good. Here is the configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: grpc-ingress
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grpc-experiment-service
spec:
  gateways:
  - ingress-gateway
  hosts:
  - experiment.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/"
    # - uri:
    #     prefix: "/dev.cognizant_ai.experiment.Service/"
    # - uri:
    #     prefix: "/grpc.reflection.v1alpha.ServerReflection/"
    route:
    - destination:
        host: experiment.default.svc.cluster.local
        port:
          number: 30001
---
apiVersion: v1
kind: Service
metadata:
  name: experiment
  labels:
    app: experiment
spec:
  ports:
  - port: 30001
    name: grpc-exp
    targetPort: 30001
  selector:
    app: experiment
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: experiment-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: experiment
        version: v1
    spec:
      containers:
      - name: experiment
        image: localhost:32000/platform-services/experimentsrv:0.5.1-feature-12-microk8s-attempt-aaaagjscroj
        imagePullPolicy: Always
        resources:
          requests:
            memory: "2048Mi"
            cpu: "100m"
          limits:
            memory: "2048Mi"
            cpu: "100m"
        ports:
        - containerPort: 30001
          name: grpc-exp
        env:
        - name: "LOGXI_FORMAT"
          value: "happy,maxcol=1024"
        - name: "LOGXI"
          value: "*=TRC"
        - name: "IP_PORT"
          value: ":30001,0.0.0.0:30001"
        - name: "PGHOST"
          valueFrom:
            secretKeyRef:
              name: postgres
              key: host
        - name: "PGPORT"
          valueFrom:
            secretKeyRef:
              name: postgres
              key: port
        - name: "PGDATABASE"
          valueFrom:
            secretKeyRef:
              name: postgres
              key: database
        - name: "PGUSER"
          valueFrom:
            secretKeyRef:
              name: postgres
              key: username
        - name: "PGPASSWORD"
          valueFrom:
            secretKeyRef:
              name: postgres
              key: password
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: auth0-egress
spec:
  hosts:
  - "*.ignored.org"
  addresses:
  - 54.149.162.63/27
  ports:
  - name: auth-0
    number: 443
    protocol: tcp
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: psql-egress
spec:
  hosts:
  - "wondrous-sturgeon-postgresql.default.svc.cluster.local"
  ports:
  - name: psql
    number: 5432
    protocol: tcp
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: psql-egress
spec:
  host: "wondrous-sturgeon-postgresql.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
Steps to reproduce the bug
Version (include the output of istioctl version --remote and kubectl version)
istioctl version
Version: 1.0.5
GitRevision: c1707e45e71c75d74bf3a5dec8c7086f32f32fad
User: root@6f6ea1061f2b
Hub: docker.io/istio
GolangVersion: go1.10.4
BuildStatus: Clean
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
How was Istio installed?
microk8s
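(Assuming the bundled addon was used rather than a manual Helm install, the install would have been something like the following:)
# Assumption: Istio enabled through the microk8s addon, not installed by hand
microk8s.enable istio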
Environment where bug was observed (cloud vendor, OS, etc)
Workstation
Do you have a DestinationRule configured for the service?
I believe we can close this.
After subsequent upgrades to 1.2 the problem has not re-appeared.
I don't know what the cause would have been, but without a solid gRPC example/testing application I think it's not worth returning to the issue unless we see it again.
Thanks for your message
Using Istio v1.3.4 I struggle to use the configuration you've provided. I get a 404 when I attempt to run an HTTP request to the gateway, and I get the message "Received an error when querying services endpoint." when I use grpc_cli ls.
Any ideas?
Ideally there would be an example of a gRPC service exposed to the public, but there isn't really one anywhere.