Istio: Istio-proxy fails to start with Istio 1.1

Created on 24 Oct 2018 · 139 comments · Source: istio/istio

I have an AKS test-cluster with a few sample apps deployed. With Istio 1.0.2, both the app container and the istio-proxy container start as expected. After removing Istio and the sample app, installing the Istio daily build istio-release-1.1-20181021-09-15 from scratch, and then redeploying the sample apps, the sidecar proxy fails to start, logging the following:

<*snip* start>

2018-10-24T12:29:06.122324Z info    Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster nginx-1-pod.default --service-node sidecar~10.244.0.17~nginx-1-pod-6d86955d8d-xhj79.default~default.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warning --v2-config-only]
[2018-10-24 12:29:06.144][19][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:243] gRPC config stream closed: 14, no healthy upstream
[2018-10-24 12:29:06.144][19][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:43] Unable to establish new stream
[2018-10-24 12:29:07.431][19][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
[2018-10-24 12:29:07.432][19][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:

<*snip* multiple identical log lines>

[2018-10-24 12:29:07.498][48][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
[2018-10-24 12:29:07.498][48][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
2018-10-24T12:29:07.997196Z info    Envoy proxy is NOT ready: 3 errors occurred:

* failed checking application ports. listeners="0.0.0.0:15090","10.244.0.17:8081","10.0.253.20:443","10.0.253.20:31400","10.0.184.140:15011","10.0.69.228:42422","10.0.178.7:443","10.0.0.1:443","10.0.132.238:80","10.0.165.166:443","10.0.225.39:443","0.0.0.0:15004","0.0.0.0:80","0.0.0.0:15030","0.0.0.0:8080","0.0.0.0:9093","0.0.0.0:8060","0.0.0.0:8081","0.0.0.0:9091","0.0.0.0:15029","0.0.0.0:15031","0.0.0.0:15010","0.0.0.0:9901","0.0.0.0:15032","0.0.0.0:9090","10.244.0.17:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 80
2018-10-24T12:29:09.997207Z info    Envoy proxy is NOT ready: 3 errors occurred:

* failed checking application ports. listeners="0.0.0.0:15090","10.244.0.17:8081","10.0.253.20:443","10.0.253.20:31400","10.0.184.140:15011","10.0.69.228:42422","10.0.178.7:443","10.0.0.1:443","10.0.132.238:80","10.0.165.166:443","10.0.225.39:443","0.0.0.0:15004","0.0.0.0:80","0.0.0.0:15030","0.0.0.0:8080","0.0.0.0:9093","0.0.0.0:8060","0.0.0.0:8081","0.0.0.0:9091","0.0.0.0:15029","0.0.0.0:15031","0.0.0.0:15010","0.0.0.0:9901","0.0.0.0:15032","0.0.0.0:9090","10.244.0.17:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 80

<etc., repeating every two seconds>

In the listener list, 10.244.0.17 is the pod IP. According to https://istio.io/help/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration, I should see _a virtual listener on the pod IP for each exposed port for inbound traffic_, but I can only find entries for 10.244.0.17:15020 and 10.244.0.17:8081, the latter being the _service_ port. What could have gone wrong here?
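(For anyone hitting the same symptom: a sketch of how to inspect the listeners Envoy actually received, using the pod name from the log above as a placeholder. istioctl proxy-config is covered in the linked deep-dive doc; the second command queries the Envoy admin endpoint directly and assumes curl is available in the proxy image.)

istioctl proxy-config listeners nginx-1-pod-6d86955d8d-xhj79
kubectl exec nginx-1-pod-6d86955d8d-xhj79 -c istio-proxy -- curl -s localhost:15000/listeners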

Sample app configuration: sample_app.yaml.txt

Version
Kubernetes: 1.11.2
Istio: Version:"release-1.1-20181021-09-15", GitRevision:"bd24a62648c07e24ca655c39727aeb0e4761919a"

Installation
Using Helm 2.9.1:

helm install ${ISTIO_HOME}/install/kubernetes/helm/istio --name istio --namespace istio-system --tls --wait \
    --set global.configValidation=true \
    --set sidecarInjectorWebhook.enabled=true \
    --set gateways.istio-ingressgateway.loadBalancerIP=${PUBLIC_IP}

Environment
MS Azure, AKS

Labels: area/networking, area/user experience

Most helpful comment

So the issue here is coming down to the list of application ports that the readiness probe is using.

Initially, the readiness probe just waited for the 1st update from Pilot before marking the container as "ready". That wasn't quite enough, however, since k8s would occasionally send partial lists of endpoints to Pilot/Envoy, which means that some inbound ports were not configured by the time the container would go "ready". In this case, there would be a small period of 503s until the rest of the endpoints were configured.

To address this, we added the --applicationPorts flag, which when set will require that all ports in the list be received by Envoy before marking the container as "ready". By default, this is set to the list of ports exposed by the container.

This seems to work when service ports match deployment ports. When there's a mismatch, however, this seems to be causing issues.

You can work around this by setting the readiness.status.sidecar.istio.io/applicationPorts annotation in your deployment. It's just a comma-separated list of port numbers. If empty (""), this part of the readiness check will be skipped entirely and the readiness probe will go back to just looking for the first update from pilot.
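A minimal sketch of where that annotation goes, using a hypothetical Deployment whose container port differs from its Service port (all names and ports here are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Wait only for port 8080 before marking the sidecar ready;
        # set to "" to skip the port check entirely.
        readiness.status.sidecar.istio.io/applicationPorts: "8080"
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        ports:
        - containerPort: 8080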

All 139 comments

We are seeing this as well in Amazon EKS using the prelim istio 1.1 snapshot 2 build.

cc @costinm @rshriram for further triage

May or may not be related to this issue https://github.com/istio/istio/issues/9472

@fhoy, @disophisis Is this issue resolved for you? I am still running into this error after upgrading to the latest daily build.

I haven't had the opportunity to retest this after #9633 was merged, unfortunately. Will try to do so soon.

@fhoy To clarify, your attached sample app seems to be working, but I am still seeing the error when I create a deployment without a service. Maybe I'm wrong, but I didn't think a service needed to be defined in order for istio-proxy to be healthy?

@hzxuzhonghu @rshriram
I'm still seeing this error using the latest 1.1 build that includes #9633.

Update:
Ah, it seems my application suddenly works. It's weird that the same errors appear at first; see my log below:

$ kubectl logs -f httpbin-8586cdb55c-m6zgt -c istio-proxy
2018-11-08T03:16:03.358924Z     info    FLAG: --appLiveUrl=""
2018-11-08T03:16:03.358996Z     info    FLAG: --appReadyUrl=""
2018-11-08T03:16:03.359006Z     info    FLAG: --applicationPorts="[80]"
2018-11-08T03:16:03.359010Z     info    FLAG: --binaryPath="/usr/local/bin/envoy"
2018-11-08T03:16:03.359014Z     info    FLAG: --concurrency="0"
2018-11-08T03:16:03.359043Z     info    FLAG: --configPath="/etc/istio/proxy"
2018-11-08T03:16:03.359048Z     info    FLAG: --connectTimeout="10s"
2018-11-08T03:16:03.359053Z     info    FLAG: --controlPlaneAuthPolicy="MUTUAL_TLS"
2018-11-08T03:16:03.359057Z     info    FLAG: --customConfigFile=""
2018-11-08T03:16:03.359087Z     info    FLAG: --disableInternalTelemetry="false"
2018-11-08T03:16:03.359092Z     info    FLAG: --discoveryAddress="istio-pilot.istio-system:15011"
2018-11-08T03:16:03.359095Z     info    FLAG: --domain=""
2018-11-08T03:16:03.359098Z     info    FLAG: --drainDuration="45s"
2018-11-08T03:16:03.359100Z     info    FLAG: --help="false"
2018-11-08T03:16:03.359109Z     info    FLAG: --id=""
2018-11-08T03:16:03.359111Z     info    FLAG: --ip=""
2018-11-08T03:16:03.359114Z     info    FLAG: --lightstepAccessToken=""
2018-11-08T03:16:03.359116Z     info    FLAG: --lightstepAddress=""
2018-11-08T03:16:03.359119Z     info    FLAG: --lightstepCacertPath=""
2018-11-08T03:16:03.359121Z     info    FLAG: --lightstepSecure="false"
2018-11-08T03:16:03.359124Z     info    FLAG: --log_as_json="false"
2018-11-08T03:16:03.359127Z     info    FLAG: --log_caller=""
2018-11-08T03:16:03.359129Z     info    FLAG: --log_output_level="default:info"
2018-11-08T03:16:03.359132Z     info    FLAG: --log_rotate=""
2018-11-08T03:16:03.359136Z     info    FLAG: --log_rotate_max_age="30"
2018-11-08T03:16:03.359145Z     info    FLAG: --log_rotate_max_backups="1000"
2018-11-08T03:16:03.359153Z     info    FLAG: --log_rotate_max_size="104857600"
2018-11-08T03:16:03.359161Z     info    FLAG: --log_stacktrace_level="default:none"
2018-11-08T03:16:03.359179Z     info    FLAG: --log_target="[stdout]"
2018-11-08T03:16:03.359188Z     info    FLAG: --parentShutdownDuration="1m0s"
2018-11-08T03:16:03.359203Z     info    FLAG: --proxyAdminPort="15000"
2018-11-08T03:16:03.359212Z     info    FLAG: --proxyLogLevel="warning"
2018-11-08T03:16:03.359220Z     info    FLAG: --serviceCluster="httpbin.default"
2018-11-08T03:16:03.359228Z     info    FLAG: --serviceregistry="Kubernetes"
2018-11-08T03:16:03.359236Z     info    FLAG: --statsdUdpAddress=""
2018-11-08T03:16:03.359255Z     info    FLAG: --statusPort="15020"
2018-11-08T03:16:03.359263Z     info    FLAG: --templateFile=""
2018-11-08T03:16:03.359272Z     info    FLAG: --zipkinAddress="zipkin.istio-system:9411"
2018-11-08T03:16:03.359298Z     info    Version [email protected]/istio-249d214ff77f68048d1f97a18193e641dd3c4ce8-dirty-249d214ff77f68048d1f97a18193e641dd3c4ce8-dirty-Modified
2018-11-08T03:16:03.359354Z     info    Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.12.0.111", ID:"httpbin-8586cdb55c-m6zgt.default", Domain:"default.svc.cluster.local", Metadata:map[string]string(nil)}
2018-11-08T03:16:03.359949Z     info    Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 10s
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istio-pilot.istio-system:15011
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: httpbin.default
statNameLength: 189
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2018-11-08T03:16:03.359976Z     info    Monitored certs: []envoy.CertSource{envoy.CertSource{Directory:"/etc/certs/", Files:[]string{"cert-chain.pem", "key.pem", "root-cert.pem"}}}
2018-11-08T03:16:03.360104Z     info    Opening status port 15020

2018-11-08T03:16:03.360333Z     info    Starting proxy agent
2018-11-08T03:16:03.360771Z     info    Received new config, resetting budget
2018-11-08T03:16:03.360791Z     info    Reconciling retry (budget 10)
2018-11-08T03:16:03.360814Z     info    Epoch 0 starting
2018-11-08T03:16:03.361471Z     info    Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster httpbin.default --service-node sidecar~10.12.0.111~httpbin-8586cdb55c-m6zgt.default~default.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warning --v2-config-only]
[2018-11-08 03:16:03.390][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:243] gRPC config stream closed: 14, no healthy upstream
[2018-11-08 03:16:03.390][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:43] Unable to establish new stream
[2018-11-08 03:16:03.635][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:243] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2018-11-08T03:16:04.859688Z     info    Envoy proxy is NOT ready: 3 errors occurred:

* failed checking application ports. listeners="0.0.0.0:15090"
* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 80
<*snip* the same "NOT ready" block and intermittent gRPC config stream warnings repeat every two seconds until 03:16:48>
[2018-11-08 03:16:49.732][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:188] Ignoring unwatched type URL type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
[2018-11-08 03:16:49.740][16][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
[2018-11-08 03:16:49.743][29][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
[2018-11-08 03:16:49.743][28][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
[2018-11-08 03:16:49.745][32][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
[2018-11-08 03:16:49.747][27][warning][config] src/envoy/utils/mixer_control.cc:171] ExtractInfo  metadata missing:
[2018-11-08 03:16:49.956][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:243] gRPC config stream closed: 13,
[2018-11-08 03:16:49.989][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:243] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
[2018-11-08 03:16:50.629][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:243] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2018-11-08T03:16:50.863223Z     info    Envoy proxy is ready
[2018-11-08 03:16:52.000][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:243] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
<*snip* similar gRPC config stream warnings continue intermittently after the proxy reports ready>

It sounds like we are good to close this if someone can clarify whether it's expected for istio-proxy to fail to start without a service object.

Nate: can you take a look? It seems the readiness check is causing this. The case is a pod that has container ports but no service, which can happen if a port is opened for Prometheus scraping, for example.

You should be able to disable the readiness check by setting the annotation status.sidecar.istio.io/port: "0" in your deployment.
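(A sketch of that workaround in a pod template, assuming a hypothetical deployment that only exposes a metrics port and has no Service:)

template:
  metadata:
    annotations:
      # Disables the sidecar status port, and with it the readiness probe
      status.sidecar.istio.io/port: "0"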

I am also facing this issue with istio-1.1.0-snapshot.3.

kubectl version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3"
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.9-gke.5"

Surprisingly, it was working for a day or so and it suddenly stopped working today.

So the issue here is coming down to the list of application ports that the readiness probe is using.

Initially, the readiness probe just waited for the 1st update from Pilot before marking the container as "ready". That wasn't quite enough, however, since k8s would occasionally send partial lists of endpoints to Pilot/Envoy, which means that some inbound ports were not configured by the time the container would go "ready". In this case, there would be a small period of 503s until the rest of the endpoints were configured.

To address this, we added the --applicationPorts flag, which when set will require that all ports in the list be received by Envoy before marking the container as "ready". By default, this is set to the list of ports exposed by the container.

This seems to work when service ports match deployment ports. When there's a mismatch, however, this seems to be causing issues.

You can work around this by setting the readiness.status.sidecar.istio.io/applicationPorts annotation in your deployment. It's just a comma-separated list of port numbers. If empty (""), this part of the readiness check will be skipped entirely and the readiness probe will go back to just looking for the first update from pilot.

I can confirm the workaround.
In my case, I used an internal workload to invoke some services within the namespace without exposing a service.

I encountered the same issue when running this task with snapshot-6: https://preliminary.istio.io/docs/tasks/traffic-management/circuit-breaking/. The fortio-deploy pod fails with the following sidecar proxy errors:

* failed checking application ports. listeners="0.0.0.0:15090","10.104.244.195:42422","10.99.42.176:14267","10.107.210.165:15011","10.108.133.225:16686","10.96.0.1:443","10.110.243.225:15443","10.98.124.160:443","10.98.163.205:31400","10.98.163.205:15030","10.98.163.205:443","10.98.163.205:15032","10.96.65.52:443","10.98.163.205:15031","10.110.243.225:443","10.98.163.205:15443","10.98.163.205:15029","10.99.42.176:14268","0.0.0.0:9091","0.0.0.0:15004","0.0.0.0:15010","0.0.0.0:9090","0.0.0.0:3000","0.0.0.0:8060","0.0.0.0:8000","0.0.0.0:80","0.0.0.0:9411","0.0.0.0:15014","0.0.0.0:8088","0.0.0.0:9901","0.0.0.0:8080","172.17.0.8:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 8080
* envoy missing listener for inbound application port: 8079
2019-02-19T02:45:20.973618Z info    Envoy proxy is NOT ready: 5 errors occurred:

* failed checking application ports. listeners="0.0.0.0:15090","10.104.244.195:42422","10.99.42.176:14267","10.107.210.165:15011","10.108.133.225:16686","10.96.0.1:443","10.110.243.225:15443","10.98.124.160:443","10.98.163.205:31400","10.98.163.205:15030","10.98.163.205:443","10.98.163.205:15032","10.96.65.52:443","10.98.163.205:15031","10.110.243.225:443","10.98.163.205:15443","10.98.163.205:15029","10.99.42.176:14268","0.0.0.0:9091","0.0.0.0:15004","0.0.0.0:15010","0.0.0.0:9090","0.0.0.0:3000","0.0.0.0:8060","0.0.0.0:8000","0.0.0.0:80","0.0.0.0:9411","0.0.0.0:15014","0.0.0.0:8088","0.0.0.0:9901","0.0.0.0:8080","172.17.0.8:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 8080
* envoy missing listener for inbound application port: 8079

Saw this myself just today on 1.1.0-snapshot.6. I had installed the bookinfo demo, which was working... then I deleted the namespace and reinstalled it, and a bunch of sidecar proxies failed with the errors reported by others here.

If I deleted one pod individually (productpage, for example), it would restart fine and the proxy container would start fine. But if I deleted all the pods at once, most would come back up with the proxies reporting those errors again.

@knrc FYI We have a reproducer for this, as mentioned on #11979, and I'm actively investigating

@knrc will do this. Thanks Kevin

This happens 100% of the time on my Azure cluster with the manifest in https://github.com/istio/istio/issues/11979.

Cheers
-steve

Not sure if this is helpful for everyone, but it may be helpful for some: https://github.com/istio/istio/issues/11979#issuecomment-466589814

TL;DR: these errors will result if the service selector and deployment labels don't match. The error looks like a readiness check problem.
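For reference, a minimal sketch (hypothetical names throughout) of the matching that has to hold between the two objects; per the linked comment, a mismatch between the Service selector and the pod template labels produces these readiness errors:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app          # must match the Deployment's pod template labels
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app      # a mismatch here triggers the errors above
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        ports:
        - containerPort: 8080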

Setting the readiness.status.sidecar.istio.io/applicationPorts annotation in my deployment, as described above, resolved these errors:

* envoy missing listener for inbound application port: 0
* envoy missing listener for inbound application port: 8888

(Here's the annotation:)

annotations:
  readiness.status.sidecar.istio.io/applicationPorts: "8888"

And this is now what I see in the istio-proxy logs:

2019-02-25T07:22:14.018623Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-02-25T07:22:16.024383Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-02-25 07:22:17.686][23][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-02-25T07:22:18.019660Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-02-25T07:22:20.019287Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

This exactly matches the output of the egress gateway:

kubectl logs -n istio-system istio-egressgateway-5b464d55c6-knshx

The problem is that my service is still dead and inaccessible. When I remove the injection label and reinstall (upgrade) my Helm chart for the service, it works (though of course without Istio).
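(A sketch of toggling the injection label referred to here, assuming the default namespace-level, label-based injection; the namespace name is a placeholder:)

kubectl label namespace default istio-injection=enabled   # turn automatic injection on
kubectl label namespace default istio-injection-          # remove the label again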

When I ran kubectl describe pods <pod name>, I saw an error indicating the probe check couldn't access the node's IP.

Not sure what else is causing the pod to die.

Also, are these statuses normal?

$ kubectl get pods -n istio-system
NAME                                      READY     STATUS      RESTARTS   AGE
grafana-57586c685b-2k8w9                  1/1       Running     0          17h
istio-citadel-5fb6bd6d7f-2pgs8            1/1       Running     0          17h
istio-egressgateway-5b464d55c6-knshx      0/1       Running     0          17h
istio-galley-867c55cf6-zw8l4              1/1       Running     0          17h
istio-ingressgateway-5979b99885-vrvxh     0/1       Running     0          17h
istio-init-crd-10-v42fz                   0/1       Completed   0          17h
istio-init-crd-11-6xf2h                   0/1       Completed   0          17h
istio-init-crd-certmanager-10-4xbx4       0/1       Completed   0          17h
istio-pilot-7d954dccdd-xw89j              0/2       Pending     0          17h
istio-policy-6bc4b8c6c8-44lfh             0/2       Pending     0          17h
istio-sidecar-injector-5769c94686-vcw72   1/1       Running     0          17h
istio-telemetry-dd88858cb-x8wnc           0/2       Pending     0          17h
prometheus-68c5594ddc-5gvtt               1/1       Running     0          17h
servicegraph-7bc568bfd9-6n5sh             1/1       Running     0          17h

@markserrano915 before we spend too much time investigating the sidecar, it looks like Pilot itself isn't starting. See anything interesting in the Pilot logs?

> @markserrano915 before we spend too much time investigating the sidecar, it looks like Pilot itself isn't starting. See anything interesting in the Pilot logs?

Unfortunately there is nothing in the logs:

$ kubectl logs -n istio-system istio-pilot-7d954dccdd-xw89j -c istio-proxy
$ kubectl logs -n istio-system istio-pilot-7d954dccdd-xw89j -c discovery

Hmm ... I wonder if this is related to what @sdake was seeing?

@nmittler @sdake I tracked our issue down to a race between the pod connecting to pilot and SetServiceInstances being called. If SetServiceInstances was called before the endpoint was associated with the service, then it would set node.ServiceInstances to [].

I came up with a solution, but it looks as if I've just been beaten to it: https://github.com/istio/istio/pull/11999 implemented a similar fix. There's one minor logic bug not caught in that pull request, so I'll submit a fix for that.

@nmittler @sdake Submitted https://github.com/istio/istio/pull/12053/files to round it off

@nmittler @sdake I spoke too soon: that fix reduces the window but doesn't fix the race. I've now reproduced it on a build from early this morning. I'm still investigating.

Update from @knrc

PRs that fix this:

https://github.com/istio/istio/pull/11999 from @hzxuzhonghu
https://github.com/istio/istio/pull/12053 from @knrc

He will investigate more and possibly submit additional PRs.

CC: @hzxuzhonghu, @rshriram

@sdake @nmittler @duderino FYI, yesterday's failures were my fault: I had switched back to release-1.1, but it did not yet have #12053 merged, so I was still seeing the race. I rebuilt the containers yesterday afternoon and have not seen any failures since, so I believe our race is addressed by #11999 and #12053.

I'm continuing soak testing today and will raise another issue if it recurs; in the meantime, assume the issue we have been seeing is addressed.

Looks like this is fixed until proven otherwise. @knrc please reopen if your investigation finds anything.

Unfortunately I didn't see any difference after upgrading to the latest release. See https://github.com/istio/istio/releases/tag/1.1.0-rc.1.

To upgrade, I followed the instructions at https://istio.io/docs/setup/kubernetes/upgrading-istio/:

helm upgrade istio install/kubernetes/helm/istio --namespace istio-system

When I run kubectl get pods, I get the following status:

0/2     Init:CrashLoopBackOff   7          12m

Before the upgrade, the istio-system pods are:

kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-57586c685b-2k8w9                  1/1     Running     0          4d10h
istio-citadel-5fb6bd6d7f-2pgs8            1/1     Running     0          4d11h
istio-egressgateway-5b464d55c6-knshx      0/1     Running     0          4d11h
istio-galley-867c55cf6-zw8l4              1/1     Running     2          4d11h
istio-ingressgateway-5979b99885-vrvxh     0/1     Running     0          4d11h
istio-init-crd-10-v42fz                   0/1     Completed   0          4d11h
istio-init-crd-11-6xf2h                   0/1     Completed   0          4d11h
istio-init-crd-certmanager-10-4xbx4       0/1     Completed   0          4d11h
istio-pilot-7d954dccdd-xw89j              0/2     Pending     0          4d11h
istio-policy-6bc4b8c6c8-44lfh             0/2     Pending     0          4d11h
istio-sidecar-injector-5769c94686-vcw72   1/1     Running     0          4d11h
istio-telemetry-dd88858cb-x8wnc           0/2     Pending     0          4d11h
prometheus-68c5594ddc-5gvtt               1/1     Running     0          4d11h
servicegraph-7bc568bfd9-6n5sh             1/1     Running     0          4d10h

After the upgrade, I see the following pods for the istio-system namespace:
Note: the egressgateway pod no longer appears.

kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-57586c685b-2k8w9                  1/1     Running     0          4d11h
istio-citadel-567fbcc54d-nhdnm            1/1     Running     0          36m
istio-galley-696b6c5c9f-vwv6j             1/1     Running     0          36m
istio-ingressgateway-854d4fc448-vn65g     0/1     Running     0          36m
istio-init-crd-10-v42fz                   0/1     Completed   0          4d12h
istio-init-crd-11-6xf2h                   0/1     Completed   0          4d12h
istio-init-crd-certmanager-10-4xbx4       0/1     Completed   0          4d12h
istio-pilot-7d954dccdd-xw89j              0/2     Pending     0          4d12h
istio-pilot-85dfd87d48-v764h              0/2     Pending     0          36m
istio-policy-6958588bd9-8nbqg             0/2     Pending     0          36m
istio-policy-6bc4b8c6c8-44lfh             0/2     Pending     0          4d12h
istio-sidecar-injector-58d9f48c69-kpvmn   1/1     Running     0          36m
istio-telemetry-5d968fdf5-5sbtp           0/2     Pending     0          36m
istio-telemetry-dd88858cb-x8wnc           0/2     Pending     0          4d12h
prometheus-66c9f5694-rvvxp                1/1     Running     0          36m
servicegraph-6cb964f46d-zdlbv             1/1     Running     0          36m

Services:

kubectl get services -n istio-system
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                                                                                      AGE
grafana                  ClusterIP      10.245.240.205   <none>          3000/TCP                                                                                                                                     4d10h
istio-citadel            ClusterIP      10.245.63.102    <none>          8060/TCP,15014/TCP                                                                                                                           4d11h
istio-galley             ClusterIP      10.245.181.106   <none>          443/TCP,15014/TCP,9901/TCP                                                                                                                   4d11h
istio-ingressgateway     LoadBalancer   10.245.49.89     159.89.223.24   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30567/TCP,15030:30743/TCP,15031:30648/TCP,15032:31165/TCP,15443:31046/TCP,15020:30533/TCP   4d11h
istio-pilot              ClusterIP      10.245.157.224   <none>          15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       4d11h
istio-policy             ClusterIP      10.245.32.238    <none>          9091/TCP,15004/TCP,15014/TCP                                                                                                                 4d11h
istio-sidecar-injector   ClusterIP      10.245.94.91     <none>          443/TCP                                                                                                                                      4d11h
istio-telemetry          ClusterIP      10.245.90.99     <none>          9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       4d11h
prometheus               ClusterIP      10.245.205.144   <none>          9090/TCP                                                                                                                                     4d11h
servicegraph             ClusterIP      10.245.248.102   <none>          8088/TCP      

Ingress gateway logs:

 kubectl logs -n istio-system istio-ingressgateway-854d4fc448-vn65g
2019-03-01T01:03:26.872073Z info    FLAG: --applicationPorts="[]"
2019-03-01T01:03:26.872120Z info    FLAG: --binaryPath="/usr/local/bin/envoy"
2019-03-01T01:03:26.872124Z info    FLAG: --concurrency="0"
2019-03-01T01:03:26.872127Z info    FLAG: --configPath="/etc/istio/proxy"
2019-03-01T01:03:26.872133Z info    FLAG: --connectTimeout="10s"
2019-03-01T01:03:26.872135Z info    FLAG: --controlPlaneAuthPolicy="NONE"
2019-03-01T01:03:26.872139Z info    FLAG: --controlPlaneBootstrap="true"
2019-03-01T01:03:26.872141Z info    FLAG: --customConfigFile=""
2019-03-01T01:03:26.872143Z info    FLAG: --disableInternalTelemetry="false"
2019-03-01T01:03:26.872146Z info    FLAG: --discoveryAddress="istio-pilot:15010"
2019-03-01T01:03:26.872148Z info    FLAG: --domain="istio-system.svc.cluster.local"
2019-03-01T01:03:26.872151Z info    FLAG: --drainDuration="45s"
2019-03-01T01:03:26.872153Z info    FLAG: --help="false"
2019-03-01T01:03:26.872155Z info    FLAG: --id=""
2019-03-01T01:03:26.872157Z info    FLAG: --ip=""
2019-03-01T01:03:26.872159Z info    FLAG: --lightstepAccessToken=""
2019-03-01T01:03:26.872161Z info    FLAG: --lightstepAddress=""
2019-03-01T01:03:26.872163Z info    FLAG: --lightstepCacertPath=""
2019-03-01T01:03:26.872165Z info    FLAG: --lightstepSecure="false"
2019-03-01T01:03:26.872178Z info    FLAG: --log_as_json="false"
2019-03-01T01:03:26.872192Z info    FLAG: --log_caller=""
2019-03-01T01:03:26.872194Z info    FLAG: --log_output_level="info"
2019-03-01T01:03:26.872196Z info    FLAG: --log_rotate=""
2019-03-01T01:03:26.872198Z info    FLAG: --log_rotate_max_age="30"
2019-03-01T01:03:26.872200Z info    FLAG: --log_rotate_max_backups="1000"
2019-03-01T01:03:26.872202Z info    FLAG: --log_rotate_max_size="104857600"
2019-03-01T01:03:26.872205Z info    FLAG: --log_stacktrace_level="default:none"
2019-03-01T01:03:26.872211Z info    FLAG: --log_target="[stdout]"
2019-03-01T01:03:26.872213Z info    FLAG: --parentShutdownDuration="1m0s"
2019-03-01T01:03:26.872217Z info    FLAG: --proxyAdminPort="15000"
2019-03-01T01:03:26.872219Z info    FLAG: --proxyLogLevel="warning"
2019-03-01T01:03:26.872221Z info    FLAG: --serviceCluster="istio-ingressgateway"
2019-03-01T01:03:26.872224Z info    FLAG: --serviceregistry="Kubernetes"
2019-03-01T01:03:26.872226Z info    FLAG: --statsdUdpAddress=""
2019-03-01T01:03:26.872228Z info    FLAG: --statusPort="15020"
2019-03-01T01:03:26.872230Z info    FLAG: --templateFile=""
2019-03-01T01:03:26.872232Z info    FLAG: --trust-domain=""
2019-03-01T01:03:26.872234Z info    FLAG: --zipkinAddress="zipkin:9411"
2019-03-01T01:03:26.872246Z info    Version [email protected]/istio-1.1.0-rc.1-cdc39e70054be670d2c141dec7b8517a0812b021-Clean
2019-03-01T01:03:26.872360Z info    Obtained private IP [10.244.2.202 fe80::ccbc:21ff:fe77:7c97]
2019-03-01T01:03:26.872390Z info    Proxy role: &model.Proxy{ClusterID:"", Type:"router", IPAddresses:[]string{"10.244.2.202", "10.244.2.202", "fe80::ccbc:21ff:fe77:7c97"}, ID:"istio-ingressgateway-854d4fc448-vn65g.istio-system", Locality:(*core.Locality)(nil), DNSDomains:[]string(nil), ConfigNamespace:"", TrustDomain:"cluster.local", Metadata:map[string]string(nil), SidecarScope:(*model.SidecarScope)(nil), ServiceInstances:[]*model.ServiceInstance(nil)}
2019-03-01T01:03:26.872396Z info    PilotSAN []string(nil)
2019-03-01T01:03:26.872765Z info    Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot:15010
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: istio-ingressgateway
statNameLength: 189
tracing:
  zipkin:
    address: zipkin:9411

2019-03-01T01:03:26.872776Z info    Monitored certs: []envoy.CertSource{envoy.CertSource{Directory:"/etc/certs/", Files:[]string{"cert-chain.pem", "key.pem", "root-cert.pem"}}}
2019-03-01T01:03:26.872782Z info    PilotSAN []string(nil)
2019-03-01T01:03:26.872920Z info    Opening status port 15020

2019-03-01T01:03:26.873199Z info    Starting proxy agent
2019-03-01T01:03:26.873334Z info    Received new config, resetting budget
2019-03-01T01:03:26.873339Z info    Reconciling retry (budget 10)
2019-03-01T01:03:26.873347Z info    Epoch 0 starting
2019-03-01T01:03:26.873977Z info    Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-ingressgateway --service-node router~10.244.2.202~istio-ingressgateway-854d4fc448-vn65g.istio-system~istio-system.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warning]
[2019-03-01 01:03:26.935][11][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-03-01 01:03:26.935][11][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.Cluster.hosts'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-03-01 01:03:26.935][11][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.Cluster.hosts'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-03-01 01:03:26.935][11][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.Cluster.hosts'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-03-01 01:03:26.935][11][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.config.trace.v2.Tracing.Http.config'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-03-01 01:03:26.945][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, no healthy upstream
[2019-03-01 01:03:26.945][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:56] Unable to establish new stream
2019-03-01T01:03:27.802218Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:03:29.813306Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
<*snip* the same "NOT ready" line and intermittent gRPC config stream warnings repeat every two seconds>
2019-03-01T01:05:41.801335Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:43.809733Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:45.806383Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:47.807025Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:49.805622Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:51.801892Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:05:51.810][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:05:53.806010Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:55.804044Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:57.811131Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:05:59.801242Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:01.802243Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:06:02.598][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:06:03.809943Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:05.806377Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:07.812813Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:09.810482Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:11.808046Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:13.806770Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:15.804032Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:17.805442Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:19.802325Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:21.801657Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:23.805503Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:25.800889Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:27.811688Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:29.803148Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:31.802747Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:33.807107Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:06:34.630][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:06:35.806104Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:37.812526Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:39.803827Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:41.803684Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:43.801179Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:45.801680Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:47.805334Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:49.801301Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:51.801715Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:53.801282Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:55.808458Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:57.813067Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:06:59.801813Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:01.814660Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:03.804861Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:07:03.833][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:07:05.806509Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:07.824590Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:09.803744Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:11.802633Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:13.803898Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:15.801942Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:17.802791Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:19.808787Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:21.806765Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:23.807930Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:25.802735Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:07:26.612][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:07:27.812173Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:29.802304Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:31.808524Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:33.808825Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:35.807715Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:37.807806Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:07:38.247][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:07:39.801693Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:41.806466Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:43.806204Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:45.803412Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:47.808003Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:49.804144Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:51.802063Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:53.803778Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:55.802850Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:57.823397Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:07:59.808234Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:01.801476Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:03.804797Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:08:05.367][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:08:05.801293Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:07.827435Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:09.808531Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:11.800954Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:13.806271Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:15.805296Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:17.804924Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:19.803365Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:21.805416Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:23.809312Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:25.805753Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:27.826359Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:29.804100Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:31.801821Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:33.804925Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:35.804119Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:37.811445Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:39.805677Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:08:41.615][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:08:41.801599Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:43.809854Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:45.801726Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:47.804092Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:49.803383Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:51.801456Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-03-01T01:08:53.806968Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
[2019-03-01 01:08:54.232][11][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers
2019-03-01T01:08:55.807421Z info    Envoy proxy is NOT ready: cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

Pilot logs (both commands returned nothing):

s6:istio-1.1.0-rc.1 > kubectl logs -n istio-system istio-pilot-7d954dccdd-xw89j -c discovery
s6:istio-1.1.0-rc.1 > kubectl logs -n istio-system istio-pilot-7d954dccdd-xw89j -c istio-proxy

s6:istio-1.1.0-rc.1 > kubectl logs -n istio-system istio-pilot-85dfd87d48-v764h -c discovery
s6:istio-1.1.0-rc.1 > kubectl logs -n istio-system istio-pilot-85dfd87d48-v764h -c istio-proxy

Logs for my pod:

Error from server (BadRequest): container "istio-proxy" in pod "free-apis-7bf7759dc-lsqh9" is waiting to start: PodInitializing

Not sure what else could be the problem.

Upgrade logs

helm upgrade istio install/kubernetes/helm/istio --namespace istio-system
Release "istio" has been upgraded. Happy Helming!
LAST DEPLOYED: Thu Feb 28 19:03:14 2019
NAMESPACE: istio-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                                                DATA  AGE
istio-galley-configuration                                          1     4d11h
istio-grafana-custom-resources                                      2     4d10h
istio-grafana-configuration-dashboards-galley-dashboard             1     4d10h
istio-grafana-configuration-dashboards-mixer-dashboard              1     4d10h
istio-grafana-configuration-dashboards-pilot-dashboard              1     4d10h
istio-grafana-configuration-dashboards-istio-mesh-dashboard         1     4d10h
istio-grafana-configuration-dashboards-istio-workload-dashboard     1     4d10h
istio-grafana-configuration-dashboards-istio-performance-dashboard  1     4d10h
istio-grafana-configuration-dashboards-istio-service-dashboard      1     4d10h
istio-grafana                                                       2     4d10h
prometheus                                                          1     4d11h
istio                                                               2     4d11h
istio-sidecar-injector                                              1     4d11h

==> v1/ClusterRoleBinding
NAME                                                    AGE
istio-galley-admin-role-binding-istio-system            4d11h
istio-ingressgateway-istio-system                       4d11h
istio-grafana-post-install-role-binding-istio-system    4d10h
istio-mixer-admin-role-binding-istio-system             4d11h
istio-pilot-istio-system                                4d11h
prometheus-istio-system                                 4d11h
istio-citadel-istio-system                              4d11h
istio-sidecar-injector-admin-role-binding-istio-system  4d11h
istio-multi                                             4d11h

==> v1alpha2/attributemanifest
NAME        AGE
istioproxy  4d11h
kubernetes  4d11h

==> v1alpha2/kubernetes
NAME        AGE
attributes  4d11h

==> v1alpha3/DestinationRule
NAME             AGE
istio-telemetry  4d11h
istio-policy     4d11h

==> v1alpha2/handler
NAME           AGE
kubernetesenv  4d11h
prometheus     4d11h

==> v1alpha2/metric
NAME                  AGE
tcpconnectionsclosed  4d11h
tcpbytereceived       4d11h
responsesize          4d11h
requestcount          4d11h
requestsize           4d11h
requestduration       4d11h
tcpconnectionsopened  4d11h
tcpbytesent           4d11h

==> v1beta1/PodDisruptionBudget
NAME                  MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
istio-galley          1              N/A              0                    4d11h
istio-ingressgateway  1              N/A              0                    4d11h
istio-telemetry       1              N/A              0                    4d11h
istio-policy          1              N/A              0                    4d11h
istio-pilot           1              N/A              0                    4d11h

==> v1/ServiceAccount
NAME                                    SECRETS  AGE
istio-galley-service-account            1        4d11h
istio-ingressgateway-service-account    1        4d11h
istio-grafana-post-install-account      1        4d10h
istio-mixer-service-account             1        4d11h
istio-pilot-service-account             1        4d11h
prometheus                              1        4d11h
istio-citadel-service-account           1        4d11h
istio-sidecar-injector-service-account  1        4d11h
istio-multi                             1        4d11h

==> v1/Role
NAME                      AGE
istio-ingressgateway-sds  4d11h

==> v1/Service
NAME                    TYPE          CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                                                                                                     AGE
istio-galley            ClusterIP     10.245.181.106  <none>         443/TCP,15014/TCP,9901/TCP                                                                                                                  4d11h
istio-ingressgateway    LoadBalancer  10.245.49.89    159.89.223.24  80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30567/TCP,15030:30743/TCP,15031:30648/TCP,15032:31165/TCP,15443:31046/TCP,15020:30533/TCP  4d11h
grafana                 ClusterIP     10.245.240.205  <none>         3000/TCP                                                                                                                                    4d10h
istio-policy            ClusterIP     10.245.32.238   <none>         9091/TCP,15004/TCP,15014/TCP                                                                                                                4d11h
istio-telemetry         ClusterIP     10.245.90.99    <none>         9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                      4d11h
istio-pilot             ClusterIP     10.245.157.224  <none>         15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                      4d11h
prometheus              ClusterIP     10.245.205.144  <none>         9090/TCP                                                                                                                                    4d11h
istio-citadel           ClusterIP     10.245.63.102   <none>         8060/TCP,15014/TCP                                                                                                                          4d11h
servicegraph            ClusterIP     10.245.248.102  <none>         8088/TCP                                                                                                                                    4d10h
istio-sidecar-injector  ClusterIP     10.245.94.91    <none>         443/TCP                                                                                                                                     4d11h

==> v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
istio-galley            1        2        1           1          4d11h
istio-ingressgateway    1        1        1           0          4d11h
grafana                 1        1        1           1          4d10h
istio-telemetry         1        2        1           0          4d11h
istio-policy            1        2        1           0          4d11h
istio-pilot             1        2        1           0          4d11h
prometheus              1        1        1           0          4d11h
istio-citadel           1        2        1           1          4d11h
servicegraph            1        1        1           0          4d10h
istio-sidecar-injector  1        2        1           1          4d11h

==> v2beta1/HorizontalPodAutoscaler
NAME                  REFERENCE                        TARGETS        MINPODS  MAXPODS  REPLICAS  AGE
istio-ingressgateway  Deployment/istio-ingressgateway  <unknown>/80%  1        5        1         4d11h
istio-policy          Deployment/istio-policy          <unknown>/80%  1        5        1         4d11h
istio-telemetry       Deployment/istio-telemetry       <unknown>/80%  1        5        1         4d11h
istio-pilot           Deployment/istio-pilot           <unknown>/80%  1        5        1         4d11h

==> v1alpha2/rule
NAME                     AGE
promhttp                 4d11h
promtcpconnectionopen    4d11h
promtcp                  4d11h
kubeattrgenrulerule      4d11h
promtcpconnectionclosed  4d11h
tcpkubeattrgenrulerule   4d11h

==> v1/Pod(related)
NAME                                     READY  STATUS             RESTARTS  AGE
istio-galley-696b6c5c9f-vwv6j            0/1    ContainerCreating  0         5s
istio-galley-867c55cf6-zw8l4             1/1    Running            2         4d11h
istio-ingressgateway-5979b99885-vrvxh    0/1    Terminating        0         4d11h
istio-ingressgateway-854d4fc448-vn65g    0/1    ContainerCreating  0         5s
grafana-57586c685b-2k8w9                 1/1    Running            0         4d10h
istio-telemetry-5d968fdf5-5sbtp          0/2    Pending            0         5s
istio-telemetry-dd88858cb-x8wnc          0/2    Pending            0         4d11h
istio-policy-6958588bd9-8nbqg            0/2    Pending            0         4s
istio-policy-6bc4b8c6c8-44lfh            0/2    Pending            0         4d11h
istio-pilot-7d954dccdd-xw89j             0/2    Pending            0         4d11h
istio-pilot-85dfd87d48-v764h             0/2    Pending            0         4s
prometheus-66c9f5694-rvvxp               0/1    Init:0/1           0         4s
prometheus-68c5594ddc-5gvtt              1/1    Terminating        0         4d11h
istio-citadel-567fbcc54d-nhdnm           0/1    ContainerCreating  0         3s
istio-citadel-5fb6bd6d7f-2pgs8           1/1    Running            0         4d11h
servicegraph-6cb964f46d-zdlbv            0/1    ContainerCreating  0         3s
servicegraph-7bc568bfd9-6n5sh            1/1    Terminating        0         4d10h
istio-sidecar-injector-5769c94686-vcw72  1/1    Running            0         4d11h
istio-sidecar-injector-58d9f48c69-kpvmn  0/1    ContainerCreating  0         3s

==> v1/ClusterRole
NAME                                     AGE
istio-galley-istio-system                4d11h
istio-ingressgateway-istio-system        4d11h
istio-grafana-post-install-istio-system  4d10h
istio-mixer-istio-system                 4d11h
istio-pilot-istio-system                 4d11h
prometheus-istio-system                  4d11h
istio-citadel-istio-system               4d11h
istio-sidecar-injector-istio-system      4d11h
istio-reader                             4d11h

==> v1/RoleBinding
NAME                      AGE
istio-ingressgateway-sds  4d11h

==> v1alpha1/MeshPolicy
NAME     AGE
default  4d11h

==> v1beta1/MutatingWebhookConfiguration
NAME                    AGE
istio-sidecar-injector  4d11h


NOTES:
Thank you for installing istio.

Your release is named istio.

To get started running application with Istio, execute the following steps:
1. Label namespace that application object will be deployed to by the following command (take default namespace as an example)

$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection

2. Deploy your applications

$ kubectl apply -f <your-application>.yaml

For more information on running Istio, visit:
https://istio.io/

@markserrano915

You will see many of your containers in the Pending state. If you run kubectl describe on them, I would speculate you will see errors relating to insufficient memory or CPU resources. This can be fixed by increasing the number of worker nodes in the system. Can you report the describe output for some of these pods?

When these pods do not enter the running state, a nonsensical error about port 0 and port xxxx is printed in the application sidecars. In addition, it appears there is a readiness check problem. I don't totally understand why the readiness check fails, but without the microservices running, the system will show these two results.

In the 1.1 series, the defaults for CPU and memory were increased. I am not sure what the new minimum requirements are, but I think @mandarjog does.

Cheers
-steve
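
To surface the scheduler's reasoning, describe one of the Pending pods; the Events section at the bottom carries the message (the pod name below is just an example from this thread):

kubectl get pods -n istio-system
kubectl describe pod -n istio-system istio-pilot-7d954dccdd-xw89j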

@sdake You're correct. I ran out of resources, though the command I used was kubectl get pod instead of kubectl describe:

kubectl get pod -n istio-system istio-policy-6958588bd9-8nbqg --output=yaml
 kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-57586c685b-2k8w9                  1/1     Running     0          4d14h
istio-citadel-567fbcc54d-nhdnm            1/1     Running     0          3h31m
istio-galley-696b6c5c9f-vwv6j             1/1     Running     0          3h31m
istio-ingressgateway-854d4fc448-vn65g     0/1     Running     0          3h31m
istio-init-crd-10-v42fz                   0/1     Completed   0          4d14h
istio-init-crd-11-6xf2h                   0/1     Completed   0          4d14h
istio-init-crd-certmanager-10-4xbx4       0/1     Completed   0          4d14h
istio-pilot-7d954dccdd-xw89j              0/2     Pending     0          4d14h
istio-pilot-85dfd87d48-v764h              0/2     Pending     0          3h31m
istio-policy-6958588bd9-8nbqg             0/2     Pending     0          3h31m
istio-policy-6bc4b8c6c8-44lfh             0/2     Pending     0          4d14h
istio-sidecar-injector-58d9f48c69-kpvmn   1/1     Running     0          3h31m
istio-telemetry-5d968fdf5-5sbtp           0/2     Pending     0          3h31m
istio-telemetry-dd88858cb-x8wnc           0/2     Pending     0          4d14h
prometheus-66c9f5694-rvvxp                1/1     Running     0          3h31m
servicegraph-6cb964f46d-zdlbv             1/1     Running     0          3h31m
s6:~ > kubectl get pod -n istio-system istio-policy-6958588bd9-8nbqg --output=yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-03-01T01:03:16Z"
    message: '0/3 nodes are available: 3 Insufficient cpu.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
s6:~ > kubectl get pod -n istio-system istio-ingressgateway-854d4fc448-vn65g --output=yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-03-01T01:03:15Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-03-01T01:03:15Z"
    message: 'containers with unready status: [istio-proxy]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-03-01T01:03:15Z"
    message: 'containers with unready status: [istio-proxy]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-03-01T01:03:15Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://f918f05d0c4fbcd77d2b007634e8ada1fb72e38866a28de34982c70c6ba1f13d
    image: istio/proxyv2:1.1.0-rc.1
    imageID: docker-pullable://istio/proxyv2@sha256:0d9b3e434740608f42dfa0ddd2b90855424f0dc1fb3acdd4881b4711eadebc2e
    lastState: {}
    name: istio-proxy
    ready: false
    restartCount: 0
    state:
      running:
        startedAt: "2019-03-01T01:03:26Z"
  hostIP: 10.138.102.72
  phase: Running
  podIP: 10.244.2.202
  qosClass: Burstable
  startTime: "2019-03-01T01:03:15Z"

kubectl get pod -n istio-system istio-telemetry-5d968fdf5-5sbtp --output=yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-03-01T01:03:16Z"
    message: '0/3 nodes are available: 3 Insufficient cpu.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable

kubectl get pod -n istio-system istio-pilot-85dfd87d48-v764h --output=yaml

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-03-01T01:03:16Z"
    message: '0/3 nodes are available: 3 Insufficient memory.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable

The requested resources for this one are:

Containers:
  discovery:
    Image:       docker.io/istio/pilot:1.1.0-rc.1
    Ports:       8080/TCP, 15010/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      discovery
      --monitoringAddr=:15014
      --domain
      cluster.local
      --secureGrpcAddr

      --keepaliveMaxServerConnectionAge
      30m
    Requests:
      cpu:      500m
      memory:   2Gi
    Readiness:  http-get http://:8080/ready delay=5s timeout=5s period=30s #success=1 #failure=3
    Environment:
      POD_NAME:                   istio-pilot-85dfd87d48-v764h (v1:metadata.name)
      POD_NAMESPACE:              istio-system (v1:metadata.namespace)
      GODEBUG:                    gctrace=2
      PILOT_PUSH_THROTTLE_COUNT:  100
      PILOT_TRACE_SAMPLING:       1
    Mounts:
      /etc/certs from istio-certs (ro)
      /etc/istio/config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from istio-pilot-service-account-token-ht5vd (ro)
  istio-proxy:
    Image:       docker.io/istio/proxyv2:1.1.0-rc.1
    Ports:       15003/TCP, 15005/TCP, 15007/TCP, 15011/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      proxy
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --serviceCluster
      istio-pilot
      --templateFile
      /etc/istio/proxy/envoy_pilot.yaml.tmpl
      --controlPlaneAuthPolicy
      NONE
    Limits:
      cpu:     2
      memory:  128Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:
      POD_NAME:       istio-pilot-85dfd87d48-v764h (v1:metadata.name)
      POD_NAMESPACE:  istio-system (v1:metadata.namespace)
      INSTANCE_IP:     (v1:status.podIP)
    Mounts:
      /etc/certs from istio-certs (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from istio-pilot-service-account-token-ht5vd (ro)

@sdake So the discovery container alone needs 2Gi of memory and the proxy needs 128Mi. I added 3 more nodes, but I got the same issue:

kubectl describe pods -n istio-system istio-pilot-85dfd87d48-v764h

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  16m (x5 over 16m)   default-scheduler  0/4 nodes are available: 4 Insufficient memory.
  Warning  FailedScheduling  15m (x6 over 16m)   default-scheduler  0/5 nodes are available: 5 Insufficient memory.
  Warning  FailedScheduling  15m (x12 over 15m)  default-scheduler  0/6 nodes are available: 6 Insufficient memory.

Each node I am using (from DigitalOcean) has the following spec:

kubectl describe nodes

Capacity:
 attachable-volumes-csi-dobs.csi.digitalocean.com:  7
 cpu:                                               1
 ephemeral-storage:                                 51572172Ki
 hugepages-1Gi:                                     0
 hugepages-2Mi:                                     0
 memory:                                            2043436Ki
 pods:                                              110
Allocatable:
 attachable-volumes-csi-dobs.csi.digitalocean.com:  7
 cpu:                                               1
 ephemeral-storage:                                 47528913637
 hugepages-1Gi:                                     0
 hugepages-2Mi:                                     0
 memory:                                            1941036Ki
 pods:                                              110
System Info:
 Machine ID:                 4ea2d61487de4f069df4b08a83dbbcdc
 System UUID:                4ea2d614-87de-4f06-9df4-b08a83dbbcdc
 Boot ID:                    05359898-ca58-4142-9c64-8b7f15962da0
 Kernel Version:             4.19.0-0.bpo.1-amd64
 OS Image:                   Debian GNU/Linux 9 (stretch)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.2
 Kubelet Version:            v1.13.2
 Kube-Proxy Version:         v1.13.2

It seems none of the nodes will satisfy the 2Gi + 128Mi requirement unless the nodes get more memory.
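
If larger nodes are not an option, it should also be possible to lower the control-plane requests at install time. A rough sketch, assuming the 1.1 Helm chart exposes these under pilot.resources (check your chart's values.yaml for the exact keys before relying on them):

helm upgrade istio install/kubernetes/helm/istio --namespace istio-system \
  --set pilot.resources.requests.cpu=250m \
  --set pilot.resources.requests.memory=512Mi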

I dropped the extra three 2GB nodes in favor of three 4GB nodes and things started working again.

s6:istio-1.1.0-rc.1 > kubectl describe pods -n istio-system istio-pilot-85dfd87d48-v764h
Events:
  Type     Reason            Age                     From                         Message
  ----     ------            ----                    ----                         -------
  Warning  FailedScheduling  32m (x5 over 32m)       default-scheduler            0/4 nodes are available: 4 Insufficient memory.
  Warning  FailedScheduling  31m (x6 over 32m)       default-scheduler            0/5 nodes are available: 5 Insufficient memory.
  Warning  FailedScheduling  30m (x12 over 31m)      default-scheduler            0/6 nodes are available: 6 Insufficient memory.
  Warning  FailedScheduling  5m27s (x2 over 6m8s)    default-scheduler            0/6 nodes are available: 3 Insufficient memory, 3 node(s) were unschedulable.
  Warning  FailedScheduling  5m20s (x14 over 4h13m)  default-scheduler            0/3 nodes are available: 3 Insufficient memory.
  Warning  FailedScheduling  3m27s (x3 over 3m27s)   default-scheduler            0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient memory.
  Normal   Pulling           2m41s                   kubelet, sweet-mclaren-u4x4  pulling image "docker.io/istio/pilot:1.1.0-rc.1"
  Normal   Pulled            2m30s                   kubelet, sweet-mclaren-u4x4  Successfully pulled image "docker.io/istio/pilot:1.1.0-rc.1"
  Normal   Created           2m29s                   kubelet, sweet-mclaren-u4x4  Created container
  Normal   Started           2m29s                   kubelet, sweet-mclaren-u4x4  Started container
  Normal   Pulling           2m29s                   kubelet, sweet-mclaren-u4x4  pulling image "docker.io/istio/proxyv2:1.1.0-rc.1"
  Normal   Pulled            2m18s                   kubelet, sweet-mclaren-u4x4  Successfully pulled image "docker.io/istio/proxyv2:1.1.0-rc.1"
  Normal   Created           2m18s                   kubelet, sweet-mclaren-u4x4  Created container
  Normal   Started           2m17s                   kubelet, sweet-mclaren-u4x4  Started container
s6:istio-1.1.0-rc.1 > kubectl describe pods -n istio-system istio-telemetry-5d968fdf5-5sbtp
Events:
  Type     Reason            Age                   From                         Message
  ----     ------            ----                  ----                         -------
  Warning  FailedScheduling  39m (x6 over 39m)     default-scheduler            0/4 nodes are available: 4 Insufficient cpu.
  Warning  FailedScheduling  38m (x6 over 39m)     default-scheduler            0/5 nodes are available: 5 Insufficient cpu.
  Warning  FailedScheduling  37m (x12 over 38m)    default-scheduler            0/6 nodes are available: 6 Insufficient cpu.
  Warning  FailedScheduling  12m (x2 over 13m)     default-scheduler            0/6 nodes are available: 3 Insufficient cpu, 3 node(s) were unschedulable.
  Warning  FailedScheduling  12m (x15 over 4h20m)  default-scheduler            0/3 nodes are available: 3 Insufficient cpu.
  Warning  FailedScheduling  10m (x2 over 10m)     default-scheduler            0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient cpu.
  Warning  FailedScheduling  9m (x2 over 9m50s)    default-scheduler            0/5 nodes are available: 1 Insufficient memory, 5 Insufficient cpu.
  Warning  FailedScheduling  4m2s                  default-scheduler            0/6 nodes are available: 1 Insufficient memory, 1 node(s) had taints that the pod didn't tolerate, 5 Insufficient cpu.
  Normal   Pulling           2m53s                 kubelet, sweet-mclaren-u4xi  pulling image "docker.io/istio/mixer:1.1.0-rc.1"
  Normal   Pulled            2m45s                 kubelet, sweet-mclaren-u4xi  Successfully pulled image "docker.io/istio/mixer:1.1.0-rc.1"
  Normal   Created           2m44s                 kubelet, sweet-mclaren-u4xi  Created container
  Normal   Started           2m44s                 kubelet, sweet-mclaren-u4xi  Started container
  Normal   Pulling           2m44s                 kubelet, sweet-mclaren-u4xi  pulling image "docker.io/istio/proxyv2:1.1.0-rc.1"
  Normal   Pulled            2m28s                 kubelet, sweet-mclaren-u4xi  Successfully pulled image "docker.io/istio/proxyv2:1.1.0-rc.1"
  Normal   Created           2m28s                 kubelet, sweet-mclaren-u4xi  Created container
  Normal   Started           2m27s                 kubelet, sweet-mclaren-u4xi  Started container

Reopening and tagging as UX on the advice of @frankbu because I had the same problem with the Fortio sample. It should be sufficient to solve this with documentation. See https://github.com/istio/istio/issues/12333 and https://github.com/istio/istio/pull/12353#pullrequestreview-212336586 for the discussion with Frank.

I'm getting envoy missing listener for inbound application port: 80 after switching to Istio 1.1 on 1.12.5-gke.10. My webserver service looks like:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
      targetPort: 80
  selector:
    app: nginx

It's that targetPort: 80 line that seems to break it for me. If I comment that line out, istio-proxy will report ready; then I can re-apply it and it works. I can't seem to reproduce it on Bookinfo or anything though... :(

@TimBozeman this is because we're using the container ports to populate the default application port list. You can override the list of application ports that the sidecar will wait for by setting the readiness.status.sidecar.istio.io/applicationPorts annotation in your deployment.

This has become an issue for an increasing number of folks. We could consider defaulting this list to empty, but then users will see occasional 503s as the applications are starting (since readiness will pass earlier than it should).

@duderino @costinm @rshriram thoughts?

@nmittler I am struggling to understand your comment. The user reports that it works without targetPort: 80. Without the explicit targetPort wouldn't the real ports be expected to be 80 and 443? And with the explicit targetPort the real ports would be only 80?

@nmittler Thank you!
I don't have 443 defined in the deployment. I tried adding it as - containerPort: 443 and also in the annotation readiness.status.sidecar.istio.io/applicationPorts: "443,80", but I didn't get it going. If I set the annotation to empty the istio-proxy sidecar reports ready, but doesn't actually work until I make a change to the service like toggling the targetPort.

Sorry I don't have a reproducible bug. Just thought I was on to something.

@TimBozeman can you list the pod spec?

Sure, I'm on 1.1 rc.4 and 1.12.5-gke.10.

apiVersion: v1
kind: Namespace
metadata:
  name: example
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example
  namespace: example
  labels:
    app: example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
        version: v1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: example
  labels:
    app: example
spec:
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
      targetPort: 80
  selector:
    app: example

and then I get

2019-03-13T21:25:04.264825Z info    Envoy proxy is NOT ready: 2 errors occurred:
* failed checking application ports. listeners="0.0.0.0:15090"
* envoy missing listener for inbound application port: 80

If I comment out the targetPort line istio-proxy reports ready and then I can add it back.

@TimBozeman interesting ... the log dump you provided doesn't contain inbound listeners for either of the configured ports. How long did you wait until you captured that log? I wonder if you wait a bit, Envoy might receive more inbound listeners from Pilot.

@costinm @rshriram this seems odd ... any thoughts on why the inbound listeners generated by Pilot seem to be different when targetPort is specified?

> @TimBozeman interesting ... the log dump you provided doesn't contain inbound listeners for either of the configured ports. How long did you wait until you captured that log? I wonder if you wait a bit, Envoy might receive more inbound listeners from Pilot.

I let it run for a long time. When I look at the listeners it's mostly empty.

istioctl -n example pc listeners example-77b778948c-sv5h7
ADDRESS     PORT      TYPE
0.0.0.0     15090     HTTP

@nmittler I started Tim's example; the istio-proxy log reports:

[2019-03-13 22:07:34.116][20][warning][config]
[bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70]
gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: duplicate listener 172.30.150.220_80 found

Thanks, @esnible ... you beat me to it. It seems that Envoy is dropping both listeners when the conflict occurs. I suspect we should be guarding against this in Pilot.

@rshriram @costinm thoughts?

@nmittler The output of istioctl proxy-status is

NAME                                                   CDS        LDS                            EDS               RDS          PILOT                            VERSION
example-5c47cfdc9c-7mj8s.example                       SYNCED     STALE (Never Acknowledged)     SYNCED (50%)      NOT SENT     istio-pilot-67d7bdcbc8-7x9x9     1.1.0

Pilot should avoid sending duplicates. I found nothing in the logs indicating that it realized it was sending anything bad; the first Warning from Pilot is

ADS:LDS: ACK ERROR 172.30.150.238:49254 istio-ingressgateway-6d9b5c9486-ns7kk.istio-system ... type_url:"type.googleapis.com/envoy.api.v2.Listener" response_nonce:"a69f121f-df3c-4c1b-befe-888d0fa90228" error_detail:<code:13 message:"Error adding/updating listener 0.0.0.0_80: cannot bind '0.0.0.0:80': Address already in use" >

I've tried reproducing this in a unit test with an in-memory pilot (to aid debuggability): https://github.com/nmittler/istio/tree/test-targetport. This commit is based on 1.1.0-rc.4.

So far unsuccessful ... it seems to only create a single listener on port 80.

@nmittler I like the idea of an in-memory Pilot. Could it be generalized beyond a new test case? Something that would help me troubleshoot would be a CLI incantation that would read a YAML file into an in-memory Pilot and dump an Envoy configuration. Something like gen-envoy-config -f foo.yaml. We could pass in this user's Service and his Deployment (or injected output from kubectl get <pod> -o yaml) and see what Pilot would generate.

@esnible yeah that's a good idea and should be easy enough to cook up. Want to raise an issue for post 1.1?

Congrats on the release!

It looks like the target port issue is still happening on 1.1.0 on 1.12.5-gke.5.

I have the same issue. My deployment has a service bound to it. The funny thing is:

Liveness probe failed: Get http://10.100.8.167:50501/health?lp: dial tcp 10.100.8.167:50501: connect: connection refused

But my deployment's service has a completely different IP...

Also, I checked all the IPs from kubectl get svc --all-namespaces, and NONE of them has this IP...

Everything was working smoothly before upgrading from 1.0.6 to 1.1.0.

I've also tried the readiness.status.sidecar.istio.io/applicationPorts trick and double-checked that my deployment and service have the same labels on them. Any ideas?

You should be able to disable readiness by setting the annotation status.sidecar.istio.io/port: "0" in your deployment

@nmittler That got me a step further! But, is it safe to do that for a production release? Doesn't it seem more like a hacky-ish solution?

When I did that, the istio-proxy sidecar reported ready, but didn't actually work.

@TimBozeman same for me! Trying to get bookinfo app to work now

Right... in that case, you're disabling the entire readiness probe altogether. There are a couple of annotations that are at play here:

status.sidecar.istio.io/port: defines the port for the readiness/liveness probes for the sidecar. If this is set to 0, the readiness probe is disabled and the sidecar will always be "ready".

Assuming that is non-zero, readiness is enabled for the sidecar. The readiness check is composed of 2 parts:

  1. Check whether or not configuration (LDS, CDS) was received from Pilot.
  2. Determine whether or not configuration for all of the application's inbound ports have been received.

For 2, the default is to check all inbound ports on the application's container, since the webhook injector doesn't have service information available when generating the sidecar config. However, there are cases where not all container ports will receive inbound configuration from Pilot. In this situation, you can override this default behavior with another annotation:

readiness.status.sidecar.istio.io/applicationPorts: If set to "", will disable part 2 of the readiness check. Otherwise, it's a comma-separated list of ports for which the readiness probe will await inbound configuration.

I suspect that this may be the annotation that you want to disable or modify.
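
To make that concrete, here is a minimal sketch of where the two annotations go; the values are illustrative, not recommendations:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
      annotations:
        # Port for the sidecar's own readiness/liveness probe; "0" disables it entirely.
        status.sidecar.istio.io/port: "15020"
        # Inbound ports the probe waits on; set to "" to skip part 2 of the check above.
        readiness.status.sidecar.istio.io/applicationPorts: "80"
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80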

@TimBozeman I also had this problem with Istio 1.1.0. targetPort doesn't seem to work in a manifest. I'm not really sure it's needed - my application seemed to function correctly without it.

Hmm, yeah that's strange. It does seem to be working without it.
¯\_(ツ)_/¯
Nice!

/assign

I found that Envoy received duplicate listeners:

[2019-03-30 01:40:08.192][21][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: duplicate listener 172.17.0.11_80 found

It is caused by two service ports sharing the same targetPort (80).
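
To spell it out with the nginx example from earlier in this thread: both service ports resolve to the same pod port, so Pilot emits two listeners for <podIP>:80 and Envoy rejects the duplicate. Until the fix lands, dropping the aliased port, or pointing it at a distinct targetPort (assuming the workload actually serves a second container port), avoids the collision:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80          # resolves to pod port 80
    - name: https
      port: 443
      targetPort: 8443  # distinct pod port, so no duplicate listener for :80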

@hzxuzhonghu awesome, that fix works. Using release-1.1-latest-daily removed errors like:

* envoy missing listener for inbound application port: 8080

Just waiting for 1.1.2 to be cut now :)

@hzxuzhonghu, Unfortunately I can easily reproduce the problem on a fresh GKE (v1.11.8-gke.5) cluster even with the release-1.1-latest-daily images with the following resources:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echo
  labels:
    k8s-app: echo
  namespace: default
spec:
  replicas: 20
  selector:
    matchLabels:
      k8s-app: echo
  template:
    metadata:
      labels:
        k8s-app: echo
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: echo-service
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  labels:
    k8s-app: echo
  namespace: default
spec:
  ports:
  - name: http
    port: 8080
  selector:
    k8s-app: echo

NAME                   READY   STATUS    RESTARTS   AGE
echo-95894744f-72kdn   2/2     Running   0          2m
echo-95894744f-7dwst   2/2     Running   0          2m
echo-95894744f-c84fv   1/2     Running   0          2m
echo-95894744f-cdsc5   1/2     Running   0          2m
echo-95894744f-fr9bn   1/2     Running   0          2m
echo-95894744f-ftbfw   2/2     Running   0          2m
echo-95894744f-hfbgm   2/2     Running   0          2m
echo-95894744f-hvpk7   2/2     Running   0          2m
echo-95894744f-ks7vf   2/2     Running   0          2m
echo-95894744f-kvbd5   2/2     Running   0          2m
echo-95894744f-mdl86   2/2     Running   0          2m
echo-95894744f-nckdf   1/2     Running   0          2m
echo-95894744f-nfm8f   2/2     Running   0          2m
echo-95894744f-nz4mn   1/2     Running   0          2m
echo-95894744f-p74mq   1/2     Running   0          2m
echo-95894744f-stz7s   2/2     Running   0          2m
echo-95894744f-vrfjl   1/2     Running   0          2m
echo-95894744f-whn64   2/2     Running   0          2m
echo-95894744f-zcsbl   2/2     Running   0          2m
echo-95894744f-zjcp7   1/2     Running   0          2m

$ k logs -f echo-95894744f-vrfjl -c istio-proxy

* failed checking application ports. listeners="0.0.0.0:15090","10.0.5.201:443","10.0.2.108:15029","10.0.10.235:42422","10.0.9.253:443","10.0.11.84:8443","10.0.5.153:15011","10.0.2.108:443","10.0.2.108:31400","10.0.2.108:15443","10.0.2.210:443","10.0.0.10:53","10.0.3.24:443","10.0.0.1:443","10.0.0.13:443","10.0.2.108:15030","10.0.4.88:443","10.0.2.108:15031","10.0.2.108:15032","10.0.3.222:80","10.0.2.108:15020","10.0.3.24:15443","0.0.0.0:15010","0.0.0.0:15004","0.0.0.0:80","0.0.0.0:9901","0.0.0.0:9091","0.0.0.0:8060","0.0.0.0:8080","0.0.0.0:15014","0.0.0.0:15001"
* envoy missing listener for inbound application port: 8080

@waynz0r This looks like a separate bug. The logic in the probe does not look for 0.0.0.0, but it should. It's not immediately clear if this is an existing bug or due to changes in Pilot, but we should fix it. I've opened #13067

I am seeing this issue as well with 1.1. Is this actually fixed in the next point release, or is it still unresolved?

@cleverguy25 A couple days ago I downloaded 1.1.1 and then 1.1.2 and on clean installs of both this error occurred. I made sure that the service and deployment ports matched but still the error persisted. The only workaround that worked for me was to set the annotation readiness.status.sidecar.istio.io/applicationPorts to "". I've reverted back to 1.0.7 because of this.

@billyrob Can you provide your deployment and service yaml?

@hzxuzhonghu please consider #13106 as root cause.

@hzxuzhonghu Thanks for responding! I actually just realized that I was getting this error because I was also messing with the oneNamespace option. Setting that back to false allows the sidecars to come up without issue in 1.1.2. So, my bad, sorry!

I am still seeing this issue with 1.1.2, even if I set the annotation readiness.status.sidecar.istio.io/applicationPorts to the right set of ports. I see this on the command line of the istio-proxy container:

--applicationPorts 8250,19000,8260,80,9700,7483,27142,18000,18003

Still seeing this in logs:
envoy missing listener for inbound application port: 8250
envoy missing listener for inbound application port: 19000
envoy missing listener for inbound application port: 8260
envoy missing listener for inbound application port: 80
envoy missing listener for inbound application port: 9700
envoy missing listener for inbound application port: 7483
envoy missing listener for inbound application port: 27142
envoy missing listener for inbound application port: 18000
envoy missing listener for inbound application port: 18003

1.1.3 also fixed it for me

Still seeing this issue with 1.1.3 on GKE v1.12.7-gke.7

still seeing the issue in 1.1.3

Same here, and application ports are still not working for me even as a workaround.

Ingress gateway and other envoy proxies don't start

2019-04-19T00:19:41.793893Z info    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2019-04-19T00:19:43.793777Z info    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
<etc., repeating every two seconds>

Containers do start up if the readiness probes are disabled or removed, but the envoy instances in the ingressgateway and sidecars are not able to connect to istio-pilot.

[2019-04-19 04:18:40.788][19][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:49] Unable to establish new stream
[2019-04-19 04:19:04.967][19][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, no healthy upstream

The Telemetry and Policy pods, however, are running without issue, and they do have the proxy attached.
I created a new setup with 1.1.3 and encountered this; I tried 1.1.2 and the errors are still there.

This is happening with a default installation of 1.1.2; both ingressgateway and pilot are not running. What is the workaround, please?

istio-citadel-796597fcdc-mmgpx 1/1 Running 2 5d
istio-galley-8fdc5b775-t8bmn 1/1 Running 15 5d
istio-ingressgateway-788c96cd5f-456vj 0/1 Running 0 10m
istio-pilot-6df489c64d-ptwwk 0/2 Pending 0 10m
istio-policy-8d9656979-hjwrq 2/2 Running 47 5d
istio-sidecar-injector-9f5d5bfd-l7xn7 1/1 Running 22 5d
istio-telemetry-7cf98bf6f7-wwkbq 2/2 Running 39 11d
prometheus-67cdb66cbb-jnp6s 1/1 Running 4 5d
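
Since pilot is Pending rather than crashing, the proxies have nothing to connect to. A quick way to see why it is unschedulable (a command sketch, using the pod name from the listing above):

```
$ kubectl -n istio-system describe pod istio-pilot-6df489c64d-ptwwk
$ kubectl -n istio-system get events --sort-by=.lastTimestamp
```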

Could anyone test with #13228?

I recreated the setup from scratch again and don't see the errors. All containers started successfully. I don't know why I saw errors previously. Currently using 1.1.3.

Applying a VirtualService resolved the issue for me.
If no VirtualService is bound to the proxy, I get the error.

I do have an internal-only service without a VirtualService. It starts up fine.

I am still seeing this issue with 1.1.3, even if I set the annotation readiness.status.sidecar.istio.io/applicationPorts to the right set of ports. I see this on the command line of the istio-proxy container:

--applicationPorts 8250,19000,8260,80,9700,7483,27142,18000,18003

Still seeing this in logs:
envoy missing listener for inbound application port: 8250
envoy missing listener for inbound application port: 19000
envoy missing listener for inbound application port: 8260
envoy missing listener for inbound application port: 80
envoy missing listener for inbound application port: 9700
envoy missing listener for inbound application port: 7483
envoy missing listener for inbound application port: 27142
envoy missing listener for inbound application port: 18000
envoy missing listener for inbound application port: 18003

Notice there is no VirtualService, this is internal only

Please test with the image zhhxu2011/pilot:debug, built from #13228.

When using hostPort I encountered the same issue (the Istio version was a recent daily build); after removing hostPort from the port definition, it works without problems.
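
For reference, a sketch of the change described (the container port here is a hypothetical example):

```
ports:
  - containerPort: 8080
    # hostPort: 8080  # removing this line let the sidecar pass its readiness check
```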

I noticed I was getting this error with deployments that did not have services; once I exposed them, istio started successfully.

It was a big surprise to me that I even couldn't run the official helloworld app in Istio v1.1.3.

To reproduce it:

$ kubectl apply -f <(istioctl kube-inject -f samples/helloworld/helloworld.yaml) -n demo

$ kubectl get po -n demo
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-679647dd9f-r6g42   1/2     Running   0          9m
helloworld-v2-59bbb74d76-5z26b   1/2     Running   0          9m

$ kubectl logs helloworld-v1-679647dd9f-r6g42 -c istio-proxy -n demo
...
2019-04-28T09:23:32.665994Z info    Envoy proxy is NOT ready: 2 errors occurred:

* failed checking application ports. listeners="0.0.0.0:15090"
* envoy missing listener for inbound application port: 5000
...

$ istioctl version
version.BuildInfo{Version:"1.1.3", GitRevision:"d19179769183541c5db473ae8d062ca899abb3be", User:"root", Host:"fbd493e1-5d72-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.2-56-gd191797"}

Is this a related issue, or did I do anything wrong?

We hit the same issue using Istio 1.1.3; the problem happens more than 50% of the time, and we cannot find any clue as to how it happens.
We use the workaround of putting the annotation readiness.status.sidecar.istio.io/applicationPorts: "" on the application deployment spec to skip the check for now, but this has the known issue of 503 Service Unavailable responses if you access the application port right after deployment.

This workaround did not work for me at all.



Would someone who is suffering from this issue with Istio 1.1.3 please try @hzxuzhonghu's PR? He has built an image for testing.

Simply render the manifest as follows:

sdake@beast-01:~/go/src/istio.io/istio/install/kubernetes/helm$ helm template --set pilot.image="docker.io/zhhxu2011:debug" istio > $HOME/rendered.yaml

and then kubectl apply -f $HOME/rendered.yaml

Cheers
-steve

It was a big surprise to me that I even couldn't run the official helloworld app in Istio v1.1.3.

<*snip* quote of the reproduction steps from the comment above>

@brightzheng100 1.1.3 works for me. This issue seems intermittent and is affecting a large portion of deployments.

I'd encourage folks suffering from this problem to please try @hzxuzhonghu's debug image if you have a staging environment.

sdake@beast-01:~/istio-1.1.3$ kubectl get po -n demo
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-75474857c6-c4pwz   2/2     Running   0          46s
helloworld-v2-5db66fd485-cvvgt   2/2     Running   0          46s
sdake@beast-01:~/istio-1.1.3$ istioctl version
version.BuildInfo{Version:"1.1.3", GitRevision:"d19179769183541c5db473ae8d062ca899abb3be", User:"root", Host:"fbd493e1-5d72-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.2-56-gd191797"}

I get this issue pretty reliably with one of my helm charts that updates multiple kubernetes deployments in parallel. The issue never seems to occur when individual pods are coming up, only when a bunch are launched at the same time.

I'm now testing the image zhhxu2011/pilot:debug in my dev environment, and it looks good at first glance, but I will need to wait for several deploys to go through before I can be sure it fixed anything. I don't have the bandwidth to simulate deploys right now.

I upgraded to v1.1.4, but it's still reproducible if we install Istio with -f values-istio-minimal.yaml.

For example:

$ helm install install/kubernetes/helm/istio \
    --name istio \
    --namespace istio-system \
    -f install/kubernetes/helm/istio/values-istio-minimal.yaml

It works if we enabled more features, for example:

$ helm install install/kubernetes/helm/istio \
    --name istio \
    --namespace istio-system \
    -f install/kubernetes/helm/istio/values-istio-demo.yaml

So the issue might be caused by a missing required component, and install/kubernetes/helm/istio/values-istio-minimal.yaml should be updated to remove the confusion.

This occurs for us in 1.1.4.

I've just figured out that this issue occurs when there is no service pointing to one of the pod's containerPorts.

istio : 1.1.5
k8s : v1.12.7-gke.10

Right, I think that was mentioned earlier in the issue.



I list three reproducible scenarios below for the sidecar readiness probe failure.

  1. No service deployed for a deployment
  2. Service deployed, but the service doesn't expose a port for any containerPort defined in the Pod
  3. The labels defined in the Pod template and the service selector don't match, https://github.com/istio/istio/issues/11979

Please add if there are other scenarios. Thanks.
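
As a minimal illustration covering scenarios 1-3 at once (all names and ports here are hypothetical), a Service whose selector matches the Pod template labels and whose targetPort matches a declared containerPort:

```
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical
spec:
  selector:
    app: my-app           # must match the Pod template labels (scenario 3)
  ports:
    - name: http
      port: 80
      targetPort: 8080    # must match a containerPort in the Pod (scenarios 1 and 2)
```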

FYI, some troubleshooting tips:

  1. Use the following command to check the logs of the sidecar:
    kubectl logs Pod_Name -n Namespace -c istio-proxy
  2. Use the following command to list all the listeners:
    istioctl proxy-config listeners -n Namespace Pod_Name
  3. Check whether the containerPort is listened on by the local addresses of this Pod

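For example, to check a single inbound port on the helloworld pod from earlier in the thread (a sketch; the --port filter is assumed from istioctl's flags):

```
$ istioctl proxy-config listeners helloworld-v1-679647dd9f-r6g42 -n demo --port 5000
```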

How can I help to resolve this bug? This is blocking us from moving forward with Istio. We need the new Sidecar resource, but this bug is keeping us from moving to 1.1.x, as the applicationPorts annotation workaround is not working for us. Note that this defect is not present in 1.0.7, but that version is lacking some of the features we need to move forward.

Not prepared to dig into the code but can help with testing.

I encountered a similar problem recently, and I received errors similar to those in https://github.com/istio/istio/issues/12659. After tuning the load-shedding settings, everything started working fine; the default value in Helm is pretty low.

@riponbanik Your pilot pod is Pending; that is why it can't connect. You probably need to lower resource requests or increase the cluster size.
@cleverguy25 What happens when you set readiness.status.sidecar.istio.io/applicationPorts: "" that "doesn't work"? Is it gRPC config stream closed: 14, no healthy upstream? Or Envoy is NOT ready, waiting on pilot config? Or something else?

Pasted from earlier in the thread:

even if I set the annotation readiness.status.sidecar.istio.io/applicationPorts to the right set of ports, I see this on the command line of the istio-proxy container:

--applicationPorts 8250,19000,8260,80,9700,7483,27142,18000,18003

Still seeing this in logs:
envoy missing listener for inbound application port: 8250
envoy missing listener for inbound application port: 19000
envoy missing listener for inbound application port: 8260
envoy missing listener for inbound application port: 80
envoy missing listener for inbound application port: 9700
envoy missing listener for inbound application port: 7483
envoy missing listener for inbound application port: 27142
envoy missing listener for inbound application port: 18000
envoy missing listener for inbound application port: 18003

Notice there is no VirtualService, this is internal only

@cleverguy25 Set it to empty (readiness.status.sidecar.istio.io/applicationPorts: "") and the check will be skipped.

I have tried that as well, it does not work.

My question was what "does not work" means. Is it gRPC config stream closed: 14, no healthy upstream? Or Envoy is NOT ready, waiting on pilot config? Or something else?

It is always:
envoy missing listener for inbound application port: 18003

@cleverguy25 I don't see how that can happen if you set applicationPorts to empty. From the code here, we skip the check in that case: https://github.com/istio/istio/blob/release-1.1/pilot/cmd/pilot-agent/status/ready/probe.go#L46. Can you describe/get the logs of the proxy and look for the --applicationPorts flag? Perhaps the annotation is not being applied correctly; otherwise I don't know how it could still be doing the check.

Maybe he's adding readiness.status.sidecar.istio.io/applicationPorts: "" to the deployment annotations, instead of the pod template annotations, so it doesn't take effect?

I am doing it at the deployment. Where should it be done, exactly? This is very unclear and there is not much documentation. Earlier in this thread it was mentioned as an annotation on the deployment.

If I set readiness.status.sidecar.istio.io/applicationPorts to the right set of ports. I see this as a command line to the istio-proxy container:

--applicationPorts 8250,19000,8260,80,9700,7483,27142,18000,18003

So if setting it on the deployment works for specific ports, why wouldn't setting it to "" work there too?

Just tried it out to verify:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    something: something

Does not work.

You need the annotation in the template part. This is a Kubernetes thing, by the way, not Istio -- annotations at the deployment level don't get passed down to the pods.

@cleverguy25 it should be done in your deployment.yaml yes, but in the pod template's annotations:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: dep_name
  namespace: dep_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_app
  template:
    metadata:
      annotations:
        # HERE HERE HERE!
        readiness.status.sidecar.istio.io/applicationPorts: ""
      labels:
        app: my_app
    spec:
      containers:
        ...
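
To confirm the annotation took effect, one can check the injected proxy's args for the flag (a sketch; the pod name is a placeholder):

```
$ kubectl get pod dep_name-xxxxx -o yaml | grep -A1 applicationPorts
```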

Yes, putting the annotation in the right place worked, so this is not a blocking issue for us, but it is still occurring.

I'm actually still seeing this on 1.1.8, even with the service and containerPort matching.

* failed checking application ports. listeners="0.0.0.0:15090","10.107.41.130:31400","10.107.41.130:15030","10.107.41.130:15032","10.108.151.229:443","10.98.44.187:443","10.107.41.130:15031","10.105.91.232:15011","10.110.252.193:53","10.107.41.130:15020","10.107.41.130:443","10.107.41.130:15029","10.107.41.130:15443","10.100.33.113:42422","0.0.0.0:15014","0.0.0.0:80","0.0.0.0:15010","0.0.0.0:9091","0.0.0.0:8080","0.0.0.0:15004","0.0.0.0:9901","0.0.0.0:8060","0.0.0.0:15001"
* envoy missing listener for inbound application port: 3001
2019-06-18T18:45:04.742168Z     info    Envoy proxy is NOT ready: 2 errors occurred:

* failed checking application ports. listeners="0.0.0.0:15090","10.107.41.130:31400","10.107.41.130:15029","10.107.41.130:15443","10.98.44.187:443","10.110.252.193:53","10.107.41.130:15031","10.100.33.113:42422","10.107.41.130:15020","10.108.151.229:443","10.105.91.232:15011","10.107.41.130:15032","10.107.41.130:443","10.107.41.130:15030","0.0.0.0:8060","0.0.0.0:15010","0.0.0.0:15004","0.0.0.0:9901","0.0.0.0:8080","0.0.0.0:80","0.0.0.0:9091","0.0.0.0:15014","0.0.0.0:15001"
* envoy missing listener for inbound application port: 3001
2019-06-18T18:45:05.989025Z     info    Envoy proxy is NOT ready: 2 errors occurred:

* failed checking application ports. listeners="0.0.0.0:15090","10.107.41.130:31400","10.107.41.130:15030","10.107.41.130:15032","10.108.151.229:443","10.98.44.187:443","10.107.41.130:15031","10.105.91.232:15011","10.110.252.193:53","10.107.41.130:15020","10.107.41.130:443","10.107.41.130:15029","10.107.41.130:15443","10.100.33.113:42422","0.0.0.0:15014","0.0.0.0:80","0.0.0.0:15010","0.0.0.0:9091","0.0.0.0:8080","0.0.0.0:15004","0.0.0.0:9901","0.0.0.0:8060","0.0.0.0:15001"
* envoy missing listener for inbound application port: 3001

It gets resolved eventually, after around 5 minutes, but until then Spinnaker doesn't recognize the pods as stable, so this actually messes up deployments for us.
Something that previously took less than a minute now takes more than 5 minutes.

upgraded to 1.1.9 and still seeing this issue.

Sometimes it works, sometimes it doesn't; this readiness check from envoy seems to be pretty flaky.
It seems to pass the checks fine for the initial pod that comes up in the replicaset, but the following pods that come up will fail the applicationPorts check,

sometimes even if there's a service that matches the containerPort.

I've been using the readiness.status.sidecar.istio.io/applicationPorts: "" annotation to work around this issue until today, when I found that I was getting random 502's from the pods in a deployment. It seems some of the envoy proxies truly weren't ready, but without the check they were still in the pool.

Removing the annotation and killing the pods manually until I got a full set that were truly healthy resolved the issue.

I highly recommend that anyone considering using readiness.status.sidecar.istio.io/applicationPorts: "" does so carefully, as it's certainly not a real fix.

One thing I've observed (and I believe others have too) is that killing the istio-pilot pod seems to kick things into a working state - not immediately, but pilot starts handing out the correct configs within a few seconds.

Without kicking pilot I've seen pods that still aren't healthy after 20 minutes.
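
For anyone wanting to try the same kick, a sketch of the command (assuming a default install, where the pilot pods are labeled istio=pilot):

```
$ kubectl -n istio-system delete pod -l istio=pilot
```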

This is all on a relatively low usage cluster (maybe 200 pods, only about 6 nodes), and Pilot's cpu/memory usage is very low, even when it's (apparently) screwing up.

Can someone please address this properly? It's been present since 1.1.0, and I've tested every version up to and including 1.1.9 so far with no change.

@gdhagger @sdake and I have tried to reproduce this using a single-node cluster starting 300 pods simultaneously; all pods become ready after several minutes (4-6). I tried to see why it is slow for those several minutes, and the result shows it is the kubelet starting containers and reporting status too slowly.

@hzxuzhonghu what do you mean by the kubelet reporting status too slowly? Could you clarify that?

I don't even deploy 300 pods simultaneously; I was seeing this issue when deploying just 5 pods.

In my env, the kubelet starts containers one by one and then updates status, so it is very slow.

It is really weird for you to see this when deploying 5 pods.

To clarify, the kubelet schedules things simultaneously in my setup, with or without the Envoy proxy.

This issue is about pods not getting into a ready state because Envoy doesn't correct itself and persists in its weird state, and I'm not sure how the kubelet could be related to that.

To be honest, I don't know the internals of the kubelet, so I just don't know how it relates; but everything works absolutely fine when running without the Envoy proxy as the sidecar, so there's likely something going on with Envoy and Pilot here.

Are you familiar with the pilot code, especially the k8s service registry part? Basically, you can add some logging for when pilot gets informed of the occurrence of endpoints, including the unready pod.

To provide some additional data points, I am on 1.2.0 and also experience these issues. In my case, it occurs when there are one or more DestinationRules which refer to a Service that does not exist. For example, this can happen when you have a DestinationRule to split traffic between versions of the same service, but not all of those versions are actually deployed in your cluster.

Interestingly, if I turn off PodSecurityPolicies for the cluster, the issue disappears. It also does not occur if I assign the psp:privileged role to the service accounts for the pods in the application. In cases where I do observe the issue, it sometimes resolves itself after approximately 30 minutes (but this does not always happen).

Why does it have a relation with PSP, which is an admission controller that will allow/deny pod creation?

It is not clear to me why the two are related, but nonetheless, they appear to be. Possibly it is because the sidecar is injected by an admission controller...

But only when all admission checks pass can it come to the stage of starting up containers.

Right, I understand how that works. I am simply reporting observed behavior.


Just saw this behaviour when the pilot took a little while longer than expected to spin up.

Edit: Appreciate this is expected.

I'm seeing similar issues with a new GKE+Istio cluster and a single deployment. Adding readiness.status.sidecar.istio.io/applicationPorts: "" seems to resolve the issue of the proxy not being ready, but it starts emitting a number of log errors in the proxy, as follows:

[2019-07-05 18:48:39.474][14][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13, 
[2019-07-05 18:48:39.672][14][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.

Prior to adding this annotation, I was seeing log entries similar to this comment, and the proxy was in a continuous state of "not ready":

https://github.com/istio/istio/issues/9504#issuecomment-464957812

The above deployment lacked an associated service. Adding a headless service seems to negate the need for readiness.status.sidecar.istio.io/applicationPorts: "" and puts the proxy into a valid ready state.
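
For reference, a sketch of such a headless service (the name, selector, and port are hypothetical):

```
apiVersion: v1
kind: Service
metadata:
  name: my-app        # hypothetical
spec:
  clusterIP: None     # headless
  selector:
    app: my-app
  ports:
    - name: http
      port: 8080
```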

There does seem to be a new set of warnings (or warnings I didn't notice previously) with the istio-proxy sidecars:

[2019-07-05 19:42:47.665][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.

We see the same issue... For us it is somehow related to enabling security (running the citadel pod, even without using any security features). Here is how it works for me:

  1. install istio with security
  2. install service which uses istio sidecar proxy and wait for it to be ready
  3. everything is working as expected
  4. remove security from istio (e.g. by setting istio.enabled: false)
  5. try to delete some pods - they will come back to life normally - everything works as expected (still using istio proxy)
  6. reinstall service using helm (delete --purge -> install)
  7. service won't work - envoy proxy will face well know, described here issue
  8. restore istio security - desired service should become ready (if not - recreate pods)
  9. everything is working as expected...

It's super unintuitive and broken, because:

  1. we aren't using istio security at all
  2. it's required only for the first start of the service's pods! Then istio security can be removed and everything will work as expected (e.g. service upgrades using helm).
    It worked for us for weeks, until we had to add a new service which uses the istio proxy...

I hope it will help somebody

I just ran into the same issue. Using the readiness.status.sidecar.istio.io/applicationPorts annotation didn't work for me.

The resolution I ended up with was to define a readinessProbe on the container.

readinessProbe:
  httpGet:
    path: /
    port: 80
    scheme: HTTP

Ran into the same problem trying to deploy the OpenCensus agent as a DaemonSet, with a setup similar to this example: https://github.com/census-instrumentation/opencensus-service/blob/master/example/k8s.yaml

Doing the workaround to set an applicationPorts annotation directly works to suppress the error, but client services still cannot connect to the agent on the specified port. (Deploying it instead as a Service and Deployment allows them to connect, so it's not the client.)

@majelbstoat I can confirm that using both a service and a deployment solves the issue for me.

Yeah, works for me too, but I would prefer to deploy the agent as a DaemonSet.

@majelbstoat I am able to configure the agent and the collector properly. Pods for the agent and the collector show 2/2.
However, I am unable to post stats and traces from an application to the collector backend. Any suggestions?

Sorry if I'm repeating something already discussed, but I noticed this same error occurs when I set a default Policy on a namespace enabling mtls.

Background
I have mtls enabled cluster-wide; however, my services are unreachable even with a Traffic Policy defined in the DestinationRules. If I deploy a default or service-specific Policy enabling mtls at the namespace level, I can start to reach the service, but if I delete this pod and attempt to start another pod, istio-proxy fails to start until I delete the Policy. I then need to apply it again for the service to be reachable.

Namespace Policy

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: sw-system
spec:
  peers:
  - mtls: {}

Mesh policy

apiVersion: v1
items:
- apiVersion: authentication.istio.io/v1alpha1
  kind: MeshPolicy
  metadata:
    labels:
      app: security
      chart: security
      heritage: Tiller
      release: istio
    name: default
  spec:
    peers:
    - mtls: {}
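
For context, cluster-wide mTLS also needs the client side configured; a minimal sketch of the kind of DestinationRule referred to above, with a hypothetical host scope:

```
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: sw-system
spec:
  host: "*.sw-system.svc.cluster.local"   # hypothetical scope
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # sidecars originate mTLS, matching the Policy above
```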

I'm having the same issue running this example when deploying v2:
https://github.com/EdwinVW/pitstop/wiki/Run%20the%20application%20using%20a%20Service%20Mesh

After increasing the amount of memory available to my minikube, the issue went away.

I also see this same error.

When I describe my pod:

Readiness probe failed: HTTP probe failed with statuscode: 503

The istio-proxy log:

Envoy proxy is NOT ready: 2 errors occurred:
* failed checking application ports.
* envoy missing listener for inbound application port: 80

Here's the setup I'm using:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-one
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-one
      version: "1.0"
  template:
    metadata:
      labels:
        app: hello-world-one
        version: "1.0"
    spec:
      containers:
      - name: hello-world-one
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from 'hello-world-one'
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-one
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
    targetPort: 8080
  selector:
    app: hello-world-one
    version: "1.0"

This is fixed on Istio 1.3. Istio 1.1 is EOL in 2 days: https://istio.io/blog/2019/announcing-1.1-eol/.

If you see this issue on Istio 1.3+ please open a new issue
