Envoy: Envoy translates localhost to "::1" when no IPv6 stack available

Created on 27 Aug 2019 · 4 comments · Source: envoyproxy/envoy

Title: Envoy translates localhost to "::1" when no IPv6 stack available

Description:

Hi!

I am trying to get an Envoy configuration that supports both IPv4 and IPv6 stacks. As a first approach, I only wanted to verify that the same configuration is valid for pure IPv4 and pure IPv6 hosts (dual-stack is not needed).

First I checked that IPv6 support has been introduced recently, and there are already some open tickets that may represent my issue (#1005, for example).

My scenario is as follows:

  • Envoy uses the gRPC envoy.ext_authz filter to authorize requests and then routes to a cluster depending on some rules.
  • Envoy, auth server and clusters are all running in the same container.
  • Using Envoy v1.11.0

So, my first attempt was to bind the listener to "::" and use strict_dns for the auth server and the clusters, all of them pointing to "localhost", letting the DNS resolver select "127.0.0.1" or "::1" depending on the available stack.

Config:

This is the Envoy configuration I am using:

node:
  id: "1"
  cluster: test

admin:
  access_log_path: /admin_access.log
  address:
    socket_address: { address: "::", port_value: 9901, ipv4_compat: true }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: "::"
        port_value: 8888
        ipv4_compat: true
    filter_chains:
      - filters:
        - name: envoy.http_connection_manager
          config:
            codec_type: auto
            stat_prefix: ingress_http
            access_log:
              - name: envoy.file_access_log
                config:
                  path: "/access.log"
            rds:
               route_config_name: local_route
               config_source:
                  path: /etc/envoy/rds.yaml
            http_filters:
              - name: envoy.ext_authz
                config:
                  grpc_service:
                    envoy_grpc:
                      cluster_name: auth-server
                    timeout: 3s
                  clear_route_cache: true
                  with_request_body:
                    max_request_bytes: 2048
                    allow_partial_message: true
              - name: envoy.router
dynamic_resources:
  cds_config:
    path: "/etc/envoy/cds.yaml"

The content of the Cluster section would be like this:

version_info: "0"
resources:
  - "@type": type.googleapis.com/envoy.api.v2.Cluster
    name: auth-server
    connect_timeout: 5s
    type: strict_dns
    http2_protocol_options:  {}
    load_assignment:
      cluster_name: auth-server
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "localhost"
                port_value: 50051
                ipv4_compat: true

Logs:

With this configuration, in a container that only supports IPv4, Envoy translates "localhost" to "::1", so the connection to auth-server fails (as it only accepts connections on the IPv4 stack):

[2019-08-27 17:16:35.968][313][trace][http] [source/common/http/http1/codec_impl.cc:366] [C2] parsing 78 bytes
[2019-08-27 17:16:35.968][313][trace][http] [source/common/http/http1/codec_impl.cc:479] [C2] message begin
[2019-08-27 17:16:35.968][313][debug][http] [source/common/http/conn_manager_impl.cc:246] [C2] new stream
[2019-08-27 17:16:35.968][313][trace][http] [source/common/http/http1/codec_impl.cc:334] [C2] completed header: key=Host value=127.0.0.1:8888
[2019-08-27 17:16:35.968][313][trace][http] [source/common/http/http1/codec_impl.cc:334] [C2] completed header: key=User-Agent value=curl/7.60.0
[2019-08-27 17:16:35.968][313][trace][http] [source/common/http/http1/codec_impl.cc:445] [C2] headers complete
[2019-08-27 17:16:35.968][313][trace][http] [source/common/http/http1/codec_impl.cc:334] [C2] completed header: key=Accept value=*/*
[2019-08-27 17:16:35.968][313][trace][http] [source/common/http/http1/codec_impl.cc:466] [C2] message complete
[2019-08-27 17:16:35.968][313][debug][http] [source/common/http/conn_manager_impl.cc:600] [C2][S17340692137313621699] request headers complete (end_stream=true):
':authority', '127.0.0.1:8888'
':path', '/'
':method', 'GET'
'user-agent', 'curl/7.60.0'
'accept', '*/*'

[2019-08-27 17:16:35.968][313][debug][http] [source/common/http/conn_manager_impl.cc:1092] [C2][S17340692137313621699] request end stream
[2019-08-27 17:16:35.968][313][trace][filter] [source/extensions/filters/http/ext_authz/ext_authz.cc:72] [C2][S17340692137313621699] ext_authz filter calling authorization server
[2019-08-27 17:16:35.968][313][debug][router] [source/common/router/router.cc:401] [C0][S6842458295467923966] cluster 'auth-server' match for URL '/envoy.service.auth.v2.Authorization/Check'
[2019-08-27 17:16:35.968][313][debug][router] [source/common/router/router.cc:514] [C0][S6842458295467923966] router decoding headers:
':method', 'POST'
':path', '/envoy.service.auth.v2.Authorization/Check'
':authority', 'auth-server'
':scheme', 'http'
'te', 'trailers'
'grpc-timeout', '3000m'
'content-type', 'application/grpc'
'x-envoy-internal', 'true'
'x-forwarded-for', '192.168.0.17'
'x-envoy-expected-rq-timeout-ms', '3000'

[2019-08-27 17:16:35.969][313][debug][client] [source/common/http/codec_client.cc:26] [C3] connecting
[2019-08-27 17:16:35.969][313][debug][connection] [source/common/network/connection_impl.cc:702] [C3] connecting to [::1]:50051
[2019-08-27 17:16:35.969][313][debug][connection] [source/common/network/connection_impl.cc:715] [C3] immediate connection error: 99
[2019-08-27 17:16:35.969][313][debug][http2] [source/common/http/http2/codec_impl.cc:726] [C3] setting stream-level initial window size to 268435456
[2019-08-27 17:16:35.969][313][debug][http2] [source/common/http/http2/codec_impl.cc:748] [C3] updating connection-level initial window size to 268435456
[2019-08-27 17:16:35.969][313][debug][pool] [source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-08-27 17:16:35.969][313][trace][router] [source/common/router/router.cc:1393] [C0][S6842458295467923966] buffering 317 bytes
[2019-08-27 17:16:35.969][313][trace][http] [source/common/http/conn_manager_impl.cc:857] [C2][S17340692137313621699] decode headers called: filter=0x2a80000 status=4
[2019-08-27 17:16:35.969][313][trace][http] [source/common/http/http1/codec_impl.cc:387] [C2] parsed 78 bytes
[2019-08-27 17:16:35.969][313][trace][connection] [source/common/network/connection_impl.cc:288] [C2] readDisable: enabled=true disable=true
[2019-08-27 17:16:35.969][313][trace][connection] [source/common/network/connection_impl.cc:456] [C3] socket event: 2
[2019-08-27 17:16:35.969][313][debug][connection] [source/common/network/connection_impl.cc:467] [C3] raising immediate error
[2019-08-27 17:16:35.969][313][debug][connection] [source/common/network/connection_impl.cc:188] [C3] closing socket: 0
[2019-08-27 17:16:35.969][313][debug][client] [source/common/http/codec_client.cc:82] [C3] disconnect. resetting 0 pending requests
[2019-08-27 17:16:35.969][313][debug][pool] [source/common/http/http2/conn_pool.cc:149] [C3] client disconnected
[2019-08-27 17:16:35.969][313][debug][router] [source/common/router/router.cc:868] [C0][S6842458295467923966] upstream reset: reset reason connection failure
[2019-08-27 17:16:35.969][313][debug][http] [source/common/http/async_client_impl.cc:91] async http request response headers (end_stream=true):
':status', '200'
'content-type', 'application/grpc'
'grpc-status', '14'
'grpc-message', 'upstream connect error or disconnect/reset before headers. reset reason: connection failure'

[2019-08-27 17:16:35.969][313][trace][filter] [source/extensions/filters/http/ext_authz/ext_authz.cc:218] [C2][S17340692137313621699] ext_authz filter rejected the request with an error. Response status code: 403
[2019-08-27 17:16:35.969][313][debug][http] [source/common/http/conn_manager_impl.cc:1167] [C2][S17340692137313621699] Sending local reply with details ext_authz_error
[2019-08-27 17:16:35.969][313][debug][http] [source/common/http/conn_manager_impl.cc:1359] [C2][S17340692137313621699] encoding headers via codec (end_stream=true):
':status', '403'
'date', 'Tue, 27 Aug 2019 15:16:35 GMT'
'server', 'envoy'

[2019-08-27 17:16:35.969][313][trace][connection] [source/common/network/connection_impl.cc:392] [C2] writing 97 bytes, end_stream false
[2019-08-27 17:16:35.969][313][trace][connection] [source/common/network/connection_impl.cc:288] [C2] readDisable: enabled=false disable=false
[2019-08-27 17:16:35.969][313][debug][pool] [source/common/http/http2/conn_pool.cc:171] [C3] destroying primary client

This is the log line where Envoy is trying to send the auth query:

[2019-08-27 17:16:35.969][313][debug][connection] [source/common/network/connection_impl.cc:702] [C3] connecting to [::1]:50051

I was a little surprised, so I used nslookup to figure out how the container resolved "localhost", and no "::1" was returned:

# nslookup localhost
Server:     10.0.2.3
Address:    10.0.2.3#53

Non-authoritative answer:
Name:   localhost.mycompany.com
Address: 127.0.0.1

I even checked /etc/hosts to make sure no IPv6 address was there:

# cat /etc/hosts
127.0.0.1   localhost
192.168.0.17    a931aae25c13
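
For comparison (just an illustrative example, not taken from my container), a dual-stack host would typically also carry an IPv6 loopback entry in /etc/hosts, for example:

127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback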

I am not sure whether this is an issue related to the environment I am using for the test (Docker containers), so I would like to ask whether Envoy should resolve "localhost" correctly depending on the available stack, or whether this is a current limitation.
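
For reference, a possible workaround (just an untested sketch; dns_lookup_family is the v2 Cluster field whose default AUTO prefers IPv6 results) would be to force IPv4-only resolution on the strict_dns cluster in cds.yaml:

resources:
  - "@type": type.googleapis.com/envoy.api.v2.Cluster
    name: auth-server
    connect_timeout: 5s
    type: strict_dns
    # Untested assumption: V4_ONLY restricts DNS resolution to IPv4 addresses,
    # so "localhost" should no longer resolve to "::1".
    dns_lookup_family: V4_ONLY
    http2_protocol_options: {}
    load_assignment:
      cluster_name: auth-server
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "localhost"
                port_value: 50051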

Labels: question, stale


All 4 comments

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

Probably coming from https://github.com/moby/moby/issues/33099
@kikogolk Does your docker version match the issue above?

For those trying to run a sidecar in AWS ECS Fargate with Envoy (not in AWS App Mesh):

This issue is the reason why you might be getting the error "immediate connection error: 99".

I ran an Envoy sidecar (envoyproxy/envoy:v1.14.2), and after setting the address to 127.0.0.1 in clusters[].hosts[].socket_address, the connection worked normally.
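
A minimal sketch of that change (using the deprecated clusters[].hosts[] v2 syntax the comment refers to; the cluster name, type, and port here are only assumptions reused from this issue, and the snippet is untested):

clusters:
- name: auth-server
  connect_timeout: 5s
  # Hardcoding the IPv4 loopback address sidesteps the localhost -> ::1 resolution entirely.
  type: static
  hosts:
  - socket_address:
      address: "127.0.0.1"
      port_value: 50051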
