I'm trying to add a CORS policy to my only virtual_host in the ingress listener:
"route_config": {
"virtual_hosts": [
{
"routes": [
{
"route": {
"cluster": "my-service",
"timeout": "60s"
},
"match": {
"prefix": "/"
}
}
],
"cors": {
"allow_headers": "Authorization,Content-Type, correlationid",
"allow_origin": [
"my-domain.com"
],
"allow_methods": "GET, POST, PUT, HEAD, OPTIONS"
},
...
Based on implementation details for CORS, having this should allow me to do an OPTIONS request and get a 200 OK back with the corresponding CORS headers in the response. However, it looks like my OPTIONS request is bypassing this filter altogether and getting to my application (which then responds with a 405 error):
# curl -v https://my-service -X OPTIONS -H 'Origin: foo.com'
* Trying 172.20.244.140...
* TCP_NODELAY set
* Connected to my-service (172.20.244.140) port 443 (#0)
> OPTIONS / HTTP/1.1
> Host: my-service
> User-Agent: curl/7.61.0
> Accept: */*
> Origin: foo.com
>
< HTTP/1.1 405 Method Not Allowed
< content-type: application/json; charset=UTF-8
< date: Tue, 30 Oct 2018 20:30:41 GMT
< content-length: 32
< x-envoy-upstream-service-time: 3
< server: my-service
<
* Connection #0 to host my-service left intact
{"message":"Method Not Allowed"}
In this case my-service is a K8s service listening on 443 and sending to Envoy's ingress listener, which has only one virtual_host defined with the CORS policy listed above. Any ideas as to what I might be missing here?
@codesuki @dschaller
Did you add the CORS filter to the http_filters list?
Similar to this.
listeners:
  filter_chains:
    http_filters:
    - name: envoy.cors
It appears you're missing the Access-Control-Request-Method header in your OPTIONS request.
Good catch!
For more information check step 3 here:
https://www.w3.org/TR/cors/#resource-preflight-requests
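For reference, a preflight that includes that header would look something like this (keeping the same hostname and origin as the earlier curl; the requested method and header values here are just illustrative):

curl -v https://my-service -X OPTIONS \
  -H 'Origin: foo.com' \
  -H 'Access-Control-Request-Method: POST' \
  -H 'Access-Control-Request-Headers: content-type'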
Unfortunately still not working even with Access-Control-Request-Method in the request header. Here's my full listener config (converted to yaml for readability):
static_resources:
  listeners:
  - address:
      socket_address:
        address: '0.0.0.0'
        port_value: 12345
    name: ingress
    filter_chains:
      filters:
      - config:
          route_config:
            virtual_hosts:
            - routes:
              - route:
                  cluster: my-service
                  timeout: 60s
                match:
                  prefix: /
              require_tls: ALL
              name: local_service
              cors:
                allow_origin:
                - my-domain.com
                allow_methods: GET, POST, PUT, HEAD, OPTIONS
                allow_headers: Authorization,Content-Type,correlationid
              domains:
              - '*'
            name: local_route
          server_name: my-service
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.cors
          - name: envoy.router
          codec_type: AUTO
        name: envoy.http_connection_manager
I'm referencing envoy.cors as part of http_filters in envoy.http_connection_manager. Does it need to be defined elsewhere?
No. That is correct.
I tried but could not reproduce the problem here.
Can you confirm that you curl with the same origin that you set in the config?
Only when I remove the Access-Control-Request-Method header do I get the same output as you.
@codesuki boom, that was it. I was using an origin different from the one listed in my config, and my lack of understanding of how CORS works led me to expect Envoy to return some sort of error (saying the origin doesn't match or something like that), which is not the case. Passing in an origin that matched the config returned a 200 OK with all the other access-control headers in the response, as expected. Perfect, thank you!
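For anyone who finds this later, a preflight with a matching origin against the config above returns something roughly like the following (the exact set of response headers may differ between Envoy versions):

curl -v https://my-service -X OPTIONS \
  -H 'Origin: my-domain.com' \
  -H 'Access-Control-Request-Method: POST'
...
< HTTP/1.1 200 OK
< access-control-allow-origin: my-domain.com
< access-control-allow-methods: GET, POST, PUT, HEAD, OPTIONS
< access-control-allow-headers: Authorization,Content-Type,correlationid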
Ok, I might have found a legit (but minor) bug though: it looks like if an allow_origin_regex pattern is invalid, Envoy crashes at startup without any error. This can probably be fixed by checking that each string can be parsed before adding it to the list of expressions, and otherwise returning an explicit error stating that the pattern can't be parsed: https://github.com/envoyproxy/envoy/pull/3769/files#diff-e1ba5ba2b98b12e1bd4aaa74e9b61a80R58. This sounds like a simple fix, so if you all agree this is a bug I wouldn't mind starting on it (it sounds like a good first issue for me).
That seems like a valid request. I think the main concern is how to handle invalid regex patterns since suppressing them will likely result in unexpected behavior.
@arianmotamedi how are you thinking of handling invalid regex patterns?
I need to dig through the code to see how Envoy handles other similar scenarios, but should this be treated as a fatal error during startup? Or just a warning for invalid patterns? I'm thinking a fatal error would be more clear.
Good call!
Similar to https://github.com/envoyproxy/envoy/blob/0f7120968e60da62feb59f00170078611dffc18a/source/common/router/config_impl.cc#L745
you can use
https://github.com/envoyproxy/envoy/blob/0f80888148c581cb4d0c81aba40488d8507a4ca2/source/common/common/utility.cc#L536
to throw an exception if parsing fails.
Just adding the call here
https://github.com/envoyproxy/envoy/blob/0f7120968e60da62feb59f00170078611dffc18a/source/common/router/config_impl.cc#L109
should solve it.
I remember wanting to use RegexUtil::parseRegex but for some reason have forgotten to do it 😅
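To illustrate the failure mode being discussed, here is a minimal standard-library sketch (not Envoy code, and the wrapper name is made up): constructing a std::regex from an invalid pattern throws std::regex_error, and catching that at config-load time is what turns a silent crash into a clear startup error.

#include <iostream>
#include <regex>
#include <stdexcept>
#include <string>

// Hypothetical helper: parse a pattern or rethrow with a descriptive message.
std::regex parseOrThrow(const std::string& pattern) {
  try {
    return std::regex(pattern, std::regex::optimize);
  } catch (const std::regex_error& e) {
    throw std::runtime_error("Invalid regex '" + pattern + "': " + e.what());
  }
}

int main() {
  try {
    parseOrThrow("*");  // '*' has nothing to repeat, so it is an invalid pattern
  } catch (const std::exception& e) {
    std::cerr << e.what() << std::endl;  // prints: Invalid regex '*': ...
  }
  return 0;
}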
Wait, looks like https://github.com/envoyproxy/envoy/blob/0f7120968e60da62feb59f00170078611dffc18a/source/common/router/config_impl.cc#L109 is already doing RegexUtil::parseRegex(regex) which should have thrown an exception, right? 🤔
Sorry, these days I am extremely absent-minded. I even linked you to the exact line 😅
So, yes, I would expect that to throw an exception. I never used exceptions before in C++, so I would have to dig deeper. Maybe this exception gets caught somewhere?
In case you have some spare time, maybe you could check what happens if you provide a broken regex to https://github.com/envoyproxy/envoy/blob/0f7120968e60da62feb59f00170078611dffc18a/source/common/router/config_impl.cc#L745
I think this already works as intended. I attempted to repro with an invalid regex, and an exception was thrown at startup.
front-envoy_1_a1d4160e9143 | [2018-11-30 20:04:30.653][000007][critical][main] [source/server/server.cc:85] error initializing configuration '/etc/front-envoy.yaml': Invalid regex '*': regex_error
front-envoy_1_a1d4160e9143 | [2018-11-30 20:04:30.655][000007][info][main] [source/server/server.cc:502] exiting
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.
I have a similar issue. Can you please tell me what is wrong with it?
Envoy Config:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: main-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 51051 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: grpc_json
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              cors:
                allow_origin: ["*"]
                allow_methods: "OPTIONS, GET, PUT, DELETE, POST, PATCH, OPTIONS"
                allow_headers: "authorization, keep-alive, user-agent, cache-control, content-type, content-transfer-encoding, x-accept-content-transfer-encoding, x-accept-response-streaming, x-user-agent, x-grpc-web, referer"
                expose_headers: "grpc-status, grpc-message, x-envoy-upstream-service-time"
              routes:
              - match: { prefix: "/", grpc: {} }
                route: { cluster: grpc-backend-services, timeout: { seconds: 60 } }
          http_filters:
          - name: envoy.cors
          - name: envoy.grpc_web
          - name: envoy.grpc_json_transcoder
            config:
              proto_descriptor: "/data/campaignmgmt.pb"
              services: ["CampaignMgmt"]
              print_options:
                add_whitespace: true
                always_print_primitive_fields: true
                always_print_enums_as_ints: false
                preserve_proto_field_names: false
          - name: envoy.router
  clusters:
  - name: grpc-backend-services
    connect_timeout: 1.25s
    type: logical_dns
    lb_policy: round_robin
    dns_lookup_family: V4_ONLY
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: host.docker.internal
        port_value: 6565
curl request:
curl -v 'https://localhost:51051/foodad' -X OPTIONS -H 'Access-Control-Request-Method: POST' -H 'Access-Control-Request-Headers: content-type' -H 'Origin: http://localhost:8181'
@vaibhav-gupta-grab Having difficulties with the same issue. Have you found a solution?
I had to remove the grpc: {} from the routes.match property.
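For reference, the adjusted route from the config above would look something like this (only the match changes; cluster name and timeout are as posted earlier). Without grpc: {}, the browser's OPTIONS preflight, which is not a gRPC request, can still match the route, so the CORS policy gets applied:

routes:
- match: { prefix: "/" }
  route: { cluster: grpc-backend-services, timeout: { seconds: 60 } }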