Edit: (@alyssawilk on behalf of @cmluciano)
https://docs.google.com/document/d/1G9IVq7F7Onwinsl6EYzGsdzAGvVbo2FGfcPt35ItIx8
Just like TCP proxying, it would be great if Envoy had support for UDP proxying as well.
The current code for TCP proxying is pretty generic for the most part. The flow is something like this:
on_connection_received_callback()
-->pick upstream and connect to it
on_data_received_callback(data)
-->write_to_upstream(data)
on_stream_reset_callback() [downstream reset or upstream reset?]
-->cleanups
Based on a cursory scan through the code, there is also a timer that cleans up connections beyond a certain period of inactivity (@mattklein123 please confirm).
In terms of UDP support, much of the code in filters above can be repurposed or renamed to be generic to TCP/UDP where possible.
The ClientConnectionImpl class hardcodes the socket type to be Stream. This needs to be changed.
UDP packets with source port 0 should be dropped(?)
Instead of creating/destroying UDP connection objects per packet, the process can be optimized by having a keepalive-style timer that deletes the connection objects after timer expiry. UDP datagram size can be fixed to one MTU or less; as a first-order approximation, that should suffice (RFC 791, RFC 2460). We do not need to buffer up data and send it out. WDYT?
In terms of session affinity, packets from the same src ip/port would go to the same dst ip/port.
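(For illustration, a minimal sketch of the 4-tuple affinity key described above. This is not Envoy code; every name here is hypothetical. The idea is just that datagrams sharing (src ip, src port, dst ip, dst port) hash to the same value, which can then pick both an upstream host and, per the threading discussion later in this thread, a worker.)

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Hypothetical session key: all datagrams with the same 4-tuple belong
// to the same UDP "session".
struct UdpSessionKey {
  uint32_t src_ip;
  uint16_t src_port;
  uint32_t dst_ip;
  uint16_t dst_port;

  bool operator==(const UdpSessionKey& o) const {
    return src_ip == o.src_ip && src_port == o.src_port &&
           dst_ip == o.dst_ip && dst_port == o.dst_port;
  }
};

// Combine the fields into one hash, usable both for an unordered_map of
// per-session state and for consistent upstream/worker selection.
struct UdpSessionKeyHash {
  size_t operator()(const UdpSessionKey& k) const {
    const uint64_t ips = (uint64_t(k.src_ip) << 32) | k.dst_ip;
    const uint32_t ports = (uint32_t(k.src_port) << 16) | k.dst_port;
    return std::hash<uint64_t>{}(ips) ^
           (std::hash<uint32_t>{}(ports) * 0x9e3779b97f4a7c15ULL);
  }
};
```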
Just like TCP proxying, it would be great if Envoy had support for UDP proxying as well.
The current code for TCP proxying is pretty generic for the most part. The flow is something like this:
on_connection_received_callback()
-->pick upstream and connect to it
on_data_received_callback(data)
-->write_to_upstream(data)
on_stream_reset_callback() [downstream reset or upstream reset?]
-->cleanups
Based on a cursory scan through the code, there is also a timer that cleans up connections beyond a certain period of inactivity (@mattklein123 please confirm).
There is not currently any idle timer in the tcp_proxy filter. This is a useful feature to add in either case, and we would want this for UDP.
In terms of UDP support, much of the code in filters above can be repurposed or renamed to be generic to TCP/UDP where possible.
Agreed, almost all of the code can be shared. The name of the "tcp_proxy" filter is unfortunate. I don't know if I would bother renaming it right away. We can do that in a dedicated change if we want.
The ClientConnectionImpl class hardcodes the socket type to be Stream. This needs to be changed.
This is related to what @jamessynge was asking about in terms of why the Address interface also includes socket stuff. I mainly did this for simplicity. Ultimately, for UDP upstreams, we would probably like the ability to specify the upstream as udp://1.2.3.4:80 in terms of the cluster definitions, CDS, etc. Given the current code, the simplest way to do this would be to have the Address interface also hold the socket type (as you mentioned in Gitter), and remove this parameter from the various socket-related functions. The alternative would be to split the Address interface out and have an Address and a SocketAddress, where a SocketAddress contains an Address. I could really go either way on this. I don't think it's a huge deal.
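(A rough sketch of the two shapes being weighed here, with illustrative interfaces only; this is not Envoy's actual Address API:)

```cpp
enum class SocketType { Stream, Datagram };

// Option 1: the Address itself knows its socket type, so udp:// vs
// tcp:// is baked into the parsed address and socket()-style functions
// no longer need a type parameter.
class Address {
public:
  virtual ~Address() = default;
  virtual SocketType socketType() const = 0;
};

// Option 2: keep the address purely about ip/port (or pipe path), and
// wrap it in a SocketAddress that adds the socket semantics.
class PlainAddress {
public:
  virtual ~PlainAddress() = default;
};

class SocketAddress {
public:
  SocketAddress(const PlainAddress& address, SocketType type)
      : address_(address), type_(type) {}
  SocketType type() const { return type_; }

private:
  const PlainAddress& address_;
  const SocketType type_;
};
```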
UDP packets with source port 0 should be dropped(?)
Instead of creating/destroying UDP connection objects per packet, the process can be optimized by having a keepalive-style timer that deletes the connection objects after timer expiry. UDP datagram size can be fixed to one MTU or less; as a first-order approximation, that should suffice (RFC 791, RFC 2460). We do not need to buffer up data and send it out. WDYT?
Per above, I don't think the filter needs to do anything different whatsoever than it does today. The code can pretty much be identical, along with an idle timer to destroy things. I think where you have to deal with UDP is probably inside ConnectionImpl. You are going to need to know that it is UDP and deal with MTU there. Doing anything else will be too complicated I think.
In terms of session affinity, packets from same src port, src ip would go to same dst port, dst ip.
I don't think you need to worry about this. We will need to have UDP listeners, which bind, and have a filter stack. All of the normal rules then apply for where to forward. Along these lines, we are going to need to make the listener configuration more extensible. Right now we just support "port". I would like to extend this to be something along the lines of:
"bind_config": {
"type": "udp",
"address": "0.0.0.0:80"
}
Doing the above will make it easy for schemas, allow us to have pipe listeners, do IPv6, etc. In general I would appreciate it if you could sync up with @jamessynge on all of this, as I think it's related to IPv6 stuff, as well as future work I know probably needs to happen around QUIC, etc.
@moderation ^^^ Can you provide any info on the specifics of what scenario you need to support in terms of MTU handling, etc.? Want to make sure we are hitting a specific use case.
@rshriram if we do this, I would like to do this in several different changes. For example, we could start with adding UDP listeners, which proxy to TCP. That is a pretty straightforward change and is independent.
I agree with splitting this into multiple PRs. There are small changes to different subsystems. We should do this piecemeal to make sure we can triage issues easily.
I am unfamiliar with the requirements on QUIC.
With regard to the bind_config, it looks very structured. But a couple of questions: why do we need to key off the address as well, when the port is what matters? (Is this related to the issue that @kyessenov posted?) Secondly, will this config be backward compatible with existing configs? It seems to break the config format.
Here is an alternate format (I am okay with either one frankly).
"listeners": [
  {
    "port": 80,
    "port_type": "udp|tcp" [tcp is default]
    ...
  }
]
Has there been any progress made on this effort? Is it in active development or open to contribution?
No one is working on this that I know of. This did come up today in the context of something that would be good to work on. This is actually a fairly complicated feature and needs some thinking.
@shalako can you provide more color on what you need here actually? Do you just need UDP -> UDP? UDP -> TCP? Should datagram boundaries be preserved? Etc.
@shalako can keep me honest here, but I suspect our expectations are:
I’m interested in moving this forward. Are there more pertinent discussions that I should take a look at before starting some of the changes mentioned above?
@cmluciano I don't think anyone is actively working on this. This one probably is best served by a short design doc (1-2 pages, nothing fancy). Do you want to browse the code and then maybe we can collaborate on the doc contents? Would love to get this being worked on. FYI there is some interest from Cisco in also helping out with this but unclear on when they would have time. I think we can get started if you have cycles.
FWIW, I'm interested in this work for the purpose of proxying SIP traffic and RTP media streams in and out of a k8s cluster. When appropriate, I'll be happy to assist with setting up some services and doing some testing, if that's helpful to you @cmluciano & @mattklein123
@mattklein123 Sounds good to me. I will take a look through the codebase and let you know when I'm ready for the doc.
@jevonearth Thanks! I will ping you when ready
Let me ask a few questions here about functional behavior that I don't see in the issue yet.
I presume we want to be able to specify:
udp://${proxyIp}:${proxyPort} -> udp://${proxyToIp}:${proxyToPort}
correct? So a packet's headers would be transformed like this:
dstip = ${proxyIp} -> ${proxyToIp}
dstport = ${proxyPort} -> ${proxyToPort}
srcip = ${clientIp} -> ${proxyIp}
srcport = ${clientPort} -> ${proxyFromPort}
and going the other way:
dstip = ${proxyIp} -> ${clientIp}
dstport = ${proxyFromPort} -> ${clientPort}
srcip = ${proxyToIp} -> ${proxyIp}
srcport = ${proxyToPort} -> ${proxyPort}
Is that the desired behavior?
@hagbard5235 ^ is my assumption, but part of the reason that I think we need a design doc on this one is that it's honestly not clear to me exactly what the behavior should be. For example, it's easy enough to fit UDP into Envoy filter chain semantics by raising onData() for each datagram, but what if the user tries to send a datagram that is too large for the target MTU? (Either because path MTU does not match, or we are doing TCP -> UDP).
Also, there are some thorny issues around listening for UDP datagrams and the Envoy threading/filter model that need to be thought through.
Oh good... so I wasn't the only one not seeing clarity then ;)
Sounds like there's a desire to do TCP -> UDP and UDP -> TCP proxying as well. Does anyone have an example use case for those transitions? I'm curious how we anticipate them being used :)
Is it ok to start with UDP -> UDP and to allow packets to fragment on the way out, if there's an MTU issue?
Sounds like there's a desire to do TCP -> UDP and UDP -> TCP proxying as well
No idea if this is needed or not; I just want to make sure we consider it in the design and, if we exclude it, do so with appropriate thinking. Either way, the MTU mismatch and threading issues will need to be dealt with.
allow packets to fragment on the way out
Fragmentation may not be supported in the environment. In v1 we can likely ignore MTU issues and just document them. This leaves threading. Anyway, just want to capture all of this in the design. :)
@mattklein123 I'm cool capturing TCP -> UDP and UDP -> TCP in the design :) I was asking because if one has more concrete examples available it often helps in the design process :)
Question: are we doing a pure UDP proxy (i.e., we make our decisions on ip proto=UDP and port, and purely mutate ip:port fields), or are we looking into the datagrams to make proxy decisions?
Question: are we doing a pure UDP proxy (i.e., we make our decisions on ip proto=UDP and port, and purely mutate ip:port fields), or are we looking into the datagrams to make proxy decisions?
My thinking here was to go for full L4 proxy. Basically something like:
UDP listener -> raised datagrams in OnData() -> filter chain
Using this model, the "tcp_proxy" I think should mostly "just work" modulo some minor changes.
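(A simplified sketch of that model, using hypothetical types rather than Envoy's real filter API: the listener raises each datagram through an onData-style path, one datagram per call:)

```cpp
#include <functional>
#include <vector>

// One received UDP datagram; never coalesced with the next one.
struct Datagram {
  std::vector<char> payload;
};

using DatagramFilter = std::function<void(const Datagram&)>;

// Hypothetical per-session filter chain: the UDP listener calls
// onDatagram() once per received datagram, mirroring how a TCP read
// filter sees onData() for each read.
class UdpFilterChain {
public:
  void addFilter(DatagramFilter filter) { filters_.push_back(std::move(filter)); }

  void onDatagram(const Datagram& dgram) {
    for (auto& filter : filters_) {
      filter(dgram);
    }
  }

private:
  std::vector<DatagramFilter> filters_;
};
```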
For QUIC, in the future, we will have needs to do some pure L4 proxying, but IMO we should try to actually fit this within the existing filter model as much as possible.
The main issue that we have to solve (that I don't know answer to off the top of my head) is how to route the UDP packets between multiple threads. Basically, a connection today is bound to a thread along with its filters. This breaks down for incoming UDP packets that are not part of a connection. E.g., do we have all workers listen for packets and somehow forward? Only initially support UDP w/ 1 worker? etc.
I am not sure how threading becomes an issue. For a first cut, it's basically a dumb datagram proxy.
A more fundamental issue is the semantics. Are we going to load balance per datagram? Seems strange. We might need to reuse the Ketama hash or the IP hash and send packets to the same destination host. Any other load balancing algorithm seems unintuitive IMO.
The cluster will have to change as well. It's wedded to stream semantics in terms of circuit breakers. The notion of failure of a host is not going to work given that it's datagrams that we are sending (fire and forget). So things like outliers, panic thresholds, etc. are out the window.
A straw-man impl would just take a watered-down version of tcp_proxy and hardwire it to an IP-hash-based cluster where everything related to reliability is turned off.
@grosenhouse would this be a sufficient first cut for CF?
@mattklein123 One way to think about it is as a session, not a connection. Define a session as (srcip, dstip, srcport, dstport) and map that to your threads. This way of thinking likely also solves @rshriram 's point about handling LB. A session goes to the same place.
@mattklein123 One way to think about it is as a session, not a connection. Define a session as (srcip, dstip, srcport, dstport) and map that to your threads. This way of thinking likely also solves @rshriram 's point about handling LB. A session goes to the same place.
Yes, but I don't know how this works if you don't know all your callers (which you almost never will). Some kind of pre-routing or forwarding is needed potentially in the kernel or between workers. Note again that we have this same problem in QUIC and solving it here first will be useful.
Evidence I have is that load balancing per datagram would suffice. I will attempt to get clarifying info.
@shalako @rosenhouse the more concrete you can get with the initial use cases, the more helpful it would be. Please provide as much detail as possible.
@hagbard5235 @rshriram re: datagram routing, from talking with @jpinner, starting in 3.9 there is some type of UDP reuse port option, however this doesn't actually solve our datagram routing problem. The way one would want the kernel to work is that datagrams are hashed on the 4-tuple to a particular thread ID, such that all datagrams for a particular "session" go to the same worker. I think Google has a kernel patch to do this (@alyssawilk do you have a ref to this?) for QUIC use.
In v1, I think we could get by with just forwarding datagrams between workers if needed, but ultimately would really like to hear about concrete initial use cases in terms of what Envoy features people want to use along with UDP proxy.
@mattklein123 This: https://lwn.net/Articles/542629/ seems to indicate that since 3.9 SO_REUSEPORT for UDP does hash on the 4-tuple.
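(For reference, a sketch of opening a SO_REUSEPORT UDP socket on Linux >= 3.9; the open question above is whether the kernel's distribution across such sockets keeps a given 4-tuple on the same socket:)

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Each worker can open its own socket on the same port this way; the
// kernel then spreads incoming datagrams across the sockets.
int openReuseportUdpSocket(uint16_t port) {
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  if (fd < 0) {
    return -1;
  }
  int one = 1;
  if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
    close(fd);
    return -1;
  }
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(port);
  if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
    close(fd);
    return -1;
  }
  return fd;
}
```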
We have had this feature in the kernel for a while, haven't we? Anyway, why should it matter much? If we hash deterministically based on the 4-tuple, we will always be sending to the same upstream IP. We just need this sticky capability in the cluster manager. Am I missing something?
@hagbard5235 I don't see any indication in that article that UDP packets are hashed such that a flow goes to the same thread/recv call. I can do some kernel spelunking if needed but won't have time for a bit.
@rshriram the reason why it matters is because of the Envoy threading model. If we assume that there will be filters, and state, per flow/session (which will for sure happen with QUIC), we need all datagrams for a flow to wind up on the same worker. Thus, we need the kernel (or Envoy) to look at the 4-tuple and route datagrams appropriately. It's possible the kernel already does this, though IIRC there was a patch by Google in this area that I'm not sure ever landed in mainline.
^ is why initial design and use cases matter a lot. I would like to understand what Envoy features are going to be used during UDP proxying. It's conceivable that in v1 we could just have stateless filter chains for UDP that are instantiated and run on each worker, but that won't give us what we need in the future.
The upstream Linux kernel should be routing 4-tuples consistently, but I don't recall if it's always been the case, and I have no idea if one can consistently assume this works cross-platform. We'll definitely want to add some tests of the behavior earlier rather than later, and have a warning that one should not use UDP proxying on any hosts which don't pass the tests. It won't stop folks who have different test/prod machines from missing the warning, but it's what you can do.
For QUIC we're still going to need some userspace packet-tossing code to handle the case of NAT port migration. That might be reusable for thread-locality for any SO_REUSEPORT implementations which don't do tuple hashing, at the cost of a bunch of CPU.
@hagbard5235 Nice to run into you again.
@mattklein123 With regard to "Envoy features", I don't think we need anything more than proxying datagrams initially.
Is UDP similar to TCP in that a generic UDP router wouldn't be able to look up backends based on a hostname, as that data is at the application layer? If so, I assume each Route would be represented by a listener on a different port?
@mattklein123 With regard to "Envoy features", I don't think we need anything more than proxying datagrams initially.
Right, but why do you want to use Envoy? What is it providing? Dynamic config? Health checking? Etc. Just trying to get a feel for what this will be used for. If you could provide the exact use case that would be really helpful.
I suspect we'll be modeling CF applications as clusters in Pilot. As app developers push new apps to CF we'll dynamically add new clusters to Envoy via Pilot. As applications can be scaled horizontally and vertically, and the container orchestrator (Diego) keeps apps running by creating and destroying containers, the IP/port of cluster host members will be dynamically updated. As developers want to enable their apps to be accessible by external clients over HTTP, we'll add routes to a shared listener. For clients of TCP/UDP protocols, I suspect we'll need to dynamically configure a dedicated listener for each route. In CF, apps have a many-to-many relationship with routes; for UDP and TCP routes this may mean a many-to-many relationship between listeners and clusters.
Does that help?
We are interested in proxying UDP for protocol upgrade on untrusted networks (public cloud). We could route ICMP, NTP, LDAP, Kerberos etc. traffic over a secure TLS mesh with an egress proxy to the relevant backend systems.
@shalako yes that helps. Presumably you want load balancing? How would that work? Hash flow to one of the available backend servers, I assume?
@moderation so you want UDP -> TCP?
So far I've heard round-robin datagrams across backend hosts. I am actively soliciting requirements.
We'd want UDP->UDP
@mattklein123 I _think_ I want UDP -> TCP -> UDP. That would help us secure UDP traffic but I understand that may not be a mainstream use case.
@moderation Why not IPSEC for your use case?
@hagbard5235 We've looked at Wireguard, Ghostunnel and commercial vendors. They all have various downsides (proprietary, implemented in the kernel etc.). We'd basically like to copy Lyft's setup but be able to include UDP as well.
@moderation Totally get wanting to avoid commercial implementations. I was asking because it looks like you want to take a bunch of more or less IP traffic ( ICMP, UDP, etc) and simply get it onto an encrypted channel. You can do that by essentially pulling it up into TCP/TLS... but if you are looking to apply encryption to L3... IPSEC might be a more natural solution.
I'll be working on a Google doc that I hope to have ready before the start of CloudNative/Kubecon 2017 US. I will post the resulting link on this issue.
Thank you @cmluciano that's awesome!
Doc is available and can be edited by anyone that is a member of the envoy-dev mailing list
https://groups.google.com/d/topic/envoy-dev/J19VUtMifBM/discussion
Thank you @cmluciano! Looking forward to digging into this.
Chris, could you make this doc publicly viewable, please?
+1 for UDP, there is a strong use case in telecommunications: all data is transferred with UDP in 3G PS, in 4G (both FDD and TDD), and also in future 5G. I think we should start the support for UDP immediately.
@lixiaobing1 Thanks for the interest. Are the two use-cases you have frequency-division duplexing and time-division duplexing?
While UDP support would potentially enable these use-cases, I'm not sure that I fully understand FDD/TDD enough to work up a PoC. Do you have more links on existing implementations related to telecommunications and edge-cases for UDP?
@cmluciano I just went through this issue again and the doc you put together. Thanks for the doc! Here is my rough thinking on how we should proceed:
IMO we should go straight to support for UDP "sessions", initially just considering the 4-tuple as a hash/ID that identifies the session. If we do this, we can largely treat a UDP session as a connection within the existing code base, including filters.
I think the initial implementation can potentially be not that fast. Meaning, we don't need to assume that the kernel/OS provides thread hashing (we should investigate what kernel version and options can make this happen by default). The idea should be that datagrams for a particular session wind up on the same worker, where they can be mapped to a "connection" and the associated filter stack. Initial code can just forward a datagram that arrives on the wrong worker to the correct listener (likely by hash modulo). For initial testing, we can verify with a small number of workers so there will not be much forwarding.
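(A minimal sketch of that forwarding idea, with hypothetical names: hash the 4-tuple, take it modulo the worker count, and hand the datagram over if it landed on the wrong worker:)

```cpp
#include <cstddef>
#include <cstdint>

size_t pickWorker(uint64_t four_tuple_hash, size_t num_workers) {
  return four_tuple_hash % num_workers;
}

// Called on whichever worker the kernel delivered the datagram to.
void onDatagramReceived(uint64_t four_tuple_hash, size_t my_worker_index,
                        size_t num_workers) {
  const size_t owner = pickWorker(four_tuple_hash, num_workers);
  if (owner == my_worker_index) {
    // processLocally(dgram);       // hypothetical: map to the session's
    //                              // "connection" and filter stack
  } else {
    // postToWorker(owner, dgram);  // hypothetical: cross-thread handoff
  }
}
```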
Now that tcp_proxy supports idle timeouts, I think we can just rely on that to "close" connections/sessions.
In terms of MTU handling, I think we can just ignore this in v1 potentially with some stats/error handling if we try to send datagrams with a size that is too large. If we do this, the basic flow of listener -> connection/session -> tcp_proxy -> upstream I think should mostly just work. For upstream, we can do round robin as well as affinity based on hashing of datagram tuples.
So the work here is primarily going to be around doing:
I think if we do ^, it will be generally useful to quite a few people, and act as a first step towards some of the work we need to do for QUIC.
Thoughts?
cc @alyssawilk @ggreenway @lizan @PiotrSikora
I agree, that is a good initial target.
We also need to make sure that none of the involved code will combine two datagrams (e.g. Buffer::Instance::move()). The datagram boundaries are part of the protocol, so we can't handle them the same way TCP streams are handled.
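(To illustrate the boundary concern: each recvfrom() returns exactly one datagram, and that unit must be forwarded as-is; merging two reads into one buffer, the way a byte-stream move would, destroys the framing. A sketch, with the forwarding call left hypothetical:)

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

void readDatagrams(int fd) {
  char buf[65536]; // large enough for any UDP payload; one datagram per read
  for (;;) {
    sockaddr_in peer{};
    socklen_t peer_len = sizeof(peer);
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr*>(&peer), &peer_len);
    if (n < 0) {
      break;
    }
    // forwardOneDatagram(buf, n, peer); // hypothetical: never append the
    //                                   // next datagram's bytes to this one
  }
}
```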
+1, that path LGTM too @mattklein123. I will finish addressing the comments in the Google doc.
Should we transplant the Google doc into a proposal PR after if is cleaned up?
Should we transplant the Google doc into a proposal PR after if is cleaned up?
If possible, yes please.
OK, I cleaned up some of the comments in the Google doc. Some comments remain for reference when we transition to version 2 work or want to recall discussion for current tasks.
I moved use-cases that we are unsure of or definitely have punted on for the future to a new heading. Anything in bold signifies the decision for v1.
Is there a place in the Envoy repo layout that makes most sense for a summary of the Google doc?
@cmluciano https://github.com/envoyproxy/envoy/tree/master/source/docs is probably good if you want to drop something there, or even an MD with just a link to the gdoc. Either way.
@cmluciano, yes, you are right, FDD is frequency-division duplexing and TDD is time-division duplexing. Up to now, telecommunications has gone through 2G, 3G, 4G (LTE), and 5G; the protocol is mainly IP/UDP/GTP-U.
The roadmap looks good to me. Thanks @alyssawilk!
@rshriram any update? When can we hope to have UDP proxy support in Envoy?
@cmluciano is the one doing the UDP work.
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.
I'm re-adding help wanted since we haven't made much progress on this. Hopefully we can get some resources to work on this at some point.
I started making a bit of progress on the listener options in my local fork.
Once I update the test cases, I will submit these small pieces for review.
There are now a couple PRs for UDP socket support, in #5108 and (redundantly) #5115. Past that, I think we'll need something at the Listener level for actually receiving packets. Here's a rough proposal, comments welcome...
Could you make that proposal available publicly or is that not possible? 🙈
Sure, here's a link that should allow comments: https://docs.google.com/document/d/1dEo19y-trABuW2x6-T564LmK7Ld-BPXZOlnR4df9KVU/edit?usp=sharing
Let me know if that doesn't work.
Roadmap
- [ ] Network filters (need a use-case)
Ref: DNS Filter
Wanted to pitch in here - A UDP proxy with a network filter can be useful in my scenario to create a DNS filter to respond to DNS requests. I would like to use Envoy as a DNS server where clusters are tagged with DNS names and the routing information present in Envoy can be used to respond to DNS - specifically DNS A records.
Unless someone gets to this first I will finish this out in service of https://github.com/envoyproxy/envoy/issues/1193.
For everyone watching this issue I will have something working pretty soon which will handle basic UDP datagram proxying on top of temporal "sessions" which timeout after inactivity.
If anyone out there has a relatively simple test case they would like me to try I'm happy to do that. Optimally you could help me out with various linux commands that mimic what you want to do and I can then test by sticking Envoy in the middle. Thank you!
@mattklein123 I created a SIP sandbox (https://github.com/readverish/sip-sandbox.git) to test SIP scenarios, which has an example of a very simple SIP use case using UDP. It uses docker-compose to setup a client and a server. Please have a look and let me know if this helps, or you want to test a different scenario.
Thanks @readverish! I will take a look this week.
Also, here's a UDP echo server.
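(In case that link goes stale, a comparable minimal UDP echo server — a sketch, not the one linked — that can serve as the upstream when sticking Envoy in the middle:)

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdio>

int main() {
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(5005); // arbitrary test port
  if (fd < 0 || bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
    perror("bind");
    return 1;
  }
  char buf[65536];
  for (;;) {
    sockaddr_in peer{};
    socklen_t peer_len = sizeof(peer);
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr*>(&peer), &peer_len);
    if (n < 0) {
      continue;
    }
    // Echo the datagram back to whoever sent it.
    sendto(fd, buf, n, 0, reinterpret_cast<sockaddr*>(&peer), peer_len);
  }
}
```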
Exciting to see the udp_proxy scaffolding get merged! 🎉
What's next? Is the roadmap at the top of the issue directionally accurate?
MVP complete pending code reviews here https://github.com/envoyproxy/envoy/issues/492 if anyone wants to kick the tires.