more: http://microservices.io/patterns/apigateway.html
This is traditionally referred to as a "batching proxy": the client sends a multipart request representing all the requests it wants executed, and the proxy performs those requests and responds with a multipart response.
In the case of Kong, the set of upstream requests can be configured on the plugin/Kong side, and the response is a multipart response.
+1
@ahmadnassri Would it be possible to transform the response before passing it back to the client and also targeting devices? It's understood that everything is still in the idea phase.
@lindo-jmm depends on what you mean by "targeting devices" ?
I would love Kong to do that, but I wonder if it's Kong's responsibility. https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQ35pXZMfL-YNeA67_FbT8uxvhgz1_1ONpXiTddYzjn9mnbycd3bU-IS11ZbQ We need an edge service in our API to compose responses, load balance, and improve fault tolerance.
+1
+1
+1
+1
In order to start working on this plugin, could you provide some examples of requests you would like to collapse, and how the responses of those requests should be aggregated into one output?
@thefosk
there are two design patterns here that I know of:
1. The plugin definition defines multiple upstream paths to query, and Kong responds with a single multipart response that includes all results (headers + bodies). An upstream timeout configuration is critical here, _(preferably a timeout per upstream)_.
2. The client sends a multipart request representing all the requests it wants executed, and the proxy performs those requests and responds with a multipart response. This is traditionally referred to as a "batching proxy".
tl;dr:
multi-part in => multi-part out.
Detailed example:
The incoming request contains a multipart body; each part includes headers describing the target upstream (host, path, etc.):
GET /my/collapsed/api HTTP/1.1
Host: localhost:8000
Accept: multipart/form-data
Connection: keep-alive
Content-Type: multipart/form-data; boundary=---------------------------xxx
Content-Length: 554

-----------------------------xxx
Host: api.foo.com
Content-Type: application/json
Content-Length: 14

{"foo": "bar"}
-----------------------------xxx
Host: api.foo.com
Content-Type: application/xml

<xml><foo>bar</foo></xml>
-----------------------------xxx
Host: api.foo.com
Content-Type: text/html

<!DOCTYPE html><title>Content of a.html.</title>
-----------------------------xxx--
`Host` is used in each part here, but it would be better to use a custom Kong header that identifies the target upstream (API name, identifier, etc.). The outgoing response is a multipart response detailing each of the upstream results.
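For illustration, that response could look roughly like this (the boundary and the X-Kong-* part headers are just placeholders, not an agreed format):

HTTP/1.1 200 OK
Content-Type: multipart/form-data; boundary=---------------------------yyy

-----------------------------yyy
X-Kong-Upstream: api.foo.com
X-Kong-Status: 200
Content-Type: application/json

{"foo": "bar"}
-----------------------------yyy
X-Kong-Upstream: api.foo.com
X-Kong-Status: 200
Content-Type: application/xml

<xml><foo>bar</foo></xml>
-----------------------------yyy
X-Kong-Upstream: api.foo.com
X-Kong-Status: 200
Content-Type: text/html

<!DOCTYPE html><title>Content of a.html.</title>
-----------------------------yyy--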
I think this requires some more details before implementing. E.g. the example
given above lacks context as to which response came from which upstream.
As for the implementation:
within nginx we have one request process, but we need multiple responses. If we
do all the upstream requests from the plugin, we'll be bypassing several steps
in the nginx process (e.g. the logging phase).
So probably we should have one request being handled by nginx in the request
process (the 'main request'), and the others via additional requests from Lua
code ('sub requests').
This has the side effect that the sub requests can only use DNS-based
load balancing, whereas the main request can also use the 'upstreams' balancer
and all the other API defaults, for example.
To make it play nicely with other plugins, this plugin would probably have to
run last (plugin 'priority') so all other plugins have already been executed
(plugins running after this one will only operate on the main request,
and not on the sub requests).
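To sketch what the sub requests could look like (nothing final; this assumes lua-resty-http is available and that the `timeout` field in the configuration below is in seconds, and the main request result would be added to the same list separately):

local http = require "resty.http"

-- fetch one configured sub request; `sub` is one entry of conf.sub_requests
local function fetch(sub)
  local httpc = http.new()
  httpc:set_timeout(sub.timeout * 1000)            -- per-upstream timeout, in ms
  -- forwarding the incoming query string is omitted here for brevity
  local res, err = httpc:request_uri(sub.url, { method = "GET" })
  if not res then
    return { name = sub.name, status = 500, error = err }  -- internal Kong error (e.g. timeout)
  end
  return { name = sub.name, status = res.status, body = res.body }
end

-- run all sub requests concurrently in light threads and collect the results
local function collect(conf)
  local threads, results = {}, {}
  for i, sub in ipairs(conf.sub_requests) do
    threads[i] = ngx.thread.spawn(fetch, sub)
  end
  for i, thread in ipairs(threads) do
    local ok, res = ngx.thread.wait(thread)
    results[i] = ok and res or { name = conf.sub_requests[i].name, status = 500 }
  end
  return results
end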
- How to deal with errors?
- How to pass parameters in the request? Does every upstream get all parameters? For now: the identical query string and body go to every upstream.
- What HTTP methods should it support? For now: GET only; anything else gets a 405.
__Configuration__:
{
  main_name = "some name",    -- same as the names below, but for the main request
  sub_requests = {            -- array of upstream request targets
    {
      name = "upstream2",
      timeout = 10,
      url = "http://some/where/",
    },
    {
      name = "upstream3",
      timeout = 10,
      url = "http://some/where/",
    },
  },
  response = "application/json",   -- type of the combined response (json or url)
  header_prefix = "X-Kong-batch-", -- for url encoded responses
}
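Assuming the plugin were called request-collapsing (the name is just a placeholder), enabling it on an API with this configuration through the Admin API could then look like:

POST /apis/my-api/plugins HTTP/1.1
Host: localhost:8001
Content-Type: application/json

{
  "name": "request-collapsing",
  "config": {
    "main_name": "some name",
    "sub_requests": [
      { "name": "upstream2", "timeout": 10, "url": "http://some/where/" },
      { "name": "upstream3", "timeout": 10, "url": "http://some/where/" }
    ],
    "response": "application/json",
    "header_prefix": "X-Kong-batch-"
  }
}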
__JSON response__:
status: always 200
body:
{
  "some name": {     -- good-path, proper upstream result
    "status": xxx,   -- the actual status code
    "body": {}       -- nested json body if it was json, string otherwise
  },
  "upstream1": {
    "status": 500    -- internal Kong error (eg. timeout)
  },
  "upstream2": {
    "status": 404,   -- upstream error; basically same as good-path
    "body": {}       -- nested json body if it was json, string otherwise
  }
}
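Building that body could be as simple as this (sketch only; `results` is whatever the main/sub requests returned, each with a name, status and raw body):

local cjson = require "cjson.safe"

-- aggregate all results under their configured names; nest the body when it
-- is valid JSON, keep it as a plain string otherwise
local function to_json(results)
  local out = {}
  for _, res in ipairs(results) do
    local entry = { status = res.status }
    if res.body then
      entry.body = cjson.decode(res.body) or res.body
    end
    out[res.name] = entry
  end
  return cjson.encode(out)   -- returned to the client with status 200
end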
__Multipart response__:
status: always 200
body:
as mentioned above, but each part will receive extra headers based on 'header_prefix',
e.g. X-Kong-batch-name=some%20name and X-Kong-batch-status=200.
An internal error will result in X-Kong-batch-status=500 with an empty body.
The order of the parts should be identical to the order of the main/sub URLs.
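Assembling the multipart variant could then look like this (sketch only; the annotations are written as real part headers rather than the key=value shorthand above):

-- build one part per result, annotated with the configurable header_prefix headers
local function to_multipart(results, conf, boundary)
  local parts = {}
  for _, res in ipairs(results) do
    parts[#parts + 1] = table.concat({
      "--" .. boundary,
      conf.header_prefix .. "name: " .. ngx.escape_uri(res.name),
      conf.header_prefix .. "status: " .. res.status,
      "",                 -- blank line separating part headers and body
      res.body or "",     -- internal error: empty body
    }, "\r\n")
  end
  parts[#parts + 1] = "--" .. boundary .. "--"
  return table.concat(parts, "\r\n")
end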
Does the above make sense?
+1
+1
Is there any progress?
+1 waiting for this plugin
me too
+1 waiting
+1
Any status on this?
+1
Any update? :)
Hi everybody - This plugin can get very complex since everybody has very unique requirements. If you could share with us your use-case it would help us determine what's the best approach for this plugin. The more use-cases we can collect the better.
As of today we have been working on this functionality on a case by case basis with our enterprise customers, but maybe there are ways to implement a generic enough interface.
+1 waiting
@thefosk
We have a relatively (ahem) simple use-case - although it sounds quite different to the original one, and maybe it's already resolved elsewhere.
Essentially, we want a single call to fan-out into multiple API calls, and then to combine the returned document.
Something like this:
GET /my/aggregate HTTP/1.1
Host: localhost:8000
Accept: application/json
Fans out to multiple calls:

Host: api.foo.com
Content-Type: application/json
Content-Length: 34

{"foo1": "foo", "bar2": "bar bar"}
-----------------------------xxx
Host: api.bar.com
Content-Type: application/json

{"foo2": "foo foo", "bar2": "bar bar"}
Which gives the resulting document:
{"foo1": "foo", "foo2": "foo foo", "bar2": "bar bar"}
Preferably we'd like to be able to combine nested object trees, and possibly in some sort of defined order.
Ultimately we'd like to declare some of the fan-out calls as optional, so the entire call can succeed without them.
Would we have to write our own plugins to do that at this point?
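The combine step we have in mind is essentially a shallow merge of the decoded bodies, something like this (rough sketch; later calls win on duplicate keys, and failed optional calls are simply skipped):

local cjson = require "cjson.safe"

-- shallow-merge the top-level keys of each decoded upstream body
local function combine(bodies)          -- bodies: array of raw response bodies
  local merged = {}
  for _, raw in ipairs(bodies) do
    local doc = cjson.decode(raw)
    if type(doc) == "table" then
      for k, v in pairs(doc) do
        merged[k] = v
      end
    end
  end
  return cjson.encode(merged)
end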
Where can I find the 'request collapsing' plugin?
@shaowin16 it hasn't been written yet. So far it is a proposal/request for a new plugin.
+1 waiting
+1 waiting
+1
+1 waiting
+1 waiting
+1 Waiting...
Use case for us:
+1 for the above use case; the only change I want to request is to please add support for additional message formats such as JSON, XML, etc.
+1
Any idea if we have this plugin available now?
Hi guys, I've made something close to that:
https://bitbucket.org/leandro-carneiro/kong-aggregator/src/master/
@carnei-ro can you add a license to your repo?
@Tieske I moved from bitbucket to github, please, use the link: https://github.com/carnei-ro/kong-aggregator. It has MIT license.