Is there a way to use the Swagger Specification for WebSockets? It seems to be quite bound to HTTP.
It's meant to be used for REST APIs so...
There are possible plans to expand the spec to provide support for it.
For now I'm closing the issue but marking it as a proposal for the next version.
+1
+1: OData has a mechanism to "subscribe" to changes of requested resources, see http://docs.oasis-open.org/odata/odata/v4.0/errata02/os/complete/part1-protocol/odata-v4.0-errata02-os-part1-protocol-complete.html#_Toc406398235.
The ideal notification channel is WebSockets, and we'd like to be able to describe the service-specific shape of that callback/notification mechanism.
As a bare-bones requirement, a WebSocket has an HTTP endpoint (e.g., /websocket) and supports a few types of JSON messages.
Sample request frame:

```json
{
  "cmd": "createUser",
  "cmdId": "12",
  ...
}
```

Sample response frame:

```json
{
  "cmdId": "12",
  ...,
  "status": 200
}
```
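Server-side, the correlation between the two frames above could be sketched roughly like this (a minimal sketch; the handler table and payload shapes are illustrative, not part of any spec):

```python
import json

def handle_frame(raw, handlers):
    """Dispatch a request frame to a handler and build the response frame.

    `handlers` maps a "cmd" name to a function returning (status, payload).
    The "cmdId" is echoed back so the client can correlate the response
    with the request it sent over the socket.
    """
    req = json.loads(raw)
    handler = handlers.get(req.get("cmd"))
    if handler is None:
        # Unknown command: echo the cmdId with an HTTP-style status code.
        return json.dumps({"cmdId": req.get("cmdId"), "status": 404})
    status, payload = handler(req)
    resp = {"cmdId": req["cmdId"], "status": status}
    resp.update(payload)
    return json.dumps(resp)
```

For example, registering `{"createUser": lambda req: (200, {"userId": "u-1"})}` would turn the sample request frame into the sample response frame with `"status": 200`.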
Parent: #586
+1
While there may be some higher-level semantics that people might want to specify on top of WebSockets, e.g. subscribing to notifications, I think use cases like that are not well understood from a standardisation perspective, and there would be little value in trying to standardise such high-level concepts now, since best practices and idioms are likely to change significantly over time.
I agree with @travishaagen, though I would remove the request/response requirement, as it too is a high-level semantic that has no real relation to WebSockets itself.
At the most basic level, I would describe a WebSocket in the same way that a regular HTTP endpoint is described, with a request/response message schema, but unlike a regular HTTP endpoint, on a WebSocket there will be zero to many instances of the request/response messages sent in each direction. From a spec perspective, this could be indicated by simply adding a flag to say that the request and/or response messages are streamed. Further information could be added to indicate which streaming protocol should be used (WebSockets, SSE, newline-separated JSON à la Twitter's streaming endpoints). Higher-level semantics such as request/response within the stream, publish/subscribe, etc. can be built on top of that in future, but I think starting with just being able to specify that a particular endpoint is a stream is a good start.
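The newline-separated JSON style mentioned above is easy to sketch: each message is one JSON document terminated by a newline, and the reader reassembles lines across arbitrary network chunk boundaries. The helper names here are illustrative:

```python
import json
from typing import Iterable, Iterator

def encode_ndjson(messages: Iterable[dict]) -> Iterator[bytes]:
    # Each message becomes one newline-terminated JSON line on the wire.
    for msg in messages:
        yield (json.dumps(msg) + "\n").encode("utf-8")

def decode_ndjson(chunks: Iterable[bytes]) -> Iterator[dict]:
    # Buffer bytes until a full line arrives, however the transport
    # happened to split the stream into chunks, then parse each line.
    buf = b""
    for chunk in chunks:
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line:
                yield json.loads(line)
```

The same framing works over a chunked HTTP/1.1 response, an SSE `data:` field, or WebSocket text frames, which is why the streamed flag and the transport choice can be specified independently.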
+1
+1
FWIW, HTTP/2 is a multiplexing + streaming protocol, like WebSockets. Might make sense to tackle the use cases for either protocol together.
@alechenninger From an HTTP API perspective, HTTP/2 really makes no changes over HTTP/1.1. It is possible that we could describe pushed requests, but if an intermediary cache handles the pushed requests as intended, then there is no need to change the API description.
You are not the first person I have run into that has said HTTP/2 supports streaming, but I have not yet seen anything that changes HTTP semantics. You can do streaming in HTTP/2 in exactly the same way that you can do it in HTTP/1.1. Due to other changes it will likely be more efficient, but I don't see how it changes anything from an API description perspective.
There is the notion of flow control that should finally do away with chunked encoding and make life much easier for connectors to manage memory when dealing with large payloads. Allowing streams to pause will allow having multiple long-polling requests going over a single connection.
All of these changes are awesomely fabulous, but I see nothing that changes how HTTP APIs should be described. It possibly will make WebSockets more efficient, but HTTP/2 isn't a full-duplex protocol. It is the same old client-server protocol that we know and love.
If I am wrong about this, I would love somebody to point to me some specifications that shows me why I am wrong.
@darrelmiller As far as I know, HTTP/2 _is_ a full duplex protocol. Disclaimer: I'm no expert.
For example see the gRPC wire format, which takes advantage of HTTP2 streaming semantics: http://www.grpc.io/docs/guides/wire.html
Also see the spec: https://http2.github.io/http2-spec/#StreamsLayer
@alechenninger Maybe I'm misusing the terms, I need to investigate more.
"A "stream" is an independent, bidirectional sequence of frames exchanged between the client and server within an HTTP/2 connection"
The key here is that it is "within an HTTP/2 connection". Even in an HTTP/1.1 connection, once a client has made the request, the server can return bytes down the wire whenever it chooses. That's how long polling works. What I don't think can ever happen is for the server to initiate an HTTP/2 connection with a client.
What I may have misunderstood is that it may be possible for a server to send HTTP/2 frames that don't correspond to a request once a connection has been established. I have not seen this discussed in anything I have read so far, but that doesn't mean it is not possible.
From here https://http2.github.io/http2-spec/#rfc.section.8.1
A server can send a complete response prior to the client sending an entire request if the response does not depend on any portion of the request that has not been sent and received.
This is a very interesting difference because this means that a long polling interaction can effectively do full-duplex communication of bytes, once the client has sent the initial request.
> This is a very interesting difference because this means that a long polling interaction can effectively do full-duplex communication of bytes, once the client has sent the initial request.
That's not a difference; the HTTP/2 spec here is simply being explicit about something that HTTP/1.1 has always supported. There are real-world use cases out in the wild that I've seen that exploit this feature of HTTP/1.1, as well as a number of client and server implementations that support it, including one that I used to be the lead developer of, Play Framework. In fact, HTTP/1.1 even has a way of semantically expecting a complete response before sending a complete request: it's part of the Expect: 100-continue spec.
So I agree with your initial comment that there is no semantic difference between HTTP/1.1 and HTTP/2, and hence nothing to change from an API perspective. The streams feature of HTTP/2 only allows multiplexing of multiple concurrent exchanges over a single connection. Semantically, nothing changes, since in HTTP/1.1 you achieved exactly the same thing by making multiple connections; it changes nothing in the full/half-duplex nature of the exchanges and has no impact on high-level API semantics.
The "doing away with chunked encoding" is a bit misleading, by the way: it's not so much that chunked encoding is done away with as that now effectively everything is chunked into frames whose headers specify the length of each frame (which is precisely what chunked encoding is).
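For reference, HTTP/1.1 chunked transfer coding is exactly that kind of length-prefixed framing. A minimal encoder sketch (per RFC 7230: a hex length line, CRLF, the data, CRLF, terminated by a zero-length chunk; trailers omitted for brevity):

```python
def chunk_encode(payload: bytes, chunk_size: int = 8) -> bytes:
    """Encode a payload as an HTTP/1.1 chunked-transfer-coded body."""
    out = b""
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        # Hex length of this chunk, then the chunk data, each CRLF-terminated.
        out += format(len(chunk), "x").encode("ascii") + b"\r\n" + chunk + b"\r\n"
    # A zero-length chunk marks the end of the body.
    return out + b"0\r\n\r\n"
```

An HTTP/2 DATA frame carries the same information (a length in the frame header, then the bytes), just at the framing layer rather than inside the message body.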
@jroper are you saying HTTP/1.1 supported multiple bidirectional messages within a single HTTP request?
No, I'm saying it supports bidirectional streams in a single HTTP request. There's still only one message each way (from an HTTP semantics perspective), but that message can be a stream, and both ends can be streaming at the same time. The message can, as is the case with SSE, be an encoding of many smaller messages which are streamed over time, but the description of that is beyond the scope of HTTP.
Note it's one thing to say "HTTP supports this", it's another thing for it actually to be supported by clients and servers. Most clients (eg browsers) can't do it, and most traditional servers can't either. But there are some that can.
+1
@sten Is this still an issue with the latest OAI v3 spec?
+1
Take a look at https://www.asyncapi.com/
It's basically the Swagger of message-based systems.