Operating system or device - Godot version:
Debian 9 - Godot 2.1.4-beta custom build (e85be2f5df3a24dfad50e02c16abb4757abb8141)
Issue description:
So, making plain HTTP requests works fine, but now I've configured and "secured" my VPS with a Let's Encrypt SSL cert. Godot will connect and the request will perform just fine. But sometimes (more often than I would like) the requests fail, and it doesn't seem to be a server problem.
The only error I see thrown is this:
ERROR: _print_error: Some I/O error occurred. The OpenSSL error queue may contain more information on the error.
At: modules/openssl/stream_peer_openssl.cpp:415.
And I have no idea what more I can do to help fix this. Also, the code I use for making requests can be seen here.
This would not be so bad if my game was not using it to update some data constantly.
So I tried communicating with my server using the HTTPRequest node instead, and it works great. Maybe I'm doing something wrong with the HTTPClient class?
Seems like doing HTTPClient.close() after each request solves the problem. But I would like to keep the connection open.
And the only difference between this working and failing is SSL? Everything is exactly the same on the server aside from that?
@RandomShaper Ok, I tried using plain HTTP and removed the HTTPClient.close() call, without changing anything else. It starts to fail, but it doesn't throw any error: it just says the request ended with no response, and the request status code is 0. So the problem is not specific to SSL.
So this is a final note on what did not work:
- Keeping a single global HTTPClient variable to perform all my requests (independent of the server and port, even though the connection would be closed when connecting to a new server or port) and leaving the connection open after the first request. Over HTTP or HTTPS, the game would crash if two requests happened at the same time.
- Calling HTTPClient.close() after each request. This would partially solve the problem: requests wouldn't fail as often, but they still did from time to time, and the game would still crash if two requests happened at the same time.

And what did work (though not what I wanted; maybe it's just the way it works):
- Remove the global HTTPClient variable, create a new one every time a request is performed, and close its connection when the request finishes. Make sure never to have two requests made at the same time (in other words, don't start a request while one is active).
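For reference, here's a minimal sketch of that working pattern. It assumes the Godot 3.x HTTPClient API; the function name, port handling, and busy flag are illustrative, not from the original project:

```gdscript
# One fresh HTTPClient per request, closed when the request finishes,
# and a busy flag so two requests are never in flight at once.
var _busy = false

func fetch(host, path, use_ssl=false):
    if _busy:
        return PoolByteArray()  # refuse to overlap requests
    _busy = true

    var client = HTTPClient.new()
    var err = client.connect_to_host(host, 443 if use_ssl else 80, use_ssl)
    assert(err == OK)

    # Block until the connection is established (fine for a sketch;
    # a real game would poll from _process or a thread instead).
    while client.get_status() in [HTTPClient.STATUS_RESOLVING, HTTPClient.STATUS_CONNECTING]:
        client.poll()
        OS.delay_msec(50)

    client.request(HTTPClient.METHOD_GET, path, [])

    # Wait for the response headers.
    while client.get_status() == HTTPClient.STATUS_REQUESTING:
        client.poll()
        OS.delay_msec(50)

    # Read the whole body.
    var body = PoolByteArray()
    while client.get_status() == HTTPClient.STATUS_BODY:
        client.poll()
        body.append_array(client.read_response_body_chunk())

    client.close()  # never reuse the connection
    _busy = false
    return body
```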
I'm running into this issue with the HTTPRequest node. Where would one check the queue that it mentions in the error?
I've noticed similar problems making multiple requests. Two requests at once seem OK, but more cause sporadic connection errors.
Error events from SSL:
0:00:04:0289 - Some I/O error occurred. The OpenSSL error queue may contain more information on the error.
----------
Type:Error
Description:
Time: 0:00:04:0289
C Error: Some I/O error occurred. The OpenSSL error queue may contain more information on the error.
C Source: modules/openssl/stream_peer_openssl.cpp:429
C Function: _print_error
v3.0.2.stable.custom_build
Using var request = HTTPRequest.new(), where request.request() is usually called from the completed signal handler of a previous request.
Signals are connected dynamically with request.connect("request_completed", self, "_on_request_completed", [data, moredata, evenmoredata])
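A minimal sketch of that pattern, assuming Godot 3.x (the function names and the data/moredata/evenmoredata values are illustrative placeholders from the comment above, not real code):

```gdscript
func _start_request(url, data, moredata, evenmoredata):
    var request = HTTPRequest.new()
    add_child(request)  # HTTPRequest is a node and must be in the tree to work
    request.connect("request_completed", self, "_on_request_completed",
            [request, data, moredata, evenmoredata])
    request.request(url)

func _on_request_completed(result, response_code, headers, body,
        request, data, moredata, evenmoredata):
    # Bound values arrive after the signal's own four arguments.
    request.queue_free()  # free the node once its request is done
    print(response_code, " ", body.get_string_from_utf8())
```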
Is this also reproducible in the current master branch?
CC @godotengine/network
You can't make two requests at the same time using the same HTTPClient or HTTPRequest.
You must wait for the previous one to finish.
If you use keep-alive, you must also manually disconnect the client before you can connect to a different URL (while you can just make a new request as soon as the previous one has finished if the URL/port are the same)
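For example, a minimal way to serialize requests on a single HTTPRequest node in Godot 3.x, with placeholder URLs (yielding on the signal is one way to wait; polling a flag works too):

```gdscript
onready var http = $HTTPRequest  # assumes an HTTPRequest child node

func fetch_sequentially():
    # The second request only starts once the first has fully finished.
    http.request("https://example.com/first")
    var first = yield(http, "request_completed")  # [result, code, headers, body]
    # Same host and port, so with keep-alive the connection is reused.
    http.request("https://example.com/second")
    var second = yield(http, "request_completed")
```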
Can you provide more technical details about this limitation (or just point to the piece of code involved)? Thanks, just curious.
@tmathews sure, the HTTPClient is, well, a client.
The client has a TCP socket.
The TCP socket is connected to an HTTPServer (once you connect).
The HTTP standard requires that on each connection, there is a request/response sequence.
You cannot make a new request on the same connection unless the server has finished replying to the previous one* (see below).
Historically, as soon as the server replied, it would immediately close the connection unless the client specified the keep-alive header in the request (in which case, a server supporting that feature would reply with a content-length header specifying how big the reply was, then send the reply). Once the reply was over, i.e. the client had received that many bytes, the connection was ready for a new request and would not be automatically closed.
When the web became more interactive, with a single web page usually requiring a single client to make many requests to the same address for different resources (scripts/images/CSS), the HTTP/1.1 standard introduced support for chunked transfer, which allows using the keep-alive technique without knowing the content-length in advance. In all this, the rule of "one request at a time" still holds.
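To make that concrete, here is roughly what a keep-alive exchange looks like on the wire (a hand-written illustration with a placeholder host and body, not a real capture):

```
GET /data.json HTTP/1.1
Host: example.com
Connection: keep-alive

HTTP/1.1 200 OK
Content-Length: 16
Connection: keep-alive

{"score": 12345}
```

Once the client has read the 16 body bytes promised by content-length, the socket is idle again and the next request can be written on the same connection.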
* To be honest though, HTTP 1.1 does have a chapter on pipelining (i.e. sending multiple requests on the same connection without waiting for each response), but it comes with many limitations (it must not be used with non-idempotent methods, so e.g. not with POST) and it's largely unsupported in practice (browsers have it off by default, and many proxy servers do not support it).
HTTP 1.1: https://tools.ietf.org/html/rfc7230
HTTP 1.1 pipelining: https://tools.ietf.org/html/rfc7230#section-6.3.2
Mozilla docs on pipelining: https://developer.mozilla.org/en-US/docs/Web/HTTP/Connection_management_in_HTTP_1.x$revision/1330814#HTTP_pipelining
Great explanation. Thanks.