Aiohttp: Cannot easily access max_field_size in client response

Created on 7 Oct 2017 · 5 comments · Source: aio-libs/aiohttp

Long story short

The HTTP client doesn't really support overriding the default maximum header value length of 8190 bytes, even though the parser parameter is there: https://github.com/aio-libs/aiohttp/blob/e7c9390111932dd6dbb642170d7a0da1876271ec/aiohttp/http_parser.py#L59

This results in the error message:

aiohttp.client_exceptions.ClientResponseError: 400, message='Got more than 8190 bytes when reading Header value is too long.'

Stack trace from real app:

<snip>
File "/usr/local/lib/python3.6/site-packages/pyportify/google.py", line 113, in _http_get
params=merged_params,
File "/usr/local/lib/python3.6/site-packages/aiohttp/helpers.py", line 97, in iter
ret = yield from self._coro
File "/usr/local/lib/python3.6/site-packages/aiohttp/client.py", line 241, in _request
yield from resp.start(conn, read_until_eof)
File "/usr/local/lib/python3.6/site-packages/aiohttp/client_reqrep.py", line 564, in start
message=exc.message, headers=exc.headers) from exc
aiohttp.client_exceptions.ClientResponseError: 400, message='Got more than 8190 bytes when reading Header value is too long.'

Expected behaviour

I can either set the parser on the session or the get request, or set max_field_size.

Actual behaviour

I can't do either.

Steps to reproduce

I believe this will do it:

import aiohttp
import asyncio

@asyncio.coroutine
def main():
    with aiohttp.ClientSession() as session:
        resp = yield from session.get('http://test.xr6.me')
        print(len(resp.headers['X-TEST-HEADER']))
        resp.close()

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(
        asyncio.gather(main())
    )
    loop.close()
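The URL above depends on an external host staying up. A self-contained sketch below reproduces the failure against a local throwaway server instead; the server, port, and header value are all invented for illustration, and the per-request max_field_size override assumes a newer aiohttp release (3.9+, as far as I know), so on older versions that call would fail:

```python
import asyncio

import aiohttp


async def serve(reader, writer):
    # Hand-rolled HTTP/1.1 response carrying a ~20 KB header value,
    # well past the client's default max_field_size of 8190 bytes.
    await reader.readline()  # request line; the rest of the request is ignored
    writer.write(
        (
            "HTTP/1.1 200 OK\r\n"
            "X-TEST-HEADER: " + "x" * 20000 + "\r\n"
            "Content-Length: 0\r\n"
            "Connection: close\r\n"
            "\r\n"
        ).encode()
    )
    await writer.drain()
    writer.close()


async def main():
    results = {}
    server = await asyncio.start_server(serve, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    url = "http://127.0.0.1:%d/" % port

    async with aiohttp.ClientSession() as session:
        # With default limits, the oversized header makes the response unparsable.
        try:
            await session.get(url)
        except aiohttp.ClientResponseError as exc:
            results["default_status"] = exc.status

        # Newer aiohttp releases (assumption: 3.9+) accept a per-request
        # override of the parser limit.
        async with session.get(url, max_field_size=32768) as resp:
            results["header_len"] = len(resp.headers["X-TEST-HEADER"])

    server.close()
    await server.wait_closed()
    return results


results = asyncio.run(main())
print(results)
```

With the override in place, the 20000-byte header comes through intact instead of aborting the request.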

Your environment

Ubuntu jessie (bash for windows, reproduces on others though)
python3.4 (also reproduces on others)


All 5 comments

Unfortunately, the current client design has no place for the requested parameter.
Once #2019 is done, we could return to this issue.

Isn't our goal to be more generous than a browser? I don't know if browsers discard these long lines or not, but they certainly don't error out. In which case 8190 is obviously too small. If we aren't going to make this limit settable, raising it a lot is a great solution.

AFAIK browsers silently discard things that are longer than the browser expects.
Silently ignoring is not an option for a library, IMHO, but it should be configurable.

I've run into this this evening, deploying the MS Teams bot framework (aiohttp) as one of our microservices.
As we use OpenID/OAuth2, the user's attributes in their x-userinfo header break the 8 KB limit.

We're fairly reliant on these signed headers, as they're the single source of truth we trust in the payload as not having been tampered with, to identify the user and the scopes and attributes they have.

If there's support for this change, one of our guys can likely take a run at a PR?

I can say these values tend to be configurable, in our experience.
Browsers: Chrome/ium supports 250 KB of headers; other mainstream browsers support the same or more.
HTTP daemons are flexible on this limit via config options:

  • nginx - client_header_buffer_size 64k;
  • npm - "start": "react-scripts --max-http-header-size=60000 start"
  • gunicorn - limit_request_field_size: 65536
  • IIS: MaxFieldLength
  • Apache: LimitRequestFieldSize 65536
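For the nginx entry above, as far as I know the directive that actually governs oversized individual header fields is large_client_header_buffers (client_header_buffer_size only sizes the initial read buffer). An illustrative fragment, with example values rather than recommendations:

```nginx
# Illustrative nginx settings (values are examples only).
http {
    client_header_buffer_size 8k;       # initial buffer for the request header
    large_client_header_buffers 4 64k;  # fallback buffers; a single header
                                        # field must fit in one 64k buffer
}
```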

I would go ahead and do the PR; I've seen similar things accepted before.
