The metadata endpoint seems to have changed from /latest/meta-data/iam/security-credentials/ to /latest/api/token, and the CLI can no longer get an EC2 instance role.
Below are snippets from an aws s3 ls --debug command running inside Docker (Docker version 19.03.4, build 9013bf583a) on an "Ubuntu 16.04.6 LTS" EC2 instance. The first snippet shows the credential lookup succeeding against /latest/meta-data/iam/security-credentials/; the second shows the request to /latest/api/token timing out:
2019-11-19 21:25:24,502 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): 169.254.169.254:80
2019-11-19 21:25:24,503 - MainThread - urllib3.connectionpool - DEBUG - http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/ HTTP/1.1" 200 23
2019-11-19 21:25:24,504 - MainThread - urllib3.connectionpool - DEBUG - Resetting dropped connection: 169.254.169.254
2019-11-19 21:25:24,504 - MainThread - urllib3.connectionpool - DEBUG - http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/ec2-example-role HTTP/1.1" 200 1322
2019-11-19 21:25:24,505 - MainThread - botocore.credentials - DEBUG - Found credentials from IAM Role: ec2-example-role

2019-11-19 21:11:48,998 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): 169.254.169.254:80
2019-11-19 21:11:50,000 - MainThread - botocore.utils - DEBUG - Caught retryable HTTP exception while making metadata service request to http://169.254.169.254/latest/api/token: Read timeout on endpoint URL: "http://169.254.169.254/latest/api/token "
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 421, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 416, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/local/lib/python3.7/http/client.py", line 1344, in getresponse
    response.begin()
  File "/usr/local/lib/python3.7/http/client.py", line 306, in begin
    version, status, reason = self._read_status()
  File "/usr/local/lib/python3.7/http/client.py", line 267, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/local/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 263, in send
    chunked=self._chunked(request.headers),
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 376, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.7/site-packages/urllib3/packages/six.py", line 735, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 423, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 331, in _raise_timeout
    self, url, "Read timed out. (read timeout=%s)" % timeout_value
urllib3.exceptions.ReadTimeoutError: AWSHTTPConnectionPool(host='169.254.169.254', port=80): Read timed out. (read timeout=1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/botocore/utils.py", line 295, in _fetch_metadata_token
    response = self._session.send(request.prepare())
  File "/usr/local/lib/python3.7/site-packages/botocore/httpsession.py", line 289, in send
    raise ReadTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "http://169.254.169.254/latest/api/token "
2019-11-19 21:11:50,002 - MainThread - botocore.utils - DEBUG - Max number of attempts exceeded (1) when attempting to retrieve data from metadata service.
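For anyone trying to narrow this down, the two metadata lookups from the debug output above can be reproduced by hand from inside the container. A minimal sketch using curl (assuming curl is available in the image; the role name ec2-example-role and the 1-second timeout are taken from the logs above, and the PUT with the X-aws-ec2-metadata-token-ttl-seconds header is the IMDSv2 token request the new code path makes):

# IMDSv1-style lookup: list the role name, then fetch that role's credentials
curl -s --max-time 1 http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s --max-time 1 http://169.254.169.254/latest/meta-data/iam/security-credentials/ec2-example-role

# IMDSv2-style lookup: request a session token first (the call that times out above)
curl -s --max-time 1 -X PUT \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" \
  http://169.254.169.254/latest/api/token

If the plain GETs return but the token PUT hangs, the problem is specific to the new /latest/api/token call rather than to metadata reachability in general.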
Ditto; this nearly broke our production environment because we had not locked the aws-cli version to a specific minor revision. Major regression.
Good diagnostics @scottschreckengaust
Same here. Our CLI version was updated as part of our pre-production deployment and we caught it; we're seeing the same thing. The AWS CLI installed in a Docker image is not able to get credentials on EC2 within ECS.
Luckily we caught this in our pipelines (the ones that aren't pinned to a specific version), but this is a huge issue.
Same. This needs to be reverted.
Thanks for bringing this to our attention. We'll be reverting this ASAP and cutting an additional release today.
AWS CLI v1.16.286 has been released; it reverts to the previous behavior, fixing the regression. We are still working on a proper fix to support the new IMDS behavior.
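For anyone who can't pick up new releases right away, pinning the CLI keeps you on known behavior until the proper IMDSv2 fix lands. A rough sketch, assuming a pip-based install (adjust to however the CLI gets into your image):

# pin to the reverted release instead of tracking latest
pip install --upgrade "awscli==1.16.286"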