I am currently looking into the boto3 SSM client, calling describe_instance_patches for a given instance, and I run into the following error:
OverflowError: date value out of range
I have used boto3==1.9.42 (the same as the default in Lambda) and also the latest version, boto3==1.9.203, which has the same issue. I prefer the 1.9.203 release as it includes pagination for the response.
I was able to resolve this on my own machine by changing my TZ environment variable to UTC:
export TZ="/usr/share/zoneinfo/utc"
I believe this issue happens because Amazon works in UTC, so the timestamps in the response are UTC and are being converted to my local timezone, which overflows.
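For what it's worth, here is a minimal, self-contained sketch of the failure mode I suspect (my own illustration, not botocore's actual code path): a timestamp at the very bottom of Python's supported datetime range cannot be shifted into a zone behind UTC, and the arithmetic reports exactly this error. (On Windows, datetime.fromtimestamp additionally rejects negative epoch values outright, since the C runtime's localtime cannot handle them.)

```python
from datetime import datetime, timedelta, timezone

# Python's earliest representable datetime, as an aware UTC value.
earliest = datetime.min.replace(tzinfo=timezone.utc)  # 0001-01-01 00:00:00+00:00

# Shifting it into any zone behind UTC pushes it below datetime.min,
# and datetime arithmetic raises the error quoted in this issue.
try:
    earliest.astimezone(timezone(timedelta(hours=-1)))
except OverflowError as exc:
    print(exc)  # date value out of range
```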
Any assistance on this would be ace.
If you want to test this yourself (I am assuming you already have credentials configured for boto3):
import boto3
from pprint import pprint


def get_instance_patches(client, instance_id):
    # Yield every patch for the given instance, across all pages.
    paginator = client.get_paginator("describe_instance_patches")
    for page in paginator.paginate(InstanceId=instance_id):
        for patch in page.get("Patches", []):
            yield patch


def list_instances(client):
    # Yield every instance registered with SSM.
    paginator = client.get_paginator("describe_instance_information")
    for page in paginator.paginate():
        for instance in page["InstanceInformationList"]:
            yield instance


if __name__ == "__main__":
    client = boto3.client("ssm")
    for instance in list_instances(client):
        instance_id = instance["InstanceId"]
        for patch in get_instance_patches(client, instance_id):
            pprint(patch)
Thank you for your post. I am not able to reproduce the issue. Can you please provide the TZ value you had before setting it to UTC, along with a timestamp, so that I can try to reproduce the issue?
The value of my TZ with an output from date (clock on some systems):
echo $TZ && date
# Output
/usr/share/zoneinfo/GB
Wed Aug 14 09:52:26 BST 2019
a timestamp:
from datetime import datetime as dt
dt.now()
# Output
datetime.datetime(2019, 8, 14, 9, 53, 50, 936365)
To clarify, I assume you have machines with SSM set up on them which have had a patch baseline scan run?
Just checking that the labels on this issue don't mean it gets ignored, since the no-response bot has removed the closing-soon-if-no-response label but response-needed is still stuck here. I am still facing this issue even with the latest release.
I'm facing this issue too. The AWS CLI command works fine, but the boto equivalent fails with timestamp parsing errors.
stacktrace.txt
I'm working on a Windows AWS EC2 instance, but unfortunately changing the timezone to UTC doesn't fix the problem for me.
My default timezone is AEST:
$> tzutil /g
AUS Eastern Standard Time
but when I change it to UTC I get exactly the same result. I'm using the following command to change the timezone, which takes effect immediately on the system:
$> tzutil /s UTC
If I reboot the system, the AD policies simply change it back to AEST.
Running the boto function through a lambda function works, so I'd hazard a guess it's still something to do with my machine's time settings. Regardless, the boto function should be able to handle different time zones.
On my same windows machine I can use the AWS CLI without any issues:
$> aws ssm describe-instance-patches --instance-id <instance_id>
{
    "Patches": [
        {
            "Title": "2018-05 Update for Windows Server 2016 for x64-based Systems (KB4132216)",
            "KBId": "KB4132216",
            "Classification": "CriticalUpdates",
            "Severity": "Important",
            "State": "Installed",
            "InstalledTime": 1528812000.0
        },
        {
            "Title": "2018-11 Update for Windows 10 Version 1607 for x64-based Systems (KB4465659)",
            "KBId": "KB4465659",
            "Classification": "SecurityUpdates",
            "Severity": "Critical",
            "State": "Installed",
            "InstalledTime": 1542546000.0
        },
and so on...
so there definitely are some patches there for the function to find...
@jack1902 - Sorry for the late reply. I am not able to reproduce this issue on a Mac with the latest version of boto3 and the same timezone as yours (GB). As mentioned by @visitjonathan, are you also getting the error on a Windows machine? I have not tried to reproduce this issue on a Windows EC2 instance; today I will run the code on one and see if I can reproduce the issue.
If you are getting the error on a Mac, it would be really helpful if you could provide the full debug log. You can enable logging by adding boto3.set_stream_logger('') to your code.
@visitjonathan - I tried the code on Windows but I am still not able to reproduce the issue. If you can provide the full debug log, we can see the exact timestamp you are getting in the response, so that I can try to reproduce the issue. You can enable logging by adding boto3.set_stream_logger('') to your code.
So I have just re-run the code snippet from above with boto3==1.9.243 against an amzn2 Linux machine with missing patches to ensure the code runs. It's working fine now, so something must have changed in a release since I opened this. I've run it for all the timezones I can set on my PC and haven't encountered the issue. I'm going to close this issue :)
Would be nice if you allowed time for me to get in to work on Monday morning. Just saying...
My bad on being too hasty to close this ticket. I am re-running my code against a few operating systems, and it appears that Linux isn't throwing an error but Windows is.
Here is the stack trace from adding the extra verbose logging, with sensitive data removed:
errors.txt
It seems the issue is related to the negative time values on those results that do not have timestamps in the console, e.g.:
{
    "Title": "",
    "KBId": "KB2267602",
    "Classification": "",
    "Severity": "",
    "State": "InstalledOther",
    "InstalledTime": -62135596800.0
}
When running client.describe_instance_patches() on Windows, where it returns this result, it fails when parsing the InstalledTime value.
I've attached the error I get. The function works when I put in a filter returning just the 'Installed' packages, but I believe that is because those patches all have positive InstalledTime values.
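As an aside (my own observation, not something confirmed in this thread): -62135596800 is not a random value. It is 0001-01-01T00:00:00 expressed in Unix epoch seconds, i.e. .NET's DateTime.MinValue, which is plausibly what a Windows-backed service serializes for an unset install date. A quick stdlib check:

```python
from datetime import datetime

# Seconds between .NET's DateTime.MinValue (0001-01-01T00:00:00)
# and the Unix epoch (1970-01-01T00:00:00), negated.
sentinel = -(datetime(1970, 1, 1) - datetime(1, 1, 1)).total_seconds()
print(sentinel)  # -62135596800.0
```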
os_error_stacktrace.txt
There also seems to be an unrelated issue with this function (I'm not sure if there's an existing issue, but there needs to be one) when adding multiple values in a filter. I tried adding a filter for patches where 'State' is 'Installed' or 'InstalledOther' and it gave me a different error.
FilterException.txt
This comes up when I alter your 'get_instance_patches' function to use the following filter:
def get_instance_patches(client, instance_id):
    paginator = client.get_paginator("describe_instance_patches")
    for page in paginator.paginate(InstanceId=instance_id, Filters=[
        {
            'Key': 'State',
            'Values': [
                'Installed',
                'InstalledOther'
            ]
        }]):
        for patch in page.get("Patches", []):
            yield patch
According to the doco, the 'Values' list should be able to have up to 50 values.
@jack1902 @visitjonathan - I can confirm that this issue is related to the negative timestamp. But the service should not return a negative timestamp value. I have already contacted the service team about the issue, and they have confirmed that it's a bug on their side.
@visitjonathan - As for the Filter Values, you can't use more than one value at a time. I can't see anywhere in the documentation where it is mentioned that Values can take more than one value, either.
Hope it helps and please let me know if you have any questions.
This is extremely confusing: the Values for the describe_instance_patches filter is part of a 'PatchOrchestratorFilter' object rather than a standard 'Filter' object (which can take multiple values), yet it still takes a list of values:
The value for the filter.
Type: Array of strings
Length Constraints: Minimum length of 1. Maximum length of 256.
Required: No
The fact that it is an 'array of strings' does in fact imply multiple values, not a single value. The boto call is supposed to be a wrapper around the aws api, and the documentation I have pasted above is taken from the aws api documentation.
Even the CLI has the following in its help:
"--filters" (list)
    Each entry in the array is a structure containing:
        Key (string, between 1 and 128 characters)
        Values (array of strings, each string between 1 and 256 characters)
Shorthand Syntax:
    Key=string,Values=string,string ...
implying multiple values.
@visitjonathan @swetashre
def get_instance_patches(client, instance_id):
    paginator = client.get_paginator("describe_instance_patches")
    for page in paginator.paginate(InstanceId=instance_id, Filters=[
        {
            'Key': 'State',
            'Values': [
                'Installed',
            ]
        },
        {
            'Key': 'State',
            'Values': [
                'InstalledOther'
            ]
        },
    ]):
        for patch in page.get("Patches", []):
            yield patch
Try this, as it should work for having multiple values for the filter. Boto3 has its quirks, that's for sure.
@swetashre if it's a bug with the service, how is the AWS CLI handling this? I am assuming it's been fixed in the AWS CLI, which uses botocore underneath. Do we have any time frame for the bug in SSM to be fixed? (This now seems to be something boto3 users can only try/except around, ignoring bad timestamps until SSM fixes it on their end.)
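Until a service-side fix lands, one possible stopgap (a sketch of my own, not an official workaround; parse_installed_time is a hypothetical helper) is to work with the raw epoch floats the way the CLI prints them and skip the negative sentinel before converting. Note this only helps if you obtain the raw value (e.g. from the CLI's JSON output) rather than letting boto3 parse the response, since boto3 fails during parsing:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def parse_installed_time(patch):
    # Hypothetical guard: convert InstalledTime (epoch seconds, as the
    # CLI prints it) to an aware UTC datetime, treating a missing or
    # negative value as "no recorded install date".
    seconds = patch.get("InstalledTime")
    if seconds is None or seconds < 0:
        return None
    return EPOCH + timedelta(seconds=seconds)

print(parse_installed_time({"KBId": "KB4132216", "InstalledTime": 1528812000.0}))
# 2018-06-12 14:00:00+00:00
print(parse_installed_time({"KBId": "KB2267602", "InstalledTime": -62135596800.0}))
# None
```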
@jack1902 - This issue occurs only when the service returns a negative timestamp. I don't have an exact time frame, but there is already a corresponding CR for it. I will post here when I get any update from the service side.
Can you please post the logs for the CLI? I am curious to know what response you are getting from the service. You can enable logging by adding --debug to your command.
@visitjonathan - The 'PatchOrchestratorFilter' object is a generic filter used across different APIs, which is why it is so confusing that it can't take more than one value for a key. We will see if we can make the documentation clearer.
You can try the solution suggested by @jack1902 instead of making separate API calls.
@jack1902 - The CR has been merged. Can you please check if you are still facing the issue ?
@swetashre So I am now using the latest boto3 available and calling the same script I had before. The error still persists:
boto3==1.10.2
botocore==1.13.2
docutils==0.15.2
jmespath==0.9.4
python-dateutil==2.8.0
s3transfer==0.2.1
six==1.12.0
urllib3==1.25.6
@jack1902 - Are you still getting the negative timestamp in the response body? Can you please check and let me know?
So here is the response: out being the output of the script, and errors.txt containing the errors.
@swetashre thanks for the patience on this. I didn't have time over the weekend to get the info uploaded.
@jack1902 - Thank you for providing the full debug log. I can see that the service is still returning a negative timestamp. I will follow up with the service team on this issue.
The changes have been released, and I believe this issue has been resolved with them. I am closing the issue. Please reopen if you have any more concerns.
Hi, I don't believe it is fixed, I'm currently using latest versions of botocore and boto3 and facing the exact same issue. @jack1902 could you confirm if you're still seeing the issue on your side as well ?
@GMael1 I have run the above code snippet against a few different accounts and I am not getting any errors.
As stated by @swetashre, this is an issue with the service in AWS and what it returns to boto3 and botocore. Are you able to provide any additional details, such as OS version, name, and the package you believe to be throwing the issue? Although these seem more like things to raise in an AWS Support case.
I have also encountered the same issue as MaelG. I am having the issue when running this against a Windows 2016 instance, and I get the same OverflowError.