The template wasn't changed at all; it was just redeployed and got this error.
There is no information in the logs, so I can't really understand what's wrong.
Same as https://github.com/aws/aws-lambda-dotnet/issues/761
Serverless: Typescript compiled.
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Installing dependencies for custom CloudFormation resources...
Serverless: Installing dependencies for custom CloudFormation resources...
Serverless: [serverless-plugin-split-stacks]: Summary: 17 resources migrated in to 2 nested stacks
Serverless: [serverless-plugin-split-stacks]: Resources per stack:
Serverless: [serverless-plugin-split-stacks]: - (root): 195
Serverless: [serverless-plugin-split-stacks]: - APINestedStack: 10
Serverless: [serverless-plugin-split-stacks]: - PermissionsNestedStack: 7
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service sls-blind-chat.zip file to S3 (23.86 MB)...
Serverless: Uploading custom CloudFormation resources...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
CloudFormation - UPDATE_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_ROLLBACK_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
CloudFormation - UPDATE_ROLLBACK_COMPLETE - AWS::CloudFormation::Stack - sls-blind-chat-dev-stack
Serverless: Operation failed!
Serverless: View the full error output: https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stack/detail?stackId=arn%3Aaws%3Acloudformation%3Aus-east-1%3A956109742295%3Astack%2Fsls-blind-chat-dev-stack%2Fcac3ba10-29e0-11eb-b791-0a73682547f5
Serverless Error ---------------------------------------
An error occurred: sls-blind-chat-dev-stack - Received malformed response from transform AWS::Serverless-2016-10-31.
This is a 🐛 bug report.
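Since the Serverless CLI output above only shows the rollback and the generic transform error, the underlying failure reason has to be pulled from the stack's CloudFormation events, either through the console link above or with `aws cloudformation describe-stack-events`. Below is a minimal sketch of the same lookup with the AWS SDK for JavaScript v3; the stack name and region are taken from the log above, and the script itself is only an illustration, not part of the original report.

```ts
// Sketch: list the failed CloudFormation events for the stack from the log above.
import {
  CloudFormationClient,
  DescribeStackEventsCommand,
} from "@aws-sdk/client-cloudformation";

const client = new CloudFormationClient({ region: "us-east-1" });

async function printFailureReasons(stackName: string): Promise<void> {
  // DescribeStackEvents returns the most recent events first.
  const { StackEvents = [] } = await client.send(
    new DescribeStackEventsCommand({ StackName: stackName })
  );

  for (const event of StackEvents) {
    // Failed resources carry the reason string that the CLI summary omits.
    if (event.ResourceStatus?.endsWith("FAILED") && event.ResourceStatusReason) {
      console.log(
        `${event.Timestamp?.toISOString()} ${event.LogicalResourceId} ` +
          `${event.ResourceStatus}: ${event.ResourceStatusReason}`
      );
    }
  }
}

printFailureReasons("sls-blind-chat-dev-stack").catch(console.error);
```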
Same issue here: us-east-1
I'm having the same issue on us-east-1
Same issue here. us-east-1
Hi,
Good morning.
Looks like there is a service outage in the us-east-1 region. Service teams are working on it, and the issue should be resolved soon.
Thanks,
Ashish
I am also seeing the same issue in us-east-1.
Same issue here.
+1
Same issue here.
Same issue here, deploying some lambdas
Same issue here also, deploying Go lambdas
same issue - python lambdas
➕ 1
Apparently the us-east-1 region is not having a good time right now.
[08:12 AM PST] Kinesis Data Streams customers are still experiencing increased API errors. This is also impacting other services, including ACM, Amplify Console, API Gateway, AppStream2, AppSync, Athena, Cloudformation, Cloudtrail, CloudWatch, Cognito, Connect, DynamoDB, EventBridge, IoT Services, Lambda, LEX, Managed Blockchain, Resource Groups, SageMaker, Support Console, and Workspaces. We are continuing to work on identifying root cause.
As @slitsevych pointed out, that region is having a lot of outages. You can find them here, although it seems the outages are also affecting the status dashboard itself.
same here, ruby and node.js lambdas
+1
Hi All,
There is a comment on the related old issue https://github.com/aws/aws-lambda-dotnet/issues/761 saying the problem is fixed. Please verify and confirm whether we can close this issue.
Thanks,
Ashish
@ashishdhingra I'm still seeing the issue as of 11:18 PM EST in us-east-1.
Working for me now; I was able to deploy a SAM stack in the us-east-1 region.
As per the service dashboard at https://status.aws.amazon.com/, everything appears to be running normally.
Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.