After upgrading my project from 0.5.6 to 1.0.0 I attempted to deploy and received this error:
```
Template format error: Number of resources, 202, is greater than maximum allowed, 200
```
I expect that this should have worked since there is no limitation on AWS that should forbid this, and it was working on 0.5.6. Serverless should break up the template resources automatically to make this possible.
With the current method you are forced to break up the service, which should not be necessary. This causes problems with general project structure, custom domain mapping, and shared dependencies.
The example config below will generate the error by creating 65 endpoints.
```yaml
service: test
provider:
  name: aws
  runtime: nodejs4.3
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: e00
          method: get
          cors: true
      - http:
          path: e01
          method: get
          cors: true
      - http:
          path: e02
          method: get
          cors: true
      - http:
          path: e03
          method: get
          cors: true
      - http:
          path: e04
          method: get
          cors: true
      - http:
          path: e05
          method: get
          cors: true
      - http:
          path: e06
          method: get
          cors: true
      - http:
          path: e07
          method: get
          cors: true
      - http:
          path: e08
          method: get
          cors: true
      - http:
          path: e09
          method: get
          cors: true
      - http:
          path: e10
          method: get
          cors: true
      - http:
          path: e11
          method: get
          cors: true
      - http:
          path: e12
          method: get
          cors: true
      - http:
          path: e13
          method: get
          cors: true
      - http:
          path: e14
          method: get
          cors: true
      - http:
          path: e15
          method: get
          cors: true
      - http:
          path: e16
          method: get
          cors: true
      - http:
          path: e17
          method: get
          cors: true
      - http:
          path: e18
          method: get
          cors: true
      - http:
          path: e19
          method: get
          cors: true
      - http:
          path: e20
          method: get
          cors: true
      - http:
          path: e21
          method: get
          cors: true
      - http:
          path: e22
          method: get
          cors: true
      - http:
          path: e23
          method: get
          cors: true
      - http:
          path: e24
          method: get
          cors: true
      - http:
          path: e25
          method: get
          cors: true
      - http:
          path: e26
          method: get
          cors: true
      - http:
          path: e27
          method: get
          cors: true
      - http:
          path: e28
          method: get
          cors: true
      - http:
          path: e29
          method: get
          cors: true
      - http:
          path: e30
          method: get
          cors: true
      - http:
          path: e31
          method: get
          cors: true
      - http:
          path: e32
          method: get
          cors: true
      - http:
          path: e33
          method: get
          cors: true
      - http:
          path: e34
          method: get
          cors: true
      - http:
          path: e35
          method: get
          cors: true
      - http:
          path: e36
          method: get
          cors: true
      - http:
          path: e37
          method: get
          cors: true
      - http:
          path: e38
          method: get
          cors: true
      - http:
          path: e39
          method: get
          cors: true
      - http:
          path: e40
          method: get
          cors: true
      - http:
          path: e41
          method: get
          cors: true
      - http:
          path: e42
          method: get
          cors: true
      - http:
          path: e43
          method: get
          cors: true
      - http:
          path: e44
          method: get
          cors: true
      - http:
          path: e45
          method: get
          cors: true
      - http:
          path: e46
          method: get
          cors: true
      - http:
          path: e47
          method: get
          cors: true
      - http:
          path: e48
          method: get
          cors: true
      - http:
          path: e49
          method: get
          cors: true
      - http:
          path: e50
          method: get
          cors: true
      - http:
          path: e51
          method: get
          cors: true
      - http:
          path: e52
          method: get
          cors: true
      - http:
          path: e53
          method: get
          cors: true
      - http:
          path: e54
          method: get
          cors: true
      - http:
          path: e55
          method: get
          cors: true
      - http:
          path: e56
          method: get
          cors: true
      - http:
          path: e57
          method: get
          cors: true
      - http:
          path: e58
          method: get
          cors: true
      - http:
          path: e59
          method: get
          cors: true
      - http:
          path: e60
          method: get
          cors: true
      - http:
          path: e61
          method: get
          cors: true
      - http:
          path: e62
          method: get
          cors: true
      - http:
          path: e63
          method: get
          cors: true
      - http:
          path: e64
          method: get
          cors: true
```
```
Template format error: Number of resources, 214, is greater than maximum allowed, 200
```

Another possible error:

```
Template may not exceed 460800 bytes in size.
```
> I expect that this should have worked since there is no limitation on AWS that should forbid this
This isn't entirely true - I mean, it's a limitation of CloudFormation. In order to overcome this, Serverless would have to create nested stacks based on both number of resources and template size. (There's also a limit, though soft, on how many stacks you can have).
I would be curious if you have a real-world example of something that fits the admittedly vague definition of a "micro" service that is hitting this limit. That might inform a way that Serverless could split the CFN definition into multiple templates without having to be 'intelligent' about it?
@dougmoscrop
> it's a limitation of CloudFormation.
When I say 'there is no limitation on AWS that should forbid this' I'm specifically referring to Serverless v1.0 being unable to successfully deploy a number of endpoints that is far lower than what AWS actually supports.
> I would be curious if you have a real-world example of something that fits the admittedly vague definition of a "micro" service that is hitting this limit.
Absolutely I do. I was attempting to migrate my current project from Serverless 0.5.6 to 1.0.0 when I hit this roadblock. I have around 40 endpoints, some with multiple methods, and CORS enabled. This alone far exceeds the 200 resource limit, and I'm only done with a fraction of the project. I've had to go back to Serverless 0.5.6 so that I could continue working.
> serverless could split the CFN definition into multiple without having to be 'intelligent' about it?
- A property could be added at the functions or function level that indicates which CloudFormation template group to use, then Serverless would split up the templates using these groups.
- Serverless could calculate the resource count per function, then split up the templates automatically if it detects it will exceed the limit.
Both methods would benefit from a bit of logic, though. It would be possible for a function to move from one template to another, possibly creating a resource collision.
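To make the first idea concrete, here is a purely hypothetical sketch. No `stackGroup` property exists in Serverless v1; it only illustrates what a per-function template-group option could look like:

```yaml
functions:
  accounts:
    handler: handlers/accounts.handler
    stackGroup: core    # hypothetical: emit this function's resources into the "core" template
  reports:
    handler: handlers/reports.handler
    stackGroup: batch   # hypothetical: emit this function's resources into a second "batch" template
```

Serverless would then generate one CloudFormation template per group and deploy them as separate stacks, keeping each under the 200 resource limit.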
I'd propose the management of Lambdas is taken out of CloudFormation. When we first developed JAWS I had a CF file for Lambdas and a CF file for everything else for this reason (among others). We eventually decided to deploy Lambdas via the API for speed of deployment, the limits of CF, and the complexity of breaking up and managing multiple CF files.
I struggle with depending on CloudFormation for any "newer" services from AWS (like Lambda, APIG, CloudFormation) because the CF team really struggles with keeping up on new features (rightly so, there are features being released all the time).
I just experienced this exact same issue. We're transitioning our REST API with Serverless, started with around 6 endpoints, everything worked perfectly. Then configured all the routes for the remaining 80 endpoints, transitioned controllers, etc, deployed the code and saw this error [Template may not exceed 460800 bytes in size.]
I'm guessing our issue also has to be due to the high number of endpoints, because the serverless.yml file itself is only 15 KB, well under the maximum byte size.
Do you guys recommend downgrading to v0.5.6?
> Do you guys recommend downgrading to v0.5.6?
@mdang At this point you can downgrade back to v0.5.6, or restructure your program into multiple smaller services. I downgraded, since restructuring really isn't an option when I have a spec I need to adhere to.
@flomotlik Do you have any thoughts on this?
Thanks @jordanmack, I was hoping to avoid that but at this point it's better than losing all the work we already put into it.
Did you have to do much to make your project backwards compatible with 0.5.6? I'm trying to look into what changes are needed in order to do that
@mdang You could try using a catch-all route and do some routing in the function using something like Koa or Express. This also brings the benefit of warm containers.
> Did you have to do much to make your project backwards compatible with 0.5.6? I'm trying to look into what changes are needed in order to do that
@mdang In my case I didn't have to do anything since my changes for v1.0 were in their own branch. I just didn't merge. The two versions are pretty different, so there isn't too much you can do to make a v1.0 project backwards compatible with a v0.5.x version.
Perhaps @flomotlik will be able to tell us what direction Serverless plans to take in the future, so we can figure out what the best approach is.
I've started work on a plugin to address this issue (among others), as we are also unable to use v1 as it is architected. In short, using CloudFormation to manage API Gateway and Lambda plainly does not work for us (the README.md in my plugin goes into specifics on why). IMO the current v1 is good for development, but not for a real-world production environment. We are stuck on v0.5 for our existing workloads.
We prefer to let Swagger be the interface to manage APIG and direct API integration to create and update Lambdas, while keeping permissions (IAM) and other infrequently changing resources in CloudFormation (or maybe Terraform).
Does anyone have similar sentiments?
@jordanmack I did some research and you're right. They're both pretty different, and it would cause me to rewrite quite a bit, which normally wouldn't be a problem, but in this case it's for an architecture that's already being deviated from quite a bit. Sooner or later I'm going to have to transition it back to v1+.
@andymac4182 You're a genius. I completely forgot I can use `method: any` and have the function itself route the request to another controller. This might be my saving grace and allow me to salvage the work that went into it already. It'll allow me to combine a lot of my REST routes; I just hope that it's enough to overcome this limitation. I have around 86 endpoints; what's the maximum? 50? It doesn't really say.
@doapp-jeremy If this was ready for use I'd install it in a heartbeat. I'll keep watching it
Serverless is amazing, I hope this gets worked out because I can see it being a show stopper for many. I know it's still new but maybe consider a "gotcha" type page for people looking to use it for something like a full REST API
@mdang it will probably be a few weeks until it's ready.
Unrelated, but I'm curious: does my name show up as doapp-jeremy? I've had people mention me as that quite a few times lately on GitHub. It should be showing up as doapp-ryanp. Hopefully it's just an autocomplete mixup.
@doapp-ryanp Oops, doapp-jeremy is the first option that comes up and I didn't pay attention before hitting enter
> Does anyone have similar sentiments?
@doapp-ryanp I don't have a firm stance on the use of Swagger, but my experiences with CloudFormation have been very mixed. The noted limits from this issue aside, it's slow, quirky, and bug-prone. Frankly, I don't trust the thing to consistently work right. The deployment method used in v0.5.6 had its minor quirks, but I still view it as the better option.
@mdang have a look at serverless-http you can do a bunch of routing in one function. Separate by lifecycle (separate repos, separate deployments) and use shared libs.
@johncmckim yeah, my question was more in regard to decoupling Serverless v1 from CloudFormation for create/update of API Gateway. I have a substantial list of other reasons why, but since it was large I didn't want to hijack this thread. For clarity, though, I've changed my mind and will put it inline:
@dougmoscrop Thanks, I didn't see that and this could be exactly what I need. So if I understand correctly, I would just have one service with one Lambda function that handles all the possible routes for the request?
Today I went down the route of separating the app into 4 distinct services, however I quickly realized that having to have duplicates of all my library files, .env, packages doesn't make much sense. Not to mention my API now has 4 distinct endpoints that will need to be called within any client apps.
> Today I went down the route of separating the app into 4 distinct services, however I quickly realized that having to have duplicates of all my library files, .env, packages doesn't make much sense. Not to mention my API now has 4 distinct endpoints that will need to be called within any client apps.
@mdang I tried something similar as well. For me it doesn't work because of the way AWS handles mapping of paths to a custom domain. You cannot have two services that share the same domain and path. For me that was a deal breaker, since my API would then be out of spec.
I also hit the library problem. Fortunately that will be addressed soon. I know they are working on enhancing the include/exclude functionality. @doapp-ryanp also put together a plugin that I believe would handle this situation.
@jordanmack It's nice to know I'm not the only one trying solutions like these; I was wondering if I was going in the completely wrong direction. The solution with multiple services/host URLs and duplication of libraries, etc. is not ideal, but I've accepted that this is still really new and I'm going to have to redo this at some point; right now getting it to work is most important. Hopefully I can learn enough about how Lambda/API Gateway and Serverless work to make some contributions back some day.
It looks like other users are hitting similar problems: #2853, #2605
Per flomotlik:
> We won't be able to fix this soon by ourselves as its an edge case and we need to prioritise other things higher for now.
@doapp-ryanp It doesn't look like this issue will be addressed by the Serverless team any time soon. Any new thoughts on how to implement a patch to work around this?
> Serverless deletes and re-creates APIG distro on every update
Is this being done because of issue #1684? If that's the case, then breaking up the CF template for the APIG deployment might still be a plausible solution. Since Serverless is doing a full delete and redeploy each time, the breakup process of the CF template could remain a "dumb" procedure that doesn't track resource changes in any way.
@jordanmack I've begun work on a new Serverless plugin: https://github.com/doapp-ryanp/serverless-plugin-swag . I haven't spent much time on it yet, as I wanted to see what was going to be announced at re:Invent. Honestly, I'm not sure when I will get time to work on it in the next few weeks; it probably won't be until the beginning of next year. But I can assure you I have the same issues (and more) as you, so I will have to address them at some point.
Hi! I also hit this issue today. Any workarounds? Thanks in advance. Don't know how to deploy my project now :(
`LogGroup`s are taking up space in the CF stack from what I have seen.
@asantibanez The only way to work around this that I'm aware of right now is to break up your service into several smaller services, or go back to Serverless v0.5.6. Unfortunately there doesn't seem to be a very good way right now.
Hey @jordanmack! Just hit this issue today. Any luck or workaround with this issue? Don't know where to start fixing this. Any ideas? Saw the nesting CF stacks answers but that seems killer.
Notifying @nicka about this one as he has experience with this.
You can use Fn::ImportValue (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html) to reference outputs from other stacks.
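For example, one service can export a value and another can import it. The resource and export names below are made up for illustration:

```yaml
# service-a/serverless.yml: export an output for other stacks to consume
resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
  Outputs:
    UsersTableName:
      Value:
        Ref: UsersTable
      Export:
        Name: users-table-name
```

```yaml
# service-b/serverless.yml: consume the export via Fn::ImportValue
provider:
  name: aws
  environment:
    USERS_TABLE:
      Fn::ImportValue: users-table-name
```

This lets each service stay under the resource limit while still sharing resources like tables or buckets.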
Any updates on this? This is
@jordanmack is right! Depending on your setup, another solution could be grouping functions and using something like serverless-http.
```yaml
functions:
  foo:
    handler: handler.foo
    events:
      - http: # Catch all proxy useful for express/koa apps
          path: '{proxy+}'
          method: any
```
@asantibanez @mwawrusch We're currently splitting our stacks (services) with `Fn::ImportValue`s, as they are the most flexible (for us). Please check this talk (https://www.youtube.com/watch?v=TDalsML3QqY) from 13:27.
Should the framework do all of this? It would definitely be possible for the framework to do all the heavy lifting for you, but IMHO this would become very complex fast. As a developer I think you'd rather have control over this yourself (think about the order of stack deployments, etc.).
@nicka thanks for your input. The whole raison d'être of Serverless in its current incarnation is to make our lives easier, not harder. I seriously regret even touching 0.5 and 1.0; so many hours wasted.
This is a major issue that is made worse by version 1.3.0. Since 1.3.0 adds versioned Lambda functions, it effectively creates one extra resource for every Lambda function [1], which for a large project can be a lot of resources. In mine it put me from 20 below the limit to 10 above just by upgrading.
Yes, there are ways to restructure the project but this is really something Serverless should take care of because it grinds projects to a screeching halt.
[1] https://gist.github.com/andrewcurioso/58bba11ac1175c26f508888ca466d0ea
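If you don't need versioned functions, later 1.x releases added a provider flag to turn them off, which removes the extra `AWS::Lambda::Version` resource per function (check that your framework version supports it before relying on this):

```yaml
provider:
  name: aws
  versionFunctions: false   # skip creating AWS::Lambda::Version resources
```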
> Should the framework do all of this?
@nicka I strongly believe that it should. Serverless is attempting to ease the deployment of services, and it does it very well as long as your services are tiny. As soon as you have a real project these hard limits make it unusable. It should be just as easy to deploy 10 endpoints as it is to deploy 100.
@ac360 There seems to be disagreement on the severity of this issue and how it should be solved. Could you please throw in your two cents?
We are migrating from EC2 to Lambda (Serverless), and I had this error:
```
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading function .zip files to S3...
Serverless: Updating Stack...

  Serverless Error ---------------------------------------

  Template format error: Number of resources, 227, is
  greater than maximum allowed, 200

  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues

  Your Environment Information -----------------------------
     OS:                 darwin
     Node Version:       6.7.0
     Serverless Version: 1.4.0
```
I think it is the same error. We plan to release next year; my team is working with serverless-offline and it works very well. I've been procrastinating on resolving this error, but now I'm starting to get scared.
Maybe I'll separate my serverless.yml into multiple sub-projects so I can deploy, but it will be a lot of work.
I just saw on serverless site:
> Note: Currently, every service will create a separate REST API on AWS API Gateway. Due to a limitation with AWS API Gateway, you can only have a custom domain per one REST API. If you plan on making a large REST API, please make note of this limitation. Also, a fix is in the works and is a top priority. Reference
Much better to know it is a priority.
@felipefdl Yes, it does look like you have hit the same error as us. I don't think that the note you posted is directly related to this. The template resource error is related to a CloudFormation limitation for deployment; it actually has nothing to do with APIG specifically.
The only workarounds for Serverless v1.0+ are to break up your project into smaller services, or place it behind some kind of wrapper to use fewer endpoints. There is no out-of-the-box solution. I don't believe there is an ETA for an official fix on this, since the core team has not labeled it as a priority (or even recognized it as a significant problem).
@jordanmack I'm thinking about creating a single function in Serverless with a wildcard path and wildcard method, and parsing the URL in a "middleware" lib. What do you think about that? Then I can put all related functions together.
For example, I would have a single handler with wildcards to manage account endpoints, another one for product details, etc. Can you see any downside to this method? I'm not comfortable with it, because the normal way works well if we don't have this f* limit, but it is a way.
@felipefdl Using a single endpoint is definitely another way to get around it. The only drawbacks I can think of are that you would lose the benefits of the Serverless framework that were specifically created to manage multiple endpoints. Your serverless.yml file would lack visibility and only show a single endpoint, and deploying or debugging a single isolated endpoint would no longer be possible.
It might also take some work to convert everything over. But hey, that's better than not being able to deploy at all.
If you scroll up a bit nicka mentions using serverless-http for that purpose. Also doapp-ryanp is working on another solution, but he said it would not be ready until early next year at the soonest.
As @andrewcurioso said, "it grinds projects to a screeching halt."
What's the solution here?
What would be good from the team is to know how they'll likely solve this, as I suspect some of us (myself included) need to go live soon and might not be able to wait for a solution. However, I don't want to make my life harder than it needs to be when coming back to "the way".
@davestone Not sure why I got singled out for a solution, since I'm not part of the Serverless team and my post was essentially a "me too" with a note about why v1.3.0 makes it worse. But since you asked:
I think the most straightforward solution is to support nested CloudFormation stacks and allow configuration options on resources and functions in serverless.yml to specify which stack each resource should be included in.
That way small projects remain unchanged and large projects can leverage CloudFormation Nested Stacks in a way that makes sense for them. I can even think of uses for Nested Stacks besides just circumventing the 200 resource limit. For example, making deploys faster.
Edit: This, of course, does not directly solve the issue, but at least it gives users a path to solving it themselves. Another possibility is to automatically break up the stack in some way. For example, since most of the resources created are tied to the Lambda function, you could break those out into groups. There would still be a limit, but it would be more manageable.
Edit/Note 2: I hit the 200 resource limit with just 34 Lambda functions because I have DynamoDB tables, SNS streams, etc.
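For reference, a nested stack is itself just one resource in the parent template, which is why it sidesteps the 200-resource limit. A hand-maintained sketch (the bucket URL and child template are placeholders you would have to create and upload yourself; Serverless v1 does not generate them):

```yaml
resources:
  Resources:
    DataResourcesStack:
      Type: AWS::CloudFormation::Stack
      Properties:
        # Placeholder: a child template you maintain and upload yourself,
        # containing e.g. DynamoDB tables and SNS topics.
        TemplateURL: https://s3.amazonaws.com/my-deploy-bucket/data-resources.json
```

Each child stack gets its own 200-resource budget, at the cost of managing the child templates and their deployment order.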
Hey everyone!
Thank you very much for these great discussions!
We just opened up an issue where we can discuss native support for nested stacks (#2995) and what this could look like.
@andrewcurioso didn't mean to single you out; my mention of you was meant to be a "me too" also.
@davestone I don't mind, I meant that more as "Oh... I wasn't expecting to participate in this thread but since I was mentioned... here's some idea" :)
@davestone 39 lambda functions, few DynamoDB, S3, etc and I've hit this issue on the day I'm meant to be launching. What can I elaborate on to add to a use case to help?
Prerequisite: Just wanted to let you know, I'm not a member of the core team and my response should not be taken as if it's coming from the Serverless team directly.
I can see how packing all those functions and AWS resources (DynamoDB tables and S3 buckets) into one service/stack would become a problem. I know this might not sound fun or helpful at all, but in a way you should be glad you're not in production yet. Imagine being in production, wanting to add a resource as part of a fix or feature request, and then hitting this issue (it happened to me in the past with a regular CloudFormation setup). Since I think a nested stack setup will take quite some time from the Serverless team (time you don't seem to have), could you please elaborate on your setup? For example, is the majority of functions linked to an API Gateway endpoint? Do you have a function per request path? If so, I definitely recommend packaging more endpoints into fewer Lambda functions; this will also make your API faster because of fewer cold starts.
The majority is linked to APIG, yes. There's a few that aren't, nonetheless. Yes, function per path. I did previously have /foobar* in one Lambda while tinkering initially. I opted not to, and was wondering if that would "fix" this, which would really just delay the issue. I don't know enough about CloudFormation to answer. But you're saying it does, plus the speed benefit. Anything else to know?
@davestone For example, in most cases a DELETE call will not be made as often as a GET request, which would make the Lambda in charge of DELETE requests slower because of cold starts. By grouping multiple request types into a single Lambda (GET and DELETE), a normally slow DELETE request would now profit from an already active Lambda container. Serverless posted a nice blog post about different setups: Serverless Code Patterns.
TL;DR
In your case you might be able to consider something as mentioned earlier https://github.com/serverless/serverless/issues/2387#issuecomment-266821567.
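Grouping verbs like that might look as follows in serverless.yml (the path and handler names are illustrative): one function serves both methods and branches on `event.httpMethod` in code, halving the function count for that path.

```yaml
functions:
  items:
    handler: handler.items   # branches on event.httpMethod internally
    events:
      - http:
          path: items/{id}
          method: get
      - http:
          path: items/{id}
          method: delete
```

Authorization differences between the methods would then have to be enforced inside the handler (or via a shared custom authorizer) rather than per endpoint.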
I'll give it a read, thanks. Authorisation differences between GET and DELETE calls are the obvious issue that appears to me.
Summary: recommend using a 1.2.x version, right? :)
Am having this issue too, did anyone find a workaround to it?
Of course it happens as we're scheduled to go to production
@Si1kIfY: A get-to-production solution is to break the project into multiple sensibly separated services. Note that if you need a single ApiGateway, you will need to keep all functions hanging off of it in the same service. The generated template may be reducible if you use custom IAM role(s), defining those in a separate service along with LogGroup declarations for each of your functions. Additionally, you can move any of the non-function resources to the separate (or another) service. Deployment would consist of an in-order deployment as it makes sense on the basis of resource dependencies.
From a "how do I look into and gain this knowledge myself / look under the covers" standpoint: use the --noDeploy flag and look into the ~/.serverless/cloudformation-template-*-stack.json files. The role and log groups are generated unless you supply pre-created roles for all functions (specifically at mergeIamTemplates.js, line 27).
Note that ^ this suggestion will only get you so far. It will reduce the resource count by N+1 resources, where N is the number of functions you define.
FWIW, this is a major concern within the community.
Other improvements can be made using the solutions mentioned in str3tch's issue linked above.
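Supplying a pre-created role as suggested above might look like this (the ARN is a placeholder; the role must already exist with the permissions your functions need):

```yaml
provider:
  name: aws
  # Placeholder ARN: create this role outside the service,
  # e.g. in a separate "shared infrastructure" stack.
  role: arn:aws:iam::123456789012:role/shared-lambda-role
```

With a provider-level role set, the framework no longer needs to generate the per-service IAM role resource, which is part of the N+1 saving described above.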
Hi Everyone,
Came across the same limitation while upgrading from 0.5.6 to 1.x, and for now it's a showstopper for us.
Is there any official support planned by the serverless team (even long term)?
Our use case is a whole API, so all endpoints need to live at the same URL... In the meantime, I was thinking of splitting the whole project into smaller services and having a server sitting in front of all the APIs, in charge of proxying calls to the right service based on the path. Obviously it's not optimal, but it would fill the gap.
Hi guys,
We have the same issue with the 1.7.0 version. Multiple services don't seem to be a good solution, as the client side can't support that properly. So for the moment we just do deployment with some other tools, together with some CLI commands, to fill the gap. Hopefully this can be resolved soon... Thanks, team
Hey everyone!
We've just prioritized this issue since it's a problem many developers face when they work with complex services in a serverless context.
We've created a new issue (#3411) where we'll gather different approaches for how we can resolve this issue.
It would be really nice if all of you could chime in on this and provide some feedback regarding the best / your favorite solution!
I hit this issue today with version 1.25 (the latest version currently)
If you are curious and want to know what resources will be created after running `serverless deploy`, you can try these commands:
```
$ grep -r "\"Type\": \"AWS::" .serverless/cloudformation-template-update-stack.json | wc -l
202
$ grep -r "\"Type\": \"AWS::" .serverless/cloudformation-template-update-stack.json | sort | uniq -c | sort -n
   1 "Type": "AWS::ApiGateway::ApiKey",
   1 "Type": "AWS::ApiGateway::Deployment",
   1 "Type": "AWS::ApiGateway::RestApi",
   1 "Type": "AWS::ApiGateway::UsagePlan",
   1 "Type": "AWS::ApiGateway::UsagePlanKey",
   1 "Type": "AWS::IAM::Role",
   1 "Type": "AWS::Logs::SubscriptionFilter",
   1 "Type": "AWS::S3::Bucket"
   7 "Type": "AWS::DynamoDB::Table",
  22 "Type": "AWS::ApiGateway::Resource",
  30 "Type": "AWS::Lambda::Function",
  30 "Type": "AWS::Lambda::Permission",
  30 "Type": "AWS::Lambda::Version",
  32 "Type": "AWS::Logs::LogGroup",
  43 "Type": "AWS::ApiGateway::Method",
```
I'm thinking of adding a feature so that every time `serverless deploy` runs, it reports a summary of how many resources were created in the Serverless stack. Should we? It would help if we can't fix this issue and have to split into smaller stacks.
this package works perfectly for me
I believe no further effort in the core of the Framework should be made to overcome this limit. It's very difficult to secure a reliable solution that will work well for all cases, and currently existing plugins such as serverless-plugin-split-stacks by @dougmoscrop probably do the best job that can be done in this area.
Instead, we decided to depart from relying on CloudFormation, as apart from being limited in many areas, it's also significantly slower compared to relying on the AWS SDK directly.
Please check Serverless Components. A lot of time is being invested now to provide it as a viable alternative.
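For anyone landing here, a minimal setup of the serverless-plugin-split-stacks plugin mentioned above might look like the following. The `splitStacks` options shown reflect the plugin's README at the time of writing, so verify them against the current documentation before use:

```yaml
plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: true   # move each function's related resources into its own nested stack
    perType: false      # alternative strategy: group resources by CloudFormation type
```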
Is there a timeline or plan to take core off CFN, or just components? Have you considered the CDK as an alternative to the SDK?
> Is there a timeline or plan to take core off CFN, or just components?
There's no plan to remove CF from here. It's more that Components should become powerful and mature enough to simply make what we have in core now obsolete and deprecated (but I think that's not near right now).
> Have you considered the CDK as an alternative to the SDK?
No, this will actually mean fiddling more around CF based deployments and at this point we don't want to invest time in that approach.