I have 8 ${cf:..} references in my serverless.yaml, and 3 out of 5 runs of serverless fail with "Rate exceeded". I suppose I've hit a limit on CloudFormation's API calls (DescribeStacks, for instance). Is there any way to avoid this error other than increasing my limits? Why doesn't serverless call the API only once for all stacks, or at least only once per stack?
Last but not least: which limit am I actually hitting? I can't tell which of the limits mentioned in the AWS documentation applies.
For bug reports:
custom:
  stage: ${cf:StackA.StagePrefix}
  vpcStackName: ${cf:StackA.VpcStackName}
  topicGeneral: ${cf:StackB.In}
  topicBs: ${cf:StackC.In}
  dnsName: ${cf:StackD.LoadBalancerDNSName}
  securityGroupIds: ${cf:StackD.AlbSecurityGroup}
  privateSubnet1: ${cf:StackE.PrivateSubnet1}
  privateSubnet2: ${cf:StackE.PrivateSubnet2}
> sls package
Serverless Error ---------------------------------------
Rate exceeded
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Forums: forum.serverless.com
Chat: gitter.im/serverless/serverless
Your Environment Information -----------------------------
OS: linux
Node Version: 6.9.1
Serverless Version: 1.14.0
I assume that you do not hit an AWS resource limit, but the standard AWS REST API request limit (i.e. the number of concurrently submitted requests to the AWS CloudFormation REST API).
Internally that should be visible as the API call returning a 429 (too many requests). To solve such issues the API calls should be serialized properly, and each one should wait for the previous one to succeed or fail. Submitting all requests to resolve the 8 cf variable references at once is, IMO, likely to lead to that condition.
/cc @eahefnawy
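Serializing the resolution, as suggested above, could look roughly like this. This is an illustrative sketch only, not the framework's actual code; `fetchOutput` stands in for the real DescribeStacks request.

```javascript
// Hypothetical sketch: resolve several ${cf:} references one after
// another instead of firing all DescribeStacks calls at once.
// `fetchOutput` is a placeholder for the real AWS request.
function resolveSequentially(refs, fetchOutput) {
  const results = {};
  return refs.reduce(
    (chain, ref) =>
      chain
        .then(() => fetchOutput(ref))
        .then((value) => {
          results[ref] = value;
        }),
    Promise.resolve()
  ).then(() => results);
}
```

Each request only starts after the previous one has settled, so the number of in-flight CloudFormation calls never exceeds one.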
@StephanPraetsch interesting. Thanks for reporting, and thanks @HyperBrain for jumping in!
The AwsProvider plugin implements the check for too many requests here.
But yes, we should definitely do something about this, since it could be a common problem when heavily using the ${cf:} variable support.
/cc @eahefnawy @brianneisler
+1 having the same issue with 3 ${cf:}-referenced variables
Thanks for confirming @hassankhan
Interesting to see that (only) 3 usages of ${cf:} already cause the "rate exceeded" error to pop up.
/cc @eahefnawy
Yep, I unfortunately could not deploy at all, so I've hardcoded some values for the moment :nauseated_face:
What about using Outputs and Fn::ImportValue for the time being?
Doh! Silly me, completely forgot about those, I'll give that a go!!
Much appreciated, @pmuens!
Great! That should do the trick for now. Furthermore, it should give you safer cross-referencing between your stacks.
We'll investigate in this one and try to fix it so you can switch back to ${cf:} later on (if you'd like to).
@pmuens, Ran into this same issue on my end and implemented a solution similar to what exists here https://github.com/serverless/serverless/blob/84ca99869f6ac320d5eea008bf66548da09f4b0b/lib/plugins/aws/provider/awsProvider.js#L148
My solution simply checks for statusCode 400, which is the status code returned when the 'Rate Exceeded' error is thrown. I'll open a PR to include it when I get home today.
@HyperBrain's solution of serializing the requests sounds like the ideal way to go but I'm not sure how to implement that at this time. Is there on-going conversation around implementing that solution?
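The statusCode-400 check described above could be sketched as follows. This is a hedged illustration of the idea only; the actual AwsProvider retry code may differ.

```javascript
// Illustrative predicate: treat a 400 response whose message says
// "Rate exceeded" as a retriable throttling error, as described above.
function isRateExceeded(err) {
  return Boolean(
    err &&
    err.statusCode === 400 &&
    /rate exceeded/i.test(err.message || '')
  );
}
```

A retry loop would call this on each caught error and re-submit the request (ideally with backoff) when it returns true.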
Hi,
I also receive this error, however with only one ${cf:} variable (for a user pool which I assign to a custom variable that is then referenced as the authorizer for each lambda function). I can't use Fn::ImportValue for this, presumably because sls needs the value for the authorizer at packaging time.
It seems that each time a custom variable is referenced sls retrieves the value from aws. Also if you have multiple variables in an object that reference ${cf:} vars, then any time you reference that object it looks like all of the ${cf:} variables inside that object are retrieved. This obviously can quickly lead to a lot of requests and the corresponding 'Rate exceeded' error (I can get it with just one ${cf:} variable that is referenced a lot). My testing suggests I can get between 21-23 aws requests in quick succession before getting the rate exceeded error.
My quick solution was to change the code in Variables.getValueFromCf to return a cached promise if the variable had already been requested, i.e.:
getValueFromCf(variableString) {
  ...
  if (
    this.cfVars &&
    this.cfVars[stackName] &&
    this.cfVars[stackName][outputLogicalId]
  ) {
    return this.cfVars[stackName][outputLogicalId];
  }
  let promise = this.serverless.getProvider("aws").request(...);
  if (!this.cfVars) {
    this.cfVars = {};
  }
  if (!this.cfVars[stackName]) {
    this.cfVars[stackName] = {};
  }
  this.cfVars[stackName][outputLogicalId] = promise;
  return promise;
}
Thanks for commenting @ubaniabalogun and @stevearoonie
@HyperBrain's solution of serializing the requests sounds like the ideal way to go but I'm not sure how to implement that at this time. Is there on-going conversation around implementing that solution?
Would be great if we could implement such a fix! PRs are always highly welcomed!
Hi @pmuens,
Actually I encounter the exact same issue using ${ssm:/path/to/param} to set environment variables from AWS Parameters Store.
I can use this syntax 5 times, but from the 6th it fails with a "Rate exceeded" error.
I think the fix should be wider than just "Only request CloudFormation variables once".
BTW, does someone have a workaround to still deploy from SSM?
Thank you all.
@pmuens I think it is time to do the serialized requests now. @gozup's issue clearly shows that this will be the only way to eliminate the problem forever. Every other approach that still allows the limit of parallel AWS REST API accesses to be broken is not really a solution, but merely a workaround that will make the problem more obscure with each PR.
I'm not sure how to implement that at this time.
I think the right place would be the API request method itself, as it is used centrally from every location in the framework. It could be handled by a promise queue which just queues submitted method call promises (I think a proper module was BbQueue).
UPDATE:
It is bluebird-queue (https://www.npmjs.com/package/bluebird-queue). This even allows setting a concurrency limit, so that we can go straight to the limit without exceeding it.
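The concurrency-limited queue idea could be sketched without any dependency like this. The class below is an illustrative stand-in for what a module such as bluebird-queue provides, not the framework's actual implementation.

```javascript
// Minimal concurrency-limited promise queue (illustrative sketch).
// At most `concurrency` tasks run at once; the rest wait their turn.
class RequestQueue {
  constructor(concurrency) {
    this.concurrency = concurrency;
    this.running = 0;
    this.pending = [];
  }

  // Enqueue a task (a function returning a promise); resolves/rejects
  // with the task's own result once it has been allowed to run.
  add(task) {
    return new Promise((resolve, reject) => {
      this.pending.push({ task, resolve, reject });
      this._next();
    });
  }

  _next() {
    if (this.running >= this.concurrency || this.pending.length === 0) return;
    this.running += 1;
    const { task, resolve, reject } = this.pending.shift();
    Promise.resolve()
      .then(task)
      .then(resolve, reject)
      .then(() => {
        this.running -= 1;
        this._next(); // start the next waiting task, if any
      });
  }
}
```

With the concurrency set just below the AWS throttling threshold, all API calls could be funneled through one such queue in the provider's request method.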
@pmuens @HyperBrain
I dug into this issue, and it's actually due to a lot of unexpected calls to the SSM API. Let me explain:
My serverless service has 7 functions, and each has 13 environment variables set with SSM.
I've added a counter in the getValueFromSource function of ./node_modules/serverless/lib/classes/Variables.js:185:19, in the if statement that handles the ssmRefSyntax.
And, well... it makes 442 requests to the AWS SSM API before getting the "Rate exceeded" error. Meaning 34 requests PER declared SSM parameter...
I'm not sure that is the expected behavior.
Any reason for that?
Best
Emmanuel
And, well... it makes 442 requests to the AWS SSM API before getting the "Rate exceeded" error. Meaning 34 requests PER declared SSM parameter...
Ooops. This doesn't sound healthy; that's an average of roughly 5 calls per var (in each function). @pmuens @eahefnawy Any thoughts? This looks like the variable resolution might not be deterministic and limited.
that's an average of roughly 5 calls per var (in each function)
And it could probably be even more, since the process is stopped by reaching the rate limit!
UPDATE :
In the meantime I did a test (on my own, no relation to serverless) to process 91 (7 func * 13 vars) concurrent calls to SSM.getParameter() with a Promise.all() over the AWS Node.js SDK, and no limit was reached. It worked like a charm.
Cheers,
Emmanuel
Hey @HyperBrain,
I've developed a plugin to work around the SSM issue. However, I have a question:
How can I access the parsed serverless.yml file?
Inside my plugin, this.serverless.service.custom gives me the custom values, but not if they come from a nested file.
If my serverless.yml is set like this :
...
service: my-service
custom: ${file(./environments/serverless/${env:NODE_ENV}.yml)}
plugins:
  - serverless-ssm-fetch
...
then a this.serverless.service.custom will return ${file(./environments/serverless/${env:NODE_ENV}.yml)} instead of the file content.
Actually it's the last blocker I meet before running my plugin live.
Do you have a solution for this?
Cheers
Emmanuel
Good catch @gozup
Yes, @HyperBrain AFAIK we have the same issue with the cf and the s3 Serverless Variables implementation (especially cf).
It would be nice if we could "queue and batch" the requests somehow. AFAIR we've discussed such a behavior in one issue (couldn't find the correct reference right now).
Edit: Hahaha. I finally found the reference. Here you go: https://github.com/serverless/serverless/issues/3821#issuecomment-335494259
@gozup Hmmm... Normally this.serverless.service.custom should be completely dereferenced and resolved. This looks like a bug - maybe the resolution of the yaml hasn't finished when you access the object (as everything in resolution is asynchronous). In which hook do you try to access the data? Using it in the constructor of a plugin is most likely too early.
/cc @eahefnawy @pmuens
@HyperBrain,
I use it in the before:package:initialize hook, but I tried before:deploy:deploy and it was the same.
I set up a serverless-ssm-fetch repo on Github if you want to have a look.
To sum up: if custom is set from the root serverless.yml, it works. If set from a nested file, this.serverless.service.custom returns the file syntax path.
I'm very close to the goal, but this last part definitely blocks me :(
Thank you mate.
Emmanuel
Did you check whether it gets resolved if you have the file reference in a sub-property of custom instead of custom itself? If it did not work in general, we'd have LOTS of bug reports. Maybe the bug is exactly that importing "custom" from a file does not work.
Actually, it's a question of async.
If I use a setTimeout before getting this.serverless.service.custom, then I can access the nested file properties.
Not very nice, but I didn't find a way to listen for when the serverless.yml is fully parsed. Any advice about it?
Cheers
Ok. Then this is a severe bug in the variable resolution part of Serverless. Upon invocation of the lifecycles it MUST be guaranteed that the whole serverless.yml file is resolved with each of its variables that it contains as well as any references that there may be.
@horike37 @pmuens @RafalWilinski We should open a separate issue for that with high priority. https://github.com/serverless/serverless/issues/3821#issuecomment-336169219 shows clearly that there is a bug. We have to make sure that the variable resolution finishes (and SLS waits for that) and only then continues to start the command lifecycles.
Hi @pmuens, @HyperBrain, @horike37, @RafalWilinski,
FYI, I just did a pull request of my plugin to handle SSM parameters in the meantime.
MR: https://github.com/serverless/plugins/pulls
Plugin repo: https://github.com/gozup/serverless-ssm-fetch
Hope it helps.
Cheers,
Emmanuel
@gozup
Thank you for telling us about the plugin :tada:
That looks convenient :+1:
@HyperBrain where can we find the separate issue you mention for fixing this bug? This is a huge nightmare and the mentioned plugin does not work in any stack i try it in.
Hi Bruno,
I saw you opened an issue on the plugin's repo. I'll dig into it by the end of the day.
However, you can be sure that it works, as I'm using it in production on a large project that fetches more than 30 secret parameters.
It seems like the problem comes more from the installation than from the plugin itself.
Cheers,
Emmanuel
On 27 Oct 2017 at 06:41, Bruno Watt notifications@github.com wrote:
@HyperBrain where can we find the separate issue you mention for fixing this bug? This is a huge nightmare and the mentioned plugin does not work in any stack i try it in.
@delprofundo Sorry, I did not create one yet - forgot it completely, as I had only minimal time lately to track anything here.
@hassankhan
I got this issue today as well, when I have 6 ${cf:..} in my serverless.yml.
Could you please show me how to use Fn::ImportValue properly for this problem?
I think I need to add an Export to the Outputs in StackA's serverless.yml:
Outputs:
  DynamoDbTable:
    Value: { "Ref": "DynamoDbTable" }
    Export: { "Name": { "Fn::Sub": "${AWS::StackName}-DynamoDbTable" } }
But I got this error:
Invalid variable reference syntax for variable AWS::StackName. You can only reference env vars, options, & files. You can check our docs for more info.
Then I need to add something below in stackB's serverless.yml
My old code:
provider:
  environment:
    DynamoDbTable: ${cf:${self:custom.cf_stack}.DynamoDbTable}
Should the new code be like this? Can you confirm?
provider:
  environment:
    DynamoDbTable: [{"Fn::ImportValue": {"Fn::Sub": "${self:custom.cf_stack}-DynamoDbTable"}}]
@ozbillwang You have to use Fn::Join instead of Fn::Sub as that will clash with Serverless' variable notation.
Try to use:
Outputs:
  DynamoDbTable:
    Value:
      Ref: DynamoDbTable
    Export:
      Name:
        Fn::Join:
          - ''
          - - Ref: AWS::StackName
            - '-DynamoDbTable'
@HyperBrain
Thanks a lot, that's really helpful.
Finally, I fixed the problem using Fn::ImportValue.
The output name is a local variable in the CFN stack, so you can refer to it as ${cf:stack_name.variable} from another CFN stack. But in our case, if you reference too many ${cf:..} you will get the rate exceeded issue.
The export name is a global name in that AWS account. Its usage is different: you need to get the value via Fn::ImportValue.
If you set Outputs in StackA
Outputs:
  DynamoDbTable: # This is the output name
    Value:
      Ref: DynamoDbTable
    Export:
      Name: DynamoDbTable-${self:custom.stage} # This is the export name
You need to refer to this export variable in StackB with the code below. (Remember, you needn't add the CFN stack name when using Fn::ImportValue.)
provider:
  environment:
    DynamoDbTable:
      Fn::ImportValue: DynamoDbTable-${self:custom.stage}
If you need to use "Pseudo Parameters", such as AWS::StackName or AWS::AccountId, use the format @HyperBrain provided.
I also raised PR (https://github.com/serverless/serverless/pull/4468) to update the related document.
I closed #4294 because it breaks the semantical separation in the variable handling and would introduce provider specific code into the generic Variables class.
We'll go for a two-step approach now, to solve this problem once and for all. The first part is solved by #4499, which adds generic variable caching to the Variables class. This will prevent multiple resolutions of the same variable.
The second step to finalize this issue is, to add AWS request caching to the AWSProvider.request() class method to cache the results of specific AWS API calls. This will catch the case that different variables trigger the same AWS API calls.
Of course we have to take some care there, because naturally not all AWS calls can be cached (e.g. the stack monitoring calls). We should either implement the request cache with a whitelist defined in AWSProvider, that includes all cacheable functions (e.g. describeStacks, etc.) or a new method parameter shouldCache that defaults to false to keep the current behavior and let the caller decide if he supports caching or not for a specific call.
This second step will allow us to not only cache AWS calls originating from the variable system, but also done from other locations or even plugins that use the request method.
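The whitelist-based request cache described above could be sketched like this. Note this is an assumption-laden illustration, not the actual AwsProvider code: the whitelist entries, function names, and key scheme are all hypothetical, and (as discussed later in this thread) the cached value should be the request *promise*, so that concurrent identical calls share one in-flight request.

```javascript
// Hedged sketch of the proposed second step: cache the request promise
// for whitelisted, read-only AWS calls. Non-whitelisted calls (e.g.
// stack monitoring) always go through uncached.
const CACHEABLE = new Set(['CloudFormation.describeStacks', 'SSM.getParameter']);
const requestCache = new Map();

function cachedRequest(service, method, params, doRequest) {
  const callId = `${service}.${method}`;
  if (!CACHEABLE.has(callId)) {
    return doRequest(service, method, params); // never cache side-effecting calls
  }
  const key = `${callId}:${JSON.stringify(params)}`;
  if (!requestCache.has(key)) {
    // Store the promise itself, so parallel callers share one request.
    requestCache.set(key, doRequest(service, method, params));
  }
  return requestCache.get(key);
}
```

An alternative to the whitelist is a `shouldCache` parameter on the request method, defaulting to false, so callers opt in per call and the current behavior is preserved.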
Maybe we can try to fix the issue mentioned in #4311 (dependent variable resolution) in the same rush.
Oops. I closed this one by accident - reopened again.
@serverless/framework-maintainers @e-e-e I'll finish the actual request caching in the PR linked above. Then we should have killed the rate exceeded error for good.
@HyperBrain awesome. Thanks for letting me know. Although I thought the solution was to implement a rate-limited promise queue on the provider rather than a cache. If you don't queue the requests, we will still end up with the possible scenario where X (any large number) calls are made in parallel and everything blows up.
@e-e-e Oh, you're right. I'll change the code to cache the request promise instead of the received value. That's an easy thing.
UPDATE: Cache now works together with queuing the requests.
The complementing PR is finished now. I'd appreciate if people in this thread would test out the cache-AWS-requests branch ("serverless": "github:serverless/serverless#cache-AWS-requests") and report here if all their rate exceeded issues in the variable system are resolved now.
@HyperBrain
Do you mean that with this new feature I needn't use export variables any more?
Because when implemented with export variables, I found the CloudFormation stacks become chained, and I can't delete StackA directly because it has several export variables referenced by StackB.
And sls deploy doesn't report any problem with this issue (only in the console can I see the rollback status on StackA).
@ozbillwang Yes. With this fix, referencing variables with ${cf:...} will work again. The difference is that the value of the referenced output is only retrieved at build time, and thus does not go under CloudFormation control and does not enforce a CF dependency on the stack from which the output is retrieved.
I am facing the same issue.
My deploy command: serverless deploy --aws-s3-accelerate --stage production --verbose
Serverless: Zip service: /home/ubuntu/project/.webpack/service [8295 ms]
Serverless: Packaging service...
Serverless: Remove /home/ubuntu/project/.webpack
Serverless Error ---------------------------------------
Rate exceeded
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Forums: forum.serverless.com
Chat: gitter.im/serverless/serverless
Your Environment Information -----------------------------
OS: linux
Node Version: 6.10.3
Serverless Version: 1.26.1
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
NOTE: Also, removing --aws-s3-accelerate does not work.
The happypack package seems to be the problem. I am thinking about CircleCI resources being exceeded (CPU maybe?).
After removing that, it works properly...
Well... The error comes back again, even without happypack.
Thoughts?
@brunocascio Is there an exact version of Serverless where it reappeared - or what is the last working one?
Hi @HyperBrain, thanks for the quick response. I believe that I'm facing this issue since 1.26.x.
I am using circleCI in order to deploy the api but I tried again and it works. Sometimes it works, sometimes it doesn't.
Perhaps it could be a CircleCI error, because it works pretty well in my local environment. BUT I'm using different IAM credentials for CircleCI, so it could be a limit on AWS resource usage - --aws-s3-accelerate?
Good point. Can you check if some permissions in that area are different or missing in the CI user role?
Even using administrator access I got this error.
Can I limit how many requests --aws-s3-accelerate should do?
Oh, wait. The Rate Exceeded fix I did (which closed this task) only affects implementations that internally use the Serverless.provider.request() method. I think the S3 upload/download functions might bypass this (especially after the acceleration has been added). So it might not be resolvable right now, but at least it is restricted to this use case. Of course this does not help you.
We should open a separate bug that explicitly states the dependency on S3 / acceleration as it is different than this issue.
Cool! Let me run a few builds without using --aws-s3-accelerate and I'll come back here, just to be sure.
Well... Even removing --aws-s3-accelerate the error appears anyway...
@brunocascio
I used to have this issue, but after v1.25 I don't have it any more.
Yesterday, I re-ran the code in an existing and in a totally new AWS account; both work fine with the latest sls version. I am not sure what problem you have, but this is the feedback I can give. I didn't use the option --aws-s3-accelerate in my project.
Can you try this: remove the whole stack and deploy it again?
I removed the .cache folder from node_modules and it seemed to work, but the error still appears sometimes. In a few days I'll come back to comment.
It does not work... I'm thinking about the serverless-webpack library or CircleCI resources, but the error is just "Rate exceeded", even when I use the --verbose argument.
Serverless: Copy modules: /home/ubuntu/project/.webpack/service [5603 ms]
Serverless: Prune: /home/ubuntu/project/.webpack/service [1661 ms]
Serverless: Packaging service...
Serverless: Remove /home/ubuntu/project/.webpack
Serverless Error ---------------------------------------
Rate exceeded
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Forums: forum.serverless.com
Chat: gitter.im/serverless/serverless
Your Environment Information -----------------------------
OS: linux
Node Version: 6.10.3
Serverless Version: 1.26.1
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
yarn deploy --stage ${CIRCLE_BRANCH} --verbose returned exit code 1
Action failed: yarn deploy --stage ${CIRCLE_BRANCH} --verbose
Might be related with: https://github.com/serverless-heaven/serverless-webpack/issues/299
@brunobelotti "Rate exceeded" must originate from Serverless' deployment phase (uploading the artifacts and calling the CF deployment), as serverless-webpack only does its work during the package phase (where no external interaction is involved).
So any plugin that hooks into the deployment phase is a candidate, as is Serverless itself. The error is caught by Serverless' exception handler, so I think it is an AWS call.
Can you try to execute SLS_DEBUG='*' serverless deploy ... to get a proper stacktrace for the "Rate exceeded" error? Then we should be able to trace back exactly where the error happens.
FYI (different topic): I saw that you use yarn to deploy (and probably to set up the node_modules folder). Serverless, as well as any released serverless-webpack version, uses npm for packaging and might mess up the dependencies installed with yarn - and they do not respect any yarn lock file.
Serverless-webpack (master branch) now contains full yarn support to overcome these issues, so you should prepare to update with the next release there or try the master branch.
I've just updated webpack and set SLS_DEBUG. The error could be related with serverless-plugin-split-stacks plugin, but I don't know what's happening...
Serverless: Invoke webpack:package
Serverless: Zip service: /home/ubuntu/project/.webpack/service [239 ms]
Serverless: Packaging service...
Serverless: Remove /home/ubuntu/project/.webpack
Serverless: Invoke aws:package:finalize
Serverless Error ---------------------------------------
Rate exceeded
Stack Trace --------------------------------------------
ServerlessError: Rate exceeded
at BbPromise.fromCallback.catch.err (/home/ubuntu/project/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:258:33)
From previous event:
at persistentRequest (/home/ubuntu/project/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:247:13)
at doCall (/home/ubuntu/project/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:205:9)
at BbPromise (/home/ubuntu/project/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:216:14)
From previous event:
at persistentRequest (/home/ubuntu/project/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:203:38)
at Object.request.requestQueue.add [as promiseGenerator] (/home/ubuntu/project/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:237:49)
at Queue._dequeue (/home/ubuntu/project/node_modules/promise-queue/lib/index.js:149:30)
at /home/ubuntu/project/node_modules/promise-queue/lib/index.js:156:26
From previous event:
at Queue._dequeue (/home/ubuntu/project/node_modules/promise-queue/lib/index.js:151:18)
at /home/ubuntu/project/node_modules/promise-queue/lib/index.js:108:18
From previous event:
at Queue.add (/home/ubuntu/project/node_modules/promise-queue/lib/index.js:93:16)
at AwsProvider.request (/home/ubuntu/project/node_modules/serverless/lib/plugins/aws/provider/awsProvider.js:237:39)
at listStackResources (/home/ubuntu/project/node_modules/serverless-plugin-split-stacks/lib/utils.js:261:23)
at ServerlessPluginSplitStacks.getStackSummary (/home/ubuntu/project/node_modules/serverless-plugin-split-stacks/lib/utils.js:273:12)
at Promise.all.nestedStacks.map.stack (/home/ubuntu/project/node_modules/serverless-plugin-split-stacks/lib/migrate-existing-resources.js:19:21)
at Array.map (native)
at getStackSummary.catch.then.then.nestedStacks (/home/ubuntu/project/node_modules/serverless-plugin-split-stacks/lib/migrate-existing-resources.js:15:39)
at runCallback (timers.js:672:20)
at tryOnImmediate (timers.js:645:5)
at processImmediate [as _immediateCallback] (timers.js:617:5)
From previous event:
at ServerlessPluginSplitStacks.getCurrentState [as migrateExistingResources] (/home/ubuntu/project/node_modules/serverless-plugin-split-stacks/lib/migrate-existing-resources.js:14:6)
at Promise.resolve.then (/home/ubuntu/project/node_modules/serverless-plugin-split-stacks/split-stacks.js:67:24)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Forums: forum.serverless.com
Chat: gitter.im/serverless/serverless
Your Environment Information -----------------------------
OS: linux
Node Version: 6.11.5
Serverless Version: 1.26.1
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
yarn deploy --stage ${CIRCLE_BRANCH} --verbose returned exit code 1
Action failed: yarn deploy --stage ${CIRCLE_BRANCH} --verbose
Thanks for your help!
You're welcome. Yes, according to the stack trace, it's there.
@brunocascio
I used the serverless-plugin-split-stacks plugin in my project as well; I don't see the problem after v1.25.
@ozbillwang Take a look at https://github.com/dougmoscrop/serverless-plugin-split-stacks/issues/34 & https://github.com/dougmoscrop/serverless-plugin-split-stacks/pull/35 :)
@brunocascio @HyperBrain I approached very same issue on my side, and proposed some general fix for recoverable request errors here: https://github.com/serverless/serverless/pull/4877 (on my side works like a charm)
@medikoo Sounds great! I was facing that issue sometimes.