Serverless: Skip resources if already exist

Created on 2 Feb 2017 · 105 comments · Source: serverless/serverless

This is a Feature Proposal.

Description

For bug reports:

  • What went wrong?

I had a bug where CloudFormation just got stuck at UPDATE_ROLLBACK_FAILED, so I had to delete the stack and deploy Serverless again.
But then I ran into another problem:

```
Serverless Error ---------------------------------------

 An error occurred while provisioning your stack: AvailableDynamoDbTable
 - Available already exists.

```

  • What did you expect should have happened?

I think a database is too critical at the production level not to use Retain. One bad deploy or stack removal can wipe out all your tables; a bad deploy is easy to roll back, but data is truly critical.

So I suggest something like serverless deploy --skipResources, which would skip the resources that already exist so CloudFormation won't raise that error.

Similar or dependent issues:

  • #3148

Additional Data

  • Serverless Framework Version you're using: 1.6.1
  • Operating System: Mac OS El Capitan
  • Stack Trace:
  • Provider Error messages:
Labels: deployment, feature, wontfix

Most helpful comment

As I said, a bug can happen anywhere, as it did for me. So it's not a wrong management pattern; it can and does happen. This feature would save a lot of headache if this happens in a prod env.

All 105 comments

If the data is too important to delete, you probably shouldn't be managing the Table resource in your service definition - it belongs outside, either in a "resource-only service" (if you want to use sls to manage it), or in a completely different CFN template.

As I said, a bug can happen anywhere, as it did for me. So it's not a wrong management pattern; it can and does happen. This feature would save a lot of headache if this happens in a prod env.

I don't think this is a bug; you are trying to create a resource with exactly the same name as an existing resource, which is not allowed for DynamoDB tables (but is allowed for some other resources).

Edit: Just thought of another example: for some resources CFN will generate a unique name (by appending a random string), so how will you know whether the resource is to be kept or not? I'm sure there are other edge cases like this that make such a feature complex and error-prone to implement (even though it sounds like a good idea on the surface).

This is not a case of me uploading a new project with resources already deployed.

It's related to bugs like #3146, and that AWS bug has persisted since 2012.

I don't know if it is applicable to every resource, but at least for databases I think it should be.

I already have a small project running on sls 0.5.6, and now I've decided to create a big one with the Serverless Framework.

Another problem was:

I made a mistake creating a new DynamoDB table with the wrong index, so I decided to delete the table and create it again. But after that, Serverless gave the same error for all the other tables, so I had to delete all my tables and deploy again. (Note: I use Retain in my config.)

I too am having a problem with this.

I have a lambda service which subscribes to an SNS topic created, and written to, by a server-resident service. I am attempting to use ServerLess™ to manage this lambda, but I get the following error on deployment:
```
Serverless: Checking Stack update progress...
................Serverless: Deployment failed!

Serverless Error ---------------------------------------

 An error occurred while provisioning your stack: SNSTopicOrderStatusNotifications
 - OrderStatusNotifications already exists.

```
I understand, from #1842, that ServerLess™ is failing when it attempts to create the topic.

It makes sense to me that the service that writes to the topic should create the topic. Having the subscriber(s) create SNS topics, especially in cases such as mine, where the topic is widely subscribed to, seems sub-optimal. In this case ServerLess™ should skip the creation of the pre-existent resource.

Apparently, my situation is already taken care of. I missed this in the docs.

Apologies for the noise.

@hermanmedsleuth great to see that your issue is resolved 👍

@pmuens Another problem related to this:

I created a new DynamoDBTable resource in Serverless.yml and also a stream function to this table.

It gave me an error because the stream on this table was not set, which is OK, since I had just created the table.
So I went to the console, enabled the stream, and tried to deploy Serverless again.

And the error appears again: DynamoDBTable already exists.

I'll keep this post updated with all errors that I found to convince you that it should be implemented. XD

@marckaraujo thanks for updating! 👍

The stream event has some known bugs which might be related here (see thread here: https://github.com/serverless/serverless/pull/2488).

I would be happy for this feature to exist as well.

@rowanu

I'm sure there's other edge cases like this that make such a feature complex and error-prone to implement (even though it sounds like a good idea on the surface).

I don't think that's a valid reason not to implement it. Shouldn't the community try to find a collective solution to this?

I agree with @rowanu here. If you want to put a further level of safety onto your resources you should put them into a separate resource-only CF stack, export the resource names and import them via _Fn::ImportValue_ in your function stack. This also guarantees that the resource stack cannot be deleted as long as it is referenced anywhere.
A CF stack naturally owns its resources and makes sure that everything is created/changed/updated in a transactional way.
BTW: You should not specify a _TableName_ property on DynamoDB tables, as this prevents any changes that need _Replacement_, like changing the keys. A better way is to grab the table name via _Ref_ where you need it and publish it to your code through environment variables.
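A minimal sketch of that suggestion, with a made-up OrdersTable resource: there is no TableName property, so CloudFormation generates a name (and can do Replacement-type updates), and the generated name reaches the code through an environment variable:

```yaml
provider:
  name: aws
  environment:
    ORDERS_TABLE:
      Ref: OrdersTable   # resolves to the generated table name

resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      # no TableName: CFN generates one, so changes needing Replacement work
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```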

It makes sense, to me, to consider, and handle, trigger resources as exogenous to the CF stack. That a CF stack is expected to manage a resource that triggers its own instantiation strikes me as a poor design choice. A lambda function is not always the entirety of an application. It is (probably most often) a service to a larger application.

@hermanmedsleuth then why is my serverless.yml file an interface to CF and not just a subset of it? It looks like something partially implemented right now

@kennu has made a plugin for deploying additional CF stacks with Serverless.

@kennu @laardee This plugin is very nice, but I don't know if it solves the problem described here, since if you need to re-deploy a resource like a DynamoDB table you will get an error even if it is in another CF stack.

@marckaraujo that plugin helps you to manage multiple CF stacks, like @rowanu and @HyperBrain suggested. If your service deployment then fails, the DB and other critical resources won't be affected when your service CF stack needs to be removed.

I am constantly running into this issue as well. It prevents continuous deployment on my system. I created a Stack Overflow discussion on how to handle tables that block updates: http://stackoverflow.com/questions/43771000/how-to-migrate-dynamodb-data-on-major-table-change/43790256#43790256
Maybe that helps you guys to implement a better handling of table deployment in this framework.

🤔 I agree with the point that you might want to put this kind of resources into separate stacks and manage them there.

@brianneisler and @eahefnawy what are your thoughts on this?

I'm still not seeing a solution here for defining a pre-existing, possibly shared, resource as a trigger for a lambda, in ServerLess™.

Given that Serverless is a private company, do you guys have any open governance structure for the framework? Like RFCs that would let the Serverless user base tackle discussions like this a bit more formally? cc @pmuens

Good question @mariogintili 👍 Thanks for asking!

/cc @brianneisler @ac360 @worldsoup

@pmuens When you agree with putting your databases into another stack, is that a stack not run by Serverless?

From my point of view Serverless is a perfect fit for building microservices, which by definition have their own dedicated resources that they require to function and shall not share with other services. Also by definition, running serverless requires managing not only the software but also the required infrastructure from code. Thus, managing DynamoDB externally feels like a suggestion to go back to a hosted solution.

@pmuens When you agree with putting your databases into another stack, is that a stack not run by Serverless?

@nenti Yes, you could do that and then reference this DynamoDB with Cross-Stack-References (via Fn::ImportValue) to tie them together.

Also note that Serverless supports function-free services (https://github.com/serverless/serverless/pull/2499) so you could even deploy this DynamoDB only-service via Serverless.

I agree that it's nice to have everything in one CloudFormation stack but you might want to split things up as soon as your service gets bigger and you want more control over the resources.

If an event can reference a pre-existing resource, then ServerLess™ should not choke if it cannot create it. However, when the resource is an SNS topic, that is not the case. This is inconsistent, at best.

@pmuens OK, I can put it into another stack. But then when I change my table and redeploy, I get the same error from Serverless, even if I use a function-free service, because Serverless doesn't handle DynamoDB. So if my Serverless service consists of DynamoDB only, that doesn't change this fact.

@pmuens OK, I can put it into another stack. But then when I change my table and redeploy, I get the same error from Serverless, even if I use a function-free service, because Serverless doesn't handle DynamoDB. So if my Serverless service consists of DynamoDB only, that doesn't change this fact.

The "best-practice" (if you don't want to put everything in one template and run into the problem this issue describes) would be to deploy the DynamoDB table with a separate CloudFormation template and import output values via Fn::ImportValue. That's also something AWS recommends.

This way you'd get tightly coupled services, but that's a good thing here: you cannot remove the DynamoDB stack while its values are still used in another stack.
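Sketched concretely (all names here are illustrative, not from the thread): the resource-only stack exports the table name, and the consuming service imports it:

```yaml
# serverless.yml of the resource-only service
service: users-resources
provider:
  name: aws
resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
  Outputs:
    UsersTableName:
      Value:
        Ref: UsersTable
      Export:
        Name: users-table-name
```

```yaml
# serverless.yml of the consuming service
provider:
  name: aws
  environment:
    USERS_TABLE:
      Fn::ImportValue: users-table-name
```

CloudFormation refuses to delete the first stack while its export is still imported somewhere, which is exactly the safety property described above.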

I think I ran into a similar problem. It sounds like the solution is to have the DynamoDB table created in a resource-only CF stack. Serverless can manage function-free services, so I could do it via Serverless.

We have a lambda that subscribes to a DynamoDB stream via Fn::GetAtt. It sounds like I can export the StreamArn from the resource stack and use it in my service stack via Fn::ImportValue?

Problem below for context:
The problem I have is that our serverless file creates a DynamoDB table and then creates a bunch of lambda functions that listen to streams. Sometimes we get an error creating the stream:

```
An error occurred while provisioning your stack: IntentsEventSourceMappingKinesisAnalyticsstream
     - Received Exception while reading from provided stream.
```

After the rollback, the table still exists due to the Retain policy, which means that when I go to deploy again it says the table already exists and fails.

It sounds like I can export the StreamArn from the resource stack and use it in my service stack via Fn::ImportValue?

@iwllyu yes, that should be possible.
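A sketch of that approach, with made-up names: export the StreamArn from the resource stack, then reference it in the service stack via Fn::ImportValue (this assumes a Serverless version that resolves intrinsic functions inside the stream event's arn):

```yaml
# in the resource stack's resources section
  Outputs:
    IntentsTableStreamArn:
      Value:
        Fn::GetAtt: [IntentsTable, StreamArn]
      Export:
        Name: intents-table-stream-arn
```

```yaml
# in the service stack
functions:
  processIntents:
    handler: handler.processIntents
    events:
      - stream:
          type: dynamodb
          arn:
            Fn::ImportValue: intents-table-stream-arn
```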

The problem I have is that our serverless file creates a DynamoDB table and then creates a bunch of lambda functions that listen to streams. Sometimes we get an error creating the stream

Do you have more information about the exception at hand? What kind of exception is it? Could it be that the EventSourceMappings are created before the stream is in place? Have you tried adding DependsOn properties?
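For reference, DependsOn is attached per resource in raw CloudFormation; a hypothetical sketch (the logical IDs here are invented for illustration):

```yaml
resources:
  Resources:
    IntentsEventSourceMapping:
      Type: AWS::Lambda::EventSourceMapping
      DependsOn: AnalyticsStream   # don't create the mapping until the stream exists
      Properties:
        EventSourceArn:
          Fn::GetAtt: [AnalyticsStream, Arn]
        FunctionName:
          Ref: IntentsLambdaFunction
        StartingPosition: TRIM_HORIZON
```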

Sorry, I didn't include the root cause. I think it has to do with the fact that when event streams are first created, the system tries to read everything, and that causes a throughput exception, which is an AWS thing, not a Serverless thing. Possibly because we use TRIM_HORIZON.

```
Received Exception while reading from provided stream. Rate exceeded for shard shardId-xxx in stream x under account x. (Service: AmazonKinesis; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: x)
```

Sorry, I didn't include the root cause. I think it has to do with the fact that when event streams are first created, the system tries to read everything, and that causes a throughput exception, which is an AWS thing, not a Serverless thing. Possibly because we use TRIM_HORIZON.

Ah. Yes, that makes sense! Yes, this seems to be AWS and not Serverless related.

Thanks for getting back and providing this information @iwllyu 👍

I'm pretty new to the serverless project, been lurking.
But I use CloudFormation almost daily, and was curious whether serverless deploy could have a complementary argument named 'update'.

Since it's basically just generating a CloudFormation template, that template can be applied as an update instead of a deploy.

An update will compute the difference between what is already present and notify you of the changes before taking action.

@zbuckholz thanks for your comment and welcome aboard 👍

Do you mean CloudFormation change sets? We've spent some time to figure out if we should / could add them, but unfortunately introducing them now would be a breaking change and could therefore only be done in v2.

I'm just throwing my hat in the ring on the side of: there's nothing that Serverless needs to implement here. As many have said, you can manage those resources in a separate resource-only stack (I default to this and have tons of stacks that have only S3 buckets / DynamoDB tables / etc). Then your services are made to depend on those by means of imports or just name patterns.

How would Serverless implement a --skip-existing-resources anyway? Serverless is creating a CloudFormation template and letting CloudFormation do the deployment. If Serverless didn't include the resources in the CF template (and they were there in a previous version), CF would remove them.

A workaround is to comment out the resources you know are already in AWS before running deploy. I too would like to see support for this feature in Serverless. Maybe it can be done per resource type, and Serverless can just look up the resources by their IDs in AWS to see if they exist before generating the CloudFormation template.

@jthomerson @rowanu putting the resources in a separate service may be better design, but it adds unnecessary complexity. The simplicity of Serverless over CloudFormation or SAM is a major advantage in my opinion.

I also have a problem now that I just hit the CloudFormation resource limit, so I can't deploy all my resources at once. Then I need to comment out the resources already deployed.

But now if I comment out the resources, my linked DynamoDB streams don't work anymore.
(If you ask me why, here's why:)

```
streamRoom:
  description: "Stream Room"
  handler: api/stream/room/handler.default
  memorySize: 256
  timeout: 36
  events:
    - stream:
        type: dynamodb
        arn:
          Fn::GetAtt:
            - RoomDynamoDbTable
            - StreamArn
        batchSize: 1
```
With this workaround I don't need to set up a streamId, so it works for multiple stages with an easy configuration. But if RoomDynamoDbTable is commented out, then it doesn't work. @pmuens

@dimitrovs @mataide the problem with the "commenting out" strategy is that CloudFormation will see that the resources are not there anymore and hence remove them.

The CloudFormation template always reflects how the actual deployment / state will look after applying / deploying it. That's why commenting out won't work.

any progress on this issue?

OK, a lot of people have given their opinion, but what is the solution for this? Because so far I don't have one, whether from Serverless, another plugin, or a workaround.

I think --skip-custom-resources is not a solution because, as said, CF will delete resources not included in the update. The ultimate solution is to handle the database via a separate CloudFormation stack. In this case there is still the problem that you cannot update the same table with new indexes, but that is a known problem and it is not coupled to the deployment of lambda functions then.

My only concern right now is that without defining my tables in serverless.yml I am not able to use the serverless-dynamodb-local plugin. This is what I am looking for right now: running a local DynamoDB using a custom CF template.

P.S. I guess I need something like this: https://github.com/steven-bruce-au/dynamodb-local-cloud-formation

Hello all,
I am ramping up on Serverless, and have already experienced a need to reference an existing resource as an event trigger to my function.

After thoroughly reviewing this thread, it is apparent that more documentation from Serverless is required to handle everyday scenarios.

For anyone who is just starting with Serverless, the resources section is used to define NEW resources you want to create as part of your service's rollout. Only use it to define new resources. If you start using the resources section to reference existing resources, you are creating resource-versioning issues. This is why the OP's suggested -skipResources option would not be a good solution.

Example: you have two serverless.yml files, both defining a shared resource, both with slightly different properties, whether on purpose or by accident. Which one is correct? Or say both resource definitions are the same, but you want to change the properties; now you have to update the resource definition in two places. Not a good approach.

The better approach is to have a master or base serverless.yml define all resources once, and have a Serverless variable or Fn::ImportValue in all subsequent serverless.yml files reference the ARN of the existing resource.

Hello all. Another vote for being able to manage a subset of resources with Serverless.

We have already existing infrastructure managed by terraform - and which is owned by another team. We can't include DynamoDB resources in our serverless.yml at production deploy time, since they already exist. We need these resources defined for offline dev.

Just another use case: my GitLab CI setup includes a preview build which builds and deploys a serverless stack. Upon branch merge or branch deletion the CI removes the associated serverless stack. The initial build-out works as intended, but if additional commits are added to the branch, the build pipeline gets triggered and runs sls deploy again, which causes the DynamoDB "table exists" failure.

As a workaround I am just removing the table beforehand.

In this case I don't care about the data, but I would have thought CloudFormation would have seen that the DynamoDB table existed and moved on.

I suppose I could move just the dynamodb table creation out of serverless into its own CF deploy, but I am lazy ATM.

Can this be handled via a plugin? I am thinking of writing a plugin that will just remove the resources section from the output. Am I headed in the wrong direction?

@ali-himindz The brute-force approach I use removes the entire resources key from serverless.yml at deploy time (before restoring it).

```
cat serverless.yml | yq -y 'del(.resources)' | sponge serverless.yml
sls deploy
git checkout serverless.yml
```

(Requires the python yq tool, jq and moreutils).

That's rather sad. Can't we just have a flag which says "skip if exists"? Shouldn't be too hard...

@berlinguyinca what do you want Serverless to do? Serverless is _not_ creating the resources. Serverless simply creates a CloudFormation template and lets CloudFormation create the resources.

Is there any way for Serverless to check if resources exist ahead of time?


@berlinguyinca Sure, Serverless can call any AWS API that it needs to. The point is, though, that this isn't the responsibility of Serverless - Serverless isn't in the business of creating resources. If it added a feature to see if a resource existed, it would need to constantly keep up with the APIs for every service that AWS provides. Serverless' job is to create a CloudFormation template that it sends to CloudFormation so that CloudFormation can do the heavy lifting of integrating with all the AWS services.

I don't understand your use case where you're defining a resource in Serverless that already exists. What originally created the resource? If something else created it, why is it being defined again in the Serverless template?

We are utilizing Serverless to streamline our deployment in a CI system: every time all tests pass, the serverless file is deployed. If new resources were defined during development, they obviously have to be created on the server side automatically as well. We also run in several different environments, like dev/test/production. And in my humble opinion it kind of defeats the purpose to create all the resources by hand before the initial deployment.


Also hitting the problem. Use case: a simple service with DynamoDB; we want to create the table if it does not exist, else update or at least ignore it, for several environments. We are looking for simplicity: it is a pain to do this by hand, and it complicates the deployment with extra steps...

Wish I knew what half this shit meant.

I am up for "skip if exists"

I just ran into this issue by accidentally deleting the wrong Serverless app (luckily a dev version from the wrong branch and not the production app). Our DynamoDB tables all have DeletionPolicy: Retain for exactly these situations. However because the table was already there I could not re-deploy the app.

Here's how I worked around the issue:

  1. I deleted the tables manually in AWS with the create backup option selected.
  2. I re-deployed the Serverless app to fix the broken CloudFormation state.
  3. I deleted the tables manually _again_ (no backup this time since they were empty).
  4. I restored all of the backups I made earlier into new tables with the same name as the originals.

This way I was able to retain all the data and got the CloudFormation stack to work properly again. I confirmed this by making a change to the ProvisionedThroughput of one of the tables and then deploying, which worked as expected.

I know this is a bit unconventional as you're not supposed to touch CloudFormation controlled resources manually, but I would imagine if this would happen to someone in production environment this workaround might be a real life saver.

And for the feature proposal: I agree with what many have said here, that it doesn't make any sense to declare the same resource in multiple Serverless apps. However for my particular case, i.e. removing the stack and then later trying to re-deploy it, the proposal to fix this for all resource types sounds way too big, as the root cause is CloudFormation's DeletionPolicy: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
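For reference, the attribute in question is set per resource; a minimal sketch (the table definition is illustrative):

```yaml
resources:
  Resources:
    MyTable:
      Type: AWS::DynamoDB::Table
      # the table survives stack deletion; a later re-deploy then fails
      # with "already exists", which is exactly the situation above
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```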

I think the only feasible way would be to create a custom Serverless plugin for each resource type to migrate the existing data somehow when the stack is re-created (using backups like I did can be pretty slow, as it can take ~4 hours). Something like that could also be used for "branching" apps, so you could for example migrate the DynamoDB table data to the new app. And if you start talking about data migration, then you might want to think about migrating data when you need to change the table keys or join two tables together, etc. (thinking of something like Flyway but for DynamoDB). And by the way, by feasible I mean "possible but a huge amount of work!"

I'll second what @sernaton said. With complementary services like Amplify, it would be nice to be able to use ServerLess to, say, add a stream to a DynamoDB table created using Amplify.

@pmuens I think we already have enough feedback to reach a conclusion. Can anyone provide a solution?

I am up for "skip if exists"

+1 :(

Another option could be to import existing resources, allowing Serverless to hook into existing infrastructure and start working with it.

Terraform has a similar feature:
https://www.terraform.io/docs/import/index.html

Hi guys, I just had the same problem, and a workaround, while this is not currently supported, is to use serverless.js (we need yaml-boost to make use of the awesome '<<<' operator, which means the exported object can be manipulated).

In AWS we wanted to deploy only one ECR as an additional stack for all environments, and now that the parsing happens first, we can manipulate the object and add/remove keys. Use with caution though.

Full example below:

```
const path = require('path');
const minimist = require('minimist');
const yaml = require('yaml-boost');

const args = minimist(process.argv.slice(2));
const slsConfig = yaml.load(path.join(__dirname, 'serverless.core.yml'), args);

if (args.stage === 'prod') {
  delete slsConfig.custom.additionalStacks['external-ecr'];
}

module.exports = slsConfig;
```

Maybe I don't understand CloudFormation well enough, so I may be saying more than I know, but if this flag were implemented, couldn't you just generate the CloudFormation template from the serverless.yml but remove the resources property before generation? This would effectively act as if the resources were commented out. I have done this on several occasions and have not had any issues with my resources being removed by CF, as mentioned above.
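A minimal sketch of that idea as a serverless.js file. Everything here is hypothetical: the config object stands in for a parsed serverless.yml, and the --skip-resources flag is invented for illustration. (As noted elsewhere in the thread, omitting resources from the template makes CloudFormation delete them, so use with caution.)

```javascript
// serverless.js: conditionally strip the resources section before
// Serverless turns this config into a CloudFormation template.
// The object below is a stand-in for a parsed serverless.yml.
const config = {
  service: 'my-service',
  provider: { name: 'aws', runtime: 'nodejs8.10' },
  functions: { hello: { handler: 'handler.hello' } },
  resources: {
    Resources: {}, // tables, topics, queues would go here
  },
};

// crude flag handling: honour `sls deploy --skip-resources`
if (process.argv.includes('--skip-resources')) {
  delete config.resources;
}

module.exports = config;
```

Run sls deploy --skip-resources and the generated template contains no custom resources; run it without the flag and they are included as usual.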

I agree; a feature to "skip creating a resource if it already exists" would help, especially for applications that require a database to exist but need the data to persist over the lifespan of the application. It's unreasonable to assume that a new version of the application should implicitly replace or wipe out data, and it's also unreasonable to require a new database table for every release.

I encounter similar problems with databases, as well as with the IAM roles and policies that specific functions, services, or resources need in order to communicate within an application. If I amend a policy or a role and deploy my changes, the old policy should be removed/replaced/changed, and those changes should apply to the services the role is attached to.

Has this been updated yet? Seems like a pretty important feature. Currently running into this with the Kinesis resource.

I also tried deploying a lambda, but it also says "MyRole already exists in stack".
I would love to skip having to delete and then recreate my roles.

OK, here is another use case for this feature (hence another user being bitten by this problem 😄).
Our use case is as follows:
We have an SNS topic that acts as a broadcast mechanism for several things, some of them lambdas.
To avoid losing any messages we put an SQS queue between the SNS topic and the lambda. This SNS-to-queue mapping is only useful for this lambda because it includes filtering, and it works like a buffer for the lambda. So the most reasonable thing was to create the queue and subscribe it to the SNS topic in the same serverless file where we declare the lambda that is going to consume it.
It does not make any sense to create a separate stack just to map the queue to the SNS topic and then import the queue in the lambda file when we can do everything in the same file, making the relationship much more obvious and making sure that the required resources get created no matter which environment you deploy to... because, as many here do, we have about a dozen environments, and creating resources manually before deployment is not an option.

Having a separate stack for the resources is not only less convenient, but it has the same problems as declaring them in the same file as the lambda: what if we want to add another queue? SLS is going to re-create the entire "resources" stack instead of just pushing the new stuff.

I would like to hear a solution to this situation
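The setup described above can be sketched in one serverless.yml. The topic ARN, queue name, and filter values here are invented for illustration, and the sqs event type assumes a Serverless version that supports SQS triggers; a real setup would also need an AWS::SQS::QueuePolicy allowing the topic to send to the queue, omitted for brevity:

```yaml
functions:
  consumer:
    handler: handler.consume
    events:
      - sqs:
          arn:
            Fn::GetAtt: [ConsumerQueue, Arn]

resources:
  Resources:
    ConsumerQueue:
      Type: AWS::SQS::Queue
    ConsumerSubscription:
      Type: AWS::SNS::Subscription
      Properties:
        Protocol: sqs
        TopicArn: arn:aws:sns:us-east-1:123456789012:broadcast-topic
        Endpoint:
          Fn::GetAtt: [ConsumerQueue, Arn]
        FilterPolicy:
          eventType:
            - order.created
```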

If anyone is interested I fixed my particular use case using this plugin:
https://github.com/SC5/serverless-plugin-additional-stacks

I ran into this and found that deleting the log group was only required once, not for every subsequent deploy. The issue seems to have been introduced during an upgrade we did to Serverless.

I also ran into this issue: SNS topic already exists.
My use case is that we have blue/green deployment for every environment (stage), sharing a common SNS topic which is the output interfacing with other services. So when we try to deploy the second version (blue is deployed, and now green is being deployed), Serverless fails saying the SNS topic already exists. The SNS topic cannot be separated into its own serverless/CloudFormation deployment, as it is integral to the deployment of the service.
stage - version
dev - blue/green
staging - blue/green
prod - blue/green
In this scenario, only the first time Serverless runs for each stage for any version (e.g. dev-blue) should the SNS topic be created. On the next deployment (e.g. dev-green) the SNS topic has to be reused (only the lambda subscription to the SNS topic has to be created/modified/deleted).
Is there any solution to this without implementing "skip creation of resources if they already exist"?

@berlinguyinca Sure, Serverless can call any AWS API that it needs to. The point is, though, that this isn't the responsibility of Serverless - Serverless isn't in the business of creating resources. If it added a feature to see if a resource existed, it would need to constantly keep up with the APIs for every service that AWS provides. Serverless' job is to create a CloudFormation template that it sends to CloudFormation so that CloudFormation can do the heavy lifting of integrating with all the AWS services.

I don't understand your usecase where you're defining a resource in Serverless that already exists. What originally created the resource? If something else created it, why is it being defined again in the Serverless template?

Well, I think the point is that the serverless framework does in fact allow you to specify resources within it. Since it does, it should be fully featured at doing so. In particular, there should be a way to specify a test condition (like a DynamoDB table already existing) before proceeding with some action regarding said table.

So if you don't think serverless should handle this, then either don't allow defining/modifying resources in the serverless framework, or have it support this functionality. In general, if you are going to offer some functionality, make sure it is fully featured enough to be useful.


The point I'm trying to make is: Serverless does not make tables. You say

there should be a way to specify a test condition like dynamodb table already existing before proceeding with some action regarding said table

@sjatkins But Serverless does not take any actions to make a table. The Serverless Framework is (basically) just a nice abstraction layer that makes CloudFormation templates for you. CloudFormation is the underlying technology that actually makes resources.

Again, I ask you the same question I asked @berlinguyinca: what's the actual usecase for this? If you make all your resources in Serverless templates in the first place, there's no need to have Serverless skip some resources if you happened to already make them manually. And for those who said 'but I manually created the resources in dev, and I only want Serverless to make them in prd', I think you're missing the point of infrastructure-as-code ... why would you want your two environments to work differently?

Note that CloudFormation (as of three weeks ago) will allow you to import existing resources into a stack [1], which is sort of what people are asking for here. You could probably try using that to accomplish what you're wanting Serverless to do here (or some variant of it).

[1] https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/

@jthomerson
Are you asking what is the use case of defining the Resource in serverless.yml or what is the use case that is triggering the undesired behavior?

I'm currently trying this:

  1. Define my functions
  2. Define my resources
  3. sls deploy

If I have dynamodb table defined in resource, after the first deploy I'm unable to deploy any other code change to my functions. This constitutes an issue for me on the UX layer, since I'm not doing anything that is not readily recommended by the docs or the community.

I think this is an issue on the UX part. Unless there is something wrong with the running code, I should be able to run sls deploy and not worry about whether or not the defined resource exists. The current situation makes me worry about this. From a developer's point of view, if the table exists it should get used; if it doesn't, it should be created, the function code must consume it, and the deploy must succeed.
This is where the abstraction layer should be, in the sense that it makes it easier to reason about.
Now if another developer that I don't know about hears there is data in this table and wants to use it, he would be right to set up the resource in his function, and in that sense guarantee that a dry run of his function will have all the needed resources. If the table already existed, it should be usable and his deploy should not fail.

Sorry to be a bother; I'm just confused as to what to define where, since right now I have the feeling that after the first deploy I'm unable to do anything unless I comment out the Resource from my serverless.yml.

The situation I run into is the one many run into: some types of resources, particularly DynamoDB tables, are very finicky to get right. Sometimes serverless will blow up on deploy if the table already exists. And no, the table was not originally created outside serverless. It would be much better and more predictable if logic could be added to only do some operations in some cases, just as we commonly do throughout the rest of our software stacks. I don't know what the purpose is of acting as if the problem doesn't exist. There is evidence all over the internet that the problem is real, and it is a pain point for me. It is not at all helpful or confidence-building to be told a real pain point is not a problem.


A standard and key feature of Ansible is to check if actions have already been done and not repeat them if so.

As an automation tool, it's reasonable for serverless to provide this standard feature.

Apart from plugins, another way I solved this situation using just serverless itself is by having a separate resources file. CloudFormation seems to understand that; unless you make changes to that other file, the resources do not need to be recreated.

I ran into this issue today and wanted to document what I found.

TL;DR
I think that extra spaces, or even commented-out code, under resources makes Serverless (or CloudFormation?) think it needs to create a brand new resource.

Steps
I am using resources to create an S3 bucket. I want to ensure that resource exists and is the same for the project - regardless of environment. So the bucket name will always be some-asset-bucket.

When I add a new resource, for example adding a new SNS topic, Serverless will always try to create the S3 bucket again. So I removed the SNS topic and still had the issue.

I then performed a Git reset to before I made the SNS topic and it deployed without giving me the error.

Yea, an interesting thing about this problem in terms of Ansible and Serverless is that I am trying to do the following: create Dynamo tables in Ansible (which works fine, except that you can't declare streams), and then in serverless declare AppSync that uses the Dynamo tables for the mutations. I then have lambdas in serverless that listen to the Dynamo streams. So I thought, great, I can declare those tables in serverless as Resources, because serverless does allow you to initialize the stream that the lambda will then listen to; but then it fails as mentioned above because the resource already existed. To recap: Ansible should really have the ability to define a stream, and serverless should really act like Ansible, in that if a resource already exists, it just updates that resource's settings.

@jasonmccallister Thank you for your input! I reached exactly the same issue right now - using resources to create an S3 bucket, and it seems that there's no option to skip creation if a bucket already exists.

Most of the developers asking for this feature post scenarios where you need to share some resource (for example a DynamoDB table). Overall, having SQL experience and not being able to do something as simple as IF NOT EXISTS feels like missing core functionality.

Suggestions like "you can do it with separate resources, a custom script, etc." amount to reinventing the wheel. Just allow developers the choice to skip resource creation.

Yea, we had this problem for Dynamo streams. You can use serverless to set up streams, but it requires you to recreate the Dynamo table, which is not very helpful. This plugin lets you create the Dynamo table outside of serverless (we use Ansible) and then it creates or connects to an already existing stream - https://www.npmjs.com/package/serverless-dynamo-stream-plugin To work around it for S3 we used serverless-external-s3-event: again, we create the bucket in Ansible, then use it in serverless to connect it to a lambda event.

Unfortunately, serverless lacks this very basic feature to painlessly redeploy applications without using another tool.
I think by default it should not create a resource if it exists, and should delete and recreate only if I say so.
That way it is less destructive, and you gain the confidence to use it in production without creating HUGE damage from a second's lack of attention.

I'm experiencing some weird behaviour: I'm defining the DynamoDB tables in the serverless resources, and after running serverless deploy multiple times I found that the tables aren't being deleted and the data still exists.

A week ago I had the issue that we're discussing here, that the resources already exist. What I did back then was just use DeletionPolicy: Delete, but today I tried removing that, also deleted the tables manually, tried again, and found it works.

As I see it, resources.Resources is broken (other than for one-time scripts) until this is fixed.

Right now we "create resources manually" before deploying a new version that requires a new resource which is bad when automating deployment.

I can't understand why this feature isn't AWS's highest priority....

I think that it should be the default behaviour and not a flag when calling serverless deploy.

Maybe you can instead add a flag to for those who want this exception when the resources are already there even when owned by you (something like serverless deploy --strictResources or so, my guess is that none will use that option anyway).

I agree, when Serverless creates a stack, it should be able to gracefully handle cases where a resource already exists, and import it as appropriate. Ideally, to prevent breaking existing applications, I would recommend only importing resources for which a DeletionPolicy tag exists, follow the DeletionPolicy when the stack is removed, and only throw an error when the resource already exists and the DeletionPolicy tag is not defined. This would prevent unexpected importing and deleting of important resources, while also not requiring a flag for this functionality.
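For anyone wanting the `DeletionPolicy` behavior discussed above, a minimal sketch of what it looks like in `serverless.yml` (logical and physical names here are illustrative, not from any real service):

```yaml
# Sketch only; resource and table names are made up.
# DeletionPolicy: Retain tells CloudFormation to leave the table (and its
# data) in place when the stack is deleted, instead of destroying it.
resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        TableName: users-${opt:stage, 'dev'}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```

Note the trade-off raised elsewhere in this thread: a retained table keeps its physical name, so re-creating the stack will fail with "already exists" unless the table is renamed, deleted, or imported back into the new stack.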

A Serverless deployment's resource configuration should allow me to create the resources that don't exist while utilizing the ones that already do.

It's been 3 years now since this issue was opened. Could we please have this basic functionality implemented? I mean, it's obvious that a lot of people want and need this.

Again, I disagree with those asking for this feature. Serverless makes a CloudFormation template and deploys it. It should not make a different template depending on the current state of your environment. That negates much of the benefit of infrastructure-as-code.

Yeah this is something that should be a part of CloudFormation, not part of Serverless. CloudFormation could have a feature to adopt orphaned named resources back into stacks, but it doesn't.

S3 buckets and Dynamo tables almost never get deleted. Having a hard requirement that serverless fails if they already exist does not make sense. We get around this by using Ansible to deploy these resources, but the idea that Serverless can't handle this scenario makes no sense. Yes, lambdas should get destroyed and recreated every time, because they are code, but buckets and tables are not.

@bwship but as @kennu mentions - and as I describe above - this is not a failing of the Serverless framework. Serverless is (to oversimplify) a convenience wrapper around CloudFormation. It's CloudFormation that is saying "you can't make something with that name because there's already something with that name".

That's exactly the behavior that most CloudFormation users expect - and need. Anything else would cause non-deterministic behavior, which is contrary to the goal of infrastructure as code.

You have a couple of options for dealing with this:

  1. Don't name your resources. That's actually considered a best practice, although not everyone agrees. I don't even like this one, and generally do name my resources. But, if you don't, then CloudFormation makes a somewhat-random name for you. Using exports and imports, and other AWS config tools (inc. environment variables), you can have all your resources have non-conflicting names by leaving the naming up to CloudFormation.
  2. Make sure that you don't define the same resource name in multiple stacks, and make sure that if you delete a stack, the resource is deleted. That's just good maintenance anyway.

Your suggestion would break things in subtle ways for people. Imagine you deleted a stack, and the bucket was left there because it had a retention policy that made it stay when the stack was deleted. Now you try to re-create the stack. In this scenario, I absolutely want CloudFormation to fail - to tell me that I'm not creating a new table or bucket, but already have one there with that name. That safeguards me from thinking that the stack deployed cleanly, making nice, new resources, but in reality it's my old resources left around. But under your suggestion, this would be a silent failure - I've got some old table or bucket with whatever old state it had just laying around and now I think my nice, new, service deployed but it's really not.
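Option 1 above (leaving physical names to CloudFormation) might look roughly like this in `serverless.yml`; all service, resource, and attribute names here are illustrative:

```yaml
# Sketch only. With no TableName set, CloudFormation generates a unique
# physical name, so two stacks defining this resource can never collide.
service: my-service

provider:
  name: aws
  environment:
    ORDERS_TABLE:
      Ref: OrdersTable   # resolves to the generated physical table name

resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: pk
            AttributeType: S
        KeySchema:
          - AttributeName: pk
            KeyType: HASH
```

The function code then reads the table name from the `ORDERS_TABLE` environment variable instead of hard-coding it.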

Yea, I agree this is a difficult scenario. Like I said, Ansible handles this in what I feel is the correct way. While I do agree with you that this is really on the CloudFormation level, the difficulty is really in resources that are in nature data sources, instead of just resources. EC2 boxes, SNS topics, and SQS queues are just resources that don't really store data. So, while CloudFormation, and therefore by extension Serverless, is good at handling ephemeral resources, it is not good at handling data sources.

Nobody in the world who has a User table with users actually in it would expect their infrastructure-as-code to destroy that table. They would and do expect it to act like a SQL database, where you run migrations against it, not destroy it and reseed it.

It's fine if you guys don't want to do it; we work around it by using Ansible or Terraform. And really, I don't think you can do much of a complete deployment without using one of those plus Serverless plus a little sprinkle of bash glue anyway. But this is continually where infrastructure as code gets the most difficult: around real data in a real system. One example is that we deploy our S3 buckets with Ansible, and then use a Serverless plugin, serverless-external-s3-event, to attach our lambda to the S3 bucket.

Same with SQS queues: it should be possible to just print a warning and not have everything crash. Right now we have several serverless files in different directories for deploying the stack, creating resources, etc. This is just a bit counterintuitive. Could we just have a simple plugin maybe?

@berlinguyinca I'm still not understanding your usecase. The question I have is "how did you ever get in the situation where the stack you're trying to deploy has the SQS queue defined in it, but that queue already exists?"

I think the same question basically applies to @bwship's description of his scenario.

Because the premise that I'm operating on is:

  • I define a stack with this resource (database table, SQS queue, S3 bucket, etc)
  • I should never delete that stack unless I no longer have a need for that resource. So, I can keep making modifications to this stack, and it will modify that resource (if that resource had any changes in that deploy)

No other stack should define the same resource, so if I have that resource deployed from its original stack, how could I have another stack that "crashes" when I try to deploy it?

@jthomerson - so you are saying my User Dynamo table is one stack. And then my lambda that connects to the User Dynamo DB table is created on a separate stack? If that is the case, I can see your point, and maybe that is fine. In my mind, they would be part of one stack, and the IAC code would create the dynamo table, then deploy the lambda, then connect the dynamo table to the lambda.

Hi, this is a good point, and maybe I'm doing something wrong in this case.

The use case is: we constantly deploy updates to the stack, which technically recreates it. Like: add a new lambda => redeploy.

In our CI system, a commit is pushed and 'sls deploy --stage test' is called, then all integration tests are run, and so on.


@bwship yes, you can do it that way. That's how I personally do it - I always separate data sources and similar resources into a stack of their own, and then have APIs, functions, etc, in another stack.

But that's not strictly necessary. I also have stacks where an SQS queue and the Lambda that's "listening" to it are in the same stack.

You can have the table and function in the same stack. Each time you deploy, so long as you didn't change the table definition, the table won't change. CloudFormation detects that the table's config is the same and only deploys the function.

But if you need to deploy the function to multiple stages, but all share the same table, then yes, definitely make them a separate stack. This is why I separate mine - we'll deploy the tables and data sources (buckets, etc) in an "integration build" type stack (e.g. "dev" stage), and then each developer can deploy the APIs / functions / etc to their own stages, but share the data sources. Then, if they're working on the actual data sources, they can also deploy a second stage of those (i.e. their own named stage) so they can iterate on that data source without impacting the one people are sharing in the integration build stage.

Does that make sense? Is it helpful?
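The two-stack split described above can be sketched like this (two `serverless.yml` files shown in one snippet; service, export, and attribute names are all illustrative):

```yaml
# Sketch only.
# Stack 1 - a "data" service that owns the table and exports its name:
service: my-data
resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
  Outputs:
    UsersTableName:
      Value:
        Ref: UsersTable
      Export:
        Name: my-data-${opt:stage, 'dev'}-UsersTableName
---
# Stack 2 - the application service, which only consumes the export and
# never defines the table, so its redeploys can't collide with it:
service: my-api
provider:
  name: aws
  environment:
    USERS_TABLE:
      Fn::ImportValue: my-data-${opt:stage, 'dev'}-UsersTableName
```

Developers can then iterate on and redeploy `my-api` to their own stages while sharing the `my-data` stack, and removing `my-api` never touches the table.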

@berlinguyinca what do you mean when you say "technically recreate it"? You can deploy thousands of times, and the stack is not recreated - it's always just updated. That's the entire purpose of CloudFormation - deterministic and iterative changes defined in code.

If your stack is getting recreated, then you're using it differently (e.g. different stage names each time, or modifying the service name between builds), or you're maybe deleting the stack between builds?

So long as the service name stays the same, you can deploy the same stage many times. If there's a change in a resource, that single resource will be updated (see my previous comment to @bwship). If no resources are updated, Serverless will generally catch that there was no change (if the CloudFormation template it generates is exactly the same as the last one it deployed). Even if there's a change that Serverless sees, if the actual definition of your resources have not changed when CloudFormation sees them, CloudFormation won't touch them.

@jthomerson I don't think anyone has any issue with the "happy path". I can deploy a thousand times and never hit an issue.

But then there are outliers like @jasonmccallister's case, or for instance my own, where I added an unrelated line and got an error regarding DynamoDB. And to be honest, right now I'm again battling the same demon.
I added an apiKey to my function; it just seems it wasn't mentioned clearly that you currently cannot reuse the key between stages. That seems to have triggered some other change in the CF template, which in the end resulted in DynamoDB resource errors.
Right now I have issues with DynamoDB tables that until now deployed properly.
I'm going to spend the next X amount of time trying to figure out something that really hasn't changed a bit and should just deploy my code, yet it won't.

Mind you, I really had a blast the hundreds of other times deploys worked perfectly :) and I'm grateful for those times.

What do you suggest as the best way to resolve those issues that block serverless?

One of my projects was set up with Cognito in early 2017. Back then Cognito wasn't supported by CloudFormation. Later that year support was added, and we'd now like to add our Cognito config to serverless.
This might be a rare case, but a legitimate one.

I would like the same feature to add my Cognito User Pools to CloudFormation @FelschR. But it's not a missing feature of Serverless, it's a missing feature of CloudFormation. We should be making this feature request to AWS.

While I agree CloudFormation should handle the resource existence related issues, I'd like there to be a functionality in serverless where I can easily pass a flag for skipping specific resources so it would not be included in the CF template to begin with.

+1 to @jthomerson 's comment on May 4 which summarizes a pitfall of implementing this feature. I'd like to add to the crowd for a call for a skip feature. But, I would love for it to be an explicit option rather than a default. Currently, it seems that the multiple stacks/Fn::ImportValue route is the best option, with the unfortunate drawback of stack coupling :(

Does somebody know why would this happen? I deployed a service + dynamodb table via serverless successfully without error, but when I redeploy it again I got this error.

Then I manually deleted the table, triggered the deployment, the serverless created it, but the same error next deployment.

BTW, I'm using nodejs.

NVM, the service name got changed... so the resource is in multiple stacks

Hey, I just realized that CloudFormation does have support for importing existing resources into a stack: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html

In the AWS SDK it's implemented with the ResourcesToImport parameter to createChangeSet. I believe the Serverless Framework doesn't use createChangeSet(), but instead calls updateStack() directly, so this would require a large refactoring.

Sorry about the wrong information I said earlier about CloudFormation not supporting this.
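For anyone who wants to try the import path described above manually, a rough AWS CLI sketch follows (the stack name, change-set name, and file names are illustrative; the template passed in must already contain the table, typically with a DeletionPolicy set):

```shell
# Sketch only. resources-to-import.json maps each existing resource to a
# logical ID in the template, e.g.:
# [
#   {
#     "ResourceType": "AWS::DynamoDB::Table",
#     "LogicalResourceId": "UsersTable",
#     "ResourceIdentifier": { "TableName": "users-prod" }
#   }
# ]

# Create a change set of type IMPORT instead of the usual UPDATE:
aws cloudformation create-change-set \
  --stack-name my-service-prod \
  --change-set-name import-users-table \
  --change-set-type IMPORT \
  --resources-to-import file://resources-to-import.json \
  --template-body file://template-with-table.yml

# Review the change set in the console or CLI, then apply it:
aws cloudformation execute-change-set \
  --stack-name my-service-prod \
  --change-set-name import-users-table
```

After the import succeeds, the resource is owned by the stack again and subsequent `sls deploy` runs can update it normally.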

Great thanks for all the input. I'm going to close it, please read carefully the reasoning:

What we deal with here is not a limitation of the Framework per se, but a limitation of CloudFormation, through which the Framework deploys configured services.

While what's being requested now seems "kind of" possible with CloudFormation (via a combination of DeletionPolicy handling and the recently introduced resource import capability), tackling this generically seems far from trivial. It may require tons of work (and new issues to fight with), as already observed by @kennu.

Due to the implied complexity, this doesn't seem like the right direction. It seems more reasonable to agree that resources configured with a Serverless service are an inseparable part of that service and are meant to be removed with it (when we remove it with sls remove).
For cases where we do not find that acceptable, we should configure the resources in question externally. Note that the Framework in many places allows attaching to _existing_ (created and configured externally) resources.

Within the internal team we have also put a lot of effort into Serverless Components, which are not backed by CloudFormation and so do not share its limitations. In that context we attach to already existing resources on deploy, as is being requested here.

I don't understand the logic of this not working! Right now I have critical data in production; does that mean I have to manage it externally? So all the logic of creating new environments in a simple way goes to waste?

If going back to serverless 1.80.0 will work, is the problem version 2.0?

If going back to serverless 1.80.0 will work, is the problem version 2.0?

This was never solved in the Framework, as it's very difficult to solve at the CloudFormation level (read the above comment for more info)
