* Which Category is your question related to? *
DynamoDB / `@model` directive
* Provide additional details e.g. code snippets *
```graphql
type Order @model @key(fields: ["customerEmail", "createdAt"]) {
  customerEmail: String!
  createdAt: String!
  orderId: ID!
}
```
Hi everyone,
Let's assume the code above. We have multiple environments connected to AWS Amplify. Let's also assume that our production database (DynamoDB, based on `@model`) is up and running with a couple of million rows.
Now, let's change the type name from `Order` to `Test` (for whatever reason; this is still hypothetical). We `amplify push` it and two things happen:
- The `Test-<apiId>-dev` DynamoDB table is _created_
- The `Order-<apiId>-dev` DynamoDB table is _deleted_

Without warning, all our existing data is gone. So we can create a custom role which prevents table deletion, based on this: https://aws-amplify.github.io/docs/cli-toolchain/usage?sdk=js#iam-policy-for-the-cli
However, we feel that we are missing something. We definitely want to prevent accidental deletes, both for Amplify (`amplify delete`) and for DynamoDB tables.
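For anyone wanting to go this route, a minimal sketch of such a restriction is an explicit deny on the delete action attached to the CLI/automation role. Note this is an illustrative statement, not the exact policy from the linked docs, and the `Resource` scope should be narrowed to your own tables:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDynamoDBTableDeletion",
      "Effect": "Deny",
      "Action": "dynamodb:DeleteTable",
      "Resource": "*"
    }
  ]
}
```

Because an explicit `Deny` in IAM overrides any `Allow`, attaching this to the role blocks `DeleteTable` even if another policy grants it; CloudFormation updates that try to delete the table will then fail instead of silently dropping data.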
Because we currently use a MySQL database, we use Lambda as a layer. In Amplify it is really easy to connect to a pre-existing Lambda function (without creating it using `amplify add function`). So we can create and delete what we want without "damaging" the database.
So in an ideal world, we would like to connect to a pre-existing DynamoDB table (like with Lambda) which would be unaffected by an `amplify delete` or a change in type name. Reading #1222, it seems this is possible by creating a custom resolver. Before we explore that option further, we would really like to know if (and what) we are missing here.
@BabyDino We can by default have a "Retain" policy on all the DDB tables generated by the GraphQL transformer to avoid these mistakes. Do you think that would be a good solution? Or do you think having a top level configuration for this policy would be ideal?
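For context, this is what a `Retain` policy looks like on a CloudFormation table resource. This is a sketch, not the transformer's actual output; the resource name and key schema are illustrative, based on the `Order` type above:

```yaml
Resources:
  OrderTable:
    Type: AWS::DynamoDB::Table
    # Retain: CloudFormation leaves the table (and its data) in place
    # when the resource is removed or replaced in the stack.
    DeletionPolicy: Retain
    Properties:
      AttributeDefinitions:
        - AttributeName: customerEmail
          AttributeType: S
        - AttributeName: createdAt
          AttributeType: S
      KeySchema:
        - AttributeName: customerEmail
          KeyType: HASH
        - AttributeName: createdAt
          KeyType: RANGE
      BillingMode: PAY_PER_REQUEST
```

With `Retain`, renaming the type would still create a new empty table, but the old table and its data would survive as an orphan rather than being deleted.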
@kaustavghosh06 Retain would certainly help but in case of a mistake, typo, etc. the table becomes "orphaned"(?). If we correct the mistake we have to connect back to the original table, i.e. connect to an existing DynamoDB table.
I do understand the complexity of the `@model` and `@key` directives, and the fact that DynamoDB doesn't like index modifications (LSIs etc.). However, and I'm just thinking out loud here, I feel that since DynamoDB is fairly table-design critical, we would rather set up the tables ourselves manually and design them to our liking before connecting them to Amplify/AppSync. This basically means that if `@model` accepted arguments the way `@function` does, it would create the desired situation.
For example: `@model(name: "OurTableName-${env}")` would connect to our existing table, and no table modifications (including delete) would take place when something changes. Of course, the `@key` directive has to match our manually defined keys; this would be solely our responsibility. Basically, this solution would allow for some more manual intervention.
Does this make any sense at all?
This has bitten me many times. A workaround I use for now is assigning an explicit deny for DynamoDB delete operations to the automation account.
@dallinwright I figured out that if you add `"DynamoDBEnablePointInTimeRecovery": "true"` to `parameters.json`, a backup will remain available for 35 days when you delete a table. This is acceptable for us for now. This is not part of the docs as far as I know: https://aws-amplify.github.io/docs/cli-toolchain/graphql#configurable-parameters
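For anyone else looking for it, the flag goes into the API category's `parameters.json`. The path below is the usual location but may vary by CLI version, and `<apiName>` is your own API name:

```json
// amplify/backend/api/<apiName>/parameters.json
{
  "DynamoDBEnablePointInTimeRecovery": "true"
}
```

Point-in-time recovery lets you restore the table to any second within the retention window, so even an accidental `DeleteTable` is recoverable (into a new table) for that period.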
CC @kaustavghosh06
Just lost my DynamoDB tables from a CLI issue and can confirm that this would have been quite nice.
EC2 instances have a 'Termination Protection' setting. If a DynamoDB table had something similar I would enable it for all my Production tables. This type of protection is really needed to prevent accidents.
Hi, is there anything new on this? What is the proper way to protect our production data from accidents?
This is really important for us as well. Is it being looked at?
I recently accidentally deleted my entire Amplify project, DynamoDB tables and all. Fortunately I was able to spin up a new project and recover all the data from backups. I'm now looking at how to secure everything so this doesn't happen again. 😂
CloudFormation has termination protection for stacks, but I'm still concerned about individual resources such as tables being accidentally deleted.
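Stack-level termination protection can be enabled from the CLI. Note this only guards against deleting the whole stack, not against CloudFormation replacing an individual table during an update, and the stack name below is illustrative:

```shell
aws cloudformation update-termination-protection \
  --enable-termination-protection \
  --stack-name amplify-myapp-prod
```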