Amplify-cli: Passing dynamo table names to lambda function

Created on 15 Apr 2019 · 7 Comments · Source: aws-amplify/amplify-cli

Note: If your question is regarding the AWS Amplify Console service, please log it in the
official AWS Amplify Console forum

* Which Category is your question related to? *
lambda function
* What AWS Services are you utilizing? *
Lambda, Cognito, AppSync GraphQL
* Provide additional details e.g. code snippets *
I need a Lambda function to implement some backend logic. In this code I need to access the tables created by the GraphQL schema.

My question is: how can I pass the table names to the Lambda function, taking into account that the table names change from one environment to another?

Ideally, when a record is created in a table, a pipeline resolver would call the function with the correct table names. As I understand it, pipeline resolvers are not yet supported, so I'm planning to call both functions from my Angular app.

How can I access the table names from my Angular app?

graphql-transformer pending-response question


All 7 comments

Hi, @sergiorodriguez82, there is a PR #1215 open to export the table name.

Meanwhile, you can use something like this in your template, for example, to pass the table name as an environment variable to the Lambda.

"Environment": {
  "Variables": {
    "TABLE_NAME": {
      "Fn::Sub": "Todo-[GraphQLApiId]-${env}"
    }
  }
},

Todo - replace it with the name of your model.
[GraphQLApiId] - replace it with the value of your GraphQL API ID.
${env} - is a parameter passed to the template, so there is no need to change it.

For the GraphQL API ID, there is a feature request #1099 open to pass the value to the Lambdas, so until it is implemented we need to hardcode the GraphQL API ID in the template.
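Inside the function itself, the value then arrives through the environment. A minimal Node.js sketch (the helper name buildTableName and the sample IDs are illustrative, not part of this thread):

```javascript
// Mirrors the "Fn::Sub": "Todo-[GraphQLApiId]-${env}" convention above:
// the GraphQL transform names tables <Model>-<apiId>-<env>.
function buildTableName(model, apiId, env) {
  return `${model}-${apiId}-${env}`;
}

// In the deployed Lambda, the template injects TABLE_NAME, so the handler
// only has to read the environment variable:
function getTableName() {
  return process.env.TABLE_NAME;
}
```

With the environment variable set by the template, the function code never needs to know the API ID or environment name itself.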

About _when creating a record in a table_ invoking a Lambda: probably what you need is an AWS::Lambda::EventSourceMapping. It will invoke your Lambda when an item is created, updated, or deleted, and pass the item in the event. You can use this snippet.

"EventSourceMapping": {
  "Type": "AWS::Lambda::EventSourceMapping",
  "DependsOn": ["LambdaFunction"],
  "Properties": {
    "Enabled": true,
    "EventSourceArn": {
      "Fn::ImportValue": "[GraphQLApiId]:GetAtt:TodoTable:StreamArn"
    },
    "FunctionName": { "Fn::GetAtt": ["LambdaFunction", "Arn"] },
    "StartingPosition": "TRIM_HORIZON"
  }
},
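A Lambda wired up this way receives standard DynamoDB Streams events. A rough sketch of a handler that reacts only to newly created items (the `id` attribute is an assumption about your schema; `Records[].eventName` and `NewImage` follow the documented stream record shape):

```javascript
// Pull the new images of records that were just inserted. eventName is one
// of "INSERT", "MODIFY", or "REMOVE".
function insertedItems(event) {
  return (event.Records || [])
    .filter((record) => record.eventName === "INSERT")
    .map((record) => record.dynamodb.NewImage);
}

async function handler(event) {
  for (const image of insertedItems(event)) {
    // Attributes arrive DynamoDB-typed, e.g. { id: { S: "123" } }.
    console.log("created item:", image.id.S);
  }
}
```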

Hope it helps

Thanks for the prompt answer!
I decided to hardcode the names in the resolver template until #1099 is done.
Regarding the EventSourceMapping, it sounds right, but again I will need the table names for that request.

To complete my use case information, I have the following 4 tables:

  • Objectives: objectives have multiple indexes and multiple measurements
  • Indexes: belong to an Objective and have multiple measurements
  • IndexMeasurements
  • ObjectivesMeasurements

What I need is: whenever an index measurement is created, the corresponding objective measurement is created or updated. For this I must get all the indexes and, depending on the "weight" of each one, calculate the corresponding objective measurement.
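That aggregation step can be sketched as a pure function (the `weight` and `value` field names are assumptions about the schema, not taken from this thread):

```javascript
// Compute an objective measurement as the weighted average of its index
// measurements: sum(weight_i * value_i) / sum(weight_i).
function objectiveMeasurement(indexMeasurements) {
  const totalWeight = indexMeasurements.reduce((sum, m) => sum + m.weight, 0);
  if (totalWeight === 0) return 0;
  const weightedSum = indexMeasurements.reduce(
    (sum, m) => sum + m.weight * m.value,
    0
  );
  return weightedSum / totalWeight;
}
```

A stream-triggered Lambda could run this each time an IndexMeasurement record is inserted, then write the result to the ObjectivesMeasurements table.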

@kstro21 Thanks for the helpful tip!

@sergiorodriguez82 Are you deploying your lambda function via the amplify function category?

In general, I think there should be an easier mechanism to reference (and perhaps tweak) resources created by the transform. I have thought of two mechanisms. Maybe you guys have better ideas.

  1. Allow any custom stack in the stacks/ directory to reference a resource via the CloudFormation Fn::Ref intrinsic function.

  2. Allow a custom stack to overwrite properties on transform resources by providing a resource with the same logical id. As a side effect, this will require the logical ids in all api category stacks to be unique.

There is already logic in the transform that knows how to replace instances of Fn::Ref with corresponding Fn::ImportValue, Fn::ExportValue, and dependsOn expressions. We could use this same logic to allow custom stacks to reference resources in other nested stacks as if they were in the same stack (removing the need to worry about messy import/export names). Curious what people think of this?
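To illustrate mechanism 1 with a hypothetical fragment (the logical ID and export name are invented for the example, following the naming used elsewhere in this thread): a custom stack could simply write

```json
{ "TABLE_NAME": { "Fn::Ref": "TodoTable" } }
```

and the transform would rewrite the reference into the cross-stack form, roughly

```json
{ "TABLE_NAME": { "Fn::ImportValue": "[GraphQLApiId]:GetAtt:TodoTable:Name" } }
```

so the author of the custom stack never has to know the import/export naming scheme.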

@kstro21 Thanks for the helpful tip!

@sergiorodriguez82 Are you deploying your lambda function via the amplify function category?
Yes

In general, I think there should be an easier mechanism to reference (and perhaps tweak) resources created by the transform. I have thought of two mechanisms. Maybe you guys have better ideas.

  1. Allow any custom stack in the stacks/ directory to reference a resource via the CloudFormation Fn::Ref intrinsic function.
  2. Allow a custom stack to overwrite properties on transform resources by providing a resource with the same logical id. As a side effect, this will require the logical ids in all api category stacks to be unique.

There is already logic in the transform that knows how to replace instances of Fn::Ref with corresponding Fn::ImportValue, Fn::ExportValue, and dependsOn expressions. We could use this same logic to allow custom stacks to reference resources in other nested stacks as if they were in the same stack (removing the need to worry about messy import/export names). Curious what people think of this?

Not an expert in CloudFormation, but the first option sounds cleaner. I guess both could work. I think the aws-amplify project rocks! Nevertheless, when working with the backend API, especially with AppSync GraphQL, the project should keep improving toward an easier way for non-CloudFormation users to create robust backend functionality.

@sergiorodriguez82 Thanks for the response. The goal is definitely to continue providing simple-to-use and powerful abstractions that avoid having to write CloudFormation. We provide CFN as an escape route for situations we don't support, but #1060 should also provide another route to handle custom, complex use cases. I will investigate what we can do to make Fn::Ref transparent so you can reference resources created by the transformers without needing to worry about nested stacks.

@kstro21 I do not see where I can define the Environment config in the CustomResources.json template. When I reference the documentation at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-appsync-resolver.html I do not see a place to add it to the template.

Thank you in advance, and I apologize if this is a silly question, I am new to CloudFormation so I am still figuring things out.

Hi, @vlutton, I don't quite understand your question. You will probably need to share a snippet of what you are trying to do.

Meanwhile, if you are trying to get the environment in the CustomResources.json template, the value is already injected in a parameter named env, so all you need to do is use "Ref": "env" anywhere in your template where you want the value, or use Fn::Sub as shown in my example above: https://github.com/aws-amplify/amplify-cli/issues/1274#issuecomment-483459091.
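For example (an illustrative fragment, mirroring the snippet linked above), an environment block in CustomResources.json could use both forms:

```json
"Environment": {
  "Variables": {
    "ENV": { "Ref": "env" },
    "TABLE_NAME": { "Fn::Sub": "Todo-[GraphQLApiId]-${env}" }
  }
}
```

Here "Ref": "env" resolves to the plain environment name, while Fn::Sub interpolates it into a larger string.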

Hope it helps
