* Which Category is your question related to? *
API
* What AWS Services are you utilizing? *
AppSync
* Provide additional details e.g. code snippets *
In the doc https://aws-amplify.github.io/docs/cli/graphql#add-a-custom-resolver-that-targets-a-dynamodb-table-from-model, users can write a resolver in VTL and version-control it under the resolvers/ folder. What if I want to implement a custom pipeline resolver? What do I have to do differently from the doc?
Note: I know how to add a pipeline resolver in the AppSync console. In this question, I care only about a version-controlled solution.
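For reference, the setup from that doc keeps everything under the API category, roughly like this (testapi and the Query.echo.* file names are placeholders, not from the doc):

```
amplify/backend/api/testapi/
├── schema.graphql
├── resolvers/                   # custom VTL templates, version-controlled
│   ├── Query.echo.req.vtl
│   └── Query.echo.res.vtl
└── stacks/
    └── CustomResources.json     # extra CloudFormation resources
```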
1. Follow the same instructions to create a data source.
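For example, a minimal sketch of a Lambda data source (EchoLambdaDataSourceRole and EchoLambdaFunctionArn are hypothetical names for a service role and a Lambda ARN parameter; they are not from this thread, and JSON offers no comment syntax to flag them inline):

```json
"EchoLambdaDataSource": {
  "Type": "AWS::AppSync::DataSource",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "Name": "EchoLambdaDataSource",
    "Type": "AWS_LAMBDA",
    "ServiceRoleArn": {
      "Fn::GetAtt": ["EchoLambdaDataSourceRole", "Arn"]
    },
    "LambdaConfig": {
      "LambdaFunctionArn": {
        "Ref": "EchoLambdaFunctionArn"
      }
    }
  }
}
```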
2. Create functions:
"EchoFunction": {
"Type": "AWS::AppSync::FunctionConfiguration",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"Name": "EchoFunction",
"DataSourceName": {
"Fn::GetAtt": [
"EchoLambdaDataSource",
"Name"
]
},
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyFunction.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyFunction.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
}
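The two S3 locations above point at plain VTL files checked in under resolvers/. For a Lambda data source, a minimal sketch of what MyFunction.req.vtl and MyFunction.res.vtl might contain (the payload shape is an assumption):

```
## resolvers/MyFunction.req.vtl -- invoke the Lambda with the field arguments
{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": $util.toJson($ctx.args)
}
```

```
## resolvers/MyFunction.res.vtl -- forward the Lambda's result
$util.toJson($ctx.result)
```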
"QueryEchoResolver": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": {
"Fn::GetAtt": [
"EchoLambdaDataSource",
"Name"
]
},
"Kind": "PIPELINE",
"TypeName": "Query",
"FieldName": "echo",
"RequestMappingTemplate": "# any pipeline setup goes here \n{}",
"ResponseMappingTemplate": "$util.toJson($ctx.prev.result)",
"PipelineConfig": {
"Functions": [
{
"Fn.GetAtt": ["EchoFunction", "FunctionId"]
}
]
}
}
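One note on the inline templates above: the resolver's own request mapping template runs before the first function, so it is a natural place to seed $ctx.stash, which every function in the pipeline can read. A tiny sketch (the stashed key is arbitrary):

```
## pipeline "before" template: stash a value for the functions downstream
$util.qr($ctx.stash.put("requestId", $util.autoId()))
{}
```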
@mikeparisstuff In my existing pipeline resolver, there is no Lambda function involved, yet your solution uses Lambda functions. Why? Can I do it without using Lambda functions?
I have figured it out by following https://github.com/incr3m/aws-amplify-starter/blob/master/amplify/backend/api/testampl2/stacks/AppVersionQueryResource.json
The solution by @mikeparisstuff worked, with the following notes:

- "FunctionVersion": "2018-05-29" is a required property in EchoFunction.Properties, and 2018-05-29 is the only version that is supported.
- Fn.GetAtt in the pipeline resolver is actually spelt Fn::GetAtt.
- Side note: there is no need for a Lambda data source; any data source, e.g. one backed by DynamoDB, can be used. Also, RequestMappingTemplateS3Location can be used in place of RequestMappingTemplate, and an empty request template should at least contain {}.
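To make the DynamoDB point concrete: a function backed by a table data source needs no Lambda at all; its request template is just a DynamoDB operation. A minimal GetItem sketch (assuming the table is keyed on id):

```
## request template for a function on a DynamoDB data source
{
  "version": "2018-05-29",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
  }
}
```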
Hello @mikeparisstuff,
Thanks for your sample code. What if I want 2 resolvers that use Aurora to send SQL queries? How should I set up this pipeline resolver? In your QueryEchoResolver file, I don't see you calling 2 resolvers.
Thanks for the help.
@Ricardo1980 in case you haven't cracked the code yet for your question about multiple functions sending queries in a single pipeline, what you would do is add each function to the PipelineConfig.Functions part of where you defined your pipeline. See Step 3 of mikeparisstuff's answer above. So, for example, if you have 2 functions, one to do a select statement and the other to insert something, the Functions part of your pipeline config would look something like:

```json
"PipelineConfig": {
  "Functions": [
    {
      "Fn::GetAtt": ["MyCustomSelectFunction", "FunctionId"]
    },
    {
      "Fn::GetAtt": ["MyFunctionToInsertRecords", "FunctionId"]
    }
  ]
}
```
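And since the question was about Aurora: each of those two functions would point at the relational-database data source and carry its own SQL mapping template. A rough sketch of what MyCustomSelectFunction's request template could look like (table and column names are made up):

```
## request template for a function on an Aurora Serverless data source
{
  "version": "2018-05-29",
  "statements": [
    "SELECT id, name FROM items WHERE id = :ID"
  ],
  "variableMap": {
    ":ID": $util.toJson($ctx.args.id)
  }
}
```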
Thanks @bogan27, very useful!