Amplify-cli: Execute lambda on mutation?

Created on 26 Feb 2020 · 7 comments · Source: aws-amplify/amplify-cli

I'm successfully using Lambdas connected to specific fields in my GraphQL type schema. When I send a query containing such a field, my function gets executed.

Is there a way to specify a Lambda which will be executed on a mutation? (I'm thinking of using it to capture and log some specific mutations.)

graphql-transformer question

Most helpful comment

You can attach Lambda functions to @model created types in Amplify now which should execute on your mutation: https://aws-amplify.github.io/docs/cli-toolchain/quickstart#as-a-part-of-the-graphql-api-types-with-model-annotation

All 7 comments

On second thought... I guess I could simply request a field which is connected to the Lambda as part of the mutation, and the context would probably have sufficient information to figure out that it was executed as part of a mutation?

But I guess there is no way to actually populate a field by using a Lambda in the mutation (e.g. it would be useful to add "lastModifiedBy" and "lastModified" fields).

Just a suggestion, but you could use DynamoDB Streams and create a Lambda trigger that fires once a record mutates, as mentioned here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
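A stream-triggered handler for this could look like the following minimal sketch; the event shape matches the DynamoDB Streams record format, but the logging fields and return value are illustrative assumptions, not a prescribed implementation:

```python
# Hypothetical Lambda handler for a DynamoDB stream trigger.
# Each stream record carries an eventName (INSERT/MODIFY/REMOVE)
# and the item's keys; what you log with them is up to you.
def handler(event, context):
    """Log every mutation captured by the table's stream."""
    logged = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY", "REMOVE"):
            keys = record["dynamodb"].get("Keys", {})
            logged.append({"action": record["eventName"], "keys": keys})
            print(f"{record['eventName']}: {keys}")
    return logged
```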

Another option to look into would be Pipeline Resolvers: https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html

I wanted to look into attaching a Lambda to DynamoDB streams anyway, for a different goal, but there are two issues: a) the custom setup / CloudFormation templates would somehow need to be connected into the Amplify environment deployment, so that the streams are created/connected for every new environment, and b) specific to this goal, the Lambda will only have access to the data stored in the DB, not to the runtime data available to a Lambda executed through the AppSync resolvers.

I saw on a different ticket now that one can define a custom mutation and connect it to a Lambda - I'm guessing that solution might be using pipeline resolvers - I will check it out.
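For reference, that pattern uses Amplify's @function directive on a custom mutation; a minimal sketch, where "auditLogger" is an assumed function name created with `amplify add function`:

```graphql
# Hypothetical custom mutation resolved by a Lambda via @function;
# "${env}" lets Amplify substitute the current environment name.
type Mutation {
  logEvent(message: String!): String @function(name: "auditLogger-${env}")
}
```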

You can attach Lambda functions to @model created types in Amplify now which should execute on your mutation: https://aws-amplify.github.io/docs/cli-toolchain/quickstart#as-a-part-of-the-graphql-api-types-with-model-annotation

@undefobj , thanks for pointing that out! I missed it!
Any chance I also missed that you can somehow specify a Lambda which could be used to transform/augment the GraphQL mutation data before it is received by the default generated resolver? :)

The idea is that I could then have several fields in the GraphQL model which wouldn't be sent explicitly by the client but would be generated on the server (e.g. "updatedBy", "updatedAt", etc.).

You could potentially fill out items in the attribute map by reverse-engineering and editing the generated resolvers in VTL. Have a look at one of the examples here: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html#dynamodb-helpers-in-util-dynamodb

    #set( $myFoo = $util.dynamodb.toMapValues($ctx.args) )
    #set( $myFoo.version = $util.dynamodb.toNumber(1) )
    #set( $myFoo.timestamp = $util.dynamodb.toString($util.time.nowISO8601()))

    "attributeValues" : $util.toJson($myFoo)

The $myFoo.timestamp field is generated on the server in the same way that you want updatedAt to be.
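Extending the same idea, an updatedBy field could be filled from the caller's identity in the request mapping template; a sketch assuming Cognito User Pools auth, where $ctx.identity.username is populated by AppSync:

    ## Hypothetical addition: record who made the mutation (assumes Cognito auth)
    #set( $myFoo.updatedBy = $util.dynamodb.toString($ctx.identity.username) )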

@undefobj , thanks for the pointer!
