Amplify-cli: RFC: Custom data sources, resolvers, and resources with GraphQL API category.

Created on 11 Dec 2018 · 20 Comments · Source: aws-amplify/amplify-cli

Feature Request

This issue will serve as the primary design document and discussion thread for features related to configuring custom resolvers and data sources with a GraphQL API provisioned by the Amplify API category.

In the current implementation, there is no way to attach a custom data source or resolver directly from the Amplify CLI. This creates a friction point: if a customer wants to write custom resolver logic, they must do so using the AWS AppSync console or by deploying their own CloudFormation stacks with a separate deployment process.

Ideally, users should be able to:

  1. Attach existing DynamoDB tables, Elasticsearch domains, HTTP endpoints, etc. that were not provisioned from within this Amplify project.
  2. Attach resources that were configured within the Amplify project (e.g. any tables provisioned by the storage category).
  3. Write resolver logic that will be bundled and deployed as part of amplify push and that targets existing resources or resources provisioned by this project.
  4. Customize nearly everything for situations where the generated behavior isn't exactly what you need.
  5. Use pipeline resolvers.

Related Issues

#74, #80, #140, #423, #570

Design

The current backend directory for the API category looks like this

backend/
- api/
   - [apiname]/
        - build/
            - resolvers/
            - schema.graphql
        - schema.graphql
        - cloudformation-template.json
        - parameters.json

I propose changing it to:

backend/
- api/
    - [apiname]/
        - build/ # compilation will never change anything outside of build/
            - resolvers/ # transform output with contents of ../resolvers merged in.
            - stacks/ # all stacks including custom stacks at ../stacks and nested stacks output by transform.
            - root-stack.json # root stack
            - schema.graphql # compiled schema output
        - schema.graphql # Your project's schema file, OR
        - schema/             # a schema/ directory filled with .graphql files (use one or the other).
            - Query.graphql
            - Mutation.graphql
        - parameters.json # Override any parameters passed to root stack.
        - stacks/
            - CustomStack.json # anything put here will be deployed as a child of the root stack.
            - SQLCustomStack.json
        - resolvers/
            - Type.field.req.vtl # Use same name as a generated file to override
            - Type.field.res.vtl

build directory

The build directory should never be manually edited. It will be overwritten on each gql-compile. You may put customized resources in the higher-level directories and they will be merged in automatically.

stacks directory

Users may add any custom resources via the stacks directory. When you place a stack in the stacks directory, you can expect a minimum set of parameters that you may reference to add resources to your API.

// These will be provided automatically when deploying the stack
"Parameters": {
    "AppSyncApiId": {
        "Type": "String",
        "Description": "The id of the AppSync API for this project."
    },
    "env": {
        "Type": "String",
        "Description": "The Amplify environment name. e.g. Dev, Test, or Prod",
        "Default": "NONE"
    },
    "DeploymentS3Location": {
        "Type": "String",
        "Description": "The path to the S3 directory containing this deployment's resolver templates. E.G. s3://deployment-bucket/deployment/[deployment-id"
    }       
}

Users would be able to place up to N stacks in this top-level stacks/ directory. The CLI will automatically deploy any stacks placed in this directory as child stacks of the root API stack. Allowing multiple stacks is important to future-proof this design against the CloudFormation limits we have run into before.
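As an illustrative sketch (the AuditTable resource and its shape are hypothetical, not part of the proposal), a stacks/CustomStack.json consuming the injected parameters might look like:

```json
{
  "Parameters": {
    "AppSyncApiId": { "Type": "String" },
    "env": { "Type": "String", "Default": "NONE" },
    "DeploymentS3Location": { "Type": "String" }
  },
  "Resources": {
    "AuditTable": {
      "Type": "AWS::DynamoDB::Table",
      "Properties": {
        "TableName": { "Fn::Join": ["-", ["Audit", { "Ref": "env" }]] },
        "AttributeDefinitions": [{ "AttributeName": "id", "AttributeType": "S" }],
        "KeySchema": [{ "AttributeName": "id", "KeyType": "HASH" }],
        "ProvisionedThroughput": { "ReadCapacityUnits": 5, "WriteCapacityUnits": 5 }
      }
    }
  }
}
```

Note how the env parameter is folded into the table name so that each Amplify environment gets its own copy of the resource.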

Custom resolvers (top level resolvers directory)

Users are able to add custom resolvers/functions/data sources using plain CloudFormation. The CLI will inject the parameters from the root stack so customers do not need to worry about how or where the resolver files are uploaded and can simply reference the parameter. The other big bonus of using plain CloudFormation is that it removes blockers for advanced users who want more control over their deployment.

"Resources":
    "ListUserResolver": {
        "Type": "AWS::AppSync::Resolver",
        "Properties": {
            "ApiId": {"Ref": "AppSyncApiId"},
            // Referencing the UserTable created by @model
            "DataSourceName": "UserTable",
            "FieldName": "listUsers",
            "TypeName": "Query",
            "RequestMappingTemplateS3Location": {
                "Fn::Join": [
                    "/",
                    {"Ref": "DeploymentS3Location"},
                    "resolvers",
                    "Query.listUsers.request.vtl"
                ]
            },
            "RequestMappingTemplateS3Location": {
                "Fn::Join": [
                    "/",
                    {"Ref": "DeploymentS3Location"},
                    "resolvers",
                    "Query.listUsers.response.vtl"
                ]
            }
        }
    }
 }

The Amplify CLI will handle uploading all the files in the build/ directory to the S3 location provided by DeploymentS3Location. Users can then reference the files by name without worrying about how the files get there. They will still use the main schema.graphql file to design their schema and add the relevant fields.

You may use the top level resolvers directory to write your own resolvers as well as to override the VTL templates that are generated by the transform. To override a file, just create a file in the top level resolvers directory with the same name and it will be merged on top of the generated output during the build. For example, if an @model creates a file Mutation.addPost.request.vtl and you want to tweak the behavior, you would be able to create a file with the same name in the top level resolvers/ directory and the CLI will upload that file with greater precedence than the one created by @model.
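As a sketch of what such an override could contain (this is not the exact generated template; $util calls are standard AppSync resolver utilities), resolvers/Mutation.addPost.request.vtl might stamp a server-side timestamp before the usual DynamoDB PutItem:

```vtl
## resolvers/Mutation.addPost.request.vtl
## Placed in the top-level resolvers/ directory, this file replaces the
## @model-generated request template of the same name at build time.
#set( $attrs = $util.dynamodb.toMapValues($ctx.args.input) )
#set( $attrs.createdAt = $util.dynamodb.toDynamoDB($util.time.nowISO8601()) )
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($util.autoId())
  },
  "attributeValues": $util.toJson($attrs)
}
```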

Pipeline resolvers

Since all files in the resolvers/ directory will be uploaded to S3, you may also use the directory to upload function templates. You can then use CloudFormation to wire up pipeline resolvers within AppSync that depend on your functions, using the CloudFormation Fn::GetAtt intrinsic function.

For example:

  GetPicturesByOwnerResolver:
    Type: AWS::AppSync::Resolver
    Properties:
      ApiId: !GetAtt AppSyncPipelineApi.ApiId
      TypeName: "Query"
      FieldName: "getPicturesByOwner"
      RequestMappingTemplate: |
        ...
      ResponseMappingTemplate: |
        ...
      Kind: "PIPELINE"
      PipelineConfig:
        Functions:
          - !GetAtt isFriendFunction.FunctionId
          - !GetAtt getPicturesByOwnerFunction.FunctionId

  isFriendFunction:
    Type: AWS::AppSync::FunctionConfiguration
    Properties:
      ...

  getPicturesByOwnerFunction:
    Type: AWS::AppSync::FunctionConfiguration
    Properties:
      ...

Break up your schema

If you want to break your schema up into multiple files, you can replace schema.graphql with a directory named schema/ containing as many .graphql files as you would like. The .graphql files in the schema/ directory will be loaded when you run amplify push and amplify gql-compile.
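For example (the type names here are purely illustrative), the schema/ directory might contain:

```graphql
# schema/Post.graphql
type Post @model {
  id: ID!
  title: String!
}

# schema/Query.graphql -- a sibling file, merged with the others at compile time;
# types here may reference types defined in other files.
type Query {
  postCount: Int
}
```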

Feedback

This design is not final and I encourage feedback. If there is a use case that this does not support, or you have another idea that you think would be helpful, please don't hesitate to share it.

Labels: feature-request, graphql-transformer


All 20 comments

@mikeparisstuff it would be interesting to grab resolvers that have already been created in the AppSync console. An _amplify api fetch resolver_ command that would populate the build/resolvers dir.

As of today, my #current-cloud-backend dir isn't in sync because I cannot amplify api push (due to the CloudFormation template file size limitation).

I really hope we'll have a clear path to update our local work via a kind of migration process.
Cheers,
Matthieu

The one thing that's not clear, and that I'd like to add if not already considered, is the ability to add data sources outside of the configured Amplify CLI region. DynamoDB exists in multiple regions, and AppSync supports connecting to data sources in all regions, even regions where AppSync itself is not available, so it's limiting when the CLI forces everything into the same region.

@MatthieuLab's request is also interesting but can be worked around with a one-time copy-and-paste of these resolvers into the resolvers dir that @mikeparisstuff is proposing.

To clarify current behavior, if I write a custom resolver in the AppSync console, will amplify push throw it away?

Proposal looks awesome, I'll give it a more detailed read sometime this week.

@jkeys-ecg-nmsu If you add a resolver in the console, amplify push should not remove the resolver unless your Amplify project is also creating a resolver with the same type/field combination. E.g. if you have type Post @model { ... }, amplify push would overwrite the Mutation.createPost resolver but would not overwrite Mutation.customResolver.

@hisham You would be able to target a table (or other resource) in another region by creating your own AWS::AppSync::DataSource resource in a stack in the stacks directory. For example,

DynamoDBPicturesTableDatasource:
    Type: AWS::AppSync::DataSource
    Properties:
      Type: AMAZON_DYNAMODB
      Name: Pictures
      ApiId: !Ref AppSyncApiId
      ServiceRoleArn: !GetAtt AppSyncTutorialAmazonDynamoDBRole.Arn
      DynamoDBConfig:
        TableName: PictureTable
        AwsRegion: us-east-1

You can then target this data source with your own resolvers etc.
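To illustrate (the field name and template file names here are hypothetical), a resolver in the same custom stack could target that data source using the injected parameters from the stacks/ contract:

```yaml
ListPicturesResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    ApiId: !Ref AppSyncApiId
    TypeName: Query
    FieldName: listPictures
    # Resolves the Name attribute of the data source defined above
    DataSourceName: !GetAtt DynamoDBPicturesTableDatasource.Name
    RequestMappingTemplateS3Location: !Join
      - "/"
      - - !Ref DeploymentS3Location
        - resolvers
        - Query.listPictures.request.vtl
    ResponseMappingTemplateS3Location: !Join
      - "/"
      - - !Ref DeploymentS3Location
        - resolvers
        - Query.listPictures.response.vtl
```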

@MatthieuLab This is a great idea and we have an item in the backlog to do just this. It would not be too hard to build a tool that lists your AppSync resolvers and drops them in the resolvers directory. I'll loop in some others to create a list of options for ways to hook an api fetch or api sync command into the CLI.

A couple more questions. I am not super experienced with CloudFormation so just double-checking:

1) Does this design allow customization of Cognito resources created by Amplify, e.g. allowing users to sign in via email rather than username? See https://github.com/aws-amplify/amplify-cli/issues/102

2) Does this design allow setting stack policies to prevent accidental deletion of production DynamoDB tables or Cognito user pools? See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html

1. Does this design allow customization of Cognito resources created by Amplify, e.g. allowing users to sign in via email rather than username? See #102

You should be able to do this today by changing the template in the auth directory of the Amplify project. You can then change which claim is used for ownership via the identityField argument of the @auth directive as necessary.
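For instance (a sketch assuming the @auth directive's identityField argument mentioned above; the type is illustrative), ownership could be keyed on the email claim:

```graphql
type Post @model @auth(rules: [{ allow: owner, identityField: "email" }]) {
  id: ID!
  title: String!
  owner: String
}
```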

2. Does this design allow setting stack policies to prevent accidental deletion of production DynamoDB tables or Cognito user pools? See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html

This is a great question. I will look into what we can do to add deletion policies to resources that hold data.
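For reference, CloudFormation already supports a per-resource DeletionPolicy attribute, so a sketch of what this could look like (table shape hypothetical) is:

```json
"UserTable": {
  "Type": "AWS::DynamoDB::Table",
  "DeletionPolicy": "Retain",
  "Properties": {
    "AttributeDefinitions": [{ "AttributeName": "id", "AttributeType": "S" }],
    "KeySchema": [{ "AttributeName": "id", "KeyType": "HASH" }],
    "ProvisionedThroughput": { "ReadCapacityUnits": 5, "WriteCapacityUnits": 5 }
  }
}
```

With "Retain", deleting the stack leaves the table (and its data) in place rather than destroying it.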

Thanks @mikeparisstuff - FYI, for point 2 about preventing accidental deletion: for now, as a workaround, I just set an IAM policy on the Amplify AWS user (and my developer accounts too) to deny all deletion requests to the prod resources holding data.

Just want to add that this looks great. I'm really excited for the opportunity to fold in pipeline resolvers into this workflow. 🔥

Is there any timeline for this feature? We are using custom resolvers from the console but can't push forward as the @model transformer overwrites our create mutation 😢

We are actively working on this and you can track it here: https://github.com/aws-amplify/amplify-cli/pull/581
Cannot commit to a timeline, but we have a goal to hit it in the next 2-4 weeks. If you want to test out the functionality, please comment on that PR and @mikeparisstuff can give guidance on a sample.

@mikeparisstuff, customising resolvers & data endpoints for GraphQL API using amplify-cli is a feature I am eagerly waiting for. How soon can we expect this feature?

Looking forward to this. But, what would the ideal workflow be when there are multiple client apps relying on the same API? It seems odd that one of the projects, say iOS, would end up as the API source of truth for a web or Android app.

Hey guys,
We've added functionality for custom resolvers and other features discussed in the RFC. Please use npm install -g @aws-amplify/cli to install the latest version of the CLI. For documentation regarding it, please refer to https://aws-amplify.github.io/docs/cli/graphql#overwrite-a-resolver-generated-by-the-graphql-transform

Here's the launch announcement for the same - https://aws.amazon.com/blogs/mobile/amplify-adds-support-for-multiple-environments-custom-resolvers-larger-data-models-and-iam-roles-including-mfa/

Updating my project with the new 1.1.0 Amplify version got the following error:

× An error occurred when migrating the API project.
× An error occurred when pushing the resources to the cloud

Resource is not in the state stackUpdateComplete

14 Feb 2019 00:43:39 | authcognito8e70670b | UPDATE_FAILED | Parameters: [authRoleArn, autoVerifiedAttributes, unauthRoleName, allowUnauthenticatedIdentities, smsVerificationMessage, userpoolClientReadAttributes, mfaTypes, emailVerificationSubject, useDefault, openIdLambdaIAMPolicy, userpoolClientGenerateSecret, mfaConfiguration, userpoolClientLogPolicy, openIdRolePolicy, identityPoolName, openIdLogPolicy, thirdPartyAuth, authSelections, smsAuthenticationMessage, roleExternalId, mfaLambdaLogPolicy, passwordPolicyMinLength, userPoolName, openIdLambdaRoleName, policyName, userpoolClientName, userpoolClientLambdaPolicy, resourceName, mfaLambdaIAMPolicy, mfaPassRolePolicy, emailVerificationMessage, userpoolClientRefreshTokenValidity, userpoolClientSetAttributes, unauthRoleArn, authRoleName, requiredAttributes, roleName, passwordPolicyCharacters, lambdaLogPolicy, userpoolClientLambdaRole, defaultPasswordPolicy, mfaLambdaRole] must have values


Awesome! This really was a pain point. Thanks!

So if I deploy a custom query called listProducts, then later no longer want it and instead want a Product table whose @model generates its own listProducts query, I get errors on creating the resolvers. Weird bug.

Great stuff! Thank you very much for your hard work. This is much appreciated.

I want to output the ARN of a table (name: students table) but I am unable to edit the table stack (which I think is auto-generated). This might be a naive question, but do you know how I can fetch the table ARN in my Lambda CloudFormation in this setting?

@mikeparisstuff There is an issue related to resolvers. During my initial amplify push I got a resolver for a field I had marked with @connection in my schema. I later deleted that resolver and no longer need it for that field. But when I run amplify push I get an error saying there is no resolver for that field. So before each push I have to re-attach the template to get rid of the error, and afterwards I manually delete the resolver again. Is there a permanent solution so my amplify push runs smoothly without these steps?

So no support for attaching existing data sources, still?

This makes it very difficult to work with projects that were created outside of the Amplify CLI.
