* Which Category is your question related to? *
Functions
* What AWS Services are you utilizing? *
Lambda
* Provide additional details e.g. code snippets *
I can `const Amplify = require('aws-amplify')` from within my Lambda function, but is there a clean way to pass it the `aws_exports.js` file needed for `Amplify.configure()`? My use case is that I would like to be able to call `Auth.signUp` from within the Lambda function (so that I can specify some custom attributes "server side").
Have you taken a look at Cognito authentication hooks?
https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-post-confirmation.html
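For reference, a post-confirmation hook is just a Lambda that receives the Cognito event and returns it. A minimal sketch of the handler shape described at that link:

exports.handler = async (event) => {
  // Cognito invokes this after a user confirms sign-up;
  // event.userName and event.request.userAttributes describe the new user
  console.log('confirmed user:', event.userName, event.request.userAttributes)
  return event // the trigger must return the event object back to Cognito
}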
@troygoode The intent of the `aws_exports.js` file is to make the backend configuration available to your frontend code; we haven't seen many use cases for passing it to a Lambda function and using it server side.
You could probably write a script (triggered after an `amplify push`) that copies the locally generated `aws_exports.js` file to an S3 bucket, and your Lambda function could then read it from that bucket.
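For illustration, the Lambda side of that might look like the following sketch (the bucket and key are hypothetical, and the uploaded file would need to be plain JSON rather than an ES module):

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

async function loadAwsExports () {
  // hypothetical bucket/key, populated by a post-`amplify push` script
  const obj = await s3.getObject({
    Bucket: 'my-amplify-config',
    Key: `aws-exports-${process.env.ENV}.json`,
  }).promise()
  return JSON.parse(obj.Body.toString('utf8'))
}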
Thanks @kaustavghosh06. Let me put a slightly different spin on it: in general, how do I reference Amplify CLI-generated resources from the server side (primarily a Lambda function) in a way that works with `multienv`?
Can you give a specific use case @troygoode?
@jkeys-ecg-nmsu @kaustavghosh06 one use case is executing a query against the GraphQL endpoint generated by `amplify add api` (GraphQL). This endpoint is stored in `aws_exports.js` under the `aws_appsync_graphqlEndpoint` key and looks something like this:
https://abcdefg123456890.appsync-api.us-east-1.amazonaws.com/graphql
How do I query my data from one of the Lambda APIs generated from `amplify add api` (REST API)? I can hardcode the above URI, but that URI changes for each environment when using `multienv`.
I'm not sure I follow. If you use a branch per env, then your `aws_exports.js` should look different for each env. Your instantiation of `Amplify.configure(aws_exports);` (using nodejs here) should then do the right thing, no?
@timrchavez yes, if you did so from `(projectroot)/src`, where `aws-exports.js` lives. I'm talking about being able to execute a GraphQL query using `Amplify.API` from within a Function generated by `amplify add api`; this creates JavaScript files inside `(projectroot)/amplify/backend/function/myfunction/src`, which is effectively a separate application from what is stored in `(projectroot)/src`.
I suppose I could always `const aws_exports = require('../../../../../src/aws-exports.js')`, but it doesn't feel right. I'm hoping there is a "right" way. It'd be strange if querying your database wasn't an anticipated need for the code generated by `amplify add api`, wouldn't it?
I just ran into a new issue with handing `aws-exports.js` over to a Function spun up via `amplify add api` (or `amplify add function`): `aws-exports.js` exports itself via `export default awsmobile;`, which is not valid syntax for Lambda (which doesn't go through a transpilation step); Lambda requires `module.exports = awsmobile;`. In other words, even a simple `cp src/aws-exports.js amplify/backend/function/foo/src/` is not sufficient (nor is `const aws_exports = require('../../../../../src/aws-exports.js')` possible, for the same reason).
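A possible workaround (a sketch, not something Amplify provides) is a small post-push script that rewrites the export statement while copying; the paths below are illustrative:

// copy-and-convert.js -- run after `amplify push`
const fs = require('fs')

const src = fs.readFileSync('src/aws-exports.js', 'utf8')
const converted = src.replace('export default awsmobile;', 'module.exports = awsmobile;')
fs.writeFileSync('amplify/backend/function/foo/src/aws-exports.js', converted)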
@troygoode Could you go a bit deeper into your use case and why you're trying to call the `API` and `Auth` categories inside of a Lambda function? As @kaustavghosh06 mentioned, Amplify is frontend code, and it's not actually supported to use the library in Node functions (though we are looking at this in the future), so I'm assuming you're using some sort of polyfill.
Stepping back a bit, can you walk us through your complete use case of why you're trying to proxy `Auth` and `API` calls through a Lambda function? Perhaps we can assist you better or provide recommendations. For instance, what is the app you are building and what is the workflow?
@undefobj @kaustavghosh06 @nikhil-dabhade I have a couple of core use cases where this is an issue:
1. As part of my app I expose a REST API for use by external systems (e.g. Stripe) that offers a restricted set of capabilities (not pure CRUD) related to my data model. I created this API via Amplify as a backend API using the "REST API → Serverless Express" option. My core data model is created via Amplify as a GraphQL API, which auto-generated the underlying DynamoDB storage. From my custom API I can either (a) go directly against the underlying DynamoDB storage or (b) go through the `Amplify.API` library. Either option requires configuration values pointing to resources that are dynamically generated for each environment: the DynamoDB table names in the former case, the GraphQL endpoint in the latter. I use no Amplify-configured inbound authentication on the API; instead I use IAM authentication to retrieve secrets from Secrets Manager, use those secrets to bootstrap a Cognito user identity as a service account, and then make the eventual calls into the GraphQL endpoint as that Cognito user. I would also include things like cron jobs in this category.
2. Operations not expressible via `@auth` rules (e.g. input validation). Certain operations against the data model require server-side code to run and cannot fully trust the client. An example is changes to the billing status of a tenant: the user may have permission to update the `plan` field but not the `price` field. There is no way to do this with Amplify today, so I work around it by locking down all writes from users except (again) a special service-account user. The user instead calls a separate REST API created via Amplify, this time with Cognito authentication enabled. I then do some nonsense to actually get at _who_ the requesting user is (something I would expect to be a common need if you've selected that the API should be protected by Cognito), perform my business logic within the Lambda function, and ultimately execute the change against the GraphQL endpoint using the service account rather than the end user's Cognito identity.
I'd be happy to schedule some time to walk you through my specific codebase and application if that'd be helpful. I'm based in California.
And yes, I had to use a polyfill. It looks something like this (wrapped in an async handler so the `await`s are valid):

require('cross-fetch/polyfill') // otherwise the use of `fetch` fails internally within Amplify

exports.handler = async (event) => {
  // fetch the equivalent of aws_exports.js out of Secrets Manager, since aws_exports isn't
  // available to backend functions; I have to manually update Secrets Manager as aws_exports.js changes...
  const AWS = require('aws-sdk')
  const secretsManager = new AWS.SecretsManager()
  const secret = await secretsManager.getSecretValue({ SecretId: 'my-secret' }).promise()
  const envSecret = JSON.parse(secret.SecretString)[process.env.ENV] // get config for _this_ environment

  // messy because of the export/import syntax differences
  const AmplifyCore = require('aws-amplify')
  const Amplify = require('aws-amplify').default
  const { API, Auth } = Amplify
  const { graphqlOperation } = AmplifyCore

  // create a valid Cognito session using Amplify.Auth
  const { username, password, ...aws_exports } = envSecret // I also store the username & password for a service account in there
  Amplify.configure(aws_exports)
  await Auth.signIn(username, password)

  // okay now I can get to my data
  const result = await API.graphql(graphqlOperation(myQuery))
}
For the first scenario, it's standard for your Lambda function to interact directly with the DynamoDB table that was generated from your GraphQL `@model` directive. Going through the AppSync API for both the client and the Lambda function on the backend shouldn't be necessary.
On your second scenario, it sounds like you might want Dynamic Group-Based Auth (https://aws-amplify.github.io/docs/cli/graphql#usage-1), but at a per-field rather than per-record level of granularity, for the multi-tenant scenario you're building. Pipeline resolvers would probably be a good way to do this, which we're looking into for enhancing the transformer in the future. For now I would either run this in Lambda or build it out manually. Here is a blog post if you're not familiar with this technique: https://medium.com/open-graphql/intro-to-aws-appsync-pipeline-functions-3df87ceddac1
> interact directly with the DynamoDB table that was generated from your GraphQL @model directive
@undefobj How does the Lambda function know the name of the underlying table? That name currently isn't exposed in `aws_exports.js`, and even if it were, `aws_exports.js` isn't available to the Lambda function. The name is dynamically generated and will be different in each multienv environment.
For the second scenario, I'm familiar with pipeline resolvers and yes, they could be used for simple validation scenarios. Unfortunately I have validation scenarios that are more complex and not suited to that approach. Per-field Dynamic Group-Based Auth would also be insufficient. As an example, one validation scenario involves validating part of the input (one field, not all) against a third-party REST API.
I think there is a fundamental disconnect here: Amplify allows you to generate a secured REST API (with multiple choices of authentication), but no concession is made toward allowing that REST API to access the data model. This does not seem like an obscure use case to me; quite the opposite: the use cases where you need a custom REST API but _don't_ need to access your own data would seem to be the minority.
You will need to use the table name in the Lambda either directly or via environment variables, etc. I understand that you're trying to dynamically pull it from the exports file, but that's not what it was designed to do. The purpose of the exports file is centralized configuration for frontend apps, not backend apps.
There is no disconnect; I understand what you are trying to accomplish, however it's not something that other customers have requested before. As stated above, you can still access that DynamoDB table from the Lambda function connected to your API Gateway backend, and this is a very common pattern.
Dynamically populating backend config like this might be something worthwhile, but it would need to be designed and thought out correctly as there could be security, cost, and scalability considerations. It sounds like you might want some sort of generic Lambda feature(s) from the Amplify CLI Functions category with centralized configuration to access the resources which are generated by Amplify (DynamoDB, Cognito, S3, etc.). While we don't support this now, if you'd like to open up an issue as a feature request with your ideal workflow, that would be good as we could gauge community interest/feedback for the future.
On the auth concern I would recommend using a custom Lambda function at this point, with or without Pipeline resolvers, as it sounds like your use case is different than other customers.
@undefobj Okay, I'll concede that I should go directly against DynamoDB using `aws-sdk` and not make use of `aws-amplify` on the backend.
> you can still access that DynamoDB table from the Lambda function connected to your API Gateway backend
const AWS = require('aws-sdk')
const dynamo = new AWS.DynamoDB({ apiVersion: '2012-08-10' })
const response = await dynamo.putItem({
  TableName: TABLE_NAME,
  Item: {
    'CUSTOMER_ID': { N: '001' },
    'CUSTOMER_NAME': { S: 'Richard Roe' },
  },
}).promise()
How do I determine the correct value to use for `TABLE_NAME`? That value will change on every run of `amplify env checkout <env>`.
> centralized configuration to access the resources which are generated by Amplify (DynamoDB, Cognito, S3, etc.)
Yes, exactly this. I can't see how Amplify can continue being both (a) the system that is responsible for resource-identifier generation and (b) a system that supports creating and executing backend functions, without (c) making the result of (a) available to (b).
@troygoode You're right, the table names change on every env checkout, but there is a pattern to it: `<constant-table-name>-<env-name>`.
We pass the env name in to your Lambda function via the CloudFormation template, so in your Lambda function you can have some code like this:
let tableName = "<constant-table-name>";
if(process.env.ENV && process.env.ENV !== "NONE") {
tableName = tableName + '-' + process.env.ENV;
}
Having said that, we could also possibly expose the `amplify/backend/amplify-meta.json` file (which could probably include the DynamoDB table names created by the GraphQL transformer) to the Lambda function as an env variable, and the Lambda function could then use that. Let me know what you think about both of these solutions.
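If we did expose it that way, the function side might look something like this sketch (the `AMPLIFY_META` variable name and the lookup path are hypothetical):

// hypothetical: assumes the CLI injected amplify-meta.json as a JSON string env var
const meta = JSON.parse(process.env.AMPLIFY_META || '{}')
// the exact shape of amplify-meta.json is CLI-internal; this lookup path is illustrative only
const graphqlEndpoint = meta.api && meta.api.myapi && meta.api.myapi.output.GraphQLAPIEndpointOutput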
> there is a pattern to it
@kaustavghosh06 Yeah, that is true for the DynamoDB table names. Unfortunately that doesn't appear to be the case for _every_ resource (e.g. the Cognito User Pool ID).
> we can also possibly expose the amplify/backend/amplify-meta.json file
Great idea! That is actually much more useful than `aws_exports.js` for the back-end stuff. The downside is that it can't just be passed straight through into `Amplify.configure()`, but I understand that you don't currently want to support using the Amplify JS library on the backend.
> which could probably include the dynamo table names created by the GraphQL transformer
That'd be swell!
I'm closing this ticket in favor of this new feature request based upon @kaustavghosh06's suggested solution: #689
My DynamoDB tables also include the "GraphQLAPIIdOutput" value in the table name. I'm assuming I'm responsible for adding those to the env variables.
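In case it helps anyone else, composing the full name under that assumption might look like the sketch below (the `API_ID` env var is one you'd add yourself, populated from the `GraphQLAPIIdOutput` value):

// pattern observed for transformer-generated tables: <ModelName>-<apiId>-<env>
let tableName = 'Todo' // base name of the @model type
if (process.env.API_ID) {
  tableName = tableName + '-' + process.env.API_ID // hypothetical env var you add yourself
}
if (process.env.ENV && process.env.ENV !== 'NONE') {
  tableName = tableName + '-' + process.env.ENV
}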
@troygoode I get a type error using this: `const result = await API.graphql(graphqlOperation(myQuery))` with the following query:
const myQuery = `query ListTodos(
$filter: ModelTodoFilterInput
$limit: Int
$nextToken: String
) {
listTodos(filter: $filter, limit: $limit, nextToken: $nextToken) {
items {
id
name
owner
date
description
completed
}
nextToken
}
}
`
TypeError: Must provide Source. Received: undefined
Any idea why this is?
@troygoode `Auth.signUp` and `Auth.signIn` work fine, but `Auth.sendCustomChallengeAnswer()` throws an error like "TypeError: user.sendCustomChallengeAnswer is not a function".
For your information, I'm trying to build an API to verify an OTP. I passed the "cognitoUser" data (returned after signing in with a phone number) and the OTP as parameters. Stuck here. Would you please help me?