Graphql-engine: Requesting an AWS Lambda setup guide

Created on 13 Jan 2019 · 9 comments · Source: hasura/graphql-engine

Can you please provide a guide for setting up Hasura on AWS Lambda? For users looking for a completely serverless stack, this would be great. Bonus points for configurations for Netlify Functions or Serverless. :)

question

All 9 comments

As pointed out by Elgordino on Discord, a Lambda execution environment would benefit greatly from connection pooling to the database, as described in this article: https://spotinst.com/blog/2017/11/19/best-practices-serverless-connection-pooling-database/
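To illustrate the pattern the linked article describes: in Lambda, objects created outside the handler survive across warm invocations of the same container, so a module-level connection acts as a one-connection "pool". This is only a sketch; sqlite3 stands in for Postgres so the example is self-contained, and with Hasura you would instead use a Postgres driver (or let a proxy like PgBouncer do the pooling).

```python
# Connection-reuse sketch for a Lambda container. sqlite3 is a stand-in
# for Postgres so the example runs anywhere; swap in psycopg2 for real use.
import sqlite3

_connection = None  # lives for the lifetime of the warm Lambda container


def get_connection():
    """Create the connection once; reuse it on subsequent warm invocations."""
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(":memory:")
    return _connection


def handler(event, context):
    conn = get_connection()
    row = conn.execute("SELECT 1").fetchone()
    return {"statusCode": 200, "body": str(row[0])}
```

The key design point is that the expensive connect happens at most once per container, not once per request, which is what makes database access from Lambda tolerable without a dedicated pooler.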

We should be able to get notifications working in Lambda as well, using the techniques outlined in this article: https://aws.amazon.com/blogs/database/stream-changes-from-amazon-rds-for-postgresql-using-amazon-kinesis-data-streams-and-aws-lambda/
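The approach in that article pushes RDS change records onto a Kinesis stream and has Lambda consume them. A minimal consumer sketch is below; the event shape matches what Kinesis delivers to Lambda (base64-encoded payloads under `Records[*].kinesis.data`), but the fan-out step is left as a comment since it depends on your setup.

```python
# Sketch of a Kinesis-triggered Lambda consuming database change records.
import base64
import json


def handler(event, context):
    changes = []
    for record in event.get("Records", []):
        # Kinesis delivers each record's data base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        changes.append(json.loads(payload))
    # In a real setup you would fan the changes out here, e.g. call the
    # webhooks that Hasura event triggers would otherwise invoke.
    return changes
```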

@robnewton One of the problems I mentioned with Lambda is that GraphQL subscriptions won't work well. Subscriptions are long-lived WebSocket connections, which are a better fit for a 1-to-n scaling environment like containers than for a 0-to-n scaling environment like serverless/Lambda.

As we do this, would love to understand what the top motivations for moving to lambda are on your end! For example:

  1. No-ops
  2. 0 to n scaling: cost

If it's the two things above, then while we get around to figuring out a Lambda runtime, it might be faster to get started with Fargate or similar on AWS, which also involves no ops. If you start with the lowest tier and set up autoscaling, your base spend will be $8/month (0.25 vCPU + 0.5 GB RAM). There will be no unexpected caveats/gotchas about running Hasura this way and no unexpected performance impact.
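As a back-of-the-envelope check on that Fargate figure: the per-hour rates below are illustrative (roughly in line with 2019 us-east-1 pricing), not authoritative, so check the current AWS pricing page before relying on them.

```python
# Rough Fargate cost estimate for the smallest task size.
VCPU_PER_HOUR = 0.04048   # USD per vCPU-hour (assumed, illustrative)
GB_PER_HOUR = 0.004445    # USD per GB-hour (assumed, illustrative)
HOURS_PER_MONTH = 730     # average hours in a month

monthly = (0.25 * VCPU_PER_HOUR + 0.5 * GB_PER_HOUR) * HOURS_PER_MONTH
print(f"~${monthly:.2f}/month for 0.25 vCPU + 0.5 GB RAM")
```

With these assumed rates the estimate lands around $9/month, in the same ballpark as the $8 figure quoted above.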

Thanks for the thoughtful response.

Yes, we spoke on Discord about the lack of notifications in Lambda, and for me that would be a problem, as our design depends on notifications. As a suggestion, you might take a look at the comments above for ideas on handling at least the webhooks in a serverless context. For subscriptions, you might consider the new WebSocket support in Amazon API Gateway (https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway). I hope to spark ideas about alternative implementations of Hasura so that it can eventually be available as a 100% serverless deployment.
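To make the API Gateway WebSocket suggestion concrete: API Gateway terminates the socket and invokes a Lambda per message, passing the route key ($connect, $disconnect, or $default) in `requestContext`. The sketch below shows the dispatch shape only; the in-memory `connections` set is a hypothetical stand-in, since a real deployment would persist connection IDs in DynamoDB (each invocation may be a fresh container) and push messages back via the API Gateway Management API.

```python
# Sketch of a single Lambda handling API Gateway WebSocket routes.
connections = set()  # hypothetical store; use DynamoDB in practice


def handler(event, context):
    route = event["requestContext"]["routeKey"]
    conn_id = event["requestContext"]["connectionId"]
    if route == "$connect":
        connections.add(conn_id)
    elif route == "$disconnect":
        connections.discard(conn_id)
    else:
        # $default: an incoming message. A real bridge would push data back
        # to subscribers via the API Gateway Management API here.
        pass
    return {"statusCode": 200}
```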

That said, I can understand the hesitation to invest time in a Lambda runtime if it wouldn't benefit a large enough group here. Let me share my thoughts in the hope that the team's schedule allows at least an investigation.

My main reasons for requesting this are cost and the desire for a completely serverless stack for our application. Our application is an internal-facing operational tool with only a small handful of users, amounting to maybe 1,000 GraphQL queries and mutations a day. At such a low volume, running Hasura in Lambda would drop our cost from $8/month to $0, and we would have a 100% serverless stack with no server uptime or scalability concerns (Hasura and the PostgreSQL database are the only components of our stack that don't already follow this pattern). A Lambda runtime option would also allow for more streamlined, automated, zero-cost dev/test environment provisioning with much less overhead, which I know others would find beneficial as well.

Hope this helps explain where I'm coming from, and provides some useful articles about the ever-evolving AWS platform.

Our application is an operational tool that is only internally facing and has only a small handful of users amounting to maybe 1000 GraphQL queries and Mutations a day.

@robnewton Why not put Hasura on a free-tier EC2 instance, given that volume of requests? You can also put it in the same VPC as your Postgres and save several seconds of cold start vis-à-vis Lambda.

I understand the use case, but I'm just offering another workaround apart from Fargate.

@robnewton also, totally understand where you're coming from. But till we get around to getting it packaged for Lambda (with caveats), these are just workarounds for where you're currently at.

Thanks for the detailed notes on your use-case!


Thank you both for the responses and the suggestion regarding the free-tier EC2. I will look into that now, but I still hope for a Lambda implementation one day in the future. Thanks for such a great project!

@robnewton We don’t have immediate plans to have Hasura run on AWS Lambda yet. Closing this for now, and we’ll re-open this if we take this up in the future 🙂

At least for my use case, I would be happy to trade off subscription support to be able to run in Lambda :D
