Guide on horizontally scaling Hasura, setting up auto-scale and benchmarking how fast auto-scale works on a substrate like GKE.
@tirumaraiselvan Can you suggest a full list of topics / a skeleton that we need to cover? That would make it easier for someone to pick this up.
Here is the list (leaving it to @rikinsk to structure it):
How to add or remove Hasura nodes without downtime
How to add/remove/upgrade Postgres replicas without downtime
Strategies for HA / horizontal scaling: backups, monitoring (maybe Prometheus?), etc.
For complex setups, the hard part is not deploying it, but maintaining it: logs, upgrades, incidents, monitoring, etc.
Deploying it is as easy as copying and pasting from a blog post.
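To make the autoscaling topic concrete, here is a minimal sketch of what horizontal autoscaling could look like on Kubernetes/GKE. This is not an official manifest; it assumes a Deployment named `hasura` already exists, and all names and thresholds are illustrative:

```yaml
# Hypothetical HorizontalPodAutoscaler for a Hasura deployment.
# Assumes an existing Deployment named "hasura"; values are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hasura
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hasura
  minReplicas: 2          # keep at least two nodes for HA
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Since graphql-engine is stateless (all state lives in Postgres), scaling on CPU like this is plausible, but a real setup would want to benchmark which metric actually correlates with load.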
Tracked here #940
Now that #1182 is live, this issue seems like a better place to track documenting a guide on scaling.
Hasura seems to scale well, and is easy enough to load-balance and auto-deploy.
It's the load on Postgres caused by subscriptions that concerns me. Each unique, open subscription query generates an SQL query every second.
My initial thoughts --
Read/write splitting would be helpful for horizontal scaling (it's easy, and saves having to use a proxy)
https://github.com/hasura/graphql-engine/issues/1847
Subscriptions should be optional on a per-table basis
Subscription permissions per-role ie "Allow role 'X' to use subscriptions"
I would prefer per-(role, table) permissions at the server, as that sits well with the existing system. The console can have UX for 'allow subscriptions for role X on all tables'. https://github.com/hasura/graphql-engine/issues/1892
It's the load on Postgres caused by subscriptions that concerns me.
This is changing very soon (end of this week, work in progress). The new architecture will cut down load on postgres significantly. We'll document the architecture.
Is this #1934 or something else?
I am evaluating Hasura and I have concerns about subscriptions scalability, I'd like to understand how they work... my use case could be subscription-heavy.
@massimiliano-mantione Yes, the optimisations to subscriptions did go as part of #1934 and a couple of other small PRs. These changes make subscriptions highly scalable. We are a few days away from publishing numbers from performance benchmarks.
What kind of scale are you expecting? (You can DM @coco98 /@tanmaig#8316 or me / @sandip#8048 on the community Discord server if that's preferable)
Very interested in this too, also have a potential workload that would be very subscription heavy and require high scalability.
Are subscriptions still polling the DB after the latest optimizations?
Any news on subscriptions polling the DB, as @pjoe already asked?
@pjoe @Giorat sorry for the delay in answering your question.
I would like to refer you to this article: https://github.com/hasura/graphql-engine/blob/master/architecture/live-queries.md. It describes the current architecture and how we handled 1M concurrent subscriptions in our benchmarks.
Let us know if you have any additional questions 🙂
Is there a way to scale horizontally / vertically in kubernetes w/ the cli-migrations container?
Or should the process that does the migrations be separated from the scaling? Because I imagine that if we run 3 of the cli-migrations containers, they will all attempt to lock the DB due to migrations.
@hrgui You can scale the cli-migrations image to multiple replicas. When a new rollout happens, the default strategy on Kubernetes is to replace pods one by one. Hence, only one pod of the new rollout will be running at any point, and once that pod finishes applying migrations, the other pods will just skip migrations.
This will need health checks to be configured too.
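To illustrate the rollout behaviour described above, here is a sketch of the relevant Deployment fields. This is not an official manifest; the names, image tag, and values are illustrative assumptions:

```yaml
# Sketch of a Deployment using the cli-migrations image with a
# one-pod-at-a-time rollout and health checks on /healthz.
# All names and values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hasura
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring up one new pod at a time
      maxUnavailable: 0  # keep old pods serving until the new one is ready
  selector:
    matchLabels:
      app: hasura
  template:
    metadata:
      labels:
        app: hasura
    spec:
      containers:
        - name: hasura
          image: hasura/graphql-engine:v1.3.2.cli-migrations
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
```

The readiness probe is what gates the rollout: a new pod only becomes ready (and the rollout only proceeds) once migrations have been applied and `/healthz` succeeds.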
Hi,
is this guide still in the making? I am specifically interested in whether running Hasura on Google App Engine Flexible Environment (#1550) or Google Cloud Run is possible now. There are other issues, #1078 and #940, that point here, but I could not find any reference to a scaling guide.
So, is this/will this be available somewhere?
Thanks!
For anyone wondering how to run graphql-engine on GAE (Google App Engine):
Dockerfile:

```dockerfile
FROM hasura/graphql-engine:v1.3.2
```

app.yaml:

```yaml
runtime: custom
env: flex
service: hasura

network:
  session_affinity: true

liveness_check:
  path: "/healthz"

readiness_check:
  path: "/healthz"

env_variables:
  HASURA_GRAPHQL_DATABASE_URL: postgresql://USER:PASS@IP_ADDR/DATABASE
```
> @hrgui You can scale the cli-migrations image to multiple replicas. When a new rollout happens, the default strategy on kubernetes is that it will replace pods one by one. Hence, only one pod of the new rollout will be running at any point and once that finishes applying migrations, other pods will just skip migrations.
> This will need health checks to be configured too.
@shahidhk Is this still valid for the v2 migrations? Now that metadata is separate, I can see the following scenario:
Between steps 4 and 5, I think we risk unavailability due to incorrect metadata. Is this a possible scenario?
I think a horizontal scaling guide covering migrations and so on would be useful to have.
Hi... would absolutely love to know exactly what Hasura is thinking of in terms of horizontally scaling. A guide will go a long way to help the community understand.
I'm also looking at this - https://github.com/hasura/graphql-engine/issues/1182 and https://github.com/hasura/graphql-engine/pull/1574
It appears that once a metadata update hits the DB, all instances connected to the DB will be triggered to update. Is this correct?
How does this work with a rolling update across a cluster? Will it signal ALL instances to update then? Won't that break the rolling deployment?
@jync Thanks for the links to the other issues. It indeed seems that https://github.com/hasura/graphql-engine/pull/1574 made it so that the metadata is automatically reloaded by all running instances.
Does that make the scenario described in my other comment (https://github.com/hasura/graphql-engine/issues/1183#issuecomment-722326103) even worse (a faulty instance restarting and overriding the metadata with an old version)?
I've seen Hasura not start if the hdb_catalog is from an older version of Hasura.
If metadata versioning is incorporated, I'd imagine you'd do something similar (i.e. no-op if the incoming schema version is higher than the version found in the metadata files)?
But I'm not sure if the metadata is being versioned. There is a metadata/version.yaml file, but I think this captures the metadata schema version (i.e. the version that Hasura understands).