Hi. I've tried to scale Hasura GraphQL Engine deployed to Kubernetes and got various errors after creating tables:
"error":"table \"users\" does not exist","code":"not-exists"
It seems like Hasura is not stateless and is caching data in memory. Maybe we need to use something like Redis as a cache.
Hi @Maxpain177, currently modifying the schema while you are running several instances of graphql-engine will result in errors like the one you have seen.
To change the schema:

1. Scale the graphql-engine instances down to 1
2. Apply the schema changes (run the migrations) against that single instance
3. Scale the instances back up
This practice works really well. In development you rarely need to run more than one instance of graphql-engine, so you can make changes to the schema freely, and to deploy the changes to production (which is running many instances of graphql-engine) you just need to wrap hasura migrate apply between two kubectl commands, as sketched below.
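On Kubernetes, the whole flow can look roughly like this (a sketch; the deployment name hasura, the replica count, and the endpoint are placeholders, not values from this thread):

# scale graphql-engine down to a single instance
kubectl scale deployment/hasura --replicas=1

# apply the pending migrations against that single instance
hasura migrate apply --endpoint https://hasura.example.com

# scale back up once the schema change has been applied
kubectl scale deployment/hasura --replicas=3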
We'll document this
We can also use the reload_metadata API call on the other instances to sync up the metadata once the schema has been changed through one of the instances.
If there are 10 instances running, make the schema changes through any one of the instances and then call the reload_metadata action on all the other instances:
POST /v1/query
{
"type": "reload_metadata",
"args": {}
}
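For example, after changing the schema through one instance, the remaining instances could be reloaded with a small curl loop (a sketch; the per-instance hostnames and the admin-secret header are assumptions; adjust them to however your individual instances are addressed and secured):

for host in hasura-1.example.com hasura-2.example.com; do
  # ask each remaining instance to re-read the metadata from the database
  curl -s -X POST "https://$host/v1/query" \
    -H "Content-Type: application/json" \
    -H "X-Hasura-Admin-Secret: $HASURA_ADMIN_SECRET" \
    -d '{"type": "reload_metadata", "args": {}}'
done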
From @coco98 in #1078
Scaling Hasura is as simple as scaling up the number of instances. If the metadata or schema is updated, a rollout (or metadata refresh) should be triggered.
Let's document the process (2 commands basically, but still ;)) for various platforms:
Docker
Heroku
Kubernetes
Fargate
This is an urgent hole in our current docs.
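As one concrete example for the Kubernetes case above, a rollout after a schema or metadata change could be triggered like this (a sketch; the deployment name hasura is an assumption, and kubectl rollout restart needs kubectl 1.15+):

# restart all graphql-engine pods so they pick up the new metadata
kubectl rollout restart deployment/hasura

# alternatively, keep the pods running and use the reload_metadata call shown above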
More suggestions at #1183
When #1574 (the PR for #1182) is merged, the schema can be updated on any one of the Hasura instances and the others will refresh their metadata automatically.
Closing this in favor of #1183.