graphql-engine: migrations could trigger 'out of shared memory' error on Postgres

Created on 7 Jan 2019 · 9 comments · Source: hasura/graphql-engine

From andrewklau on Discord.

FATA[0019] apply failed: [postgres-error] postgres query error ($.args[112].args)
File: ''
[53200] FatalError: out of shared memory
Hint: You might need to increase max_locks_per_transaction. 

Workarounds:
1) https://github.com/hasura/graphql-engine/issues/1325#issuecomment-452216310
2) https://github.com/hasura/graphql-engine/issues/1325#issuecomment-453825062
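
If raising the limit is acceptable, it can also be set on the server itself. A minimal sketch, assuming a self-managed Postgres and superuser access via psql (max_locks_per_transaction only takes effect after a restart):

# check the current value (the default is 64)
psql -c "SHOW max_locks_per_transaction;"

# raise it persistently (written to postgresql.auto.conf)
psql -c "ALTER SYSTEM SET max_locks_per_transaction = 2000;"

# the new value only applies after a server restart
pg_ctl restart -D /path/to/data/directory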

Labels: cli, question

All 9 comments

The Hasura CLI concatenates the actions from all migration files and sends them to the server as a single bulk query, which Postgres then executes in a single transaction. The advantage is that all migrations run in one round trip to the server, with complete rollback support.

But when the number of migrations is large, the transaction can exhaust Postgres's lock table, producing errors like the one shown above.
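
For illustration, here's roughly what that bulk payload looks like on the wire. This is a sketch, assuming the v1/query endpoint; the two run_sql statements are hypothetical stand-ins for real migration steps:

# each migration's SQL becomes one entry in a single bulk request,
# so the whole batch runs inside one Postgres transaction
curl -X POST http://localhost:8080/v1/query \
  -H "Content-Type: application/json" \
  -H "x-hasura-admin-secret: $ADMIN_SECRET" \
  -d '{
    "type": "bulk",
    "args": [
      {"type": "run_sql", "args": {"sql": "ALTER TABLE foo ADD COLUMN bar text;"}},
      {"type": "run_sql", "args": {"sql": "ALTER TABLE baz ADD COLUMN qux text;"}}
    ]
  }'

The $.args[112].args path in the error above points at one entry (the 113th) of exactly such a batch.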

As a solution, we could execute each migration one by one, thereby issuing one query per migration (i.e. one round trip per migration) and reporting each one's status independently.

Until the modified implementation is available, a workaround is to squash the existing migrations into a single one by dumping the current schema and metadata and adding the dump as one migration, as described in this blog: https://blog.hasura.io/resetting-hasura-migrations/
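
A rough sketch of the squash step, assuming a connection string in DATABASE_URL; the output file name is a placeholder, and the blog above covers resetting Hasura's migration state afterwards:

# dump the current schema (no data) as one SQL file;
# this becomes the single replacement migration
pg_dump "$DATABASE_URL" --schema-only --no-owner -f squashed_up.sql

# export the current Hasura metadata alongside it
hasura metadata export --admin-secret "$ADMIN_SECRET"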

I hit this with my Docker Postgres instance when performing many ALTER TABLE commands. In my use case I don't see it as a Hasura issue; I want all these actions performed in a single transaction. Modifying the docker-compose file to set max_locks_per_transaction worked:

postgres:
  image: postgres
  restart: always
  ports:
    - "5432:5432"
  volumes:
    - db_data:/var/lib/postgresql/data
  command: postgres -c max_locks_per_transaction=2000

Thanks @elgordino, ran into this too, your fix worked like a charm 👍

@elgordino, I also faced this issue and your solution worked perfectly. Thanks. 👍

Just started running into this on Azure Postgres. Don't see a way to set max_locks_per_transaction on that.

Squashing worries me - what happens when this occurs again? If I squash then, how does it know which migrations to apply?

@howientc If you follow this blog you'll see that the state is also updated: https://blog.hasura.io/resetting-hasura-migrations/

I made a script which incrementally applies them:

#!/bin/bash

# load ADMIN_SECRET in from the environment file
source .env

# as we have some gigantic number of migrations, we need to apply them piecemeal
num_attempt=64
while true; do
  echo "migrating $num_attempt"
  if ! hasura migrate apply --up "$num_attempt" --admin-secret "$ADMIN_SECRET"; then
    # didn't work? halve the number attempted to migrate...
    num_attempt=$((num_attempt / 2))
    # ...and bail out once the batch size bottoms out at zero
    if [ "$num_attempt" -lt 1 ]; then
      echo "a single migration still fails; giving up" >&2
      exit 1
    fi
  fi
done

I didn't bother adding anything to detect when it should stop, so if you run this and see [INFO] nothing to apply, you can interrupt it; it's done.
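
If you'd rather not watch for that by hand, the loop body could grep for that log line itself. A sketch, assuming the message text stays stable across CLI versions:

# capture the CLI output so the loop can stop on its own
output=$(hasura migrate apply --up "$num_attempt" --admin-secret "$ADMIN_SECRET" 2>&1)
echo "$output"
if echo "$output" | grep -q "nothing to apply"; then
  break
fi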

@howientc did you figure out how to handle this in Azure?
