Postgraphile: How to test?

Created on 10 May 2017 · 25 Comments · Source: graphile/postgraphile

Hey guys, thanks for creating this awesome project. I'm wondering what's the best way to add tests to my own project. I see that this project itself is using Jest and has tests loading the example (the integration tests). However, it seems to need support files in various directories and to do quite a few different things. Do you guys have an example of how you do tests on your own projects? Thanks in advance.

📄 add-to-docs


All 25 comments

I think a general write-up on how to practically build projects using postgraphql might be useful. Some of the questions that arose for me:

Testing

How do you ensure that 1) you wrote your procedures/policies/grants correctly and 2) nobody breaks them later?

Organization

How do you go about organizing your files, i.e. your DB schema? What is a good way to structure your project so that someone new to it can grok it, and so that code review is relatively easy? One file per table? One for tables, one for functions, one for policies, one for triggers?

Talking to other services

While relatively easy, it might help everyone to see a sample project that, for example, supports authenticating through Google OAuth.

FWIW, with some loose pointers I would give number three a shot. I would imagine you just set up a server somewhere that handles the initial call to Google and the callback, and then calls a function in the DB.

I use jest for testing and run a bunch of tests against the database. I have the entire schema written out to a file db/schema.sql using pg_dump -s -O -f db/schema.sql mindtree, and then tests in db/__tests__/*.test.js; e.g. this one enforces that RLS is enabled on every table in my public schema:

import { withRootDb } from '../test_helpers';

test('RLS is enabled for all tables in mindtree_public', () => withRootDb(async (client) => {
  const { rows } = await client.query(`
  WITH nsp AS (
    SELECT oid FROM pg_namespace WHERE nspname = 'mindtree_public'
  )
  SELECT relname AS "table"
  FROM pg_class
  INNER JOIN nsp
  ON (nsp.oid = pg_class.relnamespace)
  WHERE relkind = 'r'
  AND relrowsecurity IS NOT DISTINCT FROM FALSE
  `);
  expect(rows).toHaveLength(0);
}));

The test_helpers file in db has a bunch of DB test helpers, including:

import pg from 'pg';

const withDbFromUrl = async (url, fn) => {
  const client = new pg.Client(url);
  await client.connect();
  await client.query('begin;');
  try {
    await fn(client);
  } finally {
    await client.query('rollback;');
    await client.end();
  }
};

export const becomeUser = (client, userOrUserId = null) =>
  client.query(
    "select set_config('role', 'app_user', true), set_config('jwt.claims.user_id', $1, true);",
    [userOrUserId ? userOrUserId.id || userOrUserId : null]
  );

export const withRootDb = fn => withDbFromUrl(process.env.TEST_DATABASE_URL, fn);
const withAuthenticatorDb = fn => withDbFromUrl(process.env.TEST_DATABASE_AUTHENTICATOR_URL, fn);
export const withAnonymousDb = fn => withAuthenticatorDb(async (client) => {
  await becomeUser(client, null);
  return fn(client);
});
export const withUserDb = fn => withRootDb(async (client) => {
  const result = await client.query(
    'SELECT * FROM mindtree_utils.register_user_or_log_in($1, $2, $3, $4, $5)',
    [null, '[email protected]', 'facebook', '123456', { firstName: 'A', lastName: 'B' }]
  );
  const user = result.rows[0];
  expect(user.id).not.toBeNull();
  await becomeUser(client, user);
  return fn(client, user);
});
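For illustration, a test using these helpers might look like the following (the mindtree_public.posts table and its columns are hypothetical):

// Hypothetical example; the mindtree_public.posts table is made up.
test('users can only see their own drafts', () =>
  withUserDb(async (client, user) => {
    const { rows } = await client.query(
      'SELECT * FROM mindtree_public.posts WHERE is_draft IS TRUE'
    );
    // With RLS in place, only the current user's drafts should be visible.
    rows.forEach(row => expect(row.author_id).toEqual(user.id));
  }));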

I also have withPrepopulatedDb, which adds a savepoint, then loads a bunch of SQL statements from a file to populate the DB with some fixtures so we can run tests against existing data, and rolls back to the savepoint once each test has completed; this is a bit more complex.

One issue with using jest for DB tests is that, because of all the transactions, running in parallel can actually slow performance down (and in rare cases cause deadlocks), so I run jest with --maxWorkers 1. 😞
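In newer Jest versions the same limit can also live in the Jest config rather than the CLI flag; a minimal sketch:

// jest.config.js - equivalent to passing `--maxWorkers 1` on the CLI
module.exports = {
  maxWorkers: 1,
};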

I always reset the DB before running the tests:

# Clear out the test database (ignore errors)
echo "DROP SCHEMA mindtree_utils CASCADE; DROP SCHEMA mindtree_public CASCADE; DROP TABLE migrations CASCADE;"  | psql -Xq "$TEST_DATABASE_URL" >/dev/null 2>&1
# Import latest schema (throw on error)
psql -Xqv ON_ERROR_STOP=1 -f db/schema.sql "$TEST_DATABASE_URL"
# Now run the tests
jest $@

Hope this helps.

Also the register_user_or_log_in you see referenced above is what I call directly from my passport middleware to log users in.

I take it you're working on your project alone, @benjie? We're contemplating using this (as a start, perhaps, before we have time to write our own equivalent schema) and the power of Postgres to replace our existing backend. We're 2-3 developers who need to collaborate. I'm not sure if a straight-up schema dump is the way to go, given that conflicts & rebasing become harder that way. Any experiences to share in that field?

Thanks for your previous answer(s)!

@mull Nope - as part of a team. The schema dump is in the same vein as a Rails schema dump, which works with very large teams - in fact it's pretty much essential to ensure you're all running exactly the same code in the database and that no incompatibilities have crept in (which happens quite a lot when you're working on migrations). It's also very useful to run against your production database to ensure no schema mutations have taken place beyond what you expected.

What I didn't mention above is that I use migrations to manage changes to the schema. I use pure-SQL migrations these days - one up and one down - and currently I'm using db-migrate to create/run them - here's an excerpt from my package.json scripts entry:

    "db:migrate:create": "DATABASE_URL=\"postgresql://localhost/irrelevant\" NODE_ENV=development db-migrate create --sql-file",
    "db:migrate": "db-migrate up",
    "db:migrate:dev": ". ./.env; NODE_ENV=development db-migrate up && npm run db:schema:dump",
    "db:rollback": "db-migrate down",
    "db:rollback:dev": ". ./.env; NODE_ENV=development db-migrate down && npm run db:schema:dump",
    "db:schema:dump": "pg_dump -s -O -f db/schema.sql jemsgraphs",

(I store local environment variables in a .env file.)

Because I'm lazy I also have bash/zsh aliases to migrate/rollback: https://github.com/benjie/dotfiles/blob/master/profile_common#L60-L61

I use CREATE OR REPLACE FUNCTION heavily in my migrations. My down migrations tend to be just a copy of what was in db/schema.sql that I was mutating in the up migration. Before merging I always ensure that migrate && rollback && migrate succeeds, and that rollback results in no changes to db/schema.sql (i.e. that the rollback was correctly implemented).
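One way to script that pre-merge check, assuming the scripts above and that db/schema.sql is committed to git (the "db:check" script name here is made up):

    "db:check": "npm run db:migrate:dev && npm run db:rollback:dev && git diff --exit-code db/schema.sql && npm run db:migrate:dev",

Since db:rollback:dev re-dumps the schema, git diff --exit-code fails whenever the rollback leaves db/schema.sql different from what's committed.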

@benjie cheers! Yeah, I'm not questioning the schema dump itself; your mention of migrations explains it all. :)

If I may ask one more question: are you running this project in production? It looks solid & ready to me, but it helps to know that others have taken the plunge :)

Yes, though not on any product with many users yet (not for lack of trying...)

Thanks for the info guys! @mull on organization, I've broken down the schema and functions by object domain and put them into individual SQL files. My main schema file then creates the schema, users, and privileges, and imports each individual SQL file using \i. The downside is that the \i path is relative to the execution directory, but it's generally not a problem for me since I always run it from the project root.

You may also be interested in the discussion on https://github.com/postgraphql/postgraphql/issues/235 - particularly around apgdiff

Cheers guys. I had a look at Sqitch, which seems very organized and reasonable.

@mull not to diverge too much from the topic, but a "recipe" of sorts for OAuth would be fantastic. I've struggled to know how to deal with this myself.

@mgallagher agreed. Note that I have yet to do anything with postgraphql past the (extremely good!) tutorial. I'm very unfamiliar with the Postgres & JWT combo, and as such I'm not exactly sure how I would go about it. I would imagine running postgraphql through express.js with, say, passport handling the OAuth part. Handling the callback from Google would involve checking for user existence, any validation (domain name, for example), and then a register function that knows how to deal with tokens rather than passwords.
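For what it's worth, a loose sketch of that idea, using passport-google-oauth20 and the register_user_or_log_in function mentioned earlier in the thread (its argument list is copied from the earlier example, so treat the details as assumptions):

// Loose sketch: Google OAuth via passport, logging the user in by calling a
// DB function. The signature of register_user_or_log_in is inferred from the
// earlier comment in this thread; adapt it to your own schema.
const passport = require('passport');
const { Strategy: GoogleStrategy } = require('passport-google-oauth20');
const { Pool } = require('pg');

const pgPool = new Pool();

passport.use(new GoogleStrategy(
  {
    clientID: process.env.GOOGLE_CLIENT_ID,
    clientSecret: process.env.GOOGLE_CLIENT_SECRET,
    callbackURL: '/auth/google/callback',
  },
  async (accessToken, refreshToken, profile, done) => {
    try {
      const { rows: [user] } = await pgPool.query(
        'SELECT * FROM mindtree_utils.register_user_or_log_in($1, $2, $3, $4, $5)',
        [null, profile.emails[0].value, 'google', profile.id,
          { firstName: profile.name.givenName, lastName: profile.name.familyName }]
      );
      done(null, user);
    } catch (e) {
      done(e);
    }
  }
));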

Great insight from @benjie on helping the review process with db dumps!

On the testing side, I've set up integration tests with jest (and snapshot testing) that test the actual graphql output. Happy to share if there's interest.

To keep things manageable I'm going to close this issue as I think it's solved; but if not or you require further help please re-open it.

@benjie can you tell us a little more about your withPrepopulatedDb tricks? I haven't figured out a way to manage my migrations yet and just have all my DB setup in .sql files. I need a way to create the test database using those schema files. Any tips?

// Assumes `fs` is fs.promises and that `withRootDb` (above) is in scope.
let prepopulatedDBKeepalive = null;

export const withPrepopulatedDb = async fn => {
  if (!prepopulatedDBKeepalive) {
    throw new Error("You must call setup and teardown to use this");
  }
  const { client, vars } = prepopulatedDBKeepalive;
  if (!vars) {
    throw new Error("No prepopulated vars");
  }
  let err;
  try {
    await fn(client, vars);
  } catch (e) {
    err = e;
  }
  try {
    await client.query("ROLLBACK TO SAVEPOINT pristine;");
  } catch (e) {
    err = err || e;
    console.error("ERROR ROLLING BACK", e.message);
  }
  if (err) {
    throw err;
  }
};

withPrepopulatedDb.setup = done => {
  if (prepopulatedDBKeepalive) {
    throw new Error("There's already a prepopulated DB running");
  }
  let res;
  let rej;
  prepopulatedDBKeepalive = new Promise((resolve, reject) => {
    res = resolve;
    rej = reject;
  });
  prepopulatedDBKeepalive.resolve = res;
  prepopulatedDBKeepalive.reject = rej;
  withRootDb(async client => {
    prepopulatedDBKeepalive.client = client;
    try {
      prepopulatedDBKeepalive.vars = await populateDatabase(client);
    } catch (e) {
      console.error("FAILED TO PREPOPULATE DB!", e.message);
      prepopulatedDBKeepalive = null;
      return done(e);
    }
    await client.query("SAVEPOINT pristine;");
    done();
    return prepopulatedDBKeepalive;
  });
};

withPrepopulatedDb.teardown = async () => {
  if (!prepopulatedDBKeepalive) {
    throw new Error("Cannot tear down null!");
  }
  const { client } = prepopulatedDBKeepalive;
  await client.query("RELEASE SAVEPOINT pristine;");
  prepopulatedDBKeepalive.resolve(); // Release DB transaction
  prepopulatedDBKeepalive = null;
};

const populateDatabase = async client => {
  await client.query(
    await fs.readFile(`${process.cwd()}/scripts/populate.sql`, "utf8")
  );
  // run queries
  return {/* useful data here */};
};
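Wiring this into a test file might look something like the following sketch (the contents of populate.sql and the shape of vars are up to you):

// Sketch: build the fixture transaction once per file, roll back per test.
beforeAll(withPrepopulatedDb.setup);
afterAll(withPrepopulatedDb.teardown);

test('something against the fixture data', () =>
  withPrepopulatedDb(async (client, vars) => {
    // Query the prepopulated data here; any changes roll back to the
    // 'pristine' savepoint once the callback finishes.
  }));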

I drop/create/migrate the test db before running jest.

Hey!

I'm having trouble with programmatically shutting down a PostGraphile instance. Even if I pass in a pg.Pool (which I'd like to avoid because PostGraphile may need special configuration in the future) and .end() it manually, I'm still left with an open Socket. It seems to be related to watchPg: true.

'use strict'

const wtfnode = require('wtfnode')
const express = require('express')
const {Pool} = require('pg')
const createHealthCheck = require('@bahn-x/node-service-health-check')
const {postgraphile} = require('postgraphile')

const app = express()

const pgPool = new Pool()
const graphile = postgraphile(pgPool, 'postgraphile', {
    watchPg: true,
    disableDefaultMutations: true, // todo: disable all mutations?
    showErrorStack: true,
    appendPlugins: [
        // ...
    ],
    graphiql: true,
    enhanceGraphiql: true
})
app.use(graphile)

setTimeout(() => {
    pgPool.end()
    setTimeout(() => {
        wtfnode.dump()
    }, 1000)
}, 1000)
This yields:

[WTF Node?] open handles:
- File descriptors: (note: stdio always exists)
  - fd 2 (tty) (stdio)
  - fd 1 (tty) (stdio)
- Sockets:
  - 127.0.0.1:57990 -> 127.0.0.1:5433
    - Listeners:
      - connect: Connection.connect @ /Users/j/playground/postgraphile/node_modules/pg/lib/connection.js:59
- Timers:
  - (1000 ~ 1000 ms) (anonymous) @ /Users/j/playground/postgraphile/foo.js:28

What I would actually like to do is have a single postgraphile.close() call that closes all DB connections that PostGraphile has opened. What do you think?

Hi @derhuerst, did you see the "Testing with Jest" guide in the docs? https://www.graphile.org/postgraphile/testing-jest/ I'm planning to overhaul it soon as I've done some improvements, but I'm not quite ready yet.

Hi @derhuerst, did you see the "Testing with Jest" guide in the docs?

Yes, I took a brief look. It doesn't use watchPg though, so I think the problem doesn't appear there. Is your proposal to just use watchPg: false in the tests?

I don't think there's much need for watchPg in the tests - the tests build the GraphQL schema during setup and delete it again in teardown, so if you later run the tests again (e.g. in watch mode) it'll build a fresh schema automatically during setup. I doubt you'd want the schema changing during a test run unless you're specifically testing the watchPg functionality?
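For context, the guide's approach builds the schema once during setup with no watcher involved; roughly something like this minimal sketch (TEST_DATABASE_URL and the app_public schema name are assumptions):

const { createPostGraphileSchema } = require('postgraphile-core');
const { Pool } = require('pg');

let pgPool;
let schema;

beforeAll(async () => {
  pgPool = new Pool({ connectionString: process.env.TEST_DATABASE_URL });
  // Build the GraphQL schema once; with no watchPg there are no watcher
  // sockets left open afterwards.
  schema = await createPostGraphileSchema(pgPool, ['app_public']);
});

afterAll(() => pgPool.end());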

I don't think there's much need for watchPg in the tests [...].

Yes, I totally agree.

Still, I'd love to have a .close() method, just like many other libs do that open sockets or servers, ~so that I don't have to create a pg.Pool just to be able to close it~ so that I don't have to make assumptions about which sockets PostGraphile opens.

That's a reasonable request; would you be interested in opening a PR to achieve it? I don't think it should be very many lines of code.

Here's where properties are added to the middleware, so you can add a close() method:

https://github.com/graphile/postgraphile/blob/3936bc3102fd7fc3c48d54a7a010b4c6408ec3f8/src/postgraphile/http/createPostGraphileHttpRequestHandler.ts#L897-L902

The getGqlSchema method comes from here:

https://github.com/graphile/postgraphile/blob/3936bc3102fd7fc3c48d54a7a010b4c6408ec3f8/src/postgraphile/postgraphile.ts#L229-L233

But more importantly you care about the watchPostGraphileSchema call earlier in that file:

https://github.com/graphile/postgraphile/blob/3936bc3102fd7fc3c48d54a7a010b4c6408ec3f8/src/postgraphile/postgraphile.ts#L93-L97

We currently discard the return result of the watchPostGraphileSchema function; but if you read the postgraphile-core docs you'll see that it's the all-important release function that you want:

https://github.com/graphile/graphile-engine/blob/master/packages/postgraphile-core/README.md#watchpostgraphileschemapgconfig-schemas-options-onnewschema

So if we could store that function and pass it along such that the code in the initial code reference above could access it, we could have a close method. We'd also want to release the pgPool iff we created it, and any other resources that we created for the user.
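A rough sketch of the shape this could take (the variable names are hypothetical, purely to illustrate the idea):

// Hypothetical sketch only; `releaseWatcher` and `ownPgPool` don't exist in
// the codebase, they just illustrate the idea.
let releaseWatcher = null;
if (options.watchPg) {
  // watchPostGraphileSchema resolves to a release function (see the
  // postgraphile-core README linked above).
  releaseWatcher = await watchPostGraphileSchema(pgConfig, schemas, options, onNewSchema);
}

middleware.close = async () => {
  if (releaseWatcher) await releaseWatcher();
  if (ownPgPool) await pgPool.end(); // only end the pool if we created it
};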
