Prisma1: Permission Testing and Development

Created on 28 Sep 2017 · 19 comments · Source: prisma/prisma1

Issue by timsuchanek
_Tuesday Sep 12, 2017 at 12:46 GMT_
_Originally opened as https://github.com/graphcool/cli/issues/230_


Problem

Testing

Graphcool's permission queries are powerful and awesome.
But they are currently hard to test - especially getting such tests integrated into a CI environment is difficult. This proposal tries to provide primitives that can be used to test them locally.

When talking about testing permission queries, there are two levels of tests that fit into this picture:

  1. Schema Tests
  2. Integration Tests

As described in this article (https://medium.com/@FdMstri/testing-a-graphql-server-13512408c2fb), tests could also be written at the resolver level, but as Graphcool's GraphQL engine is well tested and auto-generates these parts of the API, it doesn't make sense for an app built on Graphcool to test against the backend implementation.

Instead, there can be (1) basic schema validation of permission queries: when the schema has changed, the already defined permission queries are validated against the currently deployed schema.
However, testing on the GraphQL schema level alone is not sufficient, as permission queries highly depend on the actual data in a project. So (2) testing against a real, deployed test Graphcool project is also a necessity.

Development

In addition to testing, the development workflow for permissions has to improve too - it's currently a very manual process. Queries should be written and tested immediately against the real permission endpoint.

Solution

The idea is to create three new tools that enable a better permission development/testing flow:

  1. A permission playground
  2. An npm package to validate permissions at the schema level
  3. A CLI command to test permissions against a real Graphcool project

Testing

Schema Testing

For schema-level testing, we will provide utilities to download the latest permission schema and validate the permission queries defined in the local graphcool.yml.
This will be part of the CLI and also be exposed as an npm package that can be included in local test flows.

A possible API could look like this:

import {PermissionTester} from 'graphcool-lib'
import fs from 'fs'

const query = fs.readFileSync('./permissions/query-1.graphql', 'utf-8')
const tester = new PermissionTester({projectId: ''})
const {valid, errors} = await tester.validate('Post.create', query)

We may even abstract it to this:

import {PermissionTester} from 'graphcool-lib'

const tester = new PermissionTester({projectId: ''})
const {valid, errors} = await tester.validateAllPermissions('./path/to/graphcool.yml')
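
As a sketch of how this could plug into a CI test run (the test-runner wiring and the environment variable are illustrative; only the proposed PermissionTester API comes from the examples above):

```js
import { PermissionTester } from 'graphcool-lib'
import assert from 'assert'

describe('permission queries', () => {
  it('are valid against the deployed schema', async () => {
    // Project id supplied by the CI environment (illustrative).
    const tester = new PermissionTester({ projectId: process.env.GRAPHCOOL_PROJECT_ID })
    const { valid, errors } = await tester.validateAllPermissions('./graphcool.yml')
    assert.ok(valid, `Invalid permission queries: ${JSON.stringify(errors, null, 2)}`)
  })
})
```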

Integration Testing

A permission query always refers to a specific mutation being performed on a node of a specific type.
It additionally has the context of a user, which is derived from the auth token in the request. That means three pieces of information are needed in order to test permissions locally:

  1. The mutation to test (for example Post.create)
  2. The user performing the action: a combination of the user type name (for example Customer) and the id of the user, a cuid (e.g. cj7hjwj300000cdomatsml2ao). It can also be an unauthenticated user.
  3. The node being mutated / the input data of the mutation

The proposal is that the CLI will expose a command called test:
gc test --mutation Post.create --user-model Customer --user-id cj7hjwj300000cdomatsml2ao --input ./mock-variables/Post.create.json
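
The shape of the mock input file is not specified here; as a purely illustrative sketch, assuming it simply contains the input variables of the Post.create mutation, ./mock-variables/Post.create.json could look like this (field names are hypothetical):

```json
{
  "title": "Hello world",
  "published": false,
  "authorId": "cj7hjwj300000cdomatsml2ao"
}
```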

The command prints a table in this format:

Testing `Post.create` with `Customer` `cj7hjwj300000cdomatsml2ao`
and input `./mock-variables/Post.create.json` ...

Permission Rule  Permitted
---------------  ---------
query1.graphql    ✔
query2.graphql    ✖
query3.graphql    ✔

2 queries permit, 1 prohibits

If you provide an additional parameter --permission-query, a single permission query will be tested:
gc test --mutation Post.create --user-model Customer --user-id cj7hjwj300000cdomatsml2ao --input ./mock-variables/Post.create.json --permission-query ./permissions/query-1.graphql

Testing `Post.create` with `Customer` `cj7hjwj300000cdomatsml2ao`,
input `./mock-variables/Post.create.json` and permission query `./permissions/query-1.graphql` ...

The permission permits the action ✔

Flags like --permission-query are quite long; each will also have a single-character equivalent.
Furthermore, this is only a first draft of the rough functionality - there is also potential for a local or web IDE to take permission testing further.

Development

Permission Playground

In order to provide a faster feedback cycle for permission query development, the idea is to provide an online permission playground that works with the permission endpoint, which will soon be exposed. Variables that are currently injected could simply be added via the variable editor in the Playground.
This way, permission queries can be executed directly against the real system, enabling a faster iteration cycle.
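
For illustration, a permission query iterated on in such a playground might look like the following; the type name is illustrative, and $user_id and $node_id stand in for the variables the system normally injects (here supplied manually via the variable editor):

```graphql
query PermitPostMutation($user_id: ID!, $node_id: ID!) {
  SomePostExists(filter: {
    id: $node_id
    author: { id: $user_id }
  })
}
```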

End Remark

One of the biggest questions still is:

Is the "schema level testing" even necessary if there is the integration based testing?

One argument for the schema level testing is, that it doesn't need the whole system to be set up correctly. On the other hand, the schema needs to be downloaded by the new mutated system - so it's not possible to test without depending on the Graphcool system anyway.
However, the differentiation between schema and data based tests could enhance the overview, especially for bigger projects.

These primitives are planned to be implemented in the course of the next week.


All 19 comments

Comment by kbrandwijk
_Tuesday Sep 12, 2017 at 13:12 GMT_


A few random remarks (I'll add to it later when I have more time):

  • For testing permissions, I would like to define the expected result somewhere. Like most testing frameworks, the components are: input, operation, expected output.

  • Local schema validation would be great for all queries, not just PQ's, but also SSS queries. There are several solutions out there for design time/compile time checking against a graphql endpoint (for example, have a look at the Atom plugin https://atom.io/packages/graphql-autocomplete).

  • Regarding the permission playground, why not add the API to the existing playground (Relay API, Simple API, permission API)?

  • I think that integration testing with providing Typename and cuid for authentication will require another call to the system API to get the token (generateAuthToken). Not necessarily a bad thing, just mentioning.

  • Could this integration strategy also be applied to functions?

  • Is there anything 'between' schema testing and integration testing, where you could also provide the mock data yourself? It would require some heavy lifting on the backend probably, but it would still be nice if it was doable. Because integration testing is all about being in full control over the starting situation too. I did several integration testing implementations, and the steps would normally be:

    • Spin up new test database
    • Initialize data
    • Run tests
    • Kill test database
  • Related to the last point: integration tests should have no side effects. They should roll back any changes made during the test (for example, any mutations)

Comment by marktani
_Tuesday Sep 12, 2017 at 13:15 GMT_


For testing permissions, I would like to define the expected result somewhere. Like most testing frameworks, the components are: input, operation, expected output.

Rather than building this ourselves, we should provide the needed building blocks so you as a developer can decide which of the many testing frameworks you want to use for your tests.

Comment by antho1404
_Tuesday Sep 12, 2017 at 13:30 GMT_


Why not have something really close to most testing frameworks?

describe('user permissions', () => {
  it('should read article', () => {
    expect(User('xxx').canRead(Article('xxx'))).toBe(true)
  })
  it('should comment article', () => {
    expect(User('xxx').canCreate(Comment({ articleId: 'XXX' }))).toBe(true)
  })
})

Possibly with the option to set a project id (a project made for testing) and, even better, a JSON dump of the data; that way, for example, we could run all the tests offline and faster.

Comment by timsuchanek
_Tuesday Sep 12, 2017 at 13:43 GMT_


@kbrandwijk:

For testing permissions, I would like to define the expected result somewhere. Like most testing frameworks, the components are: input, operation, expected output.

Yes, you definitely need to be able to specify the expectation

Local schema validation would be great for all queries, not just PQ's, but also SSS queries. There are several solutions out there for design time/compile time checking against a graphql endpoint (for example, have a look at the Atom plugin https://atom.io/packages/graphql-autocomplete).

We indeed could think about providing a more generic solution for other queries too

Regarding the permission playground, why not add the API to the existing playground (Relay API, Simple API, permission API)?

Good idea, probably the easiest solution for now

Could this integration strategy also be applied to functions?

Yes, the functions that include permission logic could be included in this testing workflow as well

Is there anything 'between' schema testing and integration testing, where you could also provide
the mock data yourself?

That is something we may want to explore further down the road, when we have more experience with the current setup.

integration tests should have no side effects

I totally agree.

Comment by timsuchanek
_Tuesday Sep 12, 2017 at 13:45 GMT_


@antho1404 That is a very nice DSL! Executing all tests locally would require you to run the whole Graphcool system locally, though.

Comment by kbrandwijk
_Tuesday Sep 12, 2017 at 13:51 GMT_


I think the proposal from @antho1404, and the comment by @marktani about using existing test frameworks are great. And I don't think that would require running the system locally. The test methods might just as easily talk to an existing endpoint.
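
As a rough sketch of that idea, the DSL helpers from the comment above could be thin wrappers around requests to a deployed test endpoint; everything below (endpoint, type names, error handling) is illustrative, and authentication is done with a token, in line with the earlier remark about generateAuthToken:

```js
import { GraphQLClient } from 'graphql-request'

// Illustrative endpoint of a deployed test project.
const endpoint = 'https://api.graph.cool/simple/v1/__TEST_PROJECT_ID__'

const Article = id => ({ id })

// A user is represented by an auth token here; an unauthenticated
// user is simply a missing token.
const User = token => ({
  canRead: async article => {
    const client = new GraphQLClient(endpoint, {
      headers: token ? { Authorization: `Bearer ${token}` } : {},
    })
    try {
      // Attempt the read as this user; the deployed permission queries decide.
      const data = await client.request(
        'query ($id: ID!) { Article(id: $id) { id } }',
        { id: article.id }
      )
      return data.Article !== null
    } catch (err) {
      // A denied operation surfaces as a GraphQL error.
      return false
    }
  },
})

// Usage, mirroring the proposed DSL:
// expect(await User(customerToken).canRead(Article('cj7...'))).toBe(true)
```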

Comment by danmkent
_Thursday Sep 14, 2017 at 15:11 GMT_


I agree that the best approach is to make graph.cool work well with third party testing tools and frameworks.

To fit into the kind of integration testing I have done in the past, the requirements would be:

  • API to spin up a new graph.cool project with a schema (probably provided by the new CLI tool, provided it works when executed programmatically rather than interactively).

This would be used once per test batch.

  • API to efficiently clear the data from a project and initialise the database with a new set of data. Ideally, this would allow IDs to be specified in the imported data rather than assigned randomly, so that IDs can easily be referenced in tests. I don't know whether this is possible with the graph.cool architecture. If not, I would want to provide data as JSON and get back the same data structure with IDs added. The format would need to support specifying relationships in the data.

This would be used many times within a test batch to put the database into a known state (hence the importance of it being efficient)

  • API to clear down and remove the test project once the test batch is complete.

For integration tests, I wouldn't be very interested in running the tests against anything other than an actual graph.cool project running on the production infrastructure. For me, the whole point of integration tests is to check that things work in the environment I will be using them in. Testing locally would be useful for development, but I would still want to confirm that there are no differences in behaviour in production.
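
A skeleton of that test-batch lifecycle in a Mocha-style suite might look as follows; the helpers (createTestProject, importData, deleteTestProject) are hypothetical stand-ins for the APIs requested above, not existing Graphcool tooling:

```js
// Hypothetical helpers wrapping the APIs requested above; none of these
// exist in the Graphcool tooling today.
const { createTestProject, importData, deleteTestProject } = require('./test-helpers')

describe('permission integration tests', function () {
  this.timeout(60000) // provisioning a project can be slow

  let project

  before(async () => {
    // 1. Spin up a new test project with the schema (once per batch).
    project = await createTestProject('./graphcool.yml')
  })

  beforeEach(async () => {
    // 2. Clear the data and initialise it with a known fixture before every test.
    await importData(project, './fixtures/base-data.json')
  })

  after(async () => {
    // 3. Clear down and remove the test project once the batch is complete.
    await deleteTestProject(project)
  })

  // it('guest cannot read drafts', async () => { ... })
})
```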

Hey folks, is this on the roadmap? I just wrote a bunch of permission queries and was manually testing them all -- it would be _really helpful_ to be able to write integration tests for them using jest/mocha/tape/etc., where I set up a fake store with some fake data in it and verify that I can or cannot access something. Otherwise I'm a bit worried I'll add a feature and introduce a regression which, when dealing with permissions, would be a rather dangerous bug :/

I agree. I felt slightly uncomfortable switching my production application from the old way of doing everything in the GUI console to the new version-controlled system... I honestly did not manually test all of the permissions; that would have taken a while. I did a couple of tests and assumed the rest would work... not good.

This basically comes down to integration testing in general, with or without permissions. Given a well-defined set of data, and queries and mutations you want to execute, you define an expected output. A permission error can be part of that expected output.
I don't see anything specific regarding permissions in that approach.
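
As a small illustration of that approach, a denied mutation can simply be the expected outcome of an ordinary Jest test (the endpoint and schema fields below are placeholders, and the exact error message is deliberately not asserted):

```js
import { GraphQLClient } from 'graphql-request'

// Placeholder endpoint; the guest client sends no Authorization header.
const guest = new GraphQLClient('https://api.graph.cool/simple/v1/__SERVICE_ID__')

test('a guest cannot create a post', async () => {
  const mutation = `
    mutation {
      createPost(title: "not allowed") {
        id
      }
    }
  `
  // The permission error is part of the expected output of this test.
  await expect(guest.request(mutation)).rejects.toThrow()
})
```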

@kbrandwijk good point. I have an idea of how I'd test my custom functions, though, since they're just JavaScript: I can provide a fake object for input and use proxyquire to spy on things like graphcool-lib methods. I can't apply that to permission queries or the permissions object in graphcool.yml.

The only thing I can think of is a test that runs a full graphcool service (via a docker cluster) and sends the full HTTP requests, which I think would qualify as end-to-end testing.

@timwis Yes, that was what I meant with integration testing: using an actual running service instance (local). There are more parts that can only be tested this way (like subscription queries etc.). All the functions (hooks, subscriptions and resolvers) can be tested in isolation like you mentioned. I just don't see a way to test permission queries in isolation.

I actually just thought of something. Maybe it is possible to mock the permission query endpoint queries. That would still require a 'running' endpoint though to query against. I'll add it to my list.

Basically, you have only one kind of permission query: SomeXXXExists, which is nothing more than the allXXX query returning true if it has any results. This shouldn't be too hard to mock.
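
In other words (with illustrative type, field and variable names), the permission query corresponds to an ordinary filter query on the data, collapsed to a boolean:

```graphql
# Permission query: evaluates to true or false
query PermitPostRead($user_id: ID!, $node_id: ID!) {
  SomePostExists(filter: { id: $node_id, author: { id: $user_id } })
}

# Roughly the same check expressed with the regular query API:
# permitted if the list is non-empty
query EquivalentDataQuery($user_id: ID!, $node_id: ID!) {
  allPosts(filter: { id: $node_id, author: { id: $user_id } }) {
    id
  }
}
```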

@kbrandwijk I agree with your reply to me, my own integration tests should catch these things...I suppose the tests I had in mind would really be testing the permission system itself.

@timwis My idea worked. I have created an endpoint where you can paste in your permission query, and actually run it. This could be a good starting point for permission testing (I think).

~~https://launchpad.graphql.com/l15mwzl3q~~

See below.

This is exactly the same functionality exposed by the /permissions schema that is used by the Playground; we'll update our documentation around this soon.

In short, you can evaluate permission queries against https://api.graph.cool/simple/v1/<serviceId>/permissions by setting the Authorization header to a valid platform token.

@marktani could you give an example of how that works? And would it test the permission settings in graphcool.yml or just the permission queries?

If an end to end test (running the server and using http queries against it) is the only way to truly test permissions, is there a recommended way to wipe the data and insert test data before each test?

@marktani can you update us on the status of this feature request?

I'm building a platform which has a lot of permission queries and it's really hard to test manually. I'm becoming more "anxious" with each permission I add.

I created a simple prototype for this using Mocha.

Here is how I did it

  • I created 3 GraphQL clients using GraphQLClient from graphql-request package

    • the guest client with no token

    • the admin client using a root token

    • the default client which authenticates the current user

  • before each operation

    • I create a new user and set up the last client

    • I manually delete all the data

    • I query all the ids for a particular model

    • I delete the entries one by one using the ids

    • I do the same thing for each model

  • each test

    • is an async function

    • the requests are made to the clients (default, admin, guest)

    • instead of using then() I use await to get the data from the requests

    • assertions are pretty simple with chai.js

Here are the things that could/should be improved

  • a simpler method to erase the whole database should be implemented in the Graphcool Framework, because erasing the entries one by one takes a lot of time – an in-memory database would help a lot and also speed things up
  • the endpoint is hardcoded and I need a programmatic way to obtain it
  • the root token is also hardcoded, but it could be generated automatically, similar to the tokens we get in the custom functions (hooks)
  • a better API is required for the interaction and the creation of GraphQL clients
  • custom resolver functions need to be aware that we are in testing mode so that they can change their behaviour accordingly (to skip sending real emails, etc.)

The prototype works pretty well but it's slow and I don't have any control over the behaviour of the custom functions. Here is a Gist: https://gist.github.com/deiucanta/baa51aa7f82d2cd2ad5735f62e9b0e3b
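
For reference, a minimal sketch of the client setup described above (the endpoint and tokens are placeholders, i.e. exactly the hardcoded values the comment wants to obtain programmatically; GraphQLClient is from graphql-request):

```js
import { GraphQLClient } from 'graphql-request'

const endpoint = 'https://api.graph.cool/simple/v1/__SERVICE_ID__'

// Guest client: no token at all.
const guest = new GraphQLClient(endpoint)

// Admin client: authenticated with a root token.
const admin = new GraphQLClient(endpoint, {
  headers: { Authorization: `Bearer ${process.env.GRAPHCOOL_ROOT_TOKEN}` },
})

// Default client: authenticated as the freshly created test user.
const asUser = token =>
  new GraphQLClient(endpoint, {
    headers: { Authorization: `Bearer ${token}` },
  })
```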

This issue has been moved to graphcool/graphcool-framework.

