I'm interested to hear how others feel about the idea of using middleware installed on resolver functions to allow all resolvers in a schema to perform common functionality.
As my schema grows I've found more and more of a need to add some of the same logic to all my resolvers. It would be nice if we could add some middleware that would be executed on resolvers in the schema when we go to execute the query. This could allow users to implement things like custom error handling, logging, and authorization on all resolvers without the need to add them to each resolver on the schema.
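To make the idea concrete, here is a minimal sketch of what such a wrapper could look like in userland today. Everything here is hypothetical: `applyMiddleware` is not a graphql-js API, and the middleware signature (receiving `next` plus the usual resolver arguments) is just one possible convention.

```javascript
// Hypothetical sketch: wrap every resolver in a resolver map with shared
// middleware functions. `applyMiddleware` is an illustrative name, not
// part of graphql-js.
function applyMiddleware(resolvers, ...middlewares) {
  const wrapped = {};
  for (const [typeName, fields] of Object.entries(resolvers)) {
    wrapped[typeName] = {};
    for (const [fieldName, resolve] of Object.entries(fields)) {
      // Compose right-to-left so the first listed middleware runs first.
      wrapped[typeName][fieldName] = middlewares.reduceRight(
        (next, mw) => (parent, args, context, info) =>
          mw(next, parent, args, context, info),
        resolve
      );
    }
  }
  return wrapped;
}
```

A logging or authorization middleware would then be an async function of the form `(next, parent, args, context, info) => { ...; return next(parent, args, context, info); }`, applied once to the whole resolver map instead of to each resolver.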
@dereklavigne18 That sounds great 👍
I think it has a potential to address a bunch of feature requests like #284, #662, etc.
One important note is that we should be extremely careful not to add a performance penalty, especially for big queries that resolve asynchronously. See the discussion in #723.
+1 to @dereklavigne18. Custom error-handling and authorization middlewares would save so much time.
What if it were more event oriented? As in, instead of having a middleware sort of thing, we could have an event emitter API of sorts, with a couple of options like:
Later on we can just add more event handlers which can be plugged into.
I don't think this is a good suggestion. It will definitely make GraphQL resolvers more complicated than they should be.
As an alternative approach, you can define your resolvers as async function composition (or something like the koa middleware pipeline). Each function is a middleware. Very likely all top-level resolvers will have the same stages. For example:
```js
// stages for loading logic
[
  'authenticate',
  'load',
  'checkIfFound',
  'authorize',
  'respond'
]
```
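The stage list above can be run with a small koa-compose-style runner. This is a sketch of the general technique, not code from any particular library: each stage receives a shared context and a `next` continuation, so a stage like `authenticate` can throw to short-circuit the rest.

```javascript
// Minimal koa-style pipeline runner: stages run in order, each deciding
// whether to continue by awaiting next().
function pipeline(stages) {
  return (ctx) => {
    let i = -1;
    const dispatch = (n) => {
      if (n <= i) return Promise.reject(new Error('next() called twice'));
      i = n;
      const stage = stages[n];
      if (!stage) return Promise.resolve(); // end of pipeline
      return Promise.resolve(stage(ctx, () => dispatch(n + 1)));
    };
    return dispatch(0);
  };
}
```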
Later you may create a factory function that returns different pipelines for different situations. Let's say that when we want to load a list of objects we have a loadCollection pipeline factory; it can then be used like this in resolvers:
```js
const Query = {
  users: loadCollection('User'), // 'User' can be any model name from your models
  user: loadResource('User'),
  login: loadResource('User', {
    authenticate: false, // make the login query publicly accessible
    load: actions.loadUserByCredentials, // override the load stage
  }),
}
```
```js
const Mutation = {
  createUser: mutateResource('create', 'User', {
    validate: ... // custom validation function
  }),
}
```
You may have different stages (with default implementations, i.e. middlewares) for different types of queries and mutations, or you can customize the default implementation by passing an object keyed by stage name.
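To illustrate, here is one way such a factory could look. All names and the stage behaviors are assumptions sketching the pattern described above, not a published API; a stage set to `false` (as with `authenticate` in the `login` example) is simply skipped.

```javascript
// Hypothetical loadResource factory: default stages, overridable per resolver.
function loadResource(modelName, overrides = {}) {
  const defaults = {
    authenticate: async (ctx) => {
      if (!ctx.user) throw new Error('Unauthenticated');
    },
    load: async (ctx) => {
      // assumes models expose a findById lookup
      ctx.resource = await ctx.models[modelName].findById(ctx.args.id);
    },
    checkIfFound: async (ctx) => {
      if (!ctx.resource) throw new Error(`${modelName} not found`);
    },
    authorize: async (ctx) => {}, // placeholder: allow everything
    respond: async (ctx) => ctx.resource,
  };
  const stages = { ...defaults, ...overrides };
  return async (parent, args, context, info) => {
    const ctx = { parent, args, ...context };
    for (const name of ['authenticate', 'load', 'checkIfFound', 'authorize']) {
      if (stages[name]) await stages[name](ctx); // falsy stage => skipped
    }
    return stages.respond(ctx);
  };
}
```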
The good point about the factory pipeline is that eventually your code becomes declarative: resolvers state which stages run, not how.
This pattern is very similar to the Command pattern from Domain-Driven Design, and it is what I used with GraphQL during a migration from Express.
Note that there is already a middleware implementation in GraphQL-core (see middleware.py and test_middleware) where you can steal ideas from.
We wrote this naive implementation of middlewares here if you are still interested: https://github.com/unirakun/graphql-directives-middlewares
The solution given by @stalniy seems quite good to me, though!
I think we can solve this in userland instead
@sibelius The community already has solutions for this in userland.
I don't think this should be in the graphql-js core package
I've been using an approach like this: https://github.com/JCMais/graphql-yup-middleware
As @sibelius said, and as @danielrearden and I pointed out, we already have solutions for this outside of graphql-js.
Not sure if it makes sense to bring it here. I think that graphql-js exists to implement the graphql-spec.
Any implementation that does not belong to the GraphQL specification should live outside of graphql-js.