Are extensions not supported?
Apollo Server supports the tracing and cache control extensions by default. We would like to expose a plugin API for adding extensions, but for now you can use formatResponse to manually insert additional fields under extensions.
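Something roughly like this should work (a sketch only; `schema` is whatever schema you already pass in, and `myExtension` is just an illustrative key):

    const options = {
      schema,
      // formatResponse receives the execution result; the returned
      // (possibly modified) object is what gets sent to the client.
      formatResponse: (response: any) => {
        response.extensions = {
          ...response.extensions,
          // "myExtension" is a hypothetical key used purely for illustration
          myExtension: { serverTime: Date.now() },
        };
        return response;
      },
    };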
Which sort of extension are you interested in?
Ah, ok. Thanks, that is enough for me then.
The extension use case I was looking into was validating that a query is no more than N levels deep, so that, for example, clients are only allowed to nest 4 levels deep in each query.
You'd parse the query into an AST, validate it, and then throw an error if the depth exceeds 4.
@martijnwalraven why not add an extensions option alongside the tracing option? A simple array would suffice.
Would you guys accept a PR?
@ruiaraujo You can already use formatResponse to add additional fields to extensions. I'd prefer to add a more structured API to do this that will also give you access to events and allow you to modify resolvers, but interested in hearing what your use case is so we can figure out what an API should look like. (For an example of an early unofficial API see apollographql/graphql-extensions).
I was aware of that repo since I checked the implementation of Apollo Tracing. I wanted to instrument things the same way, but with internal metrics. What I can do with formatResponse is extract the data from extensions, but that seems like a roundabout way of going about it.
The GraphQLExtension class seems pretty good for my needs.
@rcrowe I think it's not the role of an extension but the role of a validationRule to parse the query into an AST and validate it. You could compute the depth of the query by walking each OperationDefinition and its selectionSet, roughly as in the sketch below.
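A minimal sketch of such a rule, assuming graphql-js validation rules (a function from ValidationContext to an ASTVisitor) registered through the server's validationRules option; it does not follow fragment spreads, so treat it as illustrative only:

    import { ASTVisitor, GraphQLError, ValidationContext } from "graphql";

    // Reports an error when field selections nest deeper than maxDepth.
    // Note: fragment definitions are visited separately, so spreads are
    // not followed by this simple counter.
    export function depthLimit(maxDepth: number = 4) {
      return (context: ValidationContext): ASTVisitor => {
        let depth = 0;
        return {
          Field: {
            enter() {
              depth++;
              if (depth > maxDepth) {
                context.reportError(
                  new GraphQLError(
                    `Query exceeds the maximum depth of ${maxDepth}`
                  )
                );
              }
            },
            leave() {
              depth--;
            },
          },
        };
      };
    }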
I also want to add my own extensions, but there's no way to actually implement a real extension (like TracingExtension and CacheControlExtension) and have apollo-server use it.
Adding data manually in formatResponse is a workaround, but a bad one. Are there any concrete plans to improve the API for using proper custom extensions?
Would it be possible to take custom extensions from the configuration options (similar to what express-graphql does)?
Do something like this?
let extensions = [];
// ...
if (options.extensions) {
  options.extensions.forEach(function(extension) {
    extensions.push(extension);
  });
}
An extension seems straightforward to implement (unless I have missed something):
https://github.com/apollographql/apollo-tracing-js/blob/master/src/index.ts
import { GraphQLResolveInfo } from "graphql";
import { GraphQLExtension } from "graphql-extensions";

export class CustomExtension<TContext = any>
  implements GraphQLExtension<TContext> {
  requestDidStart() {}

  executionDidStart() {}

  willResolveField(
    _source: any,
    _args: { [argName: string]: any },
    _context: TContext,
    _info: GraphQLResolveInfo
  ) {
    // The returned callback is invoked once the field has been resolved.
    return () => {};
  }

  didResolveField(
    _source: any,
    _args: { [argName: string]: any },
    _context: TContext,
    _info: GraphQLResolveInfo
  ) {}

  requestDidEnd() {}

  // The tuple ends up under `extensions` in the response, keyed by the
  // first element.
  format(): [string, any] {
    return ["custommessage", {}];
  }
}
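And this is roughly how I'd expect to register it, assuming a version of apollo-server that accepts an extensions option taking an array of factory functions (typeDefs and resolvers come from your own setup):

    import { ApolloServer } from "apollo-server";

    const server = new ApolloServer({
      typeDefs,
      resolvers,
      // Each entry is a factory, so a fresh extension instance can be
      // created for every request.
      extensions: [() => new CustomExtension()],
    });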
When batching, am I right in thinking these extensions run for each query? If so, requestDidStart/End is a bit of a misnomer, would queryDidStart/End be more accurate?
Do we also need something similar at a request, rather than query level?
For context, my use-cases are metrics/tracing. For example, I'd like metrics for both query duration and request duration.
Currently I can attach a correlation ID at a request level, which gets propagated via context to resolvers. I might also want to be able to attach an extra correlation ID per query, which would get propagated.
Thinking about it, what I'm interested in is less 'GraphQL extensions' and more 'extending Apollo to add instrumentation'. Sorry if I hijacked this thread, please feel free to point me somewhere else/open a new issue if appropriate :slightly_smiling_face:
@ThomWright Yes, they run for each request. Whether the naming makes sense depends on how you view batching. One way of looking at it is that batching is a transport-level optimization to put multiple GraphQL requests in a single HTTP request.
When you use Apollo Engine, transport batching is actually of limited use, because Engine supports HTTP/2 even when your server doesn't. That means you can run multiple GraphQL requests in parallel over a single HTTP connection, without blocking (which is one of the downsides of transport batching).
What is your use case for performing your own metrics/tracing? Have you looked at using Apollo Tracing and Engine?
Hi @martijnwalraven
Yeah, I've looked at Apollo Tracing and Engine; they simply don't fit nicely into our existing monitoring infrastructure. Right now I'm reluctant to add more 'infrastructure' just for GraphQL; I'd much prefer to consolidate where possible. That said, I expect Apollo Engine provides extra value beyond what we're trying to do, so we might consider it as a complementary system at some point.
For metrics we're using Prometheus, and I'd like to be able to collect metrics such as request durations and error counts. Since we're batching, per-query would be useful.
For tracing (with correlation IDs) our use case is logging, but we might also consider implementing something like OpenTracing in future (I see this is already being discussed here https://github.com/apollographql/apollo-tracing/issues/12). Again, per-query would be useful.
I hope this helps, let me know if you'd like more info.
I also want to add my own extensions!
As stated, allowing custom extensions is the way forward and a better approach than doing this in formatResponse or anything else. Since we were also interested in instrumenting our code, a simple solution we have been using for a long time is to hijack logFunction and use it to instrument parsing, validation, request, and execution cost.
Sample code is here:
import { LogAction, LogMessage, LogStep } from "apollo-server-core";

function traceQuery(): (message: LogMessage) => void {
  const initTime = process.hrtime();
  // Initialise the start times of all LogActions
  const startTimes = {
    [LogAction.request]: initTime,
    [LogAction.parse]: initTime,
    [LogAction.validation]: initTime,
    [LogAction.execute]: initTime
  };
  return message => {
    const { action, step } = message;
    switch (step) {
      case LogStep.start:
        startTimes[action] = process.hrtime();
        break;
      case LogStep.end:
        // metricName = LogAction[action]
        // duration = process.hrtime(startTimes[action])
        break;
      default:
        // discard status step
        break;
    }
  };
}
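To wire it up, something like this should work (assuming an apollo-server-express 1.x-style setup where the per-request GraphQL options accept a logFunction; app, bodyParser, and schema come from your own setup):

    import { graphqlExpress } from "apollo-server-express";

    app.use(
      "/graphql",
      bodyParser.json(),
      // A new traceQuery() closure per request, so each request gets its own timers.
      graphqlExpress(() => ({
        schema,
        logFunction: traceQuery()
      }))
    );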
Hope this helps :smile:
PS: It's really not a good approach, as it relies on the internals of apollo-server-core and might break if the API changes.
Here's an initial (I guess) PR, https://github.com/apollographql/apollo-server/pull/934, which enables us to support additional extensions.
I did it because I want to build some new extensions; one of them is for caching via cache-tags HTTP headers, without having to pull apart internal functions to make it work smoothly.
I'm working on this!
The new request pipeline in #1795 brought with it a new generation of extension points, most of which are intended to replace hooks that were never officially supported in the undocumented graphql-extensions package this issue refers to. That package was originally designed for internal use by our apollo-engine-reporting (though, to be fair, at the time, apollo-tracing) packages and was never intended for general consumption.
The new plugins cover a wider range of lifecycle events than just those of the request, including server startup. We feel that this API will be more sustainable in the long run, but we're certainly looking for feedback on it, as we want to make sure it covers all the use cases prior to sunsetting graphql-extensions support. We're well aware that the new plugin API doesn't currently provide resolver-level wrapping, but we're working on adding that.
The new plugin API is documented within https://github.com/apollographql/apollo-server/pull/2008 at the moment, so please do give it a look and try it out (both the documentation and the plugin API, please!). Additionally, for a concrete example of a functioning plugin, take a look at the apollo-server-plugin-response-cache plugin.
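To give a rough idea of the shape (a sketch only; the documentation in the PR above is authoritative, and the timing metric here is just an example):

    import { ApolloServerPlugin } from "apollo-server-plugin-base";

    const myPlugin: ApolloServerPlugin = {
      // Runs once, when the server is starting up.
      serverWillStart() {
        console.log("Server starting up");
      },
      // Runs for each request; the returned object hooks into later
      // stages of that request's lifecycle.
      requestDidStart() {
        const start = Date.now();
        return {
          willSendResponse() {
            console.log(`Request took ${Date.now() - start}ms`);
          }
        };
      }
    };

    // Registered via the `plugins` option:
    // new ApolloServer({ typeDefs, resolvers, plugins: [myPlugin] });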
Thanks!