Consider the following schema:

```graphql
type Member {
  id: Int!
  name: String!
  posts: [Post]
}

type Post {
  id: Int!
  title: String!
  text: String!
  like: Int!
  dislike: Int!
}
```
Following is a sample query:

```graphql
{
  member(id: 4636) {
    posts {
      like
      dislike
      title
      text
    }
  }
}
```
This performs the required database fetch, returning the latest posts by member 4636 with a default limit of 10.
I want to cache this particular query in some caching store (e.g. redis), where the key would be

`member:4636:${query}`

where `query` is the actual GraphQL query text.
When any of the member's posts change, I invalidate all cache entries matching the pattern `member:4636:*`.
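The key scheme and wildcard invalidation described above could be sketched like this. A plain `Map` stands in for redis so the example is self-contained; with a real redis client you would typically find the keys with `SCAN` and a `MATCH member:4636:*` pattern instead of iterating:

```javascript
// In-memory stand-in for redis (illustrative only).
const cache = new Map();

// Build the cache key described above: member:<id>:<query text>.
function cacheKey(memberId, query) {
  return `member:${memberId}:${query}`;
}

// Store a GraphQL response under the member/query key.
function cacheResponse(memberId, query, response) {
  cache.set(cacheKey(memberId, query), response);
}

// Invalidate every cached entry for one member, i.e. member:<id>:*.
function invalidateMember(memberId) {
  const prefix = `member:${memberId}:`;
  for (const key of cache.keys()) {
    if (key.startsWith(prefix)) cache.delete(key);
  }
}
```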
My questions are:
My current method is to parse the query using GraphQL's `parse` and `visit` functions to get the memberId; the response from GraphQL is then cached under the key above.
Doing it this way, I lose all of the error handling and validation that was previously handled by GraphQL.
I strongly suggest against using the GraphQL query text itself as part of a cache key: small variations in the query can result in an explosion of cache entries. Typically a cache layer lives below GraphQL rather than above it.
Is it possible to add this sort of caching inside GraphQL's resolve function (since that has access to the member id as well as the query)?
Yes, in fact that's what I suggest you explore. Since GraphQL assumes nothing about how your resolve functions actually access data, the resolver is a good place to check a redis or memcached cache before falling through to a more expensive database query.
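A minimal sketch of that cache-aside pattern inside a resolve function. Both the `Map` (standing in for redis) and the hard-coded `db` object are hypothetical stand-ins, not a real driver:

```javascript
// Toy database: posts keyed by member id, with a latest-posts query.
const db = {
  posts: {
    4636: [{ id: 1, title: 'Hello', text: '...', like: 3, dislike: 0 }],
  },
  latestPosts(memberId, limit = 10) {
    return (this.posts[memberId] || []).slice(0, limit);
  },
};

// In-memory stand-in for redis.
const postCache = new Map();

// resolve() for the `posts` field: try the cache first, fall through to the
// expensive database query on a miss, then populate the cache.
function resolvePosts(member) {
  const key = `member:${member.id}:posts`;
  if (postCache.has(key)) return postCache.get(key);
  const posts = db.latestPosts(member.id);
  postCache.set(key, posts);
  return posts;
}
```

With a real redis client the get/set calls would be async and the resolver would return a promise, which GraphQL handles natively.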
Thanks for the reply.
I did think about replacing the expensive db op with a cache lookup in resolve. Using DataLoader helped further, since the cache/db choice moves into the loader and operations are batched. The problem with this approach is that it still requires fetching n components and combining them together.
Routing to the cache instead of the db in resolve helped only marginally in our case. What did improve performance was caching the whole GraphQL response.
Is there a way or method to cache the response tree so as to reduce the n-component fetch?
That's a very context-dependent question, @cosmosgenius. How are you combining the fetch of n components using DataLoader? What kind of DB back-end do you have? Do you have appropriate indexes on the DB to make that query fast? I think your best bet here is to ask more concrete/specific questions on Stack Overflow, providing detailed context. It certainly is possible to put a response-level cache in front of a GraphQL server, but like @leebyron said, you can easily run into cache overflow (again, depends very much on your context where the thresholds will lie).
@wincent Thx for the response. Assuming the DB back-end optimization is perfect, the problem I was trying to solve was to cache the response from the resolver.
About the n components I was talking about: in the given example there are 3 components, and each component is fetched using a different loader rather than combined into one.
If the response from the resolver is cached, I can pass the cached data at the root, thereby requiring no further resolution. The only fetch required would be for the data not present in the tree.
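One way to read "pass cached data at the root" as a sketch. All names here (`resolvePostsFromTree`, `fetchPostsFromDb`) are hypothetical; the idea is that resolvers read from a previously cached response tree and only fall through to a loader/db call for branches missing from it:

```javascript
// Made-up stand-in for the expensive per-component fetch.
function fetchPostsFromDb(memberId) {
  return [{ title: 'fresh from db', text: '...', like: 0, dislike: 0 }];
}

// If a cached response tree for this member is available, return its posts
// branch directly (no further resolution); otherwise fetch from the db.
function resolvePostsFromTree(cachedTree, memberId) {
  const cachedMember = cachedTree && cachedTree.member;
  if (cachedMember && cachedMember.id === memberId && cachedMember.posts) {
    return cachedMember.posts; // served from the cached tree
  }
  return fetchPostsFromDb(memberId); // only fetch what the tree lacks
}
```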
I'll post the same question on Stack Overflow and provide the ref there.
@cosmosgenius Did you get anywhere with this?