Apollo-client: Cache Field Policy: Offset pagination example

Created on 23 Jun 2020 · 5 comments · Source: apollographql/apollo-client

I wanted to replace a deprecated updateQuery usage with field policies. For the repro here I followed the example from https://www.apollographql.com/docs/react/v3.0-beta/caching/cache-field-behavior/#handling-pagination, but I've also tried the offsetLimitPagination utility policy added in #6464, with the same problem.
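
For context, the policy shape I'm going for is roughly this (a simplified sketch with a plain concatenating merge rather than the docs' exact offset-aware one; the people field matches the sandbox):

import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        people: {
          // offset/limit should not create separate cache entries.
          keyArgs: false,
          // Append each incoming page to the list already in the cache.
          merge(existing = [], incoming) {
            return [...existing, ...incoming];
          }
        }
      }
    }
  }
});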

Intended outcome:
Two uses of the same query behaving independently, one being able to fetchMore items and have them merged in by policy.

Actual outcome:
More items appear to be fetched, merged and put into the cache, but the fetchMore call does result in the useQuery ultimately returning the new merged data.

How to reproduce the issue:
Open this sandbox and note the behavior when pressing the fetchMore button (the requests and so on are made): https://codesandbox.io/s/youthful-water-3qjsx?file=/src/index.js


This is the most basic form of a number of scenarios like this where I've failed to get usable behavior. If I had to take a shot in the dark, I would say there is a design problem in how keyArgs (I think) controls both cache keying and useQuery invalidation (at least it seems that non-key args (variables) changing via fetchMore do not trigger a rerender the way one might expect).
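
To make that guess a bit more concrete, this is the keyArgs distinction I have in mind (a hypothetical sketch, not my actual code; peopleByGroup is made up purely for contrast):

import { InMemoryCache } from "@apollo/client";
import { offsetLimitPagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // keyArgs: false (the default for offsetLimitPagination()):
        // offset/limit never split the cached list, so every use of the
        // field shares one entry and fetchMore only changes non-key args.
        people: offsetLimitPagination(),
        // With keyArgs, the cache keeps one list per distinct value of
        // the listed argument(s).
        peopleByGroup: offsetLimitPagination(["group"])
      }
    }
  }
});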

All 5 comments

I'm not sure I understand the problem. Do you not want the Fixed Names list to change after fetching more names? Did you leave out the word "not" somewhere in your description?

When I use this code:

import { ApolloClient, InMemoryCache } from "@apollo/client";
import { offsetLimitPagination } from "@apollo/client/utilities";

const client = new ApolloClient({
  cache: new InMemoryCache({
    typePolicies: {
      Query: {
        fields: {
          // Merge paginated Query.people results into a single list.
          people: offsetLimitPagination()
        }
      }
    }
  }),
  link // the terminating HTTP link, defined elsewhere
});

both lists update after I click the button, which is what I would expect. Both useQuery calls return the merged data.

I'm sorry that this is so unclear; I'm having a hard time putting this into words.

Actually, the behavior I was ultimately hoping to achieve was one where the limit on the Fixed Names list would still be respected (i.e. via a field policy read function), regardless of additional data being merged in (i.e. via a fetchMore/merge setup elsewhere in the app, like in the example). I suppose the solution is ultimately that I will have to limit the returned data at the call sites (i.e. useQuery(GET_PEOPLE, { variables: { limit } }).data.people.slice(0, limit)).
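
In other words, something like this at the call site (a rough sketch; the FixedNames component, the "./queries" module, and the person fields are placeholders for the sandbox code):

import React from "react";
import { useQuery } from "@apollo/client";
import { GET_PEOPLE } from "./queries"; // placeholder for the real query document

function FixedNames({ limit = 3 }) {
  const { data } = useQuery(GET_PEOPLE, { variables: { offset: 0, limit } });
  // The cached list can grow beyond what this component asked for once
  // another part of the app calls fetchMore, so re-apply the limit here.
  const people = data ? data.people.slice(0, limit) : [];
  return (
    <ul>
      {people.map(person => (
        <li key={person.id}>{person.name}</li>
      ))}
    </ul>
  );
}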

Ahh, I believe I see what you mean now.

My thinking about pagination field policies has evolved quite a bit over the last few months. While working on #6464, specifically, I realized that fetchMore embodies an approach where the original query receives a larger and larger list as more data is fetched, even though the variables/arguments for the original query have not changed. In this approach, it's the responsibility of the application code that consumes that data to slice it up into smaller pages (if desired), exactly as you suggested. I think this is a pretty reasonable way to handle the problem, because it gives more flexibility to the application to decide how to present the data. Also, it means you might only need a merge function and no read function, because there's no need to use args to slice the data when reading it from the cache.
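
Roughly, the component side of that first approach looks like this (a sketch only; the GET_PEOPLE document, the people field, and the page size are placeholders rather than code from your sandbox):

import { gql, useQuery } from "@apollo/client";

// Placeholder query document.
const GET_PEOPLE = gql`
  query GetPeople($offset: Int, $limit: Int) {
    people(offset: $offset, limit: $limit) {
      id
      name
    }
  }
`;

function usePeopleList() {
  // The original variables never change; fetchMore just grows the cached
  // list this query watches, via the field's merge function (for example
  // the one installed by offsetLimitPagination()).
  const { data, fetchMore } = useQuery(GET_PEOPLE, {
    variables: { offset: 0, limit: 10 }
  });

  const people = data ? data.people : [];

  function loadMore() {
    // No updateQuery callback needed in this approach.
    return fetchMore({
      variables: { offset: people.length, limit: 10 }
    });
  }

  // Consumers can slice `people` into smaller pages however they like.
  return { people, loadMore };
}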

However, as the documentation suggests, it's also possible to have a read function that does the slicing for you, so that reading works more like the inverse of writing/merging. The additional caveat with this approach is that you need to update the variables of the original query every time you call fetchMore, by attaching a callback to the promise it returns: obs.fetchMore(...).then(() => obs.setVariables(updatedVars)). This can be somewhat tricky compared to just letting application code handle the full list, but it's doable.
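
A sketch of that read/merge pairing (again, the people field and the offset/limit arguments are placeholders):

import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        people: {
          keyArgs: false,
          // Write each incoming page into the full list at its offset.
          merge(existing, incoming, { args: { offset = 0 } }) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
          // Read back only the window described by the current arguments,
          // the inverse of what merge wrote.
          read(existing, { args: { offset = 0, limit = existing ? existing.length : 0 } }) {
            return existing && existing.slice(offset, offset + limit);
          }
        }
      }
    }
  }
});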

Between these two approaches, I think I like the first one better, but this is a relatively recent realization, so I definitely think the documentation should cover both.

Thank you so much for clearing this up, Ben. I wanted to make sure this is the intended way to go, as it felt like there might have been a blind spot or something.

Both application-code solutions you've suggested seem sensible; I just gotta get my head out of the old-school fetchMore gutter, I think 😁

Field policies are so confusing that the more I try to understand them by reading through issues, the more confused I get!
Can you guys just clear things up in a video or in the docs? I'm finding it really hard to implement pagination with field policies.
