Apollo-client: 3.0.0-beta.43 Missing cache result fields warning, cache not updated after mutation

Created on 10 Apr 2020 · 23 comments · Source: apollographql/apollo-client


I'm getting the following error after upgrading to 3.0.0-beta.43:

(screenshot of the "Missing cache result fields" warnings in the console)

Looks like it's related to #5603 and #5146. Do I need to write keyFields manually for every type?

const link: any = errorLink.concat(authLink.concat(httpLink as any));
export const apollo = new ApolloClient({
  link,
  cache: new InMemoryCache(),
});
export default function ProductLayout() {
  const query = useProductLayoutQuery({ variables: { productId } });
  const product = query.data?.product;

  return (
    <Page>
         <Product />
    </Page>
  );
}

// ProductLayout.gql

query productLayout($productId: Int!) {
  product(id: $productId) {
    id
    name
    description
  }
}






export default function Product() {
  const query = useProductQuery({ variables: { productId } });
  const product = query.data?.product;

  return <div />;
}

// Product.gql
query product($productId: Int!) {
  product(id: $productId) {
    id
    fullname
  }
}

Intended outcome:

The cache should be up to date.
Actual outcome:

The cache is not updated
How to reproduce the issue:

Versions
3.0.0-beta.43


All 23 comments

I'm having the same issue.

Same issue after upgrade to beta.43.

Do you also fetch the id of your object, in order for InMemoryCache to be able to identify the object?

Same here!

I got this after upgrading from a previous beta.

Like @kliakos mentioned, I wasn't fetching the id of the object, but that's because the object in question doesn't really make sense to have an id (it's my Query.viewer object).

I think something changed between a previous beta and this beta that changed how objects without ids were identified. Previously, they were just identified by their path within a query (so Query.viewer always referred to the same object even though it didn't have an id).

Now, it seems that whenever a new query is fetched, it overwrites the previous cached data instead of merging them.

So if I have this query

query ViewerIsAdmin {
  viewer {
    isAdmin
  }
}

and later execute this query,

query ViewerThemePreference {
  viewer {
    themePreference
  }
}

the second query will overwrite the first query. What's more is that if the first query is re-executed, Apollo will emit the warning (Missing cache result fields) and then re-execute the first query even though it _should_ be cached.
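The replace-instead-of-merge behavior described above can be modeled with plain objects. This is a sketch of the observed behavior, not Apollo Client internals:

```typescript
// Plain-object model of the behavior described above (not Apollo Client
// internals): without a cache key, an incoming `viewer` object replaces
// the existing one, dropping previously cached fields.
type Viewer = Record<string, unknown>;

const cache: { viewer?: Viewer } = {};

function writeWithoutMerge(incoming: Viewer): void {
  // Default AC3 behavior for unidentified objects: replace, don't merge.
  cache.viewer = incoming;
}

writeWithoutMerge({ isAdmin: true });           // ViewerIsAdmin query
writeWithoutMerge({ themePreference: "dark" }); // ViewerThemePreference query

// `isAdmin` is now gone, so re-running ViewerIsAdmin misses the cache
// and triggers the warning plus a refetch.
console.log(cache.viewer);
```

A merge function on Query.viewer (as shown further down in this thread) restores the merging behavior without inventing a bogus id.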

My solution was to just add a bogus id field to the viewer object, which isn't the end of the world, but this does seem to be a significant departure from previous versions of Apollo.

I think something changed between a previous beta and this beta that changed how objects without ids were identified

The whole id behavior changed in version 3. What changed in the last beta seems to be the warning, which most likely was previously a silent check.

@travigd You might find this part of docs useful https://www.apollographql.com/docs/react/v3.0-beta/caching/cache-field-behavior/#merging-non-normalized-objects
In your case, perhaps something like this would work -

export const client = new ApolloClient({
  link: new HttpLink(),
  cache: new InMemoryCache({
    typePolicies: {
      Query: {
        fields: {
          viewer: {
            merge: (existing, incoming, opts) =>
              opts.mergeObjects(existing, incoming),
          },
        },
      },
    },
  }),
});

This strategy works for me, but I don't have any root-level fields (like your viewer), so I'm unsure whether this policy works as well on the top-level Query type.

I have the same issue on @client-marked fields (and only in SSR).
Any idea how to solve this for client-only fields?

I am also facing the same issue while using viewer in different places with different data. Any updates?

I am also having the same issue. Has anyone found a solution?

As mentioned in https://github.com/apollographql/apollo-client/issues/6136#issuecomment-615933193, this new warning is intentional. When query/cache mismatches like this happen in AC2, they are essentially swallowed. For example, when you have one query that's relying on a certain set of data in the cache, but run another query that ends up removing some of the fields the first query was expecting from the cache, the inability of the first query to pull out what it needs is not exposed to developers. This cache underwriting/overwriting problem has been a longstanding issue in the past, so AC3 introduced this new warning to help developers know when this problem occurs.

Looking at the first MissingFieldError for example, we see Can't find field 'name' on object .... This means at some point you ran a query to retrieve and store the data mentioned in that error message, and included a name field in your query's selection set. Then at some other point in your application, you ran a very similar query that ended up resolving to the same entity stored in the cache, but you didn't use the name field in your selection set. This means the name data ended up being dropped from the cache, so the first query that was dependent on it can no longer find it. Depending on your fetch policy this might not be an issue at all, which is why this is a development time warning only (it won't show in production). But it's also quite possible you didn't mean to do this. To resolve this cache mismatch issue (and again depending on your fetch policy), Apollo Client might decide to head to the network to fetch the data it needs, including the missing name field. But this means an extra network hit, which could have been avoided, if you just included the extra name field in both queries (since again, they're set to point to the same entity in the cache).

If you're migrating from AC2, you can definitely choose to ignore these errors if you want to, but we highly recommend looking into their cause. Addressing them will help keep your cache healthy and improve cache hits.

Thanks @hwillson.
But what I understand is that you're telling us to use the superset of all fields everywhere, which could be very problematic while coding a complex project?

@anuragnagarkota This is definitely a challenging issue. These warnings are really just intended to help give some visibility into query/cache mismatching. We want to make sure developers have greater clarity into why data isn't being matched from the cache, when it looks like it should be. Using a superset of all fields in a selection set can definitely be problematic/cumbersome. There are other options, like:

  • Making sure the query that is fetching the larger set of the data is fired first so the larger set is stored in the cache
  • Consider storing the objects with differing fields in the cache as separate entities by using custom keyFields
  • Define custom cache policy read functions for the fields you know will differ between queries, to provide fallback values

I realize the above suggestions might also be cumbersome, but again the new warning is just exposing behavior that has already been happening in AC2. Ignoring it is definitely still a valid option.
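As a sketch of the third option above, a read function can supply a fallback when a field is missing from the cache. The Product/description names here are illustrative, not from this thread's schema:

```typescript
// Hedged sketch: a typePolicies fragment providing a fallback via read().
// AC3 invokes read() even when the field is absent from the cache, so
// returning null here avoids a missing-field error for `description`.
const typePolicies = {
  Product: {
    fields: {
      description: {
        read(existing: string | undefined): string | null {
          // Fall back to null when a narrower query dropped the field.
          return existing ?? null;
        },
      },
    },
  },
};

// The read function itself, exercised outside the cache:
console.log(typePolicies.Product.fields.description.read(undefined)); // null
console.log(typePolicies.Product.fields.description.read("A widget")); // A widget
```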

Judging by the 👍's on this issue though, maybe we should consider providing a way to silence these warnings. Our full AC3 docs aren't ready yet and we're planning on having a section in them to help explain this warning, which will hopefully help as well.

@hwillson I have an entity that has 5 fields: 3 fields are always present in every query, and 2 others that can be added in some other query. The entity is a relation to a parent entity.

So it's pretty much something like this:

query {
  users {
    id
    name
    lastName
    profiles {
      id
      name
      description
      companies {
        id
        name
      }
    }
  }
}

profiles inside users is an array, and so is companies inside profiles. companies is not always present.

I first start the app and query the users without profiles. Then I insert a new profile for the user and update the cache. The profile does not have any companies, and I actually haven't even queried the companies yet. Why am I still getting this warning when I update the cache?

Another cause: you should have an id field for your document.

I'm receiving the same warning for a mutation that replaces one array of strings with a different one:

react_devtools_backend.js:6 Cache data may be lost when replacing the identifiers field of a project object.

To address this problem (which is not a bug in Apollo Client), define a custom merge function for the project.identifiers field, so InMemoryCache can safely merge these objects:

  existing: ["6510de26-a9a1-4a89-8d26-17f66a063aab","8b1f4d14-b6ce-430d-b011-35ef193db440","7d4db35a-e544-4750-91a2-58ffab2f8cae","9c72e514-c773-44f6-964c-66b83b6e23fc","e5788bc0-f3c3-49c7-b783-6b0ec8c7d391"]
  incoming: ["6510de26-a9a1-4a89-8d26-17f66a063aab","8b1f4d14-b6ce-430d-b011-35ef193db440","9c72e514-c773-44f6-964c-66b83b6e23fc","e5788bc0-f3c3-49c7-b783-6b0ec8c7d391"]

That sounds a bit unwieldy. Do I have to manually write a merge strategy for every array I may have in the schema?
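For what it's worth, when the mutation returns the complete new list (as appears to be the case here), the merge function the warning asks for can simply keep the incoming array. A sketch, assuming replace semantics for the project.identifiers field:

```typescript
// Sketch of a replace-style merge for the project.identifiers field,
// assuming the server always returns the full new array. Shaped like an
// InMemoryCache typePolicies entry, but runnable as plain TypeScript.
const projectPolicy = {
  fields: {
    identifiers: {
      merge(_existing: string[] | undefined, incoming: string[]): string[] {
        return incoming; // drop the old array wholesale
      },
    },
  },
};

const merged = projectPolicy.fields.identifiers.merge(
  ["a", "b", "c"],
  ["a", "c"],
);
console.log(merged); // [ 'a', 'c' ]
```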


"Missing cache result fields" do not explain at all why this is happening. Also the solutions you provide seem not be very elegant.

  • Making sure the query that is fetching the larger set of the data is fired first so the larger set is stored in the cache

This is against component design, so it's a no-go. You don't want to micro-manage which component should render first!

  • Consider storing the objects with differing fields in the cache as separate entities by using custom keyFields

With this suggestion you need to manage InMemoryCache for every query you do. This is cumbersome and error-prone. Why do I have to alter the behavior of InMemoryCache when I make different queries? Am I supposed to think: "oh, I remember that I used a different subset of this type somewhere else in my application, so I need to go to the initialization of InMemoryCache and invent some new id for this query"? I don't see that happening. I just tried to update to AC3 and my InMemoryCache is already cluttered with cryptic merge functions.

  • Define custom cache policy read functions for the fields you know will differ between queries, to provide fallback values

Same problem: you need to clutter InMemoryCache and you probably don't even know why. Even worse: if you remove these queries later, you probably keep all this custom logic in InMemoryCache, and no one will ever know whether it can be safely removed or not.

I realize the above suggestions might also be cumbersome, but again the new warning is just exposing behavior that has already been happening in AC2. Ignoring it is definitely still a valid option.

I also realize that the underlying issue is harder than we all know, but I had the impression AC3 would be clever enough to handle this hard problem for us. Instead it hands one of the hardest problems in programming back to developers instead of solving it for us.

It's good that AC3 now shows us that these problems exist, although we probably did not even realize it before. But I am missing clean solutions for these problems. I have not spent much time dealing with typePolicies in InMemoryCache, so I don't have a final conclusion, but it feels wrong to me. It centralizes what should not be centralized.

Edit: I don't want to sound harsh. I still love what you guys are doing (being a fan since Meteor 0.2-something), I am just confused, and I get stressed out when I don't understand something :-(.

To give more constructive feedback:

Apollo usually wants us to name our queries, so apollo-client knows each query's unique name, right? Could "Missing cache result fields" tell us which query (by name) already stored this data, so that we have a chance to fix it?

Looking at the first MissingFieldError for example, we see Can't find field 'name' on object .... This means at some point you ran a query to retrieve and store the data mentioned in that error message, and included a name field in your query's selection set. Then at some other point in your application, you ran a very similar query that ended up resolving to the same entity stored in the cache, but you didn't use the name field in your selection set. This means the name data ended up being dropped from the cache, so the first query that was dependent on it can no longer find it.

I don't quite get this. The docs state clearly (https://www.apollographql.com/docs/react/caching/cache-field-behavior/#merging-non-normalized-objects) that as long as we fetch the id, the two objects can safely be merged in cache.

I am asking this because I get tons of "Missing cache result fields" warnings, despite the fact that, in my opinion, all these entities could have been merged safely. Do I need to customize InMemoryCache to fix this?

edit: and just to clarify:

If I have a list and a detail view, and in the list I fetch just the id and the name of a product, but on the detail page I fetch id, name, description and so on:

Should I get this warning in this case, yes or no? I tried to debug it in my project and it seems like this is indeed the case. I get the impression that fetching less data from an entity once and more data later is a totally normal pattern and should not trigger a warning, or am I wrong?
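For the list/detail case, normalization is exactly what makes the merge safe. A plain-object sketch (not Apollo code) of what happens when both queries select id:

```typescript
// Both queries resolve to the same normalized entry ("Product:1"), so
// their fields are shallow-merged instead of replacing each other.
type Entity = Record<string, unknown>;
const entities: Record<string, Entity> = {};

function writeNormalized(typename: string, obj: Entity & { id: number }): void {
  const key = `${typename}:${obj.id}`; // default key: __typename plus id
  entities[key] = { ...entities[key], ...obj };
}

writeNormalized("Product", { id: 1, name: "Widget" });           // list query
writeNormalized("Product", { id: 1, description: "A widget." }); // detail query

// Both fields survive; neither query clobbers the other.
console.log(entities["Product:1"]);
// { id: 1, name: 'Widget', description: 'A widget.' }
```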

I'm tackling this core issue in my app now and yeah, agreed that this is a bit funky.

Ideally if there is an object in the cache, if it gets updated through a query with a subset of the initial selection set, the new data gets merged with the old. The old values from the previous query are retained in the cache and the values in the selection set from the new query get updated. What I'm seeing is that by default an update to an entity in the cache simply overwrites it - this seems contrary to the docs, as @macrozone pointed out.

Having the same selection set throughout any sizeable app is not really an option and is also against the spirit of GraphQL, i.e. get the data you need when and where you need it.

Update: It does in fact work that way, as expected. Debugging the cases where it doesn't is a different matter, though. In my case it was an obvious issue with a selection set somewhere that did not include an id field. Apologies for jumping to conclusions too quickly, keep up the good work! :+1:

@macrozone The way to think about the InMemoryCache configuration API is that it gives you a single place to define any non-default behavior of your graph that should apply uniformly across your entire application. When you think about it this way, you're not cluttering anything. You're saving yourself from having to think about concepts like object identity or field storage elsewhere in your application.

You are right that some information, like the shape of individual queries, should not be centralized, and should not have surprising cross-component constraints, like needing to add fields to one query to satisfy other queries that will later consume the same cache data. Some of these problems can be eased by combining (or compiling) all your queries/fragments into one big query, but that approach has its own problems, like delaying the whole response until the slowest part finishes, and potentially over-fetching object fields. I think we can improve this situation dramatically through some combination of build-time validation/feedback, automatic query transformation, cache cleverness, typePolicies configuration, and better errors and warnings, but we're not there yet.

However, the keyFields configuration definitely does make sense to centralize, because you're going to have lots of problems if you apply different definitions of identity to objects of the same type in different parts of your application. As you probably know, the cache can automatically compute IDs for objects that have __typename and id fields (and thus automatically merge their fields, when their identities are known to match), so you don't usually need a custom keyFields configuration for types that follow this convention. These defaults are intended to reduce the amount of custom configuration you need, but your mileage will vary. Another way to keep your typePolicies short and maintainable is to use helper functions to generate repetitive policies.
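One possible shape for such a helper (hypothetical, not an Apollo Client API): generate the same policy for several types, e.g. keyFields: [] for singleton objects like viewer that should occupy a single cache entry per type:

```typescript
// Hypothetical helper to keep typePolicies short: give every listed type
// a keyFields: [] policy, i.e. one cache entry per __typename.
function singletonPolicies(...typenames: string[]) {
  return Object.fromEntries(
    typenames.map((t) => [t, { keyFields: [] as string[] }] as const),
  );
}

// Spread the result into the typePolicies option of InMemoryCache.
const typePolicies = singletonPolicies("Viewer", "Settings");
console.log(Object.keys(typePolicies)); // [ 'Viewer', 'Settings' ]
```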

I also want to stress that these warnings are definitely too noisy right now! They are surfacing useful information, in the sense that the kind of problems they're warning about would be hard to diagnose without this kind of information, but that doesn't mean the problem is always really a problem. In your example of a list/detail view, it should be fine to fetch just the id first, and then the rest of the fields later. You may be getting a warning in that case, but that doesn't mean anything bad is happening—unless you consider triggering a network request bad, which it can be, of course, when you don't expect it to happen… and in those cases it's convenient to have some sort of warnings that can help diagnose why the result from the cache wasn't good enough.

The honest reason these warnings aren't perfect in the official 3.0.0 release is that we knew we could continue iterating on them after the release, because changing the content or frequency of a development-time warning is not a breaking change. I'm not completely happy with the current noise level, but I didn't want this particular rough edge to hold up the (already long-overdue) AC3 release. Now that that's behind us, we will definitely keep iterating on these warnings, and we welcome any ideas to make them more useful and precise.

@benjamn thank you for your clarification

It's this:

You may be getting a warning in that case, but that doesn't mean anything bad is happening—unless you consider triggering a network request bad, which it can be, of course, when you don't expect it to happen

that triggered me when I tried to understand these warnings. I got the impression that I was doing something horribly wrong, that something was not supposed to happen the way it did. That's why I, and probably many others, got confused.

We were aware of what we fetched and when; we used fragments and tried to have good incremental loading and a good trade-off between overfetching and reducing network calls. Now all of a sudden apollo-client tells us that we're doing something wrong, while in fact everything is (probably?) fine.

The problem with warnings is that they get ignored or turned off. When we get this warning because of some query combination that is totally OK and on purpose, we can't just let it sit there, because then, if there is an actual problem later, it will get ignored.

So personally I don't believe that this kind of warning is useful at all. Maybe it would be better to have a toggleable diagnostic/benchmark mode that checks whether many similar-but-different queries are run with a lot of small cache misses, and gives some summary about the "health" of the caching.

Maybe this could also be done through the Apollo CLI. Similar to type generation, it could even statically give some hints about which different queries use the same types. E.g. in a classic list-detail situation, you would ideally expect at most two queries, where one is a subset of the other.

An overly noisy warning is no warning at all, because people will become desensitized to it and start ignoring it. Moreover, the more important instances of the warning are being drowned out by all the less important instances in the console.

I'd much rather see some granularity and clarity about these warnings to differentiate between "hey just fyi you're hitting the network" and "this might actually be an error in your cache"
