I have a watched query and I use optimisticResponse in a mutation to update only one object from this query:
return client.mutate({
  mutation: LIKE_MUTATION,
  variables: { id },
  optimisticResponse: {
    __typename: 'Mutation',
    userLike: { id, __typename: 'User', liked: true },
  },
})
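For context, here is a minimal sketch of the documents the snippet above assumes (the names and fields are hypothetical; only the shape matters):

```ts
import gql from 'graphql-tag';

// Hypothetical watched query: a list of users, each with a `liked` flag.
const USERS_QUERY = gql`
  query Users {
    users {
      id
      liked
    }
  }
`;

// Hypothetical mutation matching the optimisticResponse above: it returns only
// the single User whose `liked` flag changed, so the cache should only need to
// update that one normalized object.
const LIKE_MUTATION = gql`
  mutation UserLike($id: ID!) {
    userLike(id: $id) {
      id
      liked
    }
  }
`;
```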
When I use the mutation with optimisticResponse like that, my whole list updates (all items in my watchedQuery change). When I remove optimisticResponse, only the changed item in the list is updated.
Is this expected?
Seeing the exact same behavior. It defeats the purpose of optimistic UI in the first place, because re-rendering all items can crush performance.
Don't know if this is related, but when I use optimisticResponse the cache (InMemoryCache) grows every time a mutation is done.
When using a whole lot of queries it grows so badly that after about 15 updates Chrome crashes...
@barbalex Possibly related to #4210?
Any news about this issue? Seeing the exact same behavior :/
+1 same problem here.
Please, someone?
I made a reproduction (https://codesandbox.io/s/apollo-client-issue-4141-lxib0) and found out what causes the issue.
The reproduction renders a list of two people using a Person component that is wrapped with React.memo. Since the Apollo cache should always return the same objects if the data didn't change after a mutation, re-rendering can be avoided by comparing the person objects with ===. The "Update name" button runs a mutation that changes the first person's name.
If no optimisticResponse is used for the mutation, the isPersonEqual function returns false for the first person and true for the second person, as expected:
rendering person (ID 1)
rendering person (ID 2)
> "Update name" button clicked
> Mutation result received
isPersonEqual (ID 1): false
rendering person (ID 1)
isPersonEqual (ID 2): true
However, if an optimisticResponse is used (uncomment lines 46–52), isPersonEqual always returns false, even if the person didn't change, which can make performance optimizations with React.memo and shouldComponentUpdate a little difficult.
rendering person (ID 1)
rendering person (ID 2)
> "Update name" button clicked
> Optimistic update applied
isPersonEqual (ID 1): false
rendering person (ID 1)
isPersonEqual (ID 2): false <- should not be false
rendering person (ID 2)
> Mutation result received
isPersonEqual (ID 1): false
rendering person (ID 1)
isPersonEqual (ID 2): false
rendering person (ID 2)
I think the problem is that optimistic updates use their own CacheGroup, which is why previously created objects aren't reused when results are retrieved from the cache. Since https://github.com/apollographql/apollo-client/pull/5648, all optimistic updates share a single CacheGroup, but the cache for the root EntityStore is still separate. The two CacheGroups are created in entityStore.ts:
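As a highly simplified model of what that code sets up (not the actual Apollo source, just the shape of the split):

```ts
// Simplified model: the root EntityStore and the optimistic Layers track their
// dependencies in different CacheGroups, so optimistic reads are cached
// separately from reads against the Root.
class CacheGroup {
  // dependency tracking / result-cache keying lives here in the real code
}

class Root {
  public readonly group: CacheGroup;
  public readonly sharedLayerGroup: CacheGroup;

  constructor() {
    this.group = new CacheGroup();            // used for non-optimistic (Root) reads
    this.sharedLayerGroup = new CacheGroup(); // shared by all optimistic Layers
  }
}
```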
I changed the second line to this.sharedLayerGroup = this.group; to make Root and Layers share one CacheGroup, which seemed to resolve the problem, but I don't know if that has other, unintended consequences.
Any update on this?
After a lot of thinking over the past few months, I believe InMemoryCache is uniquely equipped to improve upon the current promise of "=== on a best-effort basis, if you play your cards just right" by making a much stronger guarantee: If any two cache result objects are deeply equal to each other, they will also be ===, no matter how they were computed, and no matter where they appear in the result trees for different queries.
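Concretely, the guarantee would mean something like this (hypothetical queries, sketched against the normal InMemoryCache API):

```ts
import { InMemoryCache, gql } from '@apollo/client';

// Two hypothetical queries that select the same Person fields from different roots.
const PERSON = gql`query GetPerson { person { id name } }`;
const VIEWER = gql`query GetViewer { viewer { friend { id name } } }`;

const cache = new InMemoryCache();

cache.writeQuery({
  query: PERSON,
  data: { person: { __typename: 'Person', id: 1, name: 'Ada' } },
});
cache.writeQuery({
  query: VIEWER,
  data: {
    viewer: {
      __typename: 'Viewer',
      friend: { __typename: 'Person', id: 1, name: 'Ada' },
    },
  },
});

const a = cache.readQuery<any>({ query: PERSON });
const b = cache.readQuery<any>({ query: VIEWER });

// The two Person result objects are deeply equal, so under the proposed
// guarantee they would also be the very same object, even though they come
// from different queries:
console.log(a.person === b.viewer.friend); // true (with canonicalization)
```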
The general idea is to perform _canonicalization_ (sometimes called _internalization_) as a post-processing step, which happens to be a feature of the Record and Tuple proposal TC39 is currently considering, which means this trick has a good chance of becoming natively supported in future versions of JavaScript. In essence, every cache result object would be a Record, and any arrays within the result tree would be represented as Tuples. Of course this means result objects have to be immutable, which they already are (#5153).
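To make the idea concrete, here is a toy canonicalization pool (a sketch of the technique only; the real InMemoryCache version memoizes unchanged subtrees and manages memory far more carefully):

```ts
const known = new WeakSet<object>();    // fast path: objects that are already canonical
const pool = new Map<string, object>(); // one canonical (frozen) object per structural key

function canonize(value: any): any {
  if (value === null || typeof value !== 'object') return value;
  if (known.has(value)) return value; // already canonical: O(1)

  // Canonize children first, then key this node by its canonical structure.
  // (This toy relies on stable key order, which GraphQL results have anyway.)
  const rebuilt = Array.isArray(value)
    ? value.map(canonize)
    : Object.fromEntries(Object.entries(value).map(([k, v]) => [k, canonize(v)]));

  const key = JSON.stringify(rebuilt);
  let canon = pool.get(key);
  if (!canon) {
    canon = Object.freeze(rebuilt); // canonical objects must be immutable
    pool.set(key, canon);           // a real version needs weak/limited retention
    known.add(canon);
  }
  return canon;
}

// Deep equality now implies referential equality:
console.log(canonize({ id: 1, tags: ['a'] }) === canonize({ id: 1, tags: ['a'] })); // true
```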
Canonicalization can be performed in _O(|result|)_ time, but the InMemoryCache result caching system allows substantial parts (subtrees) of that work to be memoized/amortized. This is the same system that currently provides === for repeated query reads most of the time (#3394), and every case where result objects were already === translates to a fast path for canonicalization, which is why I say InMemoryCache is uniquely equipped to implement this idea. External code that attempted to implement the same post-processing logic would have to start from scratch every time, whereas InMemoryCache can cache unchanged result subtrees, so it can avoid redoing deep canonicalization work. That said, the cost of canonicalization is an implementation detail, because the results are always the same.
Avoiding memory leaks in a system like this is a challenge, but one I understand pretty well by now. Ultimately, this trick should also limit the total amount of memory Apollo Client needs to retain, since deeply equal subtrees share a single canonical representation. Finally, while canonical objects are valuable and worth retaining, they are not irreplaceable: the cache could jettison its canonical object pool at any time to reclaim that memory, at the cost of losing the === guarantee for past results.
This is definitely too big of a change for Apollo Client v3.3, which we're hoping to finalize today, but I think we can get this into a v3.4 beta release soon!
@benjamn amazing! Thanks for the detailed update! Sounds really promising.
> but I think we can get this into a v3.4 beta release soon
Awesome!
My implementation of https://github.com/apollographql/apollo-client/issues/4141#issuecomment-733091694 in #7439 is now available for testing in @apollo/[email protected] (or later). Please give it a try when you have the chance!
This change reduced the rendering time of a large component in my app by about 50%. Thanks, @benjamn! 😊
Same for us, we noticed a large improvement in performance with this change 🎉
Thank you @benjamn!
This is awesome @benjamn; I scanned the PR just out of curiosity and really like how elegant it looks.
I had a naive question: I was kind of surprised that most of the canonicalization changes seem to be on the readFromStore side of things. I expected to see a few more changes to the writeQuery path.
With the disclaimer that I really know very little about the client's internal implementation details, my concern specifically for the optimistic response case is that our flow (for this bug) was (or will be) something like: the optimisticResponse write path causes "jitter" such that the whole DAG looks changed / all leaf identities are changed (i.e. that's what this bug is about: both our changed leaf and all other leaves got new identities).
Which is great, insofar as we get back stable identities for all leaves now, but my worry is the extra work that canon.admit now has to do, by re-scanning all (say) 100 leaves, without being able to leverage that 99 of them _should_ be in its weak set but aren't (due to the observed/historical whole-DAG identity change described above).
Basically is the core issue here that optimistic cache updates are still (?) deeply changing identities when they don't really need to?
And yeah, fixing that back up in readFromStore is great (and faster than paying a React re-render penalty), but it could be even faster/cheaper for canon.admit to do its job if the write path (maybe specifically for optimisticResponse, or just in general? although the "in general" case seemed to work before) would stop unnecessarily jittering every leaf?
I also admit:
a) this is an internal/perf implementation detail, because the bug is no longer observable, which is great,
b) the canon.admit-on-read also makes sense for fixing other classes of bugs / being a very generalized fix, so it's great and I'm not questioning that at all (other than musing: should it be done on write? would that be cheaper?), and
c) I am very much guessing at the internal implementation details behind "optimisticResponse jittered the whole DAG"; that's just a theory based on observing our original bug, and
d) canon.admit may also be fast enough that my theorized "unnecessary jitter in optimistic response" just really isn't a big deal, even for cases like "1000s of nodes in a query", i.e. we don't have big data floating around on the client side (...although that assumption frequently gets me into trouble with React re-renders).
Anyway, feel free to just scan/sanity check/not think too deeply about this, just wanted to type it out if anything for my own understanding. Thanks for the great work!
@stephenh I owe you a longer answer, but I agree #7439 hides the referential identity churn without really addressing the underlying reasons for that churn.
Specifically, because of the CacheGroup separation, optimistic cache reads during optimistic updates currently have no opportunity to reuse non-optimistic result objects. Effectively, there are two separate result caches (one per CacheGroup), which makes some aspects of the problem simpler, but definitely creates extra work for canon.admit behind the scenes.
Even with #7439 in place, I think it makes sense to keep working on this issue from the other direction. After a couple days of investigation, I think I have a solution that involves removing most of the CacheGroup logic, which should improve cache performance (hit rate) for optimistic reads. PR coming soon!
> I owe you a longer answer
Not at all!
Your description of an optimistic write creating its own separate-ish cache makes a lot of sense, in terms of needing a non-canonical staging area for its data, and why this would (pre #7439) be causing identity changes. And it's plenty sufficient for my "just a user building a mental model" purposes.
Great to hear you've got a plan for optimizing specifically the optimistic read/write side of things; really impressive, thanks!