Currently, writeFragment writes a single data item at a time. This is fine for updating 1 or 2 items, but when you have 100 items you want to update, it is a prohibitively expensive operation!
This is because each writeFragment call forces GraphQL components' children to re-render, so some components may be re-rendered 100 times (if you call writeFragment from a loop).
I'm suggesting a new method, writeFragments, which accepts a fragment and an array of { id, data } objects, like so:

```js
client.writeFragments({
  fragment: gql`...`,
  data: [
    { id: 'Type1', data: { field: 'value' } },
    { id: 'Type2', data: { field: 'value' } },
    // ...
  ],
});
```
This way, the store can be updated a single time, and the children of GraphQL components are also re-rendered a single time.
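To make the cost difference concrete, here is a minimal self-contained sketch of the idea. The `MockCache` class and its broadcast counter are hypothetical stand-ins for illustration, not Apollo Client APIs:

```javascript
// Minimal mock of a normalized cache that notifies watchers after each write.
class MockCache {
  constructor() {
    this.store = {};
    this.broadcasts = 0; // counts how many times watchers would re-render
  }
  writeFragment({ id, data }) {
    this.store[id] = { ...(this.store[id] || {}), ...data };
    this.broadcasts += 1; // one broadcast per write
  }
  // Hypothetical batched variant: N writes, one broadcast.
  writeFragments({ data }) {
    for (const { id, data: fields } of data) {
      this.store[id] = { ...(this.store[id] || {}), ...fields };
    }
    this.broadcasts += 1;
  }
}

// Naive loop: 100 writes cause 100 broadcasts.
const naive = new MockCache();
for (let i = 0; i < 100; i++) {
  naive.writeFragment({ id: `Item:${i}`, data: { field: 'value' } });
}
console.log(naive.broadcasts); // 100

// Batched: the same 100 writes cause a single broadcast.
const batched = new MockCache();
batched.writeFragments({
  data: Array.from({ length: 100 }, (_, i) => ({
    id: `Item:${i}`,
    data: { field: 'value' },
  })),
});
console.log(batched.broadcasts); // 1
```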
There are some questions around using variables here, but this is more of a high-level concept.
Thanks and let me know if this is unclear at all! 😄
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to Apollo Client!
@benhjames oooh interesting idea!
+1000
This is an awesome idea. Maybe it could be expanded into something like a transaction API, so that write queries and fragments could be applied in batches.
I am also pretty sure that batched operation results cause multiple re-renders.
Something interesting I've found:
https://github.com/apollographql/apollo-client/blob/latest/src/data/proxy.ts#L303
https://github.com/apollographql/apollo-client/blob/latest/src/data/store.ts#L174
Apparently, internal methods for this already exist. It's a pity that they're not part of the public API :(
I have an app which has to update its data store in response to events coming in over a socket.io channel, without using GraphQL mutations or subscriptions, so making big, consistent updates with the current imperative API is a huge pain at the moment, unfortunately.
@haizz This is old code. The new cache implementation is here: https://github.com/apollographql/apollo-client/tree/master/packages/apollo-cache-inmemory/src
I've managed to make batched updates by dispatching APOLLO_WRITE manually (with the latest stable version).
Does anybody know what could be the corresponding workaround for the new cache implementation?
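One pattern worth exploring here is transaction-style batching: suppress watcher broadcasts while a batch of writes runs, then broadcast once at the end. The sketch below is self-contained and illustrative only; `SilencingCache` and its methods are hypothetical, not the actual Apollo cache implementation:

```javascript
// Sketch of transaction-style batching: writes made inside performTransaction
// are applied immediately, but watchers are notified only once at the end.
class SilencingCache {
  constructor() {
    this.store = {};
    this.silenced = false;
    this.broadcasts = 0; // counts watcher notifications (i.e. re-render triggers)
  }
  writeFragment({ id, data }) {
    this.store[id] = { ...(this.store[id] || {}), ...data };
    if (!this.silenced) this.broadcasts += 1;
  }
  // Run several writes as one unit, broadcasting a single time afterwards.
  performTransaction(fn) {
    this.silenced = true;
    try {
      fn(this);
    } finally {
      this.silenced = false;
    }
    this.broadcasts += 1;
  }
}

const cache = new SilencingCache();
cache.performTransaction((c) => {
  for (let i = 0; i < 100; i++) {
    c.writeFragment({ id: `Item:${i}`, data: { field: 'value' } });
  }
});
console.log(cache.broadcasts); // 1
```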
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to Apollo Client!
up
@benhjames I do think this is an awesome idea and have told stalebot to leave it alone and added a post 2.0 milestone to track it!
Could we look holistically at a typical bulk update cycle? For me it should start with a filter/map/reduce operation against the in-memory cache. If `id` and `__typename` are preserved by this operation, then it would be far easier to have some kind of bulk merge operation that I could perform against the store, as opposed to a GraphQL writeFragment-based approach.
The in-memory store should also augment objects updated in this way with an isDirty flag, and allow update/insert/delete mutations to be pre-registered against the initial query result. That way we could control the commit operation in a declarative manner.
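A rough userland sketch of the bulk-merge-with-dirty-flag idea above, over a plain normalized store object (all names here, including `bulkMerge` and `isDirty`, are hypothetical, not Apollo APIs):

```javascript
// Hypothetical bulk merge over a plain normalized store: each updated entry
// keeps its id and __typename and is tagged isDirty so a later commit step
// can find exactly the objects that changed.
function bulkMerge(store, updates) {
  const next = { ...store };
  for (const { id, __typename, ...fields } of updates) {
    const prev = next[id] || { id, __typename };
    next[id] = { ...prev, ...fields, isDirty: true };
  }
  return next;
}

// Example: update one item in a single pass; untouched items stay clean.
const store = {
  'Item:1': { id: 'Item:1', __typename: 'Item', field: 'old' },
  'Item:2': { id: 'Item:2', __typename: 'Item', field: 'old' },
};
const merged = bulkMerge(store, [
  { id: 'Item:1', __typename: 'Item', field: 'new' },
]);
console.log(merged['Item:1'].isDirty); // true
console.log(merged['Item:2'].field);   // 'old'
```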
To help provide a more clear separation between feature requests / discussions and bugs, and to help clean up the feature request / discussion backlog, Apollo Client feature requests / discussions are now being managed under the https://github.com/apollographql/apollo-feature-requests repository.
Migrated to: https://github.com/apollographql/apollo-feature-requests/issues/7