Apollo-client: Don't request the same query twice while in flight

Created on 24 Nov 2016 · 11 comments · Source: apollographql/apollo-client

See https://github.com/apollostack/react-apollo/issues/338: when two identical queries with identical parameters are requested at the same time, they result in two identical server requests.

This situation is quite easy to achieve when using React, since the components requesting the data are independent and can be mounted at any time.

I think it would be good if apollo-client kept a lookup table of in-flight request promises; if a new request matches an existing one, it gets the same promise.
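A minimal sketch of that idea might look like the following; `dedupedFetch` and `requestKey` are hypothetical helper names, not apollo-client API:

```typescript
// Sketch: share one promise between identical in-flight requests.
// Hypothetical code, not part of apollo-client.
import { print, DocumentNode } from 'graphql';

const inFlight = new Map<string, Promise<unknown>>();

// Key a request by its query text plus its variables. JSON.stringify is a
// simplification; a stable serialization would be safer in practice.
function requestKey(query: DocumentNode, variables?: object): string {
  return print(query) + '|' + JSON.stringify(variables || {});
}

function dedupedFetch(
  query: DocumentNode,
  variables: object | undefined,
  execute: () => Promise<unknown>, // the real network call
): Promise<unknown> {
  const key = requestKey(query, variables);
  const existing = inFlight.get(key);
  if (existing) {
    return existing; // a matching request is already in flight: reuse its promise
  }
  const promise = execute().then(
    result => { inFlight.delete(key); return result; },
    error => { inFlight.delete(key); throw error; },
  );
  inFlight.set(key, promise);
  return promise;
}
```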

Most helpful comment

Is there any solution yet? Maybe a basic setup of the network layer to prevent this issue?

All 11 comments

@wmertens good point. I think this could even happen at the NetworkInterface level.

I would expect this to be done just before the network interface gets called, so it has all the necessary information and the network interfaces themselves don't have to implement it.

It could even be something very simple that does a shallow equal of the query and the variables, along the lines of the sketch below.
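For illustration, such a check could look like this (a sketch that assumes the variables are flat objects; nested values would need deep comparison):

```typescript
// Sketch of a shallow-equal match on query text plus variables (hypothetical helpers).
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  const aKeys = Object.keys(a);
  if (aKeys.length !== Object.keys(b).length) {
    return false;
  }
  // One level deep only: nested objects must be reference-equal to match.
  return aKeys.every(key => key in b && a[key] === b[key]);
}

// Two requests match when the query documents are identical and the
// variables compare shallow-equal.
function sameRequest(
  queryA: string, varsA: Record<string, unknown>,
  queryB: string, varsB: Record<string, unknown>,
): boolean {
  return queryA === queryB && shallowEqual(varsA, varsB);
}
```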

We have the same issue over here. Let's say you have a currentUser query and various components that consume the same returned data: each component will make its own request to the server, which is not intended.

For now, a workaround could be memoized resolvers, though I don't think that's the best solution because of cache invalidation strategies.

What if we could pass an option to aggregate queries with the same name when connecting the component with apollo-client's graphql function?

I'm a bit surprised that you guys have components that have the exact same data requirements. What does that look like in the UI? Is it just the same UI component rendered multiple times, or are you in fact not using all the data in the query in the individual components? If the latter is what you're doing, could you hoist the query to a component higher up in the hierarchy as a workaround?

Correct on both counts, but the clean solution would be to have proper caching by apollo-client and not having to put things in globals… Passing the props down is a no-go; these are translations.


@helfer let me just add a bit more color to @sachag's situation:

  1. The concern is that it's creating a ton of watched queries in Apollo, which clutters up dev tools and might also cause performance drawbacks, since we're basically executing the same query many times.
  2. The case for Sacha is a current-user query: he has an HOC that just asks for some data about the current user, and it's used on a ton of different components (see the sketch below). I actually think this should be a reasonable thing to do: creating reusable HOCs and relying on query caching seems reasonable IMO.
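For context, the reusable HOC pattern from point 2 looks roughly like this with react-apollo; the query shape and names here are made up:

```typescript
// Sketch of a reusable current-user HOC (query shape and names are invented).
import gql from 'graphql-tag';
import { graphql } from 'react-apollo';

const CURRENT_USER_QUERY = gql`
  query CurrentUser {
    currentUser {
      id
      name
    }
  }
`;

// Every component wrapped with this enhancer issues the same query with the
// same (empty) variables, which is exactly what produces duplicate requests
// while the first response is still in flight.
export const withCurrentUser = graphql(CURRENT_USER_QUERY);
```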

Yeah, I think this is a reasonable ask of the core API. To be honest, I think it would be kinda weird for the network interface to handle this. The query slug is pushed into Redux, so it seems like a simple "unresolved" state flag could be added onto that slug, with a single promise multiplexed to all requesting actors of the same query. This is very much how DataLoader works on the server.
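A rough sketch of that flag-plus-multiplexing idea follows; the state shape and names are hypothetical, not the actual store schema:

```typescript
// Sketch: mark a query slug "unresolved" and hand every caller the same
// promise until the result lands, DataLoader-style. Hypothetical code.
type QueryState =
  | { status: 'unresolved'; promise: Promise<unknown> }
  | { status: 'resolved' };

const queryStates = new Map<string, QueryState>();

function fetchQuery(slug: string, execute: () => Promise<unknown>): Promise<unknown> {
  const state = queryStates.get(slug);
  if (state && state.status === 'unresolved') {
    // Multiplex the single in-flight promise to every requesting actor.
    return state.promise;
  }
  const promise = execute().then(
    result => {
      queryStates.set(slug, { status: 'resolved' });
      return result;
    },
    error => {
      queryStates.delete(slug); // clear the flag so a retry is possible
      throw error;
    },
  );
  queryStates.set(slug, { status: 'unresolved', promise });
  return promise;
}
```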

Any particular reason for closing this? Was hoping to see this as a feature haha

@thebigredgeek I think it was automatically closed because I mentioned the issue in the corresponding PR. :wink:

Is there any solution yet? Maybe a basic setup of the network layer to prevent this issue?

This is still an issue!
