I am calling getProfile:
https://www.bungie.net/Platform/Destiny2/1/Profile/4611686018429783292/?components=200,202,204&count=1
and checking
"currentActivityHash": 3897312654,
"currentActivityModeHash": 3199098480,
"currentActivityModeType": 10,
"currentActivityModeHashes": [
3199098480,
1164760504
],
"currentActivityModeTypes": [
10,
5
],
to get the active character's current activity. This is working well and is updated quickly, but I am randomly getting an out-of-date result (up to 30 seconds out of date).
i.e. if I was on Bannerfall and then switched to Altar of Flame, the API will return Altar of Flame, but if I refresh enough, it will randomly return the old result of Bannerfall. I have been able to reproduce this on my end across app instances, so I am pretty confident it is not due to stale data in my app.
It feels like there is one server I am randomly hitting that lags on getting updated, but I have tried it both when I affinitize and when I don't. Note, I have only started to notice this in the past week.
I am happy to debug more, but I wanted to post here in case there was a larger issue. Is the server I am hitting identified in any of the headers (maybe X-BungieNext-MID2 or CF-Ray)? If so, I can see if it only reproduces with a specific server.
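For reference, here's a minimal sketch of how I'm pulling the activity out of the getProfile response (the helper name and sample character ids are my own; the field names match the JSON quoted above):

```python
def current_activities(profile_json):
    """Extract {characterId: currentActivityHash} from a Destiny2.GetProfile
    response that included component 204 (CharacterActivities).
    Characters with no live activity (hash 0) are filtered out."""
    activities = profile_json["Response"]["characterActivities"]["data"]
    return {
        char_id: data["currentActivityHash"]
        for char_id, data in activities.items()
        if data.get("currentActivityHash", 0) != 0
    }

# Shape mirrors the response fields quoted above; ids/values are examples.
sample = {
    "Response": {
        "characterActivities": {
            "data": {
                "2305843009300000000": {
                    "currentActivityHash": 3897312654,
                    "currentActivityModeHash": 3199098480,
                },
                "2305843009300000001": {"currentActivityHash": 0},
            }
        }
    }
}

print(current_activities(sample))  # → {'2305843009300000000': 3897312654}
```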
UPDATE: Looking at the X-BungieNext-MID2 header, servers 27 and 35 consistently return old / out of date data.
UPDATE 2: I happened to affinitize to server 27 and its data was delayed by exactly 1 minute. It seems like maybe the time is off by a minute on a couple of the servers.
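For anyone else who wants to check this on their end, a quick sketch of how I'm tagging each poll with the server id from the response headers (the helper is my own; I normalize the header name since casing can vary by client):

```python
def server_id(headers):
    """Pull the backend server identifier out of the response headers.
    Lookup is case-insensitive since header casing varies by client."""
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-bungienext-mid2")

# Example: record which server served each poll, so stale results
# can be correlated with a specific machine. Values are illustrative.
observations = []
for headers, activity in [
    ({"X-BungieNext-MID2": "27"}, "Bannerfall"),      # stale result
    ({"X-BungieNext-MID2": "12"}, "Altar of Flame"),  # fresh result
]:
    observations.append((server_id(headers), activity))

print(observations)  # → [('27', 'Bannerfall'), ('12', 'Altar of Flame')]
```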
There’s another recent issue noting that the affinity cookie name changed, in case that’s at fault here?
Interesting! It would definitely be helpful to know if the issue manifests with the same X-BungieNext-MID2 value each time (that number does indeed represent an identifier for a server)
@floatingatoll Yes! I logged that issue. I am using the correct cookie name.
@hellocontrol-bng Thanks! I'll use that to see if it's the same server each time.
@hellocontrol-bng Thanks for the suggestion. I updated the report. Servers 27 and 35 are consistently returning out of date data.
Something important to note is that we do not guarantee true real-time availability with the API. If you're not affinitized it may seem like we provide real-time data, albeit inconsistently, but we do cache data by necessity.
Another thing to remember is that affinitization needs to be maintained on a per-account basis, which is generally going to happen automatically for you in our most common scenario (a user requesting their own data, or a user requesting other users' data during their own session, not aggregated with requests from other users). Depending on your architecture, you may have to be careful about your management of cookies and the consistency of which server you hit.
For example, if you have three servers of your own and they're each affinitized to three of our servers, but each of those servers on your side is requesting data from the same user, then that data is going to appear inconsistent because you are pulling it from servers that are potentially caching that data at different times. To maintain consistency in what you see returned, you will need to make sure that your affinitization is per user session - or perhaps to even go in a different direction, and ensure that all of the servers on your farm are hitting the same server. The API was never designed with the server farm/"aggregating live account data about a single account from multiple requestors" scenario in mind, so this is a less convenient scenario for folks who may be conducting such activities. There may come a time where we have better native support for this scenario, but time constraints vs. our core use cases (namely, apps such as the Companion that are user session based) means that providing better support for these scenarios is a lower priority than other API concerns.
With this in mind, hopefully it will help you to make an architectural plan for acquiring data from the API more consistently.
I'll add some documentation on this subject when I get time, as I imagine you're not the only one running into this issue!
or perhaps to even go in a different direction, and ensure that all of the servers on your farm are hitting the same server.
@vthornheart-bng Thanks for the explanation. I can hardcode my code to hit a specific server which I know has timely/non-delayed updates. But that means all of the traffic for users of my app will hit that server. Before I do that, though, I just want to make sure I am being a good API citizen and not doing anything that would be frowned upon.
How much traffic are we talking about for your use case?
I should note that what you're seeing isn't that some servers have time delayed updates: what you were seeing is that some servers happened to still have old cached data compared to another server that you happened to hit. In the end, Server 27 vs. other servers all hit the same backend - and whether the data is old or new depends on how long the data you're requesting has been cached on that specific server. The cache right now is set to one minute - so at any given point, any server may seem like it's serving old data compared to another, but it's purely coincidental based on the traffic coming to that server. The consistency in what you get back is what you'll achieve by hitting a single server, but the caching that happens is a necessity for us given the load of requests.
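If it helps to picture it: each server keeping its own response cache with a fixed TTL is enough to produce exactly what you saw. A toy model (names and numbers are illustrative, not our actual implementation):

```python
import time

class TtlCache:
    """Toy model of one API server's response cache with a fixed TTL."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (value, cached_at)

    def get_or_fetch(self, key, fetch, now=None):
        """Serve the cached value if still within the TTL, else refetch."""
        now = time.time() if now is None else now
        entry = self._entries.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]  # cached, possibly stale, value
        value = fetch()
        self._entries[key] = (value, now)
        return value

backend = {"activity": "Bannerfall"}
server_27 = TtlCache()
server_12 = TtlCache()

t0 = 0.0
server_27.get_or_fetch("activity", lambda: backend["activity"], now=t0)
backend["activity"] = "Altar of Flame"  # player switches activities

# 30s later: server 27 still serves its cached copy; server 12,
# having never cached this key, fetches fresh from the backend.
a = server_27.get_or_fetch("activity", lambda: backend["activity"], now=t0 + 30)
b = server_12.get_or_fetch("activity", lambda: backend["activity"], now=t0 + 30)
print(a, b)  # → Bannerfall Altar of Flame
```

Both caches hit the same backend; the apparent "delay" is just when each one last refreshed its entry.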
@vthornheart-bng How much traffic are we talking about for your use case?
Hopefully a lot! It's a free mobile app (which we haven't released yet), and we hit the server once a minute to check for status. If, at the upper end of traffic, we would not want to hit just one server, then I'll avoid that path.
My main concern is that for some users, they will get older (up to a minute) data, which limits the usefulness of checking status, but at least now I know how to ensure users are getting consistent data.
Indeed, in this kind of situation what might be helpful would be to have your architecture group known users into pools. For instance, you could spread your requests over all of our servers, but user X's requests always hit Server Y. That kind of distribution would work well, but it'll require some work - perhaps you could let a new user's first request be assigned to a server, and then store off that specific cookie so that all future requests use that same cookie. Something like that to spread the load out, which will at least make the scenario a bit more feasible with the way we're set up on our side.
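That "store off the first cookie per user" idea could be sketched roughly like this (a dict-backed store for illustration; a real app would persist this in a database or cache, and the cookie name shown is made up):

```python
class StickySessionStore:
    """Keep one set of response cookies per user, so every later request
    for that user replays the same cookies (and thus the same affinity)."""

    def __init__(self):
        self._cookies_by_user = {}

    def cookies_for(self, user_id):
        """Cookies to send on this user's next request (empty on first)."""
        return dict(self._cookies_by_user.get(user_id, {}))

    def record_response(self, user_id, set_cookies):
        """Merge Set-Cookie name/value pairs from a response; newest wins."""
        jar = self._cookies_by_user.setdefault(user_id, {})
        jar.update(set_cookies)

store = StickySessionStore()

# First request for this user carries no cookies, so the server
# assigns an affinity; save whatever cookies come back.
assert store.cookies_for("mike") == {}
store.record_response("mike", {"Q6dA7jjmExample": "opaque-value"})  # hypothetical name

# Every later request for this user replays the stored cookies.
print(store.cookies_for("mike"))  # → {'Q6dA7jjmExample': 'opaque-value'}
```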
Indeed, unfortunately from the perspective of real-time data, I'd avoid advertising that your app will provide real time status information since we can't offer it through the API, and particularly in scenarios like this where we'd be seeing consistent high/batch processing style loads. I wish we could - it's a neat idea! We just don't have the infrastructure support to handle that kind of real time-oriented use case. We're a bit more well suited for that kind of batch access for PGCRs, as that's all offline data: but we have to be significantly more careful of/have to throttle and potentially block applications more aggressively when it comes to real time data, as ultimately we have to prioritize game traffic over API traffic and the two request the same resources.
One request I would make is that you might consider actually having the client app make the calls, and only make those calls when the user is using your app. That would fix both the architectural challenge of affinitization and the potential load on our servers/potential blockages you'd experience due to hitting us in bulk with a lot of live server requests. If those requests were only being made when a user was actively looking at their data, that fits much better with the type of requests that the API was set up to handle.
That kind of distribution would work well, but it'll require some work - perhaps you could do something like let a new user's first request be assigned to a server, and then store off that specific cookie so that all future requests use that same cookie.
Yeah. That is what I just implemented.
One request I would make is that you might consider actually having the client app make the calls, and only make those calls when the user is using your app.
Yeah, this should be handled by the app sleeping, but I'll also see if I can detect when the screen is off (but not asleep), and throttle those requests as well.
Thanks for taking the time to give input on all of this.
Good stuff!
Sweet, no problem - and thank you for taking a look at those potential ways of throttling requests, I appreciate it!
The only pain to all of this is if that cookie identifier changes up on us. I'll be hitting some folks up to see how frequently we can expect this to change. My understanding was that this should be an extremely infrequent occurrence.
“Any response cookies whose names start with Q6dA7jjm” would work, given the names to date, but that’s a bit clumsy.
I spoke with some folks on this issue, and it sounds like the cookie is going to be changing out more frequently from now on - at least with every Bungie.net deployment.
This means I need to change my recommendation somewhat. I'm going to replace the content in the page about affinitization shortly. The new suggestions will be as such:
If you are accessing PGCR data, you should hit stats.bungie.net. It will naturally make un-affinitized requests, which will make it such that you don't have to manage your affinitization cookie.
If you are accessing live data from a client application, retain any cookies we send you as is our existing advice. You'll be affinitized and data will be consistent for your user.
If you are accessing live data in a batch-style application where you have a server farm pulling live data, then _instead of_ the advice above about storing the affinitization cookie, store information such that a given user's data will always be requested by the same server on your farm, and then have that server resend whatever cookies it gets, as if it were a client application. Doing so will force your individual servers to be affinitized, and requests for a given user's data will also be affinitized, so the data will be consistent. This does mean that your requests won't be as easily spread out across all of our servers, but you'll at least have some spreading of load if you're working with a farm of servers on your end.
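The third case could be sketched as a stable user-to-server mapping on your own farm (hostnames here are placeholders):

```python
import hashlib

def farm_server_for(user_id, farm_servers):
    """Deterministically map a user to one of your own farm servers, so
    that all requests for that user always originate from the same machine
    (which then maintains its own affinitization cookies)."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(farm_servers)
    return farm_servers[index]

farm = ["worker-a.example", "worker-b.example", "worker-c.example"]

# Stable: the same user always routes to the same worker, while
# different users still spread across the farm.
assigned = farm_server_for("4611686018429783292", farm)
assert assigned == farm_server_for("4611686018429783292", farm)
print(assigned)
```

Note that plain modulo hashing reshuffles users whenever the farm size changes; a consistent-hashing ring would reduce that churn if servers come and go.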
I have updated the documentation here to reflect the gist of what I mentioned above:
https://github.com/Bungie-net/api/wiki/Affinitization:-benefits,-drawbacks,-how-to
Ideally there will be a future where you don't have to worry about any of this - but whether we get there depends on if we have the time and resources to work on it.
Not something that would happen overnight but wouldn't a distributed cache also solve the issue?
You could probably do without an affinity cookie as well, once that's in place.
However, I can imagine that with most requests being players checking their own data (I assume), the main benefit would be that the Bungie.net platform appears uniform no matter which gateway you go through, without really improving the caching itself.
Exactly the situation - and definitely one of the options that we've had floating in our backlog for some time now. But with our core use case being as you identified - players checking their own data - it's tough to justify the changeover with our current resources and competing priorities.
Any guidance on naming scheme for the affinity cookie? Will it at least always start with the same string? or is it machine generated / potentially random each time?
Unfortunately, we can't give any guidance on that which would be reliable. For now there's a prefix that's been consistent for the last few iterations, but that could potentially change in future iterations. If you try the approach mentioned above (keep a user always making requests from a single one of your own servers/single client instance, and honor any Set-Cookie headers by continuing to send them in subsequent requests until they expire - or other solutions that honor Set-Cookie headers and keep passing them consistently for a user, without paying particular attention to which cookie(s) are being sent), it should do the trick for your purposes.
Right now, it appears that if you pass the affinity cookie to the server, it is not returned in the response. This makes it difficult to send the affinity cookie on each request, since I don't know its name and thus can't save it to resend on each request.
The other option is to capture all of the initial cookies and then just send those on every request (since they contain the affinity cookie). However, that means I'll ignore any new/updated cookies returned by the server on other requests. Not sure if those change, or if that will cause issues. (This is what I'm doing right now.)
I hope I'm not coming across as difficult. I just want to make sure I am using the APIs correctly. Do we know if the cookie follows a naming scheme as suggested above?
One warning here — the way browsers and many scripting agents implement “cookie jars” is by fully implementing cookie support, and Bungie is suggesting that we may need to do so here:
1) When you receive a Set-Cookie header, you store that cookie until it expires, overwriting any previous value stored
2) You transmit all stored, non-expired, Cookie headers to the server with every request
Your suggestion that you “ignore any new / updated cookies” is not a valid implementation of cookie handling, and you'll eventually find it doesn’t work out for you when your cached affinity cookie expires after the server transmits a new one.
I encourage taking the time to implement enough cookie handling to meet 1) and 2) above for this purpose. Many languages offer this as a builtin or pref’able automatic behavior.
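To make 1) and 2) concrete, here's a toy dict-based jar sketch; in practice you should lean on your HTTP library's built-in jar (e.g. Python's http.cookiejar) rather than rolling your own, and the cookie names below are made up:

```python
import time

class MiniCookieJar:
    """Toy cookie jar implementing the two rules above:
    1) a Set-Cookie overwrites any previously stored value, and
    2) only stored, non-expired cookies are sent with each request."""

    def __init__(self):
        self._cookies = {}  # name -> (value, expires_at or None)

    def set_cookie(self, name, value, max_age=None):
        """Store/overwrite a cookie; max_age in seconds, None = session."""
        expires_at = time.time() + max_age if max_age is not None else None
        self._cookies[name] = (value, expires_at)

    def cookie_header(self):
        """Build the Cookie header from all stored, non-expired cookies."""
        now = time.time()
        live = {
            name: value
            for name, (value, expires_at) in self._cookies.items()
            if expires_at is None or expires_at > now
        }
        return "; ".join(f"{n}={v}" for n, v in sorted(live.items()))

jar = MiniCookieJar()
jar.set_cookie("Q6dA7jjmAffinity", "server-27")  # hypothetical name
jar.set_cookie("session", "abc", max_age=-1)     # already expired: dropped
jar.set_cookie("Q6dA7jjmAffinity", "server-12")  # rule 1: overwrite
print(jar.cookie_header())  # → Q6dA7jjmAffinity=server-12
```

The key point is that the client never needs to know which cookie is the affinity cookie; it just stores and replays everything the server sets.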
Aye, if it helps - floatingatoll elaborated further on the approach I'm recommending. Check out whatever HTTP client library you're using and see if it supports cookie handling in an automatic manner, as I am guessing this will be the path of least resistance for your situation. It's definitely a bit more involved - but unfortunately this use case is pretty far from the intended one in the API, so it's a bit less convenient to do what you're intending consistently.
For now I'm going to close this out, let me know if you run into any new problems!