Hello all, I appreciate your help in advance!
With AWS Ruby SDK v3 modularization, at peak times we are releasing almost 200 service gems on RubyGems.org almost every day using the rubygems push API. Starting last Friday, we have been seeing 429 (Too Many Requests) responses and failing to publish gems, and we need to wait a long time (at least hours) before the 429s ease up. We are also seeing a lot of 502s.
Is this a signal that the "awscloud" account has been blacklisted due to our release speed? Are there any suggestions for retry control or a recommended publishing pace from your side? If our account is blacklisted, could anyone help move it off the blacklist?
Let me know if more info is needed, thanks!
Is this similar to these issues?
https://github.com/rubygems/rubygems.org/issues/2074
https://github.com/rubygems/rubygems.org/issues/1678
Can we request an increase to the API rate limit for the "awscloud" account, if possible?
Looking at #1885, would you mind adding our account to an exemption list or granting it a higher rate limit? Thanks so much in advance!
There are no blacklists or whitelists. The limits are the same for everyone.
We have added backoff to our push limit (it was 100 pushes/10 min/IP). The tiers are as follows:
- 100 req in 10 min
- 200 req in 100 min
- 300 req in 1,000 min
- 4,000 req in 10,000 min
The response has a `Retry-After` header with the time left until the reset; you could use it for backoff on the client end.
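For illustration, a minimal sketch of that client-side backoff in Ruby, assuming the standard `POST /api/v1/gems` push endpoint; the retry cap, the `Content-Type`, and the 502 handling below are illustrative assumptions, not rubygems.org recommendations:

```ruby
require "net/http"
require "uri"

# Push a built .gem file to rubygems.org, honoring the Retry-After
# header on 429 responses. max_retries is an arbitrary safety cap.
def push_gem(path, api_key, max_retries: 5)
  uri = URI("https://rubygems.org/api/v1/gems")
  max_retries.times do
    req = Net::HTTP::Post.new(uri)
    req["Authorization"] = api_key
    req["Content-Type"] = "application/octet-stream"
    req.body = File.binread(path)

    res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
    return res if res.is_a?(Net::HTTPSuccess)

    if res.code == "429"
      # Sleep for exactly the time the server says is left until reset.
      wait = res["Retry-After"].to_i
      warn "429 for #{path}, sleeping #{wait}s"
      sleep(wait)
    else
      # e.g. transient 502s; a fixed pause here is just a placeholder.
      warn "#{res.code} for #{path}, retrying shortly"
      sleep(30)
    end
  end
  raise "giving up on #{path}"
end
```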
@dwradcliffe @sonalkr132 Thanks so much! That's helpful. I'll update our gem publishing to run in batches with sleeps, and use the `Retry-After` header for retry timing for now.
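For reference, a rough sketch of the batching approach, assuming gems built into `pkg/` and shelling out to `gem push`; the batch size and interval are our own knobs, not values recommended by rubygems.org:

```ruby
# Push gems in fixed-size batches, sleeping between batches to stay
# under the 100-requests-per-10-minutes tier.
BATCH_SIZE = 70
INTERVAL   = 10 * 60 # seconds between batches

gem_files = Dir.glob("pkg/*.gem").sort
gem_files.each_slice(BATCH_SIZE).with_index do |batch, i|
  sleep(INTERVAL) if i.positive?
  batch.each do |path|
    # `gem push` exits non-zero on failure; a real script would
    # inspect the output and retry on 429/502.
    system("gem", "push", path) or warn "push failed for #{path}"
  end
end
```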
A quick follow-up question: would it be possible to increase the limit per account in the future?
With the growing number of AWS gems, if too many requests fail with errors like 502 and need to be retried, we could easily end up in the 300 req in 1,000 min tier, introducing hours of latency per release.
Would we perhaps have to use more hosts to do the publishing?
Another quick question: are different API call types counted together or separately? For example, is the limit 100 GET requests in 10 minutes, or 100 (GET + POST) requests in 10 minutes?
> I'll update our gem publishing to run in batches with sleeps
I did so too :)
I determined this mostly by trial and error, though; I have significantly fewer gems than you've described here.
One reason I personally sometimes publish lots of new versions is that I work on gems with lots of smaller changes; I like to get the smaller changes published and move on quickly, rather than wait until everything is finished and then push one big release.
I think it would make sense to allow some more fine-tuning. Personally I don't need any of this, but I can see where people, or perhaps companies, may need a bit more flexibility than the hardcoded limit approach allows. (I can't really evaluate this, though; I guess you folks are the ones who can determine better what you'll ultimately need. For my small use cases, rubygems.org's behaviour as it is is just fine, even with the rate limit; I just added sleep() calls for batch removal of my older gem versions. :D)
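Something like this, for what it's worth; the gem name and version list are made up for illustration:

```ruby
# Yank older versions one at a time, sleeping between calls to stay
# well under the 100-requests-per-10-minutes tier. Gem name and
# versions below are placeholders.
old_versions = %w[1.0.0 1.0.1 1.0.2]
old_versions.each do |v|
  system("gem", "yank", "my_gem", "-v", v) or warn "yank failed for #{v}"
  sleep(60)
end
```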
A quick update: we are pushing 70 gems per batch with a sleep interval of 10 minutes, but we still got hard-throttled with a Retry-After of 216000000 seconds by the 2nd or 3rd batch. It looks like #2087 has already been deployed, though?