There is no elegant way to implement throttling for GitHub API calls through PyGitHub. The best approach I can find to make sure my app doesn't hit the rate limits is to check before every call: maintain a count of the API calls made, continuously check whether the limit has been reached, and refresh the rate limits periodically. It is not simple to implement a wrapper class or to apply decorators/descriptors to the calls, because there are various classes (like Github, Repository, Releases, etc.) that can make a call to the API endpoint. Implementing this feature inside PyGitHub to throttle/rate-limit/delay calls seems like the best approach IMO, since every call to the REST API can be tracked there. Would such a feature be encouraged? Does it make sense to work on a pull request?
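For reference, here is a minimal sketch of the check-before-every-call approach described above, built on PyGithub's `get_rate_limit()`. The token string and the `wait_for_quota` helper are placeholders of mine, not PyGithub APIs, and the datetime handling is hedged because `reset` is naive UTC in older releases and timezone-aware in newer ones:

```python
import time
from datetime import datetime, timezone

from github import Github

g = Github("MY_ACCESS_TOKEN")  # placeholder credentials

def wait_for_quota(gh, min_remaining=10):
    """Block until at least `min_remaining` core API calls are available.

    Note: the /rate_limit endpoint itself does not count against the
    core quota, so polling it here is safe.
    """
    core = gh.get_rate_limit().core
    if core.remaining < min_remaining:
        reset = core.reset
        # reset may be naive UTC (older PyGithub) or timezone-aware (newer).
        now = datetime.now(timezone.utc) if reset.tzinfo else datetime.utcnow()
        time.sleep(max((reset - now).total_seconds() + 1, 0))

# Usage: guard each call (or burst of calls) by hand -- which is exactly
# the repetition that built-in support would remove.
wait_for_quota(g)
repo = g.get_repo("PyGithub/PyGithub")
```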
anyone?
See also #1233 for excessive requests.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
PyGithub is great, many thanks!
Is there a convenient way to instruct PyGithub not to throw a rate limit exceeded exception, but to back off and retry the operation? Otherwise I have to do that myself and wrap each of my calls, which is possible but feels wrong: I would then be implementing framework/library functionality that belongs in the actual framework/library, in this case PyGithub, rather than every user doing the same in their own code.
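In the meantime, here is a sketch of the per-call wrapping described above: a decorator (`with_backoff` and `sleep_until_reset` are made-up names, not PyGithub APIs) that catches PyGithub's `RateLimitExceededException`, sleeps until the rate-limit window resets, and retries:

```python
import time
from datetime import datetime, timezone
from functools import wraps

from github import Github, RateLimitExceededException

g = Github("MY_ACCESS_TOKEN")  # placeholder credentials

def sleep_until_reset(gh):
    """Sleep until the core rate-limit window resets."""
    reset = gh.get_rate_limit().core.reset
    # reset may be naive UTC (older PyGithub) or timezone-aware (newer).
    now = datetime.now(timezone.utc) if reset.tzinfo else datetime.utcnow()
    time.sleep(max((reset - now).total_seconds() + 1, 1))

def with_backoff(gh, max_retries=3):
    """Retry the wrapped callable after backing off on rate-limit errors."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for _ in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except RateLimitExceededException:
                    sleep_until_reset(gh)
            return func(*args, **kwargs)  # final attempt; let it raise
        return wrapper
    return decorator

@with_backoff(g)
def repo_names(login):
    return [r.name for r in g.get_user(login).get_repos()]
```

This works, but it has to be repeated around every call site, which is the boilerplate the comment above is arguing the library should absorb.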