If a download is interrupted halfway through, there is no way to continue it in a subsequent pip run; every download starts afresh.
For those like me unlucky enough to have long commutes with intermittent internet, it's painful.
Closing this; I don't believe the benefit outweighs the cost of implementing it. We would accept a PR for this, but I don't think tracking the issue is valuable.
I'm struggling with this. I moved to a country with shitty internet and I'm trying to download a massive 170 MB package. Is it possible to download the file manually with wget and get pip to use the downloaded file instead of trying to download a new one?
@minthos Yes. You can download a distribution manually and just give pip the path to the downloaded file and it'll install it.
pip install /path/to/my-large-file.tar.gz
I have confirmed it works. Thank you.
Qt's "pyside2" package is offered via download.qt.io which limits download speed to ~50kb/s (it seems it does that server-side!) and the package is ~100MB. This means a download of 30-60 minutes people need to start all over if there's just a minor hiccup.
The formerly-Disney, now open-source 3D engine panda3d (a Python 3D engine) is similarly huge and is changing its default install method to pip; I assume this might cause similar problems.
Is there any chance this might be revisited? I think this may be a bigger problem for a subset of your users than you might realize...
@JonasT Hi!
Honestly, unless someone finds the time to implement this and submit the PR, I doubt that this is going to happen any time soon.
@pradyunsg How would one do that? And does it make sense to work on a closed issue? The simplest option for me is to keep hammering pythonhosted.org until pip succeeds at installing Qt5.
As FreeDownloadManager is also 40+ MB and my mobile connection means downloads take anywhere from 5 minutes to 3+ hours, I'll use the HTTP Range header with requests, the easy HTTP(S) library: https://stackoverflow.com/questions/22894211/how-to-resume-file-download-in-python
import requests

def resume_download(fileurl, resume_byte_pos):
    resume_header = {'Range': 'bytes=%d-' % resume_byte_pos}
    # Note: keep TLS verification on; the SO answer's verify=False is unsafe.
    return requests.get(fileurl, headers=resume_header, stream=True, allow_redirects=True)
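To show how that Range header fits into an actual resumable download, here is a minimal stdlib-only sketch (names like `resume_header_for` and `download_with_resume` are mine, not pip's; the same logic applies with requests). It assumes the server honors Range requests and answers 206 Partial Content:

```python
import os
import urllib.request

def resume_header_for(path):
    """Range header resuming from the bytes already on disk; {} if none."""
    pos = os.path.getsize(path) if os.path.exists(path) else 0
    return {'Range': 'bytes=%d-' % pos} if pos else {}

def download_with_resume(url, path, chunk_size=64 * 1024):
    """Append to a partial download at `path`, or start fresh.

    Sketch only: it appends when the server answers 206 Partial Content
    and otherwise rewrites the file from the beginning.
    """
    headers = resume_header_for(path)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=30) as resp:
        mode = 'ab' if headers and resp.status == 206 else 'wb'
        with open(path, mode) as f:
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                f.write(chunk)
```

Run in a loop with retries, the file grows across interruptions instead of restarting at byte 0.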
@CTimmerman if it's as easy as you claim, please open a PR with the feature ^^
And does it make sense to work on a closed issue?
Not every PR needs to close an issue. :)
As @dstufft said, we would accept a PR for it.
Ah, here it is. I'll drop a link to my complete code here in case someone knows how to add this to pip. I've only found a duplicated line (prepare.py:207) and a bug (no retry by default?): https://gist.github.com/CTimmerman/ccf884f8c8dcc284588f1811ed99be6c
@CTimmerman You should follow https://pypi.org/security/ to report security issues.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.