On macOS with both python and python3 installed, pipenv hangs with this Pipfile:
[packages]
awacs = "*"
awscli = "*"
aws-shell = "*"
botocore = "*"
jetstream = "*"
mycli = "*"
sops = "*"
troposphere = "*"
troposphere-cli = "*"
[requires]
python_version = "3.6"
Here's what it shows:
$ pipenv install --three
Creating a virtualenv for this project...
⠋Running virtualenv with interpreter /usr/local/bin/python3
Using base prefix '/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6'
New python executable in /Users/nikolay/.local/share/virtualenvs/aws-0MMXWS-b/bin/python3.6
Also creating executable in /Users/nikolay/.local/share/virtualenvs/aws-0MMXWS-b/bin/python
Installing setuptools, pip, wheel...done.
Virtualenv location: /Users/nikolay/.local/share/virtualenvs/aws-0MMXWS-b
No package provided, installing all dependencies.
Pipfile found at /Users/nikolay/Projects/DevOps/aws/Pipfile. Considering this to be the project home.
Pipfile.lock not found, creating...
Locking [dev-packages] dependencies...
⠋Locking [packages] dependencies...
Notice the leftover cursor character here as well:
⠋Running virtualenv with interpreter /usr/local/bin/python3
Hey @nikolay, thanks for taking the time to open this. Unfortunately, I'm currently unable to reproduce the problem locally. Could you specify which version of macOS you're using, and the version of pipenv as well?
The locking procedure may take a while, but it shouldn't take significantly longer than a minute or two on modern hardware. How long did you leave the task running? Does this also happen when you run pipenv lock?
My macOS version is 10.12.3 (16D32).
I canceled it after more than 30 minutes. The same packages install in a couple of minutes with pip install -U -r requirements.txt.
It happens with pipenv lock, too, unfortunately.
So, without being able to reproduce this on my end, all I can do is speculate at this point. My guess is either something in do_download_dependencies or get_downloads_info. Are you comfortable cloning the pipenv repo and doing some debugging here to see where the code is hanging?
If not, I'm not sure if we can proceed any further here. My only immediate hunch is the proper_case call to https://pypi.org/simple/ not timing out. We should probably add a timeout here, but I'd like to verify where the failure is happening before we start making changes.
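To illustrate the kind of change I have in mind, here's a rough, hypothetical sketch of adding a timeout to that request (this is not proper_case's actual code, just the pattern):

import requests

name = 'requests'  # example package name

# Give the index lookup an explicit timeout so a stalled connection raises
# requests.exceptions.Timeout instead of blocking the lock step forever.
try:
    r = requests.get('https://pypi.org/simple/{0}/'.format(name), timeout=5)
    r.raise_for_status()
except requests.exceptions.Timeout:
    r = None  # treat a slow index the same as a failed lookup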
Hey @nikolay, since I'm unable to reproduce this, and we don't have any meaningful way to move forward here, I'm going to close this out. Please feel free to reopen this if you have further details or are able to pinpoint where things are hanging. Thanks!
I've noticed this happening as well. Here's how I reproduced and investigated it:
In cli.py I added the following:
import faulthandler
import os
faulthandler.register(14)  # 14 == SIGALRM
print("kill -s ALRM {}".format(os.getpid()))
Then I ran pipenv lock --verbose and the last few lines it printed before hanging indefinitely were these:
numpy==1.13.3 requires numpy==1.13.3
incremental==17.5.0 requires incremental==17.5.0
werkzeug==0.12.2 requires Werkzeug==0.12.2
------------------------------------------------------------
Result of round 5: stable, done
^C
Aborted!
It seemed to have completed, but that ^C is me having to CTRL-C the process and kill it.
I ran pipenv lock --verbose again, waited until it got to that point, and then ran kill -s ALRM {the pid}, which printed:
Result of round 5: stable, done
Thread 0x000070000ab73000 (most recent call first):
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 293 in wait
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/queue.py", line 164 in get
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/concurrent/futures/thread.py", line 64 in _worker
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 862 in run
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 882 in _bootstrap
Thread 0x000070000a670000 (most recent call first):
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 293 in wait
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/queue.py", line 164 in get
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/concurrent/futures/thread.py", line 64 in _worker
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 862 in run
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 914 in _bootstrap_inner
File "/Users/jackdanger/.pyenv/versions/3.5.3/lib/python3.5/threading.py", line 882 in _bootstrap
Current thread 0x00007fffce4233c0 (most recent call first):
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/urllib3/util/connection.py", line 73 in create_connection
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/urllib3/connection.py", line 141 in _new_conn
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/urllib3/connection.py", line 284 in connect
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/urllib3/connectionpool.py", line 850 in _validate_conn
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/urllib3/connectionpool.py", line 346 in _make_request
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/urllib3/connectionpool.py", line 601 in urlopen
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/requests/adapters.py", line 440 in send
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/requests/sessions.py", line 618 in send
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/requests/sessions.py", line 508 in request
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/requests/sessions.py", line 521 in get
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/utils.py", line 481 in resolve_deps
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/cli.py", line 1083 in do_lock
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/cli.py", line 1974 in lock
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 535 in invoke
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 895 in invoke
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 1066 in invoke
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 697 in main
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 722 in __call__
File "/Users/jackdanger/.virtualenvs/tensorflow-1.0-DYoUwvq2/bin/pipenv", line 11 in <module>
The last line of pipenv before going into the requests library is utils.py:481 in the resolve_deps method, which looks like this:
478 if 'python.org' in '|'.join([source['url'] for source in sources]):
479 try:
480 # Grab the hashes from the new warehouse API.
481 ------> r = requests.get('https://pypi.org/pypi/{0}/json'.format(name))
482 api_releases = r.json()['releases']
483
484 cleaned_releases = {}
That requests.get is likely hanging indefinitely, so I changed it to requests.get('https://pypi.org/pypi/{0}/json'.format(name), timeout=5) and, while it definitely hung for a little while, the process did finish correctly the next time through:
Result of round 5: stable, done
Updated Pipfile.lock (9a1095)!
I've opened #885 to fix this with a (more generous) 10-second timeout.
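For reference, here's a rough sketch of what the patched call could look like; name and api_releases come from the excerpt above, while the error handling is my own assumption rather than the exact change in the PR:

import requests

name = 'numpy'  # example package name from the lock output above

try:
    # Grab the hashes from the warehouse API, but give up after 10 seconds
    # instead of hanging indefinitely on a stalled connection.
    r = requests.get('https://pypi.org/pypi/{0}/json'.format(name), timeout=10)
    api_releases = r.json()['releases']
except (requests.exceptions.RequestException, ValueError):
    api_releases = {}  # fall back to no warehouse hashes for this package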
FYI - in my case it was hanging because Pipfile.lock was read-only. Before blaming pipenv, check whether it's possible to write something to that file. Too bad I only realized that after hours spent trying to figure out what the heck was going on. Cheers!
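A quick way to rule that out before digging into pipenv itself (a minimal check, assuming it's run from the project directory):

import os

# Both values should be True; False from os.access(..., os.W_OK) means the
# lock file is read-only and pipenv can't write the updated Pipfile.lock.
lock_path = os.path.join(os.getcwd(), 'Pipfile.lock')
print('exists:', os.path.exists(lock_path))
print('writable:', os.access(lock_path, os.W_OK))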