Pip: Deprecate pip, pipX, and pipX.Y

Created on 6 Oct 2015 · 102 comments · Source: pypa/pip

Currently, people are regularly running into problems because of confusion about which particular Python a given invocation of pip is going to manage. There are many cases where it's unclear which Python a particular pip* command will affect:

  • pip3 install --upgrade pip will overwrite pip and possibly switch it from pointing to 2.7 to 3.5.
  • pip3 isn't specific enough, you might have 3.4 and 3.5 installed.
  • pip3.4 isn't specific enough, you might have 3.4.0 and 3.4.1 installed.
  • pip3.4.0 isn't specific enough, you might have multiple copies of 3.4.0 installed in various locations.
  • We don't have a good answer for what the pip binary should be called on alternative Python interpreters like PyPy (pip-pypy? What if we have two versions of PyPy? pip-pypy2.6? What if we have PyPy and PyPy3? pip-pypy-2.6 and pip-pypy3-2.6?).

Overall, it's become increasingly clear to me that this is super confusing to people. Python has a built-in mechanism for executing a module via python -m. I think we should switch to using this as our preferred method of invocation. This should completely eliminate the confusion that is caused when python and pip don't point to the same version of Python, as well as solve the problem of what to call the binary on alternative implementations.

In addition to the confusion, we also have the fact that pip install --upgrade pip doesn't actually work on Windows because of problems with the .exe file being open. However python -m pip install --upgrade pip does work there.

I see only three real downsides:

  • There is a lot of documentation, examples, code, and other inertia built around just pip, and this change will be churn for all of that.
  • It's 10 more letters to type.
  • It doesn't work on Python 2.6.

For the first of these, I think the answer is to just have a very long deprecation cycle, on the order of years. I wouldn't even put a specific date on its removal; I'd just add the warnings and re-evaluate in the future. Luckily we've shipped support for python -m pip for quite some time, so (mostly) people won't need to deal with version differences.

The second of these I don't really have a great answer for, I think that 10 extra letters probably isn't that big of a cost to pay for the reduced confusion and the default answer working on Windows. We could possibly offer a recipe in the docs to restore pip, pipX, and pipX.Y via shell aliases.

The last item is the biggest sticking point for me. As far as I know, Python 2.6 still has far too many users for us to drop it since, as of 6 months ago, it was still ~10% of the traffic on PyPI (source). The problem with Python 2.6 is that it only supports -m when the target is a module, not a package. I see four possible solutions to this:

  • Don't deprecate pip* on Python 2.6.
  • Add a module like pipcli.py that people can invoke like python -m pipcli instead of python -m pip.
  • Move pip/ to _pip/ and make pip.py.
  • Document that due to limitations of Python 2.6, it will need to be invoked as python -m pip.__main__.

I don't really like the pipcli idea; the other three all have pros and cons, but I think I could personally live with either not deprecating pip* on Python 2.6 and/or documenting that it needs to be invoked as python -m pip.__main__ on Python 2.6.

What do the @pypa/pip-developers think?

Labels: cli, backwards incompatible, maintenance

Most helpful comment

I tried to come up with something that satisfies (nearly) all the concerns raised in this issue with the status-quo. I think a good position to end up in would be:

  • _pip_ does not provide any CLI wrappers; only supports python -m pip
  • _pip-cli_ mimics python -m pip in-process as a pip wrapper
  • Include _pip-cli_ in _virtualenv_'s installed-by-default packages

Reasoning for choices here:

  • Ease-of-use inside a virtualenv; it's still pip.
  • Windows upgrade UX improves

    • pip install --upgrade pip will work on Windows :tada:

    • would still need to do python -m pip install --upgrade pip-cli though

    • slightly less of a problem since updates to that might be much less frequent

  • Existing documentation says "pip install ..." which will still work in venvs.
  • Not providing pipX and pipX.Y

    • they're redundant in a virtualenv + issues listed in OP; they don't scale well

    • kinda sorta avoid conflicting with the distro's executables to an extent (when the distro doesn't provide unqualified variants)

Inputs and comments welcome on the above.


Other Notes:

  • I don't see any compelling arguments for pipX and pipX.Y. They should probably get deprecated/removed.
  • Related: #4625 (Overwriting existing wrappers)

    • relevant here: this would make _pip-cli_ installation behave nicer with the distro-installed pips

  • Having the "sanity check" checking equivalence of python -m pip and pip, shipped as a part of _pip-cli_ sounds tempting to me.

All 102 comments

I like the move from pip to _pip;
that way pip's implementation goes to a more "private" namespace.
The expense is breaking tools that reach into pip's internals.

The other expense is that if we ever want to make a public API we're either limited to having a single namespace (whatever is in pip.py) or we need to change it back to a package (and possibly break Python 2.6 again, unless we've deprecated it by then).

Of course, we may never make a public API in which case, the point is moot.

I can't help but think that @warsaw and @ncoghlan probably have some opinions on this too.

Maybe @bkabrda too! and @tdsmith

Don't underestimate the power of that first point. The inertia is _high_: lots of tools will assume pip, and _lots_ of documentation will be wrong. Having a long deprecation cycle is basically mandatory here. Otherwise, I think this is a good idea: +1.

-1 on removing pip. I would have to change all of my deployment scripts.
+1 on removing pipX and pipX.Y

On Tue, Oct 6, 2015, at 08:16 AM, Cory Benfield wrote:

Don't underestimate the power of that first point. The inertia is
_high_: lots of tools will assume pip, and _lots_ of documentation
will be wrong. Having a long deprecation cycle is basically mandatory
here. Otherwise, I think this is a good idea: +1.


Don't underestimate the power of that first point. The inertia is _high_: lots of tools will assume pip, and _lots_ of documentation will be wrong. Having a long deprecation cycle is basically mandatory here.

Yea, completely agree. I essentially assume that we should not have a defined removal date (and possibly never) and just have it log a message to stderr.

I essentially assume that we should not have a defined removal date (and possibly never) and just have it log a message to stderr.

I don't think that will work. Just printing annoying warnings with no defined "your shit will break no later than X" date doesn't really help: people will just ignore the warnings. I think if you want to do this you should decide a date (possibly one _far_ away, but still). One possible date to start the discussion: when the last RHEL LTS release with Python 2.6 stops being supported (that's very far away still, but worth discussing).

-1 on removing pip. I would have to change all of my deployment scripts.
+1 on removing pipX and pipX.Y

I don't think it makes sense to deprecate (not talking about removal any time soon) pipX and pipX.Y without also doing it for pip since that is arguably the worse offender of them all.

One possible date to start the discussion: when the last RHEL LTS release with Python 2.6 stops being supported (that's very far away still, but worth discussing).

This is a good idea.

One possible date to start the discussion: when the last RHEL LTS release with Python 2.6 stops being supported (that's very far away still, but worth discussing).

That's November 30, 2020 for the end of RHEL 6 Production 3 phase. They have a super special extended life cycle beyond that, but perhaps we could just target 2020 and if we roll around to 2020 and Python 2.6 is still somehow in wide support, we push it back further.

Another alternative would be to stop requiring that pip execute in the same Python environment that it is installing things into...

To be honest, I'm not sure how pip install -p python2.7 is any better than python2.7 -m pip install. We have to inspect the Python we're installing into to get information from it, so either we're going to subshell into that Python to shuffle data back and forth (like my half done virtualenv rewrite does) or we'll need to continue to be executed by the same Python environment. Feels like shuffling deck chairs more than anything else.

For that particular idea the main benefit would be that you would only have to upgrade pip once.

That much is true; the flip side is that it makes it (somewhat) harder to support versions of Python older than what pip itself supports, since the installs are no longer independent (or you'll need to keep around an older copy installed somewhere else). On the other hand, pip could more easily drop support for running the main pip binary command in a particular Python, while keeping compatibility for installing _into_ that version of Python. It would (continue to) enable bundling all of pip into a single zip file that can be executed somewhat independently of actually having it installed.

It doesn't address the fact that pip install --upgrade pip on Windows blows up because the OS has a handle open to pip.exe though, which I don't think can be solved without using python -m, at least if I understand @pfmoore correctly.

I think @sYnfo wants to comment on this more than I do, since I no longer maintain Python in Fedora/RHEL.

Ah, that's right, I forgot. Sorry!

(But I personally think that even if these are removed, we'll still provide them at the distribution level in a form that is best for the set of Python interpreters that we ship; at least that was my first idea when I read the proposal... We have a general policy for Python executables that mandates this in Fedora.)

  • +1 for deprecation - but I'd be happy enough simply to deprecate in documentation, not make the scripts themselves moan at the user.
  • It's only 6 more letters on Windows - py -m pip :-)
  • For 2.6, I'm OK with either letting people continue to use pip or advising python2.6 -m pip.__main__. I don't think it's worth making changes to pip to give them any other solution.

The inertia issue is huge, and I don't think we should fight it directly. Rather, we should switch the documentation to use the "python -m pip" form, and make that form official PyPA policy (by which I mean we take pains to use that form consistently in whatever we post, etc). Maybe offer PRs for the install documentation of well-known projects to switch them to the new form. We can worry about formally deprecating and/or removing the scripts once the python -m pip form starts to gain a bit of traction in common usage.

I think my success criterion is making this the incantation that people get from StackOverflow. Shoot for the moon, etc etc.

I think publishing a linkable intent-to-deprecate message with a rationale may help convince third-party maintainers to accept documentation PRs.

The messaging here is tricky, because deprecating pip doesn't mean deprecating _pip_...

I think the only real alternative is Daniel's suggestion. I think the current situation sucks and I can't really think of a way to save the attempt to manage pip versions using a suffix or prefix or anything that doesn't end up actually specifying which interpreter you want to run under.

I'm +1 for deprecation—it seems to make a whole lot of sense from upstream point of view and I don't see any issue this could cause downstream.

In Fedora this would mean shipping all the binaries during the deprecation period, as we do now; and not shipping them at all afterward. Perfect sync with upstream, hopefully. :) I'll make sure all the Fedora docs get updated, when this goes official.

( @bkabrda If I understand the guidelines correctly, they only mandate shipping all the MAJOR.MINOR executables _iff_ there are any executables in the first place, this issue seems to be about removing the pip executables entirely, right? )

Also +1 to either keeping pip or using python -m pip.__main__ with python 2.6.

A couple of points that haven't come up:

  • Doesn't this just move the problem to python? Why is python (of various versions and in various virtualenvs) any easier to keep track of than pip?
  • If you're using a virtualenv (which most people should be), pip unambiguously refers to the active one. An alternative solution would be to encourage people to always use venvs, which solves other problems as well, e.g. dependency conflicts from dropping everything into a global environment.

I'm not big on the idea, but I don't think I've ever run pip outside of virtualenv, which is usually activated into my current shell, so the extra typing doesn't gain me anything (I'm selfish!). Of course, if it's not activated, I probably have to type a lot more than that anyway...

How would people feel about making an exception for virtualenvs? There's no mystery about which python it is using, but then again it might be too confusing to have it work differently between a virtualenv and the system install, so I'm feeling pretty humble about its quality as a suggestion. I'm also not sure there's a good solution for checking that.

Also,

pip3.4 isn't specific enough, you might have 3.4.0 and 3.4.1 installed.

Does this really come up often? I'm curious how you're distinguishing between them if so, since I didn't think python would normally install itself any more specific than by the pythonX.Y name (and unless they have different prefixes, sounds like it would also try to share site-packages anyway).

Oh, I guess that does apply to having multiple 3.4.0 installs too, but then it seems like you'd definitely be using full paths to distinguish whether you're using "pip" or "python -m pip", and since you're using full paths anyway, the argument against the extra wordiness goes away.

I may be a little disconnected here, but why not replace pip with an alias that was effectively something like:

alias pip='/usr/bin/env python -m pip'

Or a script that was akin to this to be installed into /usr/local/bin/pip.

That way we don't lose pip, and it universally works errywhere.

Also not sure if that solves the problems you're facing, just my $0.02. :rage3:
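
For concreteness, such a /usr/local/bin/pip script could be a tiny Python shim rather than a shell alias. This is only a sketch of the idea (not anything pip actually ships), forwarding its arguments to python -m pip for whichever interpreter ends up running the shim:

    #!/usr/bin/env python
    # Hypothetical /usr/local/bin/pip shim: defer everything to "python -m pip"
    # for the interpreter that runs this script.
    import subprocess
    import sys

    def main():
        # Forward all command-line arguments to pip run as a module.
        return subprocess.call([sys.executable, "-m", "pip"] + sys.argv[1:])

    if __name__ == "__main__":
        sys.exit(main())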

@erikrose
It does sort of shift the problem to python, but typically the confusion that I've seen arise stems from when python and pip disagree. Like you might have /usr/bin/python and /usr/local/bin/python, and if python points to /usr/local/bin/python and pip points to /usr/bin/python it's a recipe for confusion. So we completely eliminate the confusion caused by having the two disagree, by removing any ability for them to disagree. You're still left with confusion about what python means, but I don't think there's anything at all that can be done about that, and particularly not by us.

A virtual environment makes the issue _harder_ to hit, but I think that it's still possible. My gut tells me that new users (the ones most likely to get tripped up by this) are probably not going to be religiously using virtual environments (if they're using them at all).

@rschoon
We could possibly do that, it'd require a permanent special case inside of pip when pip installs itself (because there's no setup.py in a virtual environment) but it's certainly a possibility. My main fear with that is it feels like it'd just always be better to use python -m pip anyways because it works inside and outside of a virtual environment instead of having to remember to switch commands based on whether you're in one or not.

I'm not sure exactly how often the 3.4.0 and 3.4.3 thing comes up. I know that it's not super unusual on OSX since you have system provided Python sitting in /usr/bin/python and you might also install the Python.org installer or Homebrew or Macports or pyenv (or some combination of the above).

@mattrobenolt

Basically, because it's super confusing if you're doing something like running myvenv/bin/pip without activating the virtual environment first or if you have a copy of Python installed to a non standard location /opt/my-app and you run /opt/my-app/bin/pip and expect to manage /opt/my-app/bin/python.

it feels like a loss for virtual environments to lose a basic pip script when they don't suffer from this problem.

as for real pythons I guess I wouldn't mind seeing a more exact pip-<python binary path> type console script, and let the distro packagers tack on simpler scripts for system-managed pips.

-1 on deprecating pip, +1 on the others.
I want to expand on some framing thoughts I have, that might help for discussing what is such a large issue. Skip to the break if you just want to read my thoughts on solutions.

For a start, you want to look at _why_ we have this problem, and whether it is analogous to anyone else's. It comes from having and allowing multiple pythons on the one running system. So system package managers do not have this problem, because, for instance, you don't (can't) run debian jessie and debian wheezy live at the same time on one system; so it doesn't need to manage a libreoffice3.5 and a libreoffice4.3, for example. However, many other language package managers start having to deal with the same problem as pip's when they too have multiple versions installed. As @erikrose mentions, even python itself already runs into this issue of deciding what python now is when more than one is installed.

I also want to look at the issue from the POV of the majority of python users. Note that this has become a real pain point mostly (or more so) for people with more than 2 pythons installed. Otherwise pip would work, or simply pip2 and pip3 (and there probably wouldn't be enough inertia for everyone to start discussing it). Beyond that it starts getting really complicated. But I'd believe most python users are happy using just one python. Even if their system somehow gives them 2 at some point, or they manage to install multiple instead of upgrading (or removing the previous one and installing a new one) - if they got things right, they'd be fine with only one python going and, by extension, that python's pip. For all of these users, suddenly taking away pip makes absolutely no sense and is just painful.

The other big source of pain is when someone merely tries to install a new python over a previous existing one, but the story for the entire environment being migrated to the new one (or "taking over the old one") isn't there. In that case I want to make the distinction that this is the install story's fault, not the existence of multiple pythons. For instance someone hoping to "install a new python!" but not getting it on their path. Even uninstalling the old python, might do nothing about giving them the environment they want (the new python on their path).

Also note that while the number of users collecting problems with managing pips may be small, their complaints are the only ones heard. I'd venture that their opinions are probably the majority on this discussion as well (because they're the ones with the issue). The silent majority doesn't care until things get changed for them, but we should still look to represent their use case fairly. Not of course, that those complaints therefore can't be valid.


Now with that in mind, here's solutions I like:

  • Clearly pip<versionstuff> doesn't scale well at all as a solution, so I'm in agreement on removing it. Most of the time it's better to wait for a decent solution (if it exists) than to try to keep going with one that creates as many issues as it solves.
  • Stop replacing pip (or other pip2s or pip3s, even) without asking. Even system package managers ask beforehand; so should we. This way, at the very least, the user sees straight away that there's an issue and perhaps can make a decision for themselves what they want pip to be. One can do a lot with this - look at whose python installation the existing pip comes from; if it's the same one as the current pip trying to install itself, then this could be fine. Otherwise make the user say --yes or --replace or answer Y to a prompt. Note that this could help with the same problem with other userland-programs-that-come-from-pypi. Make sure the user wants the executable script replaced. Even if this means we have to wait a long time to tell people that they might have to interact after calling pip install by default (for scripted uses of pip), and give them time to add --replace-scripts (or w/e) to their callouts, so be it.
  • Start outputting some information about where things come from in an install! This could solve a lot of issues straight away. If I install a package with pip, pip doesn't speak about

    • what python it's on/from

    • what version it is, where its installed

    • where it is installing the new package(s)

This would all be super useful information to know. It will immediately show me if the pip I'm calling on the command line is not the actual pip that I want, which is what tricks a LOT of users. It will also show me I'm installing for the right python and into the right site-packages.

I especially believe that implementing the last two points above would remove a lot of average-user problems in relation to this issue. In many cases they would be empowered to know the problem and solution themselves.
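
As a rough illustration of the kind of context being asked for above (pip doesn't print this today), all of it is cheap to obtain from the standard library:

    import sys
    import sysconfig

    # Which python pip is running on/from, and its version.
    print("Using Python %s from %s" % (sys.version.split()[0], sys.executable))
    # Where a default (non --user) install will put packages.
    print("Installing into %s" % sysconfig.get_paths()["purelib"])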

One option I just thought of, once we actually remove the scripts (if that's what we do) we could just make a pip-cli package which restores them. This would both make it trivial for people to get the old behavior back (just install pip-cli) and make it easy to keep the scripts inside of virtual environments (just have them install pip-cli too).

This would both make it trivial for people to get the old behavior back (just install pip-cli) and make it easy to keep the scripts inside of virtual environments (just have them install pip-cli too).

As a point of reference, if you're not aware, this is how grunt works in the node world. Not sure if it's the best example, but it's a thing.

The tl;dr is you $ npm install grunt into your project for a local install, then $ npm install -g grunt-cli to give yourself a global $ grunt, which ends up just using the local install'd package.

From my understanding, that sounds similar to what you're proposing?

@mattrobenolt in a sense it's not helpful, because that is still only talking about one install of / one version of nodejs on your system. If you have two node installs, which one does the global grunt script belong to?

Node also has the "advantage" in this sense in that its package installs are location-local by default, which is the opposite of python. Even in a virtualenv, you are installing packages "globally", but inside in an isolated environment (instead of your system one).

If you have two node installs, which one does the global grunt script belong to?

It shouldn't technically matter, since it's just a shim into the real pip that's installed into your virtualenv or whatever you're doing.

In our case, it could install into system python, whatever that python is, it's an implementation detail. It just needs to shell out.

because that is still only talking about one install of / one version of nodejs on your system

You're assuming people don't use nvm or any other virtualenv-like tools? Same idea.

But again, my context is mostly limited to being a user, so I'm sure there are many contexts that I'm not taking into account. Not claiming to have the solution, just citing examples of other things in the wild that do similar things.

Even though the grunt-cli code is fairly stable, so it indeed "shouldn't much matter" who put it there, the point of contention is which grunt it calls. As I said, in node-land 99% of the time this will be a path-local grunt, and everything is decided for you. You already know what version of nodejs your project on your current path is using. However, unfortunately, that's not the case with python.

If I have python 2 and python 3 installed, and I call pip, even though this pip was provided by a pip-cli (the equivalent of npm install -g grunt-cli) from one of the two pythons, which python's pip should the global pip script call? Here we no longer have a path-local system to guide us.

+1 for Ivoz's take, but in particular:

Start outputting some information about where things come from in an install! This could solve a lot of issues straight away.

This would be very useful (both to novice users and to experienced users from other platforms) no matter what the decision regarding invocation syntax.

+1 on deprecating pip.+ without deprecating pip, but it also shouldn't be too much work to fix automation scripts, and in fact I think automation would benefit more from knowing exactly which version of python to run pip from. I haven't read all of the arguments yet, but I probably will and then return to give more opinions.

My first ever GitHub +1. I think it's fair to consider docs that _don't_ recommend python -m pip flawed, since this invocation is much less prone to failure on a hand-bodged Python install like the typical novice developer's laptop.

From my perspective: I'm in agreement about the motivations for this change. Django has plenty of analogous problems with people using the wrong python version to run django-admin; moving to the python -m approach for both pip and django-admin would be an elegant way to address this issue.

My only concern is bullet point 2: 10 more characters to write. From a UX perspective, I'm concerned about introducing boilerplate that needs to be typed in order to run _anything_. Especially when dealing with new users, having a "Just trust me" magic incantation format isn't ideal.

One suggestion (although it requires a change to Python, rather than PyPA): Make py (or some other shorthand) a shortcut for python -m. Yes, this means having 2 ways to invoke python, but you could defend it as "py runs modules, python runs code". The other downside to this would be that it would only benefit new python versions, unless it was backported to 2.7/3.[345].

@freakboy3742 yeah, I mean this also would mean that flake8, pep8, etc. should all move to this convention too (which given the fact that pyflakes is very dependent on the version of Python makes a bit more sense). py could also be distributed as a package for people looking to opt in early, but it conflicts with py.test's py module too if I remember correctly.

+1 on advocating for python -m pip as default and preferred, rather than just pip.

I've been teaching beginners (kids even) and introducing newbies to Python for quite a while now. In addition to the points by @dstufft , everything gets complicated when you get to virtualenv territory because the Python being used is not the default Python on the system path. _In particular_, these complexities get worse if using other Python distributions; e.g. with Anaconda Python, you can create a conda env without pip (e.g. you forget to conda install pip) and then the pip on the path happily continues to install in the root env and not your conda env. In the scientific space, many people are using Anaconda Python as their first, default Python.

In this scenario, using python -m pip ... will tell you if pip is not present in the _active_ python.

As for perceptions of discomfort, it also exactly mirrors other very common invocations, e.g. python -m pdb, python -m ipdb, python -m cProfile, python -m timeit, python -m pstats, python -m SimpleHTTPServer, python -m json.tool, python -m gzip, python -m filecmp, python -m zipfile, python -m encodings.*, python -m mimetypes, python -m tabnanny, python -m pydoc, python -m unittest, python -m calendar, and probably a bunch of others I don't know about.

I'm quite sure making this change would make pip easier to explain to beginners. I have to explain the -m switch anyway for pdb and cProfile so this change would be a net simplification for my students. (Venv would become much easier to teach too if all you had to do is call the correct python executable and not mess with paths and "activation", but that's a grumble for another time)

It would be useful to at least keep the pip command for the case in which it is intuitive and (mostly?) unambiguous: inside a virtualenv.

Doing that would also help with documentation inertia.

I quite like 'Move pip/ to _pip/ and make pip.py.' as an option. It seems viable, and while disruptive to the folk poking around in pip/ today, worth it to improve the user experience - particularly since we don't offer a public API today.

I'm a +0 on deprecating since it's not a usecase I've ever run into. Using python -m would be a lot of extra typing for users not familiar with aliases.

Since we're looking at a significantly long deprecation path is it necessary to come up with a hack for Python 2.6? Assuming a long enough deprecation path could we not assume that Python 2.6 will be of such a low usage that a hack is not necessary?

I think I agree with @audreyr - in a virtualenv, it's unambiguous, unadorned (no pip2/pip3/etc), and the Python layer of tooling (activate, et. al.) makes it so that you get the "right" binary without resorting to telling users to configure their shells.

I have an even more radical proposal though: what if you didn't even run pip as an installer? Increasingly often, what I want is virtualenv --requirement project/requirements.txt project; if I want to "install" a new thing, it's time for a new virtualenv. Of course I break this rule all the time, upgrading existing venvs, removing dependencies and such, but this is mostly a bad habit that I think I should get rid of, especially now that wheels make new-virtualenv-creation fairly fast.

pip as a command line is really nice to have. Nobody knows about virtualenv yet, and there are some utilities that you want to use outside a virtualenv. What if I want to install the latest Mercurial or Stackless Python with pip? I can do this right now.

I can never get the npm/bower incantations right on the first try. Even if you remove the executable, somebody will still look at some old docs hanging around in the net, and will use the OS provided python-pip and/or python3-pip, and it will fail. Then that somebody will still try to sudo pip install -U someoldpackageyoudontwanttoupgrade and oops, there goes the OS yelling at you. Yes, I've done that. Just replacing this with sudo python -m pip will not avoid this situation.

Wouldn't it be easier to focus on making pip behaviour about packages (and itself) --user installable by default, then get rid of/replace pip2/pip3/whatever with links to the latest pip, so that it's found first in the user's path (I already have mine in ~/.local/bin/), working from its own wheel, and safely tucked away from whatever the OS "thinks is best". I submitted a workaround to allow pip to work with virtualenv when it's set to user installs by default (https://github.com/pypa/virtualenv/issues/802) that also works with pyenv.

This is for people who don't really care which python version they are working with, and want something to "just work". Then some admin will still want to use "sudo pip install for everyone" and it just works. The user just tries "pip install for me" and it just works.

I'd rather not have to worry about "/whereismypython/python -m pip install -g --userDev --Ireallymeanit --pleaselistentome blah". And going back to "python setup.py install blah" seems to me like a step backwards.

-1 from me on dropping pip and its version specific symlinks, as the stick of deprecation needs to be wielded _very_ lightly.

We already have a significant ongoing problem with change fatigue in the Python ecosystem, as there are three major low level tech transitions (Python 2 -> Python 3, easy_install/eggs -> pip/wheels, unsafe by default -> secure by default) still in progress, and a _lot_ of work remaining in propagating those out through the redistributor channels (direct upstream consumers that actually come talk to us online are the tip of the iceberg when it comes to Python's user base). Adding a "pip" -> "python -m pip" transition on top of that isn't worth the pain right now (as a rough guesstimate, I'd say my opinion on that might change by 2017 or so).

However, I do think it's worth emitting a message whenever pip is run globally that states:

  1. Which Python version it is installing for
  2. Suggests " -m pip" to target a specific installation

For example:

"RuntimeWarning: no venv detected, installing into '/usr/lib/python2.7/site-packages'. Pass '--user' for user-specific installation, or run '<other_python> -m pip' to target a different runtime".

I also see an opportunity to tie the python -> py transition into the Python 2->3 migration, but that's a topic for python-ideas rather than here.

@cjrh +1 for advocating python -m pip especially in documentation of a PyPA preferred install for new deployments (esp. science and data science as well as education). conda and brew complicate things as Caleb mentions.

Agree with @audreyr in a virtualenv.

The transition can initially be documentation. @ncoghlan I think there is already a significant amount of pip/conda/brew troubleshooting being done by maintainers of third party projects in data science/science to walk end users through the many permutations. I agree emitting better warnings would be helpful too.

python -m pip install ...
pip install in a virtualenv

I also really like @glyph's suggested virtualenv --requirement project/requirements.txt project

Actually, all three approaches deemphasize version numbers in execution. This may actually be an unexpected benefit toward moving some projects to Python 3.

-1 on removing pip. I would have to change all of my deployment scripts.
+1 on removing pipX and pipX.Y

My reasons are that I have basically 2 use cases for using pip (which have somewhat already been mentioned by @audreyr and others here but I'll repeat):

  1. I'm inside a virtualenv and I know which python I want to execute (and will be executing): the virtualenv's python which I chose at the time of creation of that said virtualenv
  2. I want to install a package which is not provided by my distribution and will use pip to do so, in which case I don't really care which python I'm using, I am using the default system-wide python.

I realize these are just my personal use-cases but I would believe they are pretty common among people who use pip.
I've only used pip3 a couple of times to install some python3-only packages system-wide, which is a use-case that should slowly disappear (probably faster than the proposed deprecation warning, though) since distributions are slowly moving to python3 by default.

And to close this comment, I'm not for using a different syntax inside and outside virtualenv's, this will just be confusing for a lot of people (cf. @Samureus comment on how confusing npm is)

How about providing pip via a new command that only defers to python -m pip?

If the problem case is when pip and the current version of python differ, how about deprecating the use of the script in that case only? That is, it starts emitting deprecation warnings if you're doing that now, and later forbids it entirely.

Suggestion for specific check.

  1. If pip is being executed via a direct path, ignore this check entirely; e.g. venv/bin/pip will not trigger this check.
  2. For each pip name define an "equivalent python name". If the pip name starts with 'pip' this is pipname.replace("pip", "python", 1). i.e. we just transfer the suffix to python, so pip2 -> python2, pip3.4 -> python3.4, etc. If the pip name doesn't start with 'pip' this defaults to just being 'python'.
  3. If the corresponding python isn't equal to sys.executable after doing lookup and canonicalising paths, raise the warning. This should say something like "Warning: pip is installing into a different python than is on the path. You probably want to run {pythonname} -m pip.{__main__ if it's 2.6}", only better thought out than that. :-)
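
A rough sketch of that check, purely for illustration (the function names and the exact wording are made up, and it assumes Python 3's shutil.which for brevity):

    import os
    import sys
    from shutil import which

    def equivalent_python_name(pip_name):
        # pip -> python, pip2 -> python2, pip3.4 -> python3.4; anything else -> python
        base = os.path.basename(pip_name)
        return base.replace("pip", "python", 1) if base.startswith("pip") else "python"

    def should_warn(argv0):
        # 1. Invocation via a direct path (e.g. venv/bin/pip) never triggers the check.
        if os.sep in argv0:
            return False
        # 2. Look up the "equivalent" python on PATH.
        found = which(equivalent_python_name(argv0))
        if found is None:
            return False
        # 3. Warn if, after canonicalising paths, it isn't the interpreter running pip.
        return os.path.realpath(found) != os.path.realpath(sys.executable)

    if should_warn(sys.argv[0]):
        sys.stderr.write(
            "Warning: pip is installing into a different python than the one on "
            "your PATH. You probably want to run '%s -m pip'.\n"
            % equivalent_python_name(sys.argv[0]))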

+1 for the comments from @ncoghlan and @DRMacIver

Since the problem here seems to be "sometimes people are confused", can we start with some helpful log messages/warnings instead of trying to change basically every single Python project's README as the first step?

Is there a "recommended best-practices" for using pip somewhere? Like, "on Debian, install only python-virtualenv and python-dev and then always use virtualenv to get pip" (or whatever the best-practice is actually supposed to be).

Also, if the main issue is "when 'python' is different from what 'pip' points at, bad things happen" isn't that possible to detect and issue some sort of Fearsome Warning? Or even refuse to work without --do-the-bad-things or something?

Other than additional keystrokes to type python -m pip install ..., is there any technical reason not to recommend that end users (scientists and data scientists) use this approach?

@willingc afaik this command works everywhere.... except in ubuntu

@nanuxbe Thanks. What's the ubuntu constraint?

@willingc Doesn't work on Python 2.6. Hopefully not an issue, but it could be for some people.

@willingc not sure what the constraint is but they also do this with virtualenv.
$ python -m pip install --upgrade pip
/usr/bin/python: cannot import name defaultdict; 'pip' is a package and cannot be directly executed

+1 to what @ncoghlan said above.

It feels like everything in python ecosystem is in transition. Let's not add another thing there, if possible.

@willingc On ubuntu 14.04 (the current LTS), pip has been made into a separate Ubuntu package that has to be installed with the OS package manager. AFAICR this is same for python2 and python3. For python3, venv has also been ripped out by the OS and made into an OS package. Like @nanuxbe points out, expected commands don't work, e.g. also python3 -m venv. And then If you upgrade pip (I _think_ on py2 only), you get the InsecurePlatformWarning message. These are all known and complex issues that must make it very hard and frustrating to work on the pip project, and I thank them for all their hard work. Everyone complains when the tiniest unexpected thing happens, while nobody cheers when things "just work". I can't pretend to know what all the issues are, and the ones that I do know about I don't completely understand either. These are Hard Problems and I have awe and gratitude for those who choose to deal with them.

w.r.t. python -m pip versus pip: I'm changing my vote to "leave things as is".

I checked to make sure and it seems that right now conda will automatically add pip to any new env, 2.6 all the way to 3.5 (I'd expect it does this on all platforms). So the case I mentioned earlier about forgetting to install pip into a conda env can no longer occur (so you can't have the situation of running a pip in the wrong conda env).

The only Windows issue remaining then is that currently you can't upgrade pip with pip, but that's a minor issue because conda upgrade pip works just fine. If we wanted to be super generous to non-conda users on Windows, we could detect pip install -U pip and just give a message that says: "Because of how Windows works, pip itself is locked while running, so you can't upgrade pip like this. Do it like this: python -m pip install -U pip", and that will be perfectly adequate.

I don't think all the pain of yet another highly visible change is worth what is probably going to have marginal benefits in the long run.

w.r.t. pipX.Y: I've never used, nor trusted their use due to all the problems given in the top post. I've no opinion here as it doesn't affect me. I'd like to say "get rid of them" but I've no idea who is affected.

Personally I'm finding the change-fatigue argument somewhat compelling, and this seems like something that should be put off for a while. That said, it's not _really_ possible to slow down change overall, all we can do is plug the dam until it breaks, if there are real problems. So I think the root cause here is that we need a better way of _managing_ change in the broader community, so that we can keep improving at pace, without making everyone feel exhausted with the things to keep up with.

Personally I'm finding the change-fatigue argument somewhat compelling

I agree. I didn't realize other people were fatigued by this (I guess since I pay attention it doesn't fatigue me).

this seems like something that should be put off for a while

Well the length of time for deprecation as written in the original issue was "on the order of years". I think starting to move people towards python -m pip gently in the meantime won't hurt. pip, pipX.Y, etc won't be disappearing immediately.

In the context of "this will change in several years" I wonder if this will still affect change-fatigue.

Should someone, perhaps me, propose as a PEP that "python install" be an alias for "python -m pip"?

Starting with PEP 453 in Python 3.4, if I understand correctly, installing python includes installing pip. So it seems a feasible step for the python executable itself to invoke pip if started like "python install ...".

If it makes anyone feel any better, It's not like everything is perennially evergreen in conda land either, conda on my mac is currently hosed because one of the infrastructure scripts used in constructing an env seems to have lost an executable bit in a recent update ;)

Thanks all for your insights. After reading @cjrh excellent explanation for ubuntu, I'm cool with recovering from fatigue. FWIW, older installs of conda can mess up links and paths to packages as we found at the DjangoCon DjangoGirls. We essentially needed to completely uninstall anaconda and python then reinstall python and pip install the DjangoGirls packages.

I'm wondering if there is a matrix by Python version that spells out which command for each version and os for pip and virtualenv. A cheatsheet or a small script. My apologies if it is already in the docs.

@ajdavis

Starting with PEP 453 in Python 3.4, if I understand correctly, installing python includes installing pip.

Unless you're using a common linux distro python :D

btw python -m pip is 1 character less to type than python install. I'm not sure of any advantage of this, as well as it possibly being semantically confusing - python install uninstall ..., python install -U pip?

@willingc yes unfortunately a LOT of python installers over the years have played pretty dirty with their install process, not cared about cleaning up after themselves, only cared about themselves, left themselves all over the path, etc which makes it extremely frustrating for some newer users. I'd say this is mostly the fault of the installers themselves though.

As far as

I'm wondering if there is a matrix by Python version that spells out which command for each version and os for pip and virtualenv. A cheatsheet or a small script. My apologies if it is already in the docs.

I'm not aware of any major differences? Apart from that windows always needs to upgrade pip with python -m as previously pointed out. That's mostly a cruft of windows, though.

It's also worth noting that I believe a lot of the other current complexity actually relates to the Python 2->3 transition, rather than being inherent in the way pip itself works. Specifically, a system Python 3 stack on Linux and other *nix systems requires different commands (python3, pip3) from any other Python environment (whether that's a Windows install, a virtual environment, or a conda environment).

Since that's just one of many complexities affecting the Python user experience on Linux (distro packaging policies have historically been designed around "assume every machine is running mission critical production services, even if it's actually an individual's laptop"), several of us involved in the Python 3 migration recently started a Linux SIG to better coordinate cross-distro efforts.

One idea I'm now suggesting we consider is the introduction of a user-configurable "py" shorthand to match the behaviour of the Python Launcher for Windows: https://mail.python.org/pipermail/linux-sig/2015-October/000000.html

@lvoz Yeah, much pollution from the past. FWIW, the Django Girls install issue took myself, @honzakral, and @jambonrose a good 30-45 minutes to troubleshoot and resolve. It was far from obvious to 3 experienced developers what was happening.

Thanks to all of you for helping move things forward. You are doing a great job.

-1 to drop pip +1 to drop others

Since pip 10.0 drops support for Py2.6, I just want to poke at this issue once now.

I'd say my opinion on that might change by 2017 or so

Ping @ncoghlan? :)

Move pip/ to _pip/

This has been done, albeit as pip._internal. #4700

We could possibly offer a recipe in the docs to restore pip, pipX, and pipX.Y via shell aliases.

The warning should probably link to a section in the documentation which contains this snippet and a nice wordy explanation as to why this was done.

Another alternative would be to stop requiring that pip execute in the same Python environment that it is installing things into...

I like this idea from @dholth too. This should probably also have something like #4145 in place to be a little safe.

A number of Linux distros no longer ship an unqualified pip by default, since they don't ship python2 by default. So my preference at this point:

Inside an activated virtual environment, I think at least an unqualified pip should continue working, and affect the active virtual environment. It's unambiguous in that situation, so breaking it doesn't avoid any existing user pain. I'm more ambivalent on pipX, and pipX.Y (with X.Y matching the version of Python in the venv), since relying on that makes upgrading to a different version of Python harder than it needs to be, and is completely redundant inside a virtual environment.

Outside an activated virtual environment, I think it would make sense to start more actively discouraging them and recommending the python -m pip. That emphasises pip's position as being essentially a plugin manager for Python runtimes and aligns with what we've had in the standard library docs for a while now (https://docs.python.org/3/installing/#basic-usage).

Edit: noted my ambivalence about the Python-version-qualified names when inside an active virtual environment.

I think recommending different commands for inside a virtual environment and outside a virtual environment is confusing (and a pain, how do we do that on say pypi.org?). Whatever we do here there should be a single set of command(s) as the officially recommended mechanism.

I'm OK with the recommendations being consistently in favour of using python -m pip, the only thing I'm not OK with is actually breaking pip install inside an activated virtualenv - it's not ambiguous, and it works cross-platform (except perhaps when upgrading pip itself on Windows).

I mean, we don't have a mechanism for having commands that only exist inside of a virtual environment. If it works there it's going to work everywhere.

Right, I was thinking in terms of a script entry point that was always installed, but also always failed if you tried to run it outside a virtual environment (i.e. it would be a runtime check, rather than a "no file here" scenario)

-1 on dropping pip. Most users that I've seen typically have a single installation (conda has Python 3.6), and this makes no sense from that perspective. The only problem is the Python 2 issue and that will start to go away relatively soon.

So, we're now in a situation where we don't support Python 2.6. So python -m pip works everywhere and I'd be in favour of using it as the canonical way of running pip in the docs (and yes, there's python2/python3 on Unix, and py on Windows, but we can probably rely on users to work that part out).

For the case of virtual environments, we could probably just install the pip executable when installing in a virtualenv (without --user or --target) and not otherwise, leaving it to the system packages (or ensurepip) to install a pip executable as needed in that situation. People who want to continue using pip can, as a convenience - we wouldn't drop that option, just stop mentioning it in the docs.

One option could be to move the script wrapper to it's own package pip-cli or something like that, and just have that included in the list of packages that virtual environments install by default.

How would we handle cases like pip 10, where different versions of pip-cli are needed to support pip 9 and pip 10 (because pip.main moved)? If the user upgrades pip, we don't have a means to force upgrade pip-cli. So we'd end up with the same breakages that we currently see with OS wrappers.

The option of having pip-cli call pip in a subprocess is not reasonable, IMO. I know it's what we recommend for external tools, but I don't think that adding the overhead of an additional process on what will certainly remain the most common way of invoking pip for a long time, is really acceptable. You're effectively doubling the cost of Python startup, which is already high. I'd rather see some concrete measurements instead of gut feelings, but in the absence of measurements, I'm concerned about the cost of an extra subprocess.

If we do want to go down the pip-cli route, and we can't justify the subprocess approach, then either we need to give pip-cli a dispensation to use pip's internals, or we need to formally expose pip.main as a supported API (with whatever caveats we deem appropriate).

Having pip-cli use pip.__main__.main should be fine.

Or just use runpy: https://docs.python.org/2/library/runpy.html, it's what -m uses anyways.
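
For reference, a minimal sketch of what a hypothetical pip-cli entry point built on runpy could look like (not an actual pip or pip-cli API):

    import runpy

    def main():
        # Roughly equivalent to "python -m pip <args>": run pip's __main__ in-process.
        runpy.run_module("pip", run_name="__main__", alter_sys=True)

    if __name__ == "__main__":
        main()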

+1 on using runpy.

+1 on using runpy.

Ditto. :)

TIL about runpy.

@Juanlu001 we're still supporting Python 2.7. :)

There's a bit more discussion over at #5508 (before @dstufft pointed out it was a duplicate of this issue!)

Personally, I think a big source of confusion noted here is the mismatching pip vs python -m pip.

ISTM this is something that can be checked using sys.executable and PATH. Additionally, pip should not warn/fail when invoked via a path (like bin/pip) which can be checked using sys.argv[0]. The edge-case I see here is: on an OS where sys.argv[0] is something different from what is used to invoke the script. AFAICT, it's not a problem on Windows or MacOS 10.13 (High Sierra).

This should be a good thing to do, regardless of whether we decide to remove the pip* wrappers. Am I missing something here?

I think what you're missing here is that mostly, the pip executables in question are ones shipped by the distro, which pip can't update (for one reason or another - permissions, installed in a location other than where a normal pip install goes, PATH issues around user vs system installs, etc).

Personally, I suspect that this isn't a problem that we (pip) can solve, simply because the wrapper script that's messing things up isn't owned by us. That's why my inclination is to stop shipping the wrappers, and avoid adding to the confusion - make it clearly a case of "to run pip, use python -m pip, if you use a pip command, talk to whoever delivered it, it's not us". My second preference is to just ship an unversioned pip wrapper, and direct people hitting the issue to their distro vendor (we could write a "detect the config" script that we could ask people to run when they hit an issue like this - but by definition we can't ship it with pip because it's diagnosing issues where pip won't run correctly...) My third preference is the same, but we continue shipping versioned wrappers on Linux only (I believe they promote the bad practice of having multiple versions of Python on PATH, but I don't see how we're ever going to get Linux users to stop doing that).

But to answer your "regardless of whether we decide..." point, yes, writing a script that tries to diagnose the user's environment and report on misconfigurations and problems would be really useful. It's just that shipping it with pip would be of limited use, because often people wouldn't be able to run it in precisely the cases they need it. We could link to a standalone script from the docs, though.

See https://github.com/pypa/python-packaging-user-guide/issues/396 for discussion of turning some recommended troubleshooting steps into an executable recipe.

That wrapper script is sort of owned by us. They're just using the one that we're generating, it'll just take a long time for a change to percolate out.

What I was getting at is that they don't install it where we expect them to (AFAIK, that's why sudo pip install -U pip breaks with the wrapper still looking for pip.main). There's also extra logic that we may need to put into the wrapper to handle disentangling systems that have pip installed system-wide as well as in user-site, and that is arguably a bug in our current scripts - although as far as I know the situation is muddled enough that I've never seen a good example of how to reproducibly get into that state...

@ncoghlan Thanks for that cross-reference to that issue. :)

@pfmoore That makes sense and makes me feel that separating the wrappers from _pip_ the package would be useful; whether it's complete removal or moving it out to a _pip-cli_ package.

I tried to come up with something that satisfies (nearly) all the concerns raised in this issue with the status-quo. I think a good position to end up in would be:

  • _pip_ does not provide any CLI wrappers; only supports python -m pip
  • _pip-cli_ mimics python -m pip in-process as a pip wrapper
  • Include _pip-cli_ in _virtualenv_'s installed-by-default packages

Reasoning for choices here:

  • Ease-of-use inside a virtualenv; it's still pip.
  • Windows upgrade UX improves

    • pip install --upgrade pip will work on Windows :tada:

    • would still need to do python -m pip install --upgrade pip-cli though

    • slightly less of a problem since updates to that might be much less frequent

  • Existing documentation says "pip install ..." which will still work in venvs.
  • Not providing pipX and pipX.Y

    • they're redundant in a virtualenv + issues listed in OP; they don't scale well

    • kinda sorta avoid conflicting with the distro's executables to an extent (when the distro doesn't provide unqualified variants)

Inputs and comments welcome on the above.


Other Notes:

  • I don't see any compelling arguments for pipX and pipX.Y. They should probably get deprecated/removed.
  • Related: #4625 (Overwriting existing wrappers)

    • relevant here: this would make _pip-cli_ installation behave nicer with the distro-installed pips

  • Having the "sanity check" checking equivalence of python -m pip and pip, shipped as a part of _pip-cli_ sounds tempting to me.

@pradyunsg Overall, this looks like a good plan to me.

+1 for @pradyunsg's plan from me.

The one case of pipX I'm aware of that does see use is pip3 install --user ... on Linux systems, but changing that to python3 -m pip install --user ... instead isn't too much of a burden.

If distros decided they wanted to add the wrapper script back to their python2-pip and python3-pip packages for backwards compatibility reasons, I also think we'd be OK with that.

I think the strategy would be pretty straightforward for pipX, pipX.Y:

  1. Release n

    • Deprecate pipX, pipX.Y; shows a warning about removal when they are used.

  2. Release n+1 (or n+2)?

    • Remove pipX, pipX.Y from _pip_.

It's straightforward enough that I think we can let n here be 18.0; scheduled sometime next month. The only thing here would be: how long do we run the deprecation? I'm on the fence on that.


For pip, it gets more interesting. I'm pretty sure we don't want this change to be overly disruptive for user workflows _and_ want to provide a smooth way to transition to _pip-cli_. One thing I think we should do here is add a special case so that upgrading _pip_'s pip doesn't overwrite _pip-cli_'s pip during the transition but vice-versa works.

Ideally, we'd have some sort of "beta" period for using _pip-cli_ where we could ask users to test out using _pip-cli_. That'd help iron out issues before we deprecate pip in _pip_.

  1. Release n

    • Deprecate pip of _pip_.

    • Add _pip-cli_ to _virtualenv_; ensuring it gets installed after _pip_.

    • Suggest users to switch to _pip-cli_.

  2. Release n+2

    • Remove pip from _pip_.

I think the 2 release cycles should be enough time to gather user feedback on this change and react appropriately. I do think the release before the one we remove pip in should say "the pip wrapper will be dropped in the next release" instead of "in a future release" like they usually do.


Now, I'm not sure if and where we should include some variant of the sanity-check/debugging information?

  • in _pip_'s pip? In Release n + 1?
  • in _pip-cli_'s pip; in the run up to Release n?

@pypa/pip-committers Let's deprecate pipX and pipX.Y in pip 18.1?

Is our plan just to tell people to use python -m pip then?

Yes. That's what we'd have.

It's a little bit trickier than that, since "python" may not refer to the right thing.

That means any deprecation warning will need some heuristics based on sys.executable, shutil.which and the running platform to decide what a suitable replacement would be.
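
One possible shape for that heuristic, sketched purely for illustration (the function name is made up): prefer a name on PATH that resolves back to the running interpreter, and fall back to the full interpreter path otherwise:

    import os
    import shutil
    import sys

    def suggested_invocation():
        # Prefer the Windows launcher when it's available.
        if os.name == "nt" and shutil.which("py"):
            return "py -%d.%d -m pip" % sys.version_info[:2]
        # Otherwise prefer a version-qualified name that is actually this interpreter.
        for name in ("python%d.%d" % sys.version_info[:2],
                     "python%d" % sys.version_info[0],
                     "python"):
            found = shutil.which(name)
            if found and os.path.realpath(found) == os.path.realpath(sys.executable):
                return "%s -m pip" % name
        # Fall back to the full path of the running interpreter.
        return "%s -m pip" % sys.executable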
