As noted in https://github.com/pypa/pipenv/issues/1137, pipenv doesn't currently check for requirement conflicts between `[packages]` and `[dev-packages]`, which can cause problems when attempting to generate a flat `requirements.txt` that covers both, as well as causing general weirdness when dependencies conflict.
The way `pipenv install` works (running the installation and then updating the lock file) means it is also relatively easy for the local environment to get out of sync with the lock file: if the installation request covers an already installed package, then it won't be updated locally, but the lock file will be updated to the latest version available from any configured package indices.
This issue doesn't cover an individual feature request. Instead, it establishes the goal of helping to ensure consistency between the lock file and the local environment by structuring installation commands to modify the lock file first, and then use the updated lock file to drive the actual installation step. (We don't have any kind of target ETA for how long this work will take - writing it down is just intended to help ensure we're all heading in the same direction, and are all comfortable with that direction)
A key aspect of this is going to be clarifying the division of responsibilities between `pipenv install`, `pipenv uninstall`, `pipenv lock`, and `pipenv update` (and potentially adding one or more new subcommands if deemed necessary).
Proposed substeps:
- Add a `pipenv sync` subcommand that ensures that the current environment matches the lock file (akin to `pip-sync` in `pip-tools`). (Implemented by @kennethreitz for 10.0.0, with `pipenv sync` ensuring versions of locked packages match the lock file, while `pipenv clean` removes packages that aren't in the lock file. I like that change relative to `pip-sync`, as it means that including an implicit sync in another command won't unexpectedly delete anything from the virtual environment)
- Keep the current behaviour for `pipenv install` with no arguments, but switch to recommending the use of `pipenv sync` when setting up a fresh environment (that way `pipenv install` is used mainly for changing the installed components)
- Update `pipenv lock` to always ensure that `[packages]` and `[dev-packages]` are consistent with each other (resolution for #1137)
- Redefine `pipenv update` to be equivalent to `pipenv lock && pipenv sync` (`pipenv update` is instead being removed entirely - its old behaviour was actually comparable to what is now `pipenv sync && pipenv clean`)
- Add a `pipenv lock --keep-outdated` option that still generates a fresh `Pipfile.lock` from `Pipfile`, but minimises the changes made to only those needed to satisfy any changes made to `Pipfile` (whether that's package additions, removals, or changes to version constraints)
- Redefine `pipenv install <packages>` to be equivalent to "add or update entries in Pipfile" followed by `pipenv lock && pipenv sync` (implemented in #1486)
- Add a `--keep-outdated` option to `pipenv install` that it passes through to the `pipenv lock` operation
- Add a `pipenv install --selective-upgrade <packages>` feature that's semantically equivalent to "remove those package entries from Pipfile (if present)", run `pipenv lock --keep-outdated`, then run `pipenv install --keep-outdated <packages>` (this is the final step that delivers support for https://github.com/pypa/pipenv/issues/966). If just a package name is given, then the existing version constraint in `Pipfile` is used, otherwise the given version constraint overwrites the existing one. The effect of this on the current environment should be the same as `pip install --upgrade --upgrade-strategy=only-if-needed <packages>` in pip 9+ (except that `Pipfile.lock` will also be updated).

If anyone wants to pick up one of these substeps, make a comment below to say you're working on it (to attempt to minimise duplication of effort), then file a PR linking back to this issue. (Steps listed earlier will need to be completed before later steps are practical)
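To make the proposed lock-first flow concrete, here's a sketch of what the redefined `pipenv install <packages>` bullet above amounts to in terms of the other commands (illustrative only; `requests` is just an example package):

```
# Proposed behaviour of `pipenv install requests` (sketch):
# 1. add or update the "requests" entry in Pipfile
# 2. re-resolve the lock file from the updated Pipfile
$ pipenv lock
# 3. make the local environment match the updated lock file
$ pipenv sync
```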
Open questions:
Resolved questions (at least for the immediate future):
- `pipenv lock` will continue to default to upgrading everything, with `pipenv lock --keep-outdated` to request a minimal update that only adjusts the lock file to account for `Pipfile` changes (additions, removals, and changes to version constraints)
- `pipenv install --skip-lock` will continue to work as it does today (even though it means the lock file and the local environment can get out of sync: use `pipenv sync && pipenv clean` to resync them)
- `pipenv install` and `pipenv install <package>` will continue to imply a full `pipenv lock` by default, with `pipenv install --keep-outdated` needed to request only the minimal changes required to satisfy the installation request
- `pipenv install <package>` will continue to retain the existing version constraint in `Pipfile` if none is given on the command line, even for the new `--selective-upgrade` option
- `pipenv uninstall <package>` will just remove the specified package[s], and hence may leave no-longer-needed dependencies in the local environment. Running `pipenv lock && pipenv sync && pipenv clean` will clear them out.

Note: the original proposal here was just to ensure that `[dev-packages]` and `[packages]` were kept in sync when generating the lock file, and the first few posts reflect that. The current proposal instead covers the lock file driven symmetric update proposal first mentioned in https://github.com/pypa/pipenv/issues/1255#issuecomment-354585775
Noting a potential implementation challenge here: I'm not sure that `pip-tools` currently supports constraints files, in which case we'd need to add that support first in order to gain the full benefit of this approach. However, install-time conflict detection should be possible regardless, since pip is used to handle the actual installs.
You don't need to wait until install time per se. You can do this when you resolve the full transitive dependencies for the lock file. You can mostly do this with `LocalRequirementsRepository`, per @vphilippon's suggestion for not updating unrelated dependencies in https://github.com/pypa/pipenv/issues/966#issuecomment-339707418. It looks sort of like https://github.com/taion/pipf/blob/1feee35a2e4480bc7e5b53bfab17587d37bdf9dd/pipf/pipfile.py#L175-L185.
See also https://github.com/pypa/pipfile/issues/100. It is still a root issue that `Pipfile.lock` can even represent multiple incompatible sets of dependencies for the different groups.
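For reference, `pip-compile` already exposes this pin-reuse behaviour (which pip-tools implements via `LocalRequirementsRepository`) from the command line, which gives a feel for the semantics under discussion:

```
$ pip-compile                          # reuses pins from an existing requirements.txt
$ pip-compile --upgrade-package flask  # re-pins only flask (and whatever it requires)
$ pip-compile --upgrade                # re-pins everything to the latest versions
```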
@taion We're not going to switch `Pipfile.lock` to a flat representation, since we want to support tools relocking the deployment dependencies without even looking at the development dependencies. Yes, that does mean the `[dev-packages]` section may drift out of date, but that's the purpose of this issue: ensuring that when the `[dev-packages]` section does get updated, it will be resync'ed with the deployment packages section.
If we can detect any conflicts at lock time instead of at install time, that will be excellent, though.
In practice from a user perspective that's just a special case of https://github.com/pypa/pipenv/issues/966 and other related (closed) issues.
Like worst case you just cache the resolved transitive dependencies in the lockfile. Yarn and npm already do that anyway. It'd let you update all your prod deps without hitting PyPI for the dev deps.
Unless I'm missing something there's no active benefit of letting things drift out of sync. As someone who's spent a lot of time with these package management patterns, most of these deviations from how existing tools work just seem to cause pain.
I'll also add that npm does in fact support updating only production dependencies and not development ones, and it does in fact use a flat lockfile (with cached resolutions).
Interestingly enough, Yarn doesn't for its bulk "update dependencies" subcommand (though the interactive version does split prod and dev deps), and it doesn't look like anybody's complained about it, but that can probably be chalked up to the npm-alikes using caret ranges for dependencies by default, plus people generally being good enough about following SemVer to seldom cause users problems from bumping dev deps. The Python ecosystem, not so much.
@taion An explicit `pipenv lock` already fully resolves both `[packages]` and `[dev-packages]`, so we shouldn't need to worry about configuration drift in that case (and that's the case where I think it's most desirable to be detecting dependency conflicts).
If there's a symmetric solution that also allows both `pipenv install package` and `pipenv install --dev package` to detect (and resolve) inconsistencies, then I agree that would be a good way to go. However, I'd also be OK with an asymmetric solution, if that was easier to implement (hence the framing of this proposal).
For the symmetric approach, I think one way of framing & implementing that would be to pursue the approach that @vphilippon described in https://github.com/pypa/pipenv/issues/966#issuecomment-339791934, but structure it as follows:

- add a `pipenv lock --keep-outdated` option to request a minimal lockfile update rather than the default comprehensive one
- redefine `pipenv install package` as "edit Pipfile -> `pipenv lock --keep-outdated` -> `pipenv install`"

(Note: for security management reasons, I really want to keep the default behaviour of `pipenv lock` as upgrading everything - however, I do think a feature like `--keep-outdated` has a place when other mechanisms are in place to ensure timely responses to reported security issues in dependencies)
The behavior you're describing for `pipenv install` is mostly what #966 gets at, but it's not exactly the same. Imagine you have `X = "*"`. Doing `pipenv install X` should probably still upgrade `X` in the lock file, even though there's no change to `Pipfile`. Pipenv needs to keep track of what _actually_ gets updated.
In other words you can't fully split out the "install" from the "lock". See e.g. https://github.com/taion/pipf/blob/1feee35a2e4480bc7e5b53bfab17587d37bdf9dd/pipf/pipfile.py#L150-L155 – when regenerating the lockfile, I explicitly pass in the packages that are getting installed to discard the old constraints for those.
The complaint anyway isn't that something like an explicit `pipenv lock` updates all dependencies to the latest available; it's that `pipenv install Y` touches the version for an unrelated `X`. In other words, people shouldn't really need to explicitly run `pipenv lock --keep-outdated`.
Yarn's handling here is a pretty good example in terms of CLI. Yarn has `yarn lockfile`, which generates a lockfile without changing anything, and is explicitly marked as "[not] necessary", and it has `yarn upgrade`, which upgrades everything. If for some reason you do manually edit your dependencies, the next step is to just run `yarn install`, which both updates your lockfile (minimally) and installs the necessary packages.
I've never had to explicitly generate a lockfile outside of installing my updated dependencies, and I don't see why I would want to do so. It's not a typical action with a locking package manager, since almost any action you take to modify your installed dependencies will update the lockfile anyway. The set of user-facing interactions instead looks more like:

1. install my project's dependencies
2. add or update a specific dependency
3. upgrade all of my dependencies

(1) and (2) should always apply minimal changes to my lockfile (or generate it if necessary), while (3) should essentially disregard the existing lockfile. But for just generating a lockfile, (1) is sufficient (since it's going to be run from a dev environment where everything is installed anyway).
P.S. Yarn splits out `yarn add` (add a dependency) from `yarn install` (install all dependencies). It was sort of confusing and annoying at first, but the more I've used it, the more I like that distinction. Having a single `install` command that both adds specific dependencies _and_ installs all the project dependencies is sort of weird, if you think about it. Those two actions aren't the same thing.
The first few paragraphs in https://lwn.net/Articles/711906/ do a pretty good job of describing my mindset here: "moving target" should be the default security model for all projects, and opting out of that and into the "hardened bunker" approach should be a deliberate design decision taken only after considering the trade-offs.
Hence the choice of option name as well: `--keep-outdated` isn't just inspired by `pip list --outdated`, it's also deliberately chosen to look dubious in a command line (since keeping outdated dependencies around is inherently dangerous from a security perspective).
For the specific case of `pipenv install X`, it should not implicitly do an upgrade, because `pip install X` doesn't implicitly do upgrades.
`pipenv` is currently inconsistent in regards to that latter point though, since it does the following:

- `pip install package` (no implicit upgrade)
- `pipenv lock` (implicit upgrades)

Thus the idea of ensuring consistency by always using the lock file to drive the installation (unless `--skip-lock` is used), and offering `pipenv lock --keep-outdated` to match pip's `only-if-needed` upgrade strategy.
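For comparison, pip's two upgrade strategies (the behaviour `--keep-outdated` is intended to mirror) look like this:

```
# upgrade flask, touching its dependencies only if the new version requires it:
$ pip install --upgrade --upgrade-strategy=only-if-needed flask
# upgrade flask and eagerly upgrade all of its dependencies as well:
$ pip install --upgrade --upgrade-strategy=eager flask
```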
I think we're in agreement on the substantive points on the desirability of upgrading dependencies.
And, sure, `pipenv install -U <package>` to update a single package would be better to be parallel with pip.
I am saying, though, in a normal workflow, users have no reason to run `pipenv lock`, with or without `--keep-outdated` or whatever. The sets of commands a user might run for various scenarios are:

- starting a new project: `pipenv init`
- setting up an existing project: `pipenv install`
- adding a dependency: `pipenv install <package>` (and note again that Yarn calls this `add`, which is a good idea)
- after manually editing `Pipfile`: `pipenv install`
- upgrading all dependencies: `pipenv update`
All the above actions keep `Pipfile.lock` in sync.
Or, to put it another way, I've never had to run `npm shrinkwrap` (with npm 5), and I've never had to run `yarn lockfile`.
(And the reason it's not so useful to just generate a lockfile with updated dependencies is because the very next step in a dev workflow is to reinstall the updated dependencies, then run the test suite.)
FWIW, I run a tool which creates dependency update PRs automatically (Dependabot), including for Pipfiles, and I would love a `pipenv lock --keep-outdated` option. At the moment we edit Pipfiles to lock down every dependency except the one we're updating when we create a PR.
In my experience, PRs to update a single dependency are much more likely to be merged than ones that update many at once. Generating them automatically helps ensure the moving target keeps moving.
Right, and PR generation is where I think `pipenv lock --keep-outdated` will be useful: it assumes the "pipenv install && run the tests" step will happen remotely on the CI server, rather than on the machine where the PR was generated. Similarly, `pipenv lock` is for the case where you're aiming to generate a batch update PR that you're going to test in CI, without worrying overly much about applying the change locally.
In that vein, if `pipenv install <requirement>` were to become equivalent to "update Pipfile + `pipenv lock --keep-outdated` + `pipenv install`", then it would make sense for `pipenv update` to become equivalent to `pipenv lock && pipenv install`.
While `yarn` makes a distinction between `yarn install` and `yarn add`, if we were to ever add such a distinction to `pipenv`, I'd be more inclined to borrow from the `pip-tools` terminology and add a `pipenv sync` command to say "make the current environment match the lock file". However, doing that's out of scope for this proposal: this should focus specifically on eliminating the opportunity for discrepancies to arise between deployment dependencies and development dependencies.
@greysteil The equivalent for the Greenkeeper/&c. use case with npm is just the following, right?

```
$ npm install --package-lock-only <package>@latest
```

In fact does npm5 even expose a way to _just_ create a `package-lock.json` outside of `npm install`?
@ncoghlan Greenkeeper-like services are very cool and worth using, but it'd be odd to consider that a core use case (and as noted above, even that case of "update a single dependency and the lockfile, but don't install things" admits a cleaner expression).
Oops, I hit enter too soon. Outside of Greenkeeper-style workflows, it's almost vanishingly uncommon to _just_ bump my dependencies without syncing them locally. It only makes sense if you somehow don't have a local copy of the deps installed. And again PR-only services are an exception, but only that.
The description here is "Python Development Workflow for Humans", and as such it's worth keeping in mind what actual development workflows look like. And we have a pretty good existence proof that explicitly running the "generate lockfile" operation is almost never necessary, so it'd be worthwhile to not think too much in terms of those.
The specific issue here, BTW, is that Pipenv uses `*` ranges by default. If I do `pipenv install flask`, I get `flask = "*"` in my `Pipfile`. That means that, if I want to upgrade my installed version of Flask in a clean way, that sort of wants to be a lockfile-only change.
An upgrade operation could in principle switch a `*` requirement to a `>=` range or something in `Pipfile`, but that creates a weird asymmetry between "upgrade" and "initial install". Arguably the best resolution here might be to use `>=` ranges for initial installs anyway. At least, I'm not sure I can see any good reason for using `*` ranges there.
@taion - yep, agreed, not expecting you guys to keep us / other dependency update tools in mind too much, although very grateful if you do! I only mentioned it because I know I would have used `pipenv lock --keep-outdated` manually if I'd been updating dependencies at my previous workplace (we had a one PR per dependency update policy).
For reference, npm added a `--package-lock-only` option in the latest release (changelog).
@greysteil What do you do if someone has a `*` range in `Pipfile`, though? Like in my `flask = "*"` example above. As-is, given that this is how Pipenv adds dependencies by default, you need to know which requirement to un-pin when rebuilding the lockfile, no?
_I don't understand the working of Pipenv in anywhere near the detail that others on this thread do, and don't want to take it off-topic. If the below is useful, great. If it's not, sorry!_
We use some nasty hax:

- we check the update is resolvable (ideally we'd use `pip-tools` for this, but don't yet. Turns out that's not so bad, as the next step will error if it's unresolvable)
- to generate a `Pipfile.lock` that only updates that (top-level) dependency, we:
  - edit the `Pipfile` to lock the other top-level dependencies to the versions in the existing lockfile
  - run `pipenv lock` to generate a new lockfile
  - restore the original `Pipfile`

End result is that the user gets a PR with their `Pipfile` unchanged (if the requirement was a `*`) but their `Pipfile.lock` updated to use the latest version of that dependency.
As I understand it, `pipenv lock --keep-outdated` would allow me to use the following flow instead:

- update the version requirement for the target dependency in `Pipfile` (where needed)
- run `pipenv lock --keep-outdated` to generate a new lockfile

So, that would be a bunch better. Ideal would be to be able to run `pipenv lock --keep-outdated <some_package_name>`, but beggars can't be choosers! :octocat:
Suppose my Pipfile has:

```
flask = "*"
numpy = "*"
```

`flask` transitively depends on `werkzeug`. Suppose there are upgrades available for `flask` and `numpy`, and that I want to upgrade my version of `flask`, but not upgrade my version of `numpy`.
If I weren't using Pipenv, I'd just run:

```
$ pip install -U flask
$ pip freeze > requirements.txt
```

If I were using pip-tools, I'd run:

```
$ pip-compile -P flask
$ pip-sync  # Or: pip install -r requirements.txt
```
Well, just doing some hypothetical `pipenv lock --keep-outdated` wouldn't do anything. On the other hand, fully rebuilding the lockfile would upgrade NumPy as well.
But unless I'm misunderstanding, the strategy you describe above would only update the version of `flask` in the lockfile, but not the version of `werkzeug`. This might then be an invalid set of dependencies, if the new version of `flask` also _requires_ a new version of `werkzeug`.
As such, for this case, which I think is a very common use case for humans upgrading dependencies, there should be some way to upgrade a single package and its transitive dependencies (if needed), but not anything else.
The problem here specifically is the `*` dependencies. The way npm handles this is to start with something like:

```
flask = ">=0.12.1"
```

And then on requesting an upgrade, bump that to:

```
flask = ">=0.12.2"
```

In which case the strategy above of running something like a `pipenv lock --keep-outdated` would do the right thing. But if you're using a `*` range, then I don't see a real way to do this without some special tooling like `pipenv install -U flask`, that then drops `flask` specifically as a suggested pin when rebuilding the lockfile.
Sorry, I wasn't clear enough about step (iii) above. We lock all of the top-level dependencies to the version in the lockfile, but leave the sub-dependencies unlocked.
@greysteil I see – so this can potentially upgrade transitive dependencies of other direct dependencies?
Yep - theory is that if your top-level dependencies are specifying their sub-dependencies properly (i.e., pessimistically) then updating your sub-dependencies shouldn’t be dangerous.
(I’d rather the behaviour was closer to Ruby’s Bundler, where only sub-dependencies of the dependency being updated can also be updated, but strong-arming pipenv to do that didn’t seem worth the hackery.)
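For anyone unfamiliar with the Bundler behaviour referenced here: `bundle update` with a gem name re-resolves only that gem and its sub-dependencies, leaving the rest of `Gemfile.lock` untouched (see the conservative-updating docs linked below):

```
# only rack and its sub-dependencies may change in Gemfile.lock:
$ bundle update rack
# re-resolve everything:
$ bundle update
```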
@taion By design, `pipenv` focuses almost entirely on the "moving target" security model I described in my LCA talk. Thus, the two intended usage models are as follows:

- Interactive batch updates: periodically update the whole environment to the latest versions permitted by `Pipfile`
- Automated selective updates: an update management service proposes selective dependency updates for human review
Now, there are currently CLI design & implementation problems affecting both of those usage models - for interactive batch updates, there are ways for the lock file and the local environment to get out of sync, and for the automated selective updates, there are challenges in implementing the tools that do the selective updates, as well as in enabling the related interactive selective operations to add new dependencies and modify the constraints on existing ones.
But that core design philosophy of "Default to running the latest version of everything unless explicitly told otherwise in `Pipfile`" isn't going to change - we just want to add enough selective update support to better handle the case where most lock file updates are submitted by an update management service, rather than being submitted by humans.
Awesome to read that automated tools are a use case you're thinking about @ncoghlan. 🎉
If you ever want feedback on what Dependabot / others would like/need then let me know. Python's not my home language, so I've not been able to contribute to pip/pipenv yet, but I'd love to help in any way I can.
@ncoghlan I am totally in agreement with you here, and I think the approach you described makes sense. Using a `LocalRequirementsRepository` isn't very hard to implement from a code perspective, and this is in line with my thinking about the project as well. This discussion has been highly productive.
@techalchemy OK, I'll revise the initial post to cover the symmetric proposal (i.e. handling installation of new packages via `pipenv lock --keep-outdated` + `pipenv install --ignore-pipfile`, rather than running the installation first the way we do now).
That will still leave us with several open questions (like how `--skip-lock` should work in that model), but it should be enough to start breaking out individual feature requests (like adding `--keep-outdated` to `pipenv lock` for requesting the only-if-needed upgrade strategy).
OK, I've updated the initial post with some proposed substeps that will allow us to reach a state where it's genuinely difficult to get the lock file and the local environment out of sync.
Of the proposed substeps, I think the `pipenv sync` subcommand and the `pipenv lock --keep-outdated` option are already clear enough for interested folks to look at implementing them, but the idea of redefining the semantics of all of the other operations that may install or update packages in terms of Pipfile edits and the behaviour of `pipenv lock` and `pipenv sync` likely requires further discussion.
Looks great. On the open questions, I totally agree with the proposed behaviour for `pipenv install --selective-upgrade <package>` (i.e., do an upgrade that honours the existing version constraint, unless given a new one). That's behaviour you'd often want when updating a single package - I can imagine people running `pipenv install --selective-upgrade django` where you've got a `< 2.0` constraint that you want honoured.
Worth clarifying some of the copy above – the proposed `pipenv install --selective-upgrade` should be roughly identical in effect to a standard `pip install -U`.
And, again, in practice, people have thought about this a lot in building other package managers. Perhaps in a vacuum, the workflows you describe in https://github.com/pypa/pipenv/issues/1255#issuecomment-355435921 make sense, but, like, is the goal here to build "Python Development Workflow for Humans", or is it to experiment with new development workflows?
Pipenv is a really cool project that makes a lot of things better, but I think there's a pretty fundamental confusion in aims here. Locking package managers are pretty much standard these days, and it's great that Python will have a blessed one, but I don't think conflating "exploring a new theoretical package management workflow" and "building a package management workflow that behaves the way people expect" is a positive.
Again, note e.g. http://bundler.io/v1.16/man/bundle-install.1.html#CONSERVATIVE-UPDATING and many, many others.
What would change my mind here is if you can attest that you've worked extensively with some of these existing tools, and found the dev workflow lacking. I can tell you, though, that as a human, seeing Pipenv apply these unusual defaults, regardless of the reasons, does not make for pleasant DX.
To make that litmus test more explicit:

- Have you used these tools and noticed that `npm i`, `yarn add`, and `bundle update <gem>`/`bundle add` don't touch all your other dependencies?
- Did you find it a problem that upgrading everything requires an explicit `npm up`, `yarn upgrade`, or `bundle update`?

If not, then essentially the goal of the defaults here seems to be proselytization for a novel workflow, rather than building a tool that will work as users expect.
Otherwise this seems like an exercise in deliberately selecting unexpected defaults that on net are just going to frustrate users – not what one expects from a package that describes itself as "for humans".
@taion Yes, it's deliberate that we're actively pushing folks towards implicit batch updates of all of their dependencies, and yes, I have extensive experience with a lot of different package managers. In my experience, the consequences of the conventional defaults are that people regularly run antiquated versions of the components they use, never check them for CVEs, only update them when another component forces an upgrade, and don't even realise that that approach is highly problematic.
So for `pipenv`, we're deliberately taking our cues from mobile and modern desktop operating systems (i.e. defaulting to auto-updates that require deliberate action to avoid), rather than following other package managers and traditional server operating systems. It's not that we don't know how those work - it's that we think the typical outcomes of the traditional default behaviours are bad, and we want to try to avoid them.
Adding `--keep-outdated` (and a corresponding `PIPENV_KEEP_OUTDATED=1` environment variable) then allows more experienced developers to explicitly say "I know what I'm doing, and you can trust me to manage my dependencies responsibly".
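Assuming the option and environment variable land as described above, opting out of the batch-update default would look something like this (a sketch of the proposed interface, with `requests` as a stand-in package, not yet-shipped behaviour):

```
# one-off minimal update when adding a package:
$ pipenv install --keep-outdated requests
# opt out persistently for a shell session:
$ export PIPENV_KEEP_OUTDATED=1
$ pipenv install requests
```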
I've removed the `--selective-upgrade` open question, and rolled those details into the description of that proposed feature.
However, I've added a new open question related to `pipenv uninstall`. While I'm prepared to defend having `pipenv install` imply `pipenv update` by default (as per the above discussion with @taion), having `pipenv uninstall` do that seems significantly less defensible to me - having an operation to remove a dependency potentially add new transitive dependencies due to upgrades in other components would be very strange.
On the open questions, I can't see any use for `pipenv install --skip-lock` from the world of automated dependency update tools. If there's no human use case (I can't think of one) then I think it's safe to deem it redundant.
np uses `--no-package-lock` (and the Yarn equivalent). The idea there is to run your tests with all your npm dependencies unlocked to the latest compatible ranges, explicitly ignoring the lockfile.

That specific use case is possibly less relevant given the existence of tox (and the difficulty of building anything like np for Python). Even so, for something like a library, it sort of makes sense to have both a locked-down set of dependencies, so that people jumping in don't have to deal with a broken build environment, and to allow people to test against the latest compatible dependencies before e.g. a release.
Aye, I can definitely see the testing use case, but the part that's less clear to me is the benefit of leaving the lockfile unmodified in that situation (particularly since `pipenv update` doesn't currently offer that option):

- if the test run fails, you're not going to `git push`, so the lockfile modifications won't appear anywhere else anyway

There's also `pip install --upgrade <whatever>` to bypass the locking mechanism and the declared Pipfile constraints entirely.
Right now, the option makes sense, since it's the closest equivalent `pipenv` currently offers to `pipenv install --keep-outdated` (whereby you can add something to `Pipfile` without auto-updating everything else).

For now, I'll leave it as an open question - the right answer will hopefully become clearer once `pipenv lock --keep-outdated` and `pipenv install --keep-outdated` exist.
Are the proposed changes still expected to fix the discrepancies between `[packages]` and `[dev-packages]`? In particular, I was running into #1220, which refers to this issue, but I can't see how the proposed changes are going to fix that inconsistency. Or is the dev/non-dev inconsistency different from the one described in #1220, and should that issue be separately investigated?
@matthijskooijman It is not explicitly stated in the top post, but https://github.com/pypa/pipenv/issues/1255#issuecomment-354585775 (which is mentioned in the post as current) does state that there either needs to be a way to "detect (and resolve) inconsistencies" between `packages` and `dev-packages`, or an alternative is needed. This to me implies that the inconsistency will be resolved.
The intent is for that inconsistency to get resolved at the `pipenv lock` step (`pipenv sync` will then inherit the self-consistent lock file).
(Note: this wasn't clear previously, so I've added a separate bullet point calling that step out)
I just noticed that pipenv does not regenerate the lockfile when you change the underlying Python implementation between CPython and PyPy:

- run `pipenv install --python pypy`
- switch implementations and run `pipenv install` again
- the `host-environment-markers` section will still contain `"platform_python_implementation": "PyPy"`
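One quick way to see the stale marker (illustrative; the exact layout of `Pipfile.lock` may vary between pipenv versions):

```
$ grep platform_python_implementation Pipfile.lock
        "platform_python_implementation": "PyPy",
```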
@joshfriend That's actually intentional, but see https://github.com/pypa/pipenv/issues/857#issuecomment-368223561 for some comments on why it's currently problematic.
> Keep the current behaviour for pipenv install with no arguments, but switch to recommending the use of pipenv sync when setting up a fresh environment (that way pipenv install is used mainly for changing the installed components)
I think now that `install` calls `sync`, we can still allow users to use `install` to bootstrap new projects.
> update pipenv lock to always ensure that [packages] and [dev-packages] are consistent with each other (resolution for #1137)
i just took a stab at this https://github.com/pypa/pipenv/commit/fdebdc3c423dce83c13d4e384acb703291109f1e
it has implementation details for sure (sub deps that exist for a develop package but not a default package won't get removed, for example), but it's a definite improvement.
fixed the implementation details. https://github.com/pypa/pipenv/commit/daa56e1290b259af8e2410b2f4d799ca71601a37
@kennethreitz Good point regarding `pipenv install` reading fine for the fresh install case: it only reads strangely in the resyncing case, where it may end up doing upgrades and downgrades to make versions match. I've marked the bullet point about recommending `pipenv sync` as complete, and struck through the bits we decided not to change.

With your implementation of the #1137 changes, that just leaves the `--keep-outdated` and `--selective-upgrade` enhancements that together implement #966.
Given the chosen implementation approach for #1486, the `--selective-upgrade` feature is going to need to pass `--upgrade-strategy=only-if-needed` to the underlying `pip install` call, in addition to setting `keep_outdated` when updating an out of date lock file.
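Putting the pieces together, the intended `--selective-upgrade` semantics from the top post come out roughly as follows (a sketch of the equivalence, using `requests` as a stand-in package, not the literal implementation):

```
$ pipenv install --selective-upgrade requests
# roughly: drop any existing "requests" pin from the resolution, then:
$ pipenv lock --keep-outdated
$ pipenv install --keep-outdated requests
# with --upgrade-strategy=only-if-needed passed to the underlying pip install
```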
Working on `--keep-outdated` now :)
`$ pipenv lock --keep-outdated` now preserves all version numbers from the previous lockfile, unless the version numbers are pinned.
Also added an additional `Pipfile` configuration option (currently we only have `allow_prereleases`): `keep_outdated`:

```
[pipenv]
keep_outdated = true
```
`--selective-upgrade` is done.
Huzzah! Closing this, as any further limitations that aren't already covered by #857 can be reported as new issues after 10.1 is released :)
working on a bug, but will be resolved shortly
fixed