Pipenv: recommended method of use in crontabs?

Created on 28 Jan 2018 · 43 comments · Source: pypa/pipenv

This may just be a documentation issue, but what's the recommended way of using console scripts provided by a package that has been installed into a virtualenv managed by pipenv in a crontab?

With traditional virtualenvs, you'd just use the absolute path to bin/{whatever}, but with opaque virtualenvs I'm guessing there is or should be a different way.

Most helpful comment

That's... less elegant than I'd hope from the latest and greatest in the world of python packages.

All 43 comments

I think pipenv run {whatever} is what you want.

How does that know which project's environment to use?

Run it from the directory with the Pipfile in it.

Sorry to be dense, but how do I do that from a crontab?

@cjw296 cd some/project/dir && pipenv run script should be sufficient.

That's... less elegant than I'd hope from the latest and greatest in the world of python packages.

@cjw296 This isn't an implementation we've been targeting. We're happy to review PRs if you have better ideas on how you'd like it to work.

Are you sure about the "happy" part? This looks like a great plan: https://github.com/pypa/pipenv/issues/1049#issuecomment-357103428
@kennethreitz 's https://github.com/pypa/pipenv/issues/1049#issuecomment-358332237 was both disappointing and unhelpful in its vagueness.

@cjw296 It's not clear what you're referring to. Jace's proposal is around changing the default choice, not the functionality around virtualenv management. All of the pieces discussed in that issue are readily available to users right now and have been for almost a year.

Even if we did implement the change to our default behaviour, you would still have to navigate to the directory to run the script. I fail to see how that makes the situation here better. Would you mind clarifying?

With venv-in-project, I could just put /abs/path/to/project/.venv/bin/{whatever} in a crontab (identical, bar the name of the venv subdir, to what I do now).
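
For illustration, a crontab entry along those lines might look like the following (the project path and entry-point name here are placeholders):

# run the installed console script hourly, straight from the in-project venv
0 * * * * /abs/path/to/project/.venv/bin/my-entrypoint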

It's not very clear when I'd need to set the various env vars to have them take effect. Once when I'm creating the venv? Every time I use it?

If it's the former, and I guess it probably is for my use case, it would be nice if it were a command line option to pipenv install.

Only tangentially related: I currently manually maintain a collection of venvs outside my vcs dir during dev, and I actually want more than one (one per python version is most common). And no, I don't like tox.

@cjw296 if you're calling the script directly and not using pipenv run, that's true. That's also true with the current pew default, though. pipenv --venv will give you the virtualenv location for any project, which you can then place in your crontab.
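
For example, a minimal sketch of that approach (the hash suffix in the virtualenv directory name is whatever pipenv generated for the project; the script name is a placeholder):

$ cd /path/to/project
$ pipenv --venv
/home/user/.local/share/virtualenvs/project-hAsHpTH

# then, in the crontab, call the installed script by that absolute path
0 * * * * /home/user/.local/share/virtualenvs/project-hAsHpTH/bin/my-entrypoint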

The environment variable should be set in the .profile, .bashrc, .zshrc, etc. of the crontab user. Once this is done, the first time you run pipenv for that project, a local virtualenv will be created for that directory. The directory will continue to exist, regardless of the environment variable being set, but will only be used by pipenv if the variable is present.
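
Concretely, assuming the variable being discussed here is PIPENV_VENV_IN_PROJECT (as mentioned further down in this thread), that would be something like:

# in the crontab user's ~/.profile, ~/.bashrc, or ~/.zshrc
export PIPENV_VENV_IN_PROJECT=1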

As discussed in the previous issue, these are usually binary choices. People either want their projects centralized, or stored with each project separately. Once you set the variable, pipenv will use that behaviour. Otherwise, anytime you forget to pass that flag a new centralized environment is created and that's usually a more painful user experience.

As for maintaining multiple venvs, the current proposed solution is tox or the tox-pipenv plugin. There may also be options in pew to allow for this. Those are the only integrations we have at this time, but we're happy to review a proposal for a better option if you have one. Let's try to keep to the subject for this issue though to avoid bleed over.

I was afraid that would be the answer w.r.t. the environment variables. Smells like the annoyance people face with REQUESTS_CA_BUNDLE; magic that has to infect your whole environment rather than, in that case, just using system-provided SSL certs like everyone else.

Honestly, the "type" of location to use (eg: in project, through pew, (for future?) through conda) feels like it should go in Pipfile and the actual location should be in Pipfile.lock.

I'm realising that an important use case for me is to be able to use entry point scripts by absolute path alone, without any env vars, magic working directory scripts, etc.

I'm afraid I also can't agree about your "binary choices" comment. I work on a huge variety of projects, and I'm not in a position to make that kind of decision for a lot of them. Furthermore, the choice, for me, varies on what I'm doing: for library development, I want centralised venvs. For application development, I want local envs.

Just want to take this moment to try pushing #1210 again. I feel it is pretty clear a lot people want per-project configuration. .env would solve most (if not all) these problems if Pipenv reads it for commands other than run and shell.

@uranusjr we've discussed this several times since the project started, first with a .pipenv file and later after .env was introduced. Kenneth is strongly opposed to storing this kind of information on a per project basis, which is why we're working in our current state. I agree it would be an added nicety though, there's been significant demand for it.

@cjw296 the location definitely wouldn't go in the Pipfile.lock, since the location isn't deployment independent. Kenneth also hasn't wanted to have environment details in the Pipfile since they're not true for every user of the Pipfile, only a single instantiation.

If you're hosting a variety of projects on the same user account, I'd say that's likely a "bad smell". For local development, Kenneth has pushed direnv as an alternative option for swapping between projects, but that's all we have at the moment.

@nateprewitt - you only ever work on one project per user account? Really? You have a different user for every library you work on?

@cjw296, there's no need to be combative. I think we've strayed pretty far from the original issue here.

I have PIPENV_VENV_IN_PROJECT set on my system and that's what I use for development. I've been a vocal supporter of having this be the default, and for project level parameterization, but that's been passed on by the project for the time being.

I don't think I've ever needed to cron parts of my development workflow for a non-application/service piece of software. I typically only cron for productionized systems or for automating things locally. In the first instance, yes, I would isolate different applications to different users. In the second, I wouldn't have an issue with an extra 4 characters in a command I likely won't look at again for a while. As I've said above, this is the current state of things. We're happy to review suggestions on changes and work towards a better solution given current constraints.

It's interesting that you view my comments as combative; the responses of this project's maintainers feel the same way on a number of issues. "We're happy to review suggestions on changes", I'm afraid, doesn't appear to be borne out in a number of cases, including this one.

When I say that, I'm not referring to your specific comments above, but to the overall intransigence shown towards use cases people are asking for. I'll leave this open in case others have a similar desire, but I'll pass on Pipenv for now and go find something more supportive of my use cases.

@cjw296 I am just not seeing how anything Nate said means he only works on one project per user. What’s more confusing to me is the use case where you are developing on projects where you are not able to decide where your own virtualenvs are stored. That isn’t a workflow I’m familiar with.

One option you have is to create your own virtual environments if you aren't able to work with pipenv's creation options. Then you can install pipenv in the virtualenv to do day-to-day management, but you can call scripts in the virtualenv directly from cron. Currently I call scripts directly from cron because pipenv is a CLI application to manage an environment; I don't see the value add for cron most of the time. That's what I did when I was using pip + virtualenvs. What am I missing?

We ARE happy to entertain solutions beyond what has been suggested, but have clarified which things have already been considered. I haven’t seen any alternatives, but I do want to clarify that pipenv isn’t meant to set restrictions on distributed project teams. The ‘binary choice’ comment applies to a developer system, I don’t see how it would force everyone to follow suit?

cd some/project/dir && pipenv run script should be sufficient.

That's a little optimistic I think.

That gets fairly hairy given a combination of cron's typically sparse idea of a path and the way someone might have installed pipenv (e.g. in /usr/local, which you'd get in Debian when issuing a sudo pip install pipenv). I agree with @cjw296 I'd have hoped pipenv had a better answer here.

@JosefAssad what do you mean by cron's sparse idea of a path? If your working directory is the directory of the project, pipenv will find the relevant virtualenv

and the way someone might have installed pipenv (e.g. in /usr/local, which you'd get in Debian when issuing a sudo pip install pipenv).

Obviously if pipenv is not on the path that is available to cron, you should call it by its full path.

what do you mean by cron's sparse idea of a path

On Debian, the cron path is /usr/bin:/bin.

Obviously if pipenv is not on the path that is available to cron, you should call it by its full path.

Yes, obviously. Then it complains that pew can't be found, which sudo pip install pipenv dropped in /usr/local also, at which point one has to adapt the PYTHONPATH also in the cron command (I am assuming, I got tired of experimenting at this stage). This is why I described the solution as hairy.

For reference, the way I do this in cron with a bog-standard virtualenv is not so simple either: I source the virtualenv's activate script and run the command by full path after that. So it's not like that's pretty either, but as I said, I was hoping pipenv had a nicer solution here.

@JosefAssad why are you installing from pip with sudo? You're giving root permissions to arbitrary code from the internet. If a package is compromised, you've just run whatever code they decided to put in setup.py. The official python documentation is very clear not to do this because it causes your exact path issues.

There are many issues on StackOverflow detailing proper installation of just about every flavor of *nix.

As I said above, the suggested approach will work but isn't ideal. We haven't really focused on improving the user experience around cronjobs. We're happy to review suggestions on how to make this better, but "me too" posts don't really progress the conversation.

@nateprewitt I appreciate the reminder about safe Linux'ing. The reason I am installing pipenv like that is, the pipenv documentation suggests using the following:

pip install pipenv

That command fails on Debian (and most reasonable Linuxes, I would guess). Since I need it available to all users, I am using sudo so it gets installed systemwide (as opposed to pip install --user pipenv). Fortunately, Debian is fairly sane about this and uses /usr/local.

I'll make sure to check stackoverflow for tips on installing *nix, I'm sure I haven't learned everything there is to know in my 18 years of using Linux.

At any rate, even if I decide to pip install --user pipenv to appease the id == 0 gods, it's still a python path which "fresh off the boat" cron is going to need spelled out (not just for pipenv but also for the python libraries it depends on) and no less complex than if I had installed pipenv in /usr/local.

@JosefAssad I'm still not totally convinced about invoking pipenv via cron honestly, as is probably clear we haven't considered that usage much because you can invoke the relevant python executable directly instead (which is what I do in cases like this) -- this kind of gets around all of the other questions of setting PYTHONPATH and worrying about whether pew is also available etc.

I source the virtualenv's activate script

Out of curiosity, are you sourcing it out of habit or is this actually necessary for your use case? Does just calling the python executable in the virtualenv bin and passing the relevant script as an argument accomplish the same goal?

As yet another alternative approach you could write an invocation script along these lines:

#!/bin/sh
VENV_PYTHON="/path/to/project-hAsHpTH/bin/python"
PROJECT="/path/to/some/project"
SCRIPT="script.py"
cd "${PROJECT}" && "${VENV_PYTHON}" "${SCRIPT}"

or even the alternative case of just using the virtualenv interpreter in your shebang, or something.
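
A crontab entry could then just call that wrapper, assuming it has been saved somewhere like /usr/local/bin/run-project.sh (a placeholder path) and made executable:

*/15 * * * * /usr/local/bin/run-project.sh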

I'm still not totally convinced about invoking pipenv via cron

Yeah I'm just happy we're kicking ideas around. I'm not married to pipenv having to handle this either, I'm just looking for a clean approach. I do appreciate and respect the imperative to control the scope of pipenv.

For the sake of thinking it through thoroughly, the cron use case might be a good example of what people might end up expecting from non-interactive use of pipenv. Dockerfiles might be another area where non-interactive use of pipenv might crop up.

Out of curiosity, are you sourcing it out of habit or is this actually necessary for your use case

Well, there might be a better way of doing it, but when I run virtualenv-based python projects out of a crontab, I'm sourcing the project's bin/activate since that doesn't just give me the right interpreter, it gives me the right path to that virtualenv's package library without having to muck about with python paths. That's necessary if I don't want to install all packages from my virtualenvs in my system library.
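
Spelled out, that looks roughly like this (a sketch; paths are placeholders, and some systems may need SHELL=/bin/bash at the top of the crontab for the source to work):

SHELL=/bin/bash
0 * * * * source /path/to/venv/bin/activate && /path/to/venv/bin/myscript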

Does just calling the python executable in the virtualenv bin and passing the relevant script as an argument accomplish the same goal?

No, unfortunately that doesn't work, and I think for the same reason that the (otherwise really clean) wrapper you suggested won't work either (I just tested to be triple and quadruple sure): because it only specifies the interpreter, not the venv's package lib.

even the alternative case of just using the virtualenv interpreter in your shebang, or something

You already know this so forgive me but I'm just spelling it out in case others read this thread: that just picks the right interpreter again, leaving us again with the problem of pointing it at an alternate location (the venv's location) for packages.

More generally, I'm a little hesitant to hardcode the interpreter location in my source code. It's much cleaner for the shebang to say (as I usually write) #!/usr/bin/env python and then let virtualenv/pipenv handle setting the env right than to write #!/path/to/project-hAsHpTH/bin/python. The project-hAsHpTH/bin/python part won't change, but the /path/to/ might very well.

I know it's all a little abstract; if it's easier that I produce a minimal example just say so.

Apologies for the non-sequitur, but I am reminded of what I think is a comparable solution in git. Probably as a nod to non-interactive use of git, it has a command line switch -C which (copying from the man page here):

Run as if git was started in <path> instead of the current working directory.

The git executable depends on being able to find the .git directory inside the repository; perhaps this is a little analogous to pipenv needing to be able to identify which venv is associated with a given python project.
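
For reference, git's switch works like this:

# runs "git status" as if it had been invoked from inside /path/to/repo
git -C /path/to/repo status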

If I could - say - stand in a location like /etc and call a command like pipenv --pretend-youre-standing-here=/path/to/project run passwdcleaner.py passwd (fictitious example of a python project!) then that command line switch --pretend-youre-standing-here would be analogous to git's -C and would - as far as I can tell - address the cron use case, and probably also a bunch of other non-interactive ways people might end up using (or trying to use) pipenv.

Whether that works for pipenv, that you know better than me. :)

@JosefAssad thanks for the clear response -- the docker use case is a bit different I think, just because docker containers are usually atomic and many users are installing with --system inside a container. The use case of pew invocation is an interesting point I've been thinking about since you mentioned it -- we currently have some logic to find executables, but I don't believe we are applying this to our calls to pew presently, which is one of the reasons it won't work just by changing to the project directory.

For context, pipenv run currently works with the following code:

def inline_activate_virtualenv():
    try:
        activate_this = which('activate_this.py')
        with open(activate_this) as f:
            code = compile(f.read(), activate_this, 'exec')
            exec(code, dict(__file__=activate_this))
    # Catch all errors, just in case.
    except Exception:
        click.echo(
            u'{0}: There was an unexpected error while activating your virtualenv. Continuing anyway…'
            ''.format(crayons.red('Warning', bold=True)),
            err=True
        )

Note that which() uses project.virtualenv_location, which is the same value that pipenv --venv reports, and which in turn depends on a call to pew.

the docker use case is a bit different I think, many users are installing with --system inside a container

Yeah I think you're right about that, that's also what I see most people do in Dockerfiles, for better or worse.

I've banged a bit on cron and pipenv. Here's what works for me:

PATH=/usr/bin:/bin:/usr/local/bin
*/1 *  *   *   *     cd /path/to/project/ && pipenv run python stuff.py

I did have an issue with pipenv getting confused as to the location of the virtualenv which I fixed by unsetting a couple of envvars which I think virtualenv or virtualenvwrapper were using.

I think you're absolutely right, improving the discoverability of pew should allow me to omit the explicit path specification in the example above; well spotted. And then the cron invocation can indeed be reduced to more or less exactly your original proposition of:

cd /some/project/dir && pipenv run script should be sufficient

@JosefAssad - I have to admit, even on debian, I've never had problems from just using the absolute path to an entrypoint script in the bin directory of a virtualenv.

That's all I think I need from pipenv, I don't have any desire to run pipenv from a crontab; that was a suggestion from @richardcooper and @nateprewitt .

we also have the PIPENV_PIPFILE environment variable, which can be set.

@kennethreitz - can you give an example of how that would be used in a crontab?

It's pretty easy to figure out :)

That's all I think I need from pipenv, I don't have any desire to run pipenv from a crontab; that was a suggestion from @richardcooper and @nateprewitt .

Sorry, perhaps I'm misreading this, but what was the original issue if not running pipenv from a crontab?

@kennethreitz - for you maybe. For me, evidently not, given that I'm here asking the question.

@nateprewitt - please see https://github.com/pypa/pipenv/issues/1369#issue-292190514.

@cjw296 sorry, that's still not clear. You don't want a programmatic or command-based way to get the absolute path of the virtualenv, and you don't want to use pipenv? How were you intending to use this in the crontab?

Ok, since the dialogue has kind of halted here to just emoji responses, I'm going to close this.

To summarize this thread for any future readers, we currently only have two methods of running scripts.

Path to Virtualenv

The first is to use the virtualenv path to execute installed scripts, as you would with a traditional virtualenv. This value is retrievable from the pipenv --venv command which will produce the absolute path of the virtualenv regardless of environment variables.

e.g.
/path/of/.venv/bin/python yourscript.py
/path/of/.venv/bin/installed_executable_script.py

Pipenv run

If you want to use pipenv to run the job directly, you'll need to make sure pipenv is on the PATH of the cron user, navigate to the project directory, and use pipenv run. While this isn't an ideal interface, it's what we currently have as an option from cron'd tasks.

e.g.
cd /path/to/project && pipenv run python yourscript.py

If you have suggestions on how you'd like to see them improved, please feel free to create a new issue detailing how you'd envision it working. Thanks!


@cjw296 if you have further clarifications on how you'd expect this functionality to work, I'm happy to continue talking through this. We just need to have clearer bounds because the requirements are still pretty fuzzy.

We also have $ PIPENV_PIPFILE=/path/to/Pipfile pipenv run.
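
In a crontab, that could look something like the following (a sketch; the paths are placeholders, and pipenv is called by absolute path since cron's PATH is usually minimal):

0 * * * * PIPENV_PIPFILE=/path/to/project/Pipfile /usr/local/bin/pipenv run python /path/to/project/yourscript.py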

Thanks @kennethreitz that works quite well. Didn't find it in the docs so here's a PR.

I'm also struggling with the issue.

I have to run the following command from cron.

pipenv run python manage.py <my_command>

To run it from cron, I'm setting the following command from cPanel:

cd /path_to_my_project/ && pipenv run python manage.py

But this doesn't work. So I tried to add the python path, like:

cd /path_to_my_project/ && pipenv run /usr/local/bin/python3.5 manage.py <my_command>

But it says,

ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?

From this issue, it seems I need to provide the path to the virtualenv created by pipenv, which is

/home/user/.local/share/virtualenvs/myapp.com-IuTkL8w_

But I'm not sure how to use it in the cron command.

cd /path_to_my_project/ && pipenv run /home/user/.local/share/virtualenvs/myapp.com-IuTkL8w_/bin/python3.5 manage.py <my_command>

Also, I would encourage people not to treat the issue tracker as a help forum, because it is not one.

With venv-in-project, I could just put /abs/path/to/project/.venv/bin/{whatever} in a crontab

Just chiming in here to correct an often-repeated misunderstanding: virtualenv changes some things when "activate" is launched which do not happen when its python interpreter is started directly.

@JosefAssad I'm still not totally convinced about invoking pipenv via cron honestly, as is probably clear we haven't considered that usage much because you can invoke the relevant python executable directly instead (which is what I do in cases like this) -- this kind of gets around all of the other questions of setting PYTHONPATH and worrying about whether pew is also available etc.

I source the virtualenv's activate script

Out of curiosity, are you sourcing it out of habit or is this actually necessary for your use case? Does just calling the python executable in the virtualenv bin and passing the relevant script as an argument accomplish the same goal?

As yet another alternative approach you could write an invocation script along these lines:

#!/bin/sh
VENV_PYTHON="/path/to/project-hAsHpTH/bin/python"
PROJECT="/path/to/some/project"
SCRIPT="script.py"
cd "${PROJECT}" && "${VENV_PYTHON}" "${SCRIPT}"

or even the alternative case of just using the virtualenv interpreter in your shebang, or something.

I followed @techalchemy's procedure. However, instead of virtualenv I used pipenv. And I also used the absolute path for pipenv. Without the absolute path it didn't work. Here goes my bash script:

#!/bin/sh
VENV_PYTHON="/path/to/pipenv/created/virtualenv/python"
PROJECT="/path/to/project/"
PIPENV="/path/to/pipenv"
SCRIPT="script.py"

cd "${PROJECT}" && "${PIPENV}" run "${VENV_PYTHON}" "${SCRIPT}"

Leaving this here because I struggled with debugging my pipenv cron job. Cron job output goes to /var/mail/{username} on macOS.
