(Hope it's OK to file a feature request here, didn't see any explicit documentation regarding that.)
tl;dr: I'd like to use pipenv to generate a bundle suitable for AWS Lambda
For reference, here is the AWS Lambda documentation for how to create a bundle with pip:
http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html#deployment-pkg-for-virtualenv
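For context, the pip-based flow in those docs boils down to installing the dependencies into a directory and zipping them up together with the handler. A rough sketch (the requirements.txt and lambda_function.py names are just placeholders):

$ pip install -r requirements.txt -t package/   # install deps into a local dir
$ cd package && zip -r ../lambda.zip . && cd ..  # zip the dependencies
$ zip -g lambda.zip lambda_function.py           # append the handler itself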
There are a couple of existing pipenv issues that touch on this use case, but neither of them directly requests this feature, so I figured it might be helpful to do so.
Hi @nicholasbishop! Feature requests are certainly welcome. I don't think this is in the scope of pipenv, though. Those directions essentially just tell you to copy your site-packages folder along with your code; pip does nothing but install the code into the site-packages folder. You can get the location of your virtualenv by running (on unix) open $(pipenv --venv) and then following those directions regarding copying the site-packages folder.
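In other words, something roughly like this (the lib/python3.6 segment depends on your interpreter version, so treat it as a sketch):

$ VENV=$(pipenv --venv)
$ cp -r "$VENV"/lib/python3.6/site-packages/. build/   # copy deps next to your code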
I would vote for this feature. It is strange to have «check style» functionality but not some of pip's useful functionality. I tried to migrate one of my AWS Lambda projects to pipenv, but it turned out that it's easier to use the original venv functionality + pip + requirements.txt, simply because I can't specify the exact directory for package installation with pipenv.
@Zebradil If you mean pipenv check --style, that functionality is planned to be removed.
I found this example on GitHub that looks sensible for packaging Lambdas with pipenv. You could move those commands to a bundle "script" in the Pipfile and run pipenv run bundle.
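A minimal sketch of what that could look like in the Pipfile (the scripts/bundle.sh name is hypothetical; it would hold the zip commands from that example):

[scripts]
bundle = "sh scripts/bundle.sh"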
@joestump Unfortunately that's not sensible (unless you work in an environment where there's a large homogeneity of systems). I've been through similar hoops in the last few days. You might encounter both lib/python3.6/site-packages and lib64/python3.6/site-packages that you need to add to the zip command. Then you also need to start adding checks to see whether or not the directories exist.
In the end the easiest remains, as @Zebradil pointed out:
build:
	mkdir -p $(BUILDDIR)
	$(pipenv) lock -r > $(BUILDDIR)/requirements.txt
	cp -R $(SRCDIR)/* $(BUILDDIR)
	$(pipenv) run pip install --isolated --disable-pip-version-check -r $(BUILDDIR)/requirements.txt -t $(BUILDDIR) -U
That way you can just point the CodeUri to build and use aws cloudformation package as it should.
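For reference, the packaging call itself is then just the stock CLI invocation (bucket and template names here are placeholders):

$ aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket my-deployment-bucket \
    --output-template-file packaged.yaml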
This makes it rather platform-agnostic (I had to get this working on Linux, macOS, and Windows environments). Having to step out of pipenv with a run pip install feels weird. And pipenv --venv, as @erinxocon mentioned, does not help in any of these scenarios.
In addition, I do need to look into that scripts section of the Pipfile. That might actually allow me to keep some stuff local to the Pipfile without having to bloat the Makefile with Python-specific things, and it would allow us, where I work, to keep using our more standardised Lambda build pipeline (which supports C#, Go, Node.js, and Python).
In the end I arrived at the following command:
$ pipenv run pip install -r <(pipenv lock -r) --target _build/
It works in bash but not in plain sh, since the <( ) process substitution is a bashism.
But I see some sense in preserving requirements.txt in the bundle, as metadata for the package.
The problem was that when I tried the redirection solution in a Makefile, the result became unreadable fast. And like you said, I figured that having the requirements.txt as part of the zip in S3 would be an added benefit since, currently, we do not really do any release management on these functions; we always install the latest ones from the repository.
The easiest way around this is stuffing the requirements into a temp file. See below
$(eval TEMPFILE = $(shell mktemp))
pipenv lock -r > ${TEMPFILE}
pipenv run pip install -r ${TEMPFILE} --target _build/
rm -f ${TEMPFILE}
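Outside of a Makefile, the same pattern works in plain sh, with a trap to make sure the temp file gets cleaned up; a sketch:

TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT          # clean up even if a step fails
pipenv lock -r > "$TMPFILE"           # dump locked deps as requirements
pipenv run pip install -r "$TMPFILE" --target _build/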
@erinxocon could we at least make pipenv --site-packages or something return the path?
That sounds like a reasonable idea. Would you mind opening an issue (and maybe creating a PEEP proposal) for it? --site-packages is probably not enough, but something like --venv-path=[key] to return sysconfig.get_path(key) would have everything you need for packaging (and maybe support :all-json:, :all-list:, etc. to return get_paths() in a certain format).
Meanwhile one of the following should return what you need:
pipenv run python -c "import json, sysconfig; print(json.dumps(sysconfig.get_path('purelib')))"
pipenv run python -c "import json, sysconfig; print(json.dumps(sysconfig.get_path('platlib')))"
Refer to the sysconfig documentation for more keys to use (and for the difference between purelib and platlib, although I believe these two should be identical in most virtual environment contexts).
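And for something like the proposed :all-json: output, sysconfig.get_paths() already returns the whole mapping:

pipenv run python -c "import json, sysconfig; print(json.dumps(sysconfig.get_paths()))"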