Aws-sam-cli: "sam build" feedback: Support private PyPi repositories

Created on 24 Nov 2018  ·  18 Comments  ·  Source: aws/aws-sam-cli

Description:
Some companies host their own internal PyPI servers. They install requirements through something like:

pip install -i http://some-private-server.com/pypi \
    --extra-index-url https://pypi.python.org/simple/ \
    --trusted-host some-private-server.com \
    -t build

sam build needs to support passing extra options like these through to pip.

area/build type/feature type/feedback


All 18 comments

In addition to passing the options directly in the sam build command line, pip has a couple of ways of storing the information so you don't have to explicitly pass it in via each command line:

Config file

The PyPI username and password can be stored in a config file in certain specific locations (e.g. $HOME/.config/pip/pip.conf). You could check whether any of the supported pip config files are present and mount them into the Docker container.
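A sketch of such a config file, using the hypothetical host from the issue description:

```ini
# $HOME/.config/pip/pip.conf -- hypothetical private index and credentials
[global]
index-url = http://some-private-server.com/pypi
extra-index-url = https://pypi.python.org/simple/
trusted-host = some-private-server.com
```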

Environment Variables

You can now use environment variables within a requirements.txt with pip 10+.

This could allow you to do something like...

$ cat requirements.txt
--extra-index-url https://${PYPI_USER}:${PYPI_PASS}@mypypi.com/simple/
privatetool==1.0.0

With this I can now run sam build and it will use my private PyPI as long as those environment variables are set. This doesn't work when building with a container, because you cannot pass environment variables into it.

@sanathkr you reckon we could add an additional argument to the build command to pass in environment variables to the build container?

sam build --use-container --docker-env PYPI_USER=123,PYPI_PASS=xxx

Should be fairly easy to hook up as the BuildContainer uses Container class which already allows env vars to be passed in.

Thanks!
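The `--docker-env` flag proposed above does not exist in SAM CLI; as an illustration of the idea, a `KEY=VAL,KEY=VAL` value could be parsed into an env-var dict before being handed to the Container class. A minimal sketch (the function name is hypothetical, not part of SAM CLI):

```python
def parse_docker_env(value: str) -> dict:
    """Parse a hypothetical --docker-env value like 'A=1,B=2' into a dict.

    Sketch only: this flag is a proposal in the thread, not a real SAM CLI
    option. Values may themselves contain '=', so split on the first one.
    """
    env = {}
    for pair in value.split(","):
        key, sep, val = pair.partition("=")
        if not sep:
            raise ValueError(f"expected KEY=VALUE, got {pair!r}")
        env[key.strip()] = val
    return env

print(parse_docker_env("PYPI_USER=123,PYPI_PASS=xxx"))
# {'PYPI_USER': '123', 'PYPI_PASS': 'xxx'}
```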

@billyshambrook I wonder if we should just pass the system env vars into the container instead? I have had this conversation with @sthulb about the Go builders some time back. I am not sure why we don't pass the env vars into the container already (maybe for some security reason?), but I think it makes sense for build to pass the system env vars into the container by default.

Here is the code where we made the decision right now: https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/local/docker/lambda_build_container.py#L87

@sanathkr Thoughts?

We did not pass environment variables to provide isolation. When you build inside the container, the environment is not tainted by configuration you have set in your terminal. However, I think it is a good idea to pass through standard environment variables used by package managers.

@billyshambrook Do you know which env vars are standard for Python packaging (Twine, PyPI, etc.)?

We could make a change to automatically pass those through, if available.
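For reference, pip derives an environment variable from every long option using a fixed rule (`--extra-index-url` becomes `PIP_EXTRA_INDEX_URL`), which is one source of such "standard" env vars. A small sketch of that rule (the helper name is mine, not pip's):

```python
def pip_env_var(option: str) -> str:
    """Map a pip long option to the env var pip reads for it.

    pip's documented rule: strip leading dashes, replace '-' with '_',
    uppercase, and prefix with 'PIP_'.
    """
    return "PIP_" + option.lstrip("-").replace("-", "_").upper()

print(pip_env_var("--extra-index-url"))  # PIP_EXTRA_INDEX_URL
print(pip_env_var("--trusted-host"))     # PIP_TRUSTED_HOST
```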

So we would have to keep a mapping of each package manager to its "standard" environment variables? That sounds like more configuration to keep track of and another point of confusion for customers ("I can build locally but not in the container"). I don't see why we can't pass the system env vars through. I don't agree that it taints the configuration; I should be able to define env vars to configure things in the container or locally.

Yes, it is another set of configs to maintain, but in a reasonable time we will get good coverage of the standard env vars.

I think we should set clear expectations for building locally vs. in a container. That is where the confusion is. Depending on how we define that, we could go one way or the other.

It’s too constraining. If we don’t want to mount system env vars, we should at least provide a way for customers to configure.

Keeping this mapping does not scale: provided runtimes, plugin builders, etc. I am strongly against having SAM CLI maintain and update a list of which env vars map to which container. It's just another "X introduced Y" that we have to keep track of.

@jfuss @sanathkr

I had the same issue with env vars when setting specific build parameters for pip install (CFLAGS, etc.).

Wouldn't something like what @billyshambrook suggested above make sense: --docker-env MYVAR=myval?

We'd like to pass the --ignore-installed pip flag to sam build --use-container, and it doesn't work.

  1. --docker-env
$ sam build --use-container --docker-env PYP_IGNORE_INSTALLED=true
Usage: sam build [OPTIONS] [FUNCTION_IDENTIFIER]
Try "sam build --help" for help.

Error: no such option: --docker-env
  2. requirements.txt file

file:

--ignore-installed
pyathena==1.7.1

Output:

$ sam build --use-container
Starting Build inside a container
Unable to process properties of PredictFunction.AWS::Serverless::Function
Building resource 'InitPredictFunction'

Fetching lambci/lambda:build-python3.6 Docker container image......
Mounting /Users/rinat/Developer/Projects/Basking/bs-data/aws_cfn/ds_model/application as /tmp/samcli/source:ro,delegated inside runtime container

Build Failed
Running PythonPipBuilder:ResolveDependencies
Error: PythonPipBuilder:ResolveDependencies - Usage: -c [options]

ERROR: Invalid requirement: --ignore-installed
-c: error: no such option: --ignore-installed

Is there any workaround or another approach to pass these settings to pip in sam?

Is there any more work on this? We are facing a similar problem: we want to include private repos in our requirements.txt file, but would need to hardcode usernames and passwords into the file to fetch them.

Hi @jfuss, just wanted to check whether you have any recent updates on this? My use case is slightly different: I have private GitHub repos, so I'd need to pass in a GitHub token (or SSH key) and build within the container. This is essentially the same as the question I found on Stack Overflow here

Any updates on this issue? How have people installed private packages into their lambdas?

This is a direct need for any customers that have private packages. Not sure why this wasn't included in the 1.0 release.

We have the same issue. Is there any way to install a private Python package?

EDIT

  • I could install a private pip package with the following procedure:

    • pip install wheel

    • pip config set global.extra-index-url https://${PYPI_USER}:${PYPI_PASS}@mypypi.com/simple/

    • requirements.txt looks like the following:

$ cat requirements.txt
--extra-index-url https://${PYPI_USER}:${PYPI_PASS}@mypypi.com/simple/
privatetool==1.0.0

SAM's error message is very unfriendly...

We have a workaround. Since SAM uses pip under the covers, we generate a pip.conf in the correct directory as part of the build system. When SAM runs and pip is called, it loads the pip.conf. We are using Artifactory virtual repositories, which let us pull both from public PyPI via a pip remote and from local repositories within Artifactory.

Using a jenkins library:

// Write pip.conf before invoking sam build so pip resolves from Artifactory.
// (The index-url lines must not be indented, or pip's INI parser misreads them.)
def pipconf = """\
[global]
index-url = https://${USER}:${PASS}@${HOST}/artifactory/api/pypi/${myreponame}/simple
"""

def pip_path = steps.env.HOME.trim() + "/.config/pip/pip.conf"
steps.writeFile(file: pip_path, text: pipconf)
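Outside Jenkins, the same workaround can be scripted directly. A minimal Python sketch, assuming placeholder credentials and the same Artifactory layout as the snippet above:

```python
from pathlib import Path

def write_pip_conf(user, password, host, repo, home=None):
    """Write a pip.conf pointing at a private index so `sam build` picks it up.

    Sketch of the workaround described above; the credentials, host, and
    Artifactory path are placeholders. `home` defaults to the real home dir.
    """
    home = Path(home) if home is not None else Path.home()
    conf = (
        "[global]\n"
        f"index-url = https://{user}:{password}@{host}"
        f"/artifactory/api/pypi/{repo}/simple\n"
    )
    path = home / ".config" / "pip" / "pip.conf"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(conf)
    return path
```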

Something that worked for us was to put the extra-index-url in the requirements.txt itself.

The requirements.txt looks like this

--extra-index-url YOUR_URL
--trusted-host YOUR_TRUSTED_HOST
your_private_package==0.20
more-itertools==7.0.0
pynamodb==4.0.0
dynamodb-json==1.3

See the pip documentation for the requirements file format and other options.

@shivambats yes, but if you sam build -u (build in Docker), you still have to hardcode your repository credentials, since for now you can't pass them in as environment variables.
