sam local has a `--docker-volume-basedir` option, but artifacts in that directory are copied into a directory under `/tmp`, such as `/tmp/aws-sam-local-1508825798871765871`, which is then bind-mounted to `/var/task` inside the `lambci/lambda` container.
This makes sam-local problematic to use in test environments, or in any environment where testing itself runs inside Docker -- see "The Solution" section.
Recommendation: add an option to mount from a named volume rather than bind-mounting, so it is not necessary to share the host filesystem with a container.
https://docs.docker.com/engine/admin/volumes/volumes/#start-a-container-with-a-volume
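To illustrate what the requested option would enable, here is a rough sketch of the named-volume approach using plain Docker commands (hypothetical: `sam-artifacts` is an assumed volume name, and sam local has no such volume support today):

```shell
# Create a named volume and populate it with the build artifacts via a
# throwaway helper container, instead of bind-mounting a host directory.
docker volume create sam-artifacts
docker run --rm \
    -v sam-artifacts:/var/task \
    -v "$PWD/target:/src:ro" \
    alpine cp -r /src/. /var/task/

# A runtime container can then mount the volume directly; no host path is
# involved, so this works even when the caller is itself a container.
docker run --rm -v sam-artifacts:/var/task lambci/lambda:java8 handler
```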
Workaround: mount the host's /tmp folder inside the container running sam local. This is not ideal: it is a security risk, it exposes the container to parts of the host it does not need access to (in particular write access), and it increases host-container coupling.
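Roughly, the workaround looks like this (a sketch; the image name `my-sam-local` is an assumption):

```shell
# Share the host's /tmp so the paths sam local writes under /tmp also exist on
# the host, where the Docker daemon resolves bind mounts. Share the Docker
# socket so the sam local container can launch the lambci/lambda container as
# a sibling on the host daemon.
docker run --rm \
    -v /tmp:/tmp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    my-sam-local local start-api --host 0.0.0.0
```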
Do you have an example of this workaround @jhovell? #502 tried to get this working, even with sharing `-v /tmp`, but without success.
Even if this is not ideal, it would be nice to see an example! Thanks! 👍
@drissamri sure. I think what I did not mention was that it is necessary to run SAM Local itself as a Docker container. As you can see, it was not exactly simple to get working. There is also another drawback, which I can only guess has to do with my Docker volume driver: Lambda startup performance is very bad, maybe 30-45 seconds, although execution is normal after that. I hypothesize it is the loading of the application JAR into a layered filesystem (not a volume) from container 2 to container 3 in the list below, but I am not sure.
So if you're keeping track, that's docker-on-docker-on-docker:
1) Jenkins/CI Slave
2) SAM Local with application code baked into image
3) LambCI Lambda application
This is for a Java8/Maven project. But I think it would work for any runtime with some tweaks:
First you need a dockerized version of SAM Local (I have not upgraded from 0.2.6 yet; no idea whether more recent versions still work).
Dockerfile for sam-local:

```dockerfile
FROM buildpack-deps:stretch-scm

# switch to root user to install stuff
USER root

# install Docker
RUN curl -sSL https://get.docker.com/ | sh

# Install SAM Local, since it is a tool that uses Docker that will be used for many CI builds
# https://github.com/awslabs/aws-sam-local/
RUN curl -OL https://github.com/awslabs/aws-sam-local/releases/download/v0.2.6/sam_0.2.6_linux_amd64.deb && \
    dpkg -i ./sam_0.2.6_linux_amd64.deb && \
    apt-get install -f ./sam_0.2.6_linux_amd64.deb && \
    rm ./sam_0.2.6_linux_amd64.deb
```
Dockerfile to build an image just for docker-in-docker testing. This is built at build time, purely for testing:
```dockerfile
# docker image with sam-local installed as root so that it can mount the docker sock for docker-in-docker
FROM docker-image-with-sam-local

# user application artifact
COPY target/my-jar.jar /mnt/my-project/target/my-jar.jar
COPY template.yaml /mnt/my-project/template.yaml
COPY env.json /mnt/my-project/env.json

WORKDIR /mnt/my-project
```
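For context, the test image above would be built and run along these lines (a sketch; the image tag `sam-local-test` is an assumption):

```shell
# Build the application first so target/my-jar.jar exists, then bake it into
# the test image.
mvn package
docker build -t sam-local-test .

# Run with the Docker socket shared so sam local can start the lambci/lambda
# container as a sibling on the host's Docker daemon, and /tmp shared so the
# bind-mount paths sam local generates resolve on the host.
docker run --rm -p 3000:3000 \
    -v /tmp:/tmp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    sam-local-test local start-api --host 0.0.0.0
```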
Next, the configuration. This is the syntax for the fabric8 docker-maven-plugin; I could translate it to Docker Compose or another tool, but you'll probably get the idea, as the syntax is very similar to Compose:
```xml
<image>
  <registry>my-registry</registry>
  <name>sam-local:${project.version}</name>
  <alias>sam-local</alias>
  <run>
    <net>my-project</net>
    <namingStrategy>alias</namingStrategy>
    <restartPolicy>
      <name>on-failure</name>
    </restartPolicy>
    <!-- SAM Local does not support CloudFormation conditions: https://github.com/awslabs/aws-sam-local/issues/194 -->
    <cmd>local start-api --docker-network my-project --host 0.0.0.0 --env-vars env.json</cmd>
    <entrypoint>/usr/local/bin/sam</entrypoint>
    <volumes>
      <bind>
        <!-- SAM Local does not support volumes-from, so we must bind-mount -->
        <bind>/tmp:/tmp</bind>
        <!-- this effectively gives sam-local root access on the host running Docker :( -->
        <bind>/var/run/docker.sock:/var/run/docker.sock</bind>
      </bind>
    </volumes>
    <ports>
      <port>3000:3000</port>
    </ports>
    <!-- Fake credentials so SAM Local doesn't pass real ones along to the
         Lambda container. That is slow for local testing and undesirable in
         CI, where we don't want CI system credentials passed along. -->
    <env>
      <AWS_ACCESS_KEY_ID>foo</AWS_ACCESS_KEY_ID>
      <AWS_SECRET_ACCESS_KEY>bar</AWS_SECRET_ACCESS_KEY>
      <AWS_DEFAULT_REGION>us-west-2</AWS_DEFAULT_REGION>
    </env>
    <log>
      <enabled>true</enabled>
      <color>magenta</color>
      <driver>
        <name>json-file</name>
      </driver>
    </log>
    <wait>
      <log>SAM CLI if you update your AWS SAM template.</log>
      <time>${startup.timeout.ms}</time>
      <shutdown>500</shutdown>
    </wait>
  </run>
</image>
```
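For readers not using Maven, the fabric8 configuration above corresponds roughly to this `docker run` invocation (a sketch; the image tag is an assumption, and `--rm` is deliberately absent because Docker does not allow it together with a restart policy):

```shell
docker run -d \
    --name sam-local \
    --network my-project \
    --restart on-failure \
    -p 3000:3000 \
    -v /tmp:/tmp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e AWS_ACCESS_KEY_ID=foo \
    -e AWS_SECRET_ACCESS_KEY=bar \
    -e AWS_DEFAULT_REGION=us-west-2 \
    --entrypoint /usr/local/bin/sam \
    my-registry/sam-local:latest \
    local start-api --docker-network my-project --host 0.0.0.0 --env-vars env.json
```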
I would very much love this. I'm writing (brittle) shell scripts to build and run my app like it's 2012 right now:
build-sam-local.sh:

```sh
#!/bin/sh
set -ex
rm -rf build
mkdir build && cp -r src/* build/
pipenv lock -r | pip install -r /dev/stdin --target build
pipenv lock -r --dev | pipenv run pip install --upgrade -r /dev/stdin --target build
cd build
aws s3 cp s3://washpost-perso-1-prod/lambda-compiled-binaries/psycopg2-3.6.tgz . --profile bigdata
tar -xvf psycopg2-3.6.tgz
cd ../
```
I'd love to be able to run a single command like `docker-compose up` that builds and runs the whole thing, perhaps via sam-cli. That would help me manage both the environment variables and the build process.
My production build is currently done using a buildspec.yml. I've pulled some of it into a shell script shared between local dev and prod, but if I could dockerize it, that would make things a lot simpler.
Hello,
I am currently trying to do the same. I started a project myself, then added more people to the team. Each member has their own computer (Mac, Linux, Windows), and I just spent 6 hours with one of them trying to install the dev environment, so I thought about dockerizing my app. So far I have a docker-compose file that creates a golang container where I build the executables and zip files I need, a mongo db container, and an aws container where sam-local is installed. When calling the API endpoint I get this error:
```
aws | {
aws |   "errorMessage": "fork/exec /var/task/bin/exe/user/authenticate: no such file or directory",
aws |   "errorType": "PathError"
aws | }
```
This is my docker-compose file:

```yaml
version: '3.3'
services:
  server:
    container_name: 'server'
    build: './server'
    command: bash
    volumes:
      - /Users/Samuel/go/src/server:/go/src/server
    depends_on:
      - mongo
  aws:
    container_name: 'aws'
    build: './aws'
    command: sam local start-api --docker-volume-basedir /Users/Samuel/go/src/server --host 0.0.0.0
    ports:
      - '3000:3000'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./server:/var/opt
    depends_on:
      - server
  mongo:
    image: 'mongo:3.6.7'
    container_name: 'mongo'
    volumes:
      - ./data:/data/db
    ports:
      - '27100:27017'
```
What could I change, in either the volumes I'm mounting or the start-api command, to get my executables to be reachable by the Lambda container? Is there any way I can access the Lambda container (from inside my running aws container) to check whether the /var/task folder exists correctly? I remember I once did this from my host machine (directly on the Lambda container) but can't remember how.
The use case is wanting a new team member to be able to start coding by just running the docker-compose file, taking only 5 minutes instead of spending a whole day getting set up (because of environment-related issues).
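On the inspection question: since the aws container shares the host's Docker socket, the lambci/lambda container runs as a sibling and can be examined with plain Docker commands while an invocation is in flight (a sketch; the `go1.x` image tag and filter are assumptions, and the container only lives for the duration of the invocation, so this has to be quick):

```shell
# Find the running lambci/lambda container started by sam local
docker ps --filter "ancestor=lambci/lambda:go1.x"

# Check what actually got mounted at /var/task inside it
docker exec "$(docker ps -q --filter ancestor=lambci/lambda:go1.x | head -n1)" \
    ls -l /var/task
```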
@snavarro89 did you end up making any progress after your post?
@snavarro89 yes, I was able to run the docker containers all together. What are you attempting to do?
@snavarro89 could you share what you did to get it running? I'm seeing a similar error to the one you originally posted: no such file or directory.
My docker-compose.yml:

```yaml
version: '3.2'
services:
  lambda:
    image: cnadiminti/aws-sam-local
    ports:
      - 12020:3000
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: ./lambda
        target: /var/opt
    command: local start-api --docker-volume-basedir "/Users/me/source/repos/test/lambda" --host 0.0.0.0
```
+1
Sorry I wasn't able to update this before. I created a Dockerfile where I downloaded that image.
Dockerfile:

```dockerfile
FROM alpine:3.6
ENV VERSION=0.2.2
RUN apk add --no-cache bash gawk sed grep bc coreutils git
RUN apk add --no-cache curl && \
    curl -sSLO https://github.com/awslabs/aws-sam-cli/archive/v0.6.0.tar.gz && \
    tar -C /usr/local/bin -zxvf v0.6.0.tar.gz && \
    apk del curl && \
    rm -f v0.6.0.tar.gz
RUN apk update
RUN apk add python-dev build-base
RUN apk add --no-cache py-pip
RUN python -m pip install grpcio-tools
RUN apk add --no-cache libc6-compat libstdc++
RUN pip install aws-sam-cli
WORKDIR /var/opt
EXPOSE 3000
```
Then in the docker-compose file:

```yaml
aws:
  container_name: 'aws'
  build: './aws'
  command: sam local start-api --docker-network solinsa_default --docker-volume-basedir /Users/samuelnavarro/go/src/solinsa/server --host 0.0.0.0
  ports:
    - '3000:3000'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./server:/var/opt
```
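With that in place, the stack should come up with a single command, something like the following (a sketch; the endpoint path is a placeholder for whatever the template defines):

```shell
# Build the image and start the aws service (plus its dependencies)
docker-compose up --build -d aws

# Exercise the API that sam local serves on port 3000
curl http://localhost:3000/
```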
Closing. Docker in Docker is not something we currently support. Please run sam outside of Docker to use sam local or sam build --use-container.