Vscode-remote-release: Cannot use environment variables in containerized environment in WSL2

Created on 29 May 2020 · 10 comments · Source: microsoft/vscode-remote-release




  • VSCode Version: 1.46.0-insider
  • OS Version: Windows 10 2004 (19041.264)

Steps to Reproduce:

  1. Add an environment variable in WSL2: export SPARK_HOME=/usr/local/spark
  2. Use SPARK_HOME in devcontainer.json, in the runArgs section, as follows:
    "runArgs": [ "-v", "${env:SPARK_HOME}:/workspaces/spark" ]
  3. Try to build the container. The Dev Containers log shows the following:
    [5404 ms] Start: Run: wsl -d Ubuntu-20.04 -e /bin/sh -c cd '/home/luiso/repos/PDSNextGen/src/Databricks' && docker 'run' '-a' 'STDOUT' '-a' 'STDERR' '--mount' 'type=bind,source=/home/luiso/repos/PDSNextGen,target=/workspaces/PDSNextGen' '-l' 'vsch.quality=insider' '-l' 'vsch.remote.devPort=0' '-l' 'vsch.local.folder=\\wsl$\Ubuntu-20.04\home\luiso\repos\PDSNextGen\src\Databricks' '-v' ':/workspaces/spark/' '--env' 'HOME=/home/databricks' '--entrypoint' '/bin/sh' 'vsc-databricks-792c63781538a9c0906c38bbf9b440c7' '-c' 'echo Container started ; while sleep 1; do :; done'
  4. As can be seen in the previous log, the mount argument comes out as ':/workspaces/spark/' because the SPARK_HOME environment variable was not expanded. As a result, nothing is mounted in /workspaces/spark


Does this issue occur when all extensions are disabled?: Does not apply; at the very least the Remote - Containers extension needs to be enabled.

Labels: duplicate, containers


All 10 comments

For adding the environment variable, I have tried a few options:

  • ~/.bashrc
  • /etc/profile.d/spark.sh

However, it looks like a non-interactive session, like the one VSCode uses, won't run any of those scripts, so none of the environment variables are set. That explains why it doesn't work. The question would be: is there any workaround?
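(A rough way to see this from inside the distro, assuming SPARK_HOME is exported from /etc/profile.d/spark.sh or ~/.bashrc as above: env -i starts with an empty environment, roughly like the extension's fresh wsl ... -e /bin/sh -c invocation, so only the login shell rebuilds the variable by sourcing the profile scripts.)

env -i /bin/sh -c 'echo "plain sh -c: SPARK_HOME=[$SPARK_HOME]"'     # prints an empty value
env -i /bin/bash -lc 'echo "login bash:  SPARK_HOME=[$SPARK_HOME]"'  # /etc/profile and /etc/profile.d/*.sh run, value is set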

Sorry! The Docker framework, with its orchestration engines, offers a built-in mechanism for container configuration, and even more than one. It doesn't require any rebuild, because rebuilding cloud applications on the fly is unimaginable. See the example below:
Say I created a JSON file locally containing my service configuration:
cat .container-config

{
    "localrepo": "localRepoURL:5000",
    "gitURL": "myGitServer",
    "targetContext":"default"
}
Then my service is configured with this file using the standard Docker mechanism:
docker config create --label vscode repositoryurl .container-config
docker service update --config-add repositoryurl devcontainerVol
devcontainerVol
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
And my configuration is exported to the Container environment:

docker exec 3704368a42e2 cat /repositoryurl | jq -r 'keys[] as $k | "export \($k)=\(.[$k])"'
export gitURL=myGitServer
export localrepo=localRepoURL:5000
export targetContext=default

This is a regular cloud application maintenance procedure :)
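(To actually turn that dumped config into environment variables inside the container, the generated export lines can be eval'd in the container's shell; a sketch, reusing the /repositoryurl config target from above:)

eval "$(jq -r 'keys[] as $k | "export \($k)=\(.[$k])"' /repositoryurl)"
echo "$gitURL"    # myGitServer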

Hiya @PavelSosin-320
First of all, thanks a lot for your response
Second, I didn't know about that docker config functionality (I guess I will never finish learning.. which is a good thing)
Third, I tried to apply your suggestion. And it looks like the node needs to be part of a swarm cluster (even if it is a single-node one). Will it work without issues with the VSCode-owned container?
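(For reference, docker config is indeed a Swarm-mode feature: on a plain engine the create command fails with "this node is not a swarm manager", and a single-node swarm is enough to enable it. Note also that configs attach to Swarm services, not to standalone docker run containers like the one the extension starts in the log above, so this is a sketch rather than a drop-in fix.)

docker config create repositoryurl .container-config   # fails on a non-swarm engine
docker swarm init                                       # a single-node swarm is sufficient
docker config create repositoryurl .container-config
docker config ls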

This is a problem for me as well. When using VSCode WSL2 in combination with remote containers, the environment variables set in the WSL2 environment do not get passed to docker. This is critical, as we rely on environment variables to set privileged credentials in the development container (such as PATs for access to private package repos).

When running the same exact setup, but using Windows instead of WSL2 as the root of the project, the environment variables are set appropriately in the resulting dev container.

Looking at the Dev Containers logs, every WSL2 command is executed via wsl -d Ubuntu-18.04 -e /bin/sh -c, which would seem to be the reason why the WSL2 bash environment variables are not being passed to docker, as they are not being loaded in the first place. Additionally, setting the environment variables in ~/.vscode-server[-insiders]/server-env-setup, as per https://code.visualstudio.com/docs/remote/wsl#_advanced-environment-setup-script does not work, either.

Without being able to pass environment variables from my WSL2 environment into docker, the utility is greatly diminished. Yes, I can still do this via Windows, however the advantages of using WSL2 with the WSL2 backend for docker are removed.

In the Docker Desktop implementation, the Docker daemon runs in a separate VM, either a Hyper-V VM or the WSL2 lightweight VM. Even when you use WSL2, that VM is not a singleton. The Docker VM and the other WSL VMs don't share any environment automatically unless you pass specific environment variables to specific containers using the regular Docker mechanism: the --env option.
A running WSL distro doesn't share its environment with the host Windows either; it only inherits some variables, like PATH.
Of course, using an env file is the solution on the Docker side. For WSL it is tricky: using shell login scripts is possible, but only if the file is copied into the distro's filesystem from outside, e.g. via \\wsl$\<distro>\home\<user>. Unfortunately, the \\wsl$ integration has not really been completed by MS as of my pre-release 2004 (build 271) and works poorly for now.
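(The "regular Docker mechanism" mentioned above, for reference: pass individual values with --env, or a whole file of KEY=VALUE lines with --env-file. The alpine image and the dev.env file name here are just placeholders.)

docker run --rm --env SPARK_HOME=/usr/local/spark alpine env | grep SPARK_HOME
docker run --rm --env-file ./dev.env alpine env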

Thanks for the response @PavelSosin-320. The problem here isn't related to how Docker works, but rather to the context in which the Remote - Containers extension invokes Docker commands when running against a Remote - WSL project.

Environment variables are being set via the appropriate Docker mechanisms, in my case the services.<service_name>.environment key in a docker-compose.yml file. Because all of the docker commands are invoked via wsl -d Ubuntu-18.04 -e /bin/sh -c, there is no opportunity for any shell setup scripts to run.

If instead, the extension were to either:

  1. Provide an override option to set the shell (e.g. /bin/bash instead of /bin/sh), so that the correct ~/.bashrc file with the environment setup would be loaded.
  2. Source the appropriate server-env-setup script as described in WSL Advanced Environment Setup, so that there would be an opportunity to set up the WSL2 shell environment before the docker command is invoked (see the sketch below).

Again, when launching the Remote Container via a Windows path, this works correctly. Also, when bringing up the container manually from within WSL2 (via docker-compose run), the environment variables in the container are set correctly. The only failing scenario here is when going through the Remote Container extension to a WSL2 backend, due to the lack of environment initialization hooks.
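(For reference, the script from option 2 is just a plain shell fragment that the linked WSL docs describe as being sourced before the server starts; a minimal sketch, with the variable names purely illustrative, and, per the comments above, not currently honored by the Containers extension:)

# ~/.vscode-server/server-env-setup  (or ~/.vscode-server-insiders/server-env-setup)
export SPARK_HOME=/usr/local/spark
export PRIVATE_REPO_PAT="<token from your credential store>"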

I am also encountering this issue with the same VSCode and Windows versions as the OP, and I agree with @JP-Dhabolt's findings.

One workaround I found is to dynamically build a .env file in the project's root folder just before launching VSCode, i.e. via an alias + script (dump-env). This is obviously not optimal for end users, but it may be helpful to some in the short term.
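(A rough sketch of that approach, with the script name and variable list purely illustrative; Compose then picks the values up from .env instead of the shell environment:)

#!/usr/bin/env sh
# dump-env: regenerate .env in the project root, then launch VS Code
{
  echo "SPARK_HOME=$SPARK_HOME"
  echo "PRIVATE_REPO_PAT=$PRIVATE_REPO_PAT"
} > .env
code .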

I tried to create a WSL2 Ubuntu user manually, create a Linux login script manually, copy this script to the created user's home directory, and invoke this login script using wsl -d <distro> -u <user>. So WSL2 doesn't prevent the creation of a per-user environment and executes the bash login script like any regular Linux. My Ubuntu-20.04 distro also supports service, so any server can be started externally with a given environment. So this requirement can technically be fulfilled.
The main problem is that the WSL2 root process /init is completely hard-coded.

Found another workaround: replace the docker binaries in /usr/bin with shell scripts that set up the environment and then call the original binaries.
I renamed docker-compose to docker-compose-orig and created docker-compose with the following contents:

#!/usr/bin/env sh
if [ -z "$SSH_AUTH_SOCK" ]; then
    # If npiperelay.exe is available, use it to pipe the Windows ssh-agent to WSL
    # Get it from https://github.com/jstarks/npiperelay
    if which npiperelay.exe >/dev/null; then
        export SSH_AUTH_SOCK="$HOME/.ssh/agent.sock"
        # Only start the relay if nothing is listening on the socket yet
        if ! ss -a | grep -q "$SSH_AUTH_SOCK"; then
            rm -f "$SSH_AUTH_SOCK"
            ( setsid socat UNIX-LISTEN:"$SSH_AUTH_SOCK",fork EXEC:"npiperelay.exe -ei -s //./pipe/openssh-ssh-agent",nofork & ) >/dev/null 2>&1
        fi
    elif which keychain >/dev/null; then
        # Use keychain to reuse ssh-agent across multiple logins
        keychain -q
    fi
fi
exec /usr/bin/env docker-compose-orig "$@"

Did the same for docker as well.

You'll need to have the npiperelay.exe executable in the Windows PATH (or change the script to use an absolute path).
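(For completeness, installing such a wrapper amounts to something like the following, assuming docker-compose lives in /usr/bin as above and the script is saved locally as docker-compose-wrapper.sh; note that a package upgrade may overwrite the wrapper:)

sudo mv /usr/bin/docker-compose /usr/bin/docker-compose-orig
sudo cp docker-compose-wrapper.sh /usr/bin/docker-compose   # the script above
sudo chmod +x /usr/bin/docker-compose
# repeat for docker: move it to /usr/bin/docker-orig and install a matching wrapper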
