Heard this from a number of people at PyCon and here: https://www.reddit.com/r/vscode/comments/bjycxh/introducing_remote_development_for_visual_studio/emcfer5?utm_source=share&utm_medium=web2x
Scenario would be to connect to a remote SSH host and then open a container on that SSH host. Currently it's possible by setting up ssh on the docker container and forwarding the SSH connection but it requires a lot of manual steps.
I would love this! My usage scenario is that we get Windows VMs that don't have great specs or support for Docker, but we have another remote Linux server we can SSH into to work out of. While Remote-SSH already helps IMMENSELY for coding directly on that server, being able to connect to remote containers would be amazing.
Vote! On a further note, I wish these 3 extensions were composable, like:

- SSH + Containers are Remote containers
- SSH + WSL are WSL on a remote Windows machine
- WSL + Containers are Containers in WSL (probably doesn't make sense, just listing it here) :)
Turns out WSL + Containers does make sense, since MS just announced WSL2, with support of Docker :D
You could (haven't tried myself, let me know if it works):
- Configure the Docker daemon on the remote machine to listen on a port (on the local interface for security).
- Set up an SSH tunnel from a port on your local machine to the Docker port on the remote machine.
- Start VS Code from the command line with the environment variable
  `DOCKER_HOST=tcp://localhost:1234` pointing to the forwarded port.

Note that because the automatic mounting of the local folder doesn't work in this scenario, you'd have to use a devcontainer.json on your local machine with a Docker Compose file to work around that. You could then mount a folder on the remote machine into the container using that.
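A minimal sketch of those three steps, assuming a remote daemon port of 2375, a local port of 1234, and a hypothetical `user@remote-host` (adjust everything to your setup; I haven't verified the daemon configuration step on every distro):

```shell
# 1. On the remote machine, have the Docker daemon also listen on a local TCP port,
#    e.g. add "tcp://127.0.0.1:2375" to the "hosts" array in /etc/docker/daemon.json
#    (keeping the default unix socket), then restart the daemon.

# 2. On the local machine, tunnel a local port to the remote Docker port.
ssh -NL localhost:1234:localhost:2375 user@remote-host

# 3. In another local terminal, start VS Code with DOCKER_HOST pointing at the tunnel.
DOCKER_HOST=tcp://localhost:1234 code
```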
Yes I think we should just connect directly to the daemon on the remote host (or via an ssh tunnel if you need), but of course the mounting is the trickier bit. In my case I need to use a remote docker host as it has resources not available on my development machine - for example other services already running (in docker swarm).
We talked about source code mount/sync also in https://github.com/microsoft/vscode-remote-release/issues/12#issuecomment-489090106
@qubitron @chrmarti So I played around with this a bit and have what you are describing above working with a few tweaks. These were the complete steps I went through. This approach has the source code on the remote VM so you don't need to rsync or anything... just use a local mount/fileshare.
Assumes an SSH host of `[email protected]`. Also assumes `code-insiders` is in your path:

```shell
ssh -NL localhost:12345:/var/run/docker.sock [email protected]
export DOCKER_HOST=localhost:12345
code-insiders /path/to/cloned/repo/on/remote/via/mount/or/share
```
Update the `.devcontainer/docker-remote-compose.yml` file to volume mount the remote file location instead of the local one. (You can do this via VS Code over the share for something this simple.)

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    # Paths must be absolute
    volumes:
      - /home/your-user-here/your-cloned-repo-path-here:/workspace
      - /home/your-user-here/.gitconfig:/root/.gitconfig
    command: sleep infinity
```
Update `.devcontainer/devcontainer.json` to point to the yaml file:

```json
{
    "dockerComposeFile": "docker-remote-compose.yml",
    "service": "web",
    "workspaceFolder": "/workspace"
}
```
At this point you can edit and build your files in a performant environment on the remote host, or access them locally as needed (but with much poorer performance - which is why the SSH extension exists).
@chrmarti @egamma We could doc this as an advanced scenario along with the trick for connecting to two separate containers from here.
I should also mention that the reverse of what I mentioned is kind of what Azure Dev Spaces does for Kubernetes: edit source locally, sync and build on the remote. There are all sorts of nuances you can run into when syncing code -- things break if anything gets out of sync, and there are timing issues that are hard to get right -- which is why it's best to avoid it entirely if you can. Reversing the flow eliminates those problems. That said, the Dev Spaces team spent a ton of time getting that approach working well for their scenario, if that's what you are looking to do.
I think the mounting / synching of the remote filesystem is not strictly necessary and actually assumes that you have an account on (or some amount of control of) the SSH machine, which might not be true.
I would like to think about how we can get at the configuration in the devcontainer.json without the mounting / syncing. A few ideas (of increasing complexity) come to mind:
The user would checkout the source inside the container, similarly to the Remote SSH case. This avoids the complexity and performance implications mounting / synching comes with.
@chrmarti Yep - totally. This is more of a statement of what is possible now without modifications, but we'll want to be able to open a container on a remote SSH host when the source is on the remote SSH host as well. I know @chrisdias heard about this variation as well at //build. (Not to discount the others -- this is just an addition to that set.)
@chrmarti One important note - if you check out the code into the container, the container has to be able to be rebuilt, so the code would need to be in a separate volume mount that survives a rebuild. The SSH file mount in the scenario above is a variation on the same theme -- it's just a local bind mount instead of a volume mount.
We'd likely want to write down the current state of these variations to see how painful they are: just attach; a single dev container with a volume on the remote; a container-based project. A single dev container can be done with just local files. It's the container-based project, where you may have a source tree with multiple containers in it, that gets difficult (and that the SSHFS addition would allow). Just allowing a DOCKER_HOST to be specified may be enough for the first two, and really the single-container case would largely work now if we allowed you to turn off the default local mount in devcontainer.json (you could use "-v" in runArgs for a named volume or bind mount).
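As a rough sketch of that last idea (this assumes a way to disable the default local mount, which doesn't exist yet; the image and volume names below are placeholders):

```json
{
    "image": "node:lts",
    "runArgs": [ "-v", "my-source-volume:/workspace" ],
    "workspaceFolder": "/workspace"
}
```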
@chrmarti I started documenting what you can do without changes in this vscode-docs branch. We'll likely need to pull the advanced section into a separate doc, but I'm getting the content in while we discuss restructuring.
Obviously there's features we could add here to make this better, but helping people understand what is possible now is also useful.
Here's a proposal for what we could do here to make this better. Rather than inventing a new configuration concept to support this kind of thing, we can allow people to use local devcontainer.json files as a way to define or connect to their remote environments. (Attach would also work -- but that works now.)
For image/dockerFile scenarios, there's a few enabling settings that would get added:
- A `workspaceMount` property (or something like that) that allows you to override VS Code's default mounting parameters. We can just support the value of the CLI mount argument (so you could pass `"type=volume,src=my-source-code-volume,dst=/workspace,volume-driver=local"`).
- A `workspaceFolder` property so that you can pick a folder in the named volume you mount, since you may have multiple repos in it.

For all scenarios:
- Respect the `"docker.host"` settings.json property from the Docker extension (so you can set this at a workspace level).

As an advanced feature, we can then:
- Add `"remote.containers.tunnel.host"` and `"remote.containers.tunnel.port"` as user or workspace settings.json properties (with the host property supporting `user@hostname`). Like the SSH extension, complex configurations would be accomplished via SSH config files rather than individual properties.
- Spin up `ssh -NL localhost:12345:/var/run/docker.sock [email protected]` and override the `docker.host` setting as appropriate. We should spin up the command in a terminal window so we can get password and other input.

We could probably add an SSHFS mount or rsync feature over time, since we'd have all the information necessary to set it up, but I think that's a separate topic. I've got the command-line flavor of these documented already for Mac, Linux, and Windows.
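To make that concrete, a user or workspace settings.json under this proposal might look like the following (the `remote.containers.tunnel.*` properties are only proposed here -- they don't exist yet):

```json
{
    "docker.host": "localhost:12345",
    "remote.containers.tunnel.host": "[email protected]",
    "remote.containers.tunnel.port": 12345
}
```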
@Chuxel This looks good to me. I think it is the most natural way to provide the configuration options - and is how I would expect it to work. Especially respecting the docker.host parameter.
We will need settings for DOCKER_HOST ("docker.host") and DOCKER_CERT_PATH (setting TBD) and maybe DOCKER_TLS_VERIFY (setting TBD, ideally this would never have to be turned off). That will allow for secure remote connections without SSH or other tunneling technique.
I would start with that and then see if we need additional support for SSH, the user can always set up their own tunnels and there are many different types of SSH configurations making it hard to support all of them (as we learn with Remote SSH).
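For reference, the equivalent of those settings today is the Docker client's standard environment variables, set before launching VS Code from the command line (the host and cert path below are placeholders):

```shell
export DOCKER_HOST=tcp://your-remote-host:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/certs
code-insiders .
```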
@chrmarti Yep, that was my thinking as well which is why I called that part "advanced". The biggest thing really is the workspaceMount and workspaceFolder. Forwarding via SSH is something I've already documented and technically you can start code insiders from the command line to get the env vars set -- but for convenience purposes we should add those in. It also allows you to vary the remote machine by workspace which is useful. We could potentially break this feature request into multiple with that in mind. There's already a feature request for workspaceFolder actually (#101) and the workspaceMount property would help with #41 among others.
BTW - Any idea why the Docker extension didn't include cert and TLS properties? Certainly if you use the SSH trick they aren't needed, but it's interesting they weren't included. I mentioned them in the docs around the non-SSH path - it was just interesting that they were missing from the Docker extension.
Folks, as of 0.54, you can now use workspaceMount and workspaceFolder in the non-compose case as follows:
```json
"workspaceFolder": "/remote-workspace",
"workspaceMount": "src=remote-workspace,dst=/remote-workspace,type=volume,volume-driver=local"
```
This will use a volume instead of a bind mount and you can then clone your source code into it -- this will work in a wide variety of situations.
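For example, once the container is up with that volume mount, cloning into it from the container's terminal could look like this (the repo URL is a placeholder):

```shell
# Run inside the dev container; the volume starts out empty.
git clone https://github.com/your-org/your-repo /remote-workspace
```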
However, if you own the actual host you are connecting to, you can also specify the absolute path on the remote machine that contains your source code using a bind mount. An absolute path is needed since VS Code has no idea where the source code is located remotely.
For example:
```json
"workspaceFolder": "/remote-workspace",
"workspaceMount": "src=/home/youruser/path/to/repo,dst=/remote-workspace,type=bind"
```
We'll be updating our docs to reflect this change since it makes the dockerFile case significantly easier to set up.
This looks great @Chuxel! Could `src` in your 2nd example also be something like `"."` for the current directory, or `".."` relative to the location of the devcontainer.json file? I tried this using 0.54 but the container would not start up.
One more question: You don't need to use workspaceFolder if you already use workspaceMount as in your 2nd example correct?
Relative paths are not supported by Docker. #442 is likely tracking what you need here.
The equivalent of PWD could work well
> One more question: You don't need to use workspaceFolder if you already use workspaceMount as in your 2nd example correct?
@vnijs The example above actually works now so you can try it out. The reason you specify the workspaceFolder is that VS Code's automatic logic expects the workspace to be mounted in a particular location. To ensure .git support works, it will actually mount the root of your source tree even if you select a sub folder. This can lead to your workspaceFolder being /workspaces/some/sub/folder/under/your/repo. Setting it explicitly to the location you mounted avoids that problem entirely.
I'm trying to test this out but keep getting hung up on VS Code not picking up changes to the devcontainer.json file. Is there a way to turn off caching?
Specifically, the below seemed to be working ...
```json
{
    "name": "remote-test",
    "image": "jupyter/scipy-notebook",
    "workspaceFolder": "/home/jovyan/remote",
    "workspaceMount": "src=${env:PWD}${env:CD},dst=/home/jovyan/remote,type=bind",
    "extensions": [
        "ms-python.python"
    ]
}
```
But when I try to change the image to use, that change isn't picked up.
```json
{
    "name": "remote-test",
    "image": "vnijs/rsm-msba",
    "workspaceFolder": "/home/jovyan/remote",
    "workspaceMount": "src=${env:PWD}${env:CD},dst=/home/jovyan/remote,type=bind",
    "extensions": [
        "ms-python.python"
    ]
}
```
@vnijs Are you aware of the Remote-Containers: Rebuild Container command? That's the one to run to get changes picked up. Currently it is not done automatically since the entire container rebuilds -- and any content in the container itself would be gone. (It's not actually caching so much as not deleting and rebuilding unless you tell it to.)
Got it. Thanks @Chuxel. So then it seems that `src=${env:PWD}${env:CD}` does not work (yet). VS Code does know, however, what the local folder is, as shown in the docker run command (i.e. `vsch.local.folder=/Users/vnijs/remote`). Is there a way to set `src` to `vsch.local.folder`?
@vnijs To be clear, for remote, you will need to use an absolute path since you're pointing to the remote filesystem not the local one. PWD/CD is for the use case where you are trying to override the local mount which is not really what this issue is about. That's #177 and #41 - so this is a bit off topic. #442 is an enhancement request. You could set a local env var, but we do not have a specific workaround at the moment.
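One way to approximate the local-env-var idea mentioned above would be a variable of your own choosing, set in the shell that launches VS Code so `${env:...}` substitution can pick it up (the variable name and path here are made up):

```shell
# REMOTE_REPO is a placeholder name; set it before launching VS Code.
export REMOTE_REPO=/home/youruser/path/to/repo
code-insiders .
```

The devcontainer.json would then reference it, e.g. `"workspaceMount": "src=${env:REMOTE_REPO},dst=/remote-workspace,type=bind"`.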
I'm adding support for the "docker.host" setting from the Docker extension. This should allow us to test the basic scenario. I'll close this issue as I believe we have others open that cover what is being discussed. Thanks.
(Not sure why DOCKER_CERT_PATH and DOCKER_TLS_VERIFY are not supported by the Docker extension's settings, but I don't want to add that here. If someone needs that, please file feature requests for the Remote Containers and the Docker extensions.)