The original issue #20 was closed (as part of the April release), but it only implemented dev container support through WSL. The SSH part of the feature request is still unsupported as far as I could tell.
Feature request
When connected through SSH in a remote development session, the host's Docker containers should be discoverable and VS Code should be able to use its Docker daemon for container remote development with devcontainer.json.
There are currently ways, described in the docs, to make attaching to containers work:
These do work to a point, and I am able to attach to a container through SSH. However, the devcontainer.json part does not work inside a Remote-SSH session. Visual Studio Code does not pick up the file and does not suggest reopening the session in a container like it does locally. Most Remote-Container options are not available in the Command Palette either.
As far as I can tell, you can get this working by skipping the Remote-SSH session and configuring the Docker daemon to connect through SSH as described in the docs. However, this requires maintaining a separate devcontainer.json and keeping two source trees (local and remote), which is far from ideal.
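For reference, the "skip Remote-SSH" setup mentioned above can be sketched roughly like this in the local settings.json (a minimal sketch; the user and host names are placeholders, and this assumes key-based SSH auth and a recent Docker CLI that accepts ssh:// hosts):

```json
{
  // Point the Docker/Containers extensions at the remote daemon over SSH.
  // "my_user" and "my-server" are placeholders for your own SSH login.
  "docker.host": "ssh://my_user@my-server"
}
```

With this in place, "Reopen in Container" builds and runs the container on the remote daemon, but the workspace files still need to exist locally, which is exactly the duplication complained about above.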
It's possible to configure the ms-vscode-remote.remote-containers extension to run as a workspace extension through remote.extensionKind, which is closer to what I would like. But that does not work either because, as far as I can tell, when reopening a session in a container, VS Code is unaware that the container was running behind Remote-SSH.
I think I have the exact same situation:
So from my local computers, I would like to Remote-SSH into the Linux server (as I don't have / don't want the files locally), and from there be able to Remote-Container/Open the project with the remote Docker client / remote Docker daemon (so I shouldn't even need the local Docker client).
At this time, as far as I can tell, this does not work (with 1.45.1).
Maybe it's a corner case, but in my situation:
@SR-G
At this time, as far as I can tell, this does not work (with 1.45.1).
Maybe it's a corner case, but in my situation:
- it's way better to use a remote Docker daemon (fast server with plenty of RAM versus a slow desktop/laptop)
- it's way better to have my workspaces in a shared location on my remote server, allowing me to retrieve the same workspaces from several computers acting as clients.
I think I have a similar situation and I was using remote.extensionKind as a workaround.
I added the following setting to the remote settings.json:
{
"remote.extensionKind": {
"ms-azuretools.vscode-docker": [ "ui" ],
},
"docker.host":"tcp://<remote_ip>:2375"
}
In my case, I always open a window for the remote first, then attach to the container, and keep the remote window open in the background.
Well, this might be typical for Windows because it's MS 🤣 no offense.
I have a "proxyjump" server in the middle and a CIFS filesystem on the remote mounted into the container (might need some other workaround), and they all work well for me.
@szlend had the same solution I think, and as mentioned, the concern is that it is really just a workaround and not actually supported by the extension. Some features of the extension are not usable in this case.
Our setup, in short, is as follows: our Docker daemon is running on a Linux server, we establish a tunnel to it using ssh server_name -L localhost:2375:/var/run/docker.sock -N, and then use the .devcontainer.json approach to create and launch the Docker container on the server and attach to it with VSCode installed on a Windows laptop. The Docker images are built on that server (using Jenkins).
However, we still have the following issue:
The issue is that we then also need to start the container as the root user, which causes file permission issues on volume mounts. For example, all newly created files are owned by root, not 'my_user'...
If we could specify a specific user to execute the postCreateCommand, then we could run the postCreateCommand as 'root'. If we could then start the container as 'custom_user', which by then would already have the correct UID & GID, we wouldn't have the file permission issues on the volume mounts anymore.
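For context, a minimal .devcontainer.json along these lines might look like the sketch below (the image and user names are illustrative). If I understand the current behavior correctly, the problem is that the UID/GID remapping in postCreateCommand needs root, while the container is started as the non-root user:

```json
{
  "image": "my-base-image:latest",
  "remoteUser": "custom_user",
  // Would need to run as root to succeed, which is exactly
  // what cannot currently be expressed separately.
  "postCreateCommand": "usermod -u 2000 custom_user && groupmod -g 2000 custom_user"
}
```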
Could this functionality please be added? See also here for a similar request.
Or is there something we can try using the existing functionality? I noticed that the postStartCommand and postAttachCommand commands were added recently, but I haven't yet found a way to solve this issue with those new commands either.
@diricxbart Have you considered updating the UID & GID by building an image from the original image with UID 2000? We do that when the local machine is Linux; unfortunately that feature cannot be configured to run when the local machine is Windows (something to think about).
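The suggestion above (rebuilding the image with the desired UID/GID) can be sketched as a small derived Dockerfile; the base image name, user name, and IDs below are placeholders to adapt:

```dockerfile
# Derive from the original image and remap the existing user's UID/GID
# so files created on volume mounts end up owned by the expected IDs.
FROM my-base-image:latest
ARG USERNAME=custom_user
ARG USER_UID=2000
ARG USER_GID=2000
# Changing another user's IDs must be done as root.
USER root
RUN groupmod --gid $USER_GID $USERNAME \
    && usermod --uid $USER_UID --gid $USER_GID $USERNAME \
    && chown -R $USER_UID:$USER_GID /home/$USERNAME
# Switch back so the container runs as the remapped user.
USER $USERNAME
```

Building this once per project (or referencing it from "dockerFile" in devcontainer.json) avoids having to run anything as root at postCreateCommand time.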
@chrmarti My local machine is Windows... What I did in the past was launch the Docker container on the Linux server and add the user there, then attach VSCode to the running container. But this is a bit of a cumbersome workflow.
Ideally I would just open the .devcontainer.json-based environment in VSCode from my Windows machine, it would launch the Docker container on the Linux server, and it would work as if everything were installed locally. My current experience is nearly that, apart from the fact that my file permissions are messed up all the time...
I really don't mind adding more complexity to the Docker container and/or the .devcontainer.json if needed. This is a one-time effort, reusable across different projects.
To me this postCreateCommand is a nearly perfect solution: it allows modifying the 'base' Docker container. One of the use cases here is changing the user's UID and GID. (We also use this to create derived Docker containers from a 'base' Docker image, each with different Python packages installed.) So we use this to perform some tweaks/changes on our 'base' Docker image. The downside is that changing the UID and GID has to be done as root (you can't have 'custom_user' change its own UID and GID, even with sudo; that again screws up the file permissions).
So to me it would be perfect if I could run the postCreateCommand as the root user (via a "postCreateUser" setting?) and specify, for example, 'custom_user' via the "remoteUser" setting...
@chrmarti How do you suggest we continue here? What do you think about adding a separate "postCreateUser" setting (next to the existing "remoteUser" setting)? Should I create a separate issue for this, or is it OK to track it in this one?
@diricxbart A separate issue would be great. This issue will address the problem in a more general way.
@guoquan
I have a "proxyjump" server in the middle and a CIFS filesystem on the remote mounted into the container (might need some other workaround), and they all work well for me.
Can you elaborate on how you are able to connect to a container on a remote host via a proxyjump? I have an identical setup and cannot figure out how to incorporate the proxy jump information into the 'docker.host' setting.
Hi @theasianpianist,
Basically, I set my Docker extension to run locally, point docker.host to a local socket in the server-level configuration, and set up a default LocalForward in my SSH connection from the remote Docker socket to that local socket.
Concretely, in server-level configuration, I added the following section:
{
"remote.extensionKind": {
"ms-azuretools.vscode-docker": [ "ui" ],
},
"docker.host": "unix:///tmp/foo_bar_.sock",
}
And in my local ~/.ssh/config, I added
Host myproxy
    User my_proxy_user

Host foo
    HostName foo.bar.com
    User my_remote_user
    ProxyJump myproxy
    LocalForward /tmp/foo_bar_.sock /var/run/docker.sock
    StreamLocalBindUnlink yes
In this way, the configuration is valid only for this server, and all the sockets are set up automatically when you connect to the foo server in VSCode. You can even configure multiple servers at once.