code-server seems particularly close to being able to run for a team of engineers on a single Kubernetes cluster, each engineer with their own container and persistent data store. That would be incredibly efficient, secure, and highly available.
I don't think server-side collaboration is necessary; that's what GitHub is for. I'd prefer each engineer be sandboxed, the key utility of code-server being that it's in-browser and consistent.
A generic OAuth implementation as described in other feature requests might work agnostic of cloud providers... and be a first step.
But I'd suggest a well-formed Kubernetes deployment with Google Identity-Aware Proxy in front of it would be epic. It brings a host of benefits, not least Google's zero-trust corp security.
IAP is easy to attach to a GCP load balancer, and AFAICT the server would just need to understand the identity asserted in the headers and route to the appropriate container.
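For illustration, a front proxy could key off that asserted identity to pick a backend. A minimal nginx sketch, assuming one upstream service per user (the `workspace-*` names and ports are hypothetical, not anything code-server provides); IAP really does set the `X-Goog-Authenticated-User-Email` header in the `accounts.google.com:user@example.com` form:

```nginx
# Sketch only: route on the identity header Google IAP asserts.
# Upstream names and the resolver address are assumptions for this example.
map $http_x_goog_authenticated_user_email $workspace {
    "accounts.google.com:alice@example.com" workspace-alice;
    "accounts.google.com:bob@example.com"   workspace-bob;
    default                                 "";
}

server {
    listen 80;

    location / {
        if ($workspace = "") { return 403; }   # no mapped identity, reject

        resolver 10.0.0.10 valid=30s;          # cluster DNS; adjust for your setup
        proxy_pass http://$workspace:8080;

        # code-server needs websocket upgrades passed through
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Host $host;
    }
}
```

Note that IAP's headers should only be trusted when traffic can't bypass the load balancer; Google also provides a signed `X-Goog-IAP-JWT-Assertion` header for verification.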
Thoughts? How would I go about resourcing that?
This would really be an awesome feature. I assume it would let you monitor connections to the server, making it easy to build an 'auto shutdown' feature for the server.
It's unlikely that we'll implement support for Google IAP in code-server. It should be relatively easy to write your own proxy that can handle this.
Yes, and equally I'd happily have the Kube cluster running at all times so startup was instant.
Yes, the best implementation would be a proxy, and we're going to test that theory next week, but respecting the passed-through header (email address and user ID) would be useful for attaching persistent disk claims, keys, and a bunch of other stuff, right?
Hey @asomervell any update on your experiments?
code-server doesn't have any functionality which lets you attach disks/keys, it just provides full access to the computer/container it's running on. If you wanted to programmatically attach disks and keys, you'd have to make your proxy work that out and send the relevant commands to the kube cluster.
One container per person in a docker swarm...
This is already being done to isolate students in individual software development environments for 200+ students in a university web development course.
Each student gets their own single-container docker service (php:7.3-apache) with code-server installed and running on port 8443 (apache runs on port 80). An nginx server connects the student's web browser (via a wildcard DNS entry) to that student's individual service.
Code-server itself runs as www-data, in the container, with /var/www as a working home mount.
Currently, special remote SSH commands are used to allow students to start/stop their containers and get the code-server password (via "docker service logs"). This will eventually become a web-based interface, but for now SSH login provides all the authentication and authorization to the system.
SSH is also used to provide file transfer to/from their working directory, though it is not used for CLI access (that is provided by code-server). Git is also installed inside the container so students can use a Git repository as an alternative file-transfer method for their project work.
Basically it can be done, and is being done, and was put together in under a month by one person (me).
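A per-student swarm service along those lines might be created like this. This is a sketch, not the poster's actual commands: the image, network, and volume names are assumptions, and the image is presumed to be php:7.3-apache with code-server baked in.

```shell
#!/bin/sh
# Sketch: one docker swarm service per student, with a persistent
# volume mounted as the working home at /var/www.
STUDENT="alice"

docker service create \
  --name "student-${STUDENT}" \
  --network students-net \
  --mount "type=volume,source=home-${STUDENT},target=/var/www" \
  webdev/php-code-server:7.3   # hypothetical image: php:7.3-apache + code-server
```

With each service on the shared `students-net` network, the front nginx can reach `student-alice:8443` by name without publishing any ports to the host.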
To provide another perspective, I'm operating my own dev environment using Cloudflare Access (for user auth), Cloudflare Argo (such that the backend instances aren't exposed to the internet), GCP Compute with Microk8s installed. I have a small portal that allows me to define container templates/projects (that use a template) that brings along with it disk configurations. I have a set of container images at https://github.com/davefinster/coder that I use for various languages.
At the moment, the portal allows me to manually start/stop my instances and I assign a project to one instance at a time. This configures DNS records such as
Ideally one day I'd like to somehow monitor the websocket connectivity, automatically idle out the machine, and get compute costs to $0 when not in use.
One aspect I don't have a good answer for is interacting with private Git credentials given the remote nature and trying to stay away from SSH and having credentials stored on the server.
I'll pin this one so people have an idea of what to do. I recommend everyone pool their ideas and document them in doc/ as well. It would really help everyone aiming for a multi-tenant architecture to provide multi-user code-server.
@davefinster Ideally one day I'd like to somehow monitor the websocket connectivity, automatically idle out the machine, and get compute costs to $0 when not in use.
In code-server v1 I have been doing idle testing by checking the timestamp of one of the log files (file metadata, not its contents). The file is the latest file (sorted by name, or by date) with this name:
$HOME/.cache/code-server/logs/[0-9]*/sharedprocess.log
This file was updated every 5 minutes by code-server while the user had it open in the browser. Once the browser was closed the file was no longer updated, and an hour later I automatically shut down that user's docker environment. As it is a file, I could do the test outside docker, or even from a different node to the one where the docker container is running. Easy-peasy...
I have not found a similarly easy solution for this in code-server v2, but have an active issue logged for it: https://github.com/cdr/code-server/issues/1050
Update: a heartbeat file has been added, so idle checking is now just a matter of checking when a file was last updated once code-server has started.
https://github.com/cdr/code-server/pull/1115
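With the heartbeat file in place, the idle check reduces to a timestamp test. A minimal sketch; the heartbeat path below matches recent code-server defaults (verify for your version), and the service name and one-hour window are assumptions:

```shell
#!/bin/sh
# Sketch: report "active" if the code-server heartbeat file was touched
# within the last 60 minutes, otherwise "idle" and (optionally) stop the
# user's container. The service name is hypothetical.
HEARTBEAT="${HOME}/.local/share/code-server/heartbeat"
SERVICE="student-alice"

if [ -n "$(find "$HEARTBEAT" -mmin -60 2>/dev/null)" ]; then
    echo "active"
else
    echo "idle"
    # docker service scale "${SERVICE}=0"   # uncomment to actually shut down
fi
```

As with the v1 log-file trick, this only reads file metadata, so it can run outside the container or from another node that shares the volume.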
Hey folks,
I was interested in doing this a few months ago, so I implemented a hacky solution which basically just handles authentication through GitHub, starts containers on demand, and forwards requests (it is independent of the underlying container, so Theia also works). I haven't worked on it for a while, so it doesn't work with the current version of code-server, but I would be interested in working on this issue from scratch to make sure that performance is optimal and all desired features are covered. To avoid duplicate work, would anybody else who has something like this working like to contribute to a public solution?
@rafket Hey, I started to work on a similar solution a couple months ago named multiverse, but I've since archived it because I thought things got a bit too messy. I got as far as having username/password authentication, with a reverse proxy (traefik) to lock paths. The entire plan was to have it kinda template based so dev teams could use the same _template_ for consistency, I'd love to collaborate on a new solution however.
I am trying something with traefik + docker and some magic with labels to match https://github.com/dexidp/dex with https://github.com/mesosphere/traefik-forward-auth for authentication. The big issue I have right now is that for some reason the websockets keep getting lost (#1161), but it might be traefik's fault (https://github.com/containous/traefik/issues/5533). I have something written with node-proxy and passport, but I would rather use traefik in the end.
@geiseri I have an ldap/traefik-forward-auth up and running but need help with the multi-tenancy part. how are you attaching users to separate drives?
@sr229 I have built a basic multi-tenant solution with traefik, authelia, openldap, and a small starlette server I wrote to manage the spin-up of user containers. Would this interest anyone, and would I infringe on any licenses by posting a gist of my solution? It should also take care of auto SSL with Let's Encrypt.
Do you have traefik on the same server as code-server? I have them on different servers.
This is going to be discussed in detail in the FAQ I'm writing. Thank you all for your comments.
@nhooyr Would you post a link to the FAQ?
@dclong Being written right now. I'll post here once done. Tracking issue is https://github.com/cdr/code-server/issues/1333
This sounds really interesting and similar to what I'm trying to achieve. Would you mind sharing your nginx.conf that's doing the wildcard proxying to the docker containers?
Sure...
Here it is... sanitised with the domain replaced.
It is run in a docker container, with a wildcard domain. Without a username, it serves the top-level website. With a username in that domain, it proxies requests on the given ports to that user's service via a docker network.
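The behaviour described can be sketched as an nginx server block. This is an illustration, not the sanitised original: the domain, port, and service-naming scheme are assumptions.

```nginx
# Sketch: the username subdomain selects which student service to proxy to.
server {
    listen 8443 ssl;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity
    server_name ~^(?<student>\w+)\.docker\.example\.com$;

    location / {
        resolver 127.0.0.11;                     # docker's embedded DNS
        proxy_pass http://student-$student:8443; # hypothetical service name

        proxy_set_header Host $host;
        # websocket upgrade headers required by code-server
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
    }
}
```

Because `proxy_pass` uses a variable, nginx resolves the service name at request time, so student containers can come and go without reloading nginx.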
It was designed with version 1 of code-server and still works fine with version 3.4.
Yes each of the docker environments has an apache as well as a code-server.
But we don't have a fancy interface for starting the containers. At the moment we use an ssh account with a 'fake' shell that lets them use specific remote ssh commands to start/stop the user's container.
EG: ssh [email protected] start
That starts the container, waits till it can see code-server running, then reports the URL (based on their username), and the randomly generated password...
MOTD... Legal Mumbo Jumbo... Code of Practice...
-------------------------------------------
Please wait a moment while I start your development container...
For a complete list of controls, "ssh" the command 'help'.
ssh [email protected] help
Waiting for code-server to appear online...
Code-server is running at the URL...
https://username.docker.example.com:8443/
Using the password: caum+Karyl+loonier+scarps
Connection to docker.example.com closed.
We are working on a web interface to start/stop user containers instead.
That's awesome, @antofthy! Thanks so much for sharing. I'm sure this will be helpful for others.