code-server version: code-server1.939-vsc1.33.1-linux-x64.tar.gz

When using the binary download in a separately built Docker environment, I get a blank web page. I am sure something is missing in the environment that the binary needs, perhaps access to other files from the source image, or some library, but there is no error report that helps determine the problem, and no 'requirements' are documented for the binary.
On the other hand, if I use the Docker image they provide as a straight docker run image, it works fine. But I need to be able to run multiple copies of code-server in a docker swarm (separate users for a class environment), which means running it as a 'service' in a swarm. That is why I am trying to create my own Docker image.
In summary: the binary runs in a swarm but gives a blank page, and the pre-built Docker image does not run as a swarm service.
Example...
Start a php:7.3-apache service in a docker swarm.
Run a BASH command line into that docker container (not as root) to download, install, and run the latest binary code-server in the background:
# grab the latest release tarball from GitHub and unpack it
curl -s -L https://github.com/codercom/code-server/releases/latest |
grep download | grep tar.gz |
sed '2q; s%.*href="%https://github.com%; s/".*//' |
xargs curl -s -L -o - | tar zxvf -
# give the versioned directory a fixed name and start code-server in the background
mv code-server*/ code-server
chmod +x code-server/code-server
code-server/code-server -d ~/code-server ~/html --no-auth --allow-html -p 8080 &
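For reference, this is roughly how I create the service and get the shell into it (the service name and network are placeholders for my actual setup):
# create the swarm service (name/network are placeholders)
docker service create --name class-user1 --network class-net php:7.3-apache
# find the container on whichever node it landed on, then open a shell as the apache user
docker ps --filter name=class-user1 --format '{{.ID}}'
docker exec -it --user www-data <container-id> bash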
At this point I can connect to the code-server web page on port 8080, via the swarm ingress-nginx proxy (which currently allows HTTP on ports 80 and 8080, and HTTPS on 8443). But I only see a blank page.
The same happens if I remove the options --allow-html -p 8080 and connect via HTTPS on port 8443.
The blank page is why I theorize the provided binary requires something more that is neither documented nor provided in the binary's tar file.
I also see the following error from the command-line connection to the php:7.3-apache docker service:
Error: ENOENT: no such file or directory, open '/home/travis/build/codercom/code-server/packages/server/build/web/index.html'
This is presumably the path to the build directory of the provided binary; it is not present in the binary's tar file, and no option I have tried affects it.
I have seen a similar error in the provided docker image...
Error: ENOENT: no such file or directory, open '/src/packages/server/build/web/index.html'
which does match the build directory used in the 'Dockerfile' that built that docker image (and that directory is still present in the image).
It is very frustrating.
Addendum... Looking at the processes, running the code-server binary launches another code-server sub-process with a source directory path as its first argument, along with 'development' command-line options. Again, no option seems to affect or remove this non-existent, hard-coded path.
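To see this yourself, list the processes from inside the container (just generic process inspection, nothing code-server specific):
# the child process is launched with the build-time source path as its first argument
# (install procps first if ps is not present in the image)
ps -ef | grep [c]ode-server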
I'm also getting the `Error: ENOENT: no such file or directory, open '/src/packages/server/build/web/index.html'` when I run the provided docker image on an Ubuntu 18.04 server, but not when I run it on my Arch dev machine ...
I'm seeing the same behavior as well when running on Ubuntu 18.04. It only happens when I make a curl call, though; when viewing with a regular browser, everything seems fine.
@Juggels Good observation. Happening to me, too.
On my dev machine, the browser can load localhost:8443 just fine, but curl localhost:8443 gets the `Error: ENOENT: no such file or directory, open '/src/packages/server/build/web/index.html'` error.
This is an odd error message. If I attach to the docker container with docker exec -it --user root <container> bash I don't even see a /src folder:
ls /
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Yet somehow it works in the browser!
The error message is very odd and very confusing, and probably not related to the blank page problem.
However, it seems to be the directory in which the binary was built: the path matches the build directory used in a docker image generated from the provided Dockerfile. For some unknown reason the build directory path gets 'hard-coded' into the final binary.
The blank page, however, makes me think we are missing prerequisite software that normally comes from the 'Node' or 'Ubuntu' docker images. There is currently no list of what software and libraries the program requires! The lack of any error message for the blank page is what makes debugging the issue basically impossible.
Not knowing what prerequisites are needed makes using the binary in some other docker environment (e.g. "php:7.3-apache" for a multi-user class, docker swarm service) very difficult.
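One blunt check for missing native libraries (generic debugging, not anything documented by code-server) is to run ldd against the unpacked binary; anything reported as "not found" is a missing prerequisite in the image:
# list the shared-library dependencies of the binary
ldd code-server/code-server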
In summary:
What does the binary program require to run properly?
And why does it need the source directory hard-coded, when it does not appear to require it to run?
Same here. EC2 on AWS.
I'm able to reproduce it when running behind a reverse proxy as well - running the image (in docker) I can connect if I go directly to the exposed port, but when accessing it at a context path (abc.com/editor/) I get:
codeserver_1 | Error: ENOENT: no such file or directory, open '/src/packages/server/build/web/code-server/index.html'
EDIT: It was a bit of a pain to get set up right - mostly on the DNS side - but I did find that running at a subdomain works fine - so 'editor.abc.com' works, which means it shouldn't be related to any of the network hops (router, reverse proxy, etc.).
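Roughly what I was comparing (hostnames are placeholders, as above):
# mounted at a context path through the proxy -> the ENOENT error above
curl -i https://abc.com/editor/
# same container on its own subdomain -> loads fine
curl -i https://editor.abc.com/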
I got it working behind a reverse proxy mounted on a subpath. Docker 18.04.
@trowj are you rewriting the URL to remove the path that you're mounting code-server on? I ask because I don't think the code-server segment in the error message should be there.
@Juggels @justinmoon Just a hunch, but maybe try adding a / to the url in the curl command.
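i.e. something like this (the -k just skips certificate verification for the self-signed cert):
curl -k https://localhost:8443/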
Having the same problem. I just see a blank page when running remotely and connecting through an SSH reverse proxy.
I got it working behind a reverse proxy mounted on a subpath. Docker 18.04.
@t-d-d Can you give us the nginx configuration for the reverse proxy?
@antofthy I'm actually using a kubernetes ingress controller, so it generates the nginx config, but here's the relevant section of the nginx.conf file. It's served from my.fqdn.net/vscode/thomas/
nginx.conf.txt
I have it working (now with HTTPS, no options needed when run) after adding a lot of entries from the provided nginx.conf file. Many of the entries used variables that were not defined, but after removing those I got a non-blank page. I do not know exactly what was 'missing' from the nginx config I was originally using.
user nginx;
worker_processes 1;

error_log stderr warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /proc/self/fd/1 main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    server {
        # Proxy web requests into the appropriate swarm service
        listen 80;
        listen 8080;
        listen 8443 ssl;
        server_name ~^(?<name>\w+)\.server.domain$;

        ssl_certificate /etc/ssl/certs/local.crt;
        ssl_certificate_key /etc/ssl/certs/local.key;

        location / {
            # Use the swarm dns to resolve into the cluster
            resolver 127.0.0.11 ipv6=off valid=1m;
            proxy_pass $scheme://$name:$server_port;
            proxy_set_header Host $host:$server_port;

            # Don't be so strict about internal certificate checking
            proxy_ssl_verify off;

            proxy_redirect off;
            set $proxy_upstream_name "-";
            port_in_redirect off;

            # Allow websocket connections
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";

            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;

            proxy_buffering off;
            proxy_buffer_size 4k;
            proxy_buffers 4 4k;
            proxy_request_buffering on;
            proxy_http_version 1.1;
            proxy_cookie_domain off;
            proxy_cookie_path off;
        }
    }
}
The above was updated to also allow it to work with HTTPS (see below)
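If you adapt this config, remember to validate and reload nginx afterwards (standard nginx commands, nothing specific to code-server):
# check the syntax, then reload without restarting the container
nginx -t
nginx -s reload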
Yes... HTTPS is now working on my system..
Just as an FYI, this was the error the proxy was showing due to the problem (domain and IPs hidden):
2019/05/14 06:15:11 [error] 6#6: *3 SSL_do_handshake() failed (SSL: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:SSL alert number 40) while SSL handshaking to upstream, client: 192.168.1.2, server: ~^(?<name>\w+)\.server\.domain$, request: "GET / HTTP/1.1", upstream: "https://10.0.13.5:8443/", host: "container.server.domain:8443"
192.168.1.2 - - [14/May/2019:06:15:11 +0000] "GET / HTTP/1.1" 502 158 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0" "-"
And the proxy returns a "502 Bad Gateway" Error
The above errors however did not help in tracking down the problem.
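In hindsight, probing the upstream's TLS directly would have pointed at it sooner; a generic check like this (using the upstream address from the log above) shows whether the backend can complete a handshake at all:
openssl s_client -connect 10.0.13.5:8443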
What did help was trying to get HTTPS running on the apache server in the container!
The problem... code-server requires the ssl-cert package to be installed!
It was not installed previously, as apache was only running over HTTP (port 80).
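For reference, in a Debian-based image such as php:7.3-apache that is just (run as root inside the container):
# ssl-cert provides the 'snakeoil' self-signed certificate and the ssl-cert group
apt-get update && apt-get install -y ssl-cert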
Now running "code-server" no arguments works (port 8443 via proxy as per above).
However... the proxy is still not correct: whenever "code-server" redirects a web page,
it removes the port from the redirected URL. This was not an issue with the "--no-auth" option,
as that does not require any redirection to the login page.
For example...
curl -k https://container.server.domain:8443
Found. Redirecting to https://container.server.domain/login
and the port is dropped again after providing the password, when it redirects to the main UI.
Something is still missing.
Well, that last problem was a quick fix; everything is now working as it should...
Change the line in the nginx.conf I posted above from
proxy_set_header Host $host;
to
proxy_set_header Host $host:$server_port;
I will also change it in the above...
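With that change the earlier curl test keeps the port in the redirect, i.e. I now get something along the lines of:
curl -k https://container.server.domain:8443
# Found. Redirecting to https://container.server.domain:8443/login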
The proxy configuration, WITH the installation of ssl-cert resolved all issues!
That package requirement probably should be noted in the main readme!
I got it working behind a reverse proxy mounted on a subpath. Docker 18.04.
@trowj are you rewriting the URL to remove the path that you're mounting code-server on? I ask because I don't think the code-server segment in the error message should be there.
@t-d-d I honestly don't recall fully - I think I was dropping the path prefix, but perhaps not - I swapped over to the subdomain method and just moved on with my life, so I don't have it set up currently to test. I'll try to validate later on, with any new build that might be available too.
I am not certain about the path-handling component. The proxy server uses a wildcard DNS entry
(e.g. *.server.domain)
and uses the wildcard part as the name of the docker service (also the username of the owner of that container) to proxy to the right place.
As such, code-server for me is running at the top level of its own server name.