Flow: Flow server does not start (v0.105.1)

Created on 11 Aug 2019 · 30 comments · Source: facebook/flow

Flow version: 0.105.0

Expected behavior

flow server should start a server

Version 0.104.0 is working as expected:

$ yarn add flow-bin@0.104.0
success Saved lockfile.
success Saved 1 new dependency.
info Direct dependencies
└─ flow-bin@0.104.0
info All dependencies
└─ flow-bin@0.104.0
Done in 1.07s.

$ ./node_modules/.bin/flow version
Flow, a static type checker for JavaScript, version 0.104.0

$ ./node_modules/.bin/flow server 
Aug 11 13:58:58.512 [info] argv=/tmp/flow-test/node_modules/flow-bin/flow-linux64-v0.104.0/flow server
Aug 11 13:58:58.512 [info] Creating a new Flow server

Actual behavior

$ yarn add flow-bin@0.105.0
success Saved lockfile.
success Saved 1 new dependency.
info Direct dependencies
└─ flow-bin@0.105.0
info All dependencies
└─ flow-bin@0.105.0
Done in 0.96s.

$ ./node_modules/.bin/flow version
Flow, a static type checker for JavaScript, version 0.105.0

$ ./node_modules/.bin/flow server 

$ echo $?
1
Crash bug

All 30 comments

I'm having the same issue; it creates a lock file but no log file and no error output. I manually removed the lock file to no avail.

I did get it to work by running as a superuser (sudo). This isn't ideal, but it does work.

/cc @mroch

can you try running the Flow binary directly instead of via node?

/tmp/flow-test/node_modules/flow-bin/flow-linux64-v0.105.1/flow server

Just want to rule out that extra layer of complexity.

Running the binaries directly is how I figured out it works as a superuser, but not as a plain user.
Running flow server (from the binary) returns nothing (except the 1 returned by echo $?). Running just flow prints the Could not start Flow server! error.

which linux distro/release?

the output of strace and --verbose might also help.

strace node_modules/flow-bin/flow-linux64-v0.105.1/flow server --verbose

Linux Mint 19.2; node v12.8.0

Does running flow server --no-cgroup work?

It does.
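
Until a fixed release is out, passing the flag explicitly is a workable stopgap (this skips Flow's cgroup setup entirely, so treat it as temporary):

$ ./node_modules/.bin/flow stop                  # in case a stale lock/socket was left behind
$ ./node_modules/.bin/flow server --no-cgroup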

I just installed Mint and got this:

mroch@mroch-mint:~/flow-bin-test$ yarn flow server
yarn run v1.17.3
$ /home/mroch/flow-bin-test/node_modules/.bin/flow server
Aug 12 13:12:46.101 [info] argv=/home/mroch/flow-bin-test/node_modules/flow-bin/flow-linux64-v0.105.1/flow server --no-cgroup
Aug 12 13:12:46.101 [info] Creating a new Flow server

seems like it ran with --no-cgroup by default.

flow server re-execs itself as

systemd-run --user --slice flow.slice --scope flow server --no-cgroup

I think something is wrong with systemd-run here (and our limited check for support didn't catch it) but I'm on mobile and can't investigate right now.
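
While that's being investigated, you can check whether the systemd-run step is the part that fails on your machine by running a trivial command through the same mechanism (the slice name below just mirrors the command above):

$ systemd-run --quiet --user --slice flow.slice --scope -- /bin/true
$ echo $?

A non-zero status here means systemd-run --user --scope itself is broken, independently of Flow; journalctl --user (or /var/log/syslog) should show why.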

in @xthule's strace: execve("/usr/bin/systemd-run", ["/usr/bin/systemd-run", "--quiet", "--user", "--scope", "--slice", "flow.slice", "--", "/usr/local/nvm/versions/node/v12"..., "server", "--no-cgroup", "--verbose"], 0x7ffc3f6bec28 /* 67 vars */) = 0

in my strace (also Mint): execve("/usr/bin/systemd-run", ["/usr/bin/systemd-run", "--quiet", "--user", "--scope", "--slice", "flow.slice", "--", "node_modules/flow-bin/flow-linux"..., "server", "--no-cgroup"], 0x7ffec8fd94c0 /* 51 vars */) = 0

I don't really know what I'm looking at, but here's where our straces diverge:

xthule's

recvmsg(3, {msg_namelen=0}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
ppoll([{fd=3, events=POLLIN}], 1, NULL, NULL, 8) = 1 ([{fd=3, revents=POLLIN}])
recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\4\1\1L\0\0\0\4\0\0\0\246\0\0\0\1\1o\0\31\0\0\0", iov_len=24}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 24
recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="/org/freedesktop/systemd1\0\0\0\0\0\0\0"..., iov_len=236}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 236
recvmsg(3, {msg_namelen=0}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
ppoll([{fd=3, events=POLLIN}], 1, NULL, NULL, 8) = 1 ([{fd=3, revents=POLLIN}])
recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\4\1\1g\0\0\0\5\0\0\0\242\0\0\0\1\1o\0\31\0\0\0", iov_len=24}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 24
recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="/org/freedesktop/systemd1\0\0\0\0\0\0\0"..., iov_len=263}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 263
close(3)                                = 0
exit_group(1)                           = ?
+++ exited with 1 +++

mine

recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\4\1\1e\0\0\0\4\0\0\0\242\0\0\0\1\1o\0\31\0\0\0", iov_len=24}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 24
recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="/org/freedesktop/systemd1\0\0\0\0\0\0\0"..., iov_len=261}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 261
execve("/home/mroch/flow-bin-test/node_modules/flow-bin/flow-linux64-v0.105.1/flow", ["/home/mroch/flow-bin-test/node_m"..., "server", "--no-cgroup"], 0x560eb53f0dd0 /* 51 vars */) = 0
brk(NULL)                               = 0x31f8000
[... starts the flow binary ...]

I don't know if the systemd-run version is making a difference, but here's the output of mine:

systemd 237
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid

mine (it's the same):

systemd 237
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid

Well now you're not helping.

huh, I can repro over SSH but not from a "real" console (via VMware).

That makes it sound like it's something from the shell.

it's this bug: https://github.com/systemd/systemd/issues/3388

confirmed because I see this in /var/log/syslog:

Aug 12 14:12:26 mroch-mint systemd[1023]: run-r20618e9d42d44569870c03696eaaeb88.scope: Failed to add PIDs to scope's control group: Permission denied

Found it in my journalctl.

Mint is using "hybrid" cgroup mode on systemd 237 (the fix is in 238 :(). The distro we use internally uses legacy mode on systemd 242.

[ "$(stat -fc %T /sys/fs/cgroup/)" = "cgroup2fs" ] && echo "unified" || ( [ -e /sys/fs/cgroup/unified/ ] && echo "hybrid" || echo "legacy")

it sounds like this bug affects systemd <= 237 in unified or hybrid mode. @gabelevi, sounds like we need to detect this case.
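
Putting the two checks together, a rough sketch of that detection (shell for illustration only, not what Flow ships; it assumes systemctl --version reports a plain number on its first line) could look like:

# cgroup hierarchy mode, per the one-liner above
if [ "$(stat -fc %T /sys/fs/cgroup/)" = "cgroup2fs" ]; then mode=unified
elif [ -e /sys/fs/cgroup/unified/ ]; then mode=hybrid
else mode=legacy; fi

# systemd version, e.g. "systemd 237"
ver=$(systemctl --version | head -n1 | awk '{print $2}')

# the combination reported in this thread
if [ "$ver" -le 237 ] && [ "$mode" != legacy ]; then
  echo "likely affected: systemd $ver in $mode mode"
fi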

It looks like the merge request to fix the error comes after version 237, which is what Ubuntu Bionic (and Mint Tessa) are using. There are a few workarounds until the distros upgrade, possibly the best being to boot with the legacy cgroup controller:
https://github.com/flathub/org.gimp.GIMP/issues/23#issuecomment-394911840

confirmed it works properly in Ubuntu 19.04 (systemd 240)

Is there any way to make this work on Ubuntu 18.04 LTS?

testing a fix. if it works I think we'll roll a 0.105.2 patch release. (cc @avikchaudhuri, @gabelevi)

sorry for this
systemd

Is there any way to make this work on Ubuntu 18.04 LTS?

Using the link I provided above you can change your grub command line to enable the legacy controller. Doing this worked for me and so far has not adversely affected anything on my system.
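
For reference, the change described there amounts to forcing systemd onto the legacy cgroup hierarchy at boot. Roughly (the exact parameter is my reading of the linked comment, so verify it there before editing your bootloader config):

# /etc/default/grub -- append to the options you already have
GRUB_CMDLINE_LINUX_DEFAULT="... systemd.legacy_systemd_cgroup_controller=yes"

$ sudo update-grub
$ sudo reboot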

Fine. Be that way. :)

Thanks for the quick response.

Ok, the fix is deployed in v0.105.2!

It detects that systemd-run --user --scope doesn't work, and avoids using cgroups. The grub workaround @xthule mentions above will make systemd-run --user --scope work properly, so it'll still use cgroups in that case.
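
For the curious, the detection boils down to probing whether a user-scoped transient unit can actually be created before relying on it. Conceptually (a shell sketch of the idea only, not Flow's actual implementation):

# probe: can systemd-run put a trivial command into a user scope?
if systemd-run --quiet --user --scope -- /bin/true 2>/dev/null; then
  echo "systemd-run --user --scope works; cgroups can be used"
else
  echo "systemd-run --user --scope is broken; fall back to --no-cgroup"
fi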
