https://ci.nodejs.org/job/node-test-commit-osx/13607/nodes=osx1010/console
not ok 1990 sequential/test-fs-readfile-tostring-fail
---
duration_ms: 0.506
severity: fail
stack: |-
/Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:60
throw err;
^
AssertionError [ERR_ASSERTION]: false == true
at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:34:12
at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/common/index.js:533:15
at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:528:3)
I think assert.equal(err.constructor, Error) would be better, to display more information. If err is not an instance of Error (e.g. a Number), it will display the constructor name. It would be nicer than assert(err instanceof Error).
> const err = new Error()
undefined
> assert(err instanceof Number)
AssertionError [ERR_ASSERTION]: false == true
> assert.equal(err.constructor, Number)
AssertionError [ERR_ASSERTION]: { [Function: Error] stackTraceLimit: 10, prepareStackTrace: undefined } == [Function: Number]
My suggestion does not solve the issue, but I think it will provide useful information the next time the same problem occurs. What do you think?
@Leko If the error is not an Error, I think in this case it's basically null. The question is why the read/toString() succeeded here.
it's basically null
@joyeecheung Ah, I see. It's just nothing.
This seems to be failing reasonably often again. Anyone have any ideas?
https://ci.nodejs.org/job/node-test-commit-osx/16147/nodes=osx1010/tapResults/
This does not fail only on OS X, it seems:
https://ci.nodejs.org/job/node-test-commit-linux/16441/nodes=ubuntu1404-64/console
Easily reproduced by adjusting ulimits:
/home/gireesh/node/test/sequential/test-fs-readfile-tostring-fail.js:67
throw err;
^
AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
assert.ok(err instanceof Error)
at /home/gireesh/node/test/sequential/test-fs-readfile-tostring-fail.js:34:12
at /home/gireesh/node/test/common/index.js:474:15
at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:424:3)
With this patch it shows the error was null; evidently the write failed, so the read succeeded.
--- a/test/sequential/test-fs-readfile-tostring-fail.js
+++ b/test/sequential/test-fs-readfile-tostring-fail.js
@@ -31,6 +31,7 @@ for (let i = 0; i < 201; i++) {
stream.end();
stream.on('finish', common.mustCall(function() {
fs.readFile(file, 'utf8', common.mustCall(function(err, buf) {
+ console.log(err)
assert.ok(err instanceof Error);
null
...
$ ls -l /home/gireesh/node/test/.tmp/toobig.txt
-rw-r--r-- 1 gireeshpunathil staff 1024000000 May 16 22:06 /home/gireesh/node/test/.tmp/toobig.txt
So I am not claiming that the CI had ulimit -f set to a low value, but under differing file system conditions, such circumstances could have come into effect.
I guess the test should validate that kStringMaxLength bytes of data were indeed written before making such an assertion.
Inviting interested parties to come up with a PR - I know the issue and can provide pointers.
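A minimal sketch of that idea (stream, file, common, fs, assert and kStringMaxLength are the existing variables in the test; the stat check and the skip behavior are my assumptions, not an actual patch):
// Sketch only: check what really hit the disk before asserting that the
// subsequent readFile/toString must fail.
stream.on('finish', common.mustCall(function() {
  const written = fs.statSync(file).size;
  if (written <= kStringMaxLength) {
    // Truncated write (ulimit -f, low disk, ...): the toString failure
    // cannot be expected, so skip instead of failing on a null err.
    common.skip(`wrote only ${written} bytes, need > ${kStringMaxLength}`);
  }
  fs.readFile(file, 'utf8', common.mustCall(function(err) {
    assert.ok(err instanceof Error,
              `expected a toString failure, got err = ${err}`);
  }));
}));
That way a truncated write shows up as a skip (or at least a descriptive message) rather than as "false == true".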
Since the most recent reported failure here was in February, I'll mention that it happened again today:
https://ci.nodejs.org/job/node-test-commit-osx/18712/nodes=osx1010/console
not ok 2198 sequential/test-fs-readfile-tostring-fail
---
duration_ms: 0.642
severity: fail
exitcode: 7
stack: |-
/Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:67
throw err;
^
AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
assert.ok(err instanceof Error)
at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:34:12
at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/common/index.js:443:15
at FSReqWrap.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:53:3)
...
Mostly guessing, but maybe common.isAIX in this line needs to be changed to !common.isWindows?
if (common.isAIX && (Number(cp.execSync('ulimit -f')) * 512) < kStringMaxLength)
Failed on test-requireio-osx1010-x64-1:
https://ci.nodejs.org/job/node-test-commit-osx/18735/nodes=osx1010/console
not ok 2199 sequential/test-fs-readfile-tostring-fail
---
duration_ms: 0.264
severity: fail
exitcode: 7
stack: |-
/Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:67
throw err;
^
AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
assert.ok(err instanceof Error)
at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/sequential/test-fs-readfile-tostring-fail.js:34:12
at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1010/test/common/index.js:443:15
at FSReqWrap.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:53:3)
...
Mostly guessing, but maybe common.isAIX in this line needs to be changed to !common.isWindows?
if (common.isAIX && (Number(cp.execSync('ulimit -f')) * 512) < kStringMaxLength)
Alas, that won't work. We just saw it fail on test-requireio-osx1010-x64-1 in CI and ulimit -f reports unlimited on that machine.
Stress test: https://ci.nodejs.org/job/node-stress-single-test/1855/nodes=osx1010/
Edit: Stress test was running on macstadium and seemed to be doing fine after 573-ish runs. Going to try again and hope I get a requireio machine this time for comparison.
Mostly guessing, but maybe common.isAIX in this line needs to be changed to !common.isWindows?
if (common.isAIX && (Number(cp.execSync('ulimit -f')) * 512) < kStringMaxLength)
Alas, that won't work. We just saw it fail on test-requireio-osx1010-x64-1 in CI and ulimit -f reports unlimited on that machine.
Even if not unlimited, I'm not sure ulimit -f reports in 512-byte blocks everywhere.
I was investigating this. A few points:
ulimit -f does not seem to be a factor here: if a CI machine lacked a sufficient file-size limit, it would cause this test case to fail consistently, unless someone altered the value (which I don't think is the case).
However, ulimit -f <a low value> can be used to mimic the condition (it simulates a low-disk situation). With that, I installed a stream.on('error') handler in the expectation of catching the error, but could not.
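For reference, the reproduction attempt was roughly along these lines (a sketch with a hypothetical path and sizes, not the actual test; as described, the 'error' listener never fired on the affected libuv versions):
// Run after lowering the file-size limit in the shell, e.g. `ulimit -f 1000`.
const fs = require('fs');

const file = '/tmp/toobig.txt';                   // hypothetical path
const chunk = Buffer.alloc(5 * 1024 * 1024, 'a'); // ~5 MB per write

const stream = fs.createWriteStream(file, { flags: 'a' });
stream.on('error', (err) => {
  // This is where a write error would be reported; on the libuv versions
  // discussed here it never fired because the short writev() result was
  // swallowed.
  console.error('write stream error:', err);
});

for (let i = 0; i < 200; i++)   // ~1 GB in total
  stream.write(chunk);
stream.end(() => {
  console.log('finished, on-disk size:', fs.statSync(file).size);
});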
A system trace showed that writev failed, but this was never detected, retried, or propagated upwards; it was silently ignored:
25144/0x1b7f10: writev(0xA, 0x10305AC00, 0xC8) = 45831292 0 // it was supposed to write 1GB.
live debugger showed the same, and we seem to be closing the file as if we wrote enough:
Process 25275 resuming
Process 25275 stopped
* thread #6, stop reason = breakpoint 2.1
frame #0: 0x000000010094d1bd node`uv__fs_write(req=0x000000010250b148) at fs.c:727 [opt]
724
725 if (req->off < 0) {
726 if (req->nbufs == 1)
-> 727 r = write(req->file, req->bufs[0].base, req->bufs[0].len);
728 else
729 r = writev(req->file, (struct iovec*) req->bufs, req->nbufs);
730 } else {
Target 0: (node) stopped.
(lldb) n
(lldb) p r
(ssize_t) $10 = 5368708
(lldb) c
Process 25275 resuming
Process 25275 stopped
* thread #9, stop reason = breakpoint 3.1
frame #0: 0x000000010094d243 node`uv__fs_write(req=0x0000000103023478) at fs.c:729 [opt]
726 if (req->nbufs == 1)
727 r = write(req->file, req->bufs[0].base, req->bufs[0].len);
728 else
-> 729 r = writev(req->file, (struct iovec*) req->bufs, req->nbufs);
730 } else {
731 if (req->nbufs == 1) {
732 r = pwrite(req->file, req->bufs[0].base, req->bufs[0].len, req->off);
Target 0: (node) stopped.
(lldb) p r
(ssize_t) $11 = 45831292
(lldb) p req->file
(uv_file) $14 = 13
(lldb) c
Process 25302 resuming
Process 25302 stopped
* thread #10, stop reason = breakpoint 6.28 7.28
frame #0: 0x00007fff7b0ec4f8 libsystem_kernel.dylib`close
libsystem_kernel.dylib`close:
-> 0x7fff7b0ec4f8 <+0>: movl $0x2000006, %eax ; imm = 0x2000006
0x7fff7b0ec4fd <+5>: movq %rcx, %r10
0x7fff7b0ec500 <+8>: syscall
0x7fff7b0ec502 <+10>: jae 0x7fff7b0ec50c ; <+20>
Target 0: (node) stopped.
(lldb) f 1
frame #1: 0x000000010094b310 node`uv__fs_work(w=<unavailable>) at fs.c:1113 [opt]
1110 X(ACCESS, access(req->path, req->flags));
1111 X(CHMOD, chmod(req->path, req->mode));
1112 X(CHOWN, chown(req->path, req->uid, req->gid));
-> 1113 X(CLOSE, close(req->file));
1114 X(COPYFILE, uv__fs_copyfile(req));
1115 X(FCHMOD, fchmod(req->file, req->mode));
1116 X(FCHOWN, fchown(req->file, req->uid, req->gid));
(lldb) p req->file
error: Couldn't materialize: couldn't get the value of variable req: no location, value may have been optimized out
error: errored out in DoExecute, couldn't PrepareToExecuteJITExpression
(lldb) reg read rdi
rdi = 0x000000000000000d
(lldb) c
Process 25275 resuming
/Users/gireeshpunathil/Desktop/collab/node/test/sequential/test-fs-readfile-tostring-fail.js:67
throw err;
So this would mean we should:
/cc @nodejs/libuv
@gireeshpunathil If it's an issue with partial writes, can you check if https://github.com/libuv/libuv/pull/1742 fixes the issue?
o!
26084/0x1be13b: writev(0xA, 0x103048800, 0xC8) = 45831292 0
26084/0x1be124: kevent(0x3, 0x7FFEEFBF70B0, 0x0) = -1 Err#4
26084/0x1be13b: writev(0xA, 0x103048880, 0xC0) = -1 Err#27
the error is propagated, the write is re-attempted, and finally it is thrown properly too:
Filesize limit exceeded: 25
In the disk-near-full case the error can be different, but we won't end up in the scenario we are in currently.
thanks @santigimeno !
So I guess we just have to mark this as flaky, wait for libuv#1742 to land, and for Node to consume it!
Nice. Let's see if we can finally move forward with the review of the PR.
Adding blocked label until the libuv PR lands.
@gireeshpunathil can you get this marked as flaky unless the libuv PR is going to be landed and pulled in within the next few days? I think we are seeing quite a few failures in the regular CI runs.
@mhdawson - I can do that (busy with some unrelated work this week), but I'm just wondering whether there is any changed process here - some recent activity suggested we are addressing flakes together in a new repo (/cc @joyeecheung)?
@Trott had actually opened a PR last night to do the same, https://github.com/nodejs/node/pull/21177, but closed it on the basis that the flake reasons are different.
@gireeshpunathil The process is still the same. I'll post something in the collaborator discussion page and the core issue tracker when we settle down on the procedures.
ok, thanks @joyeecheung. So @Trott - given that we have this issue open as a blocker for the libuv PR, does it make sense for you to re-open and progress #21177, to keep the CI green?
We're seeing frequent timeouts of this test on AIX. Testing a potential fix: https://ci.nodejs.org/job/node-stress-single-test/1919/ - nope, that wasn't enough.
Edit: Let's get some logging output: https://ci.nodejs.org/job/node-stress-single-test/1920/
Re-opened the mark-as-flaky PR.
Reopening until the next version of libuv containing the fix from https://github.com/libuv/libuv/pull/1742 lands.
Flake morphed into a timeout:
14:17:52 not ok 2335 sequential/test-fs-readfile-tostring-fail
14:17:52 ---
14:17:52 duration_ms: 120.660
14:17:52 severity: fail
14:17:52 exitcode: -15
14:17:52 stack: |-
14:17:52 timeout
14:17:52 (node:29364) internal/test/binding: These APIs are exposed only for testing and are not tracked by any versioning system or deprecation process.
14:17:52 ...
https://ci.nodejs.org/job/node-test-commit-linux/nodes=ubuntu1404-64/22483/
re-occurred here as well: https://ci.nodejs.org/job/node-test-commit-linux/22527/nodes=ubuntu1404-64/consoleFull
I've started a run on master to see if this is occurring consistently on master now: https://ci.nodejs.org/job/node-test-commit-linux/22568/
The run on master passed this time, so despite 2 failures in a row on my PR, it's still flaky. It probably needs additional failures before it gets officially marked as flaky.
@mhdawson In your test run, it only barely avoided timing out.
11:35:18 ok 2342 sequential/test-fs-readfile-tostring-fail
11:35:18 ---
11:35:18 duration_ms: 117.252
Any reason you didn't run a stress test? https://ci.nodejs.org/job/node-stress-single-test/2069/
This may be a performance regression.
If it's a performance regression https://github.com/nodejs/node/pull/23801 might fix it.
Any reason you didn't run a stress test? ci.nodejs.org/job/node-stress-single-test/2069
Job wasn't using ccache so I reconfigured and restarted https://ci.nodejs.org/job/node-stress-single-test/2074/
Stress with -j8: https://ci.nodejs.org/job/node-stress-single-test/2080/
Stress with -j8: https://ci.nodejs.org/job/node-stress-single-test/2080/
Test is in sequential so -j shouldn't matter.
If it's a performance regression #23801 might fix it.
Alas, it failed in that PR's CI: https://ci.nodejs.org/job/node-test-commit-linux/22549/nodes=ubuntu1404-64/console
15:12:35 not ok 2341 sequential/test-fs-readfile-tostring-fail
15:12:35 ---
15:12:35 duration_ms: 120.709
15:12:35 severity: fail
15:12:35 exitcode: -15
15:12:35 stack: |-
15:12:35 timeout
15:12:35 (node:14769) internal/test/binding: These APIs are exposed only for testing and are not tracked by any versioning system or deprecation process.
15:12:35 ...
Would have been nice, though!
https://ci.nodejs.org/job/node-test-commit-linux/22577/nodes=ubuntu1404-64/console
17:45:39 not ok 2342 sequential/test-fs-readfile-tostring-fail
17:45:39 ---
17:45:39 duration_ms: 121.367
17:45:39 severity: fail
17:45:39 exitcode: -15
17:45:39 stack: |-
17:45:39 timeout
17:45:39 (node:18259) internal/test/binding: These APIs are exposed only for testing and are not tracked by any versioning system or deprecation process.
17:45:39 ...
The host (test-softlayer-ubuntu1404-x64-1) has some node processes lingering from as far back as October 3. Will terminate them and see if it addresses the performance issue on the host.
Also has a (stalled?) citgm smoker thing running since October 13! Will terminate that too...
Might just be best to reboot it...
Stress test is passing, but it is running on test-digitalocean-ubuntu1404-x64-1. All the failures seem to be on test-softlayer-ubuntu1404-x64-1. I think it was probably that stray CITGM run etc. I've definitely reduced the load reported via uptime and top significantly by terminating the stale processes.
https://github.com/libuv/libuv/pull/1742 should have fixed the original issue (partial writes) this test was suffering from; anything else should be a different issue. Were there recent failures for this?
Were there recent failures for this?
If you have node-core-utils, you can do ncu-ci walk commit to get all recent failures and grep the results for the test you care about. I'm running it right now so I'll let you know. It prints stuff to stderr so I do ncu-ci walk commit 2>&1 | tee out.txt to put it into a file.
@gireeshpunathil No recent failures. I'll close this. Thanks!
This started happening routinely again lately on CI, perhaps coinciding with a switch to MacStadium or something? @nodejs/build
00:13:49 not ok 2474 sequential/test-fs-readfile-tostring-fail
00:13:49 ---
00:13:49 duration_ms: 19.795
00:13:49 severity: fail
00:13:49 exitcode: 7
00:13:49 stack: |-
00:13:49 /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1011/test/sequential/test-fs-readfile-tostring-fail.js:67
00:13:49 throw err;
00:13:49 ^
00:13:49
00:13:49 AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
00:13:49
00:13:49 assert.ok(err instanceof Error)
00:13:49
00:13:49 at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1011/test/sequential/test-fs-readfile-tostring-fail.js:34:12
00:13:49 at /Users/iojs/build/workspace/node-test-commit-osx/nodes/osx1011/test/common/index.js:367:15
00:13:49 at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:54:3)
00:13:49 ...
https://ci.nodejs.org/job/node-test-commit-linux/25676/nodes=centos7-64-gcc6/console
test-rackspace-centos7-x64-1
08:36:11 not ok 2396 sequential/test-fs-readfile-tostring-fail
08:36:11 ---
08:36:11 duration_ms: 9.505
08:36:11 severity: crashed
08:36:11 exitcode: -9
08:36:11 stack: |-
08:36:11 ...
given the perennially flaky nature of this test, I would like to discuss some considerations on the premise of the test.
the test has two parts: writing out a file that is larger than kStringMaxLength, and then reading it back with fs.readFile expecting the toString() conversion to fail.
It is the first part (highly platform-, load-, environment- and disk-dependent) where the issues crop up.
If what we want to validate is the string API (the content threshold), we should find a more stable means of feeding large data to it.
If what we want to validate is the fs API (the large-file read), then that can be taken out and tested in a more stable manner, which takes care of environmental factors with a proper catch sink.
Combining the two seems to be problematic. However, the name of the test suggests relating the two in some way. Does anyone know the background of the test and can say whether the inter-relation is a must for its validity, or whether the two can be split?
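For the string-API option, something along these lines might work, independent of disk behavior entirely (a sketch only; it assumes buffer.constants.MAX_STRING_LENGTH is the same limit the test calls kStringMaxLength, and that the host can allocate roughly 1 GB):
// Sketch: exercise the toString() limit purely in memory, no 1 GB file.
const assert = require('assert');
const { constants } = require('buffer');

const big = Buffer.allocUnsafe(constants.MAX_STRING_LENGTH + 1);
// latin1 maps one byte to one character, so this must exceed the limit.
assert.throws(() => big.toString('latin1'), Error);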
Does anyone know the background of the test and can say whether the inter-relation is a must for its validity, or whether the two can be split?
Test was introduced in b6207906c45 by @evanlucas. PR was https://github.com/nodejs/node/pull/3485 and it was to fix a bug reported in https://github.com/nodejs/node/issues/2767.
ok, so it looks like fs.readFile is the key API being tested here, so we cannot avoid reading large content! The only question is: can we avoid writing large content and instead leverage existing large content, say process.execPath or something similar? The amount in question is 1 GB, though, and the node executable is much smaller than that.
https://ci.nodejs.org/job/node-test-commit-linux/26604/nodes=ubuntu1804-64/console
test-joyent-ubuntu1804-x64-1
00:26:47 not ok 2460 sequential/test-fs-readfile-tostring-fail
00:26:47 ---
00:26:47 duration_ms: 26.401
00:26:47 severity: fail
00:26:47 exitcode: 7
00:26:47 stack: |-
00:26:47 /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:67
00:26:47 throw err;
00:26:47 ^
00:26:47
00:26:47 AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
00:26:47
00:26:47 assert.ok(err instanceof Error)
00:26:47
00:26:47 at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:34:12
00:26:47 at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/common/index.js:369:15
00:26:47 at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:54:3)
00:26:47 ...
It may be a slight cheat to get the issue resolved, but given that it deals with a 1 GB file, I wonder if it should be moved to pummel, where it will still be tested in CI, but only once a day and on one platform.
https://ci.nodejs.org/job/node-test-commit-linux/26616/nodes=ubuntu1804-64/console
test-joyent-ubuntu1804-x64-1
17:26:21 not ok 2460 sequential/test-fs-readfile-tostring-fail
17:26:21 ---
17:26:21 duration_ms: 27.288
17:26:21 severity: fail
17:26:21 exitcode: 7
17:26:21 stack: |-
17:26:21 /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:67
17:26:21 throw err;
17:26:21 ^
17:26:21
17:26:21 AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
17:26:21
17:26:21 assert.ok(err instanceof Error)
17:26:21
17:26:21 at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:34:12
17:26:21 at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/common/index.js:369:15
17:26:21 at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:54:3)
17:26:21 ...
but only once a day and on one platform.
@Trott - I don't know if this is feasible, but how about reading the data iteratively and asserting that toString fails on the edge? The key here, I believe, is being able to use the same buffer for iterative file read operations.
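Roughly, something like this (a sketch only; the source file, sizes and the latin1 encoding are illustrative assumptions, not a concrete proposal):
// Sketch: fill one reused buffer past the string limit by reading an
// existing file (process.execPath here) over and over, then assert that
// toString() fails right past the edge.
const assert = require('assert');
const fs = require('fs');
const { constants } = require('buffer');

const source = process.execPath;                  // an existing large file
const sourceSize = fs.statSync(source).size;
const target = constants.MAX_STRING_LENGTH + 1;   // one byte past the edge

const big = Buffer.allocUnsafe(target);           // the single reused buffer
const fd = fs.openSync(source, 'r');
let offset = 0;
while (offset < target) {
  const want = Math.min(sourceSize, target - offset);
  offset += fs.readSync(fd, big, offset, want, 0); // append into `big`
}
fs.closeSync(fd);

assert.throws(() => big.toString('latin1'), Error);
That removes the 1 GB write entirely while still crossing the limit through file reads.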
test-digitalocean-ubuntu1804-x64-1 is passing consistently, but test-joyent-ubuntu1804-x64-1 is failing consistently.
The test is failing on that host because it has less than 1 GB of free disk space, so the file gets truncated and the error does not occur when the file is read. I think moving to pummel is the right answer after all.
I'm removing workspaces and will put it back online, then open a pull request to move this test to pummel.
The test is failing on that host because it has less than 1 GB of free disk space, so the file gets truncated and the error does not occur when the file is read. I think moving to pummel is the right answer after all.
I'd expect the test to detect that -- Is it ignoring errors when writing the file out?
I'd expect the test to detect that -- Is it ignoring errors when writing the file out?
const stream = fs.createWriteStream(file, {
flags: 'a'
});
const size = kStringMaxLength / 200;
const a = Buffer.alloc(size, 'a');
for (let i = 0; i < 201; i++) {
stream.write(a);
}
stream.end();
I'd expect that to throw if there's a problem and it does indeed when I mess with file permissions to cause a problem.
There's also this, but that seems like it shouldn't get in the way either:
function destroy() {
try {
fs.unlinkSync(file);
} catch {
// it may not exist
}
}
...
process.on('uncaughtException', function(err) {
destroy();
throw err;
});
Could be an OS-specific and/or file-system-specific and/or configuration-specific thing so someone may need to log in again to figure out why it's not throwing an error if it's a mystery.
If it's a stream, should it be listening for the error event?
https://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback may or may not be called with the error as its first argument. To reliably detect write errors, add a listener for the 'error' event.
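In that spirit, a sketch of what a more defensive write phase could look like (illustrative only; writeBigFile, the chunk size and the short-write check are my assumptions, not the current test code):
// Sketch: write with an 'error' listener and verify the final on-disk size,
// so a truncated write is reported instead of silently producing a file the
// read side cannot fail on.
const fs = require('fs');

function writeBigFile(file, totalBytes, onDone) {   // illustrative helper
  const chunk = Buffer.alloc(8 * 1024 * 1024, 'a'); // 8 MB chunks
  const stream = fs.createWriteStream(file, { flags: 'a' });
  let done = false;
  const report = (err) => { if (!done) { done = true; onDone(err); } };

  // Per the docs quoted above, the 'error' event is the reliable signal.
  stream.on('error', report);

  stream.on('finish', () => {
    const written = fs.statSync(file).size;
    if (written < totalBytes)
      return report(new Error(`short write: ${written}/${totalBytes} bytes`));
    report(null);
  });

  for (let sent = 0; sent < totalBytes; sent += chunk.length)
    stream.write(chunk);
  stream.end();
}
The read-side assertion would then run only when onDone is called with null.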
Interesting observation: the recent 19 failures all happened on test-joyent-ubuntu1804-x64-1
| Reason | sequential/test-fs-readfile-tostring-fail |
| - | :- |
| Type | JS_TEST_FAILURE |
| Failed PR | 19 (https://github.com/nodejs/node/pull/24997/, https://github.com/nodejs/node/pull/26973/, https://github.com/nodejs/node/pull/26928/, https://github.com/nodejs/node/pull/26997/, https://github.com/nodejs/node/pull/26963/, https://github.com/nodejs/node/pull/27027/, https://github.com/nodejs/node/pull/27022/, https://github.com/nodejs/node/pull/27026/, https://github.com/nodejs/node/pull/27031/, https://github.com/nodejs/node/pull/27033/, https://github.com/nodejs/node/pull/27032/, https://github.com/nodejs/node/pull/26874/, https://github.com/nodejs/node/pull/26989/, https://github.com/nodejs/node/pull/27039/, https://github.com/nodejs/node/pull/27011/, https://github.com/nodejs/node/pull/27020/, https://github.com/nodejs/node/pull/26966/, https://github.com/nodejs/node/pull/26951/, https://github.com/nodejs/node/pull/26871/) |
| Appeared | test-joyent-ubuntu1804-x64-1 |
| First CI | https://ci.nodejs.org/job/node-test-pull-request/22051/ |
| Last CI | https://ci.nodejs.org/job/node-test-pull-request/22113/ |
not ok 2470 sequential/test-fs-readfile-tostring-fail
---
duration_ms: 23.935
severity: fail
exitcode: 7
stack: |-
/home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:67
throw err;
^
AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
assert.ok(err instanceof Error)
at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/sequential/test-fs-readfile-tostring-fail.js:34:12
at /home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-64/test/common/index.js:369:15
at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:54:3)
...
It would be interesting to know what kind of value err is.
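One low-effort way to capture that on the next failure might be to fold the value into the assertion message (a sketch; file, fs, assert and common refer to the existing test's variables, and I don't know whether this is what #27058 ends up doing):
// Sketch: make the failing assertion describe the unexpected value itself.
const util = require('util');

fs.readFile(file, 'utf8', common.mustCall(function(err, buf) {
  assert.ok(err instanceof Error,
            `expected an Error, got ${util.inspect(err)}; ` +
            `buf length: ${buf && buf.length}`);
}));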
worker config is not too shabby (maybe a bit of a small disk)
ubuntu@test-joyent-ubuntu1804-x64-1:~$ free -h
total used free shared buff/cache available
Mem: 3.6G 281M 2.2G 388K 1.2G 3.1G
Swap: 1.9G 12M 1.9G
ubuntu@test-joyent-ubuntu1804-x64-1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 672K 369M 1% /run
/dev/vda1 7.3G 6.2G 1.1G 85% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/vdb 98G 61M 93G 1% /mnt
tmpfs 370M 0 370M 0% /run/user/1000
Should we upgrade the host, or keep it as a canary?
worker config is not too shabby (maybe a bit of a small disk)
Should we upgrade the host, or keep it as a canary?
Maybe use it to see if https://github.com/nodejs/node/pull/27058 gives better diagnostics when it fails?