In the file `lib/v8.js` there is the line `const dummy = new ArrayBuffer();`.
In my Node.js app started with pm2, I get the following error.
What is the purpose of this line, and why is there no argument in the `ArrayBuffer` constructor?
/root/.pm2/logs/app-error.log last 15 lines:
0|app | v8.js:139
0|app | const dummy = new ArrayBuffer();
0|app | ^
0|app |
0|app | RangeError: Invalid array buffer length
0|app | at new ArrayBuffer (
0|app | at v8.js:139:17
0|app | at NativeModule.compile (internal/bootstrap/loaders.js:364:7)
0|app | at Function.NativeModule.require (internal/bootstrap/loaders.js:176:18)
0|app | at Function.Module._load (internal/modules/cjs/loader.js:572:25)
0|app | at Module.require (internal/modules/cjs/loader.js:692:17)
0|app | at require (internal/modules/cjs/helpers.js:25:18)
0|app | at Object.
0|app | at Module._compile (internal/modules/cjs/loader.js:778:30)
0|app | at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
@developsessions I could not reproduce the issue, do you have a code example I could use here?
@vdeturckheim: It crashes in my environment if I execute "const dummy = new ArrayBuffer();".
@developsessions can you give me the output of console.log(process.versions)
please?
@vdeturckheim of course, the output is:
{ http_parser: '2.9.2',
node: '10.16.3',
v8: '6.8.275.32-node.54',
uv: '1.32.0',
zlib: '1.2.11',
brotli: '1.0.7',
ares: '1.15.0',
modules: '64',
nghttp2: '1.39.2',
napi: '4',
openssl: '1.1.1d' }
That's very weird, I don't have a linux machine ready atm sadly :/
If you want, I can upload an image of the machine somewhere...
I just tried using this docker image https://hub.docker.com/r/advancedclimatesystems/docker-buildroot/ and could not repro the issue. If you can provide me with a docker image with your setup, it would be perfect indeed.
I don't know about docker. But I can send a disk image. You can write it to a usb stick (with rufus) and you can boot from that usb stick. (Also in a virtual machine)
@developsessions if you can manage to make a VirtualBox image, it would save me a lot of time :)
What flags are you or pm2 passing to node? Can you paste the output of `ps aux | grep [n]ode`?
Output:
525 root node /usr/lib/node_modules/pm2/lib/ProcessContainerFork.js
This seems irreproducible. I'm afraid I don't have good suggestions on how to proceed.
Can you reproduce when you run just `node -e 'require("v8")'`? If not, then it's probably something that pm2 is doing.
I have the same problem on my Raspberry Pi.
# node -e 'require("v8")'
v8.js:237
const dummy = new ArrayBuffer();
^
RangeError: Invalid array buffer length
at new ArrayBuffer (<anonymous>)
at v8.js:237:17
at NativeModule.compileForInternalLoader (internal/bootstrap/loaders.js:279:7)
at NativeModule.compileForPublicLoader (internal/bootstrap/loaders.js:219:10)
at loadNativeModule (internal/modules/cjs/helpers.js:26:9)
at Function.Module._load (internal/modules/cjs/loader.js:872:15)
at Module.require (internal/modules/cjs/loader.js:1044:19)
at require (internal/modules/cjs/helpers.js:77:18)
at [eval]:1:1
at Script.runInThisContext (vm.js:120:20)
# node -e 'console.log(process.versions)'
{
node: '12.16.1',
v8: '7.8.279.23-node.31',
uv: '1.34.0',
zlib: '1.2.11',
brotli: '1.0.7',
ares: '1.16.1',
modules: '72',
nghttp2: '1.39.2',
napi: '5',
llhttp: '2.0.4',
http_parser: '2.9.3',
openssl: '1.1.1g'
}
Some clues: like @developsessions, I am using buildroot, which means that the node version I use has been cross-compiled. My compilation host is an x64 machine running Gentoo, where another node version is installed.
Can someone (@vdeturckheim, @bnoordhuis) please point me to the source file where the `ArrayBuffer` constructor is defined? I had a look in the source but could not find it. I suspect a `sizeof(int)` mismatch or something similar…
@sveyret It’s here: https://github.com/nodejs/node/blob/e18afe45d2bb035434b8aac31375a9a15a605c9e/deps/v8/src/builtins/builtins-arraybuffer.cc#L29-L94
I’m curious whether this is potentially a compiler bug? Could that be? You should be able to add `--abort-on-uncaught-exception` to the Node.js command line to make the process crash hard when the error is thrown, so that you can get a core dump/attach a debugger and find out which instruction exactly it is crashing on.
@sveyret The code sections you're interested in are probably around:
And:
Note the `Object::ToInteger()` call a few lines up.
@vdeturckheim That file contains the definitions for the C++ class behind `ArrayBuffer`s, but not the direct JS-interfacing part (i.e. not the `ArrayBuffer` JS constructor implementation), and in particular not the code that potentially results in that error message (the parts @bnoordhuis points to).
Thanks @addaleax ! Very clear!
@addaleax, yes, this could be a (cross-)compiler bug. Just for information, mine is:
armv7a-unknown-linux-gnueabihf-gcc (Gentoo 9.3.0 p2) 9.3.0
Copyright © 2019 Free Software Foundation, Inc.
I found something suspicious. I will make some tests but it may not be the cause of our problem. Anyway, I wonder why we test the host architecture here:
Shouldn't it be:
#if V8_TARGET_ARCH_32_BIT
@sveyret Yes, that seems like a bug. It’s not fully obvious to me how it would cause the problem here, but I think trying to fix it that way is worth it. :+1:
Indeed, it is not the source of the problem. Actually, I found that the provided value for `byte_length` is random (probably not initialized). Sorry if it takes some time to diagnose; it's my first time looking at JavaScript native code.
@sveyret Does that mean that `TryNumberToSize()` returns `false` here, or is `length` already uninitialized? (If you’re using a debugger: I think you can use `job` from https://github.com/v8/v8/blob/master/tools/gdbinit to print out the value of e.g. `length`.)
@addaleax, I don't have appropriate dev tools on my machine, so I use the universal debugger: `std::cout <<` :wink:
And no, I actually did not test the return value of `TryNumberToSize()`, so that may be the cause of my random result. I will continue my investigations.
So, after reading the documentation, it seems that `ArrayBuffer()` needs to be given a _length_ parameter. But in some V8 internal classes, it is called without one. Anyway, from the code, I understand that this _length_ parameter is optional (`args.atOrUndefined`). The problem is that if no length is provided, the value is _undefined_ or, because a number is expected, _NaN_. But this _NaN_ cannot be converted to a length, so `TryNumberToSize()` returns false, hence the exception.
My main problem now is that, to my understanding, it should always fail. So why does it work most of the time? @addaleax, if you can give me some explanations…
@sveyret Okay, all of that makes a lot of sense… I’m as surprised as you are that it doesn’t always fail in that case.
Fwiw, as far as I can tell, when I run it on x64, `TryNumberToSize()` is not even called with `NaN` in the first place – because `Object::ToInteger()` converts `undefined` to `0` for me.
(We call `Object::ToInteger()`, which calls `Object::ConvertToInteger()`, which calls `DoubleToInteger()`, which explicitly checks for NaN and returns `0` in that case.)
Just so we’re on the same page, you’re also on Node.js master? And the value passed to `TryNumberToSize()` is indeed `NaN`?
@addaleax, no, I'm not on master, but directly on 12.16.1. I need to make this version work, so I have to find a patch for it. If the bug is also on master, I'll create a PR.
I'm not sure that the value is indeed a `NaN`. From what I see, it is actually an `undefined` seen as a number. Well, when I output the value, for example in `DoubleToInteger`, it shows `nan`, but `std::isnan(x)` returns 0. Do you think that is where the problem lies? Where is this `isnan` function defined? In the _glibc_?
EDIT: I tested `x != x`, but it is false as well.
@sveyret If the bug exists on any of the supported Node.js release lines, a PR would be helpful, master or not.
If you have a number that is a `NaN` value but `std::isnan()` returns 0, then that’s probably a bug in the C standard library (whichever one you’re using), and if `x != x` is false, then it’s most likely a compiler bug.
Some pointers:
@addaleax, for information, I made a few more tests just to be sure it is a compiler/glibc problem. I displayed the number value, its raw value (in hex – thank you @devsnek for the tip) and the result of `isnan()` for the number used when the value is undefined, for `std::nan("")` and for `std::numeric_limits<double>::quiet_NaN()`.
The results are the same for all three values: `nan` for the value, `0x7ff8000000000000` for the raw bytes, which is correct, and false for `isnan`.
Ok, that’s unfortunately almost certainly a compiler bug then. We could probably work around this by just passing `0` explicitly to the `ArrayBuffer` constructor here, and I think a PR like that would be accepted, but it would not stop the problem from showing up under other circumstances.
Yes, that's what I thought. I will try to generate my system using _buildroot_ default toolchain, instead of my Gentoo cross-compiler to see if it's better.
I'd better not provide a PR which would only be a partial workaround. Especially because it seems not to be the only problem I have (I now face a `MODULE_NOT_FOUND` as soon as the path to the module starts with `..`).
We can probably close this issue, if OP agrees.
@addaleax, I made a few more tests, using other toolchains, and I always had the same problem. So I made a little program just to test the `isnan` function. And I found that `isnan` stops working when we set any of these options: `-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64`, but I don't see the relation and I'm not sure I can remove them…
@sveyret This is a long shot, but is there any chance you can share the resulting `node` binary? I’m curious what the generated `Object::ConvertToInteger()` code looks like. Also, yes, I’m very surprised that these options have an effect – it might be worth taking a look at your build system’s `<cmath>`/`<math.h>` files…
@addaleax, you will find the binary file here: https://we.tl/t-zWCwZNnLRk
Okay, my understanding of ARM assembly isn’t perfect, but here’s the interesting part:
a3444c: ed1b0b05 vldr d0, [fp, #-20] ; 0xffffffec
a34450: eeb50b40 vcmp.f64 d0, #0.0
a34454: eef1fa10 vmrs APSR_nzcv, fpscr
a34458: 0a000003 beq a3446c <v8::internal::Object::ConvertToInteger(...)+0x64>
a3445c: eeb50bc0 vcmpe.f64 d0, #0.0
a34460: eef1fa10 vmrs APSR_nzcv, fpscr
a34464: ba000005 blt a34480 <v8::internal::Object::ConvertToInteger(...)+0x78>
a34468: ebeca41e bl 55d4e8 <floor@plt>
a3446c: e1a00004 mov r0, r4
a34470: e3a01000 mov r1, #0
a34474: ebf92b85 bl 87f290 <v8::internal::Factory::NewNumber(double, v8::internal::AllocationType)>
a34478: e24bd008 sub sp, fp, #8
a3447c: e8bd8810 pop {r4, fp, pc}
a34480: ebeca397 bl 55d2e4 <ceil@plt>
a34484: eafffff8 b a3446c <v8::internal::Object::ConvertToInteger(...)+0x64>
(Before this are the `ConvertToNumberOrNumeric()` and `IsSmi()` parts, which do not seem relevant here.)
This seems wrong to me, because it loosely corresponds to:
if (x == 0) {
  // leave x unchanged
} else {
  if (x < 0) x = ceil(x);
  else x = floor(x);
}
return NewNumber(x);
where the compiler appears to be under the assumption that `x == 0` is true for NaN on ARM (which very well might be – I have trouble finding information about that, but I think the `vcmpe` for the second comparison would raise a floating point exception if NaN ever reached it). But even if that’s the case, the code just passes NaN on as-is.
So yeah, I think `std::isnan()` on your compiler is broken in a weird way, which is very very surprising, but I don’t have a better explanation.
@addaleax, actually, I think that the assumption is that the NaN value does not exist, as if `-ffast-math` were used. And indeed, I noticed that when I add the large file definitions, there is a `__arm_set_fast_math` section in my test binary.
In my test binary, I simply wrote a small function:
#include <cmath>
#include <cstdint>

// Reinterpret the raw bits of a double through a union.
union Visible { double d; std::uint64_t x; };

void analyze(double d, std::uint64_t &bits, int &tnan, int &teq) {
  Visible v = { d };
  bits = v.x;            // raw IEEE 754 representation
  tnan = std::isnan(d);  // expected 1 for NaN
  teq = d == d;          // expected 0 for NaN
}
When compiled normally, it creates:
8a4: e92d0830 push {r4, r5, fp}
8a8: e28db008 add fp, sp, #8
8ac: eeb40b40 vcmp.f64 d0, d0
8b0: e24dd00c sub sp, sp, #12
8b4: ed0b0b05 vstr d0, [fp, #-20] ; 0xffffffec
8b8: e14b41d4 ldrd r4, [fp, #-20] ; 0xffffffec
8bc: eef1fa10 vmrs APSR_nzcv, fpscr
8c0: 03a03001 moveq r3, #1
8c4: e1c040f0 strd r4, [r0]
8c8: 13a03000 movne r3, #0
8cc: 63a00001 movvs r0, #1
8d0: 73a00000 movvc r0, #0
8d4: e5810000 str r0, [r1]
8d8: e5823000 str r3, [r2]
8dc: e24bd008 sub sp, fp, #8
8e0: e8bd0830 pop {r4, r5, fp}
8e4: e12fff1e bx lr
While if compiled with any one of `-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64`, it creates:
8bc: e92d0830 push {r4, r5, fp}
8c0: e3a0c000 mov ip, #0
8c4: ec554b10 vmov r4, r5, d0
8c8: e28db008 add fp, sp, #8
8cc: e3a03001 mov r3, #1
8d0: e1c040f0 strd r4, [r0]
8d4: e581c000 str ip, [r1]
8d8: e5823000 str r3, [r2]
8dc: e24bd008 sub sp, fp, #8
8e0: e8bd0830 pop {r4, r5, fp}
8e4: e12fff1e bx lr
Just as if NaN wouldn't exist…
Correction, @addaleax! Actually, it is not the large file flags but an `-Ofast` flag that causes these bogus optimizations, which seems more logical. This fast optimization flag most probably comes from _buildroot_ and not from the compilation process of Node.js…
EDIT: just to confirm that without the `-Ofast` flag, Node.js is now working well…
@developsessions, can you ensure that you did not select “optimize for fast” in Build options/gcc optimization level? `-Ofast` makes the compiler non-compliant with the standards, as indicated in the help text for that choice. I opened an issue so that it is clearer that this choice is dangerous: https://bugs.busybox.net/show_bug.cgi?id=13046.
@sveyret Thanks for looking further! Given that this is not something that Node.js or V8 can do anything about, I’ll close this issue as resolved :+1:
Hi @all,
I've faced the same issue, and it was related to the fact that net-libs/nodejs had been compiled with the `-Ofast` flag. So, apparently, we have to avoid using the `-Ofast` CFLAG with nodejs on Gentoo.
ERROR:
zlib.js:344
const dummyArrayBuffer = new ArrayBuffer();
^
RangeError: Invalid array buffer length
at new ArrayBuffer (
at zlib.js:344:28
at NativeModule.compileForInternalLoader (internal/bootstrap/loaders.js:276:7)
at NativeModule.compileForPublicLoader (internal/bootstrap/loaders.js:218:10)
at loadNativeModule (internal/modules/cjs/helpers.js:25:9)
at Function.Module._load (internal/modules/cjs/loader.js:908:15)
at Module.require (internal/modules/cjs/loader.js:1089:19)
at require (internal/modules/cjs/helpers.js:73:18)
at Object.
at Module._compile (internal/modules/cjs/loader.js:1200:30)
ENV:
{
node: '14.4.0',
v8: '8.1.307.31-node.33',
uv: '1.37.0',
zlib: '1.2.11',
brotli: '1.0.7',
ares: '1.16.1',
modules: '83',
nghttp2: '1.41.0',
napi: '6',
llhttp: '2.0.4',
openssl: '1.1.1g',
cldr: '36.0',
icu: '65.1',
tz: '2019c',
unicode: '12.1'
}