Currently the magic .len field of arrays yields a usize. This means I sometimes have to use an intCast which is not ideal, and it would be preferable if it instead yielded a comptime_int.
I think this is a fairly obvious (and small) improvement, and can't think of any downsides.
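For illustration, a minimal sketch of the cast being referred to (using the current single-argument @intCast syntax, which may differ from the Zig of this thread's era):

const arr = [_]u8{ 1, 2, 3, 4 };
const signed: i32 = @intCast(arr.len); // cast needed today because .len is typed usize
// if .len yielded comptime_int, `const signed: i32 = arr.len;` would coerce implicitly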
It is not obvious that this is an improvement, and it is even less obvious that it isn't a pessimization instead.
It is necessary to consider the implications of:
1. comptime_int allows a negative integer range.
2. comptime_int has a larger range than usize in general.
3. comptime_int can coerce into any other integer type implicitly.

usize is the correct type for any size of array, unlike comptime_int, and you'd be hard-pressed to find a good enough reason to justify changing what is both correct and works into what is loosely defined and is a possible vector for compiler errors, or no errors at all, that are hard to understand or track down, because now the type is not just usize, it is any kind of int type supported, from i/u0 to i/u65535.
1 doesn't matter as it will never happen; .len of an array cannot be negative anyway.
2 doesn't matter: if the .len of your array is larger than usize, well, you have far more important things to worry about.
3 is the benefit. Do note that a comptime_int will only coerce to another integer type that can fit the value of the comptime_int. You couldn't accidentally coerce the .len of a [1000]f32 into a u8. This is a compile error.
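A minimal sketch of that guarantee, assuming the proposal were in effect (the proposal-dependent lines are commented out; today both assignments are rejected anyway, because usize does not implicitly narrow):

const data: [1000]f32 = undefined;
// under the proposal, data.len would be the comptime_int 1000:
// const narrow: u8 = data.len; // error: integer value 1000 cannot be coerced to type 'u8'
// const wide: u16 = data.len; // fine: 1000 fits in a u16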
it is any kind of int type supported, from i/u0 to i/u65535.
This is false, comptime_int is distinct from these entirely; it is of arbitrary precision and has no conceptual bit size or signedness. It is also, as far as I can tell, not bound by the 65535-bit limit.
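A quick sketch of that claim; the shift below builds a value wider than any sized integer type Zig accepts (comptime evaluation of it may be slow):

comptime {
    // u65535 is the widest sized integer type, yet this is fine:
    const big: comptime_int = 1 << 70000;
    _ = big;
}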
The length of an array is a comptime-only concept anyway - it's not explicitly stored anywhere and should be treated as such. All we'd really need to preserve sanity is a compile error if you tried to create an array of an invalid length. If you want to treat the length as a usize, you can do just that with a coercion.
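A sketch of that coercion (buf is a hypothetical array; the line is valid today and would remain a plain coercion under the proposal):

const buf: [32]u8 = undefined;
const len: usize = buf.len;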
As for your worry about creating hard-to-understand compile errors, I would argue that it is very clear; it would be no different from this:
error: integer value 1000 cannot be coerced to type 'u8'
const abc: u8 = 1000;
                ^
Another issue is that .len on slices is usize and cannot be comptime_int. Having the type of .len differ between arrays and slices would be an inconsistency that should be avoided.
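A sketch of the divergence being pointed out, checked at comptime (names are hypothetical):

const std = @import("std");
const arr: [4]u8 = .{ 1, 2, 3, 4 };
const slc: []const u8 = &arr;
comptime {
    // today both lengths are usize; under the proposal arr.len would become comptime_int
    std.debug.assert(@TypeOf(arr.len) == usize);
    std.debug.assert(@TypeOf(slc.len) == usize);
}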
1 doesn't matter as it will never happen; .len of an array cannot be negative anyway.
So memory bugs introduced by comptime_int_len - 10i32 evaluating to a potentially negative i32 (since comptime_int coerces into whatever type it comes into contact with first) will never happen either?
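A sketch of the arithmetic being warned about, assuming .len yielded comptime_int (the names are hypothetical; offset stands in for any runtime value):

fn remaining(offset: i32) i32 {
    const len = 10; // stand-in for some_array.len under the proposal
    return len - offset; // len coerces to i32; remaining(20) == -10 at runtime
}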
2 doesn't matter: if the .len of your array is larger than usize, well, you have far more important things to worry about.
comptime_int_len - 10i32. I think I wrote this somewhere already.
3 is the benefit.
comptime_int_len - 10i32. I think I wrote this somewhere already.
This is false, comptime_int is distinct from these entirely;
It is not. It is an .Int-like type in the range I mentioned, and it coerces into whatever it comes into contact with first; example: comptime_int_len - 10i32. I think I wrote this somewhere already.
The length of an array is a comptime-only concept anyway
comptime_int_len - 10i32 is not guaranteed to be a comptime expression; -10 could be what the user inputted.
As for your worry about creating hard-to-understand compile errors, I would argue that it is very clear;
Do you know why it is clear? Have you confirmed that Zig as it is now does not rely on .len being usize to provide us with this clarity? Have you changed it to comptime_int and tested yourself that comptime_int_len - 10i32 (which, by the way, is giving me an unsettling feeling of déjà vu) does not show something unclear or unexpected, vague or straight-up wrong?
Also, let me ask you: what is the compiler supposed to do when a user uses an i32 as an index because they were too lazy to cast an offset to usize? Should it emit a range check? Should it fail to compile at all? Or should it say that this never happens?
10i32 is not Zig syntax.
So memory bugs introduced by comptime_int_len - 10i32 evaluating to a potentially negative i32 (since comptime_int coerces into whatever type it comes into contact with first) will never happen either?
It will only evaluate to a negative number in places where it can be negative, i.e. when passed to a location whose type is coercible from i32; I don't see what the issue is.
const ArrType_len = 10; // pretend this is ([10]void).len for a second
fn foo(arg: usize) void {}
fn bar(arg: i32) void {}
pub fn main() void {
    var val: i32 = undefined;
    foo(ArrType_len - val); // error: expected type 'usize', found 'i32', as expected
    bar(ArrType_len - val); // works as long as ArrType_len >= std.math.minInt(i32) and ArrType_len <= std.math.maxInt(i32)
}
It is not. It is an .Int-like type in the range I mentioned, and it coerces into whatever it comes into contact with first; example: comptime_int_len - 10i32. I think I wrote this somewhere already.
Your statement was in fact false: comptime_int is not "any kind of int type supported, from i/u0 to i/u65535"; it is a C++ bigint struct managed by the compiler.
Also, let me ask you: what is the compiler supposed to do when a user uses an i32 as an index because they were too lazy to cast an offset to usize? Should it emit a range check? Should it fail to compile at all? Or should it say that this never happens?
pub fn main() void {
    var i: i32 = 0;
    while (i < ArrType_len) : (i += 1) {} // This compiles for ArrType_len: comptime_int and ArrType_len: usize
}
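As for the indexing question itself, a sketch of what both worlds do (current single-argument @intCast syntax assumed): an i32 index is rejected at compile time today and would still be rejected under the proposal, so the range check lives in the explicit cast either way.

fn elem(arr: [4]u8, i: i32) u8 {
    // return arr[i]; // error: expected type 'usize', found 'i32', with or without the proposal
    return arr[@intCast(i)]; // the explicit cast (with its runtime safety check) stays required
}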