In this example, Zig can know that when you do `x & 0b11`, the range of the runtime value is 0 .. 0b11, which fits cleanly in a `u2`, so there is no need for the compile error:
```zig
const assert = @import("std").debug.assert;

test "runtime value hint" {
    assert(foo(1234) == 2);
}

fn foo(x: u64) -> u2 {
    return x & 0b11;
}
```
gives:

```
/home/andy/dev/zig/build/test.zig:8:14: error: expected type 'u2', found 'u64'
    return x & 0b11;
             ^
```
Instead, this should compile with no errors.
We can have a runtime value integer range hint for at least these operations:

- `&` with a comptime-known operand (e.g. `x & 0b11`)
- `%` with a comptime-known operand (e.g. `x % 200`)
- `>>` by a comptime-known amount (e.g. `x >> 8`)
Further, runtime hints can be maintained. If you have a known range of 0 .. 10 and you add 42, then the range becomes 42 .. 52.
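A sketch of how that could look (`bar` is hypothetical code made up for illustration; it would only compile if this proposal were implemented):

```zig
fn bar(x: u64) -> u6 {
    const small = x % 11; // range hint: 0 .. 10
    return small + 42;    // range hint: 42 .. 52, which fits in a u6
}
```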
These runtime hints can also be used to avoid unnecessary integer overflow checking. If the range is known to not overflow, no safety check should be emitted.
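For instance, a hypothetical function like this could skip the overflow check entirely, since the range analysis bounds the sum:

```zig
fn addLowNibbles(a: u64, b: u64) -> u8 {
    // each operand is masked to 0 .. 15, so the sum is at most 30:
    // no overflow safety check needed, and the result fits in a u8
    return (a & 0xf) + (b & 0xf);
}
```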
How will it interact with type deduction?
```zig
var x: u64 = 1111;
var y = x & 0b11;

fn foo(x: u64) -> var {
    return x & 0b11;
}
```
It will not change the deduced type: `y` above is still a `u64`, and `foo` still returns `u64`. The range hint only permits the implicit cast when a smaller type is explicitly requested.
See also #422.
While this seems nice on the surface, it and #422 seem like they violate Zig's goal of being explicit.

Any kind of implicit casting makes me worried, because that is one of the things C++ did that I think was a huge mistake. They even had to add extensions to the language to disallow it.

Clearly this is aimed only at implicitly casting integers between sizes, but if it is allowed in one area, I worry it will be asked for in another.

In C, implicit size promotion and things like allowing mixed signed/unsigned arithmetic are a source of problems (compilers mostly warn, but there is so much code that triggers such warnings). It can be a source of undefined behavior as well. Even seasoned C programmers are often not sure exactly what will happen in a simple arithmetic statement. One of the (many) reasons I like Zig is that it makes you think carefully about what you are doing.

I know that as a non-contributor (so far), my opinion cannot carry much weight, but on this topic I urge you to think carefully.
> seem like they violate Zig's goal of being explicit.
This seems explicit to me. When I read `x % 200`, I read it as a function that takes `x` and maps it into 0 .. 199, so it feels redundant to then require a cast afterward.

Similarly, `x & 0x0f` says "get only the lowest 4 bits of x", and `x >> 8` says "set the 8 most significant bits of x to 0".
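Under that reading, a hypothetical function like this would compile without any cast (again, only if the proposal is implemented):

```zig
fn toBucket(x: u64) -> u8 {
    return x % 200; // always 0 .. 199, which fits in a u8
}
```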