const debug = @import("std").debug;
const uType = u128;

pub fn main() void {
    // Print the maximum u128 value and twice a value above 2 ** 64.
    debug.warn("{} {}", @intCast(uType, @maxValue(uType)), @intCast(uType, r()) * 2);
}

pub fn r() uType {
    return @intCast(uType, 1844674407370955161611); // higher than 2 ** 64
}
-> 340282366920938463463374607431768211455 3689348814741910323222
If you change uType to anything wider than u128, compilation fails with LLVM ERROR: Unsupported library call operation! (narrower widths work fine).
The maximum width for unsigned integer types isn't mentioned in the docs (there is no explicit list of unsigned types anyway).
Is 128 as the maximum intentional? I might like a u256...
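For reference, a minimal sketch of the failing variant (the same program with only the width changed; any width above 128 bits should behave the same way):

const debug = @import("std").debug;
const uType = u256; // any width above 128 bits

pub fn main() void {
    var x: uType = 1844674407370955161611;
    // The runtime 256-bit multiply has no native lowering, so the
    // compiler aborts with: LLVM ERROR: Unsupported library call operation!
    debug.warn("{}", x * 2);
}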
The short answer to this and #1534 is that we need to send a patch to LLVM so that it emits library calls for these operations, even though those library calls fall outside the scope of the standardized compiler-rt.
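The error comes from LLVM's legalizer: no target instruction computes, say, a 256-bit product directly, and there is no known runtime routine for it to call, so such a routine would have to do the arithmetic in limbs. A hypothetical sketch of the idea (this is not compiler-rt's actual interface or ABI, just an illustration):

// Hypothetical: schoolbook multiplication over 64-bit limbs,
// keeping only the low 256 bits of the product (like wrapping mul).
fn mul256(a: [4]u64, b: [4]u64) [4]u64 {
    var out = [4]u64{ 0, 0, 0, 0 };
    var i: usize = 0;
    while (i < 4) : (i += 1) {
        var carry: u128 = 0;
        var j: usize = 0;
        while (i + j < 4) : (j += 1) {
            // 64x64 -> 128-bit partial product plus the accumulated column.
            const t = @intCast(u128, a[i]) * @intCast(u128, b[j]) +
                @intCast(u128, out[i + j]) + carry;
            out[i + j] = @truncate(u64, t);
            carry = t >> 64;
        }
    }
    return out;
}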
Would a Zig fork of LLVM with upstream merging be a solution?
It would be a solution, but ideally we could use upstream LLVM without any patches. I think the feature is not so critical that we can't wait for one LLVM release cycle (after getting the patch into LLVM).
Inefficient workaround
const std = @import("std");
const Int = std.math.big.Int;
const warn = std.debug.warn;

pub fn main() !void {
    const alloc = std.heap.direct_allocator;
    // Addition on wide integers still lowers fine; only the result is
    // moved into an arbitrary-precision Int for further use.
    const a: u1024 = 1234;
    const b: u1024 = 5678;
    const c = a + b;
    var i = try Int.initSet(alloc, c); // initSet can fail, so main returns !void
    defer i.deinit();
    warn("{}\n", i);
}
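If even the u1024 intermediate is a concern (multiplication or division on it would hit the same LLVM error), the arithmetic can stay entirely inside big.Int. A sketch assuming the same-era std API (method names taken from that version's std.math.big.Int):

const std = @import("std");
const Int = std.math.big.Int;
const warn = std.debug.warn;

pub fn main() !void {
    const alloc = std.heap.direct_allocator;
    // All arithmetic happens inside big.Int, so no wide primitive
    // integer type (and no LLVM wide-op lowering) is involved at all.
    var a = try Int.initSet(alloc, 1234);
    defer a.deinit();
    var b = try Int.initSet(alloc, 5678);
    defer b.deinit();
    var c = try Int.init(alloc);
    defer c.deinit();
    try c.add(a, b); // c = a + b, arbitrary precision
    warn("{}\n", c);
}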