Can you provide a real-life use case?
It's good for UUIDs, IPv6, hashing (MD5), etc. We could store an IPv6 address in a uint128 instead of a byte slice and do subnet mathematics, such as checking whether an IP address falls within a range.
These use cases are not strong enough to justify adding 128-bit types, which is a big task, since they would have to be emulated on all targets.
Also note that GCC doesn't support __int128 on 32-bit targets, and Go does want consistent language features across all supported architectures.
I agree with you that there aren't a lot of benefits to int128/uint128. Maybe slightly better performance for comparing and hashing in maps when we use uint128 to store a UUID or IPv6 address, because byte slices or strings require loops and extra memory, but I don't think that's important.
I tally the traffic statistics of all interfaces of a device over one day.
In addition to crypto, UUID and IPv6, int128 would be enormously helpful for volatile memory analysis, by giving you a safe uintptr diff type.
It also just makes code that much more readable if you have to deal with large IDs, e.g. those you get back from the Google directory API, amongst others (effectively they're UUIDs encoded as uint128).
Obviously you can use math/big, but it makes the code much harder to reason about: you have to mentally parse the code first, which distracts you from reading it.
Adding a data point: ran into a situation with a current project where I need to compute (x * y) % m where x*y can possibly overflow and require a 128-bit integer. Doing the modulus by hand for the high and low halves is needlessly complicated.
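For what it's worth, since Go 1.12 the math/bits package covers this exact pattern without a language change; a minimal sketch (the function name mulMod64 is mine):

```go
package main

import (
	"fmt"
	"math/bits"
)

// mulMod64 computes (x * y) % m without overflow. bits.Div64 panics if the
// quotient overflows 64 bits, i.e. if the high word of the product is >= m;
// that is guaranteed not to happen whenever x and y are both < m (and also
// holds in the example below, where the high word is 1).
func mulMod64(x, y, m uint64) uint64 {
	hi, lo := bits.Mul64(x, y) // full 128-bit product in two 64-bit halves
	_, rem := bits.Div64(hi, lo, m)
	return rem
}

func main() {
	fmt.Println(mulMod64(1<<63, 3, 10)) // 3*2^63 = 27670116110564327424, mod 10 = 4
	fmt.Println(mulMod64(7, 8, 5))      // 56 mod 5 = 1
}
```

Doing the same with a builtin uint128 would just be `(uint128(x) * uint128(y)) % uint128(m)`, which is the readability argument in a nutshell.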
Another +1 for both IPv6 and UUID cases.
The examples of UUID and IPv6 are not convincing to me. Those types can be done as a struct just as easily.
It's not clear that this is worth doing if processors do not have hardware support for the type; are there processors with 128-bit integer multiply and divide instructions?
See also #19623.
@ianlancetaylor I do not think so. GCC seems to use the obvious 6 instructions for mul, and 4 for add and sub, with a more involved routine for quo. I'm not sure how anybody could emulate mul, add, or sub that precisely (in Go) without assembly, but assembly prohibits inlining and adds function-call overhead.
The fact that the current tools can't yet inline asm code is not in itself an argument for changing the language. We would additionally need to see a significant need for efficient int128 arithmetic.
If there were hardware support, that in itself would suggest a need, since presumably the processor manufacturers would only add such instructions if people wanted them.
> If there were hardware support, that in itself would suggest a need
A need that—presumably—compilers couldn't meet by adding their own 128-bit types, which they have. I mean, for all but division it's a couple extra instructions. For most cases that's been sufficient.
I confess I'm not an expert on CPU characteristics, but my understanding is much of the driving force behind adding larger sizes was the ability to address more memory. That makes me think general 128-bit support is rather unlikely.
Yet major compilers have added support (GCC, Clang, ICC, ...) for C and C++. Rust has them because of LLVM. Julia has them as well.
Other languages and compilers having support isn't sufficient reason to make a language change, sure. But it's evidence there exists a need other than simply UUIDs.
Their domain seems to lie in cryptography and arbitrary-precision calculations, for now.
Additional use cases are timestamps, cryptographic nonces, and database keys.
Examples like database keys, nonces, and UUIDs represent a pretty large collection of applications where keys/handles can't ever be reused or number ranges can't overlap.
@FlorianUekermann People keep saying UUID, but I see no reason that a UUID could not be implemented using a struct. It's not like people use arithmetic on a UUID once it has been created. The only reason to add int128 to the language is if people are going to use arithmetic on values of that type.
> It's not like people use arithmetic on a UUID once it has been created
They do. UUIDs don't have to be random. Sequential UUIDs are common in databases for example. Combine sequential UUIDs with some range partitioning and you'll wish for integer ops in practice.
Still, timestamps seem like the most obvious example to me, where 64 bits are not sufficient and the full range of arithmetic operations is obviously meaningful. Had it been available, I would expect the time package to contain some examples.
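To make the sequential-UUID point concrete: incrementing a 16-byte value today means manual carry propagation between two 64-bit halves; a sketch (the helper name incUUID is mine) of the boilerplate a builtin uint128 would reduce to a single `++`:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

// incUUID adds 1 to a big-endian 128-bit value, chaining the carry across
// the two 64-bit halves by hand.
func incUUID(u [16]byte) [16]byte {
	hi := binary.BigEndian.Uint64(u[:8])
	lo := binary.BigEndian.Uint64(u[8:])
	lo, carry := bits.Add64(lo, 1, 0)
	hi, _ = bits.Add64(hi, 0, carry)
	binary.BigEndian.PutUint64(u[:8], hi)
	binary.BigEndian.PutUint64(u[8:], lo)
	return u
}

func main() {
	var u [16]byte
	for i := 8; i < 16; i++ {
		u[i] = 0xFF // low half is all ones, so +1 must carry into the high half
	}
	fmt.Printf("%x\n", incUUID(u))
}
```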
How big of an undertaking is the implementation of div? The rest seems rather straightforward.
> How big of an undertaking is the implementation of div?
The code for naïve 128-bit division exists in the stdlib already (math/big). The PowerPC Compiler Writer’s Guide has a 32-bit implementation of 64-bit division (https://cr.yp.to/2005-590/powerpc-cwg.pdf, page 82) that can be translated upwards.
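Indeed, once the divisor fits in 64 bits, the schoolbook approach needs only two steps of math/bits.Div64; a sketch (the function name div128by64 is mine, and it assumes the quotient may occupy the full 128 bits):

```go
package main

import (
	"fmt"
	"math/bits"
)

// div128by64 divides the 128-bit value (hi, lo) by d, returning a 128-bit
// quotient (qHi, qLo) and the remainder. Requires d != 0.
func div128by64(hi, lo, d uint64) (qHi, qLo, rem uint64) {
	qHi, r := bits.Div64(0, hi, d)  // high digit: hi / d
	qLo, rem = bits.Div64(r, lo, d) // low digit: (r*2^64 + lo) / d; r < d, so Div64 cannot panic
	return
}

func main() {
	// (2^64 + 5) / 3 == 6148914691236517207 exactly, remainder 0.
	fmt.Println(div128by64(1, 5, 3))
}
```

The genuinely hard case is a full 128-bit divisor, which is where the PowerPC guide's long-division routine comes in.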
Use case: [u]int128 can be used to check for overflow of [u]int64 operations in a natural way. Yes, this could make you want int256, but since int64 is the word size of many machines, this particular overflow matters a lot. See e.g. #21588. Other obvious options to address this use case are math/bits and
Somewhat related use case: https://github.com/golang/go/issues/21835#issuecomment-356478304.
I have wanted int128 for representing currency in some situations.
> If there were hardware support, that in itself would suggest a need
The ADX and BMI2 ISA extensions are implemented by recent Intel and AMD processors.
For example MULX does 128bit=64bit*64bit.
> are there processors with 128-bit integer multiply and divide instructions?
There are instructions that let you multiply two 64-bit registers to two 64-bit registers. Having access to those in the form of uint128 can significantly speed up cryptographic code, which might reduce our reliance on assembly implementations.
An example from what I'm looking at today: https://github.com/mit-plv/fiat-crypto/blob/f7b212b9/src/Specific/X25519/C64/femulDisplay.log
(I am however also very much unconvinced by the data storage use cases: there's no reason to store hashes or foreign IDs as numbers if you are not doing arithmetic on them. []byte is fine.)
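Since Go 1.12 those widening and carrying primitives are exposed as intrinsics in math/bits, which is how one emulates a uint128 limb by hand today; a sketch (the uint128 struct and helper names are mine):

```go
package main

import (
	"fmt"
	"math/bits"
)

type uint128 struct{ hi, lo uint64 }

// add128 returns a+b mod 2^128, chaining the carry between the halves.
func add128(a, b uint128) uint128 {
	lo, carry := bits.Add64(a.lo, b.lo, 0)
	hi, _ := bits.Add64(a.hi, b.hi, carry)
	return uint128{hi, lo}
}

// mul64 returns the full 128-bit product of x and y, the operation that
// field-arithmetic code like the fiat-crypto output above leans on.
func mul64(x, y uint64) uint128 {
	hi, lo := bits.Mul64(x, y)
	return uint128{hi, lo}
}

func main() {
	fmt.Println(add128(uint128{0, ^uint64(0)}, uint128{0, 1})) // carry into hi: {1 0}
	fmt.Println(mul64(1<<32, 1<<32))                           // 2^64: {1 0}
}
```

These compile to the expected ADD/ADC and MUL sequences on amd64, but every operation remains a function call in the source, which is the ergonomic gap a builtin type would close.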
How about (uint64, uint64) -> (uint64, uint64) math/bits intrinsics if not uint128 in the language?
[...] Unfortunately, this requires writing assembly, because writing high-performance arithmetic is not possible in Go — it's simply not a design goal of the language. (There are a few reasons, most notably that there's no way to directly compute the (128-bit) product of 64-bit integers.)
> How about (uint64, uint64) -> (uint64, uint64) math/bits intrinsics if not uint128 in the language?
How would the API work? ISTM there are two options:

1. a function that takes two uint64 arguments and returns the upper and lower halves, leaving you to DIY with the results, or
2. a function that returns a `type Uint128 [2]uint64` that has various methods.

#1 has the downside of not making arithmetic operations any easier. For example, one use case is convolutions in a prime field. If you need to compute (a * b) % p, then getting the upper and lower halves of the 128-bit multiplication of a and b means you still need to do the modulus by hand.

#2 seems to just be a clumsy wrapper for a builtin uint128 type.

(BTW: I don't want to shoot down your idea right off the bat, and I'd rather have bits.Uint128 than nothing; those two issues just stood out to me.)
I'd much appreciate atomic.CompareAndSwapUint128 (or CASDoubleUint64 taking a *[2]int64, to avoid a language change), since double-word CAS makes it easy to implement tagged (or versioned) pointers to prevent the ABA problem in lockless algorithms.
If we add these types, we will presumably need to change at least the strconv package to support converting int128 (and uint128) from string to integer. This will require new functions, as the existing functions take int64 arguments. What should these new functions be called? What should we do if we decide to add int256 in the future?
The math/big package might need new setters and getters.
What packages other than strconv should change if we add these types?
(By the way, #24813 seems to be working well for the cryptography use case, so we don't really need the uint128 type for that anymore.)
Everything related to currencies/finance should be done in fixed-point ... and the 18 digits of precision you get with int64 are not enough.
It seems to me that 'int128' and 'uint128' should be reserved words in the language, even if you choose not to implement them at this time.
@michaelthoward That's not necessary, as none of the predeclared type names like int, int64, etc., are reserved words. They are declared in the "universe block" and can be shadowed by names defined in package scope.
> none of the predeclared type names like int, int64, etc., are reserved words
Thank you for explaining
The Snowflake data warehouse supports int128, and the type is useful for anything from IPv6 addresses to time-plus-ID serials that sort well (i.e., a 64-bit nanosecond timestamp in the high-order bits, a 64-bit random node ID in the low-order bits).
The most generic option would be support for array-as-int-of-fixed-size, which is different from the math/big.Int type. I.e., I could pass in a [16]byte, a [32]byte, a [6]byte, or whatever, and the compiler would perhaps be smart enough to generate better code in cases where that's possible?
I came here to say that I want this for high-performance fixed-precision decimal calculations. You have essentially 37 digits to work with in a signed 128-bit integer. You set the radix (decimal) point somewhere; say you want 8 digits of precision, then you have 29 digits (+/- 100 octillion) before you overflow. This is very useful for high-frequency trading and cryptocurrency applications, where math/big may be considered too slow due to heap allocation and looping.
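As an illustration of today's workaround, a fixed-point multiply needs a 128-bit intermediate product, which means math/bits rather than plain operators (the function name and the scale of 1e8 are my assumptions):

```go
package main

import (
	"fmt"
	"math/bits"
)

const scale = 100_000_000 // 8 decimal digits after the radix point

// mulFixed multiplies two scale-1e8 fixed-point values, computing the exact
// 128-bit intermediate product before dividing the scale back out.
// bits.Div64 panics if the result overflows 64 bits; a builtin uint128
// would make both the intermediate and the overflow check trivial to write.
func mulFixed(a, b uint64) uint64 {
	hi, lo := bits.Mul64(a, b)
	q, _ := bits.Div64(hi, lo, scale)
	return q
}

func main() {
	a := uint64(150_000_000)    // 1.5
	b := uint64(250_000_000)    // 2.5
	fmt.Println(mulFixed(a, b)) // 1.5 * 2.5 = 3.75 -> 375000000
}
```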
@cespare
> I have wanted int128 for representing currency in some situations.
Can you describe this use case in more detail? I don't see it, but it may be because I'm simply not rich enough ;)
I was just reading up on Rust, and happily surprised to see it there. Now that's not a reason why Go should have it in and of itself, of course. But still, I think it's a neat feature to have.
One place where it would come in handy would be subnet calculations on ipv6 addresses. Obviously that's possible with [16]byte, but the logic would be so much clearer when it'd be a single uint128 instead.
@ianlancetaylor
> The examples of UUID and IPv6 are not convincing to me. Those types can be done as a struct just as easily.
Do you mean just storing the data, or also performing calculations on them?
I meant just storing the data. It seems to me that people don't normally use arithmetic operations with UUID values (once they have been created initially) or IPv6 addresses.
Maybe not arithmetic operations, but a common operation is to see whether an IP address is within a certain subnet (e.g. when implementing white/blacklists). Then you'd typically compare the first N bits of an address with the given subnet to see if it falls within that subnet. Right now, that's quite tedious to do efficiently.
Edit: To clarify, I'm not saying that we need int128 support only because of this use case. But given that it was brought up, I felt I could show some support, since that particular use case is one that bugged me before and would be easily solved with int128 support.
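For reference, here is roughly what the "compare the first N bits" check looks like when the 128-bit address is carried as two uint64 halves; the helper name is mine, and with a builtin uint128 the whole thing would be one mask-and-compare:

```go
package main

import "fmt"

// inPrefix reports whether the 128-bit address (addrHi, addrLo) lies within
// the subnet (netHi, netLo)/plen, for 0 <= plen <= 128.
func inPrefix(addrHi, addrLo, netHi, netLo uint64, plen int) bool {
	if plen <= 64 {
		// Go defines x << 64 == 0 for variable shift counts,
		// so plen == 0 yields an all-zero mask that matches everything.
		mask := ^uint64(0) << (64 - uint(plen))
		return addrHi&mask == netHi&mask
	}
	mask := ^uint64(0) << (128 - uint(plen))
	return addrHi == netHi && addrLo&mask == netLo&mask
}

func main() {
	net2001db8 := uint64(0x2001_0db8_0000_0000) // high half of 2001:db8::/32
	fmt.Println(inPrefix(0x2001_0db8_0000_0000, 1, net2001db8, 0, 32)) // 2001:db8::1 -> true
	fmt.Println(inPrefix(0x2001_0db9_0000_0000, 0, net2001db8, 0, 32)) // 2001:db9:: -> false
}
```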
time.Duration's underlying int64 is not enough for values counted from January 1, 1601.
My 2c. The argument that "the processor doesn't do 128-bit maths, so neither should the compiler" is not very convincing. If that were the case, we would be stuck with 8-bit math on 8-bit micros, and yet 8-bit micros can easily do 32-bit math, handled by the compiler (as it should be).
The reason to support it is that most modern CPUs can do 64-bit maths natively. It is trivial in assembler to generate the code sequences that extend the number range over two registers; the hardware even has features to facilitate it (carry flags, anyone?). But doing the same in a high-level language is NOT TRIVIAL (unless Go wants to expose the carry flag?). There are enough problem domains requiring integers beyond 64 bits that demanding a convincing "reason" is torturous. The reason is that the compiler should be there to make life easy, especially for mathematics. As has been pointed out, processors already have built-in features to help with 128-bit maths beyond the simple carry flag.
Even if you are limiting yourself to 64-bit precision, there are many times when it's a lot easier and clearer in code to extend into 128 bits and then modulo back down. The performance hit on a 64-bit machine would be negligible, and not much worse even on a 32-bit machine. Certainly the compiler doing it would be inordinately faster than emulating it at a high level. There is no rational reason why an 8-bit chip can support 32-bit integers but a 32-bit chip can't support 128-bit ones.
Personally I would go further and say int256 should also be supported.
Would adding int128 hurt any existing application? No.
Do you need to add support to print them as strings? Certainly.
Does any other function need to change to support them? No; why would it? There is no reason any other function needs to take them. They could, and maybe it would be great, but if they don't, that's no reason to deprive everyone of their use generally.
This issue is marked as "language change" and "Go 2" which I think makes it much less likely to be considered.
Would it break the Go 1 compatibility promise to add support for uint128/int128?
Years ago context.Context was added, and that required adding new functions and types to a huge number of standard library packages.
@chrispassas All language changes are marked "Go 2". That's just how we do things. As you can see from release notes, the language changes we've made in the last couple of releases were tied to "Go 2" issues, although they were backward compatible.
Would it still make sense to add these types if we implemented #19623 in one form or another?
> Would it still make sense to add these types if we implemented #19623 in one form or another?
Yes. When I've missed int128, it's because I want 128 bits. I end up using two uint64s and implementing the annoying conversion/arithmetic logic myself.
When I want arbitrary precision arithmetic, using big.Int is not a problem.
(I deeply hope that #19623 never happens.)
I agree completely with @cespare
int128 and uint128 should be implemented regardless of what happens with #19623.
Yes, #19623 and this are not related, and the commentary on #19623 indicates as much. Otherwise we may as well get rid of all the other sized types as well.
I think uint128 is useful on its own, e.g. for optimizations, as it corresponds nicely to 128-bit registers on amd64. The compiler can easily and efficiently generate single instructions for compares and loads/stores on amd64, allowing high-level Go code to utilize register widths larger than 64 bits. This can be useful for writing memcpy or compare loops more efficiently without resorting to assembler. If the compiler is made to recognize combined loads, as is already implemented for 32/64 bits, then loops like the one in https://go-review.googlesource.com/c/go/+/228823/5/src/unicode/utf8/utf8.go#497 can be sped up further without resorting to assembler.
(I understand that we could make this work already with the current compiler, but it would likely be more complex to discover loads to different array indices.)
This could also replace uses of [2]uint64 for memequal in current code:
https://github.com/golang/go/blob/2b8e60d464515634462ca472ca09c791e2cbf6ae/src/runtime/alg.go#L252
Platforms that don't support 128-bit registers natively can split them into two 64-bit registers in the backend, much like 64-bit values are split into two 32-bit values on 386.
Personally I have seen more potential utility in having uint128 than existing uses for complex128 in everyday coding.
I agree with @stevenj. There's no excuse not to add 128-bit integer types, just like there's no reason why we shouldn't, in the future, expose architecture-level intrinsics like SSE and AVX vector types. Used correctly, 128-bit integer types can make code a lot easier to read and reason about.
upvote -> network programming. 2xuint64 structs are a hacky mess of a workaround. it's 2020....
One more reason: blockchains use uint256 as a combination of [4]uint64; it would be much faster as a [2]uint128.
@k06a Or just uint256. I don't think it's unreasonable for a compiler to supply native integers two sizes bigger than the base machine word. In this case the most common base machine is 64-bit, so I don't think it's unreasonable or excessively hefty for the compiler to supply 128- and 256-bit ints. No one complains when GCC happily provides 32-bit ints for 8-bit micros (such as AVR chips).
uint128 would be great for all the blockchain stuff written in Go. Do you know how difficult it is to serialize a big.Int?