Somewhat related to #9055 I think, or at least that's the closest issue I could find. As things stand now, BigInt and BigFloat aren't compatible with the IO::ByteFormat methods, even though they are numbers and probably should be.
Right now the only way I know of to write a BigInt to an IO is to first convert it to a hex string, then pad it, then convert that to a slice. It would be nice to have a more efficient, not to mention cleaner, way of doing so.
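For reference, the roundabout route being described looks roughly like this (a minimal sketch; the exact padding the author uses isn't shown in the issue):

```crystal
require "big"

# BigInt -> hex string -> padded -> byte slice.
n = 1234567890.to_big_i
hex = n.to_s(16)                 # "499602d2"
hex = "0" + hex if hex.size.odd? # pad to a whole number of bytes
bytes = hex.hexbytes             # => Bytes[73, 150, 2, 210]
```

This allocates an intermediate String (and possibly a second, padded one) just to get at the bytes, which is the inefficiency being complained about.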
I'm not sure #decode can be implemented. Unlike normal ints, a BigInt doesn't have a fixed bytesize. What should it do, create a BigInt from all the bytes remaining in the IO?
However, if for now you want something more efficient than your solution, you could do something like this:
```crystal
require "big"

lib LibGMP
  # Binding for mpz_export. GMP declares the countp, size and nails
  # parameters as size_t, so LibC::SizeT is used here. Note that
  # mpz_export only writes the magnitude: the sign is not preserved.
  fun export = __gmpz_export(rop : Void*, countp : LibC::SizeT*, order : Int32, size : LibC::SizeT, endian : Int32, nails : LibC::SizeT, op : MPZ*) : Void*
end

struct BigInt
  def bytes(format = IO::ByteFormat::SystemEndian)
    e = format == IO::ByteFormat::BigEndian ? 1 : -1
    size = LibGMP.sizeinbase(self, 256) # number of base-256 digits, i.e. the bytesize
    Bytes.new(size).tap { |s| LibGMP.export(s, nil, e, 1, 1, 0, self) }
  end
end

n = 1234567890.to_big_i
n.bytes(IO::ByteFormat::LittleEndian) # => Bytes[210, 2, 150, 73]
n.bytes(IO::ByteFormat::BigEndian)    # => Bytes[73, 150, 2, 210]
```
Yeah, the problem here is that there's no absolute standard for serializing arbitrary-length integers or floats, so it will easily end up application-specific.
So the problem isn't in writing the BigInt to the IO/Slice, but in reading it back, right? Converting it to bytes is simple enough, but converting back from an IO or Slice that contains other serialized datatypes isn't possible without some kind of consensus on what a byte-serialized integer of arbitrary length should look like.
If that's the case, I'd settle for just having the #bytes method @Exilor came up with above. Most of the time when I want BigInt serialization it's BigInt -> Bytes, not the other way around, and the BigInt -> Hex String -> Byte Slice method is really inefficient.
@watzon The problem with reading a BigInt's bytes from an IO is that there's no way to know where they end and where some other value's bytes begin, because a BigInt can have any bytesize.
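One common application-level workaround (not something proposed in this thread, just a hedged sketch) is to frame the value with a length prefix, so the reader knows exactly how many bytes belong to the BigInt. The `write_big`/`read_big` helpers below are hypothetical names, and they go through the hex-string route so the sketch stays self-contained; they handle non-negative values only, for brevity:

```crystal
require "big"

# Hypothetical length-prefixed framing: write the byte count as a UInt32,
# then the magnitude bytes, so a reader knows where the BigInt ends.
def write_big(io : IO, n : BigInt)
  hex = n.to_s(16)
  hex = "0" + hex if hex.size.odd? # pad to a whole number of bytes
  bytes = hex.hexbytes
  io.write_bytes(bytes.size.to_u32, IO::ByteFormat::LittleEndian)
  io.write(bytes)
end

def read_big(io : IO) : BigInt
  size = io.read_bytes(UInt32, IO::ByteFormat::LittleEndian)
  buf = Bytes.new(size)
  io.read_fully(buf)
  buf.hexstring.to_big_i(16)
end

io = IO::Memory.new
write_big(io, 1234567890.to_big_i)
io.rewind
read_big(io) # => 1234567890
```

This is exactly the kind of "standard of our own" the discussion alludes to: it works, but any two programs exchanging data have to agree on the framing in advance.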
Edit: If you want to do both things:
```crystal
require "big"

lib LibGMP
  # Bindings for mpz_export/mpz_import. GMP declares the count, size and
  # nails parameters as size_t, so LibC::SizeT is used here. Both functions
  # deal with the magnitude only: the sign is not preserved.
  fun export = __gmpz_export(rop : Void*, countp : LibC::SizeT*, order : Int32, size : LibC::SizeT, endian : Int32, nails : LibC::SizeT, op : MPZ*) : Void*
  fun import = __gmpz_import(rop : MPZ*, count : LibC::SizeT, order : Int32, size : LibC::SizeT, endian : Int32, nails : LibC::SizeT, op : Void*)
end

struct BigInt
  def bytes(format = IO::ByteFormat::SystemEndian)
    e = format == IO::ByteFormat::BigEndian ? 1 : -1
    size = LibGMP.sizeinbase(self, 256) # number of base-256 digits, i.e. the bytesize
    Bytes.new(size).tap { |s| LibGMP.export(s, nil, e, 1, 1, 0, self) }
  end

  def self.new(bytes, format = IO::ByteFormat::SystemEndian) : self
    e = format == IO::ByteFormat::BigEndian ? 1 : -1
    new { |mpz| LibGMP.import(mpz, bytes.size, e, 1, 1, 0, bytes) }
  end
end

BigInt.new(UInt8.slice(210, 2, 150, 73), IO::ByteFormat::LittleEndian) # => 1234567890
BigInt.new(UInt8.slice(73, 150, 2, 210), IO::ByteFormat::BigEndian)    # => 1234567890
```
Right, that's what I thought. It's unfortunate, but there's nothing we can do about it unless we come up with a standard of our own, I guess. BigInt#bytes would be nice to have, though.
You could also simply write the BigInt as a string to the IO and then, on the other side, read the string back and call #to_big_i on it.
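That string round trip can be sketched in a few lines; here a newline is used as the delimiter, which is an arbitrary choice for illustration:

```crystal
require "big"

# Write the decimal form of the BigInt followed by a delimiter, then
# read a line back and parse it with #to_big_i.
io = IO::Memory.new
io << 1234567890.to_big_i << '\n'
io.rewind
big = io.gets.not_nil!.to_big_i # => 1234567890
```

It's less compact on the wire than raw bytes, but it sidesteps the framing problem entirely as long as both sides agree on the delimiter.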