#![feature(never_type)]
use std::mem::size_of;
fn main() {
    type V = Vec<!>;
    let v: V = Vec::with_capacity(123);
    println!("{} {} {}", size_of::<Option<!>>(), size_of::<V>(), v.capacity());
}
Expected: 0 0 18446744073709551615
Actual: 0 24 18446744073709551615
The length field is irrelevant, as the vector can only ever be empty. The `capacity()` getter doesn't read the stored field for ZSTs anyway (it reports `usize::MAX`), so the capacity field is irrelevant too. Obviously, `HashMap` and other data structures should collapse to a ZST as well when parameterized with `!`.
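The `capacity()` behaviour can be checked on stable Rust; a minimal sketch using `()` as a stand-in ZST, since `!` itself still requires nightly:

```rust
fn main() {
    // For zero-sized element types, Vec never allocates and
    // capacity() reports usize::MAX regardless of the request.
    let v: Vec<()> = Vec::with_capacity(123);
    assert_eq!(v.capacity(), usize::MAX);
    println!("{}", v.capacity());
}
```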
It's not only `!`: any ZST yields a `Vec` that takes 24 bytes. I'm pretty sure this is because of `Vec`'s own minimal size: `len: usize`, `cap: usize`, and `ptr: Unique<T>` need to be stored in memory even if the vector is empty (note that `usize` is 8 bytes, so `usize + usize + ptr` is 24).
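That layout claim is easy to verify; a small sketch (written as `3 * size_of::<usize>()` rather than a hard-coded 24, since 24 only holds on 64-bit targets):

```rust
use std::mem::size_of;

fn main() {
    // Vec<T>'s header is len + cap + ptr: three pointer-sized words,
    // independent of T (24 bytes on a 64-bit target).
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());
    assert_eq!(size_of::<Vec<()>>(), 3 * size_of::<usize>());
    println!("{}", size_of::<Vec<()>>());
}
```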
Can this useless ballast be optimized away? Maybe there could be some `ConditionalUsize<T>` that behaves like `usize` for normal types and like `()` otherwise. Or even something like `cap: ForSized<T, usize>, len: ForInhabited<T, usize>, ptr: ForSized<T, Unique<T>>`.
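One way to sketch the `ForSized`-style idea today is a manual per-type trait. All names here (`CapField`, `SlimVec`, `Never`) are invented for illustration, and without specialization every element type has to opt in by hand, so this is a sketch of the shape of the idea, not something `Vec` could do automatically:

```rust
use std::marker::PhantomData;
use std::mem::size_of;

// Hypothetical sketch: each element type chooses the storage type for
// the cap field, so uninhabited elements collapse it to ().
trait CapField {
    type Cap: Default;
}

// Ordinary types keep a real usize…
impl CapField for u32 {
    type Cap = usize;
}

// …while an uninhabited stand-in for `!` stores nothing at all.
enum Never {}
impl CapField for Never {
    type Cap = ();
}

#[allow(dead_code)]
struct SlimVec<T: CapField> {
    cap: T::Cap,
    _marker: PhantomData<T>,
}

fn main() {
    // One word for a normal element type, zero bytes for Never.
    assert_eq!(size_of::<SlimVec<u32>>(), size_of::<usize>());
    assert_eq!(size_of::<SlimVec<Never>>(), 0);
}
```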
It could in theory be optimized away through specialization, but what's the actual value of doing that? In what context are you working with enough `Vec<!>`s that 24 "useless" bytes for each matters?
Like with other degenerate cases, probably when code is generated by macros or some other means and relies on the optimizer to remove most of it and retain the meaningful part.
See also #45431.
It seems like this is too esoteric and doesn't have a concrete use case for us to track, and it's not _entirely_ clear to me that the claim holds. Any wins here are likely to be equivalent to #45431, as cited by cuviper.