Currently we're doing what scalac does with respect to this issue. Example:
List(1, 2.3) // res: List[Double]
From a user standpoint, a stricter language would have inferred this as List[Int | Double]. scalac has a flag for this, -Ywarn-numeric-widen, which is not yet implemented in Dotty.
If we choose not to make inference for this type of construct stricter, then we should definitely implement the -Ywarn-numeric-widen flag.
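For concreteness, a minimal illustration of the behavior in question; the stricter alternative is shown as a comment since neither compiler infers it today:

val xs = List(1, 2.3)   // inferred as List[Double]; the Int literal 1 is widened to 1.0
// A stricter scheme might instead infer a union element type:
// val ys: List[Int | Double] = List(1, 2.3)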
Could we simply remove weak conformance from the language altogether? Seems like a thing inherited from Java that few like anyway.
Strictly speaking we don't have weak conformance in Dotty anymore. What we have is a new, more restricted and more localized rule. It applies to the elements of a vararg argument, the alternatives of an if/else or match expression, and the results of a try expression.
If the expressions in these scenarios all have primitive numeric types, we make sure it's the same numeric type by widening as necessary.
The new rule is less sweeping and much easier to specify than the old one.
(See harmonize in Applications for the implementation).
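As a rough sketch of where the localized rule kicks in (the object and method names here are mine, purely for illustration):

object HarmonizeSketch {
  def demo(cond: Boolean, n: Int): Unit = {
    val a = if (cond) 1 else 2.0   // both branches are numeric literals, harmonized to Double
    val b = n match {
      case 0 => 1L
      case _ => 2                  // case results harmonized to Long under this rule
    }
    val c = Array(-2.33, 0, 3.0)   // vararg elements harmonized, giving Array[Double]
    println((a, b, c.toList))
  }
}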
Could we experiment with disabling it completely, i.e. making it a union type or a type error? It still seems like the new approach wouldn't address the problems some people complain about with longs getting "widened" to floats, and precision being lost.
Yes, this has been requested before: https://github.com/lampepfl/dotty/issues/2289
But meanwhile the compiler has been changed to never infer union types: https://github.com/lampepfl/dotty/pull/2330 The proposed alternative was warnings, which I'm not super enthusiastic about, unless they're turned on by default.
I believe there's lots of code out there, in particular dealing with numerics, where you write something like
Array(-2.33, 0, 3.0)
and so on. What's the point in requiring people to write 0.0? In my mind it's just pedantic busywork which will alienate people. By contrast, what's the gain? What do we stand to gain in inferring a useless type?
The other observation I have is that the amount of heat is often inversely proportional to the importance of an issue. This one here is about as unimportant as it gets. Scala's original weak conformance rules could be criticized for being too complicated. The new rules aren't. Are there constructed corner cases where they might do something that surprises some people? Sure. Should we care? I guess you know my position.
@odersky, in cases where constants are explicitly written, we can actually check at compile time whether precision is lost during numeric conversion.
@DarkDimius Indeed.
unless you can determine at compile time that the conversion is safe, so for example a 0 literal would convert without any cost
(from my brainstorming in https://github.com/lampepfl/dotty/issues/2289#issuecomment-296276049)
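A minimal sketch of that check, written as an ordinary function that can be pasted into a REPL (the function name is mine; the compiler would run the equivalent test on the constant-folded literal, and a real check would also need to handle overflow near Long.MaxValue):

// True iff widening this particular Long constant to Double and narrowing back
// preserves the value, i.e. the conversion is lossless for this constant.
def widensToDoubleExactly(n: Long): Boolean = n.toDouble.toLong == n

widensToDoubleExactly(0L)                   // true: a 0 literal converts without any cost
widensToDoubleExactly(92233720368547751L)   // false: the nearest Double is 9.2233720368547744E16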
@odersky I think the "heat" argument works both ways: we shouldn't make too much fuss about allowing people to write 0 when it would be reasonably trivial for them to write 0.0 if they want a double (especially as they have to adhere to this expectation of type-safety elsewhere). And given that the solutions to this (i.e. the "complicated" weak conformance in Scala, or the new scheme) are themselves complicated, shouldn't the burden be on the user to just add .0 everywhere? But I think that discussion isn't worth having.
So as a more radical alternative, could we instead infer the type of literal numbers to be the intersection of all the types in which those literals are precisely representable, and then deal with removing the intersections during erasure?
1:                  Int(1) & Long(1L) & Double(1.0) & Float(1.0F)
1L:                 Long(1L)
2147483648:         Long(2147483648L) & Double(2.147483648E9)
3.141592653589793:  Double(3.141592653589793)
3.1415927:          Double(3.1415927) & Float(3.1415927F)
3.1415927F:         Float(3.1415927F)
3.1415927D:         Double(3.1415927)
The LUBs in an expression like Array(-2.33, 0, 3.0) would fall out to the "right" answer (i.e. Double), without any special rules. Likewise for if/else expressions and match blocks.
This may also help solve the problem, which still exists in both Dotty and Scala, where an expression like 1L :: 2 :: 3 :: Nil is inferred as List[AnyVal]. My hope would be that 3 :: Nil could be typed as List[Int(3) & Long(3L)], 2 :: 3 :: Nil would be List[Int & Long], and 1L :: 2 :: 3 :: Nil would infer to List[Long].
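For reference, the current behavior that paragraph describes (pasteable into a REPL; the comments show today's inference, not the proposal):

val a = 3 :: Nil             // List[Int]
val b = 2 :: 3 :: Nil        // List[Int]
val c = 1L :: 2 :: 3 :: Nil  // List[AnyVal]: the type parameter of :: is inferred as lub(Long, Int)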
I think the challenge would be in finding the correct point to widen the type. It seems to work well for singleton literals currently (though maybe we're all just too accustomed to working with their widened primitive types instead of the singleton types), but having types like Int & Long & Float & Double inferred everywhere would be a visual distraction, so we would want to avoid that, I suspect by using those types only for local inference rather than for result types...
And cases like this would remain problematic:
val list = List(1, 2, 3)
val list2 = 0L :: list
unless they were typed explicitly as
val list: List[Int with Long] = List(1, 2, 3)
val list2 = 0L :: list
And as a wild future idea, could we support BigInteger and BigDecimal literals just by adding them to the intersection? I have a feeling doing so might end up working only if everything gets boxed, in which case it becomes a less interesting idea, but I don't know the details...
@odersky what about the following:
object Foo {
  def main(args: Array[String]): Unit = {
    val list = List(1.2, 92233720368547751L, 4)
    val long = 92233720368547751L
    println(list)
    println(list(1))
    println(list(1).getClass)
    println(list(1).toLong)
    println(long)
    println(list(1).toLong == long)
  }
}
sbt: [info] Running Foo
List(1.2, 9.2233720368547744E16, 4.0)
9.2233720368547744E16
double
92233720368547744
92233720368547751
false
(https://scastie.scala-lang.org/btdDoZBcRNqJRWm12VMXWA)
Here, we have the same literal resulting in two different, unequal values, due to widening/conformance/whatever. There are no type annotations here, so I don't expect any implicits to kick in: if anything, I'd expect to get a List[Any] containing the mix of Longs and Doubles that I wrote in the source code. But I don't, and so 92233720368547751L ends up being two different values.
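A minimal illustration of why the two values differ, assuming the usual IEEE-754 Double with a 53-bit mantissa:

val n = 92233720368547751L
val widened = n.toDouble       // 9.2233720368547744E16, the nearest representable Double
println(widened.toLong)        // 92233720368547744
println(widened.toLong == n)   // false: the round trip does not recover the original literal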
Surely the same literal resulting in different values is a real problem, and that's not just pedantic busywork?
I don't really care what the solution is, whether it's union types or intersection types or inferring Any or whatever. I just don't want my numbers mysteriously, irreversibly losing precision despite there being no explicit conversions in sight.
Somebody would have to take this on. This means:
It's a big project. I don't see anybody on the core team having the stamina to do it. Until that changes or we have an outside contributor, I will close the issue.