V version: V 0.1.21 f7c00b8
OS: Mac OS
As a safe language, V should not allow implicit type conversion at all, or at least should not compile if the implicit conversion can lead to overflow or loss of precision. Currently the behaviour is inconsistent.
With unsigned types, it's strict and does not allow implicit conversion at all:
```v
fn main() {
	x := u64(1)
	y := u32(1)
	println(x + y)
}
```
```
test2.v:8:15: expected type `u64`, but got `u32`
    7|     y := u32(1)
    8|     println(x + y)
                       ^
    9| }
```
The same error occurs with the operand types swapped (`x := u32(1)`, `y := u64(1)`):
```
test2.v:8:15: expected type `u32`, but got `u64`
    7|     y := u64(1)
    8|     println(x + y)
                       ^
    9| }
```
With signed integers, it allows implicit conversion if there is no possibility of overflow:
```v
fn main() {
	x := i64(1)
	y := int(1)
	println(x + y)
}
```
This compiles and prints:
```
2
```
But with the operands swapped (`x := int(1)`, `y := i64(1)`), where the `i64` would have to narrow to `int`, it does not compile:
```
test2.v:8:15: expected type `int`, but got `i64`
    7|     y := i64(1)
    8|     println(x + y)
                       ^
    9| }
```
With floats, it allows anything (and the result can lose precision):
```v
fn main() {
	x := f64(1)
	y := f32(1)
	println(x + y)
}
```
```
2.000000
```
Even with the operands swapped (`x := f32(1)`, `y := f64(1)`), where precision can actually be lost, it still compiles and prints:
```
2.000000
```
Good summary @avitkauskas
My idea was to allow implicit casts where data can't be lost (int => i64, f32 => f64, int => f64) etc.
This is primarily to make math and game code less verbose:
```v
n := 0
q := 1
math.foo(n, q)
```
vs
```v
math.foo(f64(n), f64(q))
```
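A minimal runnable sketch of that friction (`foo` here is a hypothetical stand-in for a math routine taking `f64` parameters, not an actual `math` module function):
```v
// hypothetical stand-in for a math routine with f64 parameters
fn foo(a f64, b f64) f64 {
	return a + b
}

fn main() {
	n := 0
	q := 1
	// today every call site needs explicit casts:
	println(foo(f64(n), f64(q)))
	// with lossless int -> f64 promotion this would be enough:
	// println(foo(n, q))
}
```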
Nice! Then that should be stated in the docs.
And all of this has to be corrected to be consistent.
At the moment then:
u32 -> u64 should be allowed, but is not
f64 -> f32 should NOT be allowed, but it is
int -> u32 is also not allowed now, but should.
All in all, all combinations should be checked and made to work as intended.
Each type on the left should be allowed to be implicitly converted to anything to its right:
i8 -> u8 -> int -> u32 -> i64 -> u64
With floating point it's a bit more complicated, I think, as a 32-bit float has only 23(+1) significant bits for values, and a 64-bit float has 52(+1) significant bits.
So, it should be:
i8, u8 -> f32 -> f64
int, u32 -> f64
But NOT:
int, u32 -> f32 (see the sketch after this list)
i64, u64 -> f64
And, of course, no conversion from float to integer should be allowed.
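A quick illustration of why `int -> f32` (and, by the same reasoning, `i64/u64 -> f64`) is excluded; this is just a sketch, and the exact output formatting of `println` may differ:
```v
fn main() {
	// 2^24 + 1 = 16777217 needs 25 significant bits, one more than f32 holds,
	// so an implicit int -> f32 conversion would silently round it
	n := 16777217
	println(f32(n)) // rounds to 16777216 -- the trailing 1 is lost
	println(f64(n)) // still 16777217 -- f64 has enough bits for any 32-bit int
}
```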
Hope that helps :)
You are correct:
```
warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
```
> int -> u32 is also not allowed now, but should.
This would mean the value -33 (int) would be converted to 33 (unsigned)? To me this is outright falsification of data.
I would rather see only the following rules allowed:
Integers
i8 -> i16 -> int (i32) -> i64 -> i128
byte (u8) -> u16 -> u32 -> u64 -> u128
u8 -> i16
u16 -> i32
u32 -> i64
u64 -> i128
Floats
f32 -> f64 -> f128
Mixed:
u8,i8,u16,i16 -> f32
u32,i32 -> f64
u64,i64 -> f128
float -> int conversion, in general, only via explicit floor/ceiling/round etc. functions
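Roughly what that could look like with the existing `math` module (assuming `math.floor` / `math.ceil` / `math.round`, which operate on `f64`):
```v
import math

fn main() {
	x := 2.7
	// the rounding intent is always spelled out; no bare float -> int cast
	println(int(math.floor(x))) // 2
	println(int(math.ceil(x)))  // 3
	println(int(math.round(x))) // 3
}
```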
> int -> u32 is also not allowed now, but should.
Indeed, this shouldn't be allowed. I missed that.
The rule is simple. If A is a subset of B, then A -> B is allowed.
Oh yes! @gslicer, you are correct! That was a big mistake of mine!
Probably it depends on the situation? It should definitely NOT be allowed for function parameters, but what about this?
```v
a := -50
b := u32(3000) + a
```
Should we explicitly cast in this case?
We have to give more thought to that.
@avitkauskas I would allow something like:
```v
a := -33
b := a.abs() // absolute value
print(b) // 33
```
For your example I would indeed like to have "mixed type evaluation", as long as the result can fit into the receiver's type without value falsification (though I'm not sure how expensive such a check would be for the compiler).
@gslicer A safe language should earn its name :)
> Should we explicitly cast in this case?
This is the main reason Go doesn't have any implicit casts. This always results in tons of rules people need to remember.
So I'd say if it's not part of our small table we defined, cast it manually:
```v
u32(3000) + u32(a)
```
or
```v
3000 + a
```
I did not want to encourage implicit casting by any means, just to make sure it doesn't happen uncontrolled (or just plain wrong) ;)
@medvednikov Actually, even explicit casting raises questions:
```v
a := -50
b := u32(3000) + u32(a)
println(b)
```
Can you immediately tell the value of b?
Actually, it prints 2950, which looks like a bug, as if it were interpreted as
```v
b := u32(3000 + a)
```
And if you try this:
```v
b := u32(a)
```
you get 4294967246.
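That value is just modulo-2^32 wraparound; a quick check (assuming the usual C-style unsigned arithmetic underneath):
```v
fn main() {
	a := -50
	println(u32(a))             // 4294967246 == 4294967296 - 50
	println(u32(3000) + u32(a)) // 2950: (3000 + 4294967246) wraps modulo 2^32
	println(u32(3000 + a))      // 2950 as well -- both routes land on the same value
}
```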
And that's actually what is defined by the C standard - see it here with the comments:
http://c0x.coding-guidelines.com/6.3.1.3.pdf
But even the C standard makes me wonder. When you implicitly or explicitly cast signed to unsigned, or any wider integer to a type with fewer bits, you actually get quite a nonsensical value (unless you know 100% what you are doing, are sure about your compiler's implementation, and specifically need that - probably 0.001% of real-life cases).
Could it be done better in V? If we say V is a safe language, should only implicit (or explicit, if the user insists) casting by the rules correctly described by @gslicer be allowed in safe mode? And should all other castings that lose significant bits only be allowed in unsafe blocks?
Another option could be to implement these "dangerous" explicit castings with run-time boundary checking (slower, but safe), panicking when the value is out of range. The same could probably be done with math operations (safe addition, multiplication, etc.) - I think there was a question about that in some GH issue.
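A minimal sketch of such a checked cast (`u32_checked` is a hypothetical helper, only to illustrate the boundary check):
```v
// hypothetical checked conversion: panics instead of silently wrapping
fn u32_checked(x int) u32 {
	if x < 0 {
		panic('value out of range for u32')
	}
	return u32(x)
}

fn main() {
	println(u32_checked(3000)) // 3000
	println(u32_checked(-50))  // run-time panic instead of 4294967246
}
```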
Should that be how a safe language behaves?
I don't know whether all this would play havoc with existing code.
Just dumping my thoughts. Sorry.
> should only implicit (or explicit, if the user insists) casting by the rules correctly described by @gslicer be allowed in safe mode? And all other castings losing significant bits should only be allowed in unsafe blocks?
My instinct is to only require `unsafe` blocks for code that the compiler cannot verify as memory safe. Otherwise it's less clear that potential memory corruption can occur any time you see an `unsafe` block; it could just be value reinterpretation.
> Another option could be to implement these "dangerous" explicit castings with run-time boundary checking (slower, but safe), panicking when the value is out of range.
I agree that type conversion can produce unexpected values, but perhaps checked conversions can be encouraged as the 'safe' way to cast. I'd prefer not to slow down reinterpretative type casts.