Currently we have Int, which is an alias for Int32 or Int64 depending on the system architecture. I think it would be a good idea to implement that for floating point numbers as well.
There isn't really a "system floating point". Making Float an alias of typeof(0.0) (i.e. Float64) might be ok if it does not confuse C/C++ users, for whom float means Float32.
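For concreteness, the alias being proposed would be a one-line definition along these lines (just a sketch of the idea, not something that exists in Base):

const Float = typeof(0.0)  # hypothetical alias; typeof(0.0) is Float64 on every supported platform
Float === Float64          # true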
This has been discussed multiple times on various mailing lists and probably in GitHub issues. Should probably have an FAQ entry. Changing this to a documentation issue.
Yes, that seems fine. The real reason I thought it might be useful is that it is not entirely clear what the best (i.e. most performant) way is to write code that is independent of the system architecture.
When I was a Julia newbie I interpreted Int as "I don't care to specify the exact type, so long as it's an integer and quick to compute with". For that reason I've always wanted an analogous Float type, and the lack of one was confusing to me as a beginner. So long as the addition of one would not confuse other segments of Julia's userbase, such as those coming from C or C++, I would be strongly in favor of @yuyichao's proposal.
Making Float an alias of typeof(0.0) (i.e. Float64) might be ok if it does not confuse C/C++ users, for whom float means Float32.
I'm concerned that it will. There is already similar confusion between Complex and Fortran COMPLEX (see #8142).
The idea that your system's word size determines a preferred floating-point size is a misconception that should be corrected, not something that should be catered to and reinforced. Hence, the doc tag.
What was the rationale for allowing Int then?
Because your system's word size does determine a preferred integer size.
Wouldn't the preferred integer size imply a preferred floating point size, since floating point numbers can be represented as the ratio of two integers and are rational?
Wouldn't the preferred integer size imply a preferred floating point size, since floating point numbers can be represented as the ratio of two integers and are rational?
The representability of floats as rationals has nothing to do with how they are represented in hardware. And if you mean by this statement that 64-bit floats can be represented as the ratio of 64-bit integers, then it is not even a true statement. Infs and NaNs are not representable as a ratio of any two integers with a nonzero denominator. And very large finite Float64s like 1.0e308 cannot be represented as the ratio of two 64-bit integers.
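A quick sanity check at the REPL makes the last point concrete (hypothetical session):

typemax(Int64)            # 9223372036854775807, roughly 9.2e18
1.0e308 > typemax(Int64)  # true: no Int64 numerator comes anywhere near 1.0e308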
I was referring more to the fact that the significand, base, and exponent of a floating point number are integers, although they are not 64-bit integers. Which I guess answers my question: if 64-bit integers implied a preferred floating point size, it would be Float129 (1 bit for the sign and 64 bits each for the exponent and significand).
64-bit integers would imply a preferred floating point size, but it would be Float129 (1 bit for the sign and 64 bits each for the exponent and significand)
You are assuming that modern hardware shares logic between floating point and integer operations, which is mostly not true.
Ok, I am no expert on floating point implementation; I'm just pulling things from Wikipedia. Anyway, I found a link to a previous discussion that explains it well enough for me.
It was mentioned that it is bad form to have abstract types in record fields. I am assuming that the best way to handle both Float64 and Float32 would be something like this.
type foo{T<:FloatingPoint}
x::T
end
for type definitions and
function foo{T<:FloatingPoint}(x::T)
println(typeof(x))
end
for functions.
Is there any performance cost in doing this? Is it Julian?
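For reference, a rough usage sketch of the parametric type (hypothetical session, keeping the pre-1.0 syntax used in this thread, and assuming only the type definition above has been evaluated):

a = foo(1.0)              # foo{Float64}: the field is stored as a concrete Float64
b = foo(1.0f0)            # foo{Float32}: same definition, separately specialized layout
typeof(a.x), typeof(b.x)  # (Float64, Float32)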
I'm concerned that it will. There is already similar confusion between Complex and Fortran COMPLEX (see #8142).
FWIW, at least making Float an alias of Float64 won't have that big of a performance impact. (I'm only suggesting that because I've mistyped Float64 as Float again today...)
Is there any performance cost in doing this? Is it Julian?
That should be the way to do it for type parameters.
function foo{T<:FloatingPoint}(x::T) println(typeof(x)) end
This is not necessary since the compiler will specialize on the type of x anyway. However,
function foo{T<:FloatingPoint}(x::Array{T}) println(typeof(x)) end
The parameter here is usually necessary (unless you only want to accept Array{FloatingPoint}). The reason is that Array{Float64} and Array{FloatingPoint} are different types, and neither is a subtype of the other.
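A short illustration of that invariance (FloatingPoint is the pre-1.0 name of AbstractFloat):

Float64 <: FloatingPoint                # true
Array{Float64} <: Array{FloatingPoint}  # false: parametric types are invariant in their parameters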
function foo{T<:FloatingPoint}(x::Array{T}) println(typeof(x)) end
Will the compiler specialize on the type? That is to say, is it equivalent to writing
function foo(x::Array{Float32})
println(typeof(x))
end
function foo(x::Array{Float64})
println(typeof(x))
end
Will the compiler specialize on the type? That is to say, is it equivalent to writing
Yes.
In most cases, specifying a type parameter on a _function_ won't cause the compiler to specialize the code any further; it's only important for distinguishing between different types in dispatch. In other words, the function will be just as efficient even if you just write foo(x) = println(typeof(x)).
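A quick way to see that (hypothetical session; bar is just an illustrative name to avoid clashing with the definitions above):

bar(x) = println(typeof(x))  # no type annotations or parameters at all

bar([1.0, 2.0])      # prints the concrete type Array{Float64,1}: the compiler specialized for this argument
bar([1.0f0, 2.0f0])  # prints Array{Float32,1}: a separate specialization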
@lstagner in 32-bit architectures Int is an alias for Int32; in 64-bit architectures it is an alias for Int64. At the beginning this also confused me, since I thought that the same must be true for floats, i.e. Float32 for 32-bit architectures and Float64 for 64-bit architectures. But 32-bit architectures also use Float64 (double precision floating point) by default.
Did I understand correctly? Please correct me if I'm mistaken so I can update the FAQ, thanks!
Sounds about right. 32 vs. 64 bit floats is less about system architecture and more about numerical precision. I believe the conflation of the two concepts is the primary source of the confusion.
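To make that concrete (values shown are what a session would report; only Int depends on the build):

Int            # Int32 on a 32-bit build, Int64 on a 64-bit build
typeof(1.0)    # Float64 on either architecture
typeof(1.0f0)  # Float32, only when explicitly requested with the f0 suffix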
An argument in favor of having a type alias of Float for Float64 is that Julia coders would then not have to think about the size of the float they need when coding for high-level tasks like symbolic math, plotting, etc. This would put Int and Float on the same footing in the sense that there would be a sensible default size for both (even though the default size is determined differently for Int vs. Float). This is, e.g., Python's approach: http://stackoverflow.com/a/31470078
That Python link is unrelated to this discussion. The sensible default size for a floating point number is always Float64, unless your algorithm demands something else. Unlike Int, it has nothing to do with the host processor.
@vtjnash - Right, but having a Float alias would make things more convenient for high-level tasks (eliminating the need to specify a size for a floating point number) at the expense of possibly conflating things for lower-level tasks (Float == Float64 for both 32- and 64-bit systems, whereas Int == Int32 for 32-bit systems but Int64 for 64-bit systems). So this is a question of whether Int/Float should (1) specify sensible default sizes or (2) refer to the underlying architecture. A Float alias makes sense for (1), but not for (2), as you mention.
This would be actively confusing since in C float = Float32 and in Julia you'd have Float = Float64. Not the end of the world, but it's totally unnecessary confusion.
I see. In that case Double might be a better choice for a Float64 alias for (1) in my previous response. I don't have that strong a preference for (1) over (2) - I just wanted to present a good (?) argument in favor of an alias for Float64, as there were only arguments against it in this thread. (Also, note that no other type requires such a size specifier.)
Also, note that no other type requires such a size specifier.
How so? See Int8, Int16, Int32, Int64, Int128, and the unsigned versions thereof. For C heritage / naming compatibility we already have a Cdouble type. On all common platforms that's going to equal Float64, which follows the Julia naming convention for fixed-width floating point type names.
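For reference, the C-compatibility aliases already pin these down (these names exist in Base; the equalities hold on all common platforms):

Cfloat  === Float32  # C's float
Cdouble === Float64  # C's double
Cint    === Int32    # C's int, even on 64-bit systems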
Given recurrent questions/suggestions on this subject, I wonder if the absence of a Float alias is really less confusing than what would result from having Float != Cfloat (in particular given that Int != Cint already, and also that there is an unambiguous "sensible default size for floating point numbers").
I'd rather actually consider removing the system-dependent Int type (and default to parsing integer literals as Int64 even on 32-bit) since it's the most common reason packages don't work correctly on 32-bit. Bitstypes have sizes; name things what they are.
@tkelman - What I meant by "no other type requires such a size specifier" is that I can just use e.g. the default Int type without having to remember how many bits it consumes under the hood, reserving the more explicit Int8, Int16, etc. for only those cases where I care about how many bits I need to allocate. Currently, only the floating point types require me to specify a number of bits in all cases, so analogously, it would be nice to have a default Float type as a fallback for high-level usage, so that I don't have to remember to use the 64-bit version when performing tasks like symbolic math, plotting, etc. in Julia. I personally have enough experience to know to use Float64 most of the time in high-level cases (unless I have a good reason not to), but not having a default Float type adds a level of complexity for those who are not as familiar with system architectures or with how values are stored as bits under the hood. As it currently stands, Julia sits in a sweet spot between being a "fancy calculator" (requiring minimal knowledge of programming concepts) and having the ability to write/interact with low-level code. Having both a Float default and the more specific Float16, Float32, etc. would be nice for both ends of the spectrum without compromising on either, IMHO.
Even though I don't personally care about the 32-bit stone age, I expect that defaulting to 64-bit integers would hurt performance. Even a single add would be at least two instructions (I'm hoping LLVM legalizes a 64-bit add into a 32-bit add + adc) and you would lose half your registers. Mostly for nothing, too, since most practical integers fit easily in 32 bits. I think the native register size is important enough that it's a waste of time to try to abstract it away.
If we're gonna pick a default size for literals then I guess we should pick 32-bit ints like C, but I'm not sure I like that.
I'd rather have things work and be slow than not work at all. Most package authors, like you, don't care about or ever test with a 32-bit build.
@tkelman - keep in mind that people may want to run Julia on 32-bit embedded systems. If we changed Int to always be 64-bit, then we'd basically be ensuring that Julia will never be a good thing to run on such systems. Even if a fair number of packages are broken on 32-bit, they're usually not _too_ hard to fix.
If we're gonna pick a default size for literals then I guess we should pick 32-bit ints like C, but I'm not sure I like that.
To nitpick, C does not pick a specific size.
In fact, if we're worried about better support for 32-bit, I'd be much more in favor of making it possible to run Julia in 32- and 64-bit modes on both 32- and 64-bit systems. That way you could test that packages work with either size of Int and you would be able to run programs the same everywhere. Last I talked to @JeffBezanson about this idea, he thought it would be too hard to make work.
There used to be an int literals flag, but I don't think it ever worked.
Are you suggesting trying to make fat binaries that include both arches, or just resurrecting and fixing the int literals flag?
64 bit systems can generally download and run 32 bit binaries just fine. Possibly need to install an i386 libc first on linux. BinDeps doesn't support this situation very well though if the system compilers are still 64 bit by default.
I mean just changing the meaning of Int and making that work with the libraries. Due to autoconversion, as long as people are using the right Cfoo names in ccalls, things _should_ work, in theory. In practice, they probably won't.
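For example, a ccall written with the C-named aliases is insulated from whatever Int happens to mean on a given build (a minimal sketch using libc's strlen):

n = ccall(:strlen, Csize_t, (Cstring,), "hello")  # signature uses C-sized types, not Int
Int(n)  # 5, regardless of whether Int is Int32 or Int64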