Proposal: Defer calculation of field types until type parameters are known

Created on 13 Sep 2016 · 45 comments · Source: JuliaLang/julia

Currently, we can perform a limited amount of computation in a type definition, such as:

type A
    a::eltype(Vector{Int})
end

but we can't involve a type parameter, such as:

type B{V <: AbstractVector}
    a::eltype(V)
end

AFAICT, at the moment the field types A.types are calculated when the type is defined, and type parameters are inserted into the correct slots as they become known.

However, it would be nice if the types could be calculated by arbitrary inferrable or @pure functions. Another simple example (close to my heart) would be:

immutable StaticMatrix{M,N,T}
    data::NTuple{M*N, T}
end

However, this results in an error because multiplication is not defined for TypeVars. Instead, I need all of this code:

immutable StaticMatrix{M,N,T,L}
    data::NTuple{L, T}
    function StaticMatrix(d)
        check_params(Val{L}, Val{M}, Val{N})
        new(d)
    end
end

@generated function check_params{L,M,N}(::Type{Val{L}}, ::Type{Val{M}}, ::Type{Val{N}}) # could also be `@pure` in v0.5
    if L != M*N
        error("Type parameters don't match")
    end
end

and my users need to carry around the redundant L parameter whenever they need to specify a concrete type.
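For illustration, here is a self-contained sketch of the workaround in modern struct syntax (equivalent to the 0.4-era immutable version above; the inner-constructor form with an explicit where clause is current syntax, not the original code):

```julia
struct StaticMatrix{M,N,T,L}
    data::NTuple{L,T}
    function StaticMatrix{M,N,T,L}(d::NTuple{L,T}) where {M,N,T,L}
        # The length check runs on instantiation, since it cannot be
        # expressed in the field type itself.
        L == M*N || error("Type parameters don't match")
        new{M,N,T,L}(d)
    end
end

# Users must compute L = M*N by hand to name a concrete type:
m = StaticMatrix{2,3,Int,6}((1, 2, 3, 4, 5, 6))
# Under this proposal, StaticMatrix{2,3,Int} alone would suffice.
```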

For abstract types, I'm hoping that inference itself could still be used to come up with a least-pessimistic approximation of each field, or otherwise just use Any when that's not possible. If that makes it difficult to avoid regressions, straight types (combinations of types and the relevant TypeVars with apply_type but no other functions) could keep functioning as they currently do.

Labels: speculative, types and dispatch


All 45 comments

fwiw, there's no advantage of your complicated @generated function (and some disadvantage) over the naive version on any version of Julia (e.g. they end up running the same code):

immutable StaticMatrix{M,N,T,L}
    data::NTuple{L, T}
    function StaticMatrix(d)
        L == M*N || error("Type parameters don't match")
        new(d)
    end
end

also, duplicate of https://github.com/JuliaLang/julia/issues/8472

duplicate of #8472

Respectfully, I thought #8472 was much more general than this, being about a fully @generated type along the lines of GeneratedTypes.jl. I felt this issue isn't really a duplicate, since it suggests a much narrower scope - allowing a slightly expanded set of pure functions for the field types - and should be less disruptive in implementation (the number of fields and their names are fixed). Speaking in person to @JeffBezanson (or maybe it was Stefan? sorry if I got that wrong!), he seemed reasonably positive about the idea.

fwiw, there's no advantage of your complicated @generated function (and some disadvantage) over the naive version on any version of Julia (e.g. they end up running the same code):

That's not at all what I see. Your code executes a run-time check and my code executes a compile-time check.

Run-time:

julia> immutable StaticMatrix{M,N,T,L}
           data::NTuple{L, T}
           function StaticMatrix(d)
               L == M*N || error("Type parameters don't match")
               new(d)
           end
       end

julia> StaticMatrix{2,2,Int,4}((1,2,3,4))
StaticMatrix{2,2,Int64,4}((1,2,3,4))

julia> @code_native StaticMatrix{2,2,Int,4}((1,2,3,4))
    .text
Filename: REPL[1]
    pushq   %rbp
    movq    %rsp, %rbp
    movq    %fs:0, %rax
    addq    $-2672, %rax            # imm = 0xFFFFFFFFFFFFF590
    vxorps  %xmm0, %xmm0, %xmm0
    vmovups %xmm0, -16(%rbp)
    movq    $4, -32(%rbp)
    movq    (%rax), %rcx
    movq    %rcx, -24(%rbp)
    leaq    -32(%rbp), %rcx
    movq    %rcx, (%rax)
Source line: 5
    vmovups (%rdx), %ymm0
    vmovups %ymm0, (%rdi)
    movq    -24(%rbp), %rcx
    movq    %rcx, (%rax)
    movq    %rdi, %rax
    popq    %rbp
    vzeroupper
    retq
    nopl    (%rax)

Compile-time

julia> @generated function check_params{L,M,N}(::Type{Val{L}}, ::Type{Val{M}}, ::Type{Val{N}}) # could also be `@pure` in v0.5
           if L != M*N
               error("Type parameters don't match")
           end
       end
check_params (generic function with 1 method)

julia> immutable StaticMatrix2{M,N,T,L}
           data::NTuple{L, T}
           function StaticMatrix2(d)
               check_params(Val{L}, Val{M}, Val{N})
               new(d)
           end
       end

julia> @code_native StaticMatrix2{2,2,Int,4}((1,2,3,4))
    .text
Filename: REPL[12]
    pushq   %rbp
    movq    %rsp, %rbp
Source line: 5
    vmovups (%rdx), %ymm0
    vmovups %ymm0, (%rdi)
    movq    %rdi, %rax
    popq    %rbp
    vzeroupper
    retq
    nopw    %cs:(%rax,%rax)

@vtjnash am I misinterpreting this? Perhaps the branch is eliminated in the first case (for instance, I don't see a call to error) but the calculation of L == M*N remains? Either way, the code is different...

This is basically dup of https://github.com/JuliaLang/julia/issues/15791

Please never use @code_native to tell the difference between two functions. It's almost useless unless you hit a codegen bug or if you are a CPU. The difference in the generated code is due to https://github.com/JuliaLang/julia/issues/17880 and https://github.com/JuliaLang/julia/issues/15369

The only operation permissible while constructing a type is allocation (http://docs.julialang.org/en/latest/devdocs/locks/). No other inspection of the system is possible in a thread-safe manner. Additionally, you would need to provide an algorithm for allocating them uniquely during deserialization (and handling any errors), despite the fact that the system is temporarily in an inconsistent state and examining it too closely will lead to errors (e.g. you can't look at the fields of a type, since they might still contain placeholders instead).

This is basically dup of #15791

That seems closer to the mark. Thanks @yuyichao.

Please never use @code_native to tell the difference between two functions.

Huh? This totally confuses me. I _only_ use @code_native to compare which of many possible implementations runs fastest.

I do understand the @code_native will be different on different CPUs and different versions of Julia. But if I want to ask: will Jameson's code or my code run faster on my computer right now, surely @code_native is the tool I want?

The reason I am using this "trick" is exactly because #17880 and #15369 will be fixed in the future and I am currently using Julia for real-world work now. Stating that two implementations "should" behave the same isn't really of practical value.

PS - @yuyichao I'm curious - could you explain how GC frame generation in #15369 affects my isbits types here?

@vtjnash Thank you very much for the insight. I totally admit that implementing this could be messy and complicated and might require changes to how the system works, and unless I can help somehow, then it remains up to you guys to figure out what is feasible, or worthwhile given the payoff. I had _hoped_ this would be simpler than #8472 (fully-generated types) since to me as an end-user it seems pretty non-disruptive (affecting only how DataType.types gets populated).

Thanks again. Cheers :)

This issue is very much related to #8322, which is still open and contains the ideas I requested here.

I do understand the @code_native will be different on different CPUs and different versions of Julia. But if I want to ask: will Jameson's code or my code run faster on my computer right now, surely @code_native is the tool I want?

No. The point is not how reproducible it is across systems but how easy it is to understand the difference. For 99% of cases (the 1% being codegen/LLVM bugs) code_llvm contains much more information than code_native, and for 99.9% of people (i.e. unless you are a CPU, or as good as one at understanding assembly) the code_llvm is much easier to understand and learn from. FWIW, I hand-wrote 2 of the instructions that appear in your longer version and I wasn't able to tell at first glance why it is longer than the second one; OTOH, it was extremely clear from code_llvm, and code_warntype then shows why the extra code is there when it is not needed.

The reason I am using this "trick" is exactly because #17880 and #15369 will be fixed in the future and I am currently using Julia for real-world work now. Stating that two implementations "should" behave the same isn't really of practical value.

The better workaround is to move the condition into a @pure function.
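A sketch of that workaround in modern syntax (check_len is a hypothetical name; on current Julia versions, ordinary constant propagation often makes Base.@pure unnecessary, and the annotation carries correctness caveats):

```julia
# Hoist the check into a small function so the L == M*N comparison can be
# evaluated at compile time for concrete type parameters.
Base.@pure function check_len(L, M, N)
    L == M * N || error("Type parameters don't match")
    nothing
end

struct SMat{M,N,T,L}
    data::NTuple{L,T}
    function SMat{M,N,T,L}(d::NTuple{L,T}) where {M,N,T,L}
        check_len(L, M, N)   # folds away when the sizes agree
        new{M,N,T,L}(d)
    end
end

m = SMat{2,2,Int,4}((1, 2, 3, 4))
```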

PS - @yuyichao I'm curious - could you explain how GC frame generation in #15369 affects my isbits types here?

https://github.com/JuliaLang/julia/pull/11508

The point is not how reproducible it is across systems but how easy it is to understand the difference.

Yes, llvm is usually easier to read, though I have noticed some operations in @code_llvm are more-or-less elided in @code_native, so I _guessed_ llvm does another optimization pass during the translation (actually, it definitely does - many basic functions are calls to an llvm function in @code_llvm, while in @code_native I can see which are inlined and which are actually allocating a stack frame, etc).

I think it's just unusual in that most of the code in _StaticArrays_ is extremely simple (e.g. adding two pairs of floats, or the (shorter) constructor above), so I found both formats readable and used a combination of both. But thank you for the advice - I'll err toward llvm from now on.

The better workaround is to move the condition into a @pure function.

Agreed (there was a comment in my code block about that, but you had to scroll sideways... the given version is 0.4-friendly).

#11508

Right, awesome, thank you! I'm glad for that optimization... previously I was really pessimistic about throwing errors in _StaticArrays_ since code is so performance sensitive but I see all this will work out beautifully in the end.

I guessed llvm does another optimization pass during the translation

99% of the time not in any way you would care about.

(actually, it definitely does - many basic functions are call to an llvm-function in @code_llvm while in @code_native I can see which are inlined and which are actually allocating a stack frame, etc).

Those are llvm intrinsics (i.e. instructions) and are never function calls to begin with.

99% of the time not in any way you would care about.

Exception being https://github.com/JuliaLang/julia/issues/16375

OK, thanks for all the useful info @yuyichao! I'll read up on the intrinsics, and then I should be good to use @code_llvm more frequently.

(On a side note, @code_warntype has become much less readable in v0.5... my functions are polluted with :metas (such as > 50% of code being metas for inbounds), and I'm _still_ confused about invoke...)

#8322 asked for two features: one the same as this issue, and one for computed subtyping. I believe those have very different implications, so we should have separate issues for them.

I do think it is possible to support this by storing functions to compute the field types (based on a syntactic heuristic, e.g. a type parameter occurs somewhere other than inside a type application A{B}). Until all type parameters are known, we can just assume those fields are Any. Those will be abstract types anyway.

Thanks guys.

...I believe those have very different implications, so we should have separate issues for them.

I do think it is possible to support this by storing functions to compute the field types (based on a syntactic heuristic, e.g. a type parameter occurs somewhere other than inside a type application A{B}). Until all type parameters are known, we can just assume those fields are Any. Those will be abstract types anyway.

Agreed.

Until all type parameters are known, we can just assume those fields are Any

Fair enough. If we prohibit types with generated fields from being isbits (e.g. guarantee they will be heap allocated) that should likely help with making the layout and precompile serialization computable (and not cause https://github.com/JuliaLang/julia/issues/18343, https://github.com/JuliaLang/julia/issues/16767, & friends to be reopened).

Do you mean just the abstract types? For the concrete types, isbits would be crucial in some circumstances, especially for the StaticMatrix example above.

I needed this desperately, so I've written an intermediate solution that uses macros and does allow correct type inference. It's still a prototype and a bit hacky, but it works :-). I guess one would implement this differently if one wanted to add language support, but I hope it's useful to everyone who is waiting for this feature.
The code can be found at https://github.com/tehrengruber/ExtendedParametricTypes

Nice.

there seems to be some weird, non-hygienic eval / Base.return_types in that code. It seems like the correct approach would be simply to call eval(current_module(), expr.args[1]) there?

I didn't want to eval expr.args[1] directly since it could be a more complex expression than just the unparameterized type name. When the evaluation of expr.args[1] depends on the context, there is no way to deduce the correct unparameterized type name (which is needed for a type-stable expansion) at macro expansion time. To check whether expr.args[1] depends on the context, I just use type inference. I compile a closure :(() -> $(expr.args[1])) and, in case the return type is inferred to Type{A} with A being concrete (i.e. not abstract), I know the unparameterized type name at macro expansion time and can generate a type-stable expansion. Otherwise I fall back to a non-type-stable version (which does not need to run eval).

Here is an example to illustrate where eval(current_module(), expr.args[1]) would fail.

function some_function(t::DataType)
    @EPT(t{Int})
end

In general, using eval here is not bulletproof for sure. If one has a const t = Int defined in the module, the EPT macro expansion would be wrong I guess, but for now that's tolerable.

Another nice side effect of using Base.return_types is that I do not directly evaluate code that was passed to the macro, which is a bad-practice boundary I didn't want to cross with my module (it's ugly enough).

PS: Further discussion should probably happen in the issue tracker of the package.

I think you're confused about the execution environment. :(() -> $(expr.args[1])) doesn't create a closure and (by definition) doesn't depend on the environment. That expression is also capable of performing any arbitrary side effect or computation (because it is wrapped in eval), so using the heuristic interpreter (Base.return_types) rather than the actual interpreter (eval) doesn't really gain you anything. Using Base.return_types should be a huge red flag in code. Using eval in a macro is not generally ideal, but it's not a particularly bad practice.

for the primary example above, the new type-system can now express this computation directly:

type B{T, V <: AbstractVector{T}}
  a::T # equivalent to eltype(V)
end

for the secondary example, the old type system could already express this computation, if you're willing to accept a different type with an equivalent in-memory representation.

immutable StaticMatrix{M, N, T}
    data::NTuple{M, NTuple{N, T}}
end

@vtjnash You're right, that's not a closure but an anonymous function, but I think that's not relevant for the discussion (or am I missing something here?). Anyway, can you give an example of how :(() -> $(expr.args[1])) might have unexpected side effects? I don't see why Base.return_types does not gain anything here. How about the example I have given?

I agree that Base.return_types is more a hack than a good solution. However, in this case I consider eval to be much worse, since I would then evaluate an expression in the scope of a module that the user expects to be run inside of its surrounding scope.

Regarding your solution to the primary examples, here are more complex ones where those become infeasible:

Real world example (think K ≘ Triangle):

immutable Geometry{K <: Cell, REAL_ <: Real} <: FixedVector{vertex_count(K), Vec{dim(K), REAL_}}
    _::NTuple{vertex_count(K), Vec{dim(K), REAL_}}
end

Constructed one:

immutable SomeType{dim}
    _::NTuple{dim+1, Int}
end

However, in this case I consider eval to be much worse, since I would then evaluate an expression in the scope of a module that the user expects to be run inside of its surrounding scope

What do you think eval( () -> ...) and Base.return_types do? They are precisely "evaluating an expression in the scope of a module". (Base.return_types is just much worse at doing it than eval)

second example (SomeType)

this one is also trivial since ~v0.5:

immutable SomeType{dim}
    _::Tuple{Int, Vararg{Int, dim}}
end
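A quick check of the trick (field renamed from _ to data for readability): Tuple{Int, Vararg{Int, dim}} has one explicit element plus dim trailing ones, i.e. dim+1 in total, with no arithmetic in the type definition:

```julia
struct SomeType{dim}
    data::Tuple{Int, Vararg{Int, dim}}   # dim + 1 elements in total
end

s = SomeType{2}((1, 2, 3))   # dim = 2, so the tuple has 3 elements
```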

first example (Geometry)

with the extra constraints in the subtyping, this is nothing at all like the other examples but is instead #8322.

I've implemented this request as an unregistered package at
https://github.com/vtjnash/ComputedFieldTypes. I don't particularly love it, since it turns the nicer error message shown in the alternatives above into miserable Type and Method Errors. However, perhaps that could be improved by adding in dummy methods to catch those cases.

But consider closing the issue?

I've wanted this for:

immutable CxxValue{T}
    data::NTuple{cxxsizeof(T), UInt8}
end

The problem with going for an extra, hidden parameter approach is that you can't really write things like:

type a
    a::CxxValue{foo}
end

dispatching is also a pain.

@vtjnash I would say that eval(:(() -> ...)) compiles an anonymous function by evaluating its definition. If, however, I do something like eval(:(push!(a, 1))), then a is modified, which I assume is not the case if I do eval(:(() -> push!(a, 1))). For this reason I have chosen the return_type approach. What I meant with

since I would then evaluate an expression in the scope of a module that is expected (from the user) to be run inside of its sorrounding scope.

Is that instead of evaluating the expression directly, I evaluate a function definition with the expression as its body, which I assume does not change any variables - which I explicitly don't want to do. I hope that clarifies things. Why is return_types so bad here? I think I have clearly shown that just running eval will do stuff the user might not expect, while I see no real argument against return_types besides the fact that I am using it for something it is not intended to do. If not, please tell me.

_Regarding examples:_

second example (SomeType)

I have dim+1, not dim. As far as I know, that's currently not possible with just 0.5.

first example (Geometry)

Sure this is different to the other examples and belongs more to the #8322 ticket.

_Regarding your package:_

I tried it with 0.6 and it doesn't work: ERROR: syntax: invalid function name "ComputedFieldTypes.fulltype". Should be easy to fix. However, the code looks much nicer and I'll take a closer look tomorrow.


To make this clear. My package was written in a real hurry because I couldn't write efficient code that I need for my thesis and is only meant as an intermediate solution until there is a better solution.

@keno I fully agree with the dispatching issue. Your second example however at least works with my module. You just have to append @EPT to every type (ugly I know).

type a
    a::@EPT(CxxValue{foo})
end

If foo is a TypeVar itself, you have to make a itself an EPT (again, ugly).

If not, please tell me.

I did. Base.return_types is a fuzzy evaluation, while eval is not.

(sorry, last message wasn't supposed to be truncated, just hit the wrong button).

There's nothing that stops the expansion and evaluation of that function from executing arbitrary code. It's true that a simple call wouldn't be executed, but it's not hard to find other expressions that would. Some examples include function definitions, ccalls, global and other similar keywords, and macros.

return_types is bad because it's only supposed to return an over-approximation of the right answer. Indeed, even the Type object it returns is only an approximation: there are many possible correct answers to return. Inspecting the .parameters field of the Type it returned does not actually return the original unparameterized type, but rather something that is similar to it. In practice, what that means is that you'll sometimes find the type parameters end up in the wrong order and don't get applied correctly. For example, given Type{SArray{N, M}}, computing .parameters[1] from return_types will return the N and M unsorted. Trying to call T{N, M} afterwards will occasionally randomly swap N and M in the resulting SArray.

None of this is a problem if you just use eval, instead of pretending that you aren't using eval.

I have dim+1, not dim. As far as I know, that's currently not possible with just 0.5.

I also have dim + 1, not dim.

I tried it with 0.6

sounds like an old version of v0.6. it uses features that have only been on master for about a week or so.

To make this clear: my package was written in a real hurry because I couldn't write the efficient code I need for my thesis, and it is only meant as an intermediate solution until there is a better one.

indeed, it worked. you inspired me to write another alternative :)

Your second example however at least works with my module.

also works with my code (using the fulltype function rather than a macro). And similarly, requires marking the type as @computed, if it uses a type variable in the computation.

immutable CxxValue{T}
    data::NTuple{cxxsizeof(T), UInt8}
end

Assuming we enable inlining all immutables via an extension of the existing no-cycles rule (#18632), this definition requires that either CxxValue or data be heap-allocated (I can't immediately remember which is likely to be preferable). Thus, the memory layout and dispatch capabilities are in fact exactly identical to either my ComputedFieldTypes.jl example (CxxValue{T} is heap-allocated) or to dropping the cxxsizeof(T) annotation (.data is heap-allocated).

dispatching is also a pain.

There's lots of reflection that isn't really generally valid / correct anyways which would be hard to write. But with diagonal dispatch, it should generally be possible to treat the hidden parameters as though they are indeed hidden / not present.

for the secondary example, the old type system could already express this computation, if you're willing to accept a different type with an equivalent in-memory representation.

immutable StaticMatrix{M, N, T}
    data::NTuple{M, NTuple{N, T}}
end

Yes, we were aware of that trick; in fact, that was the approach of Mat in FixedSizeArrays.jl. I made the following observations:

  • It's challenging to get this to work in the arbitrary-dimensional case
  • Codegen/benchmarks in v0.4 were better with nested tuples, but in v0.5 they were better with one flat tuple

Given the history of immutable arrays packages in Julia, we were completely expecting this kind of underlying representation to change in v0.6, v1.0, etc, as the compiler changes and thus the optimal approach changes.

The package looks nice and simple (in a good way), @vtjnash. :)

Of course, the advantage of putting this into the language would be to remove the need for fulltype(T) anywhere you need to talk about a concrete type. Also, I didn't know about those kinds of inner constructors - could you always define them like that, or is that new?

Taking Keno's example further, would it make sense to have two kinds of type parameters: normal and hidden ones? With the expectation that the hidden ones would usually not participate in dispatch, show, etc. (although it should still be possible).

@mauro3 That is kind-of what I was thinking after using ComputedFieldTypes.

Here's some rampant speculation: Hidden parameters would be fully determined by other parameters. They could be populated when the last free, non-hidden parameter is applied, and unpopulated otherwise. In this model, hidden parameters would never determine dispatch (if/when we can dispatch on arbitrary computable traits, there would be an equivalent way of doing this). Field types would be applied as they are now in ComputedFieldTypes, from a mixture of hidden and non-hidden parameters. Possible syntax could involve a ; separator and therefore look a little bit like keyword arguments do, as in MyType{A,B; C=f(A,B)}. To go back to an early example,

immutable SMatrix{M,N,T; L=M*N}
    data::NTuple{L,T}
end

Users would only interact with SMatrix{M,N,T} and UnionAlls thereof. The L is just a crutch to populate field types, nothing more. Equivalently, for @keno:

immutable CxxValue{T; N = cxxsizeof(T)}
    data::NTuple{N, UInt8}
end

(I guess that upper bounds could be applied to hidden arguments; that might be useful to someone)

Also, a nested type would then need to inherit any non-fixed hidden parameters:

type Nest{T} # Nest{T; N}
  a::CxxValue{T} # CxxValue{T; N = cxxsizeof(T)}
end

Exactly. That's more-or-less the whole advantage of making them completely hidden and automatically populated - in ComputedFieldTypes you have to use fulltype to get good performance out of the above, or if you want to create a Vector{CxxValue{T}} (vs. Vector{fulltype(CxxValue{T})}).

It's challenging to get this to work in the arbitrary-dimensional case

Do we know anything about how large arrays it makes sense to represent as tuples? I thought it was mainly small vectors and matrices.

Of course, the advantage of putting this into the language would be to remove the need for fulltype(T) anywhere you need to talk about a concrete type

you don't need to ever call fulltype with my package now. however, if you don't have / call fulltype, the compiler will heap-allocate the value. it doesn't matter whether you use my package version, or a builtin one, the restriction on what the compiler can know is the same. There's no magic performance.

With the expectation that the hidden ones would usually not participate in dispatch, show, etc. (although it should still be possible)

this is literally exactly what my package implements.

They could be populated when the last free, non-hidden parameter is applied,

I don't believe magically computed parameters are a good idea. They aren't monotonic in the type lattice, so inference would have to treat them as complete unknowns (the same problem generated functions have now). This would also make it very difficult to include these values in a serialized stream. (Generated functions have a similar problem, and it took a couple of rewrites of the Method representation to get something that can mostly handle them correctly now. That is generally a much easier problem, though, since it doesn't drive memory layout.)

Also, I didn't know about those kinds of inner constructors - could you always define them like that, or is that new?

It's fairly new (I think it was added with the Function types).

you don't need to ever call fulltype with my package now. however, if you don't have / call fulltype, the compiler will heap-allocate the value.

With the expectation that the hidden ones would usually not participate in dispatch, show, etc. (although it should still be possible)

this is literally exactly what my package implements.

Right, yes, I completely understand both - your package seems very useful, complete and simple. I really like it! Nonetheless, I would be using fulltype for performance optimizations. For example, at work we store data in a Vector{SMatrix{3,3,Float64,9}}, take them out of the vector and onto the stack, and do a specialized 3x3 eigendecomposition on each, put the results in another vector, etc. If I used non-concrete element types like Vector{SMatrix{3,3,Float64}}, then performance would be toast!
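The performance cliff comes from element-type concreteness, which is easy to check with a stand-in type (SMat here, not the real StaticArrays definition):

```julia
struct SMat{M,N,T,L}
    data::NTuple{L,T}
end

# With every parameter supplied, the element type is concrete and a
# Vector of it can store the elements inline:
concrete = SMat{3,3,Float64,9}

# With L omitted, the element type is a UnionAll, so a Vector of it must
# store pointers to individually boxed elements:
partial = SMat{3,3,Float64}

isconcretetype(concrete)   # true
isconcretetype(partial)    # false
```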

My speculation was about how to remove fulltype in the (smallish) set of cases where you want to explicitly name a concrete type to improve performance, e.g. to construct a container. There have been multiple occasions where I have added that 9 to co-workers' code so they can get the performance they need. They do end up learning something important about Julia and my implementation of StaticArrays, but it would be simpler if it "just worked" the way they typed it.

The trouble with Vector{SMatrix{3,3,Float64}} is that regardless of where you hide the computed parameter, it's still effectively present somewhere. If you implement this in the language, then you wouldn't have the fulltype function available to force it into being an isbits type, so it would instead likely need to always be a heap-allocated pointer array. I assume this isn't what you would want for performance! Whereas with the package version, it permits you to pick between (a) the heap-allocated non-parameterized version and (b) the "fulltype" inferred version.

If you implement this in the language, then you wouldn't have the fulltype function available to force it into being an isbits type, so it would instead likely need to always be a heap-allocated pointer array.

I think you might have misunderstood my suggestion. I was saying that if you apply a type variable to a type, and that type then has all non-hidden parameters filled, then the hidden parameters will be calculated always and automatically.

So going back to the proposed semicolon notation, if I had SM = SMatrix{3,3,T; L} where T where L and then wrote SM{Float64}, the output would automatically become SM{Float64} == SMatrix{3,3,Float64; 9}. So we would have SMatrix{3,3,Float64} === SMatrix{3,3,Float64; 9}. (However, the L and 9 would be hidden from the user.) In this world, the type SMatrix{3,3,Float64; L} would never be constructed or added to the type cache. The C code would have to be changed to do this. (Note: I'm not saying this is desirable, or even that this is a good solution to the problem... perhaps dealing with such field computations more directly, without intermediate type variables, would be more elegant.)

I'm not misunderstanding, I'm just pointing out that wrapping this in different syntax has no actual impact on what is possible for the underlying implementation. In order to fill in the extra parameters, you need an eval stage. In my example package, that eval stage is called fulltype. There could be an eval stage in base or not, but it can't simply be ignored under the label "auto-magic".

I've been pondering using StaticArrays in ACME for quite a while, and I think it would be a perfect fit if it weren't for the proliferation of type parameters. It's based on state-space models, which (in the linear case) operate as

x(n) = Ax(n-1) + Bu(n)
y(n) = Cx(n-1) + Du(n)

That would translate to (fixing the eltype for simplicity):

struct StateSpaceSystem{Nx,Nu,Ny,NxNx,NxNu,NyNx,NyNu}
    A::SMatrix{Nx,Nx,Float64,NxNx}
    B::SMatrix{Nx,Nu,Float64,NxNu}
    C::SMatrix{Ny,Nx,Float64,NyNx}
    D::SMatrix{Ny,Nu,Float64,NyNu}
end

Bad enough. But actually, I'm simulating non-linear systems, which I incorporate by using even more matrices (with interdependent sizes, of course). Also, I'm very concerned about speed (why else spend the effort of changing to StaticArrays), so I really want all types concrete to avoid dynamic dispatch. Is there some best practice how to cope with the situation?

Is there some best practice how to cope with the situation?

struct StateSpaceSystem{Nx, Nu, Ny,
        AT <: SMatrix{Nx, Ny},
        BT <: SMatrix{Nx, Nu},
        CT <: SMatrix{Ny, Nx},
        DT <: SMatrix{Ny, Nu}}
    A::AT
    B::BT
    C::CT
    D::DT
end

Or you could use https://github.com/vtjnash/ComputedFieldTypes.jl to achieve the exact same result as proposed in this issue, although I think the proposal in this issue may be more cumbersome than using the above (although it does provide an implementation of fulltype for you, for taking a StateSpaceSystem{Nx, Nu, Ny} and filling in the rest of the parameters).
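With the parametric answer above, note that callers never have to spell out the matrix-type parameters: Julia's default constructor infers them from the arguments. A minimal sketch with plain Matrix stand-ins instead of SMatrix (the name LinearSystem and the example matrices are made up for illustration):

```julia
struct LinearSystem{AT<:AbstractMatrix, BT<:AbstractMatrix,
                    CT<:AbstractMatrix, DT<:AbstractMatrix}
    A::AT
    B::BT
    C::CT
    D::DT
end

# AT..DT are filled in automatically from the argument types, so the
# resulting type is fully concrete without the caller naming a parameter:
sys = LinearSystem([1.0 0.0; 0.0 1.0], ones(2, 1), [1.0 0.0], zeros(1, 1))
```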

My speculation was about how to remove fulltype

If you don't like using fulltype, just ignore it. Just because my package provides this extra functionality over the proposal in this issue doesn't mean you have to use it.
