For instance, when you have code such as:
public static int Add( int x, int y )
{
return x + y;
}
Why should you have to specify "int"? You could simply say:
public static Add( x, y )
{
return x + y;
}
I realize that's what generics are for, and maybe that's how the above would get implemented. Note that this is not about dynamic typing: the above code is very much aware of types, but it works the way generics do. We'd also need automatic handling of constraints on the implicit type parameters, so you shouldn't have to specify a "where" clause. For instance, remove the need to say something like:
where T : new()
when the compiler sees the code do a new.
So this type of code:
class ItemFactory
{
public GetNewItem()
{
return new typeof(return)();
}
}
changes into this type of code internally:
class ItemFactory
{
public T GetNewItem()
{
return new T();
}
}
Then specifying a type in code will happen mostly when you instantiate it. The rest of the code attempts to infer it.
public static Add( x, y )
{
return x + y;
}
So you want x and y and the return value to be automatically inferred as int? How do you know it is an int? It could be a long, double, string or anything else for that matter. Or am I missing your point here?
Tip: indent with 4 spaces and GitHub will render it as a code block, making it a lot easier to read, so you might want to edit your post :)
I think I get the point of making the code simpler to write, like most dynamic languages. Typescript has this:
class Foo {
method() {
return 1;
}
}
is the same as:
class Foo {
public method(): number {
return 1;
}
}
This already works in Visual Studio and the compiler can infer the correct types.
The thing is, I don't think this matches C# syntax well, and there are the accessibility modifiers you already have to provide. Not sure if there would be much gain.
I find the new typeof(return)() syntax quite ugly. The current one does the trick in a much nicer way.
DickvdBrink, code like this:
public static Add( x, y )
{
return x + y;
}
should automatically turn into:
public static T Add<T>( T x, T y )
{
return x + y;
}
nvivo, I agree that:
new typeof(return)()
is not great syntax, however that's an extreme case. Imagine the up-side to this -- most of your code will not have to specify types. Most of the time, when you have an explicit type, you can make it a generic type. But it doesn't work in all cases. Sometimes you have to provide hints, such as "Type T is comparable" or "Type T is new'able". So rather than use the "where" keyword in those cases, I am saying let the compiler determine that based on usage. If the code does "new" on the type, then assume it's new'able. If the code compares the type, assume it's comparable.
I am significantly trivializing this for illustration. There are multiple edge cases which will be difficult or impossible to address, in which case you go back to what we do now and provide more specificity. But in some of those more difficult cases, there is more we can do than give up. For instance, let's say the parameter has to be comparable based on the fact that the code in the method is comparing values of that type. But what if the comparison is not being done in the body of the method, but rather in another method, and so on? So inferring the "where" clause is also an option.
Some of this reminds me of Haskell, where things are strongly typed but you rarely specify a type. You can always specify a type in Haskell, and in some cases you have to, since the compiler cannot infer the type from usage. However, those cases are few and far between. In C#, you currently have to specify the type in too many places where the compiler could infer it.
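To ground that claim, here's a minimal sketch (ordinary current C#, names are mine) of where inference already happens today versus where it doesn't:

```csharp
using System;
using System.Linq;

class InferenceToday
{
    // The proposal would let "public static Add(x, y)" stand in for this
    // fully annotated signature.
    static int Add(int x, int y) => x + y;

    static void Main()
    {
        // Already inferred today: local variable types, array element types,
        // lambda parameter types, and the generic type arguments to Select.
        var numbers = new[] { 1, 2, 3 };           // int[]
        var doubled = numbers.Select(n => n * 2);  // IEnumerable<int>, n is int

        // Not inferred today: every type in the Add signature above had to
        // be spelled out.
        Console.WriteLine(Add(2, 3));
        Console.WriteLine(string.Join(",", doubled));
    }
}
```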
I like the idea of typing less types. But I think the problem is that in this case you throw away a lot of compiler checking and replace with a lot of doubts and edge cases. First of all, in this case:
public static Add(x, y)
{
return x + y;
}
Once you type {, you have no IntelliSense at all; there is no way to infer what x can be, so the method implementation is basically a dynamic language with no type checking where everything is possible.
If we assume T, is it even possible to sum two Ts? Are they numbers? Are we concatenating strings? Do they have implicit casts to another type that has a + operator overload? And that is just a very simple example. What if you have two methods with conflicting rules? What if, after all the type inference, the compiler sees two or more possible types?
It just feels that there are too many ways this could go wrong, and too many cases to handle just to avoid creating a generic.
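To make the "no type checking" worry concrete: a generic Add over an arbitrary T does not compile today, and the closest working equivalent is runtime dispatch via dynamic, which gives up exactly the compile-time checking in question (a sketch of mine, not anyone's proposed design):

```csharp
using System;

class DynamicAdd
{
    // "static T Add<T>(T x, T y) => x + y;" is rejected by the compiler:
    // no constraint can say "T supports operator +". Falling back to
    // dynamic makes it run, but moves all checking to runtime.
    static T Add<T>(T x, T y) => (dynamic)x + (dynamic)y;

    static void Main()
    {
        Console.WriteLine(Add(2, 3));     // numeric addition
        Console.WriteLine(Add("a", "b")); // string concatenation
    }
}
```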
@nvivo, yes, this gets very hard on the IDE. You're right that you would get no IntelliSense initially for x and y unless we first bind all of the callsites to Add(). However, allowing type inference for method return types or fields gets even nastier. Essentially, any change in a method body can affect type-checking of the entire program, resulting in low performance for things like live error squiggles and quick fixes. We've looked at this in the past, and it's definitely hard (not necessarily impossible, but hard). F# makes the problem somewhat better by enforcing that declarations are only in scope after the declaration appears in source. But that isn't a restriction we could impose on C# at this point.
My 2 cents. :moneybag:
Let me shortcut this -- what you're basically asking for is a variant of Hindley-Milner type inference for type members.
Another problem: At some point these methods are parts of runtime types that are persisted in actual assemblies with metadata describing them; and the runtime rules are not that loosey-goosey. So if you don't actually call the method in the same assembly it is defined, or have a definite type inference path within the code in that assembly then it would be impossible to infer the static type contract that needs to be emitted as part of the assembly. So maybe encode it as a generic, but then the body wouldn't bind unless you invested in a bunch of other technologies? Or maybe just encode it as dynamic, that would work, but then you could also just use the dynamic keyword in place of the type, and wouldn't be static typing or compile-time inferred. Am I rambling?
@mattwar, do you ever ramble? :smile:
nvivo,
throw away a lot of compiler checking <<<<
I agree. I don't want to do that. Remember, this is not about getting rid of types. It's about inferring types. Generics are still type-safe. Haskell does this and it's a type-safe language. No, I'm not trying to turn C# into Haskell, just saying this works in other places.
If we assume T, is it even possible to sum two T? Are they numbers? <<<<
Excellent question. It means that if you call this function passing in types that do not support the + operator, it should not compile. This is what would happen with C++ templates, for instance.
What if you have two methods with conflicting rules? <<<<
Fair question. Give me an example so we can talk it through. You may be right. I'm just thinking out loud here, but let's talk about specifics.
It just feels that there are too many ways this could go wrong, and too many cases to handle just to avoid creating a generic. <<<<
I’m trying to avoid those feelings because we tend to think what we are familiar with is normal, and something we’re not familiar with must be wrong. That doesn’t mean anything new is good. And this may be a bad idea. I just want to talk it through.
When you say "too many cases to handle just to avoid creating a generic" -- maybe we should state categorically that we are not willing to give up any features that one would have if generics were used. Perhaps this is simply a shortcut way to create a generic. For instance, we all know that
var s = "abc";
is a shortcut way of saying:
string s = "abc";
So there’s no downside.
Similarly, code such as this:
public static Add(x, y)
{
return x + y;
}
will simply convert into:
public static T Add<T>(T x, T y)
{
return x + y;
}
What do you think? Do you think it's worth pursuing?
DustinCampbell,
allowing type inference for method return types or fields gets even nastier.<<<
and it's definitely hard <<<
I agree with you on that. It sounds hairy and hard.
But we like hard things.
any change in a method body can affect type-checking of the entire program, resulting in low performance for things like live error squiggles and quick fixes<<<
I see. But you can make the same argument for generics, right? If not, how would this be different? The more I discuss this topic in this thread, the more I'm leaning toward this being a shortcut way of specifying generics.
F# makes the problem somewhat better by enforcing that declarations are only in scope after the declaration appears in source. But that isn't a restriction we could impose on C# at this point.<<<
I guess I should know more about F#, because I don't know what this means. Is there a way for you to easily explain this statement without my fully learning F#, or is this one of those things that I'm just not going to get unless I really understand F#? In which case, I'll probably go look up F# on my own. I've been learning Haskell lately, so maybe that can help in finding a common ground of communication as well.
agocke
what you're basically asking for is a variant of Hindley-Milner type inference for type members.<<<<
Yes. That’s right.
Is that a bad idea for C#?
mattwar
So maybe encode it as a generic, but then the body wouldn't bind unless you invested in a bunch of other technologies? <<<<
Can you explain how the body wouldn’t bind?
You can't actually emit a method body for a generic that relies on knowing specific operators like +. So you'd need some representation other than IL that would describe this method, more like a template of some sort that binds/emits code later. Of course that would end up pushing the logic of picking that operator into the runtime, or an extension of the runtime that would know about C# rules. I guess it wouldn't be that different than the dynamic runtime (DLR) that IronPython uses (and implements the C# dynamic feature), but then we just get back to it being equivalent to the dynamic feature which the language already has.
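A small illustration of that asymmetry (example mine): interface requirements are expressible as constraints and emit ordinary verifiable IL, while operator requirements are not.

```csharp
using System;

class ConstraintsToday
{
    // Comparison IS expressible: the IComparable<T> constraint lets the
    // compiler bind CompareTo and emit normal IL for any conforming T.
    static T Max<T>(T x, T y) where T : IComparable<T>
        => x.CompareTo(y) >= 0 ? x : y;

    // There is no analogous "where T has operator +", which is why a
    // generic Add relying on + has no IL encoding.

    static void Main()
    {
        Console.WriteLine(Max(2, 3));
        Console.WriteLine(Max("a", "b"));
    }
}
```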
@kasajian
Is that a bad idea for C#?
Leaving aside any technical issues (which are substantial) IMHO, yes.
I've long considered ML allowing H-M in top-level methods a mistake made more for theoretical purity than practical usefulness. Almost every ML ends up with style guidelines which prohibit using inference in top-level declarations, partially because of the quick realization after using strong typing that types are actually a form of documentation, but a better form. The question is, if everyone agrees that it's bad style, why allow it in the first place?
Consider what happens when you write many top level declarations which are interdependent for inference:
There are only a few situations that I could see getting better. Fields with duplicate type names are kind of annoying and I would rather not type List<int> field = new List<int>();. Small, very understandable private helper methods could be easier to write. I'm not opposed to doing something for these specific pain points, but I'd prefer to discuss tightly constrained examples if we want to think about it. To me, the larger proposal is a net negative, even if we could get it to work.
It's probably worth adding that the difficulty of debugging C++ template errors is the reason the C++ world is trying to add 'concepts', a feature similar to C#'s generic constraints. In other words, they too figured out that letting the compiler figure it out can be problematic.
Yes, we like hard things, and we like shorthand, but there is a difference between "succinct" and "terse", and this definitely falls into the latter category, which is not a good thing. The types do represent a form of documentation and allow, at a quick glance, a determination of what the method expects. Yes, the IDE could help, particularly with Roslyn, but in your case the method definition itself may change based on how it is used. This is something that belongs in other languages.
The method sample that you proposed has numerous problems. For starters, you're trying to add the two parameters. Even if the compiler translated it to generics, there are no existing constraints that would allow such a call, and if such constraints were to be added you run into the issue of either requiring their declaration or an additional form of inference. More inference just makes it more confusing to determine what the method expects, and the method definition could change if you happen to change how you treat the arguments somewhere buried within the method body. Also, how do you propose to deal with the possibility of the two parameters being different generic types? The compiler would likely have to treat every argument as an individual generic type, and then the constraint concerns explode in complexity.
Sorry, my opinion is that stuff like this should remain in dynamic un-typed languages (TypeScript is not a typed language).
As @agocke and @HaloFour said this is not a good idea for a number of reasons. It really doesn't scale well and it reduces some of the benefits of static typing. It is also inconsistent with how local variable type inference works. Specifically local type inference tends to infer the most specific type it can. It would be somewhat odd if omitting the return type from a function declaration had the effect of synthesizing a generic function.
In my experience with F#, automatic generalization is sometimes nice for one offs, but poses problems for maintenance. Also such functions cause problems as implementation changes cause unexpected signature changes such that they no longer meet the requirements of a desired interface. Even in small amounts F# <-> C# interop I encountered this.
Although the goal of C# is not to turn it into Haskell, I would like someone like Erik Meijer or other Haskell / C# experts to comment on what Aluan indicated. What is being claimed is that a fundamental design tenet of Haskell, strong type inference, is being positioned as a "one off" nicety and a problem in maintenance. I'll agree for now.
@kasajian If you want an expert on the C# language you want @MadsTorgersen.
Regardless, we don't much care whether or not H-M was good for Haskell, I'm claiming it's not a good fit for C#. Also, since it's an even bet I'd be the one implementing this, I'll say now that I'd recommend not doing this strictly as bad ROI. :)
I tend to believe that if something is good for the language, it will eventually get implemented by someone.
@kasajian Perhaps I spoke too strongly. I do not feel that strong type inference is a one off or maintenance problem. What I meant to say was that _Automatic Generalization_, can be problematic for the maintenance of functions which are exposed as part of a class or module's public interface.
Actually, type inference is often an aid to maintenance as it prevents casts from accidentally being inserted in certain places.
I like both Haskell and F#.
Anyway, my point was that if
public static Add( x, y )
{
return x + y;
}
were to become legal C# I would not expect the inferred return type to be generic, I would expect it to be int because of C#'s existing type inference rules.
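A tiny example of those existing rules in action, where the target type drives the inference (plain current C#):

```csharp
using System;

class TargetTyped
{
    static void Main()
    {
        // The Func<int, int, int> on the left drives inference: x, y, and
        // the result are all int, with nothing annotated in the lambda.
        Func<int, int, int> add = (x, y) => x + y;
        Console.WriteLine(add(2, 3));
    }
}
```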
You're right. I thought about this a little more and I think I was getting confused with C++ templates a lot. In C++, it would, in fact, be an int, not a generic.
I agree that it wouldn't be a generic now.
Now, on to the request itself. Can someone explain (if you don't mind taking the time), how would something like this:
public static Add( x, y )
{
return x + y;
}
not be preferable in a large percentage of the cases. Like, I can imagine someone thinking, "well, I want Add to exist for ints, but not for strings, because 'Add' isn't the right word to use for concatenating strings." But I can also argue that that'd be a small price to pay.
I have this gut feeling that a lot of programmers out there who are advocates of dynamic typing really just care about typing less and not specifying types. I understand that with dynamic typing a variable's type can change. However, that's not the case for a very large percentage of variables used in an application written in a dynamic language. For instance, take a significantly sized Python, Ruby or JavaScript application and see which of the variables, parameters, etc. really require dynamic typing. I think the two areas will be networking code where objects are serialized, and code that morphs somehow, simulating overloading or polymorphism or generating structures on the fly. There's also plenty of code where types could be inferred.
C#'s "var" keyword introduced the vast sea of .NET programmers to the concept of inferred typing. The next step seems like more type inference, designed in a way that makes sense for C#.
Are we really saying that we can't refine C#'s type-inference system any more?
There's no reason to expect the parameter types or return type to be int for that method. They could be anything, even heterogeneous types. If the method isn't generic then what could it possibly be? A compile-time template expanded into an infinite number of permutations depending on how it's called? Having the compiler just stamp it as int "just 'cause" makes absolutely no sense.
Type inference should be taken exactly as far as it makes sense while the code remains legible on its own merits.
public static Add( x, y )
{
return x + y;
}
would have to be transformed into something like this:
public static TReturn Add<TArg1, TArg2, TReturn>(TArg1 x, TArg2 y) where "some way of saying that there is an operator TArg1 + TArg2 = TResult"
{
return x + y;
}
Because how do you know that both arguments have the same type? How do you know that adding those two arguments will give you the same type? This is just getting messy and a lot more complicated.
To your question how would something like this [...] not be preferable in a large percentage of the cases:
It is not preferable as soon as the complexity of your method is even a tiny bit higher than the one you mentioned. I don't want to dig into dozens of methods just to see that argument x in fact has to be an integer because it is passed to other methods, which pass it to other methods, and so on, and the developer was too lazy to type the 4(!) characters 'int '. And changing the body of a method 100 calls deeper will change the signature of your method. That's a mess.
We can start by scoping this down to a few places where inference would be an obvious advantage.
Class fields and properties would be useful.
Another place is the return type of a function when declaring a Func or Action. For instance:
BEFORE:
public void SomeMethod()
{
Func<int, int, int> Add = (x, y) =>
{
// some other code.
return x + y;
};
var z = Add( 5, 6 );
}
AFTER:
public void SomeMethod()
{
var Add = (int x, int y) =>
{
// some other code.
return x + y;
};
var z = Add( 5, 6 );
}
@kasajian That case was proposed before. The problem is that the compiler doesn't know the type of the delegate based on the signature of the lambda. That probably wouldn't matter if delegates shared a form of type-equivalence. Delegates already bring a bit of overhead, targeting one to another doubles that.
The problem is that the compiler doesn't know the type of the delegate based on the signature of the lambda
While I don't like the proposal itself, I don't get this kind of argument (I saw it many times in it's various forms in other issues).
Any proposal here is to change the language in a way to make the proposal possible, and that implies changing the spec, compiler, runtime or anything else to achieve that goal. So, you could say that "the compiler doesn't know something" or "the spec doesn't allow something" is almost a requirement for any proposal, right? =)
What I think is that allowing @kasajian proposal like a public static Add( x, y ) creates so many unanswered questions and edge cases that any meeting to discuss them will quickly end up with "but why are we adding this to the language anyway?".
@nvivo I completely agree. The important part is the next sentence, which discusses why removing that limitation can be considered problematic. It's also not to say that it can't or shouldn't be done; it just outlines what would likely have to be resolved (or considered) before being able to move forward. I'm personally a fan of the inferred lambda/delegate syntax he proposed; I was more or less summarizing the conversation that arose the last time it was brought up.
@nvivo runtime or anything else to achieve that goal
No. While we may consider _requesting_ changes to the runtime, the language design certainly does not have the power to force runtime changes and I would be very surprised if we get _any_ runtime changes in the near term.
@agocke
we may consider requesting changes to the runtime
Of course, that's what I meant. That it's possible to change those things, not that they will be changed at the first chance.
HaloFour, can you explain to me what you mean when you say "The problem is that the compiler doesn't know the type of the delegate based on the signature of the lambda." ?
Let me make the code simpler by making the transformation one step at a time, so it's clear which step you feel is the problem
So let's start with:
public static var Add(int x, int y)
{
return x + y;
}
The return type can be inferred based on the type of the return value.
If the above is acceptable, then you can rewrite it like so:
public static Func<int, int, var> Add = (x, y) =>
{
return x + y;
};
Right? No information is lost -- it's a transformation. So if the above is acceptable, what about:
public static Func<var, var, var> Add = (int x, int y) =>
{
return x + y;
};
Again, no information is lost. And if the above is acceptable, then you can say that Func<var, var, var> is not really adding information other than indicating the number of parameters. So you can infer it, yielding:
public static var Add = (int x, int y) =>
{
return x + y;
};
@kasajian The problem is that last step. Once you've lost Func<T1, T2, TResult> the compiler cannot determine which specific delegate type to use. Func<T1, T2, TResult> is just one possibility, but there can be infinite possible delegate types with the exact same signature, and they are all considered distinct incompatible types to the CLR.
public delegate int AdderDelegate(int x, int y);
static int Add(int x, int y, AdderDelegate adder) {
return adder(x, y);
}
static void Main() {
var adder = (int x, int y) => {
return x + y;
};
Add(2, 3, adder); // compiler error, Func<int, int, int> is not an AdderDelegate
}
It's possible to work around it with the following:
Add(2, 3, new AdderDelegate(adder)); // create a new delegate which calls the adder delegate
The C# compiler could even potentially do this for you. The problem, as I had mentioned, is that invoking a delegate carries a bit of overhead. A delegate that invokes another delegate, as is the case above, doubles that overhead.
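A compilable sketch of that wrapping (current C#, names are mine, with the lambda target-typed to Func so it builds today):

```csharp
using System;

class DelegateWrapping
{
    delegate int AdderDelegate(int x, int y);

    static int Apply(int x, int y, AdderDelegate adder) => adder(x, y);

    static void Main()
    {
        Func<int, int, int> adder = (x, y) => x + y;

        // AdderDelegate d = adder;  // error: distinct, incompatible delegate types
        // Wrapping works, but now each call goes through two delegate invocations:
        Console.WriteLine(Apply(2, 3, new AdderDelegate(adder)));
    }
}
```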
@HaloFour Thank you for the explanation.
I think that we have some constraints that we're working under, the details of which are not always obvious to me. We have various necessary abstractions. I can imagine implementing a language feature that's merely "syntactic sugar" on top of the existing language, is the least intrusive. For instance, C# 6's "expression-bodied function members" feature seems like syntactic sugar. Then you have ones which are new constructs in the language but still at the language level and don't require changes to the CLR/CLI. Then you have things that require changes to the CLR. Adding Generics in C#2 is of that nature.
I think I may be proposing that we work on features that are lot more intrusive than we're willing to entertain at this time. Is that the case?
@kasajian Well I can't speak for the compiler team. My comments are mostly parroting matters on the subject (of inferred lambda types) that happened previously (I think on CodePlex).
From how I've heard it described is that every feature needs 100 points to be implemented, but because every new feature inherently involves work and future support that every proposal starts with -100 points. It has to make up the difference by demonstrating its worth and earning points. I assume that anything that would depend on enhancements in the CLR would then require going through that entire process with the CLR team, demonstrating that it's worth implementing over the cost of doing so and supporting it from here on out.
My _opinion_ is that the degree of inference described in your first post to this thread is way too aggressive for C#. It doesn't appear to buy much over the syntax of generics, which at least requires you to be fairly explicit about the relationship of the type parameters and their constraints. To actually implement it with generics requires CLR changes, as constraints aren't powerful enough to describe your use case or probably any implied use case. To implement it without generics would require something more akin to template programming, where the method is compiled for every potential invocation depending on the combination of parameters, and if the types of the parameters don't support the implied "constraints" then you get a compile-time error at the call site (e.g., the type doesn't support the + operator). That might be possible in the compiler, but it would be a holy mess. It also has no real way of being expressed in a compiled assembly so that other assemblies could consume it.
My _opinion_ of inferred delegate types is that I like the syntax and I think that it looks and feels entirely reasonable. The problem of incompatible delegate types was brought up before and is a potential problem. Delegates do have real overhead compared to your standard method calls, but it might not be that bad that it warrants not being implemented.
@HaloFour Thank you for a very clear explanation.
Do I just close the issue now?
Nah, leave it open. Let the compiler team weigh in on it.
@HaloFour I've already weighed in. It's up to the language design team now to resolve the issue.
This is already implemented for anonymous methods and lambda expressions.
I think this is redundant for declared methods. C# is a fairly strict language; until instance-scope fields allow type inference, methods shouldn't have it either, so I vote NO for this feature.
@Shimmy, why is it redundant?
@kasajian, I just equalized the importance of this issue with the importance of class-scope-level field type-inference.
Gotcha. class-scope-level field type-inference would be cool, and should be done first.
Ditto.
I'd want Predicate<T> to be compatible with Func<T, bool> etc.
If that's what you meant.
According to Concepts TS, C++17 (aka C++ 1z) will probably bring the concept of Abbreviated Function Template. The syntax looks like:
// compile-time parameter type deduction
auto crazy_math (auto x, vector<auto> y) {
auto random_index = std::rand() % y.size();
return x * x * y[random_index];
}
// note: all four `auto`s can be deduced as varied types
// note 2: this has various advantages over templated functions
Note that the second parameter constrains the type to a certain kind of container, i.e. vector in this case.
There is a related discussion here to explore interesting facts / ideas: https://groups.google.com/a/isocpp.org/forum/#!topic/std-proposals/PaKP8EIIlEU
Note that this feature shipped as a non-standard extension in G++ 4.9 (April 2014). VC's cl (and probably Clang too) has yet to implement it [1] [2].
@jasonwilliams200OK That still generates multiple permutations of the function at compile time based on every combination of argument types at invocation, no?
Yes, but I think it happens at compile time no?
Yes, but I think it happens at compile time no?
Yes, and that's why this kind of stuff is problematic in the .NET world, where you don't include headers but reference assemblies. C++17 concepts are derived from traditional C++ templates and attempt to solve some of their problems, but they also inherit some of those problems (if you consider the use of headers a problem).
Closing per #3912 as this is not a coherent proposal, and we probably don't want to go the way of radical type inference in C#.