| Capability | Priority |
| :---------- | :------- |
| Unseal all, or at least nearly all classes in the framework | Must |
This is a fantastic proposal, @ryandemopoulos! I can't wait to see what the community can create with this barrier removed.
Do we need some mechanism to allow custom unsealed controls?
Currently, we can create a control as an unsealed .NET class, and only .NET consumers can derive from it. Given that most client UI code is written in .NET, this is not so painful.
What you can't do today is subclass the built-in types that are sealed. For example you can subclass TextBox, but not TextBlock.
Does this imply that styles targeting a base class would also apply to controls derived from it?
@crhaglun, changes to Style.TargetType behavior would be a separate feature. Currently you can set the TargetType to a base class and use it with a derived type, for example you can set a ToggleButton style onto a CheckBox. But you can't define an implicit ToggleButton Style (defining it in a ResourceDictionary without a key) and have it automatically be picked up by CheckBox instances in that tree.
@MikeHillberg right, and that's actually something I ran into today when working with the Photos app. There are a couple of custom controls that derive from AppBarButton that, when placed in a CommandBar, do not look anything like the vanilla AppBarButtons in the same CommandBar :-)
Turns out the CommandBarRevealStyle has a local `<Style TargetType="AppBarButton" BasedOn="{StaticResource AppBarButtonRevealStyle}" />`, which of course does not apply to controls deriving from AppBarButton.
If all controls become unsealed, wouldn't this become a much more common class of bug?
Or am I actually looking at a bug in the CommandBar template?
@crhaglun, yes, more controls will have the opportunity for this problem. If you're interested in this it's worth opening a separate issue. We intentionally don't take class hierarchy into account with implicit styles, for fear of mysterious style application. WPF has a style aliasing feature which would be interesting to add here.
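To make the distinction concrete, here is a minimal C# sketch (assuming a UWP/WinUI code-behind; the setter chosen is arbitrary and the names are illustrative only):

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;

static class StyleExample
{
    public static void Demo()
    {
        // A Style whose TargetType is a base class can be applied *explicitly* to a derived control.
        var baseStyle = new Style { TargetType = typeof(ToggleButton) };
        baseStyle.Setters.Add(new Setter(Control.FontSizeProperty, 20.0));

        var checkBox = new CheckBox();
        checkBox.Style = baseStyle;   // works: CheckBox derives from ToggleButton

        // By contrast, an *implicit* ToggleButton style (one defined in a ResourceDictionary
        // without a key) is matched by exact type only, so CheckBox instances in that
        // visual tree do NOT pick it up automatically.
    }
}
```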
I agree and want this unsealing proposal to proceed, but I also think it'd be worthwhile to consider this alternative proposal to allow derivation of sealed classes:
Proposal: Extension/derivation of sealed classes (dotnet/coreclr#26465)
On second thought, forget my above-mentioned proposal, because it falls into the domain of the .NET Foundation and this means the proposal becomes enormously difficult and time-consuming for non-technical reasons. This enormous non-technical difficulty and giant test of patience/determination can be completely avoided by taking the easier path of unsealing the classes.
Unseal all, or at least nearly all classes in the framework
OK, not every class will be unsealed, says the proposal. Whenever the unseal decision for any particular class is borderline or difficult to decide, I'd like to suggest that "partial unsealing" be considered. The policy could be that borderline cases should fall in favor of "partial unsealing" if not full unsealing. I'm using the words "partial unsealing" to mean the following:

Fully sealed -- the class itself cannot be derived from at all:

```csharp
sealed class MyDerivedClass : MyBaseClass
{
    public override void Method1() { }
    public override void Method2() { }
    public override void Method3() { }
}
```

Fully unsealed -- the class can be derived from and its virtual members can be overridden:

```csharp
class MyDerivedClass : MyBaseClass
{
    public override void Method1() { }
    public override void Method2() { }
    public override void Method3() { }
}
```

Partially unsealed -- the class is unsealed but every (or many) virtual members are sealed:

```csharp
class MyDerivedClass : MyBaseClass
{
    public sealed override void Method1() { }
    public sealed override void Method2() { }
    public sealed override void Method3() { }
}
```
For example, let's hypothetically pretend that the unseal decision for TextBlock is borderline and the final decision is partial unseal. If every member inside TextBlock is sealed, then obviously subclasses cannot override any members, but this doesn't mean that subclassing becomes useless. The following example class doesn't override any virtual members, but nevertheless succeeds in achieving a goal of making a derived class of TextBlock that auto-shrinks the font size when the TextBlock is resized to a smaller size (just for example). Furthermore, it also succeeds in its goal of adding support for a fictional interface IConvertibleToRichText that allows objects or UI elements to be converted to RichEditTextDocument.
```csharp
class MyExtendedTextBlock : Windows.UI.Xaml.Controls.TextBlock, IConvertibleToRichText
{
    public MyExtendedTextBlock()
    {
        // Subscribe to the public event "SizeChanged" defined in a base class:
        base.SizeChanged += this.OnSizeChanged;
    }

    private void OnSizeChanged(object sender, SizeChangedEventArgs e)
    {
        if (this.IsAutoShrink)
            base.FontSize = XXXXX;
    }

    public bool IsAutoShrink { get { ... } set { ... } }

    RichEditTextDocument IConvertibleToRichText.ConvertToRichText()
    {
        return XXXXX;
    }
}
```
Thus a "partially unsealed" class still supports a useful degree of derivation, albeit with restrictions. So, if partial unseal is also on the table in addition to full unseal, then a wider range of classes/Controls could be subclassable; wider than if only full-unseal is on the table.
Xaml's surface area is in WinRT APIs, and WinRT today only supports sealing of classes, not members.
@MikeHillberg -- oh, pity. It would have been nice to have the "middle ground" option available, especially because Windows is not a theoretical OS. _"Perfect in theory" == "Imperfect in reality"._
If the argument is that member sealing is barely beneficial, then it can also be said that class sealing is barely beneficial, therefore both kinds of sealing could be removed (this is debatable). Anyway, I still like the unsealing proposal regardless of having no option for partial unsealing.
It doesn't work in .NET either: you can't seal interface methods, only class methods, and subclasses are always allowed to override interface methods by reimplementing the interface. So while I like the idea in general, it would have required .NET runtime and possibly language changes for it to truly protect the base implementation.
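A small sketch of that limitation, with invented names (nothing WinUI-specific):

```csharp
using System;

interface IGreeter { string Greet(); }

class Base : IGreeter
{
    // Non-virtual, i.e. effectively "sealed": a subclass cannot override it directly.
    public string Greet() => "base";
}

class Derived : Base, IGreeter
{
    // But re-listing the interface lets the subclass re-implement it, substituting
    // its own behavior for all calls made through IGreeter.
    string IGreeter.Greet() => "derived";
}

static class Program
{
    static void Main()
    {
        IGreeter viaInterface = new Derived();
        Console.WriteLine(viaInterface.Greet());    // "derived" -- base implementation bypassed
        Console.WriteLine(new Derived().Greet());   // "base"    -- direct (class) call unaffected
    }
}
```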
@weltkante -- Smart point about the interface reimplementing!
Re the protection point:
to truly protect the base implementation.
Who is really protecting who? The usual assumption is that it's about protecting the base implementation, but shouldn't this assumption be questioned?
The sealed base class says to the disallowed derived class: _"I've decided to protect you from potentially breaking in future, therefore you're banned!"_
The banned derived class replies: _"Err... thanks, I guess. Do you also want money in return for generously providing me with this protection service that I didn't request? Do I have any choice in the matter? Can't you just warn me and give me the freedom to make my own decision about whether to take the risk? I'm over 18 years old, you know."_
The sealed base class replies angrily: _"I'm your parent class and you will do as you're told! Is that clear?!?!"_
:smiley:
So far, nobody here in this repo has complained about a loss of performance/optimization associated with unsealing these WinUI classes, but someone is bound to mention it soon. I have zero complaints about this seeming loss because this oft-repeated sealing-optimization argument doesn't hold water, as far as I know. The oft-repeated claim is that when a class is sealed, the CLR/JIT knows with certainty that zero subclasses exist, and this enables various optimizations. This argument doesn't pass the test of logic, as far as I know, but everyone is welcome to correct me if I'm mistaken -- I'd be fascinated to hear the logic.
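For reference, the claimed optimization is roughly the following pattern (a hypothetical sketch with invented names, not WinUI code):

```csharp
class Shape
{
    public virtual double Area() => 0;
}

sealed class Circle : Shape
{
    public double Radius;
    public override double Area() => System.Math.PI * Radius * Radius;
}

static class Demo
{
    // Because Circle is sealed, the static type of 'c' has exactly one possible Area()
    // implementation, so a JIT/AOT compiler is *allowed* to devirtualize (and possibly
    // inline) the call instead of going through the vtable.
    static double Sum(Circle[] circles)
    {
        double total = 0;
        foreach (var c in circles)
            total += c.Area();
        return total;
    }
}
```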
I believe it's a case of logic dependency. The argument can only pass the logic test if the dependency also passes. If the argument's dependency fails the test, then the argument also fails the test. The dependency is the JIT compiler. The JIT fails the logic test, therefore the sealing-optimization argument also fails the logic test.
i.e. the JIT exists for primarily non-technical reasons: Sun invested heavily in promotion of its Java Virtual Machine (JVM), and engaged in marketing claiming various advantages of new tech ideas such as JIT etc. Thus for marketing/business reasons, it became necessary for Microsoft to likewise produce such a VM with JIT. Sun said, _"We have JIT tech!"_ and Microsoft was then able to reply, _"We have JIT tech also!"_ If you set aside the business reasons and consider only the technical reasons, then the JIT makes no sense, except in uncommon special cases.
JIT is a highly jittery claim therefore JIT is an appropriate name for such tech :-) Thus the logic chain for "sealed" breaks: The "sealed" keyword helps the JIT perform optimizations, but this is invalid because the JIT serves no purpose in normal apps. (To be precise, no technical purpose. It served a marketing purpose in the past.)
When a UWP app is compiled with the Release config instead of Debug, meaning when the .NET Native toolchain is used, the performance is (or should be) identical regardless of whether a class is explicitly marked "sealed" or not, because the .NET Native toolchain is able to determine that a class has zero subclasses regardless of whether the class is marked "sealed". Thus the "sealed" keyword should be irrelevant in regards to performance/optimization.
Normally JIT shouldn't be used. The .NET Native toolchain represents the way it should have always worked from the beginning, but this is easier to justify today than in the past, because today it's no longer necessary for Microsoft to battle Sun's Java marketing.
The CLR is useful when a UWP app is compiled with the Debug config, because it makes the app faster to compile and start up repeatedly -- a frequent activity in the testing and debugging phase. Although the "sealed" keyword allows the CLR+JIT to run parts of the app very slightly faster, this optimization serves no purpose because the Debug config doesn't need max performance, and the Release config shouldn't use JIT anyway.
Now that the Sun/Java marketing problem is out of the way, it's great to see that MS is using the .NET Native toolchain instead of jittery tech. Thanks for this excellent improvement! :+1:
When I was talking about "protecting the base implementation" I mostly meant "protecting the invariants required by the base implementation to be correct". When programming the programmer makes assumptions, being able to inject 3rd party logic in arbitrary points will usually break some of those assumptions and introduce bugs in the base class. Making things overrideable is a design decision introducing additional effort if you care about correct code.
Your concept of partial unsealing (or subclassing without overriding anything) is nice because theoretically it could be made safe to not break any invariants of the base class, but I think it needs support from the language/runtime to actually work as a building block which can compose classes in a safe way (safe as in "not introducing bugs by breaking assumptions")
Re: protecting the base implementation
The concern we've always had about unsealed classes is that it has to be well-designed to tolerate subclasses overriding a random number of members, maybe calling base or maybe not, maybe calling base before or after doing some work, etc.
For example, maybe Base has OnFoo and OnBar virtuals, OnBar assumes that OnFoo ran first (it's a virtual not an abstract), but the subclass overrode and didn't call OnFoo. So Base needs to write OnBar to tolerate this, and have test code to validate it, etc.
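Roughly, the situation looks like this (hypothetical sketch; the names beyond OnFoo/OnBar are invented):

```csharp
class Base
{
    bool _fooRan;

    public void Run()
    {
        OnFoo();
        OnBar();
    }

    protected virtual void OnFoo() => _fooRan = true;

    protected virtual void OnBar()
    {
        // Base assumed OnFoo already ran; a subclass that overrides OnFoo without
        // calling base.OnFoo() silently invalidates that assumption, so Base has to
        // defend against it (and test for it).
        if (!_fooRan)
            throw new System.InvalidOperationException("OnFoo did not run first.");
        // ... work that relies on state established by OnFoo ...
    }
}

class Careless : Base
{
    protected override void OnFoo() { /* forgot to call base.OnFoo() */ }
}
```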
That's a somewhat false argument because it's not caused by an unsealed class, it's caused by having unsealed _members_ on an unsealed class. Sealing a class just helps because you don't have to figure out which virtual members you need to seal.
The truth to that argument though is that you can override interfaces, and all class members are internally implemented as interfaces in WinRT. So the unsealed, activatable types in Xaml today are susceptible to this. But there are hundreds of those today, it just hasn't been an issue, and there's real value to unsealing.
@weltkante
I mostly meant "protecting the invariants required by the base implementation to be correct". When programming the programmer makes assumptions, being able to inject 3rd party logic in arbitrary points will usually break some of those assumptions and introduce bugs in the base class.
That's the standard answer, but isn't this one of those situations in life where the expert gives the sophisticated answer, while a child looking at the problem naively blurts out a simple answer that turns out to be right? Sometimes it pays to push myself to try to think like the child. Therefore, I now present the seemingly amusing, simple-minded answer from the child.
Firstly, you said: _"being able to inject 3rd party logic in arbitrary points will usually break some of those assumptions and introduce bugs in the base class."_
The child replies: _"When not even one single character of the base class .cs file is modified, no bug is introduced in the base class!"_
I have no idea what you are talking about, but apparently you aren't interested in my opinion about program correctness at all, so I'll not bother derailing this discussion further off topic by responding to those silly (but I have to admit, funny) allegories.
@weltkante -- I wanted to make the conversation lighter while simultaneously communicating valid serious points. I'm certainly interested in your opinion and I understand that you clearly made very good points there. I agree with your points. I'm just saying that I think those points aren't the end of the story. I believe there's more to the issue.
You've probably heard about the difficulty of encouraging people to think outside of the box, in order to find solutions or gain better understanding of a challenging problem. It is difficult to achieve this outside-of-the-box goal. I don't claim to be an expert in this, but I can say that I've heard that silly allegories (or the like) can assist teams to think outside of the box.
So it's not intended as disinterest in your opinion, rather it's actually more like the opposite: I'm so interested in your opinion that I want to expand upon your opinion by using an outside-of-the-box technique.
I understand if you don't want to try this outside-of-the-box technique -- it doesn't suit everyone, and it has no guarantee of success. If you (or anyone else) would like to try out the technique, then you can reply to what the "child/apprentice" said, and we can see where it goes, and see whether the technique succeeds this time. But it's certainly understandable if you don't want to.
@weltkante -- Oh, yes, I see what you mean. Yes, you're right to be bothered about the fact that I ignored this comment of yours:
Your concept of partial unsealing (or subclassing without overriding anything) is nice because theoretically it could be made safe to not break any invariants of the base class, but I think it needs support from the language/runtime to actually work as a building block which can compose classes in a safe way (safe as in "not introducing bugs by breaking assumptions")
I'd also be bothered the same as you are, if I wrote the above and it was simply ignored. Fair complaint. What you said is an idea worth exploring but it was unfairly ignored. Yes your irritation makes sense.
The reason I ignored it is not about you. The reason is actually unrelated to you. I agree with you that the concept would benefit from support from the language/runtime, but this idea was already declined by the CoreCLR team recently. (See the proposal that I linked in an earlier message.)
That's why I ignored it -- it's a dead end unfortunately. My agreement with your opinion doesn't change the fact that the CoreCLR team closed the issue and that's the end of it. I've accepted their decision regardless of whether I disagree with it. Therefore, the discussion here is limited to discussing unsealing in ways that _don't_ require support from the runtime.
@verelpode I'm not bothered by what you think I am ;-) Since you are asking I'll respond in an off-topic answer explaining myself. Please remember this is just an explanation of my opinion, I'm not trying to attack or defend anything.
The problem is that in practical reality the community of programmers at large has little interest in correct programs. The usual motivation is to take the fast route to add the features on his own agenda, without caring about whether the program is correct. This usually results in buggy programs, with the end-users having to carry the consequences, and the programmers moving on. As such your first attempt at an allegory was very misplaced when considering practical reality:
"Can't you just warn me and give me the freedom to make my own decision about whether to take the risk? I'm over 18 years old, you know."
This is the attitude of programmers who do not care. The programmer usually does not carry any risk so this is a pretend argument. Having read your other answers I understand that this may not be what you meant, so you don't have to bother defending or explaining it, I'm not holding it against you.
About your second allegory:
"When not even one single character of the base class .cs file is modified, no bug is introduced in the base class!"
This is simply false, which is why I didn't understand why you took it as a response, in particular considering I was previously just talking about breaking invariants. If a subclass can override a method which never returns null and then just does that, returning null, it will cause bugs in the base class by introducing a condition which normally could never have happened, all without changing a single line in the base class .cs file.
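A hedged sketch of that scenario (hypothetical names):

```csharp
class Base
{
    // Invariant assumed by the base class: GetName never returns null.
    protected virtual string GetName() => "default";

    public int NameLength()
    {
        // Written against that invariant -- deliberately no null check here.
        return GetName().Length;   // NullReferenceException if a subclass breaks the contract
    }
}

class Careless : Base
{
    protected override string GetName() => null;   // breaks the invariant without touching Base's .cs file
}
```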
You might argue that this is not a bug in the base class, but that is just semantics: all stack traces and diagnostics will point at the base class code failing to handle the "impossible" condition, so the programmer of the base class will be the first one asked to "fix the bug" -- usually by adding checks for the "impossible" condition (which in my opinion is bad, but that's not the point).
After these two responses pointing in the direction of someone not caring about software quality I didn't really want to bother anymore. I wasn't here to argue but to explain my point, and I had already done that; there was nothing additional to explain. Usually people stay in their line of reasoning and trying to discuss will lead to little result, so unless there is actually something to explain I usually refrain from continuing discussions just for the sake of an argument. Since you asked nicely I'm explaining again ;-)
Anyways, after all that I'll expand a bit more about my opinion about what correctness means.
Invariants are a very powerful and very important tool; without invariants it is impossible to write correct software at all. Unfortunately today's tooling has made very little progress in allowing programmers to specify or make use of invariants, so it's all in the programmer's head. Occasionally there are academic attempts but they rarely arrive in mainstream languages; the programming language Rust is going in a very promising direction but there is still a long way to go. C# is taking the first baby steps with nullability annotations. In general I expect it to take many more years until it becomes actually possible to have the compiler assist in verifying invariants end-to-end through the whole program and only insert runtime checks when invariants cannot be proven statically.
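For what it's worth, here is a tiny sketch of those "baby steps" (C# 8 nullable reference types; the names are hypothetical):

```csharp
#nullable enable

abstract class Base
{
    // The non-null contract is now part of the signature: an override that returns null
    // gets a compiler warning instead of living only in the programmer's head.
    protected abstract string GetName();
}

class Careless : Base
{
    protected override string GetName() => null;   // warning CS8603: possible null reference return
}
```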
Some languages like JavaScript give up and place the entire burden on the programmer. This can work great if the programmer actually understands the whole program he is working on, including all libraries he is using, but in practice it results in very short-lived software which needs constant maintenance (or has to be frozen in time).
Sealing classes increases the stability of software; it is not necessarily a bad thing. It allows the owner of the library to consider far fewer conditions where things can go wrong. It makes it possible to reason about how the code behaves.
In particular for closed-source libraries where the programmer doesn't know the semantics, sealing makes a lot of sense. The owner of the library doesn't have to document all the invariants you have to follow when extending something. However once a library becomes open source it makes much more sense to open up and allow subclasses, because now programmers can actually read the source code and infer the invariants from code or comments.
So, yes, I'm all for unsealing as many classes as possible, considering that WinUI is open and everyone can read the code, but it should stay sealed if it has to protect any important invariants on a class to avoid users breaking its implementation.
There are other forms of extension which don't involve subclassing, namely composition, where you build a helper class containing the objects you are extending. Most programming languages don't make that very convenient, but it's just as valid as a technical solution.
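A minimal sketch of that composition approach, reusing the auto-shrink idea from earlier in the thread (assumes UWP/WinUI; the names are invented):

```csharp
using Windows.UI.Xaml.Controls;

// Instead of deriving from TextBlock, wrap one and add behavior alongside it.
class AutoShrinkText
{
    public TextBlock Inner { get; } = new TextBlock();

    public AutoShrinkText()
    {
        Inner.SizeChanged += (s, e) =>
        {
            // Shrink the font when the control gets narrower (simplified heuristic).
            if (e.NewSize.Width < e.PreviousSize.Width && Inner.FontSize > 10)
                Inner.FontSize -= 1;
        };
    }
}
```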
The last point I want to make before I conclude is that being able to subclass things is just convenience, not actual technological empowerment. What they really need to do is expose and document all those internal APIs and interfaces which allow implementing UI controls from scratch (and if I understood it right, they are partially going to do this by open sourcing the whole control library with WinUI 3).
If a UI framework provides core primitives and a library of controls built upon them, all those controls built upon the primitives should be possible to implement outside of the library, otherwise it's not a good library. WPF largely did that, UWP didn't, which made this framework pretty much useless for advanced users. You couldn't program your "better panel" because you couldn't program a panel in the first place.
Subclassing a panel is one thing, it's nice, but being able to implement a panel is the real solution.
Sorry for the long response, but you asked for it ;-)
I wrote:
The child replies: "When not even one single character of the base class .cs file is modified, no bug is introduced in the base class!"
@weltkante replied:
This is simply false, which is why I didn't understand why you took it as a response, in particular considering I was previously just talking about breaking invariants.
Breaking invariants -- so you're saying, for example, the .NET Framework v4.8 (or UWP or NuGet package etc) contains a class like this:
```csharp
public class ExampleBase
{
    ...
}
```
And an app developer writes a derived class in his/her app, like this:
```csharp
class MySubclass : ExampleBase, ICapabilityX
{
    ...
}
```
Sometime later, v5.0 of the .NET Framework is released and ExampleBase now implements the same interface as MySubclass does (ICapabilityX), and the so-called "broken invariant" situation occurs:
```csharp
public class ExampleBase : ICapabilityX
{
    ...
}

class MySubclass : ExampleBase, ICapabilityX
{
    ...
}
```
That's your point, right? Obviously you made a good point, but here's another good point: The "broken invariant" is only theory. The reality is different than the theory. In the real world, when Microsoft later releases the .NET Framework v5.0 containing the new version of ExampleBase, the app still compiles and runs successfully, and doesn't experience any new bugs, and behaves exactly the same as it did previously. You said it _"introduces bugs in the base class"_, but in reality the app still compiles and runs successfully.
_"That's impossible!!"_, replies the theorist. Actually, it's only impossible in theory, not in reality. Test it yourself in a real ".csproj" in Visual Studio and you'll see that the app still compiles and runs successfully without any new bugs. That's hard evidence, and hard evidence is better than hazy theories. What's more important, the reality or the theory? Obviously the real world is the top priority, and theory is the second priority after reality, and this ranking needs to be respected otherwise people would live in an imaginary dream world of their own invention.
The reason why the app still compiles and runs successfully (despite the allegedly catastrophic "broken invariant") is that each ".csproj" targets a particular version of .NET Framework, or UWP target version, or NuGet package version. The app doesn't suddenly break when a new version of .NET Framework is released. When the app developer wrote MySubclass, the ".csproj" was configured to target .NET Framework v4.8. When .NET Framework v5.0 is later released, the app is unaffected -- it remains configured to target .NET Framework v4.8, until the app developer decides that he/she is ready to change the target version and test his/her app's compatibility with the new version of .NET Framework and make changes where necessary.
The app developer comfortably waits until he/she is ready and has the time to modify MySubclass to make it compatible with .NET Framework v5.0. No panic. No catastrophe. No end-users suffering. Thus I cannot comprehend why you are making such a big fuss about so-called "broken invariants". Yes the invariant breaks, good point, but it doesn't matter as much as you claim.
The conclusion appears to be that the child was correct after all. That's the reason why I push myself to try to think like the child, as I said.
I would also suggest paying attention to what @MikeHillberg wrote, because he achieved a nice balance between theory and reality, without ignoring either of these aspects. First he took the theory into consideration; he wrote:
The concern we've always had about unsealed classes is that it has to be well-designed to tolerate subclasses overriding ..... For example, .....
Then in the same message, he also took the practical reality into consideration; he wrote:
But there are hundreds of those today, it just hasn't been an issue, and there's real value to unsealing.
This means he didn't wildly spring to either of the two extremes, and he didn't ignore either aspect. Nicely done, in my opinion. Finding a reasonable balance is the key.
@weltkante wrote:
If a subclass can override a method which never returns null and then just does that, returning null, it will cause bugs in the base class by introducing a condition which normally could never have happened, all without changing a single line in the base class .cs file.
So you're saying the app will crash (or throw System.NullReferenceException) because the base class didn't check for null. And then, as you said, _"This usually results in buggy programs, with the end-users having to carry the consequences"._
I disagree. The end-users don't carry this consequence and don't experience this bug because obviously the app developer will not release a crashing app to end-users. The app developer normally thinks: _"I wrote a derived class and then tested it but my app crashed. Why? How can I change my derived class in order to eliminate the crash?"_
The app developer does not think: _"I wrote a derived class and then tested it but it crashed, so now I'll send my crashing app to all of the end-users, pronto! Yeeeehaaaw!!"_
I believe I am most likely correct in saying that not even in Texas are the app developers cowboys who say _"Yeeeehaaaw!!"_ while immediately sending their crashing app to end-users.
I find it difficult to believe that many software engineers are irresponsible cowboys who recklessly ignore theoretical concepts ("broken invariants" etc) and carelessly release buggy software that end-users must suffer. You wrote, _"This is the attitude of programmers who do not care"_, and so forth. Are you sure that your opinion of software engineers is not an unfairly low assessment or an exaggeration of a problem that only rarely occurs?
You might argue that this is not a bug in the base class, but that is just semantics, all stack traces and diagnostics will point at the base class code failing to handle the "impossible" condition, as such the programmer of the base class will be the first to be sought to "fix the bug"
Even if the app is somehow released to end-users despite crashing, the end-users will report the bug to the app developer, not to the Microsoft employee "Joe Bloggs" who is the programmer of the base class in .NET Framework or UWP etc.
The app developer then runs the app in the VS Debugger and sees that the NullReferenceException is thrown in the base class (true), but also sees that the type of the instance is the app developer's own derived class. At this point, under normal circumstances, a typical app developer would not wildly spring to a conclusion that it _must definitely_ be Microsoft's fault, and certainly the end-users also don't think it's Microsoft's fault. A far more likely scenario is that the app developer thinks something similar to:
_"Hmmm, the base class worked fine originally, without my derived class. It only started crashing when I wrote my derived class. Maybe I misunderstood the base class and need to change something in my derived class."_
Although you're correct in saying that the stack trace shows that the NullReferenceException is thrown in the base class, only a highly inexperienced programmer would think that stack traces always pinpoint the real location of the bug accurately. Via practice in writing and debugging real-world software, software engineers quickly learn that stack traces are often misleading and that multiple debugging features in the VS Debugger must be used, not only the stack trace. Except for beginners, software engineers know that they must use their own brain to find the real underlying cause and location, not just blindly trust a dumb (auto-generated) stack trace.
Again the conclusion appears to be that the child was correct after all. That's the reason why I push myself to try to think like the child, as I said.
@verelpode
It is true that the JIT could be far better at devirtualization. The Java HotSpot VM is really good at that. From following the JIT issues on this issue tracker, and from my own experimentation with code quality, I do not place too much hope in the JIT performing advanced optimizations. Technically it can happen, but practically it will not happen anytime soon.
The correctness concerns about unsealing things weigh very heavily in my mind. If inheritance is not planned for, potential for bugs arises (which is costly), architecture becomes murkier, compatibility concerns appear, support tickets are generated, developer confusion increases, and probably other things.
I recently posted an opinion about unsealing classes (in which I am skeptical of the idea). Posting it here as well: https://github.com/dotnet/coreclr/issues/26465#issuecomment-527150168
@GSPP
The correctness concerns about unsealing things weigh very heavy in my mind.
Would you accept a compromise-solution of sealing members instead of sealing classes? I realize that this compromise means that the program correctness is not perfect, but there exists a reason for this lack of perfection: In order to achieve a working balance between the theoretical ideal and the practical requirements.
(The above question is regardless of the fact that UWP/WinRT metadata currently doesn't support member sealing. Theoretically WinRT might support member sealing in future, and of course C# already supports member sealing. So if member sealing will be supported in WinRT, would you accept this compromise?)
Would you accept another compromise-solution where the behavior of sealed classes (in C# and/or WinRT) remains the same as currently, except that the compiler would treat it as a strong recommendation instead of enforcing it as a strict rule? When an app defines a class derived from a sealed base class, the compiler would issue a warning instead of the current hard error that aborts compilation. Each app developer would be given the freedom to respond to the warning in the manner that overall best suits his/her/their particular circumstances (the circumstances are different for different developers).
i.e., what I wrote in my previous message was actually a serious suggestion, despite the lighthearted wording. I wrote:
"Can't you just warn me and give me the freedom to make my own decision about whether to take the risk? I'm over 18 years old, you know."
Thus: Would you accept that it's sufficient to warn developers, or do you say that developers MUST always be forced to strictly obey the rule of no derivation of sealed classes?
@verelpode I do not sufficiently understand the consequences of allowing to derive from sealed classes. So I don't know what specifically I would accept.
Java's virtual-by-default approach is a disaster in my mind from a correctness perspective. You just can't code if every method can be pulled out from under you and replaced with something else. Every time you mark something virtual you need a good understanding of the contract that this method must fulfil. If you override Stream.Read, that's clear. If you override some random method such as TextBox.SetText, then who knows what internal invariants that base method was maintaining.
If inheritance is not designed for but allowed you are changing library internals. This comes with the same set of problems that private reflection comes with.
I always wonder why people are so fond of deriving from things when there are no members to be overridden. It really is just a way to add fields or convenience methods. In my experience, composition handles these cases quite cleanly and better.
But I guess every developer is biased by the set of applications he has been working on. So I might not fully appreciate the usefulness of these "hack inheritance" patterns. Nothing wrong with a good hack at the right time :smile:
Just realized I couldn't inherit from AutoSuggestBox... 😩
Can this be done in the next WinUI 3.0 preview? Seems very simple to do and would close a few issues and help people porting from WPF.
I am here to show support for as many unsealed classes as practically possible. Virtual methods are always a huge plus too.
My main concern today is with respect to the RepeatButton.
ButtonBase, ToggleButton, HyperlinkButton, DropDownButton, SplitButton and ToggleSplitButton are all defined as inheritable classes, while RepeatButton is marked with the Sealed modifier. I will now need to re-invent the RepeatButton to achieve my goal today.
@ryandemopoulos Do you have any updates on this issue? Is it already known which steps there are to unseal all the things?
Maybe this is something the community can help with when WinUI 3 is open source.
I just saw on the community call that this may not happen for 3.0 and would then be blocked until 4.0.
Simply put, that's not acceptable for a number of scenarios. Could a little background be provided on why this takes so long to do? It seems relatively simple.
If the code is made open source before the 3.0 release, this is also something the community might be able to contribute. As I said, I don't expect this is too difficult. I can also see WinUI 3.0 open sourcing not being done until launch -- a full year behind schedule.
So far I'm glad I changed course and evaluated Avalonia. I have been able to achieve much more in a shorter timeframe so far.
For WPF-centric developers writing desktop applications who seek better rendering performance with CSS-style selectors, it's a great choice. Cross-platform capabilities are a huge bonus considering that wasn't my goal for finding a WPF alternative. (I initially chose it over Uno because Uno seemed to have a dependency directly on WinUI.)
This isn't meant to downplay WinUI or any teammate working tirelessly on the project. However, there are just too many signs that more time is needed for them to provide a decoupled and mature enough framework for enterprise application use.
My desperation for what WinUI could provide caused me to dive in too early. At the pace the project is moving, I have to wonder how relevant it will be for direct consumption by modern applications when it's ready. By the time it achieves the milestones we all need to see, it still won't provide cross-platform capabilities and would seem to affect downstream products like MAUI and Uno? Not facts, just wondering out loud since that's all we can ever do.
I know none of this is without challenge, but I just can't get over the fact that it's nearly 2021 and we still don't have the ability to write enterprise desktop applications using a UI framework started in 2012! There are countless half-baked ways to somewhat meet our needs, but WPF is still the best choice for Desktop developers if you want to stay with a Microsoft backed product, a goal I always try to achieve.
I believe in Microsoft's ability to execute a plan, just look at how amazing dotnet5.0 is! Knowing the brain power they have aboard, it makes it harder to accept that one of the most important pieces of application development is missing. We are missing WinUI.
Sorry to be a pessimist, I know COVID isn't helping anyone's schedule. I just wish Microsoft would throw a huge pile of cash at this and get it done. (If resources are the issue, I wouldn't know and don't try to pretend I do.)
Come on WinUI, we believe in you! You got this, give us the goodies so we can write some absolutely amazing Microsoft applications! We depend on you, are downstream from you and cannot shine without you.
I'm super impressed with the alternative I chose, I feel relieved I have a solution. However, I still have my bucket of popcorn and am eager to see this succeed.
Getting a little side-tracked but to continue your line of discussion: I wish Avalonia nothing but success. In the end it may win out against all the others. Innovation drives the future. The vast majority of design changes Avalonia made from WPF/UWP were done for justifiable and good reasons. That is not something that can be said for UWP (WinUI 2.x). UWP has been a thorn in the side of developers since its inception. However, it very much can be used to create WPF-level desktop apps. It just takes a bit more work in a few places -- less in others. Stability is the real issue though.
Uno has no dependency on WinUI other than that it follows the same API and chose not to deviate. That is the correct choice for their use case and it's what MAUI should have done. I also wish Uno great success and am banking on it now myself.
As you said on another issue, it will be very interesting to see where we are in a few years' time. There will be 3 cross-platform frameworks for writing C# apps with .NET 5. It's a great place to be... we just have to get there before the competition.
Since XP, Microsoft's problem has always been the same thing: overpromise and under-deliver, then cancel when you can't get users and never understand why. I've worked for some very large companies and this is almost always the result of management turnover and too few people doing the work vs. managing the work -- among other reasons I won't add. I have nothing against the WinUI team and they do a lot of great things and I appreciate the transparency, even schedule slips. But I can't easily accept 1 year delays from a company this size.