Runtime: Make interfaces the official ADO.NET Provider API instead of classes

Created on 26 Sep 2015 · 174 comments · Source: dotnet/runtime

From what I can see currently on the corefx-progress page for System.Data.Common, the interfaces (IDbCommand, IDbConnection, etc.) were removed in favor of abstract classes.

But in the new API, most of the main methods are not virtual or abstract. On DbCommand alone we can see this:

public DbConnection Connection { get; set; }
public DbParameterCollection Parameters { get; }
public DbTransaction Transaction { get; set; }
public DbParameter CreateParameter();
public Task<int> ExecuteNonQueryAsync();
public DbDataReader ExecuteReader();
public DbDataReader ExecuteReader(CommandBehavior behavior);
public Task<DbDataReader> ExecuteReaderAsync();
public Task<DbDataReader> ExecuteReaderAsync(CommandBehavior behavior);
public Task<DbDataReader> ExecuteReaderAsync(CommandBehavior behavior, CancellationToken cancellationToken);
public Task<DbDataReader> ExecuteReaderAsync(CancellationToken cancellationToken);
public Task<object> ExecuteScalarAsync();

While these methods can certainly be made virtual or abstract, it would be much more useful to have the real interfaces back, and make any public API depend on these interfaces instead of the abstract classes.

This is mostly useful when developing libraries. Today it's very hard to mock a data reader to make it return a specific value for testing purposes. The same goes for ensuring that ExecuteReaderAsync was called rather than ExecuteReader, etc.
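
For illustration, a sketch of what this looks like today against the abstract classes, assuming the Moq library and its Protected() helpers (the member names match the current System.Data.Common surface):

```csharp
using System.Data;
using System.Data.Common;
using Moq;
using Moq.Protected;

// Sketch only: the public ExecuteReader() overloads are not virtual, so the
// test has to set up the protected ExecuteDbDataReader member by its string
// name instead of naming the method it actually cares about.
var reader = new Mock<DbDataReader>();
reader.Setup(r => r.Read()).Returns(false); // fake an empty result set

var command = new Mock<DbCommand>();
command.Protected()
       .Setup<DbDataReader>("ExecuteDbDataReader", ItExpr.IsAny<CommandBehavior>())
       .Returns(reader.Object);

// The non-virtual public overload forwards into the protected member,
// so this returns the fake reader.
DbDataReader result = command.Object.ExecuteReader();
```

With an interface, the same arrangement would be an ordinary Setup(c => c.ExecuteReader()), and a test could verify exactly which overload was called.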

I propose the provider factory instead should be made as an interface:

public interface IDbProviderFactory {
    IDbCommand CreateCommand();
    IDbConnection CreateConnection();
    IDbConnectionStringBuilder CreateConnectionStringBuilder();
    IDbParameter CreateParameter();
}

And then follow from there to the rest of the provider to things like IDbDataReader, IDbTransaction, etc.

We know the interfaces became out of sync for some reason in the past and the abstract classes were made the official API, but this doesn't need to be the case anymore in corefx.

Note that this doesn't mean removing System.Data.Common in any way; instead, the Common classes would implement these interfaces, and you wouldn't use System.Data.Common unless you're implementing a provider. Applications would depend only on the interfaces.

Please consider this to make the API more testable on corefx 1.0.

Related to discussions on dotnet/runtime#14302 and dotnet/runtime#15269.

area-System.Data

Most helpful comment

We cannot add members to interfaces

Correct, and that's a _good_ feature of interfaces. The preference for abstract base classes is the safest way to help API entropy along, instead of fighting it.

While you don't _have_ to follow the principles of OOD, I'd suggest that you do, when creating OO APIs. In short, the Interface Segregation Principle (ISP) states that _no client should be forced to depend on methods it does not use_.

If you add new methods to an existing abstraction, you automatically violate the ISP.

You can decide that you 'don't have to adhere to SOLID' because you're Microsoft, and you're working with the BCL, so therefore 'normal rules don't apply' (not actual quotes; just paraphrasing the usual counter-arguments).

Having maintained a couple of open source projects for 6-7 years, in my experience it's better to keep interfaces small. If you need to add new capabilities to an abstraction, introduce a new interface.

All 174 comments

They would have switched to abstract base classes because interfaces can't be versioned.

I'm guessing the non-virtual methods call another method that is virtual. Virtual methods hurt performance so you don't want to make things virtual unnecessarily.

@JamesNK I see. But with .NET Core being a new API and the ADO.NET API having been quite stable for almost a decade, do you think this is still a valid concern? Also, talking about database access, my guess is that the cost of virtual methods is dwarfed by the cost of the database access itself.

@NickCraver, @roji, @FransBouma since you guys seem to have interest in the ADO.NET API, have anything to say about this?

@YoungGah, is this something worth pursuing?

I'm guessing the non-virtual methods call another method that is virtual. Virtual methods hurt performance so you don't want to make things virtual unnecessarily.

In the process of executing a query on a remote database and processing results, the nanoseconds lost on a virtcall are negligible. Besides, ADO.NET uses this system since the beginning (so do lots of other APIs in .NET) and no-one complained that their DB code is so slow due to virtual method calls ;)

I can see async methods in your list, so I'm guessing just a couple of years ago MS couldn't have added async to IDbCommand. Who knows what tomorrow will bring that will require new methods or properties.

Interfaces don't version.

Performance is just one reason not to make something virtual. Reducing the surface area for implementations, maybe? I'll let someone at MS say why they decided not to; I don't know much about ADO.NET so I'd just be speculating.

@JamesNK I think your concerns are valid, but there are 2 important points to consider:

  1. ADO.NET has been pretty much stable since .NET 2.0, which is a decade now - although the async API was added later, it didn't change the behavior of the API, just added async counterparts - I don't see any big changes in the database driver paradigm coming anytime soon
  2. CoreFx is supposed to have a different versioning idea, since you can just keep the previous CLR for old apps. So interface versioning issues shouldn't have such an impact here

Consider also that even a SQL Server on "localhost" will spend at least a few ms just to connect and return an empty query. In practice, most _fast_ queries to relational databases take ~20ms.

Being able to mock the API with standard tools like NSubstitute or Moq is much more valuable to the developer today than saving microseconds in virtual method lookups.

I guess I don't have a very strong opinion here, but here are some remarks:

  • Agree with the above that removing virtual vs. non-virtual is negligible in an API for database access
  • Base classes do allow ADO.NET to provide implementations, and I'm guessing that's what most of the non-virtual non-abstract methods are about - the overload of ExecuteReader that doesn't accept a CommandBehavior passes CommandBehavior.Default to the overload that does. If you switch to interfaces, every provider will have to implement ExecuteReader() with the exact same boilerplate...
  • Am not sure this is valid across all major mocking frameworks, but at least in Moq, isn't it just as easy to mock a base class as it is an interface?
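
The overload-forwarding mentioned above is the classic template-method shape. A simplified sketch (the class name SketchCommand is hypothetical, and this is nowhere near the full DbCommand surface):

```csharp
using System.Data;
using System.Data.Common;

// The base class implements the convenience overloads once; a provider
// overrides only the single protected core method.
public abstract class SketchCommand
{
    public DbDataReader ExecuteReader() =>
        ExecuteReader(CommandBehavior.Default);

    public DbDataReader ExecuteReader(CommandBehavior behavior) =>
        ExecuteDbDataReader(behavior);

    protected abstract DbDataReader ExecuteDbDataReader(CommandBehavior behavior);
}
```

With interfaces alone, every provider would have to repeat the forwarding overloads itself.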

So overall the idea of dropping either the base classes or the interfaces seems good (simpler). Since I don't see any advantage to interfaces (unless I'm wrong about the ease of mocking), and base classes can provide common functionality (i.e. the non-virtual non-abstract methods), I guess Microsoft's approach of dumping the interfaces seems good to me...

I agree with @roji on all his points.

@roji just a note, I'm not proposing dropping the base classes, I'm proposing adding the interfaces as the default API. Base classes can still implement default behavior.

As for testing, I had huge issues testing if my API was calling the right methods. To check if ExecuteDataReader received the right parameters for example, you must check another protected method that is called internally with different parameters. This is far from ideal.

Currently, unless I'm mistaken, the only framework that can mock the ADO.NET API is the MS Fakes framework which can mock absolutely anything by intercepting calls. Moq and others can't do that.

I'm interested to see if other people had similar issues.

@roji just a note, I'm not proposing dropping the base classes, I'm proposing adding the interfaces as the default API. Base classes can still implement default behavior.

Sorry, I misunderstood that. In that case, isn't your proposition more or less keeping things the way they are in .NET (not that there's anything wrong with that)?

As for testing, I had huge issues testing if my API was calling the right methods. To check if ExecuteDataReader received the right parameters for example, you must check another protected method that is called internally with different parameters. This is far from ideal.

If I understand your scenario (not sure), Moq's CallBase is useful for this kind of scenario - default implementations are inherited from the base class

@roji

isn't your proposition more or less keeping things the way they are in .NET (not that there's anything wrong with that)?

Not exactly. The interface API was added in .NET 1.0 and deprecated in 2.0. Since 2.0, the interfaces are there for compatibility, but there is no interface for ProviderFactory or the other classes in Data.Common. There is also nothing for the async API or for methods added in 2.0 or later.

Moq can only mock things that are mockable. There must be some method that is either virtual or abstract that it can override, or a protected method it can call. The current API provides methods for some cases, but not for most of them. There are many things that are internal, private and out of reach unless you use reflection. Only MS Fakes can do it, because it replaces the reference with a shim, but that is only available on VS Enterprise and useless for open source projects.

It sounds like I have a very specific case, but certainly anyone who ever tried to mock this API has faced this issue. Just google it: almost every solution ends up with "mock the legacy interface API or build a wrapper you can mock".

@nvivo OK, thanks for the extra details - I admit I haven't gone very far with mocking ADO.NET.

The thing I don't understand, is why you would want to mock internal, private and otherwise out of reach methods of an API. Shouldn't you be mocking public methods that are directly available to your own application code (which is what you're trying to test)? I do see the issue with the non-virtual methods (e.g. the 0-parameter ExecuteReader() overload), but given that in ADO.NET these always (?) call some virtual overload (e.g. ExecuteReader(CommandBehavior)), is there a real issue here?

Just trying to understand your problem scenario, can you give a simple example?

@nvivo We currently have no plan to bring the interfaces back because of the versioning issue which was already pointed out by several people on this thread. A good example of interfaces getting behind is when async and streaming methods were added in .NET Framework 4.5. When we added those new features, we carefully looked into extending the interfaces. The options we had at that time were to provide either an InterfaceFooV2 or separate interfaces for async and streaming. We didn't want to add InterfaceFooV2, as we could foresee that we would want to add more APIs in the future. Adding a separate interface for each new piece of functionality would be confusing, as they are not tied to the existing interfaces.

@roji I had cases where I want to ensure that a specific overload of ExecuteReader was called, and not that "any of the overloads". It's the kind of thing you have only in libraries, not on user code.

@YoungGah thanks for the info. I'm closing this then.

Do the people responsible for this change have any idea of its effect? The core ADO.NET interfaces have been around for over a decade, and with data access being the center of most business apps, I'm having trouble conceiving how avoiding breaking so many existing code bases isn't the highest priority. These are some of the most critical high-level interfaces in .NET; removing them breaks every ADO.NET data access library out there and, by consequence, every project using one. Removing them creates an artificial fragmentation, causing frustration and confusion that will hamper adoption of CoreCLR.

You can still version interfaces and make them source compatible by just adding extension methods for any new APIs off IDbCommand, e.g.:

public interface IDbCommand
{
    //...
}

public class DbCommand : IDbCommand
{
    public void NewApi() { /* new functionality lives on the class */ }
}

public static class DbCommandExtensions
{
    public static void NewApi(this IDbCommand cmd)
    {
        // Forwards to the concrete class; assumes providers derive from DbCommand.
        ((DbCommand)cmd).NewApi();
    }
}

The core IDbCommand interface never has to change after DNX is released, and you can continue adding functionality with the strategy above. You could also later (in a major breaking version) roll up these extensions and merge them into the core interface. Either way we get the core stable ADO.NET interfaces that are critical for migration of existing code bases and consequently for adoption of CoreCLR.

I've been asked by @davkean to provide concrete examples of what impact removing core ADO .NET interfaces will have. I can't imagine this change was considered without evaluating the immeasurable impact it would have on the existing .NET ecosystem, but then again it's also been done so there's a possibility it wasn't considered - which I'm going to assume here-in it wasn't.

Despite EF's role of being .NET's default ORM and its outstanding success in capturing a large majority of market share, there's still a large population of .NET developers who prefer to instead use alternative ORM's for a number of different reasons. E.g. an important feature as it relates to CoreCLR is that they have first-class support running on Mono/Linux/OSX as well as supporting multiple alternative RDBMS's. Since CoreCLR is pitching heavily for the Linux/OSX developer market, the more support there is for alt RDBM's, the better. Another important trait of the population of devs who adopt Micro ORM's is that they've evaluated outside of MS ecosystem defaults to choose the ORM that's most appropriate for them. From everything I've seen there's a high correlation between active .NET OSS (i.e. anti-Dark Matter) devs and devs who adopt Micro ORMs, likewise I expect this to have a high correlation with early adopters of CoreCLR - whose major value proposition is to develop on OSX/Linux. These are some of the reasons why it'd be beneficial to include the surrounding .NET ecosystem in your decision making when making fundamental breaking design choices like this.

Alternative ORM downloads

A cursory glance at NuGet downloads provides an indication of what the non-EF market-share looks like:

NHibernate - 1M+
Dapper - 1M+
OrmLite - 500k+
Simple.Data - 300k+
PetaPoco - ~100k
NPoco - 30k+

The real numbers are a lot more than this as many Micro ORM's like Dapper, Massive, PetaPoco, NPoco, etc were designed to fit in a single drop-in .cs so NuGet isn't reporting its true usage. There's also closed source ORM's like LLBLGen Pro which have a large user base but its usage isn't reported by NuGet, likewise I'm sure I've missed a number of other ORM's I forgot / don't know about.

Impact to alternative ORM's

Thanks to GitHub we can do a quick search to see how many different source files contain the core
IDbConnection, IDbCommand and IDataReader ADO .NET interfaces impacted by this change:

              IDbConnection   IDbCommand   IDataReader
NHibernate         59            181           132
Dapper             17             21            17
OrmLite           179             54            26
Simple.Data        29             27             6
NPoco               4             10             3

Note: these results only show source files, the actual number of broken references are much higher.

Impact to Customer Source code

The actual impact of this change also extends to all projects dependencies using these ORM's.
Unfortunately the effect isn't limited to internal implementations as it also breaks the customer
source code as many Micro ORM's are just extension methods over ADO.NET Interfaces so the client
code looks like:

IDbConnection db = ...

//Dapper
db.Query<Dog>("select Age = @Age, Id = @Id", new { Age = (int?)null, Id = guid });

//OrmLite
db.Select<Author>(q => q.Name.StartsWith("A"));

One extensibility feature from using extension methods is that these ORM's are "open-ended" and Customers can extend ORM's with their own first-class API's by adding extension methods in their own projects - these are broken as well.

Obviously any source code that passes IDbConnection around is now also prohibited from working on CoreCLR.

E.g. the core ADO.NET Interfaces are heavily used throughout high-level frameworks like ServiceStack, adopted because it's the minimal dependency to enable multi-RDBMS data access. It was also assumed out of all the classes that were unlikely to change, it would be the core ADO.NET Interfaces.

Summary

I'm personally astonished there was ever going to be a future in .NET without these interfaces. Interfaces are by design purpose-specific to allow for multiple implementations, and ADO.NET is one of the most important "open provider models" in .NET. I've no idea what priorities caused these interfaces to be removed, but both the massive existing .NET code bases that rely on these interfaces and the non-EF .NET ecosystem should be given a much higher priority. This is causing a significant disruption and is a major barrier to supporting both existing .NET 4.x and CoreCLR platforms, forcing a non-trivial amount of additional complexity onto all existing code bases affected by this.

The current perception is that ADO.NET/CoreCLR is being re-designed to provide first-class support for EF and SQL Server with the rest of the ecosystem being disregarded - non-transparent breaking decisions like this only goes to re-enforce this stereotype.

As a previous member of the .NET team (I now work on Roslyn), I was heavily involved in the original design of the new data common, along with the SQL and Entity Framework teams. I'm not involved in it at the moment, but I can add some background to help correct some of the statements that I'm seeing on twitter and above.

The current design of System.Data.Common for .NET Core started in December 2012, approximately 2 years before we open sourced.

Goals:

  • Design a modern surface area for .NET Core, that reduced duplication of concepts (IDbConnection vs DbConnection), confusions, mistakes and layering issues (split SqlClient from DataCommon, split DataSet from core abstractions) from the original design from .NET 1.0. One that would be easily picked up, both by existing consumers and _new_ developers to .NET Framework.
  • Enable providers and consumers to build a single binary/source against .NET Core, and then run that same binary on .NET Framework. Note that the reverse was not a goal: being able to take a .NET Framework binary/source and run it without changes on .NET Core.

Correction of a few things being spread around:

  • The interfaces as they stand are not versionable. We cannot add members to interfaces, and the proposal above provided by @mythz via extension methods requires that providers derive from the abstract base classes anyway.
  • System.Data.Common _has not_ moved away from the provider model. The interfaces were removed because they were a legacy .NET 1.0 concept that was replaced/duplicated by the abstract base classes introduced in .NET 2.0. At the time we made this decision, every provider that we could find derived from the base classes.
  • Like the interfaces, the base classes are mockable.
  • We understood there would be some changes needed for those using the .NET 1.0 interfaces, however, it is a very simple port to move to the base classes. For example, see these few lines of change for AutoMapper: (https://github.com/AutoMapper/AutoMapper.Data/blob/master/AutoMapper.Data/DataReaderMapper.cs#L14).

Something I'm having trouble understanding:

We cannot add members to interfaces

How is it not ok to add members to CoreCLR interfaces yet, it's fine to rip them out entirely?

the proposal above provided by @mythz via extension methods requires that providers derive from the abstract base classes anyway.

The important part is that the interfaces exist and allows source code that references them to compile.

If you don't want to version the interfaces, fine, EOL them; just restore the interfaces as they were before they were ripped out and mitigate the burden now imposed on every other library using them. These core interfaces were never even obsoleted - no warning or migration path was provided. Yet we're getting punished for adopting a published, well-known, stable API as an integral part of our libraries?

it is a very simple port to move to the base classes.

This needs to be added on every source file that references the ADO.NET Interfaces, and forces Customers to litter their code with custom build symbols.

There doesn't seem to be the same care for backward compatibility here, but purposely breaking existing customers in a future release is just not an option (I'm surprised it's even considered, given ADO.NET's much larger market share). We can't break existing 4.x customers, and yet we're being asked to support CoreCLR - so where does this leave existing 4.x libraries that want to maintain backward compatibility and also support CoreCLR? Should we be duplicating docs/examples as well?

How is it not ok to add members to CoreCLR interfaces yet, it's fine to rip them out entirely?

The surface area in .NET Core needs to be binary compatible with .NET Framework to enable 1st parties and 3rd parties to build against .NET Core, and run portably without changes on .NET Framework. Adding members to interfaces violates that, as consumers of those members would fail when they ran on .NET Framework.

I'm not arguing for the removal or addition of these interfaces, I just wanted to add some background to why the design ended up where it is. I'll let the current owners including @YoungGah and @saurabh500 tackle that.

Just to summarize the thread, the reason that you believe Microsoft should port these interfaces, is to enable the ecosystem to easily port to .NET Core, while maintaining their .NET Framework implementations?

is to enable the ecosystem to easily port to .NET Core, while maintaining their .NET Framework implementations?

Yes.

Just to summarize the thread, the reason that you believe Microsoft should port these interfaces, is to enable the ecosystem to easily port to .NET Core, while maintaining their .NET Framework implementations?

Yes. External APIs are now broken if I port my codebase (LLBLGen Pro) to corefx: I then have to expose 2 apis or break the existing codebase for all my users.

It might be fine for you people to break our stuff as you don't feel the pain, we do. It's not fine by me: I have to either live with a butchered code base and maintain 2 APIs which do the same thing, OR break my users' code because you thought that was OK.

I also don't get why interfaces don't version, it's just an interface, like a class has an interface too. CoreFX can perfectly fine add the async methods to the interfaces.

The surface area in .NET Core needs to be binary compatible with .NET Framework to enable 1st parties and 3rd parties to build against .NET Core, and run portably without changes on .NET Framework. Adding members to interfaces violates that, as consumers of those members would fail when they ran on .NET Framework.

Easy solution: add the interfaces as they are now. And once you all come to your senses that this rule above is actually rather stupid, you can add the methods you needed to add to the interfaces long ago to the interfaces and move on.

I work with MS software long enough that rules like the one above are great on paper but in practice are broken the second an important MS team needs it to be broken. If you are so 'open' and 'different' as you say are in the CoreFX marketing/pr hoopla, show it. All I see with respect to System.Data and CoreFX is 'what MS needs is done, what everybody else needs is on the backburner or ignored'.

Another thing I forgot to mention: Fowler mentioned yesterday on Twitter that you need everybody to port their stuff. I have to pay for porting my 500K LoC codebase to CoreFX myself; it takes time and effort, and will take away time from other features. Extra friction that's totally artificial (it's a new platform! How can there be restrictive rules?) really doesn't help at all: it adds extra maintenance costs, takes extra time to port and test the code, and places an extra burden on our users.

All that is out of your scope, and not your concern it seems. But you forget one thing: what _if_ we don't port our code, and more people with me do the same? I'm willing to invest time and thus my own money to port my large codebase to your new shiny framework, but sorry to say it, whenever I run into a problem I'm met with restrictions, odd rules and endless debates ending in silence. I.o.w.: I feel very much left alone while at the same time you seem to so desperately want us to like your new shiny framework.

Like I said a long time ago : Sell me this framework, this new CoreFX. Well, keeping friction and introducing a lot of moved and taken away cheese is not creating a large incentive to invest a large amount of time (and money) into this.

Just my 2cents.

@FransBouma Please let's try and keep this conversation professional, productive and focused on facts.

I'm not arguing for or against adding the interfaces. However, it is not compatible to add methods to interfaces. Let's walk through this:

1) Add IDbConnection.OpenAsync to .NET Core
2) Anyone who calls this method will now fail to run on .NET Framework (breaking a core principle/goal that I called out above). This also breaks the XAML designer and a few other VS features which rely on this very fact.
3) To bring .NET Framework up-to-date, we ship a new version of .NET Framework "4.7" with IDbConnection.OpenAsync
4) Every single type that implemented IDbConnection prior to adding this method now fails to load on .NET Framework "4.7"

This is why we cannot add methods to interfaces.
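
Spelled out with a hypothetical (non-System.Data) interface, keeping in mind that the C# of the time had no default interface implementations:

```csharp
using System.Threading.Tasks;

// "v2" of a hypothetical interface, after a member was added.
public interface IConnection
{
    void Open();
    Task OpenAsync(); // added in a later release
}

// Every implementer compiled against "v1" lacked OpenAsync. Loading such a
// pre-built binary against "v2" fails with a TypeLoadException, even if the
// application never calls the new member. To compile at all against "v2",
// the provider author is forced to add it:
public class LegacyConnection : IConnection
{
    public void Open() { /* open the underlying connection */ }
    public Task OpenAsync() => Task.CompletedTask; // forced addition
}
```

A new virtual method with a default body on an abstract base class is additive instead: existing subclasses keep loading and simply inherit the default.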

If I keep my frustration with how things go with respect to communicating issues with MS to myself, you all won't know about it and think everything is roses and rainbows. If that looks unprofessional, so be it, I'm beyond caring whether MS thinks I'm a professional or not.

That said: I'm not married to the interfaces, so if they're gone, the fact that from then on there are classes and no interfaces to work with in theory won't make me a sad panda: what should be done can be done in theory through the base classes as well, today, as today all major ADO.NET providers play nice and derive from the base classes (this hasn't been the case in the past IIRC with ODP.NET implementing an interface but not deriving from the base class). This is also the reason why I initially up above earlier in this thread didn't really think removing them was a big deal. Since then I had some time to think about it and I think it _is_ a big deal.

We don't live in a vacuum on Mars, and middleware/frameworks at the bottom of the stack have a problem now: users of the current .NET full versions of these frameworks want to keep using them on CoreFX, as they know these frameworks. Porting them over to CoreFX is however a big PITA, for a myriad of reasons, one of them being that often-used interfaces exposed in public APIs are not present on CoreFX (and the reason for this thread).

For that reason alone I'd like to see the interfaces back. For me personally not for technical reasons (e.g. async needs base classes, it's a mess already). I know they lack certain methods, but that's your problem, not mine. Removing them makes that my problem and (paraphrasing now) the MS response to that is: throw up your hands with "can't be done!". But I don't have that luxury. You created this mess, you solve it. You want me to port my code, to invest a lot of time and money (which I have to pay for myself) to support your new framework, why are you then making _your_ problem _my_ problem?

Looking at your 4-step scenario: adding methods to interfaces isn't a problem IF you see CoreFX as a separate framework. And isn't that the case anyway? It's the same as with Compact Framework all those years ago (which I did port my framework to, and I learned a couple of hard lessons then that tell me that porting to CoreFX won't be simple, fast or easy, and keeping two code bases won't be either): we start with 1 API, then someone forgot something or some team within MS needs something, and voilà, a breaking change only a handful of low-level stack devs will run into, and so on, and the two roads will split.

(example: Compact Framework forgot 'SerializableAttribute'. They added that with a dummy attribute doing nothing in a later version, but that broke code which anticipated on that not being present and which defined their own)

Splitting roads is understandable though: trying to keep things compatible is too restrictive. I predict here now that this rule will be broken in the future.

Seeing things as 'compatible' is important not only at the API signature level, but also on the API _behavior_ level. Trusting that those two will be completely the same (CoreFX and .NET Full) in API behavior is too risky: a framework developer will have to test the same functionality on CoreFX and on .NET full, there's no way testing on CoreFX alone will be enough to assume the code works 100% the same on .NET full in the future: because how can you guarantee that? A call stack 20 calls deep on CoreFX has touched so much other code than on .NET full, a small detail here and there and things change.

The point in all of this is: it's a separate framework: code compiled against CoreFX can be expected to be different from code compiled against .NET full.

There are a couple of situations:

1) a framework has a code base of which 100% compiles on CoreFX. This gives a dll which is runnable on .NET full
2) a framework has a code base of which 70% compiles on CoreFX and 100% on .NET full. This gives 2 dlls: one for CoreFX, and one for .NET full. It's silly to run the CoreFX version on .NET full, as one misses 30% of the functionality.

In case of 1) I understand your point. In case of 2) (which is the case for all current .NET full targeting frameworks, among them _all_ 3rd party ORMs) your point is really meaningless, as they'll have to work with 2 dlls anyway: effectively 2 codebases which have to be maintained separately, tested separately and migrated to their own new versions separately. Especially if CoreFX gets new features which aren't part of .NET full (which will be the case) yet. (btw: if you add DbDataReader.GetSchemaTable() to CoreFX which returns a different datastructure than a DataTable, because MS refuses to port that, code using DbDataReader.GetSchemaTable on CoreFX will break on .NET full as well. If you name it differently it will break as well as the method isn't there. I.o.w.: code will break if things which aren't in _both_ frameworks are used. That doesn't mean things thus shouldn't be present in CoreFX).

To have no interfaces on CoreFX makes the situation of the framework in situation 2) a persistent one: they can't move to become a framework which fits in 1) because e.g. their API exposes the interfaces.

Microsoft rewriting their own stuff so their frameworks become frameworks in situation 1) is cool, however we don't have a million$ budget, 15+ people on the ORM runtime and a big PR machine on our side who will smooth over the wrinkles of breaking every app out there. So we're either stuck in 2) or require a little help from MS to move to 1).

That's what's at stake here. You said on twitter "tell us what you need". We did. Repeatedly. Especially regarding System.Data there is no communication. Nothing. No future plans, no discussion what to do, just dead ends and sometimes if a MS person steps in it's one who has no real stake in the matter. I appreciate your time on this, the more background we get the better, but at the same time, it's like talking to a co-worker about this: it won't get solved because the person(s) in charge are absent and not participating in the discussion.

If that makes me sound frustrated and god forbid 'unprofessional', then so be it.

Thanks for listening. Btw I have no illusions wrt System.Data: it will be a trainwreck of an API to port code to, and as there's no communication between the people in charge and the developers who write key frameworks on top of their API, there's little to no hope things will change. Not your fault, @davkean, it's nothing personal.

I have to echo the frustrations above about the lack of communication. We need bulk inserts and schema information as well. There has been no advancement or communication in over a month (see dotnet/runtime#15269 and dotnet/runtime#14302) on this missing core (in both senses) functionality. Yet Microsoft is labelling the current code as "a candidate for release", which itself is a message of "it's good enough." It's not. Core things are missing that need to be added and, if you follow these threads, need to be in the first version for similar versioning reasons.

Look at the last update on dotnet/runtime#14302 ("Why is DataTable/View/Set Absent?"), it's from 22 days ago asking:

So System.Data.Common is feature complete now for V1 of CoreCLR?

Yes, frustration can come off as unprofessional. The tone and context of text sucks and always has, but that's what we're restricted to here. I think everyone is trying to be productive here, but we're getting quite a bit of stonewalling from the CoreFX side on actual progress in the System.Data area and that is, to be blunt, infuriating both as a library author and a user of these bits.

We need these core functional pieces, interfaces or not - I'm not hard set on interfaces and we've ported Dapper without them. But lack of DataTable, result schema info, bulk insert, and such are unacceptable in a "release candidate". Microsoft is the one increasing the frustration with labelling the current code as RC when it's almost universally agreed that it's not ready for release. Yes, it's just a label, but it's both an incorrect label and one that drastically increases the level of urgency because it's based on an arbitrary schedule (that should have changed to reflect reality). I don't think anyone in this thread is responsible for that schedule, but it's worth stating as a major factor in the frustration _level_.

Let's get back to the root problem. We need these pieces, and many of our millions of users do too. So let's fix it.

Let's not forget NHibernate with 1M+ downloads:

| IDbConnection | IDbCommand | IDataReader |
| --- | --- | --- |
| 59 | 181 | 132 |

The current perception is that ADO.NET/CoreCLR is being re-designed to provide first-class support for EF and SQL Server, with the rest of the ecosystem being disregarded - non-transparent breaking decisions like this one only reinforce that stereotype.

That perception is reinforced by things like this: https://github.com/dotnet/corefx/issues/4646

As far as I can tell, there is zero way of implementing that API in any useful way outside of the SqlClient assembly.

I'm currently ok with testing without interfaces. But honestly I don't get the reasoning with interface versioning and compatibility.

Isn't the idea of .NET Core that it's a new framework without the burden of compatibility, and that it's bundled with your application, so you don't have to deal with issues like that? The provider is already incompatible with the ones in .NET due to the lack of things like schemas and DataTables, so what would it break compatibility with? If the interface changes, just compile against the new version and bundle it with your app.

It just sounds like most of the excuses for the design are worries carried over from the old framework that don't apply to the new one. Anyway, let's see how it turns out in practice.

For those of us who intend to support multiple frameworks, and have historically targeted the interfaces... I just want to share a pile of ugly that Dapper uses; I'm not saying this is _good_, but it is enough to make it compile. Of course, it is duplicated in a huge pile of files... I am mainly sharing this to emphasize yet another of the impacts:

https://github.com/StackExchange/dapper-dot-net/blob/master/Dapper/SqlMapper.cs#L6-L16

We cannot add members to interfaces

Correct, and that's a _good_ feature of interfaces. The preference for abstract base classes is the safest way to help API entropy along, instead of fighting it.

While you don't _have_ to follow the principles of OOD, I'd suggest that you do, when creating OO APIs. In short, the Interface Segregation Principle (ISP) states that _no client should be forced to depend on methods it does not use_.

If you add new methods to an existing abstraction, you automatically violate the ISP.

You can decide that you 'don't have to adhere to SOLID' because you're Microsoft, and you're working with the BCL, so therefore 'normal rules don't apply' (not actual quotes; just paraphrasing the normal counter-arguments).

Having maintained a couple of open source projects for 6-7 years, in my experience it's better to keep interfaces small. If you need to add new capabilities to an abstraction, introduce a new interface.
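As a hedged illustration of that idea (all the names here are hypothetical, not a proposed API): rather than one wide reader abstraction, each capability could be its own small interface, so a client that only reads field values depends on nothing else.

``` c#
// Hypothetical capability-sized interfaces in the spirit of the ISP.
public interface IFieldReader
{
    int FieldCount { get; }
    object GetValue(int ordinal);
}

public interface IRowNavigator
{
    bool Read();
    bool NextResult();
}

// A full reader composes the small contracts; adding a new capability
// later means adding a new interface, not growing an existing one.
public interface IDataRowReader : IFieldReader, IRowNavigator { }
```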

If there were upvotes here, I'd have upvoted @ploeh's comment +1000

Insightful comment from @ploeh as usual.

@FransBouma, we will be adding the replacement functionality of DbDataReader.GetSchemaTable() in a way it will not break full framework.
@NickCraver, SqlBulkCopy is in our future plan and we are working on schema. We are slow in making progress on schema as we also need to make reasonable progress on getting our stack to work cross-platform.
@mythz, thanks for providing the examples, numbers, and assessment on customer impact. We will review them.

@YoungGah Please update the issues involved with information so these issues stay up to date, e.g. https://github.com/dotnet/corefx/issues/1039, as otherwise the sparse info is scattered all around. It's nice you will be adding GetSchemaTable (and DbConnection's equivalent, don't forget that one!), but it's so hard to get any info around what will happen and _when_. Is there any plan what will be added when? All we have to go on now is a hint that in 'the future' something might be added. That's not really great for planning a port of a code base, to be honest.

@FransBouma, yes, I will update the other threads as well. As for your request for more information regarding what will be available and when, I fully understand why you guys need it. I will publish a list that will indicate whether each feature/functionality will be available in v1, was removed purposefully, will be available post-v1, or whether the design of its future availability is pending. I will try to post it during the next 2 weeks. As for the get-schema functionality on DbDataReader and DbConnection, our plan is to make it available in the rc2 release. If the plan changes for some unforeseeable reason, we will update the community.

Whatever happens here, and for future reference @YoungGah: IDataReader has a dependency on DataTable, which has a dependency on DataSet (which we consider a separate layer - because it's policy-heavy, unlike these types, which are policy-free), so there's some design work here to break that if these interfaces were ever brought back.

I'd place another vote here for @ploeh's approach: have interfaces, but much, much more fine-grained than most of the interfaces currently in the BCL. This addresses both @davkean's comments on decoupling and the versioning concerns.

Why can't you just have a new interface inherit the old one? Obsolete the old one, and remove it in the future. At least then you can extend things without breaking existing uses.

Or multiple smaller interfaces. I just got to @ploeh's comment.
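The inheritance suggestion above could be sketched roughly like this (names and members are hypothetical, not a proposed API):

``` c#
using System;
using System.Threading.Tasks;

// The old contract stays intact for existing callers, but is flagged.
[Obsolete("Use IDbCommand2, which adds the async surface.")]
public interface IDbCommand
{
    int ExecuteNonQuery();
}

// The new contract inherits the old one, so the API can be extended
// without breaking anything compiled against IDbCommand.
public interface IDbCommand2 : IDbCommand
{
    Task<int> ExecuteNonQueryAsync();
}
```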

I don't understand this need to have perfect compatibility with the original .NET. There is nothing to break here, this is a new framework and the perfect opportunity to break the ties with legacy code and apply changes that are needed for a long time but would hurt in the original framework.

When I proposed interfaces back, I wasn't thinking about bringing the original 1.1 interfaces, but updated interfaces with the new design. There could be even be more of them, as @ploeh said.

Interfaces yes, legacy support if possible, but shouldn't be a priority at this point.

Reopening since there is a lot of interest in this topic.

There is nothing to break here, this is a new framework and the perfect opportunity to break the ties with legacy code and apply changes that are needed for a long time but would hurt in the original framework.

So the perfect opportunity to get rid of the original badly designed interfaces and standardize on base classes like ADO.NET is already doing? :trollface:

Seriously though, you have a choice between a clean API or backwards compatibility. Pick one. I don't see a way to have your cake and eat it too.

@JamesNK but that's exactly the point. Backwards compatibility is not required, period.

You joke, but the badly designed API with interfaces was bad because it was badly designed, not because it used interfaces. =) It's not like interfaces aren't used absolutely everywhere in .NET, or that this is something new. Just design the thing correctly this time and move on.

ADO.NET is one of the most stable pieces of code in all .NET. It had two or three serious changes in 15 years. It's the perfect API to stabilize to an interface and make it simpler for everybody.

As a note, this topic here is one of the most commented issues too, and had the same long discussion on the interfaces vs virtual methods, testability and stuff.

@nvivo I must admit, I'm confused. After we established that the base classes enabled testability, this thread morphed into bringing back the interfaces to enable porting .NET Framework code to .NET Core. How does redesigning the interfaces and introducing something new help that?

Why can't we have the original interfaces for backward compatibility and go forward with whatever you opt for (either abstract classes or small interfaces)? The original interfaces could sit on top of the new stack and provide the backward compatibility. This part could be optional as well.
That'd make porting easy and still allow the new way.

@davkean

I can't respond for everyone that commented here. I proposed using interfaces as the API of ADO.NET, updated to the new current API. I didn't ask to bring the original interfaces and all the problems it had. The purpose was to have a cleaner defined API and make it easier to mock it and test code that depends on it, mostly data abstraction libraries and not user code.

Interfaces are better at describing APIs, as @ploeh wisely said, and a hell of a lot easier to mock. The current design is terrible for mocking, and requires you to implement almost the entire provider as a library to do simple tests. If that's not important to everyone, I can understand. But I definitely don't agree it's testable enough today. Testing whether method A was called with parameter X by checking whether method B called C with parameters X, Y, Z is not ideal.
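To make the difference concrete: with an interface, any mocking library can stub a call directly; with the abstract class, only virtual/abstract members can be intercepted, so the public non-virtual methods have to be tested indirectly through their protected hooks. A rough sketch (this assumes Moq as the mocking library; the setup shapes below are illustrative):

``` c#
using System.Data;
using System.Data.Common;
using Moq;
using Moq.Protected;

// With an interface, stubbing a single call is trivial:
var command = new Mock<IDbCommand>();
command.Setup(c => c.ExecuteScalar()).Returns(42);

// With the abstract class, public non-virtual methods like ExecuteReader()
// can't be intercepted; you have to stub the protected template method
// they forward to (ExecuteDbDataReader) instead:
DbDataReader readerStub = new Mock<DbDataReader>().Object;
var dbCommand = new Mock<DbCommand>();
dbCommand.Protected()
         .Setup<DbDataReader>("ExecuteDbDataReader", ItExpr.IsAny<CommandBehavior>())
         .Returns(readerStub);
```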

Now, just to see how classes are already creating a bad design:

  • Why does DbCommand have a DesignTimeVisible property? Is design-time support a requirement for a command to be defined as a command?
  • Why is there an event to notify state changes, but none to notify other things like commands executed or transactions started? Is notification even a requirement for connections to exist, or something that makes it easier to build UIs?
  • Is a ConnectionStringBuilder a requirement for a provider to exist? Or more of a nice thing to make VS wizards work out of the box?
  • Why does DbDataReader define Get methods for some core types but not for others, like GetByteArray() and GetDateTimeOffset()? And is retrieving a TextReader even required if this can be done outside using strings or streams? If this were an interface, would methods like these be added to the API or created as extension methods or helpers in the concrete classes (like the GetSql* family in SqlDataReader)?

These are all rhetorical questions. I'm sure all of them have compelling answers and things have been considered. The point is that the current design is clearly not something that has been thought as an API definition, something that probably would receive more attention with interfaces.

Honestly, from the outside the discussion of the design sounds like it went as:

  • let's keep these events here because it's easier for wizards in visual studio to work out of the box
  • let's remove schema retrieval methods because we have EF and that's what everyone in the world should be using, period.
  • let's keep these convenience methods because it's supported since .NET 1.1 and we can't break compatibility EVER!
  • let's remove datatables and datasets and make everyone coming from .NET 1.1 change their entire codebase anyway!
  • let's build a new provider model focused on cloud computing, community, open source and the applications of tomorrow...
  • ... and let's build this model based on the needs of yesterday using SqlClient as the _sole_ test case!
  • let's build a brand new framework that will be bundled with each application, so nobody has to worry about updates breaking their application _ever again_!
  • ... then let's decide not to add interfaces because any changes might break their updates!

Yes, there is a little rant there, and this discussion is going nowhere =)... But I just wanted to get this off my chest. It's 2015, everything breaks all the time and we're used to it. There will be 20 updates to ASP.NET MVC in the next few years that will cause a lot more breaking changes than interface changes in ADO.NET.

I still love .NET and what you're doing with it in general; I'm sure it's a rush to get .NET Core v1 out in time and not everything will be perfect. I just hope the community can help steer this in other directions as time goes on, and that you're not afraid to break things as we move.

For ORM maintainers, why not do

``` c#
#if COREFX
namespace System.Data {
    public interface IDbConnection { ... }
}
#endif
```

and use the adapter pattern to wrap the new System.Data with your own implementations? In fact you could make an open source code package for it and share.

It's 2015, everything breaks all the time and we're used to it. There will be 20 updates to ASP.NET MVC in the next few years that will cause a lot more breaking changes than interface changes in ADO.NET.

I still love .NET and what you're doing with it in general; I'm sure it's a rush to get .NET Core v1 out in time and not everything will be perfect. I just hope the community can help steer this in other directions as time goes on, and that you're not afraid to break things as we move.
- nvivo

This is the problem; rather than a considered approach we're getting a rushed rewrite in order to meet some arbitrary deadline.
I'd sooner it was late, Q4/16 if need be, than have some shiny new broken crap. "It's 2015 and everything breaks" is a terrible justification.

@thefringeninja that either adds a completely unnecessary and confusing dependency that only works with half the systems (in the shared case), or leads to name collisions requiring extern alias to unpick (and a lot of confusion about why the method that takes a System.Data.IDbConnection won't accept the different-but-same System.Data.IDbConnection that you're offering it). Basically, that would make things 10 times worse.

Can you give a concrete example @mgravell ? I can see how this would break if you used Type.GetType("System.Data.IDbConnection, System.Data"), or maybe in PCL scenarios.

If ORM A defines System.Data.IDbConnection, and ORM B defines System.Data.IDbConnection, then there are now two completely different and incompatible interfaces that have the same name/namespace, conflict, and don't actually work with any of the DB providers. It solves nothing, basically. Worse: it leads to unusable APIs where someone expects to pass in a SqlConnection or NpgsqlConnection - and it doesn't work.

In fact you could make an open source code package for it and share.

If it isn't System.Data.Common, then that means DbConnection doesn't implement it, and : you might as well not bother.

You would never get consensus for every ORM or ADO.NET provider maintainer to take on an external 3rd party dependency (I can pretty much guarantee SQL Server won't), and 2 libraries redefining a core BCL interface can't be used together. It's even worse for high-level frameworks (like ServiceStack) that reference System.Data interfaces in core libs (which they'd now need to define themselves): they would no longer be able to use any ORM that didn't reference the same interfaces - which no one would or should.

The only way you can ensure every library references the same System.Data interfaces is if they were restored with the base classes implementing them - and I'm still not clear on what harm that would do.

@mgravell ah transitive dependency, hadn't considered that. :+1:

You know, I don't see why this is an issue unless you are tightly coupling your code to code you don't own. Protect your code from its dependencies! Wrap it up and abstract it away. There are many ways to do this and make your code testable. Many are mentioned above. You integration-test the bits you don't own. That is the way it works. You should not be mocking BCL objects! If you are, then your design is not good.

@nvivo I get that this is your original issue, but the direction of it has now turned into a thread about bringing back the v1 era interfaces for compat reasons. Let's keep it focused on that - if you would like to discuss making changes to the current surface area, please file new issues.

@mythz There were two issues we had with the interfaces: 1) they brought in a (heavy) dependency on DataSet, which does not belong with policy-free abstractions, and 2) they bring in a parallel set of abstractions to the base classes, but locked to the v1 surface area. We wanted to avoid that confusion.

I agree it is not viable for a 3rd party to provide these interfaces - they need to be implemented on the core abstractions to be useful.

I get that this is your original issue, but the direction of it has now turned into a thread about bringing back the v1 era interfaces for compat reasons. Let's keep it focused on that - if you would like to discuss making changes to the current surface area, please file new issues.

This makes absolutely no sense.

@nvivo it means you're not the one with the issue, despite having filed it - as evidenced by your having closed it. The issue is about restoring the System.Data interfaces to ease the burden on the entire ecosystem that's reliant on them. You seem to be ok with:

It's 2015, everything breaks all the time and we're used to it.

But this is not a satisfactory strategy for those of us having to support existing code bases and our Customer's code-bases, and should definitely not be the default strategy for custodians of BCL libraries affecting all of .NET.

But this is not a satisfactory strategy for those of us having to support existing code bases

@mythz That's out of context; it's not what I meant. Everybody here has to support existing code bases; I doubt there is any newcomer to .NET in this discussion.

The issue with what this conversation turned into is that it doesn't make much sense. .NET Core is a new framework, not an upgrade. A lot of the existing full .NET API is not there and won't be. Backward compatibility won't work like that anyway.

@nvivo This exact sentiment is why this issue doesn't apply to you. If you think backwards compatibility is not important, you've never tried supporting a meaningful code-base targeting multiple platforms - and you're not speaking on behalf of the CoreCLR team either. These libraries were not developed from scratch; if you read above you'll find that it's a primary goal for CoreCLR libraries to run on the full .NET Framework. CoreCLR is another platform target, and porting existing libraries to it is vital to its success - something the .NET team is actively encouraging, and which these missing interfaces are currently hindering.

With all this talk about interfaces not being version friendly it makes me think about how the Go programming language sidesteps this problem with implicit interfaces.

I was asked to expand on what I meant by _policy-free_ abstractions.

Basically, by policy-free I mean that the System.Data.Common abstractions contain almost zero business rules - all they do is provide a shape of APIs that a given provider must implement. This is in contrast to DataSet, which has lots of business rules and code. Policy-free types, due to their very construction, tend to version less frequently than types that contain policy, as there's less code, hence less bugs and design changes. We want abstractions and types that are _exchanged_[1] between 3rd party libraries to version infrequently[2] to reduce the number of issues you encounter when coordinating dependencies across a large package graph. You cannot embed or ilmerge exchange types as part your application, whereas, non-exchange types you can. Hence, why we want them split.

[1] By _exchanged_, I mean types that tend to appear on public APIs. For example, we consider Collection<T> an exchange type, but not List<T>.
[2] Using semantic versioning for these would cause breaks almost identical to the ones above with the removed interfaces, because you "fork" the ecosystem; libraries need to decide whether to target the library before or after the break, or fork themselves to handle the split.

@mythz I have to support ORMs/customers as part of a commercial app and existing code... I'm not saying I agree with either side, but the fact of the matter is that DNX is a completely new framework/runtime. If it doesn't have some interfaces, deal with it... with a compiler directive.

If it doesn't have some interfaces deal with it... with a compiler directive.

oh? how about a method that returns IDataReader, or a method that accepts IDbConnection? Or IDbTransaction? How are you going to #ifdef around that, if your _customers'_ code uses that API?

'Deal with it'... what kind of arrogance is that?

Simple: update the underlying package (your org's) to return your own type or the base type that's been supported since 2.0. If you are targeting older versions of .NET, you could use a directive to return the interface instead and mark it as deprecated.

Yes, it does suck, but I'm sure they (really smart individuals who think about this all the time) did it for a good reason at the time (back in .NET 2.0). Can a talk occur and maybe this gets changed? Sure. But the fact of the matter is that people upgrading to the new runtime will have to do some work. It's not clicking a button in a fancy UI and having the work done for them. At the same time, I agree with you that upgrading to the new framework should be easier than going and replacing a bunch of types.

@FransBouma he's probably someone who advocates strong naming too.

@FransBouma @phillip-haydon you can take your trolling other places; you can't expect to act like that and have people take you seriously. Take a look at my open source contributions / the projects I'm involved with if you have any doubts... I will have to deal with this... either way...

For the record I'm against strong naming.

deal with it...

And you say I'm trolling?

"@FransBouma he's probably someone who advocates strong naming too." is off topic, and not very helpful to this discussion. Yes, it feels like you're trolling.

One important fact to consider in this discussion is that the IDb* interfaces were effectively deprecated as an API for ADO.NET in .NET 2.0, a decade ago, when the base classes were introduced. They weren't marked as deprecated, but anything built since then depends on the base classes instead. App.config providers and connection-string support come to mind.

If you have code depending on those interfaces, you're coding against a very obsolete API with no support to things like async methods, which means you'll need to update it anyway if you want people to keep using it.

Simple, update the underlying package (your org) to return your own type or the base type that they've supported since 2.0. If you are targeting older versions of .net you could use a directive to return the interface instead and mark it as deprecated.

It's a public API, used by many thousands of applications; the API has been public and in use since .NET 1.0. It's not 'simple', on the contrary: we can't just change the API because Microsoft thinks that's what they have to do to make our lives better. It would be a big burden for our users and thus for us.

Yes it does suck, but I'm sure they (really smart individuals who think about this all the time) did it for a good reason at the time (back in .net 2.0). Can a talk occur and maybe this gets changed sure.. But the fact of the matter is people upgrading to the new runtime will have to do some work. It's not click a button in a fancy ui and have there work cut out for them.. But at the same time, I agree with you that upgrading to the new framework should be easier than going and replacing a bunch of types.

that's the thing: it's not seen as a 'new runtime'. If it were, like I already discussed, it wouldn't be a problem. However Microsoft has the idea that CoreFX-targeting dlls have to work on .NET full as well, so there's no new framework. Maintainers of libraries targeting .NET full as well, with functionality not in CoreFX (so a lot of the libraries out there today), are in for a lot of 'fun', as they will have to maintain 2 versions. Not 2 or 3 #ifdefs and the problem is solved, but many, many of them. Perhaps it's different for you, as you can perhaps bill your client for the hours you have to spend on changing the APIs. It's different when you create a general-purpose system that's used by many.

If compact framework support is any guideline, supporting an ORM on full .net and CoreCLR will be a terrible time sink, lots of frustration and actually not really much gained: you won't get new features, you only work around the _absence_ of them.

(and before someone starts with: "but it runs on linux, you'll gain that": our stuff runs on Mono since many years. So no, it's not really a new feature to gain, it was already there).

SellMeThisFramework. Oh why am I even bothering.

but anything built since that time depends on the base classes instead

_cough_ Linq-to-SQL DataContext _cough_

As, indeed, do many non-MS ORMs, hence the problem. For Dapper we just bit the bullet and migrated to DbConnection. If we had the time over again, I would strongly suggest MS use [Obsolete] when they obsolete something. But: we can't change the past.

It _is_ a very painful problem to solve, though, especially since most library authors need to continue with both APIs (compiling one way for net40 etc, and another way for DNX). I posted previously the godawful mess that Dapper uses to do this: it is not pretty, and it is not as simple as #if.


I fail to see why async methods couldn't have been added while keeping interfaces. You could have created _new_ interfaces for the async methods: IAsyncDbCommand, IAsyncDataReader, etc. Then you could make the base classes implement both types of interfaces.

ADO.NET users use either the async versions or the synchronous versions, not both, so that would have worked just fine.

For library developers it doesn't really matter if functionality grows, as long as the interfaces remain the same. Isn't that the purpose? Introduce new interfaces for new functionality. Working with base classes is just a pain.
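A rough sketch of that suggestion, under the assumption that the sync and async surfaces are kept as separate contracts (the interface shapes below are hypothetical, not a proposed API):

``` c#
using System.Threading;
using System.Threading.Tasks;

// The original synchronous contract never has to change...
public interface IDbCommand
{
    int ExecuteNonQuery();
    object ExecuteScalar();
}

// ...because the async capabilities live in their own contract,
// added alongside it rather than into it.
public interface IAsyncDbCommand
{
    Task<int> ExecuteNonQueryAsync(CancellationToken cancellationToken);
    Task<object> ExecuteScalarAsync(CancellationToken cancellationToken);
}

// The abstract base class could then implement both, and each caller
// depends only on the surface it actually uses:
// public abstract class DbCommand : IDbCommand, IAsyncDbCommand { ... }
```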

Can I just summarise the thread here?

Multiple independent, acknowledged community experts on .NET data tooling, including multiple ORM authors and maintainers, are telling you - quite clearly - that this represents a significant set of problems. I don't think any of us are ignorant of the subtleties, or naïve of programming principles, and most if not all of us know all of the back story fine, because we were there at the time.

The official response seems to be "it seems fine to us, and EF is happy". Yes, we know that, because that's how the decision was made in the first place.

Well, we've all expressed our opinions, even if it wasn't fruitful.

Dudes.. update your code and bump your major version. Done.

Yes, but there is no reason you couldn't polyfill that interface with a compiler directive when targeting Core. I've done that with some of our PCL packages.

I do think Microsoft needs to reiterate that Core isn't .NET full, but that still doesn't help... I think Microsoft needs to clean up some interfaces, to be honest. There was a blog post going around recently showing how there are very inconsistent interfaces and you never know which one to pick... I think that defining a second async interface sucks. Would be nice if everything was async...

Would be nice if the full framework were gone through to make sure things that need to be marked as deprecated are... and released as a 4.6.2.

@mgravell +100. Well said, 100% agreed.

How much is really affected? We're talking about CoreCLR here, right? .NET desktop will be alive for many years to come, until CoreCLR can catch up. For those complaining: exactly what is your coverage loss here? Many people are basically saying it is the end of the world.

@leppie Indeed, it will be around for many years. And we'll have to maintain these hacks and workarounds in our code for years to come as well. The point of contention here is removal of the common bridge between the two. That workload has been shifted to all of the library developers instead of in the BCL. I understand both sides of the interface pros and cons, but I don't understand the "it's minor, move on" attitude from some here.

Let's be blunt here: If library consumers all have to do the same thing, it should be in the BCL. The debate is "what form does that take?"

On the plus side to removing the interfaces, there's an _additional_ versioning mechanism with the new packaging model now in play: which classes are available in X, Y and Z is now much better supported by tooling. E.g. dotnet5.2 vs 5.4 currently. But then there are drawbacks there as well. For example SqlClient still isn't to the point of implementing the interfaces as it stands today (see dotnet/runtime#14302 and dotnet/runtime#15269), and given what @YoungGah said (unless I'm misreading) we're waiting on 5.4 or 5.5 for that same level of support for schema, bulk inserts, etc.

So what happens with 5.6 (1.5)? If more members are added to the abstract classes, is each data provider expected to keep pace with a moving target that can change with every release? Does every consumer need to make a version determination on the features available in each? Do we need to compile a version of the assembly for every version of the platform moving forward to match the abstract base of the class being passed in? How all of this works going forward with additions isn't 100% clear. An interface that isn't changing is far clearer. On top of this is documentation: which features, methods, etc. are available in which versions _and platforms_ is going to be a huge pain point for all library authors moving forward. That concern is only semi-related here, but it's in play.

As for now, I'm anxiously awaiting the update in the next 2 weeks. My fear is it will effectively be, as the messaging the entire time has been: "We're doing X because it works for SQL and Entity Framework, and that's good enough" - without using any of those words. The frustration on the library author side, to me, has been a lack of progress (in both code and discussion) on these fronts for months now.

100% agree.

When I've designed v2 (and up) of SqlFu (my Micro ORM), I had to decide where to attach the extension methods: to DbConnection/DbCommand or to the interfaces. I found this and I've decided to go for the abstract classes.

Therefore, I'm not THAT affected, although I am affected by IDataReader removal, because MS in their wisdom decided not to make it very clear which interfaces should be treated as obsolete and which shouldn't. But in my case, it's not hard to replace the interface.

However, I see the value of having dedicated interfaces and I don't think it's THAT hard for MS to keep/add them back. If they decide the old interfaces are not that well designed, fine! Design the new ones to be more specific. At least in the future we won't have to deal with the same problem.

(This is Thanksgiving week in the US, so responses by Microsofties will be pretty limited until they get back in the office next week)

Just want to reiterate - so that we don't lose any good suggestions/bugs in this thread: if you have issues with the current base classes/surface area, and/or the way they version going forward, please file a new issue. Let's keep this discussion solely about the v1 interfaces.

@NickCraver Similar to version-to-version compatibility of .NET Framework, there will be no breaking changes between versions of the .NET Core surface area. For example, the addition of abstract members to the base classes would be an example of a change that _will not_ be made.

@davkean how confident are you that won't happen? Given that we're seeing an incomplete surface area and no guarantees that'll change, it's hard to believe the missing pieces won't show up later if at all. However, I'd think that no breaking changes is a _more_ important thing, which I'm sure most library authors here would assume as well. That means it's even more critical these items are taken care of well before RTM hits.

For the record, there are separate issues filed on the surface area; see dotnet/runtime#14302 and dotnet/runtime#15269, with the latest updates from Microsoft on September 25th and October 2nd respectively - despite asking for updates and activity several times after that. That's 2 months and 2 releases that have come and gone, with silence. This is despite dotnet/runtime#14302 being the most active issue in this repo (and this one has just become the 2nd). Can you understand our frustration?

I'm absolutely confident we won't make breaking changes, Data Common team is looking at making them without introducing abstract members.

@NickCraver Sorry we're doing badly here - these issues have not been forgotten; @YoungGah provided an update above about them, and I'll make sure she updates the issues with the progress. Lots of MS is still getting used to this _working in the open_ thing; it will get better over time - thanks for calling us out on it.

@niemyjski

dnx is completely new framework/runtime

If you think the dnx runtime and corefx libraries manifested out of thin air, you're seriously underestimating the time it would take to develop them from scratch. The fact that CoreFx libraries run on the full .NET Framework should give you a clue that, no, it's not completely new.

If it doesn't have some interfaces deal with it... with a compiler directive.

Yes, but there is no reason you couldn't polyfill that interface with a compiler directive when targeting core.

If you've bothered reading the comments before flying into this thread you'll know that a) there are reasons and b) this is a broken unworkable strategy for core BCL interfaces.

@nvivo

One important fact to consider in this discussion is that the IDb* interfaces were deprecated as an API for ADO.NET in .NET 2.0 a decade ago when the base classes were introduced.

If that were the case then you wouldn't have opened an issue telling them to revert to using interfaces you knowingly knew were deprecated by base classes a decade ago. The way you communicate an API is deprecated is to use the [Obsolete] attribute which is its sole purpose for existing. If it's not widely communicated, it's not deprecated regardless of what your thoughts are now. The fact that most non-MS .NET ORM's are reliant on them should give an indication that its deprecation was poorly communicated, if at all.

If you have code depending on those interfaces, you're coding against a very obsolete API with no support to things like async methods, which means you'll need to update it anyway if you want people to keep using it.

A false strawman - one does not imply the other. We've already added support for async API's, and no, adding them didn't break existing Customer code-bases nor did any of the existing API's need to change.

Okay, this whole making a clean start thing is great, but can I ask one question: What compromises have been made to support your own frameworks? What horrors of the past have been migrated because they're needed to get, say, Entity Framework running?

It would be a shame to make the MicroORMs disappear, they make .Net code somewhat performant (EF's an unusable beast for applications where 500ms to load a few rows is not acceptable).

As for interfaces vs base classes: base classes are great as long as everything that can ever be re-used is virtual. For example one of the most irritating design decisions in WCF is the copious use of sealed classes which contain a lot of functionality. Say you have to make a tiny tweak to the way say XML messages are handled (because: interop). Instead of inheriting and overriding one small function, you have to re-implement. In the WCF example the S in SOLID was skipped so you're usually left implementing a large interface with none of the tests that are required to ensure it is of production quality.

So: base classes which we can adapt are a good thing.

I'm absolutely confident we won't make breaking changes, Data Common team is looking at making them without introducing abstract members.

@davkean That's impossible to guarantee, and you know that. ADO.NET is a subsystem to communicate with 3rd party software with a wide variety of features, which are exposed through a common API with some extension points. Things change, even in database-land, and these changes ripple through to the API used to communicate and consume these external database services. Besides: changes in behavior _is_ also a breaking change. And we've seen those too in ADO.NET (e.g. about error handling) in the past years.

The whole ADO.NET API is full of the side effects of those changes, often driven by SQL Server; it's hardly been the case that things were designed in general and then moved to SqlClient, but rather the other way around (there are, e.g., not many, if any, features in the base classes which are ignored by SqlClient). Added to that, things needed by 3rd parties never made it into the API.

So in short, it's an API that at the start, with .NET 1.0, had a general design (which proved to be seriously flawed in many areas) and which has since been patched up with features left and right to cope with the changes in the landscape. _Most_ of those are still in the API (if not all). And now Microsoft will remove one piece: the interfaces.

There's absolutely _nothing_ gained by removing a random piece of the API: no feature is added through this (you could spend the time on that instead of pushing back here for instance), but code that leverages on that piece of API _will_ not work. If the point behind all this cheese moving is 'to start over' then by all means, do, but do it by redesigning the API to make it a true general purpose API, get rid of _all_ the cruft that's been piled up over the years.

But that's not done. No-one has to wonder why, we all know why. We also know that if a team within MS would have been hurt badly by the removal of the interfaces, they would never have been removed in the first place.

If Microsoft can add new functionality through base classes so 3rd party providers automatically implement 'a form of' the feature (like with Async, where the async method falls back to the sync method if the provider doesn't implement it), great: that means us 3rd party ORM developers (and face it: many many developers access _your_ API through the code of 3rd party (micro)ORMs) can simply target one class and that's it.

But it's not guaranteed you'll do that. E.g. no-one within Microsoft has ever bothered with specific features for Oracle or PostgreSql to be added to ADO.NET. E.g. usage of UDTs, Document trees, multiple resultsets through cursors, every ADO.NET provider has to find their own way of dealing with these features. What if (when?) SQL Server gets a document tree feature e.g. with JSON docs, will ADO.NET be updated then with a new API for that? Like you have done in the past with error reporting from ADO.NET providers?

You can't make guarantees like that, and it's unwise to state here that MS won't break anything in the future. They always have and sometimes they even had to: after all every API has flaws, and ADO.NET is full of them; one way or the other some day someone will 'fix' them.

So, to recap: the interfaces are part of ADO.NET, just as the botched-up parts elsewhere in the API are part of ADO.NET. Those aren't removed to fix the API, and neither is the API refactored to make it a more general-purpose API: it's been left as-is, with some elements removed, like the DataSet/DataTable-dependent elements, because these aren't ported (and there are other issues debating that, with similar progress), except... the interfaces are removed.

From that point of view alone it already doesn't make any sense.

@mythz

If that were the case then you wouldn't have opened an issue telling them to revert to using interfaces you knowingly knew were deprecated

You can't possibly read the OP and understand that. These discussions are getting too religious and you're just assuming things that nobody said.

I opened this issue because I believe interfaces are better at describing an api and help with testing. If it's done, I don't think they should be compatible with a 15 year old api that had its issues. Backwards compatibility was never a point in the issue until you guys moved the discussion to that.

It's not that I believe things should break just for the sake of it. But interface versioning is a problem of the past. If corefx changes something between major versions, it's expected that major versions have breaking changes. If they break an interface between minor versions, that's just sloppiness.

We've already added support for async API's

You can't add an async api on top of a sync api. If you did that using IDbConnection or IDbCommand, you did it wrong. If you're not using these interfaces, then you don't actually have any point defending any backwards compatibility with them.

We've already added support for async API's

You can't add an async api on top of a sync api. If you did that using IDbConnection or IDbCommand, you did it wrong. If you're not using these interfaces, then you don't actually have any point defending any backwards compatibility with them.

In ADO.NET that's what they did: the Async methods by default fall back to the sync variants. This way all ADO.NET providers who don't support Async (read: all of them except SqlServer at this moment) don't have to implement things they don't support: 3rd party code in ORMs offering an Async api can program against the Async methods in ADO.NET and if the ado.net provider doesn't support async, no-one will actually know. Well... you will know because it's slower, but that aside.
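
A minimal sketch of the fallback mechanism described above (an illustration of the pattern, not the actual System.Data.Common source; names abridged): the base class ships the async method as virtual with a default body that just runs the sync path, so providers that never override it still present an async-shaped API.

``` C#
public abstract class DbCommandSketch
{
    public abstract int ExecuteNonQuery();

    // Default "async" implementation: providers that don't override this get
    // an async-looking method that executes synchronously on the calling thread.
    public virtual Task<int> ExecuteNonQueryAsync(CancellationToken cancellationToken)
    {
        if (cancellationToken.IsCancellationRequested)
            return Task.FromCanceled<int>(cancellationToken);
        try
        {
            return Task.FromResult(ExecuteNonQuery()); // sync work, async shape
        }
        catch (Exception e)
        {
            return Task.FromException<int>(e);
        }
    }
}
```

A provider like SqlClient overrides this with genuinely non-blocking I/O; one that doesn't (as described above for ODP.NET) silently inherits the synchronous behavior.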

Now it's also a good illustration of the absence of any 'design' or general architecture within ADO.NET: there's _no_ way to create a transactional savepoint in the general API. Though almost _all_ databases support that with a 'Save(string)' method on their derived class from DbTransaction. All except OleDbTransaction (as MS Access doesn't support it, at least that's my suspicion).

It's not easy, but no-one said it to be easy. This problem isn't new, OleDB and ODBC have dealt with it for many many years, JDBC has found a way to solve it, Microsoft doesn't have to re-invent the wheel to overcome things like this. It's also not unique to the DB realm: e.g. every videocard out there supports a different subset of features through its API, exposed to the developer through Direct3D/X. It's actually interesting how designs go in these other worlds: the API is designed, and the parties who need to support it (JDBC drivers, OleDB driver writers etc.) have to implement these. Your driver won't support X? Your driver isn't compliant with X. "Oracle doesn't support ADO.NET v10". No-one within Oracle wants to read that. Instead SqlClient is the lead, and what falls off the wagon is added to ADO.NET and that's it.

In ADO.NET that's what they did: the Async methods by default fall back to the sync variants.

No, it isn't. The API expose async methods that fallback to sync methods by default, but providers override with real async operations. What @mythz is stating is that he is using IDbCommand and IDbConnection and doing that.

This is not possible, period. If you do it, you either are not doing it right or you're not using the interface. You can't invent async if the underlying api is not async.

No, it isn't. The API expose async methods that fallback to sync methods by default, but providers override with real async operations. What @mythz is stating is that he is using IDbCommand and IDbConnection and doing that.

No official provider does that except SqlClient; all others, e.g. ODP.NET, don't implement any form of async code, so calling code falls back to the sync variants (the async methods in DbDataReader/DbCommand etc., which actually execute sync code). So user code calls an async variant that is doing sync operations under the hood, which results in no async being performed in practice (as all code is simply sync in practice). Perhaps Devart's providers implement an async API in their own implementation; not sure.

Anyway, it's not about doing it right, it's about versioning of APIs.

@nvivo

I opened this issue because I believe interfaces are better at describing an api and help with testing. If it's done, I don't think they should be compatible with a 15 year old api that had its issues. Backwards compatibility was never a point in the issue until you guys moved the discussion to that.

Ok, so you knew the core ADO.NET interfaces were deprecated 10 years ago, with everything moved to base classes, yet you thought they should just abandon that now and move back to interfaces, coincidentally using the same names as the existing interfaces - but the existing interfaces should no longer exist, because backwards compatibility isn't required, right? Sure, sounds legit.

If you want to move a platform forward, you evolve the API's over time and support them in parallel, giving everyone the ability to also support parallel API's and allow them to plan their way and their Customers off them. Ripping them out without warning unnecessarily breaks the ecosystem relying on them and pushes the complexity down to every downstream dependency.

You can't add an async api on top of a sync api. If you did that using IDbConnection or IDbCommand, you did it wrong. If you're not using these interfaces, then you don't actually have any point defending any backwards compatibility with them.

I wish you'd stop polluting this thread with comments on things you clearly have no knowledge about. Read the source code if you want to know how the async APIs are implemented - and stop blindly spreading falsehoods. It's impossible for a 3rd party library to extend the System.Data interfaces. We provide an implementation-agnostic API which, in order to support every major RDBMS, exposes the minimum dependency that every ADO.NET provider implements in its external-facing API - namely the core System.Data interfaces. Async APIs are extension methods off IDbConnection which behind the scenes leverage the async APIs on the concrete ADO.NET providers that support them. Internally there are already concrete dependencies on each supported ADO.NET provider, irrespective of async support. Your suggestion that we "don't actually have any point defending any backwards compatibility with them" is inexperienced and completely groundless.
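
For illustration, the extension-method pattern being described looks roughly like this (a hedged sketch with hypothetical names, not the actual ServiceStack API): the public surface targets the System.Data interfaces, and the async path downcasts to the concrete base class internally.

``` C#
public static class DbCommandExtensionsSketch
{
    // The public API depends only on the interface...
    public static Task<int> ExecNonQueryAsync(this IDbCommand cmd)
    {
        // ...but behind the scenes we leverage the concrete provider's async
        // support where it exists, falling back to sync otherwise.
        if (cmd is DbCommand dbCmd)
            return dbCmd.ExecuteNonQueryAsync();

        return Task.FromResult(cmd.ExecuteNonQuery());
    }
}
```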

Let me ask this to the Microsoft side of the fence (cc @davkean @YoungGah): Let's say it's a perfect world and nothing ever came up. When _do_ you want to break things? Major versions like 6.0? Some other time? The argument against interfaces is that they don't version. Well yes, that's valid - but _if we're not changing the abstract classes either_, it's also a moot point. So...can we get some clarity there?

Follow-ups:
If the answer is yes (at some post-RTM point there will be changes), then what kind of breaks would we see? Additions, new methods? If I inherit the base class for my provider then what happens when you add a conflicting method people are using, etc.?

If the answer is no (never): why not just add the interfaces back?

This thread is a bit hung on discussing _right now_ - which is mostly a good thing because this stuff needs fixing ASAP. Every library author here knows how hard getting anything after a release done is, and it's why we push so hard. Unfortunately, lack of a clear plan for _future_ additions and changes, if any, makes for more under-informed argument.

What are the plans for the future?

We shouldn't be forcing implementation via abstract classes in this case. IMO

Microsoft won't make changes that are breaking for .NET 4.5. It's part of Windows. Compatibility is king.

Microsoft could make changes that are breaking in .NET Core that don't impact on 4.5. I doubt that they will post 1.0 RTM on something low level like ADO.NET, but the bar is lower. It's not part of Windows and .NET Core versions can be deployed side by side.

Abstract classes can be changed - that's not breaking. Just add a method that is virtual with a default implementation. It's already been done with the ADO.NET async methods. With the introduction of .NET Core I believe changes in shared classes would need to be done in concert with .NET 4.5 releases to keep compatibility. Someone correct me if that is wrong and only applies to interfaces :grin:
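
To make the "not breaking" claim concrete, here is the pattern as a sketch (the V1/V2 class names are purely illustrative). A subclass compiled against v1 keeps loading and running against v2, because the new member is virtual with a default body rather than abstract:

``` C#
// v1 of a base class:
public abstract class ProviderBaseV1
{
    public abstract void Open();
}

// v2 adds a member. Existing subclasses neither know nor care: they are not
// forced to implement anything; they simply inherit the default behavior.
public abstract class ProviderBaseV2
{
    public abstract void Open();

    public virtual Task OpenAsync()
    {
        Open();                       // default falls back to the sync path
        return Task.CompletedTask;
    }
}
```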

@FransBouma

No official provider does that except SqlClient, all other, e.g. ODP.NET, don't implement any form of async code and thus calling code falls back to the sync variants

You're right, but that's not an issue with the API; it's more laziness or lack of understanding by the implementers. The MySql connector, for instance, re-implemented all their async methods by creating a TaskCompletionSource and completing it with the sync methods, which is ridiculous. They could just delete half of their code base and keep the same behavior.

Not saying that interfaces would solve that, but not having a default behavior for async would at least make some of them think this through. The fact that 90% of very technical people don't understand async operations doesn't help either.

Abstract classes can be changed - that's not breaking. Just add a method that is virtual with a default implementation. It's already been done with the ADO.NET async methods.

This IS breaking. It's breaking for all libraries which subclassed this implementation without knowing it was added; people then consume it thinking "oh, this implementation is now supported with PostgreSQL" and BAM, ERROR, wtf happened...

Forced implementation for a db abstraction is wrong.

Does not matter if its interfaces or a base class. There will be breaking changes. But forced predefined implementation is wrong.

Polymorphism doesn't work that way. You can't override a method without knowing it. If your reference is a DbConnection and you call QueryAsync, it will only call that method or whatever it has been overridden as. A method called QueryAsync that happened to already exist in a subclass won't be called.

You're confusing overriding a method versus hiding it with the same name.
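
A short sketch of that distinction (hiding versus overriding), with hypothetical class names:

``` C#
abstract class BaseCommand
{
    public virtual string Execute() => "base";
}

class OverridingCommand : BaseCommand
{
    public override string Execute() => "override"; // participates in virtual dispatch
}

class HidingCommand : BaseCommand
{
    public new string Execute() => "hidden"; // hides the base method; without 'new'
                                             // the compiler warns (CS0108)
}

// Through a base-class reference:
//   BaseCommand a = new OverridingCommand();  a.Execute()  // "override"
//   BaseCommand b = new HidingCommand();      b.Execute()  // "base" - the hiding
//   // method is only seen through a HidingCommand-typed reference
```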

@JamesNK

If the methods are defined as abstract, no implementation exists in the base class. This breaks the contract for 3rd parties, as it requires them to add an implementation in the subclass.

If you make the method virtual so it can be overridden, then implementation exists in the base class that may make no sense for the subclass. This is still breaking, because implementation now exists that was not written by the library author. Sure, your app compiles and everything is hunky dory, but someone calls that method and it's not valid for the subclass. That's wrong. That is forced implementation that does not belong to the subclass.

So: abstract classes, where implementation can exist that does not belong to the subclass; or interfaces, where no default implementation exists for 3rd parties.

@phillip-haydon that's why it's implemented as a virtual method, not an abstract method.

You can add stuff; it will only break subclasses that already have a member with the same signature (name/args). If the args are different, it could introduce subtle bugs if developers mistake the overloads.

That is forced implementation that does not belong to the sub class.

Then don't put it there.

@jamesnk

Don't put it there. That's why we are arguing the removal of interfaces.

Making it virtual does not solve the problem. There should be no pre defined implementation. End of story

@JamesNK In this case we didn't put it there, _Microsoft_ put it there by including it in the abstract class. Adding methods that are assumed to work across _all_ providers that ever inherit isn't something I see going smoothly or efficiently, even if it's not technically breaking (I'll concede "should use new" compile warnings _technically_ aren't breaking). There simply won't be a shared or immediately shared implementation in _most_ cases. So what's the alternative? throw new NotImplementedException() inside that virtual? That's hardly an argument for it existing in the first place; it's rife with more (runtime) problems.

Let's look at today: I'd much rather see an IDbAsyncConnection added when a provider supports it rather than a bunch of methods that are synchronous under the covers leading to confusion and inefficiency, which is what we have today on these abstracts.

I'd much rather see an IDbAsyncConnection added when a provider supports it rather than a bunch of methods that are synchronous under the covers leading to confusion and inefficiency

@NickCraver +1000 to that. Like this bug here where the oracle team just doesn't understand what async means.

You could do that with interfaces. The problem with them is you then can't accept arguments that demand multiple interfaces, e.g. I need a type that is both IDbAsyncConnection and IDbConnection. You lose strong typing and you have to start querying for interfaces COM style which I don't think is very user friendly. It's a tool in API design that has its place but I don't know if I would go to it by default.

If the default implementation was throwing NotImplementedException then bolting it onto the base class is the wrong thing to do. Like I said, don't put it there then. If you see someone do it then raise an issue.

Either way, whether it is interfaces or abstract base classes, my experience is adding new features onto a library that wasn't originally designed for them without breaking the world is very difficult.

@JamesNK presumably IDbAsyncConnection would inherit IDbConnection here, but that need not necessarily be the case - they could share common members or inherit from a common base. For example in Dapper we would probably implement as follows:

``` C#
IEnumerable<T> Query<T>(this IDbConnection cnn, CommandDefinition cmd)
```

``` C#
Task<IEnumerable<T>> QueryAsync<T>(this IDbAsyncConnection cnn, CommandDefinition cmd)
```

I imagine most libraries with sync/async methods would have similar uses and implementations.

_Edit:_ after typing that I realize just how much better Async on the end of the name would be for all of these...

ADO.NET had the following major changes over the years:

  1. On 2003 (1.1) they made a breaking change from 1.0 and redesigned it
  2. On 2005 (2.0) they moved to the provider model with the base classes that exists today
  3. On 2012 (4.5) they added async support, which didn't actually change anything other than adding new methods that did the same things asynchronously.

Moving back to the way the 2003 api was defined is a change that will either break compatibility (which people don't want) or remove features added in the last decade. But adding a new interface to .NET Core with the current design is _source compatible_ with any .NET version. Recompiling is all you need to keep most code written in the last 15 years working. And you will need to recompile anyway to target corefx.

This API has been stable for a long time. It could be redesigned as interfaces if people wanted. As usual, there are no technical issues here, it boils down to scars, preferences and ego.

Why not bring the interfaces back and mark them obsolete?

Although this idea was dropped, I am wondering if assembly-neutral interfaces could have helped in this kind of situation; see http://davidfowl.com/assembly-neutral-interfaces/ and then their implementation:
http://davidfowl.com/assembly-neutral-interfaces-implementation/

I think assembly neutral interfaces are a red herring here; if anything is to happen, this is totally a "common" thing, since "common" exists. Plus it is moot since the feature evaporated.

What I observe:

  • The people behind the idea of CoreCLR (the shiny new framework) have a completely different mindset from the people behind its implementation (backward compatibility with a 15-year-old codebase is king).

What I think:

  • By removing interfaces you're making things worse.
  • Nothing is obsolete until it is decorated with [Obsolete], and it doesn't matter who thinks what at any point in time. That's the contract, and if you simply aren't satisfying it then you aren't.

What I want:

  • Use interface(s) as base surface of the API rather than base class(es).

What I feel:

  • We're in late 2015 and it's frustrating and sad to see people arguing around this.

interfaces can't be versioned.

Surely interfaces can easily be versioned with interface inheritance? And if it's doing something totally different, well, it's a different interface.

``` C#
interface IOldInterface {}
interface INewInterface : IOldInterface {}
interface IDifferentInterface {}

class SomeClass : IOldInterface, INewInterface, IDifferentInterface
{
}
```

Just don't call it IInterfaceV2, IInterfaceV3 it needs to explain what it adds.

To put it in context: mark the old interfaces [Obsolete], then use some new interfaces and call them what they really are, rather than letting the non-async methods seem normal in this day and age.

``` C#
public interface IDbUtilityProviderFactory
{
    IDbConnectionStringBuilder CreateConnectionStringBuilder();
    IDbParameter CreateParameter();
}

public interface IDbBlockingProviderFactory : IDbUtilityProviderFactory
{
    IDbBlockingCommand CreateBlockingCommand();
    IDbBlockingConnection CreateBlockingConnection();
}

public interface IDbAsyncProviderFactory : IDbUtilityProviderFactory
{
    IDbCommandAsync CreateAsyncCommand();
    IDbConnectionAsync CreateAsyncConnection();
}
```

@abatishchev

The people behind the idea of CoreClr (new shiny framework) have completely different mindset from the people behind its implementation (backward compatibility with 15 years old codebase is king).

Thanks for making this clear, looks like it needs pointing out. Generally I believe MS teams care deeply about backward compatibility which is vital for evolving both languages and their platforms. It's just that decisions around System.Data are not being made with consideration of the wider ecosystem - which these core abstractions should be serving equally.

  • By removing interfaces you're making things worse.
  • Nothing is obsolete until is decorated with [Obsolete]. And it doesn't matter who thinks what at any point of time. This the contract and if you simply aren't satisfying it then you aren't.

Precisely.

Regarding interface versioning: correct me if I'm wrong, but I thought the point of CoreCLR was more fine-grained, independent versioning? Breaking change? Boom! New major version released.
It looks like after 15 years of design experience you're going to repeat the same mistakes, but now with independently versioned and released NuGet packages. The arguments above are identical to the good old monolithic framework, despite it not being monolithic anymore.
If backward compatibility is a must, then you just cannot remove those interfaces. If it's not, then let's stop talking about it and design the API from scratch, this time listening to major ORM authors and other experienced ADO.NET developers.
Thank you.

@abatishchev very good points. I've wondered that myself as well: what really is the point of the backwards-compatibility argument? If a new feature is added to CoreCLR, everything using it won't run on full .NET, so to be safe one could only use the common denominator (and that never worked well).

Sorry for the long silence on my part.

Porting to .NET Core

Firstly, let's talk about the elephant in the room, which is that .NET Core doesn't nearly have as many APIs available as many folks -- including us -- would hope for.

I'm working with my team to put together a set of docs on how we're going to approach the area of porting existing assets to .NET Core.

We're planning on porting more of the functionality that currently only exists in .NET Framework / Mono to .NET Core. This document will call out how we're going to do that, how we prioritize, and what the mechanics will be. Not only would I like that work to happen in the open, I'd also like to enable the community to help us port more functionality.

Breaking Changes

There is one area that causes a lot of confusion when folks talk about .NET Core. Let me clarify one thing:

It's not a breaking change if .NET Core has fewer APIs than the .NET Framework.

The reason being that .NET Core is a new platform and it can technically have an arbitrary set of APIs. However, of course, we don't want an arbitrary set -- that's what we did in the past. The goal for .NET Core is to have a story where folks can author libraries (and with console apps, to a certain extent, even apps) that will run on .NET Framework and .NET Core. This requires that there is a subset of both platforms which is 100% compatible. In that context, you'll hear us talk about breaking changes.

On top of that, our intent is to have a high compat bar within .NET Core. In other words, we don't plan on performing API breaking changes between one version of a .NET Core API and another.

Interfaces

Adding members to interfaces is -- by definition -- a breaking change. Some people argue that the impact is low and that there are ways to model that, but as it stands today it is a binary and source breaking change.

WinRT, which is based on COM and thus heavily dependent on interfaces, solves this problem by creating more interfaces, such as IFoo, IFoo2, IFoo3. It's doable, but it's certainly messy without a runtime or language feature to make it bearable. So far, such a feature doesn't exist. However, as a member of the language design team I'm quite interested to hear proposals. Other languages and platforms have related ideas in that space, and I'm actively looking into options as well (such as mix-ins/traits, Swift's extension-everything, or default members for interfaces).
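To make the versioning pattern concrete, here is a minimal C# sketch of the IFoo/IFoo2 approach (the interface and method names here are hypothetical, purely for illustration):

```csharp
// Shipped in v1 -- can never change again without breaking every implementer.
public interface IRecordReader
{
    object GetValue(int ordinal);
}

// v2 wants a new capability; the only non-breaking option is a second interface.
public interface IRecordReader2 : IRecordReader
{
    bool IsDBNull(int ordinal);
}

public static class RecordReaderExtensions
{
    // Consumers have to probe for the newer surface at runtime.
    public static bool TryIsDBNull(this IRecordReader reader, int ordinal, out bool isNull)
    {
        if (reader is IRecordReader2 v2)
        {
            isNull = v2.IsDBNull(ordinal);
            return true;
        }
        isNull = false;
        return false;
    }
}
```

Every new member means another interface plus another runtime type-check at each call site, which is exactly the messiness being described.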

Since we still care about backwards compatibility we generally favor abstract base types over interfaces.

ADO.NET Interfaces

All of that being said, let's talk about the original ask in this thread: exposing the ADO.NET provider interfaces.

As David explained: we considered these deprecated when the abstract base types were introduced, which was in .NET 2 / Visual Studio 2005. It seems that there is a strong belief that having these interfaces is critical to port ORM frameworks to .NET Core. To me, this provides sufficient evidence that we should port the interfaces to .NET Core.

However, as it is with all new APIs we need to be mindful of one of the primary goals of .NET Core, which is having a componentized stack. The interface IDataReader has a dependency on DataTable, which has a dependency on DataSet. As outlined in dotnet/runtime#14302, we're not opposed to adding support for DataTable, but we consider DataSet legacy. However, we might still add it as a separate package, but either way this would require breaking the dependency chain from interfaces -> DataTable -> DataSet. I'll work with @YoungGah to see what we can do there.

Would this approach address the concerns?

Unless I am mistaken, the only DataTable dependency in IDataReader is the GetSchemaTable() method, already discussed at length in the DataTable chain. I do readily acknowledge, however, that the fact that there is a hope to add _some_ kind of similar functionality at a later date (whether via DataTable or not) makes it very awkward to expose this on the interface, as extending the interface later is problematic. It wouldn't be quite as simple as "remove the method for now, add something else later".

Yes, IDataReader has a dependency on DataTable which has a dependency on DataSet.

As mentioned earlier, we cannot remove members from interfaces, so we would need to break that dependency some other way; either by not porting the interface, or by breaking DataTable -> DataSet dependency.

Would this approach address the concerns?

Yes, restoring the ADO.NET interfaces, even without the GetSchemaTable() method, would solve the heavy ADO.NET interface dependency issues I'm currently faced with.

I'm assuming that breaking the DataTable -> DataSet dependency, if it meant GetSchemaTable() would also be included, would be the preferred approach for those reliant on GetSchemaTable() (so that'd be my vote) - but I'd give more weighting to the devs this affects.

@terrajobst thank you for your summary and sharing your considerations.

@terrajobst thanks for the summary and explanations. I maintain Npgsql so am writing from the perspective of an ADO.NET provider, not an ORM.

In .NET Core, Microsoft seems to have had the desire to remove some long-time legacy features of ADO.NET, namely DataTable/DataSet and the interfaces. This makes for a better, cleaner API moving forward, but creates substantial work for users already depending on the interfaces and the DataTable/DataSet API.

I personally think that killing DataTable/DataSet is a good thing, and that a new and better metadata API should be introduced (I think this is true even without killing DataTable/DataSet). I don't minimize the pain caused by this breakage to ORM (and other) consumers, but one thing to remember is that if there's ever going to be an opportunity to clean up and introduce breakage - .NET Core is that opportunity.

An additional frustrating factor here is that keeping the ADO.NET interfaces also means keeping DataTable/DataSet as well. In other words, although I personally don't feel very strongly about the interfaces problem, keeping them means keeping DataTable/DataSet and that seems more problematic.

Since I don't know how many ORMs there are out there which still depend on the interfaces, and I also don't know how much of an effort it would be for them to transition to the base classes, the choice isn't very clear here. But my gut feeling is to take this opportunity and clean things up even if it means some people suffer...

PS Whatever you do, don't go down the route of interface explosion, i.e. IFoo2, IFoo3. That's just a non-solution.
PPS If you do decide to create a new non-DataTable metadata API in .NET Core and want .NET Core libraries to run in .NET Framework, this means you'd have to release a new .NET Framework with the new API along with .NET Core.

@terrajobst Thanks for breaking the silence ;). I think the core issue is that there's no design present for ADO.NET. There hasn't been for quite some time, and it shows in various parts of the API, which is a hodgepodge of features padded on without thinking things through for more than a couple of minutes (it seems). I agree with @roji on this: .NET Core is a once-in-a-lifetime chance to do something about things, so .NET Core shouldn't be held back by the (in my opinion, silly) rule that it should be backwards compatible with .NET full.

That said, if things aren't going to change for the better for ADO.NET (i.e. it gets a real design where a general API is designed which is then specialized in the SqlClient and not the other way around! So features available not in SQL Server but in other Databases are added to the general API and are not left to the ADO.NET provider writer), then the next best thing is to port as much over as possible, including the interfaces and we 3rd party devs #ifdef ourselves around the potholes that will be left.

The DataTable dependency can be a problem however, as the IDataReader interface will make it hard to keep things backwards compatible with .NET full _if_ datatable in whatever form isn't ported. But I think that's a lost cause anyway. It was said by MS multiple times that .NET full won't receive as many / as frequent updates as .NET core, so if something _new_ is added to .NET core, no-one can use it, as using it makes the library immediately incompatible with .NET full. I likely miss something here, so if that's the case, please correct me :)

That alone makes it a strange situation: there has to be backwards compatibility, but in practice this seems hard to achieve and a red herring anyway. IMHO if that's addressed first, the outcome of that can be used to make proper decisions on what to do with .NET core's APIs, i.e. improve them instead of drag old badly designed cruft from .NET full along. That's not to say all APIs are badly designed, however with ADO.NET (as I described in an earlier post), things haven't been done properly for many years. Making a clean break and instead implementing a system that can be more robust (JDBC is still going strong, ODBC still works like it did 25 years ago) and adaptive to change is preferable.

Thinking a bit more about this, @FransBouma's comment on the backwards compatibility requirement makes a lot of sense. One of the promises of .NET Core is faster iterations thanks to componentized Nuget packaging - instead .NET Framework updates coming once/twice a year, .NET Core updates can be released whenever needed. It seems that the value of this is severely limited if updates can never break backwards compatibility with .NET Framework...?

I understand that backwards compatibility is a requirement that goes way beyond this ADO.NET-specific discussion, am just wondering.

It's the other way around @roji: short iterations are enabled by not breaking APIs.

If you're a .Net developer and your build keeps breaking every week because the API of the underlying framework is constantly changing then it won't be long before you start considering another platform.

So it's fast iteration driven by non-breaking changes.

(edited)
You can only iterate without breaking changes until you have to, e.g., add something to a class interface, or make a behavior change (!) or behavior addition. The latter isn't a breaking change in the literal sense (a behavior change is, though), but using it on .NET Core will make your code incompatible with .NET full, so in that sense it makes .NET Core no longer backwards compatible with .NET full _for that piece of code_. Which IMHO comes down to this: either .NET Core will always have the same (class) interfaces and behavior as .NET full, to the last byte (which is IMHO unsustainable), or it will get new features which are backported to .NET full later (which is literally what MS said, btw), effectively making it not backwards compatible. It depends on what side of the fence you are on, of course: if you say "there's a common set of (class) interfaces X with defined behavior B, both .NET Core and .NET full implement these, and X & B won't change in the future", it still means there's a new framework outside X & B which will get new things, and that is precisely where things can change and also where the future lies.

One could go very far with this, e.g. that the interfaces / classes used in X & B in .net core are actually wrappers around the new classes / interfaces in .NET core. It's then up to the developer using them to use either X & B or the new ones with better design and without an API in common with .NET full.

As we already have to #ifdef our way around missing things in X & B, when supporting both frameworks, IMHO it's preferable to simply be able to target new interfaces / classes in .NET core as sooner or later behavior changes (maybe even very subtle, like a different exception in a given situation) will crop up in there anyway, so 'porting' code to .NET core with its _new_ API (not X&B) would then be better. We've to port anyway, at least that's how I see it.

@ryanbnl, I agree with @FransBouma. The point isn't that we want to be able to break APIs. It's that constraining .NET Core to be runnable within .NET Framework means you'll never be able to add anything to any .NET Core interface, or to add an abstract member to any abstract base class. These two changes don't really break backwards compatibility in any real sense for someone using .NET Core; they only break compatibility with .NET Framework.

@roji unless it's a new external-to-framework package (i.e. not in the GAC), in which case it can be compatible with both Framework and Core and run on its own cadence; but then that might be even more changes for ORM providers, and the name System.Data.IDbConnection is already taken...

@benaadams that's a valid comment - I only had non-nuget framework APIs such as ADO.NET in mind. However, these do seem to cover enough API space to be a concern - .NET Core won't be able to evolve any of these (read: add interface methods or abstract methods) without becoming unrunnable on .NET Framework.

I'm not sure what you mean regarding IDbConnection though...

@roji just meant a new package that provided these types, for example System.Data.Database; to remain compatible with both Core and full, it couldn't redefine any types that would be GAC'd in the full framework, or it would clash.

On the nuget point: is there any reason this couldn't live in nuget for both full and core, and deprecate the current System.Data api post 4.6.1+?

It would cause greater pain now; but some compat is already being broken, whether it's dropping the interfaces or only dropping DataSet, so some rework already needs to be done for coreclr by ORM providers.

An entirely new api that lived in nuget and outside the GAC'd framework could be backwards compatible for use with netstandard1.2, e.g. 4.5.2+, coreclr, UWP, mono/Xamarin etc. It would be some more pain now - but it's probably a better time than later.

Given that the new APIs are in play for schema and such, and noting that DataSet (or DataTable) won't be coming over, should this be closed out? It sounds like the base classes are the way forward based on comments in dotnet/corefx#5609 and type forwarding, which means the interfaces have no use, given that GetSchemaTable() and some others aren't there to bring them over for compat... is that fair to say?

Should what be closed out? If we can't have GetSchemaTable() because of the DataTable/DataSet dependency that's one thing, but the interfaces should still be restored (sans GetSchema if needed) and ease porting existing code bases with deep dependencies on them. The missing interfaces are a blocker, I'm still waiting for a release with them before we can begin the work to support dnx/core.

I agree with @mythz, the interfaces are another subject and very important. Maybe not for most users, but these same users use code which is written by a small group and that code _is_ relying on these interfaces (as well as other important missing ADO.NET features like DbProviderFactory).

To be honest, with the extreme push towards an 'RTM' label, I have little hope we'll get a solid API with 1.0. It will be like .NET 2.0 all over again: all mistakes made by the first version will then be corrected.

@FransBouma

Adding the interfaces to .NET Core would be an additive change. So even if it doesn't make the cut for V1, it could be added in any version following, even 1.1.

all mistakes made by the first version will then be corrected.

No offense, but that's the way software works.

@terrajobst

So even if it doesn't make the cut for V1, it could be added in any version following, even 1.1.

yeah, it's not that it's technically not possible, it's that the results of doing so (or the lack of doing so) have (far-reaching) consequences, which Microsoft isn't the one suffering from - that's us. So far I've seen little sympathy for that. It's all cool and exciting to work on new frameworks, but it's not developed in a clean room for a new audience. It's also not the case that there's no history to learn from; on the contrary.

No offense, but that's the way software works.

See, this is exactly what I mean above: you're not the one who has to deal with the consequences of your decisions, I have to. And I already have dealt with the consequences of a similar decision by your predecessors, so I gave you a heads up so you wouldn't make the same mistake.

I know how software works, I've been a professional software developer now for over 21 years. I gave my honest advice, not as a novice but as a battle-hardened expert in this particular area. You can do what you want with it, it's just that I hope you think twice before making a decision lightly here, as the consequences are far reaching, and as I said: we have to deal with these, not you.

Even if a mistake can be fixed later, when it hasn't been made yet that's a good reason not to make it in the first place, isn't it?

You guys are overreacting. I doubt the effects of this would have the same consequences as the old .NET had.

With coreclr bundled within each application, legacy has a very different meaning. Like pretty much nobody cares if features from asp.net mvc 5 are not backported to mvc 4 or 3. Legacy here will have a different meaning, and there is history in other projects to show that.

Good thing @terrajobst is open to consider this for the next versions.

@nvivo Please don't try to downtalk consequences _I_ have to deal with, as you don't have to deal with them, but I do.

Thanks @FransBouma for putting me into my place. It was my mistake to think I could comment on the issue. You certainly are more qualified than me to know what kind of things affect my work.

In fact, although I opened the issue, it has absolutely no effect on my work or the things I care about. I was just thinking about the poor developers like you that do all the hard work on the planet.

I'm really glad people like you are here to take care of the hard problems. Please don't hesitate to tell us again and again and again (and again) how much more important issues _you_ have to deal with are.

Thank you @FransBouma.

_sigh_ Where do I say all that? All I say is please don't downplay things, as you do that with 'you're overreacting', I don't think I am overreacting. With 'you don't have to deal with them' I mean: the consequences for _me_. Because I know what those are I react the way I did. Apparently that's 'overreacting'.

But whatever.

We have answers for this issue here and here.

The official answer is: Interfaces will not be on .NET Core 1.0, and although unlikely, they may be considered for future releases in some different form from what existed on .NET.

I'm closing this issue as the original question was addressed.

@nvivo Thanks, but best to leave any official responses to people actually responsible for the project who are also capable of closing issues themselves once they've decided it's been resolved.

@terrajobst Is there an updated official response/timeline for the Interfaces? and what's the best way to track this work item going forward? should we open a new issue or will you continue to provide any updates here?

Let's leave this open for now. The way I see it, the answer wasn't "let's not expose the interfaces". The answer was "let's find a way to expose them but let's think about what it means for the DataTable dependency".

Sorry for getting back to you guys so late. After discussing various options within our team, we decided to bring back the interfaces with an empty DataTable class. It is not an ideal solution, but given the time frame of RTM, this approach will ensure we can pursue viable options around DataTable/DataSet in the future. We will try to bring in the interfaces for System.Data.Common by v1 RTM; SqlClient won't implement the interfaces by v1. Thanks for your feedback and patience. Your feedback is a key part of making the data stack a viable product.

@YoungGah thx for the update; if the DataTable classes are just going to be empty placeholders, what's requiring so much time/effort (i.e. holding the interfaces back) from being implemented by SqlClient in v1?

@mythz The cost is in implementing the interfaces in the base type / forwarding them to existing methods. The cost should be minimal, but usually things show up :smile:

We have added the following interfaces to .Net CoreFX in System.Data.Common

IDataParameter
IDataParameterCollection
IDataReader
IDataRecord
IDbCommand
IDbConnection
IDbDataParameter
IDbTransaction

This was done in the PR https://github.com/dotnet/corefx/pull/6359

@saurabh500 Great stuff, thx!

:+1:

:+1:

Awesome; is there a milestone for this to hit nuget? rc3?


As commented on the PR, we need to understand why there are no async methods on these interfaces. As it was proposed, this is basically a trimmed down version of ADO.NET 1.1 interfaces, and I don't think the idea should be only compatibility with old code.

The interfaces should focus on the current state of ADO.NET, as async methods should be the default way to access any database today. Without real support for async methods, these interfaces are useless for modern development.

And even including the async methods from .NET 4.5, some additional methods, like DbTransaction.CommitAsync, should be added as well.

The postgres provider added some additional methods like CommitAsync to their api which are quite useful and necessary.

The current interfaces are fine as they are. The implication of changing them is just too large.

The async model is quite different from the synchronous one and as you might know, if you go for the async model you should do it all the way. Hence there are really no reason to have the same interfaces for both APIs. Create new ones for the async API.

If the .NET team want to provide a more modern API, why not just create a new API which is not called ADO.NET? No legacy to be hindered by and no complaints from the community. That also fits well with how dnx is going to be distributed, i.e. independent packages.

:+1: on the interfaces, good compromise.

I don't think the idea should be only compatibility with old code.

That's the _entire_ idea here. Otherwise, base classes would be fine. That's a whole lot of porting pain we want to avoid.

Without real support for async methods, these interfaces are useless for modern development.

I disagree with this, but I don't disagree with an async version of the interfaces necessarily (which no one implements today). This would be a new feature. We can't retroactively be adding members to existing interfaces, that just breaks far too many things. Having an IDbReaderAsync or something isn't crazy IMO, but it is a different discussion.

I strongly believe async methods should _not_ be on the base classes, if the default implementation is a sync wrapper - that's actively bad and wasteful. If there's another proposal there then so be it, but again: that should be a different issue anyway.
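For context, the "sync wrapper" default being objected to looks roughly like this (a simplified sketch of the async-over-sync pattern, not the actual BCL source):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public abstract class DbCommand
{
    public abstract int ExecuteNonQuery();

    // Default "async" implementation: runs the sync call on the calling
    // thread and wraps the result in an already-completed Task.
    // No real asynchronous I/O happens unless a provider overrides this.
    public virtual Task<int> ExecuteNonQueryAsync(CancellationToken cancellationToken)
    {
        if (cancellationToken.IsCancellationRequested)
            return Task.FromCanceled<int>(cancellationToken);
        try
        {
            return Task.FromResult(ExecuteNonQuery());
        }
        catch (Exception e)
        {
            return Task.FromException<int>(e);
        }
    }
}
```

A caller gets the async method shape without any of the async benefits: the thread still blocks for the full duration of the query, which is why shipping that as a silent default is "actively bad and wasteful".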

Ok, maybe I expressed myself in the wrong way here or was too strong with my words.

I'm in favor of an additional interface for async if needed. What I'm not OK with is having something that defines an official contract for ADO.NET (that's what interfaces are) but doesn't have async methods anywhere in it.

But then, having alternative interfaces for async methods would probably cause other problems...

I strongly believe async methods should not be on the base classes, if the default implementation is a sync wrapper - that's actively bad and wasteful.

I agree, this is the main reason why most providers don't bother implement a real async api. But changing that would break much more code and probably cause much more noise than removing interfaces, as the base classes have been the actual API for providers since 2.0.

Upgrading a library to not use any of the 1.1 interfaces would cause almost zero impact compared to removing all async code written in the last years, that would be disastrous. The compromise is having both. Any code written today should be using async apis, so leaving that out doesn't make any sense.

Any code written today should be using async APIs.

I don't want to sound too harsh, but that ideal world is a very far cry from reality. Async is very pervasive and infectious. You simply can't rely only on async APIs in libraries and expect entire applications to become async consumers (changing a _ton_ of their code to be async as well) on a whim. Sync -> async everywhere is also very bad for lots of deadlock and efficiency reasons. There will be synchronous code written for many years to come.

There is a strong need for both APIs. The point is: let's not remove the current ones or delay their presence for a hypothetical and not-yet-designed new set. We can worry about the second/new set independently.

Upgrading a library to not use any of the 1.1 interfaces would cause almost zero impact compared to removing all async code written in the last years

To what are you referring? There have been no async APIs for such code to exist. If you're relying on such APIs they're not on a base class or an interface, but on a provider directly. That will not be impacted by this.

Any code written today should be using async apis, so leaving that out doesn't make any sense.

Leaving out a lot of things doesn't make sense...except for the fact we're all constrained by resources (especially time). I don't believe anyone has left anything out _permanently_. Nothing is off the table. It simply hasn't been gotten to yet. I'd open another issue specifically for starting a spec on async interfaces for a future generation.

To what are you referring? There have been no async APIs for such code to exist. If you're relying on such APIs they're not on a base class or an interface, but on a provider directly. That will not be impacted by this.

.NET 4.5 introduced async methods on the provider base classes. This was in 2012, almost 4 years ago, so it has been part of the ADO.NET provider API for a while. Entity Framework 6 (released in 2013) depends on these async APIs for all providers.

Async methods are already part of the ADO.NET for enough time that a lot of people would scream if that was not included in .NET Core. I have _legacy code_ that uses async methods in ADO.NET.

I'm advocating that since they're _already_ part of ADO.NET, this should be present on the new interface API as well.

If people want (and they should) to use the async APIs, they can already do that _before this change_ by using the base types. Ultimately, the request to support the interfaces was made for backwards compatibility reasons; adding methods to an interface _completely blows this out of the water_. That said, it is actually just about possible as _extension methods_ and type-checking against the abstract base-types, but... pretty ugly and not worth the pain IMO.

So, short version: I can't personally get behind adding async to the interfaces, as that destroys the one thing we wanted them for in the first place. If you want async: you need to code against the base-classes, or use tools that gloss over these details for you.
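The extension-method-plus-type-check workaround alluded to here might look something like this hypothetical sketch (ExecuteNonQueryAsyncCompat is an invented name, not an existing API):

```csharp
using System.Data;
using System.Data.Common;
using System.Threading.Tasks;

public static class DbCommandCompatExtensions
{
    // Offers an async-shaped call on the legacy 1.1 interface by sniffing
    // for the modern base class underneath; degrades to a sync call wrapped
    // in a completed Task otherwise.
    public static Task<int> ExecuteNonQueryAsyncCompat(this IDbCommand command)
    {
        if (command is DbCommand dbCommand)
            return dbCommand.ExecuteNonQueryAsync();

        // Ugly fallback: the 1.1 interface has no true async path.
        return Task.FromResult(command.ExecuteNonQuery());
    }
}
```

This keeps the interface untouched, but callers only get real async behavior when the runtime type happens to derive from DbCommand, which illustrates why it's "pretty ugly and not worth the pain".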

I'm advocating that since they're already part of ADO.NET, this should be present on the new interface API as well.

You're completely misunderstanding the purpose of these ADO.NET Interfaces which is to maintain compatibility with existing code. These aren't _new_ interfaces, these are _existing_ interfaces. If you want to access the newer API's, reference the concrete base types.

@nvivo Apologies, I'm just not following you - I was talking about _interface_ APIs - those have never existed. The base types already have all of the same *Async methods - is there something specific that's missing? I think you're arguing that they should be bundled into interfaces...yeah sure, but that's another issue which I encourage you to open.

I'd much rather they had been interfaces from the start, since the base implementations needed to make the current approach work (async over sync) are terrible tradeoffs. But we also can't have it both ways: either they move to interfaces, or they're present (as is the current case) to minimize breaks.

Yeah, I think we're going in circles here. I stated this before, I don't think interfaces should be added _just_ to help porting code. From a compatibility standpoint, the base classes have been the official API for ADO.NET since 2005, and that's what providers implement. Anything using an IDbCommand or IDbConnection could easily be ported (and should have been ported) to use base classes with a search/replace and have no downsides.

I know you're not a fan of ifdefs, but supporting this for a new platform will only be part of the migration anyway.

I agree this should have been interfaces all along, but since they weren't, I'd like to see this problem not repeat itself. If interfaces are being added, they should at least represent the current API, and not what they were a decade ago. Async methods are an integral part of the current API and that's the direction Microsoft is moving for quite some time. It would still be source compatible, just more complete.

@mgravell

If people want (and they should) to use the async APIs, they can already do that _before this change_ by using the base types.

This is not about being able to do anything. It's about architecture. Interfaces are contracts, .NET Core is a new framework that is adding this contract to a redesigned version of the API.

.NET Core should not add an official contract to a new API just to help migrate really old code, while most of the other stuff will be missing anyway. If that is a concern, people just aren't looking hard enough at all the other reasons they'll need to change their code anyway.

If that's all the team is doing, then that's OK... it's just a bad choice IMO.

Anything using an IDbCommand or IDbConnection could easily be ported (and should have been ported) to use base classes with a search/replace and have no downsides.

False. The issues have been discussed numerous times in this thread from multiple library authors with first hand experience affected by this.

I know you're not a fan of ifdefs

Any solution requiring end customers to use ifdefs is a broken dev experience and a non-starter; i.e. there will never be a successful product that requires customers to litter their code with #ifdefs when alternatives exist.

If interfaces are being added, they should at least represent the current API

These are not new interfaces, they're restored interfaces. The current and future APIs are the base classes, not these interfaces. There should be no issue here, you can forget these interfaces exist and continue using the base types just as before these interfaces were restored.

There's no new value being added to this thread anymore. The existing ADO.NET interfaces have been restored so this thread can be put to rest. Only thing needed from this thread are updates to DataTable and GetSchemaTable() as they pertain to the existing interfaces. If you want to propose architectural changes or advocate for new interfaces, open a new issue - which will stop everyone in this list from being spammed.

@mythz let's agree to disagree.

Just adding my 2 cents as another ORM developer, abstract classes are always a code smell when not backed by an interface. Would love to see new interfaces provided to match the abstract classes and method signatures overloaded with a minimum requirement interface API.

Thumbs up to the community for speaking up.

abstract classes are always a code smell when not backed by an interface

@psibernetic Can you help me understand that statement? What about this is a code smell?

@psibernetic

Interfaces and abstract classes both give us a contract: an abstraction and a good definition for the API. Interfaces are most useful when the implementing classes could implement more than one interface, or are already subclasses of another base class (assuming that base class offers a very big advantage). In this particular case, the concrete Connection, Command, and related classes for specific providers have a strong IS-A relationship with the abstract API definitions. I really can't imagine a scenario where a developer needs to add a concrete implementation of IDbConnection or IConnection to an existing subclass. Almost the only scenario will be new classes that derive solely from the abstract class, and duplicating the same definition in an interface is unnecessary extra work for the API designer.

Do you see a specific, concrete advantage or scenario for having two equal abstractions? Where does the interface provide a practical, real benefit _over_ the abstract class in this specific API design?

The only advantage I can think of for the interfaces is backward compatibility: keeping them matching the old ones breaks less of the running code that depended on them. If we didn't have the old interfaces, I'm pretty sure the abstract classes would be enough.

@eocampo You are right that the abstract classes probably provide "good enough" abstraction and contracts. I always try to provide very narrow interfaces that represent actions that can be taken, such as IAsyncCommand and the like. That allows my frameworks to be plugged into in ways that may not have been considered when the framework was designed, with less chance of terrible NotSupportedExceptions or NotImplementedExceptions.
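A minimal sketch of the kind of narrow interface described above (`IAsyncScalarCommand` and `CachedScalarCommand` are invented names, not part of ADO.NET):

```csharp
using System.Threading;
using System.Threading.Tasks;

// A narrow, action-oriented contract: one capability, nothing else.
public interface IAsyncScalarCommand
{
    Task<object> ExecuteScalarAsync(CancellationToken token);
}

// A cache-backed implementation needs none of DbCommand's surface area,
// so nothing has to throw NotSupportedException.
public class CachedScalarCommand : IAsyncScalarCommand
{
    private readonly object _cachedValue;

    public CachedScalarCommand(object cachedValue) => _cachedValue = cachedValue;

    public Task<object> ExecuteScalarAsync(CancellationToken token)
        => Task.FromResult(_cachedValue);
}
```

Any consumer written against `IAsyncScalarCommand` can then be handed a real provider-backed command or this cache-backed one interchangeably.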

@davkean The code smell is that in most cases, although not all, you are requiring an implementer to implement or inherit an entire base set of functionality that may not be relevant. I remember seeing IDataReader implementations that read from a cache or from memory instead. Not sure whether the DbDataReader abstract class would allow that, but the name implies it wouldn't.
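An in-memory IDataReader is possible, but note how much of the contract ends up throwing, which is exactly the smell described above. A hedged sketch (`InMemoryDataReader` is an invented name; single column, forward-only):

```csharp
using System;
using System.Data;

// A single-column, forward-only IDataReader over in-memory values.
public class InMemoryDataReader : IDataReader
{
    private readonly object[] _rows;
    private int _index = -1;

    public InMemoryDataReader(params object[] rows) => _rows = rows;

    // The members a simple forward read actually uses:
    public bool Read() => ++_index < _rows.Length;
    public object GetValue(int i) => _rows[_index];
    public int FieldCount => 1;
    public string GetName(int i) => "Value";
    public int GetOrdinal(string name) => 0;
    public object this[int i] => _rows[_index];
    public object this[string name] => _rows[_index];
    public bool IsDBNull(int i) => _rows[_index] == null;
    public int GetValues(object[] values) { values[0] = _rows[_index]; return 1; }
    public string GetString(int i) => (string)_rows[_index];
    public int GetInt32(int i) => (int)_rows[_index];
    public Type GetFieldType(int i) => typeof(object);
    public bool IsClosed { get; private set; }
    public void Close() => IsClosed = true;
    public void Dispose() => Close();
    public int Depth => 0;
    public int RecordsAffected => -1;
    public bool NextResult() => false;

    // Everything else the contract forces on us just throws:
    public DataTable GetSchemaTable() => throw new NotSupportedException();
    public bool GetBoolean(int i) => throw new NotSupportedException();
    public byte GetByte(int i) => throw new NotSupportedException();
    public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) => throw new NotSupportedException();
    public char GetChar(int i) => throw new NotSupportedException();
    public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) => throw new NotSupportedException();
    public IDataReader GetData(int i) => throw new NotSupportedException();
    public string GetDataTypeName(int i) => throw new NotSupportedException();
    public DateTime GetDateTime(int i) => throw new NotSupportedException();
    public decimal GetDecimal(int i) => throw new NotSupportedException();
    public double GetDouble(int i) => throw new NotSupportedException();
    public float GetFloat(int i) => throw new NotSupportedException();
    public Guid GetGuid(int i) => throw new NotSupportedException();
    public short GetInt16(int i) => throw new NotSupportedException();
    public long GetInt64(int i) => throw new NotSupportedException();
}
```

Reading works like any other reader (`while (reader.Read()) { var v = reader.GetValue(0); }`), but roughly half the surface is dead weight that a narrow read-only interface would avoid.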

The best-practices model followed predominately in dot net has been expose interfaces and inherit from base classes has it not?

The best-practices model followed predominately in dot net has been expose interfaces and inherit from base classes has it not?

@psibernetic Well, not always. For example, this recommendation on the MSDN site has been there for more than a decade, and that guideline has been common since .NET Framework 2.0 at least.

Also this is a good reference of the guidelines for library design in .Net from the early days:

http://www.amazon.com/Framework-Design-Guidelines-Conventions-Libraries/dp/0321545613

Anyway I think the hard discussion here is about two subjects now:

a) Are the interfaces just for backward compatibility, or can we "start from scratch" (breaking code) to allow a cleaner interface and API design?
b) How far can we go toward a modern, clean design at the cost of not being compatible with the full .NET Framework? (Compatibility specifically between .NET Core and the full framework on data access, not the most low-level and mandatory compatibility.)

From my perspective, if we have the abstract base classes as the primary and preferred contract, then the _interfaces_ must match the old ones, just for compatibility. I understand @nvivo has already stated that after .NET 2.0 the official contract was the abstract base classes, so we _could_ think the interfaces won't resolve the compatibility issue, but @mythz and @mikeobrien have also provided hard data here about providers' dependency on the 1.1 interfaces.

To stop spamming and going in circles here, we would need to reread this long conversation, and I don't know whether we can agree on the LIST of specific topics we are addressing, or whether it would be better to create two or three new issues, one per topic. I lean toward the first option because there are a lot of good points here. I don't have a good idea of how we can re-summarize all of this and clear out some of the noise (even mine).

Speaking of interfaces, are there plans to finally make parts of System.Data generic? It has always bothered me that System.Data never really got its API updated beyond .NET 1.1, leaving people having to use hacks like the .AsEnumerable() extension method to get an IEnumerable&lt;DataRow&gt; out of a DataTable. Why haven't collections like DataRowCollection been upgraded to implement the generic interfaces, when everything else in the framework did when 2.0 came out?
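The workaround in question looks like this (on the full framework it requires a reference to System.Data.DataSetExtensions; column and row values are invented for illustration):

```csharp
using System;
using System.Data;
using System.Linq;

class Program
{
    static void Main()
    {
        var table = new DataTable();
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add("Ada");
        table.Rows.Add("Grace");

        // DataRowCollection only implements the non-generic IEnumerable,
        // so LINQ needs the AsEnumerable() extension method to produce
        // an IEnumerable<DataRow>:
        var names = table.AsEnumerable()
                         .Select(row => row.Field<string>("Name"))
                         .ToList();

        Console.WriteLine(string.Join(",", names)); // Ada,Grace
    }
}
```

Had `DataRowCollection` implemented `IEnumerable<DataRow>` directly, the `AsEnumerable()` call (and the extra assembly reference) would be unnecessary.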

Will there be a stub System.Data with type redirections in it? I need to use ODP.NET but now I cannot.

Created dotnet/corefx#7874

@mgravell @ploeh "Rickasaurus" implied typeclasses were on the horizon (for F# at least, not sure about C# or .NET in general: https://news.ycombinator.com/threads?id=Rickasaurus). If they're indeed coming for all of .NET, would that resolve the issue?

I'm not a Haskell expert, but my understanding is they'd allow you to bolt on a simple IDbConnection, IDbConnectionAsync, and any future shared interfaces after-the-fact without breaking source or binary compatibility, and without forcing 3rd-party providers to implement everything. This, while retaining easy mockability.

Is this a correct understanding? If so, is there any chance this feature is coming to .NET for real?
