Roslyn: Proposal: Method Contracts

Created on 28 Jan 2015  ·  259 Comments  ·  Source: dotnet/roslyn

(Note: this proposal was briefly discussed in #98, the C# design notes for Jan 21, 2015. It has not been updated based on the discussion that's already occurred on that thread.)

## Background

"Code Contracts" were introduced in .NET 4 as a feature of the Framework, primarily exposed through the System.Diagnostics.Contracts.Contract class. Through method calls like Contract.Requires and Contract.Ensures, developers could add code to their methods to specify preconditions and postconditions, and then a subsequent tool known as a rewriter could be used to post-process the code generated by the compiler to appropriately implement the expressed contracts, e.g. whereas the developer puts a call to Ensures at the beginning of the method, the rewriter ensures that path through the method that could cause it to exit ends with a validation of the expressed postcondition.

``` C#
public int Insert(T item, int index)
{
    Contract.Requires(index >= 0 && index <= Count);
    Contract.Ensures(Contract.Result<int>() >= 0 && Contract.Result<int>() < Count);
    return InsertCore(item, index);
}
```

## Problem

Very little code actually uses these API-based code contracts.  They require additional tooling to be integrated into the build, they're verbose, and it's difficult to write additional robust tooling related to them (e.g. integrating contract information into documentation).
## Solution: Language support for method contracts

Basic contract support for preconditions and postconditions can be built directly into the language, without requiring any separate tooling or custom build processes, and more thoroughly integrated throughout the language. 

The compiler would generate all of the relevant code to correctly validate the contracts, and the contracts would show up as part of the method signature in the Visual Studio IDE, such as in IntelliSense at call sites to the method.

``` C#
public int Insert(T item, int index)
    requires index >= 0 && index <= Count
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}
```

These contracts would primarily be validated at run time; however, if the compiler (or a 3rd-party diagnostic) could statically detect a violated contract (e.g. a precondition requires that an argument be non-null, and null is being passed as the argument), it could choose to fail the compilation rather than allowing the bug to make its way to run time. Similarly, if the compiler (front end or back end) is able to statically determine that a contract will always be satisfied, it can optimize based on that knowledge.
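
As an illustration (a hypothetical call site, not part of the proposal), a caller that statically violates the Insert precondition above could be flagged at compile time:

``` C#
// Hypothetical caller of the Insert method declared above; 'items' is an
// assumed collection exposing that Insert. The compiler (or an analyzer)
// can see that -1 can never satisfy 'requires index >= 0 && index <= Count'
// and could report a diagnostic here instead of failing at run time.
items.Insert(newItem, -1);
```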

Unlike .NET Code Contracts, which have configurable but complicated behavior provided by a post-compilation rewriter tool (and which are no-ops if such a rewriter isn’t used), failed validation of ‘requires’ and ‘ensures’ clauses would ideally result in "fail fast" (à la Environment.FailFast(...)), meaning the program terminates immediately. This is very useful for validating preconditions and postconditions, which are typically used to catch usage errors by the developer. Such errors should never make it into production.
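
To make the intended behavior concrete, here is a rough sketch of how the compiler might lower the Insert example above (this lowering is an assumption for illustration; the proposal does not specify the exact generated code or failure message):

``` C#
// Hypothetical lowering of the contract-annotated Insert method.
public int Insert(T item, int index)
{
    // requires index >= 0 && index <= Count
    if (!(index >= 0 && index <= Count))
        Environment.FailFast("Precondition failed: index >= 0 && index <= Count");

    int result = InsertCore(item, index);

    // ensures return >= 0 && return < Count
    if (!(result >= 0 && result < Count))
        Environment.FailFast("Postcondition failed: return >= 0 && return < Count");

    return result;
}
```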

The compiler would require that preconditions and postconditions can be validated by the caller, and as such it requires that any members used in the requires and ensures clauses are all accessible to any caller, e.g. the requires clause of a public method may not access internal or private members, and the requires clause of an internal method may not access private members.
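
For example (a hypothetical type and member, shown only to illustrate the accessibility rule), the following would be rejected because callers of the public method cannot evaluate the precondition themselves:

``` C#
public class Buffer
{
    private int _capacity;

    // Not allowed under the rule above: the 'requires' clause of a public
    // method references the private member '_capacity'.
    public void Reserve(int size)
        requires size > 0 && size <= _capacity
    {
        // ...
    }
}
```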

Preconditions and postconditions should also not throw exceptions.

Virtual methods and interface methods may have preconditions and postconditions, in which case the compiler guarantees that overrides and implementations of these methods satisfy the preconditions. To make this clear to a developer reading the code, the override or interface implementation would state "requires base" or "ensures base", to indicate that there are imposed constraints, while not forcing the developer writing the code to explicitly type them out.

``` C#
interface IItemCollection
{
    int Count ensures return >= 0 { get; }
}

class MyItemCollection : IItemCollection
{
    public int Count ensures base => m_count;
}
```

## Alternatives: Fail fast vs exceptions

In support of migrating existing code to use contracts, or for places where an exception is really desired over fail fast, we could optionally consider additional syntax to specify that an exception should be thrown instead of failing fast.

For example, the previous Insert example's requires and ensures clauses would result in fail fasting if a condition isn't met:

``` C#
public int Insert(T item, int index)
    requires index >= 0 && index <= Count
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}
```

But, the developer could explicitly override the fail fast by specifying what should happen in case of failure:

``` C#
public int Insert(T item, int index)
    requires index >= 0 && index <= Count
        else throw new ArgumentOutOfRangeException(nameof(index))
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}
```


Most helpful comment

@jaredpar is there anything new in this space? will we ever see this as part of the language? I know it won't be part of C# 7 but is there any progress in this area? like discussions? different way to tackle this? what are your thoughts?

All 259 comments

Any possibility of getting invariants as well?

Would love to see the static analysis tools we have in Code Contracts working with these new contracts.

I really like the presented syntax to decide between exceptions and fail-fast for preconditions.

Could some kind of flag be provided to omit private/internal contracts from release builds if we have static analysis tools to prove them?

@aarondandy

In previous research we have shied away from invariants, in part because it's actually incredibly difficult for developers to predict the effect they will have on the code. Typically, invariant checks are validated to be true at every entry / exit point of a given method. Given that, can you point out all of the places in the following method where that happens?

``` C#
class C
{
  invariant Count >= 0

  int[] _array;
  int _count;

  void AddRange(IEnumerable<int> all) 
  {
    EnsureCapacity();
    foreach (var cur in all) {
      _array[_count++] = cur;
    }
  }
}
```

Here are just a few of the places where an invariant check must occur in the above code:

  • In the generated call to GetEnumerator.
  • Twice per loop iteration. Once for MoveNext and Current each.
  • Before and after the call to EnsureCapacity
  • At least twice inside the implementation of EnsureCapacity
  • At the beginning and end of the method.

Sure a few of these may be able to be optimized away. But even then on this really simple method the invariant check is occurring at a very high frequency.

In the past when we looked deeply at places where developers wanted invariants we found that requires and ensures were sufficient to the task.
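
For instance (a sketch using the proposed syntax; the type and members are illustrative, not taken from the comment), the effect of a Count >= 0 invariant can usually be stated with explicit ensures clauses on the members where the guarantee is actually observed:

``` C#
// Instead of an invariant checked at every entry and exit point,
// state the guarantee only on the members that expose it.
class ItemList
{
    int[] _array = new int[4];
    int _count;

    public int Count
        ensures return >= 0
        => _count;

    public void Add(int item)
        ensures Count >= 0
    {
        EnsureCapacity();
        _array[_count++] = item;
    }

    void EnsureCapacity() { /* grow _array as needed */ }
}
```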

@jaredpar

That is fair considering the complexity. The real reason I want invariants is just to reduce the amount of redundant contracts I would need to write. As a compromise, would it be terrible to have a way to reuse some pre/post conditions? Something like [ContractAbbreviator] but probably way different?

@aarondandy Are you referring to reusing a pre-condition as a post-condition, or referring to reusing a precondition across several methods in a class?

For the first situation it'd be nice if requires ensures or some other syntactic sugar was supported. For the second, it could be handled by allowing the expression to be a method call (which can be declared with the shorthand lambda method syntax), so you can have a CheckClassInvariants method and do requires ensures CheckClassInvariants(), which isn't too bad.
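
A sketch of that reuse idea (the CheckClassInvariants helper and its shape are assumptions; note that under the proposal's accessibility rule the helper would have to be visible to callers of the public member):

``` C#
class ItemCollection
{
    int[] _array = new int[4];
    int _count;

    // Shared predicate, reused as both a precondition and a postcondition.
    public bool CheckClassInvariants() => _count >= 0 && _count <= _array.Length;

    public void Add(int item)
        requires CheckClassInvariants()
        ensures CheckClassInvariants()
    {
        // ...
    }
}
```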

@mirhagk

"reusing a precondition across several methods" is what I meant but I like your other thoughts too. I don't really want to have a true provable invariant for Count >=0 but I definitely want a way to prove and communicate that each method call on my type won't do something crazy to Count.

I wonder if ensures could be made to work naturally with async method results.

Good to see that this is being considered!

Preconditions and postconditions should also not throw exceptions.

How are you going to verify this? For instance, are we allowed to do property access in contracts: requires id != this.Owner.Id? What if Owner is null? Is it uncompilable code, a NullReferenceException at runtime, or a contract failure?

Just like in API-based Code Contracts, there should be some way to refer to the old (pre-invocation) value of the field/property/parameter. old(foo) looks good, but that makes old a new contextual keyword, which you don't like to introduce often.
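
For example (illustrative only, using the old(...) spelling suggested above):

``` C#
// Hypothetical postcondition referring to the pre-invocation value of Count.
public void Add(T item)
    ensures Count == old(Count) + 1
{
    // ...
}
```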

@apskim

Preconditions and postconditions should also not throw exceptions.

This may or may not be enforced. It might just be a "you shouldn't do that" kind of thing rather than a compile time exception (perhaps static analyzers could look for it).

If it is a compile time thing, then regular property access would have to be disallowed in your case, to instead do something like requires id <= this.Owner?.Id. This doesn't throw any exceptions and will fail the validation.

To what extent can the compiler check a type's contracts for internal consistency? For example, it'd be hard to reliably solve for arbitrary expressions:

``` C#
class C
{
    double Value ensures Math.Log10(return) > 0 { get; }

    void Foo() ensures Value < 1 { ... }
}
```

There is no value of x < 1 for which Math.Log10(x) > 0.

Would these constraints primarily be evaluated at run-time, with best-effort use from tooling?


Also, if arbitrary code can be executed in requires and ensures statements, side effects must be considered. Code may behave differently when compiled to evaluate contracts vs. without.

@drewnoakes I'm assuming that an extension will be created that checks all sorts of crazy situations that you'd never want to add to a compiler (for complexity reasons). The compiler can do very minimum checks, and leave most of the checks to extensions.

In #98 it was mentioned that there may want to be some shorthand syntax for common checks (null being the most common). Something like requires item seems like a simple shorthand for the case of null checks, and combined with the ability to separate with a , would probably lead to a wider usage (requires item, data, index would be very simple to write and there'd be little reason to not use it).

An alternative is to use requires item! which makes use of the proposed non-nullable reference syntax ! and separates it from a boolean.
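
To make the two suggested spellings concrete (both are the commenter's shorthand proposals, not part of the original proposal), the comma-separated null-check shorthand might expand roughly like this:

``` C#
// Shorthand form (hypothetical):
public void Process(Item item, Data data, int index)
    requires item, data
{
    // ...
}

// Rough expansion into the full proposed syntax:
public void Process(Item item, Data data, int index)
    requires item != null && data != null
{
    // ...
}
```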

What about situations where a compile-time check is nice, but a runtime check may impart more of a performance penalty? For example, there's this comment in System.Collections.Immutable's array:

``` C#
public T this[int index]
{
    get
    {
        // We intentionally do not check this.array != null, and throw NullReferenceException
        // if this is called while uninitialized.
        // The reason for this is perf.
        // Length and the indexer must be absolutely trivially implemented for the JIT optimization
        // of removing array bounds checking to work.
        return this.array[index];
    }
}
```

...which states that, while you want compile-time checks for existence of the array, and index range, the penalties for runtime checks are too high. (In this case they're letting the hardware/vm throw the errors directly, a rarer occurrence than checking all accesses)

Maybe the validator could piggyback off of hardware/vm protections? So it could see something like requires this.array, index >= 0 && index < this.array.Length, note that it has (effectively) built-in runtime checks for those values anyway, and no-op them. Which would probably require sitting in the compiler, rather than being an external tool.

@Clockwork-Muse

We did pretty extensive profiling of contracts in our project and found that, in general, the performance hit they caused was minimal. It was usually in the 1-2% range across the system.

Sure there was the occasional check that was too expensive to make a contract. In those situations we first looked to the API itself. If verifying the invariants of the type was too expensive to put in a contract then perhaps the API itself needs to be changed.

If the API couldn't be made more efficient then we would move the check into Debug.Assert. That was pretty rare though (only a handful of occurrences).

Note: contracts do not always result in a negative performance hit. It is possible that they establish invariants (like bounds) that the JIT can optimize against and result in faster code. There were definitely places where that happened.

Would it be possible to have a method contract in this position?

``` c#
T Item(int index)
{
    requires (0 <= index) && (index < Size)
    get { return _a[index]; }
    set(T item) { _a[index] = item; }
}
```

Using a **trait** ( #129) 

``` c#
T Item(int index)
{
  trait { _a!; range: 0 <= index < _a.Length; }
  get { _a[index]; }
  set(T item) { _a[index] = item; }
}
```

The part about virtual methods and interface methods got me thinking about how this might be seen as an extension to the type system in a way... or rather another kind of type, similar to units of measure in F#. I'm bored, so I'm going to riff on the idea...

If we start with the first example in the solution above:

``` C#
public int Insert(T item, int index)
    requires index >= 0 && index <= Count
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}
```

The 'requires' clauses are additional constraints on the method's parameters, similar to how you have a type associated with each parameter. For fun I'm using 'where' below instead of adding new keywords. I think it's worth considering making it look LINQ'ish:

``` C#
public int Insert(T item, int index where index >= 0 && index <= Count)
    where return >= 0 && return < Count
{
    return InsertCore(item, index);
}
```

Which then makes me think of the notion of generic type constraints. Similar to how a delegate gives a name to the shape of a function, a typeconstraint is used to give a name to a constraint.

typeconstraint Bounded<Type, Lower, Upper> = Type N where N >= Lower && N <= Upper

Perhaps you could then use typeconstraints anywhere you would normally use a type. The generic parameters could be bound to other types, constants, members or other arguments. Now the above can be rewritten:

``` C#
public Bounded<int, 0, Count> Insert(T item, Bounded<int, 0, Count> index) 
{
    return InsertCore(item, index);
}
```

...which is looking pretty clean IMO.

And maybe I'm going off the deep end here, but let's consider going a step further by thinking of the traditional type syntax as being syntactic sugar for a specific where clause...

typeconstraint Bounded<T, Lower, Upper> n where typeof(n) is typeof(T) && n >= Lower && n <= Upper

In other words, 'traditional type syntax'...

int x

...is shorthand for...

var x where typeof(x) == typeof(int)

...or for parameters and returns...

x where typeof(x) == typeof(int)

Since non-nullable types are being considered at the C# 7 design meetings, let's see how that fits in with this:

typeconstraint NotNull n where n != null

...and union types (a bit like TypeScript has)

typeconstraint Union<T1, T2> n where typeof(n) == typeof(T1) || typeof(n) == typeof(T2)

And of course, similar to delegates, an assembly should be able to expose public typeconstraints like any other kind of type.

I mentioned it in the original discussion, but what about a short hand syntax for the most common scenarios:

public void M(string! x) {}

expands to:

public void M(string x) requires x != null {}
public void M(int >= 0 x) {}

expands to:

public void M(int x) requires x >= 0 {}
public int >= 0 Count() {}

expands to:

public int Count() ensures return >= 0 {}
public void M(int < y x, int > x y) {}

expands to:

public void M(int x, int y) requires x < y, requires y > x {}

(This specific case looks messy, so it's probably OK to only support the shorthand for simple cases, like operators and constant literals: no method calls, no referring to other arguments or fields/properties, etc. For those you would use the "full" syntax.)

public C! M(B! b) {}

expands to:

public C M(B b) requires b != null, ensures return != null

I think of contracts as a harder "type", thus I feel it might make sense to have the requirements right there with the parameter (it also avoids double typing).
For instance, you don't say public M(x) requires x is int; the int requirement is right there with the parameter, so why would further requirements on x not be right alongside it?

The downside is it might look a bit messy, especially if there are lots of parameters (but that's code smell anyways), and I guess it's (much?) harder to parse for the compiler.

@chrisaut public void M(int < y x, int > x y) {} can never be satisfied

I think the trait syntax (#129) would be a better approach.

``` C#
public void M(int x, int y)
  trait { x < y; y > x }
  {}
```

As it doesn't mess with the parameter syntax, yet is still available for the IDE to present to the user and to collapse the trait definition.

@chrisaut public void M(int < y x, int > x y) {} can never be satisfied

why not?
it expands to:

public void M(int x, int y) requires x < y, requires y > x {}

x needs to be smaller than y, and y needs to be bigger than x (which follows from the first constraint; you wouldn't even need the second constraint).
M(0, 1) satisfies it.
But it just goes to show that for anything but trivial "< constant" stuff the syntax is confusing, hence it's probably not a good idea to allow it for anything that refers to other named parameters/fields/etc.

May have misread it as x < y and y < x

I don't see why this should be method-only. Why not be able to use it with variables as well? In addition, there should be some distinction between what is enforced at compile time and what falls back to run time when it can't be checked at compile time, and how strictly it is enforced.

@jaredpar consider binding this feature to non-nullable reference types.
Although they can't be implemented the way they are in, for example, F#, some kind of 'global precondition' could mitigate the problem: it could state that no reference-typed member within a class can have a null value.
This would help to reduce the amount of '!= null' preconditions in code that tries to be more functional.

Could you explain why a run-time contract violation shouldn't throw an exception? I've shied away from using CodeContracts static analysis (builds are too slow; too much work to fix all warnings), but find them great for run-time validation and documentation. It's also great to be able to dial up and down the amount of run-time checks for debug and release builds (full checks vs ReleaseRequires). In the case of release builds, you DEFINITELY don't want fail-fast behavior; and arguably you'd want consistent behavior but more checks in debug builds.

My biggest problems with CodeContracts have been:

  1. The post-processing step slows down the build a HUGE amount (CodeContracts seems to account for 80% of the build time). This is the main reason I've considered dropping CodeContracts on several occasions.
  2. The syntax could be more readable/expressive/concise
  3. The implementation is completely broken on Mono - this is the primary reason why developers with Macs are having to run Windows VMs.

I would expect that this proposal would address 1 and 2 at a minimum, so I'm a huge fan.

Another important trait to preserve is the ability to intercept contract failures and customize the response: log them, pop up a dialog, etc.

@johncrim

Could you explain why run-time contract violation shouldn't throw an exception?

Contract violations occur when code has called into a method and provided invalid arguments or when the type itself was in an invalid state. This represents a coding error in the calling code. It simply failed to meet the basic contract of the method in question.

The safest thing to do in the face of a coding error is to stop executing code. The program is in an undefined state and continuing to execute code could lead to all manner of issues including: data corruption, later errors which mask the original bug, etc ...

This is why contract violations use Environment.FailFast (or the like) in the event of a failure. It ensures the process will be torn down in the most expedient manner possible. Crash dumps will point directly at the bug in question. Throwing an exception is instead a way to guarantee that user code will continue executing the program in an undefined state.

I totally agree that in theory a developer's / coding error in the calling code should immediately stop the program.

But: Are we sure that in practice there are no legitimate use cases where we want to fail into a safe state in an exception handler (i.e. log errors _using existing infrastructure_; send error replies in a distributed system; ...) _even_ in case of developer errors i.e. contract violations at run-time?

@ulrichb

Sure but in those case I would say that exceptions are a better choice than contracts. In my mind I view the difference between exceptions and contracts as follows:

  • Contracts: a precondition than can easily and efficiently be tested / guaranteed by the caller. If this precondition is violated then it clearly indicates a bug in the program.
  • Exceptions: used in situations where a precondition can't be tested / guaranteed by the caller efficiently, conveniently, or at all. The greatest example here is IO (it can always fail).

@jaredpar

What I meant was, that even in the case that we have a contract violation _due to a developer error / bug in the program_, do we always want to FailFast()?

One example: If we implement an HTTP server and request input leads to a precondition violation (due to a bug in the program), do we want to crash the whole server application (including all other request threads)? I think we should have the possibility that the contract violation throws an exception, i.e. just aborts the current request, and logs the contract violation, like it would happen with thrown NullReferenceExceptions (which are also bugs).

@jaredpar

I could understand that view if static analysis tools were forced to run for everybody. Currently, with the Code Contracts implementation, I think it is rare for a user of a library to actually run static analysis against their project. Even if it was baked into the compiler I would think that, especially during day-to-day development activities, some people would want to temporarily disable static analysis if they are building and testing frequently. That is how I work anyway. In these cases it is pretty nice if I just get an ArgumentNullException that I can see in my test and say "oops!"

I do still agree with "contract violation shouldn't throw an exception" but for a totally different reason I think. I like to see it kind of like driving with a sharp spike on your steering wheel, you would hopefully take extra care to not rear-end anybody. I find that when I am more likely to get an exception (or any failure really) due to violating a contract I am less likely to violate the contract as it makes me more careful. Static analysis tools help a bunch as well in that area.

@ulrichb

One example: If we implement an HTTP server and request input leads to a precondition violation (due to a bug in the program), do we want to crash the whole server application (including all other request threads)?

This is confusing user input validation with method argument validation.

User input should never be done with a contract because as you pointed out it shouldn't be a tool to crash the process. That type of sanitation should be a separate process.

@aarondandy

I think it is rare for a user of a library to actually run static analysis against their project. Even if it was baked into the compiler I would think that especially during day-to-day development activities some people would want to temporarily disable static analysis if they are building and testing frequently

I do want to be clear that the compiler won't be doing any static analysis on the contracts the user supplies. These are simply a tool for argument validation.

That being said the output of contracts is expressive enough that theorem provers can run over the code.

@jaredpar

... leads to a precondition violation (due to a bug in the program) ...

@ulrichb

How is that inconsistent with my argument?

@jaredpar I may have misunderstood your previous comments, so ignore this if so. The point I am making is that if FailFast is the only way that contracts are enforced, I think it would be very inconvenient for users of those methods. I think that having a choice between a contract throwing an exception or failing fast is important.

How is that inconsistent with my argument?
User input should never be done with a contract because as you pointed out it shouldn't be a tool to crash the process. That type of sanitation should be a separate process.

You again ignored my assumption. Given we have a bug in our program and user input is passed non-sanitized to a precondition-guarded method. Given this assumption, do we _always_ want to FailFast?

@ulrichb

Given we have a bug in our program and user input is passed non-sanitized to a precondition-guarded method. Given this assumption, do we always want to FailFast?

Yes.

The alternative is to continue running a program which has just proven that it's not doing user input sanitation. Allowing it to continue is a recipe for attacks like SQL injections to occur.

@ulrichb @jaredpar

But I could also terminate execution by not catching the exception or by (properly) re-throwing it. You may not agree with somebody's view on how to handle contract failures, just as they may not agree with yours. I think that right there shows that a choice should be allowed.

@jaredpar

Allowing it to continue is a recipe for attacks like SQL injections to occur.

I never spoke about continuing processing the request. My example was to enter a _safe state_:

... aborts the current request, and logs the contract violation ...

@aarondandy

I think that right there shows that a choice should be allowed.

Developers do have a choice: if you believe that failure of a particular precondition represents a recoverable condition, then don't use contracts; use manual verification and exceptions as they are today. Contracts should only be used for places that indicate programming errors.

@jaredpar aka. the eponymous // this should never happen?

@jaredpar As you said, contracts are simply a tool for argument validation. Passing arguments that do not fulfill the contract is a programming error.

Programming errors exist in code during development and QA. If code currently being developed or quality-tested crashed the current process (assuming Environment.FailFast), development, testing, and bug analysis would all become very inconvenient. (Think about the unit test runner crashing all the time because of programming errors. No more red tests, just abandoned execution.)

Programming errors exist even in production code, if they weren't found during development and QA. If production code failed fast due to a bug without any chance to produce a log, production bug analysis would become even harder than it already is. If a whole web server crashed due to one request triggering a bug, a small error could suddenly become a tremendous security issue - denial of service made easy.

The .NET Framework already contains a number of exception classes used to report contract violations; see the family of ArgumentExceptions. Similarly, dereferencing a null value or performing an invalid cast, which clearly are programming errors, throws NullReferenceException and InvalidCastException. The same reasoning for why these exceptions exist also applies to a new hypothetical language-supported contract feature.

Exceptions indicating that a programming error occurred are not meant to be caught and handled (or even ignored). They are not provided as a way to continue execution in a broken state. They are instead meant as a means to terminate execution of the current "operation", falling back the call stack up to a safe location, where the _application_ can decide what to do about that unexpected error. It can log it, decide to terminate the current request, etc.

@fschmied

They are not provided as a way to continue execution in a broken state. They are instead meant as a means to terminate execution of the current "operation", falling back the call stack up to a safe location, where the application can decide what to do about that unexpected error.

This is the root of the disagreement. How can the application make an educated decision that it is a safe location? The application has identified that the code it is calling has a coding error in it. That is all it knows.

It has no understanding of why this failure occurred, whether the code was left in a recoverable state, or really even what component the failure came from. The error could have occurred directly in the code that was called or even in transitive dependencies. The argument exception could be anything from a negative index to passing unsanitized user input to a back end. There is just no way of knowing.

The application simply cannot make an informed decision on whether or not it is safe to continue executing. This is why termination is the best option.

The application simply cannot make an informed decision on whether or not it is safe to continue executing. This is why termination is the best option.

There are quite a few circumstances when the application can make an informed decision to continue executing. If the action is isolated and not critical to the application itself, then it can carry on. If you have a feature in your game to share your high score with your friends, it doesn't need to kill the entire application when saving the high score fails a contract.

Your POV seems to be that any programming error could leave the program in a completely undefined state, but this isn't like C++ where the stack could be corrupted by any section of the code. You can know which parts of the program can't be affected and are safe to continue. There may be situations where you are writing critical code and maybe you do want fail fast, but I hardly think that it's impossible to know which part of your program failed.

It's just simply ridiculous to assume that the correct action would always be to kill the process immediately. If Microsoft Word had a bug when it saved to a network path because of a coding error, would you want it to immediately kill the app? No, you'd want it to save the file to a temporary location, give an error message to the user, log the problem, and then offer to try and restore the file when it loads next time.

I think that it's absolutely silly for a contract violation on argument validation to cause the entire process to crash. Implementing it that way is a sure-fire way to guarantee that the feature is _never_ used, at least by anyone writing anything non-esoteric. The point of that side of this feature is argument validation, and argument validation is _inherently_ something from which the program can recover in some manner. And frankly, if it can't, the _caller_ can make that decision by not catching the exception. This is how all implementations of code contracts that I've ever seen works on .NET now or in any other language.

On the other side of contracts, output validation, I can somewhat more understand this decision. In that case the state really is corrupt and it's hard to determine what is the appropriate course of action. However, even then I think failing fast is horribly abrupt.

In short, to fail fast is not only incredibly unintuitive and unexpected, but it will also wreak absolute havoc on all existing testing frameworks designed to help developers find and catch these very problems. Rather than make developer's lives easier it will only make it significantly harder. Or, frankly, it just won't be used, at least by anybody who understands what it does.

@mirhagk

Your POV seems to be that any programming error could leave the program in a completely undefined state, but this isn't like C++ where the stack could be corrupted by any section of the code.

True, managed code makes it less likely, though definitely not impossible, for memory / stack to be in corrupted state. However this is not the only type of bug that can lead to undefined behavior. There are many other cases:

  • Aborting a write to a file buffer in memory halfway through, leaving the file in a corrupted state.
  • Failing to properly secure a https channel.
  • Failing to properly sanitize inputs.

These represent real errors that can lead to data loss or data exposure.

Customers will be annoyed if your program crashes a lot, they'll be furious if you corrupt their data.

You can know which parts of the program can't be affected and are safe to continue.

Sorry this is just not true. It is nearly impossible in a sufficiently large program to isolate the effects on data from one piece of code to another. There are just so many ambient ways in which they can share data.

If Microsoft Word had a bug when it saved to a network path because of a coding error, would you want it to immediately kill the app?

This is an example of IO, an operation which is fundamentally unpredictable and not suitable for contracts. This is instead a place where exceptions shine.

@HaloFour

I think that it's absolutely silly for a contract violation on argument validation to cause the entire process to crash. Implementing it that way is a sure-fire way to guarantee that the feature is never used, at least by anyone writing anything non-esoteric.

I don't take the position that contract violations should cause a process failure lightly. I have this belief after years of repeated experiences across a wide variety of code bases where we moved from Debug.Assert or exception style validation to fail fast semantics. Every single time it had the same results.

Short term instability did increase. Years of bugs that were hidden by debug only semantics or exceptions that were eventually swallowed were suddenly un-ignorable (a process crash is a sure fire way of getting a bug filed). After about a month though two things began to happen.

  1. Product stability dramatically increased.
  2. Lots of bugs simply stopped reproing. Turns out they were just side effects of bugs the weak argument validation was hiding.

This pattern repeated across a variety of code bases I worked on over the years. These code bases ranged from small (about ten thousand lines) to very large (millions of lines).

Rather than make developer's lives easier it will only make it significantly harder. Or, frankly, it just won't be used, at least by anybody who understands what it does.

This is exactly how I felt the first time I was introduced to this concept. I thought it was a mistake and that we'd suffer greatly from it. After a couple weeks of living with it and seeing the bugs it was finding, I became a convert. All my experiences since then have reinforced this stance.

All my arguments being made I think this feature is more likely to end up with the two varieties:

  • Fail fast: requires <condition>
  • Fail with exceptions: requires <condition> else throw <exception>

The reason being that it's a great back compat story. It allows the unification of all of the existing forms of argument validation into one single feature with a more concise and documentable syntax.

Even if this is the eventual outcome I highly encourage you to embrace fail fast. Yes it will cause short term instability in the code base. In the long run though it will make your code better by removing a whole class of silent failures and the strange bugs that result from them.

@jaredpar

I would totally use fail fast, but I would like to be able to use that within my private surface. I really want exceptions for my public surface. I feel that if a user of my code decides to feed exceptions to the exception monster, that is their choice and they (implicitly, their users too) will deal with the consequences.

While you may see the light and I _sometimes_ see it too I don't think people will be very into contracts if it is 100% fail fast, it's just not friendly enough to be practical in my opinion. I will defend and make noise in favor of fail fast just as I do for exceptions however. I want them both and think they both have their place in my code. I don't consider myself special so I am certain that there are many others out there who feel as I do. I think the world would be better with exceptions than with no contracts at all and I strongly feel that people will not take kindly to it, if it is 100% fail fast.

The bigger deal to me is the static analysis. I already have if and throw and Code Contracts so I don't need code contracts added to roslyn. I want something better though, who doesn't? I know that has nothing to do with roslyn from an implementation point of view but if contracts and exceptions are to be separated, I at least want one unified way of encoding contracts for analysis, for both "roslyn contracts" and exceptions, so that they can all be processed together by tools. Xml docs are not enough.

How about exception/fail fast switch in app manifest?


@jaredpar

This is the root of the disagreement. How can the application make an educated decision that it is a safe location? The application has identified that the code it is calling has a coding error in it. That is all it knows.
[...]
The application simply cannot make an informed decision on whether or not it is safe to continue executing. This is why termination is the best option.

This is not about continuing executing. This is mainly about allowing application code to decide _how to fail_. Failing could mean:

  • logging, then terminating;
  • showing UI and sending a bug report, then terminating;
  • saving as much of the user's work as possible, then terminating;
  • immediately terminating;
  • only terminating the Application Domain;
  • only terminating the current Web Request.

You can know which parts of the program can't be affected and are safe to continue.

Sorry this is just not true. It is nearly impossible in a sufficiently large program to isolate the effects on data from one piece of code to another. There are just so many ambient ways in which they can share data.

It is safe to execute code after a programming error occurs

  • if you know that the block of code that triggered the programming error _cannot have any side effects on the rest of the application_ (and balance the probability of your being wrong against the risk), or
  • if you only execute code _explicitly crafted_ to run in case of a programming error.

Neither I nor (I think) the others are advocating generally catching the exception and just continuing to run as if nothing happened. But a _general purpose programming language_ simply cannot assume that it is okay to call Environment.FailFast when a programmer made a mistake; this really depends on the concrete application.

As @mirhagk wrote:

If Microsoft Word had a bug when it saved to a network path because of a coding error, would you want it to immediately kill the app?

This is an example of IO, an operation which is fundamentally unpredictable and not suitable for contracts. This is instead a place where exceptions shine.

No, it can also be because of a coding error. Assume the programmers of Microsoft Word thought that a path had certain properties and built a corresponding contract check. In production, a certain network path does not fulfill those properties, and the contract check fails. You would of course _not_ want Word to simply catch that unexpected error and pretend nothing happened. But you would also _not_ want Word to crash immediately. Instead, you want Word to log the error, try to save your contents as well as it can to a temporary location for recovery, show a user interface, then quit.

The application developers have to be able to decide what happens when an unexpected programming error occurs.

All my arguments being made I think this feature is more likely to end up with the two varieties: [...]

That's not enough. The else throw new ... syntax is great for customizing what exception is thrown. But if the ordinary contract syntax always failed fast, you

  • couldn't use it in web applications (being able to take down a whole web server process imposes the security risk of cheap DoS attacks)
  • couldn't use it in client applications that want to be able to recover a user's work in case of a programming error,
  • couldn't use it in client applications that want to display UI before crashing,
  • couldn't use it in applications that need to implement error logging or error telemetry,
  • couldn't use it in libraries (don't know what application is using the library).

In fact, you could _only_ use it in applications where it's okay to immediately crash when a programmer's bug is encountered.

About what you yourself wrote:

I don't take the position that contract violations should cause a process failure lightly. I have this belief after years of repeated experiences across a wide variety of code bases where we moved from Debug.Assert or exception style validation to fail fast semantics. Every single time it had the same results.

Short term instability did increase. Years of bugs that were hidden by debug only semantics or exceptions that were eventually swallowed were suddenly un-ignorable (a process crash is a sure fire way of getting a bug filed).

First, I'm not talking about using debug assertions for contract checking. I'm very much opposed to that. I'm talking about exceptions for signaling contract validation only.

Second, in "exceptions that were eventuelly swallowed", exceptions are not the problem. Swallowing exceptions that indicate programming errors is. IMO, failing fast always is like throwing out the baby with the bath water.

@Przemyslaw-W That would be one way of solving it, yes.

I'd rather see method contracts throw exceptions (at least for the requires clause) and add a new language keyword assert to fail-fast if whatever condition is not met.

If you polled regarding this feature I'm sure that the _vast_, _vast_ majority of developers just want a simpler way to validate arguments and to throw some variation of ArgumentException in the case that the argument is invalid. The alternate syntax for method contracts makes this possible (assuming it would be adopted) but manages to actually be more verbose than coding it by hand. As such a developer has very little impetus to use method contracts.

I agree that Debug.Assert is basically useless and I wouldn't associate method contracts with it at all.

As for the concern about developers just swallowing validation exceptions, sure, that is a valid concern. But the developer has to opt-into being stupid and the language shouldn't be second-guessing whether or not the developer had a valid reason for wanting to handle the exception differently. Given that the default behavior of the run time if an exception manages to bubble up to the threadproc is to fail fast anyway I don't see why method contracts should be jumping the gun on that decision.

I think "failing fast" is a horrible idea for all the reasons mentioned by other posts. It takes all control away from the programmer. If that was the default behavior of method contracts I can guarantee that I and many others would never use them.

The runtime and language have a robust exception mechanism that allows the user to decide how to handle errors. It's not the role of the language designers to make this decision for us.

Throwing exceptions should absolutely be the default behavior of any method contract.

I'm going to throw in my two cents here and agree with @jaredpar, with caveats. I agree that there's a difference between input validation and sanitation, and code contracts. Things like IO often can have unexpected and undefined circumstances, so exceptions fit well there. But if I'm in a core library and I receive unexpected input, then something's seriously wrong and I want to bail out (fail fast).

That being said, I also see and understand the concerns others are voicing. The biggest contention seems to be that most everyone else wants a shorthand for validating method parameters rather than strongly enforced contracts. Not only that, I do agree with the issue of not being able to record what contracts failed in the application.

Some possible solution (some mentioned above) could be:

  1. Providing two different contract keywords: one (requires) which produces an exception and another (asserts) that triggers a FailFast (sketched below).
  2. A reliable extension point where developers could record failures that trigger Environment.FailFast.
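
A sketch of what option 1 might look like (the keyword names and behaviors are the commenter's suggestion, not part of the proposal):

``` C#
public int Insert(T item, int index)
    // 'requires' (hypothetical): a violation throws, e.g. ArgumentOutOfRangeException.
    requires index >= 0 && index <= Count
    // 'asserts' (hypothetical): a violation calls Environment.FailFast.
    asserts Count >= 0
{
    return InsertCore(item, index);
}
```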

How about exception/fail fast switch in app manifest?

This seems far too non-deterministic to me.

@fschmied

It is safe to execute code after a programming error occurs

The bar you laid out here is very high and sure in those very specific circumstances it would be okay. The number of places in real code though which meet that high bar are very few.

But a general purpose programming language simply cannot assume that it is okay to call Environment.FailFast when a programmer made a mistake, this really depends on the concrete application.

As someone who has implemented this feature in a programming language used across a wide variety of application scenarios, I very much disagree.

@RichiCoder1 Yes, I think what's missing is the capacity to succinctly express the severity of a failed condition. I think something like assert (not to be confused with Debug.Assert) captures this pretty well, either in the contract portion or as a standalone keyword.

However, even then I'm a little hesitant about failing fast only because doing so is so unfriendly to other tooling like test frameworks. I'd probably like to see a way to control that at the application config level so that test runners can be updated/configured to not simply explode on a failed assertion. This wouldn't be dissimilar to the application config key which would allow a process to continue executing if an unhandled exception bubbled up past the threadproc as that behavior was changed in .NET 2.0.

@jaredpar

The bar you laid out here is very high and sure in those very specific circumstances it would be okay. The number of places in real code though which meet that high bar are very few.

We're going to have to disagree. My experience is very much the opposite, at least when it comes to contracts regarding argument validation. I'm sure that you can enlighten me about any other languages or code contract frameworks that behave in this fashion? I've yet to see one.

Even Eiffel just throws an exception.

@HaloFour

I'm sure that you can enlighten me about any other languages or code contract frameworks that behave in this fashion? I've yet to see one.

Most of the framework mechanisms I've seen / used have simply been built on top of existing ones or hand-implemented:

  • Switching C++ validation macros from asserts to hard fail logic.
  • Hooking into events like Contract.ContractFailed to crash the process vs. letting normal exception semantics occur.
  • Hand-implemented APIs like Contract.Requires plugging into Environment.FailFast (a minimal sketch follows this list)
  • etc ...
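
A minimal sketch of the hand-implemented approach mentioned in the list (the Verify helper, its name, and its message format are assumptions, not an existing API):

``` C#
using System;

// Hand-rolled argument validation that fails fast instead of throwing.
internal static class Verify
{
    public static void Requires(bool condition, string message = "precondition failed")
    {
        if (!condition)
            Environment.FailFast("Contract violated: " + message);
    }
}

// Usage:
//   Verify.Requires(index >= 0 && index <= Count, "index out of range");
```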

Language-wise, there aren't any mainstream languages that I'm aware of which have a hard contract system. Most shy away from argument validation entirely. There are a few out there with contracts, but they tend to be on the theorem-proving side, which is not what this feature is about.

However part of my previous job was adding contracts, and other extensions, to C#. The resulting language was applied across a wide variety of applications and use cases in a very large code base.

Contracts was particularly exciting to add because up until then I'd only seen it deployed as ad-hoc frameworks. I was curious to see how it would scale to an actual language implementation and it did not disappoint. The language integration allowed us to take it a bit further than a run of the mill API would have (plugging into inheritance and interfaces in particular).

The research team viewed contracts as a very successful language feature for many of the reasons I've previously listed. This particular proposal stems directly from that implementation.

We're going to have to disagree.

That's a fine outcome. I understand this can be seen as an intimidating and possibly reckless approach to argument validation. I assure you though it is not :)

I still encourage you, and others, to try it. I've seen it implemented a number of times now, almost always in the face of opposition, and every time it's been viewed as a success.

@jaredpar How have the products you've used this with captured diagnostic information about failures so these issues could be fixed? Did you just lean on Windows error reporting/logging? How was testing handled?

@HaloFour

I like the assert idea, but I think the syntax require expression else fail would be more in line with the others and more explicit about what will happen. require expression could be used to assert, and require expression else throw Exception() for exceptions. Assert and fail fast could also be toggled using a switch, as previously suggested. Just some thoughts.

@RichiCoder1

Yes we just relied on Windows error reporting / watson for post mortem debugging.

@Przemyslaw-W

I'm very much against a switch to toggle fail fast on / off. Having the behavior of the application change based on a togglable setting makes code that much harder to read. For every call site to a method it implicitly doubles the number of possible code paths. It also doubles the test matrix of the code in question.

@jaredpar If this system was similarly pluggable and the developer could make the decision then I wouldn't be as opposed to it. The default behavior being fail-fast simply does not appeal to me. At the very least support an app.config method to specifying the behavior. Having a test runner just die is ludicrous.

As for trying it, I primarily write libraries consumed by other developers. If using my library caused their process to fail the end result will simply be that they will stop using my library. I could point them to crash dumps all day long, most developers have no idea what to do with those things anyway. They understand stack traces and failing tests.

I can guarantee you that if fail-fast is the default behavior that this feature will go unused, you're better off spinning that effort elsewhere.

@HaloFour

I can guarantee you that if fail-fast is the default behavior that this feature will go unused, you're better off spinning that effort elsewhere.

This is a point we will have to disagree. I've implemented this feature in libraries and in languages. It was not shunned by any means but very much embraced.

@jaredpar

This is a point we will have to disagree. I've implemented this feature in libraries and in languages. It was not shunned by any means but very much embraced.

The community seems to think otherwise and I've yet to see compelling arguments for the "fail-fast" case beyond "trust me".

I write unit tests to ensure that my methods fail when I pass them invalid arguments. This makes it impossible to do that. There goes the test runner again.

@HaloFour

The community seems to think otherwise and I've yet to see compelling arguments for the "fail-fast" case beyond "trust me".

The community is far more than the 10 or so people who contribute to a GitHub discussion.

The community is far more than the 10 or so people who contribute to a GitHub discussion.

I think this is the important thing. From a general perspective it shouldn't be too difficult to switch the compiler from an Environment.FailFast call to throwing exceptions. I think that it should be implemented as Environment.FailFast and given to the community to play with. If the community at large has an issue with it then it can be changed before it's actually released.

As much as I think the failfast won't work, I do think that should be the approach going into C# 7 previews. Only once the community at large plays with it will we know if it won't work. And if it won't work for most people (which most people in this thread are guessing it won't) then it should be switched before the actual release. The language team has been very good at listening to the community and switching implementations before release (like how string interpolation is switching before final release).

A reliable extension point where developers could record failures that trigger Environment.FailFast.

This actually could solve a lot of the issues. Give the application the chance to use its own logging, attempt to save the application state for the startup code to try to recover, etc. Perhaps it could even be a way to abort the fail fast and turn it into an exception. That might be a little confusing, but then it makes it possible for a testing framework to catch the code. I don't know if it'd work well, but I'd like to give it a shot.

@jaredpar Having previously worked on Code Contracts, you are surely aware of the massive debate that went on there over whether or not to fail fast. The Code Contracts team tried for months (years?) to convince the community that failing fast was the right thing to do, but were ultimately unsuccessful. Having the opportunity to feel the effects of such a misguided decision, I certainly did not embrace it. I was one of those decrying the absurdity of fail fast then, and will continue to do so now.

You haven't responded to any of the objections brought up in this thread:

1) How do you do error logging? Watson is simply not sufficient; the vast majority of .NET applications don't use it because of the limited and arcane information it provides, and the difficulty in accessing that information. EVERY .NET application I have ever seen produces its own error log.
2) Is it appropriate to take down a production web server serving millions of users around the world because of an innocuous logic bug in a corner case for one particular user?
3) What happens when a unit test violates a contract? Crash the unit test runner?
4) If a programming error should always instantly crash the process, why does every other error condition throughout .NET throw an exception? Why does NullReferenceException even exist: shouldn't the process just die? Why does failing to JIT a method (which surely indicates a much more serious problem than a contract violation) throw an exception instead of killing the process?

I believe strongly in surfacing errors quickly and not hiding problems where they can fester. But insta-killing the process is not the only way to quickly surface errors, and it eliminates all other potential options.

Frankly, it is extremely condescending to continually assert that if everyone would just try it your way, they would see the light and realize that you were correct all along. Assuming that anyone who disagrees with you must be less knowledgeable or experienced than you gives you an excuse to ignore them and leaves you blind to the possibility that _you might be wrong_. The point of opening up the development process on a public forum like GitHub is not so you can read the concerns of the community and then write them off.

I am a smart person, and there are a lot of other people who are smarter than I am who are passionate about building a better language that lets us build better software. Don't write us off, convince us; and if you can't convince us, consider the possibility that you might have missed something, and be willing to reconsider your own position.

@jaredpar So you consider a "success" to be an application where the only way to tell what went wrong is to look in Windows error logs? In my business we consider that to be a poorly written application that is difficult to debug. We use actual log files and developer-friendly error reporting. Could you give any examples of "real" applications that benefited from such a system?

Although I think this is a bad idea even for everyday applications- it's especially ludicrous when you consider mission critical systems. I can just imagine someone trying to tell their boss their trading system crashed and they've been off the market for an hour because of this great new "feature" some academic in Redmond decided to add to the language. Or how about the doctor telling his dead patient's relatives that their pacemaker stopped working because someone asserted February had 28 days? There is an endless list of examples that you could give as to why this is such a bad idea.

Also- yes the community is more than the 10 people who contribute on Github, but the people who do contribute there are C# enthusiasts who likely have a higher-than-normal tolerance for new language features. If we all think it's a bad idea I can only imagine what the average Joe developer would think. It's also incredibly insulting for you to be so dismissive of the community's opinions- that's pretty much the opposite of what open source development is supposed to be about.

Bottom line- if this feature made it into the language as proposed I would use (or write myself) an analyzer to ensure that it is not used anywhere in our codebase. We're not writing academic software. We need to handle exceptional conditions with thorough logging and error reporting.

@mirhagk

I think this is the important thing. From a general perspective it shouldn't be too difficult to switch the compiler from Environment.FailFast to throwing exceptions. I think that it should be implemented as Environment.FailFast and given to the community to play with. If the community at large has an issue with it then it can be changed before it's actually released.

That's fair. Drop a build, post a feature list explicitly laying out the behavior in MSDN Magazine (especially mentioning the fail-fast behavior), and let the flame-fest fly. I do think that the gut reaction of the vast majority of developers will be the same as mine (and seemingly everyone else on this thread thus far). If I'm wrong then I will gladly shut up and let the community have what it wants without objection.

I do think that the gut reaction of the vast majority of developers will be the same as mine (and seemingly everyone else on this thread thus far). If I'm wrong then I will gladly shut up and let the community have what it wants without objection.

I 100% agree. But I also think that it's not enough to assume that we (github followers of the repo) are the typical developer community for .NET or to assume what they will want based on what we want. I know that I stand out very much at my workplace, and while I can predict my workplace's reaction to this feature using failfast (we wouldn't use contracts in that case) I can't predict those outside my workplace.

I think the tricky part is getting their feedback. Most C# developers aren't even aware of C# 6 changes yet (and the majority I've seen don't know how to use C# 5 features effectively). It's going to be tricky getting their feedback (not just for this issue either, in general any new feature). One way is to put something on uservoice and share the link on popular programming communities (like reddit and hackernews), another is to simply create the feature for the preview, and get some prominent C# authors to blog about it. Those will make their way to things like The Morning Brew, and ultimately gather a lot of the community.

@eatdrinksleepcode

How do you do error logging? Watson is simply not sufficient; the vast majority of .NET applications don't use it because of the limited and arcane information it provides, and the difficulty in accessing that information. EVERY .NET application I have ever seen produces its own error log.

I disagree with the classification of Watson. I've used it to track down countless bugs over the years. Yes it is arcane and cryptic but it's also extremely useful.

As for logging specifically the only item in question would be logging of the actual failure. Everything up to that point would proceed unaltered.

Variants of the design in the past have discussed having a terminate routine. Essentially a callback that runs on a contract failure which allows for items like more detailed logging, emails, etc ... to occur. There are some gory details around this function but I've always felt it was a good compromise.

Is it appropriate to take down a production web server serving millions of users around the world because of an innocuous logic bug in a corner case for one particular user?

Is it appropriate to reveal the personal information of millions of users around the world because the server failed to shut down when it detected a programming bug?

What happens when a unit test violates a contract? Crash the unit test runner?

Unit testing is definitely the trickiest aspect to handle. There are a couple of strategies that I've seen tried here.

One is to simply isolate contract tests on a per process basis. This works, requires no compromises to the model but is impractical for real world use. Especially on Windows where processes tend to be a bit heavier weight.

Another is to leverage the idea of the terminate routine, as with logging. The test framework adds a helper in the flavor of Assert.FailsPrecondition(Action action). Under the hood it manipulates the terminate routine to set a flag noting it was hit and throw an exception that the implementation of FailsPrecondition catches.

Allowing the terminate routine to continue execution by throwing or setting a flag does make me squirm in my seat a bit. But it's a practical compromise for a couple of scenarios of which unit testing is the most prominent.
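To sketch that idea (Contract.SetFailureHandler and ContractFailedException below are hypothetical stand-ins for the terminate routine being discussed; no such APIs exist today):

``` C#
using System;

// Sketch only: Contract.SetFailureHandler and ContractFailedException are
// hypothetical stand-ins for the terminate-routine hook described above.
public static class ContractAssert
{
    private sealed class ContractFailedException : Exception { }

    public static void FailsPrecondition(Action action)
    {
        bool failed = false;

        // Hypothetical hook: instead of calling Environment.FailFast, record the
        // failure and throw so the test runner survives.
        Contract.SetFailureHandler(() =>
        {
            failed = true;
            throw new ContractFailedException();
        });

        try
        {
            action();
        }
        catch (ContractFailedException)
        {
            // Expected: the precondition fired and was converted into an exception.
        }
        finally
        {
            Contract.SetFailureHandler(null); // restore the default fail-fast behavior
        }

        if (!failed)
            throw new Exception("Expected a precondition failure, but none occurred.");
    }
}
```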

If a programming error should always instantly crash the process, why does every other error condition throughout .NET throw an exception? Why does NullReferenceException even exist: shouldn't the process just die?

I agree these are very much analogous to contract failures: predictable conditions which when triggered represent coding errors in the program.

Should these failures trigger fail fast? In my opinion yes. I've never seen a case where I wanted a program to continue executing after an NRE and seen many cases where the continued execution after the NRE led to much worse and harder-to-track-down bugs. But this is a runtime decision.

I do understand that developers want to have an option of throwing here because it's the way .NET typically handles this situation. This is ultimately why I think the option of else throw will be a part of the design (if contracts are taken at all). It is a nice unification and back compat story for the feature.

Frankly, it is extremely condescending to continually assert that if everyone would just try it your way, they would see the light and realize that you were correct all along.

I don't mean that to be condescending and I apologize that my writing is coming off that way. I mean that as a genuine piece of advice / recommendation. As someone who was very skeptical of it at first, my advice to others is to try it.

I am a smart person, and there are a lot of other people who are smarter than I am who are passionate about building a better language that lets us build better software. Don't write us off, convince us; and if you can't convince us, consider the possibility that you might have missed something, and be willing to reconsider your own position.

I'm definitely not writing anyone off. Unfortunately though I'm not sure what to do in terms of convincing others except to ask them to try it.

I started out as a skeptic of fail fast in the beginning and at the time I'm not sure what anyone could have said to convince me it was a good idea. I was junior enough at the time that I didn't really factor into the decision process for the team. It was more or less a decision that was forced on me. Had it not been I may not have ever embraced it the way I have.

@jaredpar I have no opinion one way or the other at this point. I do think in many scenarios failing fast is the right thing to do, as such a failure clearly points to developer error.

However... there's gotta be a compromise in how to figure that out. I have to agree with some of the arguments made by other people in that sometimes what those of us inside of corp use for troubleshooting is not necessarily what our customers out there use (or are in a position to use). DrWatson is fantastic, but I've only met a couple of people that even knew it existed.

Crash dump analysis is fantastic and I thoroughly enjoy it, but it's a black art as far as a huge chunk of our user base is concerned. And unfortunately, I see enough applications throwing random ArgumentExceptions in production to know that this _would_ happen with contracts just as well :(

Let's discuss the approach of Environment.FailFast but with calling some sort of logging callback. Should this be an event that is raised with code that wants to do something subscribing to the event? Should the event allow transforming the call into an exception instead? (My opinion on these is yes for both, to enable the discussed scenarios).

I think the event should allow transformation, but I also think this should produce a warning. There are circumstances where this is necessary, but in general the correct practice is to log the error and exit the application. The exceptions I can think of are things like testing frameworks and web servers, where you'd want to kill the code that's running, but not kill the entire process (i.e. kill the incoming request, refuse to respond, but continue allowing requests for the rest of the site that doesn't have the bug).

@tomasr

I agree that compromise is necessary and the most likely path for this feature to be accepted. The summary of where I think this should go is:

  • requires <expr>: triggers terminate routine on contract failure
  • requires <expr> else throw <expr>: triggers the throw clause on contract failure
  • Contract.Failed: event / callback which is invoked on failures of the first contract variety. Default behavior is to use Environment.FailFast. Can be overridden to cancel process termination.
  • ensures variety for both of the above requires

Note: the name of Contract.Failed would almost certainly be different than that. I used it as a place holder for the discussion, not an actual suggestion of the name.
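To illustrate the summary above (syntax and the Contract.Failed name are placeholders only; Enqueue and its members are made-up names):

``` C#
private readonly List<T> _items = new List<T>();
public int Count { get { return _items.Count; } }

public void Enqueue(T item, int priority)
    requires priority >= 0                                                     // plain form: a failure invokes the Contract.Failed handler (default: fail fast)
    requires item != null else throw new ArgumentNullException(nameof(item))   // 'else throw' form: a failure throws the specified exception
    ensures Count > 0                                                          // postconditions get the same two variants
{
    _items.Add(item);
}

// Hypothetical handler registration (placeholder name, per the note above):
// Contract.Failed += args => logger.Error("Contract failed: " + args.Condition);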

@jaredpar that sounds pretty reasonable to me. My only concern is having to deal with yet another "unhandled error, process will terminate" event (mostly that developers will need to be aware of it and remember to hook it up, which almost represents a silent behavior change for ported code), and I'm not entirely sure how that would play with platform-specific termination routines (say, ASP.NET).

@jaredpar Sounds reasonable to me too.

Another random thought: it would be nice to have the ability to evaluate a method's contract against a set of arguments. Something like Contract.Test(MyMethodInfo, params): boolean.

Edit: A question there would be the behavior of Contract.Failed if requires <expr> else throw <expr> or ensures is used. Would/should they just locally override the Contract.Failed behavior?

@tomasr I think it would kind of fit into the pay to play mentality. If you want to use contracts, you have to recognize all the cases that comes with that.

@RichiCoder1 don't disagree but the problem is that the one paying might not be the one choosing to play in the first place :)

@jaredpar I think that giving the end user (us, the developers) the ability to choose is a good compromise. I prefer the exception behavior over fail fast, and I appreciate the ability to choose what happens when a contract is violated.

@RichiCoder1

You actually hit on a really important scenario there and it does need to be added. The scenarios for it are a bit more corner case (client server architectures in particular) but once you get into that scenario it's a must have.

In terms of HTTP application input validation, please take a look at my blog post: Code contracts vs input validation

@jaredpar
1) I think you should leave us (developers) the right to shoot ourselves in the foot. If we want to throw an exception - let it be. It would be nice to have a choice between exceptions and Environment.FailFast.

2) It seems like Contract.Failed event makes requires <expr> else throw <expr> syntax redundant in most cases. Do you think there will be cases where using different precondition behaviors (i.e. requires <expr> else throw <expr> and requires <expr>) would be justified? I think they will be used either for throwing exceptions or crashing the app, not both. Also, using different syntax complicates switching between these behaviors.

3) It would be nice if preconditions were able to call pure methods for more complex validation:

public void DoSmth(int param)
  requires IsValid(param)
{ ... }

@vkhorikov I agree that there is a distinct difference between code contracts and input validation and in a wholly closed system the former absolutely represents a bug. However most systems aren't fully closed, they are built from many disparate components.

As a library maintainer I basically need the equivalent of input validation but at the method level. I don't want to fail-fast because frankly that's not my decision to make. If that library is used in a web service or the like a fail-fast will bring down the application pool, immediately killing every other request in flight and requiring that the application pool be relaunched, which is usually an expensive process. A simple error would result in a very effective DoS vector. Worse, 99.999% of the web site developers who would face this issue would have no clue how to deal with it. Simply put, "my library" would be deemed flaky and people would stop using it until I got rid of this "fragile" validation mechanism.

I want code contracts. I want simplified argument validation. I want metadata generated from that validation that can be used by analyzers to catch potential invalid calls in the IDE, long before that invalid code makes it into production. I don't want argument validation that actually requires more code than the simple if statements that we write today, or the helper methods that everyone under the sun has written. I don't want to have to waste my time trying to convince other developers that the dark art of code dump debugging is the appropriate way to troubleshoot invalid arguments. It's difficult enough to get them to read the logs that IIS so thoughtfully already generates when these exceptions go unhandled.

The precedent has long since been set in previous languages and throughout the BCL. Runtime and validation exceptions exist for a reason and they are the appropriate route to go. If there is all of this concern about "swallowed exceptions" then the C# team should just make it impossible to catch any form of runtime exception because graceful degradation is clearly a horrific idea. And yes, I will remain pretty vehement about it, especially in the absolute absence of a justification, because this is a feature that I've wanted for years, but not in this esoteric and useless form.

@HaloFour
I totally agree with all you have said. Your arguments are actually the reason why I, too, am asking @jaredpar to preserve our right to choose between throwing exceptions and crashing the app. I would also add one more: unit tests. If the app crashes every time a contract is broken, it will be extremely hard to test them. Whether to crash the app or swallow the exception should be decided by the app developers, not by .NET architects.

Nevertheless, what you are writing about in terms of the library you are maintaining _is_ a contract. Your point is that you don't want the client app to crash and it is a totally valid point because, again, it should be our choice, not .Net architects'.

We use exceptions for both contract violations and other exceptional situations, but not all exceptions signal a bug. Contract violation exceptions should be treated differently. While "regular" exceptional situations (like connectivity issues or I/O problems) can be easily ignored by developers after the cause has been removed, contract violation exceptions should be carefully investigated and the code generating them fixed.

I think that the biggest problem here is a disconnect between what @jaredpar thinks of a code contract and what others think of a code contract.

My understanding of @jaredpar 's view of code contracts is that it's an extension of the compiler (although he states that they should be proven by other tools). Thinking of all contract types beyond parameter validation, I'd have to agree (although I think that, if it's in the language, it should be verified by the compiler).

My understanding of the everyone else's view is that it's just function parameter validation.

Another disconnect I see is the difference between "I (the function) can deal with your bad input and throw it in your face" and @jaredpar 's "I the system ...".

@jaredpar 's writing does come across as a bit academic, and that triggered a lot of "I, in the real world" reactions.

Although I've written applications that tried to survive whatever errors I introduced in it, because it was the requirement that they do (keep working, not having errors), they weren't dealing with any important data. When dealing with important data, I have chosen to fail fast many times.

I consider @jaredpar 's last proposal very reasonable and a good working draft of what the final version should be.

I think we absolutely have a disconnect. The original proposal seems to have evolved from what was done in MSR involving Code Contracts which was more of an academic process and the idea there was that the appropriate course of action was to fail fast. Of course what shipped as Code Contracts in Visual Studio doesn't behave that way, it still throws exceptions unless you go out of your way to wire up the ContractFailed event to manually fail fast.

Thing is, I don't think that it's appropriate for the actual code contract to define what happens when it is deemed invalid. Doing so removes that choice from whomever and whatever is consuming that code. I think the contract should just describe the rules and nothing more. I think that the application itself should then define what the behavior should be for a failed contract. I'm even fine if fail-fast is the default behavior, at least of release build applications. But I think that any application should be able to change that behavior to something between fail-fast and throwing exceptions, as well as have different options for logging the validation. And I think that if Microsoft is about to unleash something that will catastrophically detonate people's processes that they better both ready a whole suite of "Core Dump Troubleshooting for Dummies" tools as well as prepare the Microsoft support team to steel themselves for the onslaught of support requests to figure out why this framework is so buggy and fragile.

When I think of a feature like "code contracts" my mind immediately jumps to argument validation. Why? It's an insanely common problem and one that developers frequently shy away from because it's tedious and boring to write a bunch of manual checks. Like many others I have my own library, which I rolled long ago, which rivals even Code Contracts in terms of how succinct it is to use. What my library doesn't do, and the one thing that I severely want it to be able to do, is to tie into a larger static analysis framework to provide better compiler support for detecting when an invalid call might be made so that these types of issues are caught earlier. It's the one aspect of this proposal which is painfully absent.
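For the sake of illustration, a hand-rolled guard helper of the sort being described looks something like this (a rough sketch, not the actual library in question):

``` C#
using System;

// Illustrative only; every team seems to have its own variant of this.
public static class Guard
{
    public static T NotNull<T>(T value, string paramName) where T : class
    {
        if (value == null)
            throw new ArgumentNullException(paramName);
        return value;
    }

    public static int InRange(int value, int min, int max, string paramName)
    {
        if (value < min || value > max)
            throw new ArgumentOutOfRangeException(paramName, value,
                string.Format("Expected a value between {0} and {1}.", min, max));
        return value;
    }
}

// Usage at the top of a method:
// _items = Guard.NotNull(items, nameof(items));
// _index = Guard.InRange(index, 0, Count, nameof(index));
```

What none of these helpers can do on their own is feed metadata to the compiler or analyzers, which is the missing piece described above.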

I'll throw another scenario out there: SQLCLR. I actually have written and maintain a number of SQLCLR libraries. Thankfully the host won't permit failing fast. An assembly that tried to use code contracts would probably be considered unloadable.

@HaloFour, once again, there's a big difference between "my system" and "my code/library".

I agree that, in most cases, fail fast is the right way to go. I would also argue that an SQLCLR library should be one of those since it's touching my data.

I'm also living in the real world, which means I have to take a lot of crap. But that doesn't mean that I agree with it or that I'll stop fighting it.

For those that ultimately disagree with @jaredpar, I sincerely hope that you'll never be in a position of "Jared was right!".

@jaredpar, I think your last proposal is more realistic without straying away from the correct course of action. But I do think the compiler should do something other than generating code. Admittedly, more complex theorem provers might be needed, but the compiler should provide a first level of validation.

@jaredpar And now for something completely different - your proposal seems to assume that integrating with System.Diagnostics.Contracts is not desirable. However, I'd like to raise the concern that it will be quite confusing (and probably surprising) to have two _different_ variants of code contracts on the same platform.

I'd therefore propose that you reconsider reusing Contract.Requires and Contract.Ensures (i.e., have the C# compiler generate calls to those methods) and either

  • change the default behavior of those methods to match the desired behavior of C# code contracts (breaking change), or
  • add overloads/new methods to the Contract class that implement the new desired behavior of C# code contracts (a rough lowering is sketched below).
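For the second option, the lowering might look roughly like this (a sketch only; Withdraw is a made-up example and the actual generated shape would of course be up to the compiler team):

``` C#
// Source under this proposal:
//     public void Withdraw(decimal amount)
//         requires amount > 0
//
// One possible compiler lowering, reusing (or extending) the existing contract type:
public void Withdraw(decimal amount)
{
    // Either the existing method with today's semantics...
    System.Diagnostics.Contracts.Contract.Requires(amount > 0);
    // ...or a new overload/method carrying the fail-fast semantics proposed here,
    // e.g. Contract.RequiresFailFast(amount > 0);  (name purely illustrative)

    // method body...
}
```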

@paulomorgado

I would also argue that an SQLCLR library should be one of those since it's touching my data.

So you think that the entire SQL Server service process should immediately die because someone passed a NULL to a stored procedure parameter that shouldn't be permitted to accept it? That all other databases in use should be immediately taken offline? That all operations in flight discarded and databases in flux marked suspect and require a potentially lengthy recovery operation? That's ridiculous. Fortunately the SQLCLR team made such calls impossible, and for that very reason.

@HaloFour
I don't think contracts are the same as parameter checking. For parameter checking, one already has the canonical if (param == null) throw new ArgumentNullException().
I would see contracts as a kind of guarantee. If you are sure that your method cannot possibly be called with param == null, you add a contract. If you've got a public method, the client is allowed to call it with whatever arguments he wants. But if you control the method visibility (say, the method is internal), and therefore control all calls to it, then you can be _sure_ that your method is called with non-null arguments (and sometimes the compiler even can prove that).
So your example doesn't apply here.
(And I am a strong proponent of not suppressing bugs but crashing as soon as possible, otherwise the program is anyway in an inconsistent state which can result in a massive damage.)

@vladd In which case we would be talking about two different features. However, that would still lead to lots of confusion as practically the first thing any code contract library mentions, including that from Microsoft Research, is argument validation and API documentation. Even purely for internal purposes fail-fast is not the precedent (not even by the existing facilities provided by Microsoft) and won't be the expected behavior.

The canonical argument validation method only covers the simplest of cases. Even in your example you're not providing enough information and should be passing nameof(param) to ArgumentNullException. Range checks and other simple forms of argument validation have no built-in facilities in the BCL meaning that short of rolling your own you'll get no localization of error messages. And of course none of this feeds any form of metadata that can be used to automatically document the method nor provide any form of static analysis.
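To put that concretely, even the simple cases spelled out by hand look something like this today (Process, name, and count are just illustrative names):

``` C#
public void Process(string name, int count)
{
    if (name == null)
        throw new ArgumentNullException(nameof(name));
    if (count < 0)
        throw new ArgumentOutOfRangeException(nameof(count), count, "count must be non-negative.");

    // ... actual work ...
}
```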

Anyway I'm about done flogging this horse. I agree that this needs wider discussion as we're dealing with much too small of a sample set on these forums. When something about C# 7.0 eventually hits MSDN Magazine I hope it contains, in no uncertain terms, the exact behavior to be expected and the rationale behind it.

@fschmied I could be wrong, but I believe the intention would be to simply deprecate System.Diagnostics.Contracts.

@HaloFour @vladd I agree that SQLCLR should fail fast if a contract is broken, because if it doesn't, that puts your data at risk. But that again goes back to Contracts vs. Validation and sanitation. I however agree that the behavior of this feature, and any other features that evolve out of this, should be shouted to the high heavens to ensure (sorry) that people know exactly what they're getting. I also agree the debugging and testing story should definitely be very thoroughly thought out.

@RichiCoder1 The operation should fail-fast. The process of the database server? No way. You'd have a lot of _really_ pissed off DBAs if that were the case. But it's kind of moot because I'm pretty sure that SQL Server will not load any assemblies that use Environment.FailFast.

This is just another example of user assemblies loaded into larger infrastructures where the behavior of contract validation failures should be controlled by an external actor and not decided strictly by the compiler. If the implementation would rely on a potential event allowing for said infrastructural code to handle the failure its own way (e.g. log, tear down AppDomain, etc.) as mentioned by @jaredpar then I'm fine with it.

@paulomorgado

But I do think the compiler should do something other than generating code. Admittedly, more complex theorem provers might be needed, but the compiler should provide a first level of validation.

I think the correct role of the compiler here is to enable theorem provers to operate on contracts vs. being a theorem prover itself.

Theorem proving is a very difficult problem space. The solutions are constantly evolving and are often very computationally expensive even for seemingly simple code snippets. The execution time for the proof can quickly exceed what is considered reasonable for the code being compiled.

The rapidly evolving nature of the algorithms and potential bad turn around time make it unsuitable for a "in the box" compiler feature IMO. Instead I think the compiler should generate contracts in such a way that it is recoverable by theorem provers after the fact. Essentially move it into a post compilation processing step on a more moderated basis where turn around time is less critical (nightly, weekly, etc ...).

@HaloFour
After some more thinking on your example: any process should allow only trusted code to run within itself. For a DB process, catching exceptions while running malfunctioning code isn't a way out of problems. An ill-behaving code might as well deadlock, block resources, never return or whatever.
If the code is seen as a foreign, potentially malfunctioning one, it must be run in a separate AppDomain, and torn down as a whole if something goes wrong. If the code is trusted, then failing to fulfil the contract is a bug and must be detected and fixed ASAP. Catching a contract failure is IMHO something like catching IndexOutOfRangeException.

@vladd Agreed. Brings in the whole idea that if there's a risk that outside data could cause a contract failure, it should be isolated so it and everything it has touched collapses without affecting core functions. So a query that could cause database corruption would just sink the processing domain rather than the entire database. Again, this assumes Contracts are a hard-stop mechanism rather than validation, though.

@vladd , @RichiCoder1

Exactly. My _opinion_ is that the behavior as to how to handle a contract failure belongs to whatever is managing that AppDomain. Using AppDomain isolation for untrusted code is appropriate, but that's still code within the same operating system process and Environment.FailFast will tear down all of those AppDomains immediately. In the case of SQLCLR it goes even further as the SQL Server process runs its own CLR host which severely limits what any assemblies are permitted to do within them, but it's still entirely in-process.

CoreCLR doesn't currently have the concept of AppDomains, does it? Was just chatting on gitter with richlander about that. It would certainly be good to enable contracts to be isolated so they still fail fast, but only kill off the data/code that's absolutely in danger. Any experience with this @jaredpar?

@HaloFour

So you think that the entire SQL Server service process should immediately die because someone passed a NULL to a stored procedure parameter that shouldn't be permitted to accept it? That all other databases in use should be immediately taken offline? That all operations in flight discarded and databases in flux marked suspect and require a potentially lengthy recovery operation? That's ridiculous. Fortunately the SQLCLR team made such calls impossible, and for that very reason.

Yes! That's ridiculous!

The hosting environment should implement its own fail fast strategy which, for SQL Server, would never be what you're suggesting I suggested.

@jaredpar

I think the correct role of the compiler here is to enable theorem provers to operate on contracts vs. being a theorem prover itself.

Theorem proving is a very difficult problem space. The solutions are constantly evolving and are often very computationally expensive even for seemingly simple code snippets. The execution time for the proof can quickly exceed what is considered reasonable for the code being compiled.

The rapidly evolving nature of the algorithms and potential bad turn around time make it unsuitable for a "in the box" compiler feature IMO. Instead I think the compiler should generate contracts in such a way that it is recoverable by theorem provers after the fact. Essentially move it into a post compilation processing step on a more moderated basis where turn around time is less critical (nightly, weekly, etc

Sorry! Abuse of language. I meant "the compiler" as that thing that runs when I click Build in Visual Studio.

I would like to be able to use the requires feature to eliminate errors about missing branches.

E.g.

public enum MyEnum { One, Two, Three };

public string GetText (MyEnum myEnum)
  requires defined(myEnum)
{
  switch (myEnum)
  {
    case MyEnum.One: return "Single";
    case MyEnum.Two: return "Pair";
    case MyEnum.Three: return "Triple";
  }
  // No error about missing return statement; all branches are covered.
}

See also https://github.com/dotnet/roslyn/issues/179#issuecomment-77338850 about switch all.

For myself (and I would imagine many others), the value of this feature is about reducing boilerplate code in favor of more declarative parameter validation. In my boilerplate code, failure to satisfy the precondition produces an ArgumentNullException. It would be bizarre if - in a private helper function - I elected to write:

if (foo == null)
{
    Environment.FailFast("foo was null");
}

It is not the decision of my private helper to tear down the entire environment. It is the decision of the private helper to refuse to do work and to surface that refusal through the normal exception mechanism.

An added benefit of declarative parameter validation (in addition to clearer, more concise code) is that those validations can be checked statically and can be reasoned over by tooling. Compiler warnings can notify others on my team when they are doing something that will produce an exception at runtime. However, we would not use contracts if we had to _also_ write the boilerplate exception code, and we could not accept having the environment fail because a single request clipped a single edge case where something was null.
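Put differently, the preference is for something like the following (a sketch only, assuming the else throw variant discussed earlier; Render and Foo are made-up names):

``` C#
// Today's boilerplate in a private helper:
private void Render(Foo foo)
{
    if (foo == null)
        throw new ArgumentNullException(nameof(foo));
    // ...
}

// The hoped-for declarative equivalent (illustrative syntax only):
private void Render(Foo foo)
    requires foo != null else throw new ArgumentNullException(nameof(foo))
{
    // ...
}
```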

I also think code contracts would be used primarily for marking incoming and outgoing values as non-nullable. It would be great if there were a technique for us to apply such validations at a class (assembly?) level so that we won't need to write them everywhere.

In terms of non-nullable reference types, I wrote a blog post about what the current ways are to mitigate the problem: link

This is quite interesting. Besides everything that the discussion already covers, one of the nice side effects of Code Contracts that the current proposal doesn't account for is the removal of a bunch of contracts from production (release) builds (thereby providing almost Debug.Assert-like behavior). Should the language feature be as proposed, there's no way to introduce a Debug-only mode, which is definitely a nice-to-have in many scenarios. Not to mention the fact that such a design would force developers who are looking for Debug.Assert-like functionality to avoid the language feature entirely, thereby creating inconsistencies.

Note: Code contract assertions are technically not exactly the same as Debug assertions, but I only use it as an example for a better understanding of the scenario.

@prasannavl :+1: I hadn't thought much about the Debug.Assert aspect.

Another potential approach is to extend the XML Documentation Comments to have Contract nodes.

@AdamSpeight2008

  1. The XML Document approach stops this from being a language feature, rather a compiler extension, of sorts.
  2. That doesn't feel natural to me. Personally, I'd simply use the existing Code Contracts over XML document nodes any day.

Personally, I feel a better integration of Code Contracts into Roslyn, and its own improvement would make a lot more sense than turning this into a language feature, which seems to complicate the fundamental language quite a bit, while Code Contracts is well thought-out and covers a number of scenarios quite neatly.

If requires and ensures are a part of the language as @stephentoub proposed, what about the rest - complicated interface semantics (also abstract class contract semantics)? How extensible are the conditions in this little language feature block? What if I need to run the predicates through some complicated function? Am I just going to be falling back to the good old ways in that case? It seems to me that this line of thought will create more complications than it solves.

Simply put, drawing a line between the language feature and code is going to create quite a bit of inconsistency, as already seen in some of the comments above suggesting assert. Having it in a library like Code Contracts makes it easily extensible, while also being natural (well, it is after all just plain simple C#).

Would they be supported for properties as well? Given that properties are a shorthand for get/set methods.

This is actually a feature existing in other programming languages like the D language.

You might be able to learn from what the D team already implemented and researched.

But anyway this is a great and compelling feature.
The Contracts methods should include a param name as well.

Personally I welcome this proposal, as it allows visually separating expectations from the code which actually does anything, and ideally removes the need for writing the same thing several times (code, documentation).

I vote for a variant which throws an exception and which (something not in the original proposal) allows me to use the condition text as the exception message. The .NET Code Contracts currently use the condition as the default message text. This is quite sufficient for most use cases I have experienced.

What I expect from this feature:

  • A failure to meet the conditions is catchable (so it can be logged and reported as other exceptions).
  • Specifying conditions for most used cases (not null as the most prominent example) is as easy/short to write (but still easy to notice/understand and consistent).
  • An ability to specify a little bit complex conditions (allowed values of one parameter depends on a value of another parameter).
  • The order in which the conditions are evaluated is deterministic (so I can test for not null before I use the parameter value in other conditions).
  • Being able to call a reasonable set of methods in the conditions (at least pure methods like String.IsNullOrEmpty).
  • The conditions are accessible by reflection and creatable by CodeDom (or another applicable meaning).
  • The conditions are automatically propagated into documentation (so I no longer need to specify them in tags).
  • Conditions can be specified in the interface syntax itself (no longer need of ContractClassAttribute).
  • Conditions specified in an underlying class/interface automatically propagate to the derived class (including repeating them in the documentation of the derived class/method).
  • There should be no significant difference in end-user behavior between using conditions and checking in the method body, as this is basically syntactic sugar with some advanced benefits for tooling.
  • If a calling code is statically found not meeting the conditions, it results in a warning but the code compiles. For quick demos, prototypes or a code in a middle of refactoring/implementation is generally ok to violate the conditions.
  • Override methods could not strengthen conditions defined on the base class/interface.

From the tooling point of view, I expect:

  • IDE provides a quick way to see a combined list of conditions applicable to a method (at a call site as well as at the definition).
  • Highlights possible violations.
  • Provides code completion and/or templates for the most common checks (not null, greater than 0).
  • Statically analyzes the code, provides warnings on possible violations, and offers options to add checks before calling or other quick fixes.
  • Ensures that the conditions do not result in an ArgumentNullException (e.g. a condition evaluating a member of a passed object before ensuring it is not null), in a similar way as it ensures the same in a method body.

Similarly for conditions on properties, fields and other places it might be applicable.

I really love this idea. It's safer, less verbose, more self-documenting and more hassle-free than TDD.

This is growing to be nicer. I think there has been enough elaboration on what the expectations are from the code perspective. But what's expected of the compiler should also be listed:

Expectations from the compiler:

  1. The compiler should identify areas where checking is redundant, and strip such code directly from the IL during compilation wherever it can predict with certainty, leaving the JIT compiler with less work to do (excluding scenarios the compiler cannot detect, like reflection). (Without this I'm going to start hating the overhead of using this so much that I'd just stick to Code Contracts.)

To add to what @prasannavl said, it is very easy with Code Contracts to actually reduce the amount of checks at run-time for internal code as Contract.Requires and .Ensures can be omitted from release builds but still offer safety benefits through the use of static analysis. This is important to me and I really don't want to lose this ability.

For my uses of such a feature in the language I would like to have the option for:

  • requires expression else throw new Exception() just translates to if (!expression) throw while outputting metadata for static analysis
  • requires expression should throw some exception in Debug builds and be omitted from release builds, outputting metadata for both cases
  • ensures expression should always output metadata but should not be enforced in release builds

Note that when I say Debug or Release builds I really mean any specific build that could be configured by the developer, just like with Code Contracts.
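For reference, roughly the same split can be approximated today with conditional compilation (a sketch; ContractHelper is an illustrative helper, not an existing API):

``` C#
using System;
using System.Diagnostics;

// Illustrative helper only, not an existing API.
public static class ContractHelper
{
    // Always enforced, in every build configuration (the 'else throw' case).
    public static void Requires(bool condition, Func<Exception> onFailure)
    {
        if (!condition)
            throw onFailure();
    }

    // Enforced only in builds that define DEBUG; call sites are removed otherwise.
    [Conditional("DEBUG")]
    public static void RequiresDebugOnly(bool condition, string message)
    {
        if (!condition)
            Debug.Fail(message);
    }
}
```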

If you want fail fast, just don't catch the exception. It will bubble up to the top and crash the process for you. There is no need to put a huge burden on everyone else for what you get for free.

Running each unit test in its own process is... well to put it bluntly just plain stupid. Even if we could convince all of the unit test runners to change to that model the performance penalty would be ridiculous.

How long does it take to start a new process vs execute a single function call? 100 times? 1,000 times? Even if we say it is only one order of magnitude that means my 2 minute test run is going to take 20 minutes.

I've never seen a case where I wanted a program to continue executing after an NRE and seen many cases where the continued execution after the NRE led to much worse and harder-to-track-down bugs.

I don't believe you.

Unless you are using VB with On Error Resume Next or mindlessly wrapping every line in a try-catch block, there is no way for the program to continue to execute in an unpredictable fashion.

Assuming that you don't screw up the error logging, null reference exceptions are the easiest bug to track down. You know exactly what line the error occurred on so it is just a matter of checking the values on that line for a null.

Your argument on this point seems to boil down to "I don't know how to write a catch block".

@Grauenwolf

Unless you are using VB with On Error Resume Next or mindlessly wrapping every line in a try-catch block, there is no way for the program to continue to execute in an unpredictable fashion.

The existence of the NRE means the program is executing in a fashion not intended by the programmer. Unless of course they wrote a method with the explicit intent of "hey I'm going to throw NREs here by design". In that case sure the program is executing as designed. But I've never come across such a method.

Assuming that you don't screw up the error logging, null reference exceptions are the easiest bug to track down. You know exactly what line the error occurred on so it is just a matter of checking the values on that line for a null.

This is arguing against a point I never made. My argument is about whether or not I want a program to continue functioning after an NRE occurs, not whether or not it's easy to debug one.

But sure tracking down the source of any exception is really easy with a proper stack trace. Fixing the exception though depends a lot on the circumstances of the code in question. Did the value come from a DB call, is it involved in multi-threaded access, is it used inside a static initialization cycle, etc ...

I don't believe you.

Okay.

I absolutely love this proposal. I see immediate benefit in a couple of ways:

  1. Callers know exactly what is being checked, thus removing a layer of abstract/tribal knowledge.
  2. The compiler knows exactly what is being checked, so it can warn developers when they are setting up a potential failure condition without knowing it (the method requires x > 1 && x < 2 and you have not checked x before calling).
  3. The compiler can know that a requirement will always be met and therefore can omit the check at the call site, improving overall efficiency while writing good code. No longer will good code compete with performance.

LOVE IT!

I like having the C# Compiler take this on as part of the compilation process. But I strongly disagree that fail-fast behavior should be the default. @TedDriggs explained why very clearly. Instead, I would recommend that the default failure is to throw an exception. I would also like to see at least a formal ContractException, and optionally descendants PreconditionException, PostConditionException. And provide compiler options like OnFailure = [throw-exception, fail-fast] so designers can choose the desired behavior on failed contracts, and not be forced into a model by a language designer.
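For illustration, the suggested exception hierarchy would be something along these lines (none of these types exist in the BCL today; names follow the suggestion above):

``` C#
using System;

public class ContractException : Exception
{
    public ContractException(string message) : base(message) { }
}

public sealed class PreconditionException : ContractException
{
    public PreconditionException(string message) : base(message) { }
}

public sealed class PostConditionException : ContractException
{
    public PostConditionException(string message) : base(message) { }
}
```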

Fail-fast is ridiculous.

Here is a simple application: it is a designer for a paper airplane competition. It's a client-side application, and persistence is done by contestants submitting their work to your servers. To do this they need to fill out a form. First Name, Last Name and E-mail are required. However, the UI developer forgot to add required validation on the e-mail, but the back end has a precondition that e-mail cannot be null or empty.

Thousands of users try to post their work, but they just want to get it out there, or don't feel comfortable supplying their email, so they omit it. What happens is that the users will lose all their unsaved progress (since persistence is done by submitting their work) and the program will just crash. Before you even get at the issue it has caused severe, user hemorrhaging damage for your application.

If the precondition just failed with an exception this would at least show an error to the user. It is also easily logged, and can even be automatically submitted. Your application is in a state that is unrecoverable for you. No matter what you, or the user does, they _will_ lose their work.

edit: You can of course argue that it was other design issues in this application that lead it to its predicament, but in the end it was the fail-fast handling of the design errors that actually caused the most damage.

You cannot specify fail-fast on a case-to-case basis either because it doesn't matter to the callee; it only matters to the caller, so exceptions cannot be opt-in. Fail-fast can be implemented from exceptions, but not the other way around.

edit2: It also causes havoc in multi-user applications.

@GeirGrusom You cannot have contracts on methods that bind arbitrary client-submitted data. As always, when a client is free to send data in a free-form transport method, you will have to validate the data before passing it on into your system. Let's take the example of an ASP.NET MVC site with a SOA architecture. You may put business logic into the domain models and services, and use these from the controllers.

Your v1 of the application has been published, the domain models and services have contracts, and the entire system is satisfied. Now a requirement changes, and you require an additional field of data with certain guarantees. You add the contract to the business layer, compile, and the compilation fails. Why? Because your guard code on the client-submitted data did not check the contract prior to passing the value binding result into the services. This is exactly how you want this to work; compilation essentially "fails" because your system is in a bad state. It _can_ go wrong, and that's just wrong.

This is the essence of a contract, to guarantee your own systems sanity. Arbitrary client input _always_ has to be validated. _Always_. There is never a way around that. Contracts are great and powerful tools but, as always, if you use tools in the wrong way, they're not very helpful at all.

@GeirGrusom, (since you brought up the term) your reasoning about compile-time contracts is ridiculous.

What is being presented is not about input validation. It's about contract validation.

You can't bind a call to x.M(int) if x's type only has a M(string) method.

The compiler will fail fast. You might argue that the compiler doesn't fail fast if you think the purpose of the compiler is to list errors on the source code. I happen to think that the compiler's purpose is to produce an executable. So, if it doesn't, the task failed.

The reason behind the proposal is that you now can say that there's a M(string) where the parameter cannot be null or be an empty string. The perfect compiler would not compile if it could not prove that the parameter meets the contract.

It's not about input validation or transient faults. It's just that it's not the program you intended to write.

Now, that looks very convenient to handle input validation and transient faults. It kind of looks ridiculous that you have to write something like this:

void M(string x)
    requires x != null
{
    Guard.NotEmpty(x);

    ...
}

But that's probably the result of having bad callers or being extra-defensive (I am!).

I don't have a firm opinion about this. I tend to agree with the proposal and not allow the specification of custom exceptions to be thrown on each condition validation but I don't trust myself to write perfect programs.

I would favor a global event handler to override the behavior and a configuration for fail/throw.

@Deathspike - iow, for a bit of clarity, you won't get a runtime error (and lost data) from this, it results in compile errors only.

@GeirGrusom Note that you'd probably end up with exceptions on the server code anyways, just for dealing with deserialization and internal object creation... and it would still cause the submitting client to lose their data, if they didn't catch the returned exception. It still counts as IO, because you have no guarantee the submitter isn't intercepting packets or something.

@paulomorgado either the code fails at compile time, or it fails at run time. In either case an exception tells you a lot more about what went wrong than killing off the entire app domain will ever do. It also causes code that does not have an exception handler to actually fail fast; exceptions just don't take the kitchen sink with them as well.

So that we are perfectly clear: during compilation, you think it is OK that if a contract is violated, the process should: Keep finally blocks from running, keep finalizers from running, kill the entire process, write to the system event log and send an error report to Microsoft? Because that is what Environment.FailFast does.

@GeirGrusom Slight modification:

Thousands of users try to post their work, but one of them just wants to get it out there, or doesn't feel comfortable supplying his/her email, so he/she omits it. What happens is that all of the users will lose all their unsaved progress (since persistence is done by submitting their work) and the program will just crash.

I would've written more tests, but the test runner kept crashing!

There seems to be some confusion around the "fail fast" topic.
The recommendation of the author, @stephentoub, includes two different but related ideas around this:

  1. if the compiler can detect with certainty that a contract would be violated, this would be treated as a compile-time error
  2. most contracts would be detected at runtime and if violated, they would perform Environment.FailFast

I've already opposed item 2, and many have given good examples of why this is a bad idea, so I won't repeat them.

Item 1 would be fine with me. However, as @jaredpar mentions, theorem provers are very difficult, so I don't expect the compiler to catch many contract violations. Compiler writers have a different goal than other tool makers. We (developers) expect them to be rock-solid reliable. So things that are fuzzy are best left to supplemental products or add-ons. Compiler writers could better support theorem provers. But trying to do something as ambitious as what the Code Contracts static analysis does seems an inappropriate capability to include in the compilation process.

To me, a practical first-step would be to include code contracts syntax to generate run-time validation. Just getting that code into the initial compiled object, and eliminating the rewriter is a gain.

I completely agree, and getting code contracts would be a huge gain, but changing default error handling from exceptions to one step south of setting the computer on fire is a feature I would be very reluctant to use.

Not all developers control the machine the code is running on, and no matter what tools we use, bugs are something users always have to deal with, but I don't want to test the users' patience just for the sake of purity. Most users do not report bugs. They either drop what they are doing or find a workaround.

There may be cases where FailFast is acceptable, but I don't buy that it should be the default approach, especially since turning exceptions to FailFast is so easy, but turning FailFast into exceptions is completely impossible.

Imagine processes that must run at all times, for example control processes for heavy machinery that implement safety features. FailFast can mean that the controller loses control over that machine because of third-party libraries, and there is absolutely nothing the programmer can do to avoid it because it immediately terminates the process. Killing the process is not safe at all, because safety is something that needs to be determined by the application, not the compiler or code contracts.

The replies here all amount to "don't make mistakes" but if that is the solution we don't really need code contracts at all. And not all contracts can be asserted in compile time.

@GeirGrusom, +1

I completely agree with the above. The C# compiler is fantastic and helps with a ton of scenarios - But don't ask the compiler to do your dirty laundry for you (for now) - It'll just end up burning your clothes instead, because, hey it's smarter and figured no one will have to do it at all this way.

PS:
Your dirty laundry => Handling the contract exceptions properly.
Burning => Failing fast in this particular scenario.

Failing fast on contracts is not going to produce rock solid code as many seem to expect. It'll just end up with an unreliable compiler which produces code that may or may not crash, despite the user's intention, because of some third-party contract. Simply put, you're going to lose all deterministic value (which is actually quite an unpleasant irony given what we hope to accomplish with static analysis).

@prasannavl, if I recall it correctly (I haven't read the whole thread again), @jaredpar claimed that fail fast would lead to better programs, not better code.

Instead of @GeirGrusom's examples, let's take a better one. I'd prefer that a controller for heavy machinery would stop everything on failure instead of keep going. "what's that carbon-based stuff on the way? that wasn't supposed to be there. I'll light up a red light and keep going."

@paulomorgado

if I recall it correctly (I haven't read the whole thread again), @jaredpar claimed that fail fast would led to better programs, not better code.

I don't share this opinion. Because I think we can safely say that wherever you can establish that the compiler is absolutely sure, the static analyzer goes in for the kill at compile time. Perfect. Now, where do we get into a scenario of runtime analysis?

When it's impossible for the compiler to determine. Rephrasing that: the input for that method is not absolutely deterministic. Now, you can have tons of unit tests to make sure that it is. But it's also a highly likely practical scenario that it is impossible to cover that level of testing with certainty. Just a single one of those scenarios could trigger a contract exception from one of the numerous third-party libraries a program might have.

Now, there's a mechanism in place for such scenarios. What is it? Exceptions! What apt naming. If a couple of those require that everything has to stop, chain them into fail-fast, but otherwise take appropriate measures and go on. It is just a bad program if it isn't able to handle things properly.

I would find it a ridiculously stupid system, if my mighty uber cool space-ship is going to self-destruct and go kaboom just because a faucet knob went loose. I would much rather have the space ship, in all its mightiness, just tell me "Hey look, you may have a problem in your faucet", and shut down all related plumbing limited to that area. So I can send in the space-plumber in the next cycle, and voila! Or whatever. The point is, I can now decide based on its seriousness.

Instead of @GeirGrusom's examples, let's take a better one. I'd prefer that a controller for heavy machinery would stop everything on failure instead of keep going. "what's that carbon-based stuff on the way? that wasn't supposed to be there. I'll light up a red light and keep going."

Now, if there's "carbon-stuff" on the way, I say hey, let's stop the machine. (By handling the exception and failing, or simply let it bubble up). What you're saying is, if carbon stuff is on the way in one unit - Bring down the whole damn city! Into darkness, we shall go, instead of just being able to close the affected area down.

Adding on to the space ship analogy, the more apt thing to do, if the faucet leak is indeed dangerous, would be to interrupt the self-destruct and say, "Hey you know what, no matter how damaged you are, go and land (or at least try to) on this deserted zone". So I can at least salvage what I can of the multi-zillion dollar investment. This is exactly what exceptions let you do.

And fail fast is just "kaboom!". One hell of a faucet leak, really.

The program can't stop the machinery if it crashes. It needs to actually send the shutdown command.


@prasannavl Either way, whether the space ship explodes or attempts to safely land should not be decided by the maker of the faucet.

Unless those that designed the spaceship decided that if the faucet fails, that's what should happen.

But it looks like this is not an option and the only option is "the faucet should not and will not fail in any case. if it does, you're on your own. good luck".

@HaloFour, exactly my point. The user's program is this space ship. Faucets are the third-party libraries. By employing a contract which will fail fast on the third-party library, it implicitly opens up the user's program to the potential of crashing straight away because of a third-party library.

@paulomorgado, again, exactly my point. By making it fail fast, one's completely taking the decision off the spaceship. The spaceship has no option but to fail, since the faucet will fail (this is perfect for static analysis, but not for runtime analysis, since the scenario that makes the contract fail may or may not happen - or rather, when it could happen is non-deterministic), and brings the whole thing down. But if it merely throws the exception, the spaceship has the option to let it bubble up and fail, or to handle it in the way it chooses.

100% agreed about throwing versus failing. Libraries should not decide if an application fails or not unless there's a security concern in play, the app should have the option to manage the failure otherwise. Additionally, development will be frustrating and developers might not understand the condition that caused their app to fail suddenly and without warning.

Maybe a better syntax would use a positive check and spell out the expected action. Sample code:

public int Insert(T item, int index)
    fails (item == null)
    throws ArgumentOutOfRangeException(nameof(index), "Value is out of range.") when (index < 0 || index > Count)
    throws InvalidOperationException("Result was invalid") when (return < 0 || return >= Count)
{
    return InsertCore(item, index);
}

@whoisj

Libraries should not decide if an application fails or not unless there's a security concern in play, the app should have the option to manage the failure otherwise.

Libraries already have this exact level of control. They can validate arguments however they choose and take whatever action they like: throw, fail fast, ignore, etc ... This proposal in no way changes what libraries can do, it only changes the syntax by which they can do it.

@jaredpar I agree, however once syntactic sugar is added to encourage fail-fast behavior, people will gravitate towards that behavior. This adds little to no value to library development, and library development drives language adoption. I honestly see little value in the fail-fast model for requires/ensures.

Instead, I agree with the many above who are asking for an exception-based model that enables the callers to choose, while still enabling libraries to add failure conditions to their method signatures.

@jaredpar libraries that casually fast fail are typically known as "bad libraries that should be avoided". I don't think it'd be wise to encode that behavior into the language.

It's somewhat unclear how fail-fast contracts are superior to exception contracts when exceptions can be caught and optionally fast-failed but not the reverse, as @GeirGrusom pointed out.

I am surprised how much the discussion drags on over such an absurd proposition on a detail. Let's think of the use cases here: method contracts, like most successful features in C#, would benefit from having existing libraries using them. Right now, most checks of this type throw ArgumentException or the like. Changing this would break most clients. Not using method contracts in those parts of the framework would reduce this to an obscure feature used only by the few who happen to follow C# novelties (a minority) and only for new code (another small minority).

I strongly support failing. From my point of view, contracts are a kind of type system extension. So the contract failure should be seen as the error of the same gravity as type mismatch.

The usual method void Print1(string s) makes a strong requirement: whatever comes into it must be a string. Whoever uses this method must supply whatever the compiler can deduce is a string, or otherwise s/he won't get the executable compiled. The same way the declaration void Print2(string s) requires s != null makes a requirement for s to be a non-null string, and whoever uses this method is required to pass a non-null string.

Just as everyone who tries to pass an int to Print1 should not expect this code to work, the same way everyone who tries to pass a null to Print2 should not expect this code to work.
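To make the analogy concrete, here is a minimal sketch using the proposed requires syntax (the call-site behaviour described in the comments is what the argument above implies a sufficiently smart compiler or analyzer would do, not something the proposal guarantees):

``` C#
// Compiles only when the argument is statically known to be a string.
void Print1(string s) { Console.WriteLine(s); }

// Under the proposed syntax: callers are additionally required to pass a non-null string.
void Print2(string s)
    requires s != null
{
    Console.WriteLine(s.Length);
}

Print1(42);    // rejected by the type system today: an int is not a string
Print2(null);  // ideally rejected at compile time; otherwise a contract failure at run time
```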

Anyone who uses the contract should consider the compiler "infinitely wise". If at some place the compiler is able to prove a contract failure, the compiler would be in its full right to fail the compilation. The semantics of the code should not depend on the compiler's level of sophistication, so a contract violation which went undetected at compile time but happened at runtime must be handled as closely to a compilation failure as possible.

Having contracts as shorthands for if (!condition) throw new Something() IMHO makes the feature too weak and doesn't bring anything new to the language.

It does add. It converts the imperative to the declarative, thus allowing good intellisense, self-documentation AND the compiler failing early when it can be sure (in dubious cases the compiler could just show warnings).

@vladd,

Sticking to your very own analogy of the type system extension, the type system is the very definition of static. (Let's get to the object parts in a bit.) So, when it is a static contract, perfect. Everyone's happy, since the compiler fails the build.

With respect to your analogy, what is the equivalent of the runtime analysis? It's using the object type. Now, how exactly does the type system handle object? If you pass in a type, try casting it to something else, and it simply failed fast, what would happen? A ton of things you currently do in .NET would be impossible, and it would be considered a ridiculous choice. Don't you agree?

It simply throws an exception that can be handled appropriately, or allowed to fail if that is indeed the design. This is the exact analogy when it comes to contracts: runtime analysis failing fast will start causing havoc as the code multiplies!

So, when you said that you _strongly_ support fail-fast, you actually implied that you support not to do so, by your example ;)

PS: Forgive me, this is rather turning into a very dry and ridiculous discussion which is losing its productivity, so had to take my chance at a little fun. ;)

Fail fast means that dynamic typing is no longer safe to use. And I have doubts about using the immediate window as well.

And can you imagine how difficult this is going to make building the library? Every mistake in a postcondition becomes an exercise in frustration.


The entire Fail Fast concept feels like this feature was designed for system software development where failing fast is imperative for security reasons. The vast majority of C# code is pure application code, where logging and providing a graceful exit is nearly always ideal over an app just vanishing.

Having the compiler detect guaranteed bad states is a good idea, but it's a nice-to-have. Just like having the compiler detect states where the check need not be applied and skipping them is a good idea, but only a nice-to-have.

I wonder why Microsoft's own Spec# was not mentioned in this discussion. It is well thought through and developed (they had a working compiler), covers all the cases people discuss here, and much more.

Unfortunately, Microsoft has killed this research project in favor of VASTLY INFERIOR Code Contracts.

http://research.microsoft.com/en-us/projects/specsharp/

+1 -- I've been using CC on every project I've worked on for the last few years. I won't code without it. Language integration would be greatly appreciated.

I would also love to see:

  1. assume keyword similar to Contract.Assume, indicating to static checkers that we don't expect an expression to be provable, but it can be taken as a fact.
    void Foo(int value)
    {
        assume _flag > value;
    }
  2. Optional purity checking in members, even if that means strict limitations on syntax. (Although, perhaps the opposite might be better; i.e., all members are checked for purity by default and you must specify impure to cause side-effects; e.g., public impure sealed class Foo.)
    pure void Foo()
    {
        _flag = 1;   // compiler error
    }
  3. Contract abbreviators.
    pure void LessThanCount(int index) requires index < Count;  // no method body declared

    void Foo(int index) requires LessThanCount   // no arg specified; delegate resolution semantics
  4. Iterator block and async method "hoisted" preconditions:
    public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
      requires source != null        // these are evaluated when GetEnumerator is called, not MoveNext
      requires predicate != null

@RxDave

  1. impure would be a breaking change. I'd prefer purity inference, with the ability to set it via a pure keyword.
  2. The abbreviations you proposed look weird. You need argument-list matching in this case; requires xxx treats xxx as a boolean expression.

In addition, I'd like to have inferring requirements from calls:

public int Foo(int a) requires a > 0 { ... }
public void Boo(string a, int b)
  /* option 1 - specifying required call explicitly */
  requires call Foo(b) // b inferred as b > 0 and added to Boo's requirements
{
  /* option 2 - require in the body */
  required Foo(b); // call to Foo and require b > 0
  var result = required(Foo(b)); // in expressions could be weird
  result = required(Foo(b + 1)); // what to do here? require b + 1 > 0 -> b >= 0 ? seems too complicated
  required Foo(result); // ???
}

requires call xxx(args) is similar to @RxDave's abbreviation, but makes more sense to me.

Rewritten code (async, iterator blocks) should have the same semantics as the original code.
That means an async method's result in a postcondition should be of the original type, not Task-wrapped, and should be checked on resolution (as CodeContracts does now).
An iterator's checks should also be lazy and shouldn't force enumeration. For example:

public IEnumerable<int> Foo()
  from i in result requires i > 0  // something like this, it's not a strict proposal
{
  var i = 1000;
  while (true)
     yield return i--; // a static checker, of course, could find the error
}

Foo(); // no check at all, it hasn't been enumerated
Foo().Take(100).ForEach(); // no runtime error
Foo().ForEach(); // now it runs to runtime contract error

CodeContracts doesn't do this now. It enumerates it in order to perform runtime check. I've filed this with no response.

@RxDave The CC addition is a collection of related features including:

  • An API in the form of Contracts
  • An IL / PDB rewriter which lowers the API calls into straight IL
  • A theorem prover which attempts to assert the correctness of contracts

This language proposal is not an attempt to integrate all of those features into the language. Instead it is focused on a declarative and succinct form of argument validation.

A side goal of this feature though is to allow for tools like the CC addition to be more easily constructed. Having declarative contracts removes the need to grovel through IL to infer contracts based on heuristics and instead just hands the contract directly to the consumer.

A side goal of this feature though is to allow for tools like the CC addition to be more easily constructed. Having declarative contracts removes the need to grovel through IL to infer contracts based on heuristics and instead just hands the contract directly to the consumer.

Does this mean the plan is to leave how to handle invalid arguments up to the customer? As in the caller can choose fail-fast vs exception handling?

@whoisj

As in the caller can choose fail-fast vs exception handling?

I added some context to that here. Given the length of this particular thread it's pretty easy to miss (took me a minute or so to find it, and I know what I'm looking for), so I'll restate the key points here.

I think this feature is more likely to end up with the two varieties:

  • Fail fast: requires <condition>
  • Fail with exceptions: requires <condition> else throw <exception>

The reason being that it's a great back compat story. It allows the unification of all of the existing forms of argument validation into one single feature with a more concise and documentable syntax.
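For concreteness, a sketch of what those two varieties might look like side by side, reusing the proposal's running example (Remove/RemoveCore are illustrative names; the `else throw` form is the back-compat variety described above):

``` C#
// Variety 1 - fail fast: a violated precondition abandons the process immediately.
public int Insert(T item, int index)
    requires index >= 0 && index <= Count
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}

// Variety 2 - fail with exceptions: a violated precondition throws the specified
// exception, matching today's argument-validation conventions.
public int Remove(int index)
    requires index >= 0 && index < Count else throw new ArgumentOutOfRangeException(nameof(index))
{
    return RemoveCore(index);
}
```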

Even if this is the eventual outcome I highly encourage you to embrace fail fast. Yes it will cause short term instability in the code base. In the long run though it will make your code better by removing a whole class of silent failures and the strange bugs that result from them.

@jaredpar I'm cool with fail-fast so long as some kind of logging can occur when it happens. Today, the logging is absent. This is a huge problem for long-running, at-scale services, as well as widely deployed client applications which utilize centralized logging for OI with the intent of improving the customer experience.

It would be ideal if there were an _uber exception_ that could be thrown and only caught for processing, but not actually caught to prevent application failure. This way the app, service, or whatever could log OI and enable developers to actually make their products better.

Otherwise, I'll assert that we're going to see an awful lot of push back on this feature.

Has there been any discussion regarding what the contract metadata looks like to callers or outside assemblies? Would it be encoded via attributes, and if so, will there either be a form of language to encode arbitrary conditions or some limitation as to what kind of conditions are supported?

I'm curious as I'd like to be able to publish metadata to enable the same static analysis as would be emitted through this feature without actually using this feature.

@HaloFour There has been no discussion in the LDM of these aspects.

@jaredpar,

Even if this is the eventual outcome I highly encourage you to embrace fail fast. Yes it will cause short term instability in the code base.

I don't think this is just a short term instability - I'm more concerned about the obvious fact that it will become a huge security hole.

Consider the following scenario - Say an assembly (an old one with no knowledge of these language contracts) exposes an interface IPlugin which is used to call methods on external assemblies.

Now, I can intentionally write a contracted implementation with just a requires to crash the entire application. This becomes a security exploit. As I've repeatedly said before, the whole concept of fail fast in contracts is ridiculous except for compile-time static failure. The disadvantages and interim problems (practically speaking it's going to be a very long interim - a couple of years before everyone adapts, if at all) it causes greatly outweigh the advantages you gain from fail-fast contracts at run time.

I'd rather not see a great ecosystem, crippled by the numerous inconsistencies that this creates.

@prasannavl Fail fast is not a concept that is new to method contracts, it is a mechanism that exists today in .Net in numerous forms. If code can fail fast with the code contracts feature then it can fail fast today without it.

Yes, but it is NOT a language feature. That makes all the difference. This will enable numerous such holes to propagate. Today, if an external library fails fast, in most scenarios it will be documented in explicit detail so that we can pay serious attention to it. With this, the entire .NET programming paradigm becomes a lot more difficult, because almost any method can crash your entire application at run time, but not at compile time.

A language mechanism should always be open, letting the developer have complete control, whereas an application level mechanism can and should enforce correct paradigm for the application. Bringing an application level mechanism in as a language feature, and calling it a concept that'll make everyone write better code makes no sense to me.

@whoisj - Java has Error (a different Throwable subclass than their Exception), which you're not supposed to actually catch (ie, out-of-memory in Java is an error, not an exception; you can catch it, but you usually have bigger problems at that point).

C# apparently has special 'corrupted state' exceptions, which are normally uncatchable. I'm unsure whether contract violations would constitute corrupt state, or whether something new would be appropriate...
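For context, corrupted state exceptions do exist as a distinct category in .NET today; a minimal sketch of the existing opt-in mechanism (TouchNativeMemory is a hypothetical stand-in for code that might corrupt process state):

``` C#
using System;
using System.Runtime.ExceptionServices;

static class Program
{
    // Since .NET 4, corrupted state exceptions such as AccessViolationException are not
    // delivered to catch blocks unless the method opts in with this attribute.
    [HandleProcessCorruptedStateExceptions]
    static void Main()
    {
        try
        {
            TouchNativeMemory(); // hypothetical interop call that may corrupt state
        }
        catch (AccessViolationException ex)
        {
            Console.Error.WriteLine($"Corrupted state observed: {ex.Message}");
            throw; // continuing is not safe; log and get out
        }
    }

    static void TouchNativeMemory() { /* ... */ }
}
```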

Let me put the concept of fail-fast into a little perspective.

Any good language runtime can and should crash an application only under one condition - corrupted state. I see no exception to this rule. Everything else should always be the responsibility, and choice, of the application - if a language interferes there, I don't think labeling it as a feature would make it right. In my opinion, it simply cripples the language. And contract exceptions in most scenarios aren't the ones that corrupt state. So, simply put, the language should get itself out of the way. It should never interfere. Failing fast does the exact opposite.

There are a few reasons to fail-fast: corrupted state, out-of-memory, security risks, etc. These are generally extreme cases where bailing out without even logging an event is acceptable. However, in most cases developers have come to rely on exception handling as a trigger for event logging.

I believe @jaredpar will agree with me if I assert that in the vast majority of exception cases there's nothing an app can really do except fail. I believe this is where the misunderstanding is coming from. Poorly written apps might wrap themselves in a giant try/catch block in hopes of just working; this is bad. However, well written apps only handle the exceptions they are expecting, using logging along the way, and failing on the exceptions they cannot handle.

I worry that assuming that fail-fast is a remedy for bad developer behavior is driving this discussion. Devs need control of their code. I personally could not bind against a library which used fail-fast logic. I need logging and OI data to do continual improvement.

@whoisj +1

And thanks for mentioning the obvious factors I left out, like out-of-memory; they can still be handled even though, practically, we never do in most scenarios. The only real uncatchable ones arise due to some form of corrupted state, or a security violation. So far the language runtime is spot on.

@prasannavl - contract exceptions generally shouldn't be causing corrupt state (beyond the normal bounds of thrown exceptions), but I would class them as being thrown _in response to_ (effectively) corrupt state.
Pass a null collection instead of an empty one? The method doesn't know if the object reference got lost (or not initialized yet, or whatever), or you deliberately passed it null. The latter is unlikely for such a simple case due to static analysis (the code should not be built), so...

@Clockwork-Muse - You are completely correct, except you missed one vital piece of information. The exception is thrown not in response to a corrupt state; rather, it's thrown to ensure that it doesn't corrupt the state. (As far as the method is concerned, it doesn't know whether receiving a value that it isn't supposed to (say, a null) is really a corrupt state - it could just be an invalid value. The only inference it can make is that this method shouldn't corrupt the state with a value that it wasn't designed to work with.)

Corrupt state is also a subjective topic, so I see how you can argue for the point you made above as well. However, considering that we are looking at states that will corrupt the whole application, I think it's more appropriate to look at it the way I described above.

OutOfMemoryException already crashes the application, but you can have critical sections that get called anyway. FailFast just opts out of everything, including finalizers and finally blocks, and kills the process without a trace (unless you have control over the machine it runs on).

FailFast can actually end up corrupting state, for example by cutting off backup procedures running in the background, which can easily make users stop using your application. Discarding a user's work is one of the worst things that an application can do.

It is also the caller's responsibility to determine if state is corrupt or not, not the callee's.

At the end of the day, applications that reach hundreds or millions of people need logging. Despite the best scrutiny and testing in the world, bugs make it into production. This is just a fact. Logging enables developers to collect data on bugs in the wild and act on them. Customer reports are lovely, but hard data collected at the time of failure is crucial. Adopting fail-fast as the first-class model is going to limit the ability to collect data at the time of failure and cause numerous problems for developers who rely on operational intelligence [OI] to do continuous improvement on their products.

Even Windows doesn't fail fast, except in extreme situations. The NT kernel gives various subsystems an opportunity to store data about state into buffers which are recorded into the DMP file created post-crash.

@whoisj

Adopting fail-fast as the first-class model is going to limit the ability to collect data at the time of failure and cause numerous problems for developers who rely on operational intelligence [OI] to do continuous improvement on their products.

I think there is a misunderstanding of what "fail fast" means here. To me it means that the program as it is intended to function ceases at that particular point. No exception unwind, no recovery attempt, etc ... It would allow for items like logging and data collection. That is inspecting the state of the process at the point of failure, not attempting to continue executing it.
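As a point of reference, the existing fail-fast mechanism in the BCL already lets a last message be captured for post-mortem inspection; a minimal sketch (the helper name is made up for illustration):

``` C#
using System;

static class Guard
{
    public static void RequireNonNegative(int index)
    {
        if (index < 0)
        {
            // The message is recorded (e.g. in the Windows Application event log / error report)
            // before the process terminates; no exception unwind, finally blocks, or finalizers run.
            Environment.FailFast($"Contract violated: index must be non-negative, was {index}.");
        }
    }
}
```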

@jaredpar please answer the following question:
How would you suggest testing contracts with unit tests if they are going to shut the application down?

@jaredpar assuming options like Watson are off the table or insufficient, how would an application perform any logging or stashing of critical state without exception unwind?

While I believe that you and I agree that applications should fail as quickly as possible when an exceptional state is encountered, I believe we're also disagreeing on the mechanism by which applications should do this.

Ideally, there would be a throwable type which could be caught and processed, but not trapped. In this way, applications would be given a chance to do critical operations before exiting but bad practices like squashing all errors would not be possible.

I wonder how much people are confusing the concept of "fail-fast" with the method Environment.FailFast()?

While I do think that it is important that we discuss FailFast I don't want to starve the discussion regarding other implementation details.

Has there been any discussion regarding what the contract metadata looks like to callers or outside assemblies?

Does anybody have any thoughts regarding how this metadata would look? Do we need another thread to discuss this?

Also, another potential contract: ensuring a method isn't called.
There are multiple methods where it isn't desirable to invoke them:

// On an immutable implementation of an interface, perhaps
public void Add(T item)
{
    throw new NotSupportedException();
}

instead, a contract to make this visible outside the type would be better:

public void Add(T item)
    requires not invoked or throw NotSupportedException
{
    // Potentially empty body, maybe still throw NSE
}

(or something like that).
Currently one of the ways this is dealt with is by marking such methods Obsolete, but that only works if called directly - I'm assuming contracts would do a little more inspection.

@Clockwork-Muse That sounds like a job for an analyser.

Sorry for the maybe too long post, but I could not stand by when such a promising feature risks becoming a source of much confusion and unreliable programs, and maybe not that widely used at all.

High level goals/features

Before beginning work on this feature it would be really good to summarise what the high-level goals should be. Here is a suggestion; unless 1-4 are part of the goals, I think the feature will be as broken by design as the current implementation, with only a marginal number of libraries actually using it.

Apart from these I agree with the goals set by @mcetkovsky on April 23, with the exception/addition that I only expect very basic static checking by the compiler for example to track not null (Proposal #227) and to leave more advanced checking to other projects/analyzers.

  1. Provide a terse syntax to express which inputs are valid on methods/properties and what output can be expected from them.
  2. Embed contract metadata into the final assembly so that _other_ tools (not the compiler) can use it to perform static checking, automatically generate test etc.
  3. Display contracts in intellisense much as the code contracts extension did for VS 2012

Being able to see both argument requirements and ensures without having to fire up the corresponding documentation on MSDN is very good and has helped us prevent a number of bugs while making coding more efficient.
While this is most useful when exploring a new API you are not very comfortable with, it also helps more experienced programmers, since I doubt there is anyone who actually knows the complete .NET API and can answer exactly what input is allowed or not and what methods can return null or not.

  4. Argument validation in the .NET framework should preferably “utilize” the new contracts (and definitely _should be able to use them_ instead of the current approach) without breaking backwards compatibility.
    A very quick review of the current source shows that Contract.EndContractBlock() is quite common and that both Contract.Requires and
    Contract.Ensures occur at a number of places, but EndContractBlock together with "if ... then throw ..." seems to be the most common usage.

    • Why?

      I really think it would be a good idea to set it as a goal that this should be "doable" within the design chosen. That way we most likely get a solution usable by the vast majority of libraries and applications, while we would get much improved intellisense for the .NET core libraries.

      • This indicates that throwing (custom) exceptions on contract failure should definitely be a must-have feature.

      • The current API-based contracts "solve" the issue mostly by allowing each assembly to choose how to handle code contract errors (both in respect of what to check), but you can also define what methods should be called when a contract fails in your assembly (you can specify "Rewriter Methods" which are called instead of those in System.Diagnostics.Contract to verify contracts).

  5. Overridden methods should be able to require less and ensure more
  6. Parts of the current contracts API should preferably be recognised and processed.
    In my opinion this only makes sense for Requires and Ensures (not Assume/Assert).

The following "old" code

``` C#
public int Insert(T item, int index)
{
Contract.Requires(index >= 0 && index <= Count);
Contract.Ensures(Contract.Result() >= 0 && Contract.Result() < Count);

 return InsertCore(item, index);

}
```

should be treated as this

``` C#
public int Insert(T item, int index)
    requires index >= 0 && index <= Count
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}
```

If one uses any of the more advanced features of the current code contract rewriter then it should be possible to disable this.

  7. Make it easy to detect and handle contract violations

The code contracts rewriter introduces an assembly-private exception at rewrite time, which makes catching these exceptions a bit tricky. Being able to reference this exception in code would make it much easier to detect and handle them properly. This would not just make it easier to catch or classify exception severity, but it would also be _easier to configure the debugger to break on these kinds of contract failures_.

The best alternatives that come to mind here seem to be:

  • To have the ability to define your own exception class
  • Have the compiler generate one for you (unless you already reference an implementation)
    • That way the compiler will work even though there currently is no ContractException in .NET 4.0-4.6, but it would still be able to work if such a class is introduced in the next version or via a NuGet package.
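A small sketch of what a referencable contract exception would enable at a call site (MyList, InsertLogged and ContractException are all hypothetical names, and Insert is assumed to declare a requires clause):

``` C#
public static void InsertLogged(MyList list, int item, int index)
{
    try
    {
        list.Insert(item, index);   // assume Insert declares requires/ensures clauses
    }
    catch (ContractException ex)    // hypothetical, referencable exception type
    {
        // Classify as a usage bug: capture the condition and stack trace for a bug report,
        // then rethrow or fail fast at a location of the application's choosing.
        Console.Error.WriteLine($"Contract violation: {ex.Message}\n{ex.StackTrace}");
        throw;
    }
}
```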

  8. Support ensures for conditions that should hold when an exception is thrown?

While I must admit that I have only used it in a few places, it is useful to be able to specify
what happens if an exception is thrown, in order to document whether a method follows the basic or strong exception guarantee (https://en.wikipedia.org/wiki/Exception_safety).

  9. Should it be possible to "disable" verification of requires?

I think this only makes sense for "requires" which do not throw any specific exceptions, and maybe for ensures. They should still be part of the metadata but not enforced.
An alternative/complementary solution could be to leave this to 3rd-party/external tools; this way a static checker would be able to remove runtime checks for conditions/ensures that it can verify always hold. In order to support such tools it might be a good idea to make sure that contract metadata can exist without forcing runtime checks (this is solved by the most trivial implementation, where contracts are converted into both metadata and runtime checks).

Background

Why we started to use Code Contracts

Code contracts provide a good way to both document and verify your code at the same time with quite terse syntax that is easy to read (we used runtime checks almost exclusively, but we have also used static checking occasionally to find some bugs).

The code contracts extension which added contract information in VS 2010 was really useful. Since it showed contracts for both our code and the .NET Framework, and not all programmers were experts, this simple feature improved the quality of the code remarkably well: better and more relevant validation was written, and less unnecessary validation (such as checking return values for null even if the method never returns null).

How we use/used Code Contracts

A guesstimate is that 80-90% of all the contracts we have written are about nullability of reference types (both for parameters and return values).

  • Contract.Requires(...), with no specific exception type to throw
    These contracts are almost exclusively for checks in internal/private methods, and cases where we do not care if we get an NRE or a contract failure.

    • We started to use it just in debug and internal test builds, but since the overhead is often negligible we also include it in release builds for several projects.

      These requires can most often be verified by the static analyser, so we originally decided that we could remove them from Release builds.

    • For our core algorithm/calculation library we still use it with the static checker to check for/detect bugs (from time to time), and occasionally but very seldom for other projects (mostly nullability-related, but we also found some index-related bugs when manipulating containers/strings or number ranges).

  • Contract.Requires(...)

Now we have replaced most of these (and some of the Contract.Requires) in several of our projects with `if (...) then throw ...` and `Contract.EndContractBlock();` so that we can disable code contracts to improve build time somewhat while still getting the same behaviour.
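For readers unfamiliar with that pattern, a minimal sketch of the legacy `if ... throw` form that the Code Contracts tools recognise via Contract.EndContractBlock() (the method itself is just an illustrative example):

``` C#
using System;
using System.Diagnostics.Contracts;

public static class Example
{
    public static int IndexOfFirstSpace(string text)
    {
        // Legacy-style precondition: throws even when the rewriter is disabled.
        if (text == null)
            throw new ArgumentNullException(nameof(text));
        // Marks the preceding if/throw checks as preconditions for the Code Contracts tools.
        Contract.EndContractBlock();

        return text.IndexOf(' ');
    }
}
```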

  • Contract.Ensures(...)
Mostly used to say whether a method returns null or not, or whether the return value itself has any specific properties (>0, etc.)

Why we don’t use it as much

  1. The Code Contracts extension did not work as well for VS 2012 as it did for VS 2010, and it did not work at all for VS 2013.
  2. It requires the post-processor, which never quite kept up with the language/compiler.
    The rewriter has not always been stable when new language features arrive; with _async_, for example, there have been several bugs which required us to remove contracts, since the rewriting introduced bugs or failed to rewrite some methods.
    It has also generated incorrect PDB files (with references to the wrong files) which have caused other tooling/libraries to fail.
  3. Compilation time – generation of contract assemblies and the rewriting takes much more time than the actual compilation.

_It should be noted that_ all of these reasons can be traced back to the fact that code contracts, despite being part of the .NET 4.0+ (and Silverlight) API, _have never been treated like a first-class .NET citizen_ but have relied on external tools from a "3rd party", supported by a Microsoft research team.

Moving requires/ensures from the rewriter into the compiler and adding back intellisense would solve these issues and make code contracts much more useful again.

Exception vs Fail Fast

As @whoisj and many others have pointed out, any reasonably good/robust application already handles errors which can be treated as "fatal", and does so using exceptions so that the application is still in control.

I urge you to go with exceptions as the default implementation (and maybe enable break-on-throw for that exception type by default), for a number of reasons:

  • Consistency
    This is how both the runtime (NullReferenceException, BadImageFormatException, OutOfMemoryException) and the BCL/.NET Framework (ArgumentException and derived types for argument validation/“requires”) work.

    • Just because a requires/ensures is not met does not mean that the application is corrupt (yes, there probably is a bug somewhere or invalid data in a database, just as if we had gotten an NRE)

      It does however mean that it cannot continue executing that code and end up in a valid state (e.g. it would be really bad to use something like the "On Error Resume Next" approach in VB), so the _right thing to do in order to prevent the application from entering an unknown state is to throw an exception_.

  • In .NET 4.0 there was a “fail fast” mechanism where an unobserved task exception did bring down the application. _This was fixed/changed in .NET 4.5_ where this is no longer the case (see https://msdn.microsoft.com/en-us/library/jj160346(v=vs.110).aspx). We should not make the same mistake again.
  • Programmers should already have the basic skills to manage/handle exceptions, which is something you should be able to expect since it is a very widespread paradigm for error management across many programming languages, while debugging memory dumps of crashed processes is more advanced from my point of view.
  • Exceptions give the programmer control over when and where to handle a contract failure and how to do it. You also have the complete stack trace, so it is easy to log and file bug reports, or fail fast at a location of your choosing.
  • Using the code contracts API which throws exceptions, we have had good success isolating and finding bugs by using the contract exception (it contains both the condition as well as the stack trace),
    while still providing the end user with a good user experience.
  • You don't really expect a bug in the File or Directory class, or any other of the library classes using Contract.Requires today, to actually tear down your whole application without notice?

One problem with how the current code contract rewriter handles exceptions is that it generates the exception class itself in the rewritten assembly, so you cannot catch ContractException or test
for the type (when categorising exceptions as fatal or not) as easily as for other exception types.

Extensibility

Based on the discussions in this thread it does not seem like a bad idea to give each assembly some control over how contracts are handled within it, but it is not trivial.

Possible approaches:

This approach could be useful for specifying a custom ContractException as well as for specifying a per-assembly extension point to execute when that assembly causes a contract failure.

The downside is that you would probably want to have this control as the caller, normally, and not as the callee (which is the simplest approach).

E.g. if a method in class A (assembly A) calls class B (assembly B) and this results in a contract failure, then it would make more sense for assembly A to handle this than for assembly B to try and handle it.

Possible optimizations

It should be possible for the compiler or JITter to skip generation of "requires checks" which do not throw a custom exception, if the argument's first use is a call to a method with exactly the same requires (e.g. an overridden method which starts by calling the base class implementation).

The reverse should work for ensures if a method returns the result of the "base" implementation.
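A sketch of the scenario being described, using the proposed requires syntax (the class names are illustrative; the elision itself would be a compiler/JIT optimization rather than something written by hand):

``` C#
public class Collection
{
    public int Count { get; protected set; }

    public virtual int Insert(int item, int index)
        requires index >= 0 && index <= Count
    {
        // ... actual insertion ...
        return index;
    }
}

public class LoggingCollection : Collection
{
    public override int Insert(int item, int index)
        requires index >= 0 && index <= Count
    {
        // The argument's first use is the base call, which declares the exact same
        // requires clause, so the duplicate check here could in principle be skipped.
        var result = base.Insert(item, index);
        Console.WriteLine($"Inserted {item} at {index}");
        return result;
    }
}
```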

@Daniel-Svensson That is an absolutely fabulous post, thanks a bunch for making it. If I had the power to do so I would archive this thread and start fresh with your post at the top as we obviously are going to need an intermediary flush before our duty here is done.

This kinda suggests to me that there needs to be a better way of manipulating the IL.

This is a great direction for the future of C#. A comment or two:

  • Regarding the exception vs. failsafe abort discussion, it seems pretty clear that some flexibility is needed here. In a testing scenario, the violation of a contract should result in the failing of the test; an immediate abort of the program doesn't make sense here (further tests need to run), and an exception doesn't work well either, as it may be caught within the program's normal exception handling and swallowed. The existing Code Contracts approach seems good: add an event hook that allows users to trigger test failure for their test framework, etc. (see the sketch after this list).
  • I really do hope you guys consider implementing invariants as well. They are immensely useful in capturing truths about classes, and even if they're not trivial to write correctly they're well-worth the effort in many cases.
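For reference, a minimal sketch of the existing event hook in System.Diagnostics.Contracts that the first bullet alludes to; a test run can redirect contract failures away from the default escalation policy:

``` C#
using System;
using System.Diagnostics.Contracts;

static class ContractTestSetup
{
    public static void RedirectContractFailures()
    {
        Contract.ContractFailed += (sender, e) =>
        {
            // Record the failure for the test framework or logs ...
            Console.Error.WriteLine($"Contract failed ({e.FailureKind}): {e.Message} [{e.Condition}]");
            // ... then ask for an exception to be thrown instead of the default assert/fail behavior.
            e.SetHandled();
            e.SetUnwind();
        };
    }
}
```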

One possibility for flexibility would be to allow the method contracts to be written in parts. What does that mean? Well, basically think of them as multi-part clauses (like for-loops) where you have a statement, a condition, and an action.

<keyword> { <condition> } else <fails, throws, or returns> <value>

Example:

public string Substring(string value, int index, int count)
    expects { !String.IsNullOrEmpty(value) } else { throws new ArgumentNullException(nameof(value)) }
    expects { index > 0 && index < value.Length } else { throws new ArgumentOutOfRangeException(nameof(index)) }
    expects { count + index < value.Length } else { throws new ArgumentException(nameof(count)) }
{

}

Or (c-style!)

public int Substring(string value, int index, int count, out string result)
    expects { !String.IsNullOrEmpty(value) } else { returns -1 }
    expects { index > 0 && index < value.Length } else { returns -1 }
    expects { count + index < value.Length } else { returns -1 }
{

}

Additionally, if the conditions are compiled as anonymous methods which are tagged somehow, and we require they always return a bool, then we could have a helper class which allows for testing these conditions without having to call the method.

if (Contract.Validate(Substring, "hello world", 0, 6))
{
    return Substring("hello world", 0, 6);
}

Is this racy? Well, not in this example because they're all literals, but normally yes. The values of the inputs could easily change between the if and the method invocation, but this is always a risk we take. As always, we'd need to do synchronization and flow control of our data to ensure state as we execute.

@whoisj

That's pretty much exactly what is in the proposal, except you added some more braces:

public int Insert(T item, int index)
    requires index >= 0 && index <= Count else throw new ArgumentOutOfRangeException(nameof(index))
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}

But again I don't think it's the contract that should be defining what happens when the check fails. I'd probably want exceptions in debug mode and while running tests and then perhaps fail-fast in production, assuming the checks are eliminated by the compiler. Not to mention writing that into the contract doesn't cut down on the code necessary to describe the contract. For anyone using one of the various existing argument validation libraries it actually forces the developer to write considerably more code.

Significantly more important than how the contracts are enforced is how the contracts are expressed. Without static analysis this feature is completely pointless, so what matters is figuring out how those expressions can be utilized by the compiler and analyzers to prevent a failure from making it to runtime. The fact that this conversation hasn't even been started, let alone a proposal put forward, is discouraging.

@HaloFour with the exception that my proposal requires that the developer be explicit about expecting a fail-fast result from a failed contract. The original proposal is to default to fail-fast, thereby encouraging fail-fast which in my opinion would be a disaster given the decade plus of habits the community has built up.

Without static analysis this feature is completely pointless

Completely disagree. Static analysis can add much, but having the conditions present in the method signature is eminently useful. This is along the same lines as the nameof() operator vs using a string literal. Instead of assuming the doc comments are correct (if they're even present), we can rely on the method signature to understand how the parameters will be evaluated.

Agree with @HaloFour that the contract shouldn't be defining what happens when it fails, although see Code Contract's option of specifying which exception to throw _if_ exceptions are set up. In other words, the contract doesn't specify whether exceptions are thrown (that's a global setting outside of the source code), but the contract does specify which exception type to throw if one is to be thrown.

On the other hand I strongly disagree with the idea that contracts are completely pointless without static analysis; runtime checks already provide a lot of value, not to mention compiler/JIT optimization based on the contracts. Static analysis seems to be an extremely difficult field with competing methods, and it's likely outside the language or compiler's scope to choose one or implement it. In other words, if C# standardized a way of _expressing_ contracts, 3rd-party tools could provide the static analysis.

Since the Code Contracts project is showing signs of life again and this thread is mostly used toilet paper maybe it would be better to implement this as a Roslyn extension in the Code Contracts project.

Contract doesn't specify whether exceptions are thrown (that's a global setting outside of the source code), but the contract does specify which exception type to throw if one is to be thrown.

Hey, I could live with that, though I'd like the option of returning a value instead of throwing or failing.

Since the Code Contracts project is showing signs of life again and this thread is mostly used toilet paper maybe it would be better to implement this as a Roslyn extension in the Code Contracts project.

Link?

I think most people would consider returning to be non-exceptional behavior in C# and VB.

I'm on my phone but if you search github for it, should be found under an organization.

I think most people would consider returning to be non-exceptional behavior in C# and VB.

Exactly. Non-exceptional by design. Still, exposing the parameter evaluation logic is useful, but in many cases a simple return-value-indicates-success model is very useful.

@whoisj

The contracts are already in the method code. For the vast majority of the developers who aren't reading the code (either because they can't or because they won't) that won't be a benefit to them. Having the method carry the metadata about the contract and having the compiler/analyzer report, "dude, this method call will fail," is where the majority of the value of this feature lies. Without the metadata and analysis this is just an overly verbose and less powerful argument validation method than already exists through various libraries.

As for treating contract violations in a different manner, I don't prefer a resurrection of HRESULT, which is effectively what you'd need for any method that can fail for more than one reason. But I wouldn't care if contracts can express and expose such functionality rather than throwing/failing/whatever. The metadata/analysis end of it could remain the same either way.

@whoisj

Since the Code Contracts project is showing signs of life again and this thread is mostly used toilet paper maybe it would be better to implement this as a Roslyn extension in the Code Contracts project.

Link?

https://github.com/Microsoft/CodeContracts
https://twitter.com/samharwell/status/617181885584687104

Though it isn't the original team anymore.

@rhnatiuk Valid point; strange that people are ignoring it. I mean, it's basically implemented, here is a tutorial:

http://www.codeplex.com/Download?ProjectName=specsharp&DownloadId=84056

It has some nice samples, demonstrating various contracts:

_(Screenshot of Spec# contract samples not reproduced here.)_

It would be great if the team could comment on how Spec# is different from what is suggested. Will the new implementation be feature-complete, and how will it evolve? Will loop and class invariants be supported in the future?

https://en.wikipedia.org/wiki/Loop_invariant
https://en.wikipedia.org/wiki/Class_invariant

@dsaf No idea. But it seems to be hopeless: I do not believe that large and complex things, like contracts, can be designed by the community, and this thread is proof of that - way too long, way too many conflicting opinions, completely insane-looking proposals, etc. It is a shame that Microsoft killed Spec# when it was basically done.

@rhnatiuk
_> It is a shame that Microsoft killed Spec# when it was basically done._

It happened to multiple experimental forks though and it can be hard to predict what should be API/syntax:

1) Axum:
-- language - killed;
-- API (Orleans) - failed;
2) Cw:
-- language (query syntax) - failed;
-- API (LINQ) - succeeded;
3) Polyphonic C#:
-- language (async/await) - succeeded.
(correction - no relation between them)

Design by contract is different though, because it has already been used by a few mature general-purpose languages, e.g. D. Well, at least the "safer" API contracts approach has now been tested and largely dismissed as impractical and inferior. I guess Roslyn will make managing the compiler code complexity easier, and now might be the best time to merge Spec# into the main language.

Even the Rust community is interested in learning from Spec#: http://www.reddit.com/r/rust/comments/2ys3ft/what_about_design_by_contract_like_spec

Perhaps @mike-barnett could shed some light on the feasibility of merging Spec# into Roslyn and why they went the API way in the past.

@dsaf I don't see the relationship between Polyphonic C# and async/await.

@gafter Sorry, it's a mistake. I was confused by it having an async keyword.

@dsaf @gafter

#1165 I do think that we should bring those ideas from Polyphonic-C# and Cω into C# :smile:

Technically, there is no reason contracts couldn't get added to Roslyn. We took the API route for Code Contracts because we had already created our own language with contracts (Spec#) and it somehow failed to take over the world. We wanted to see what kind of adoption we would see if we made it available in any .NET language. Of course, we had to give up quite a bit in that process (e.g., the non-null type system).

@mike-barnett, how can something not released (not even alpha!) "take over the world"? :) What Microsoft did with contracts by introducing them via an API was utterly pathetic; it defied the whole point of code contracts (i.e. contracts being part of the exposed API; in the MS implementation nothing in the signatures hints at what the contracts are), and was thus useless. Spec# was much closer to bringing real contracts than anything else MS ever made.

Discussion

Excellent ideas, excellent discussion.

Type System, Reflection and Lambda Expressions

Each precondition, postcondition or effect, or constraint, can be viewed as equivalent to the body of a lambda expression with at least the parameters of the method, input as well as output, each such lambda expression returning Boolean. It seems that a few more expression tree node types would be needed to obtain the entirety of the semantics, topics pertinent to LINQ as well as to Roslyn.

LINQ and Roslyn arrived after the type system, reflection system as well as generics system. Though the discussion, thus far, is about C#, we can also envision versioning the .NET CLI, which could provide non-null, versioned type system, reflection system as well as lambda expressions.

public partial class Type
{
  public ReadOnlyCollection<LambdaExpression> GetGenericParameterConstraints() { ... }
  public LambdaExpression GetInvariant() { ... }
}
public partial class MethodInfo
{
  public ReadOnlyCollection<LambdaExpression> GetRequires() { ... }
  public ReadOnlyCollection<LambdaExpression> GetEnsures() { ... }
}
public partial class LambdaExpression
{
  public ReadOnlyCollection<LambdaExpression> GetRequires() { ... }
  public ReadOnlyCollection<LambdaExpression> GetEnsures() { ... }
}

(Note: Such functionality could also be implemented as extension methods.)

Parametric Polymorphism

Topics include the structure of parametrically polymorphic lambda expressions for generic method definitions or type definitions, or the semantics of the requires or ensures on generic method or type definitions, the interplay between parametric polymorphism and design by contract, the use of parametrically typed parameters, elements, parametrically polymorphic method calls or the type parameters in requires or ensures statements.

As lambda expressions are indicated as of use for the parametrically polymorphic methods and types, LambdaExpression could be parametrically polymorphic:

public partial class LambdaExpression
{
  public bool ContainsGenericParameters { get { ... } }
  public bool IsGenericLambdaExpression { get { ... } }
  public bool IsGenericLambdaExpressionDefinition { get { ... } }
  public ReadOnlyCollection<LambdaExpression> GetGenericParameterConstraints() { ... }
  public LambdaExpression GetGenericLambdaExpressionDefinition() { ... }
  public LambdaExpression MakeGenericLambdaExpression(Type[] typeParameters) { ... }

  public ReadOnlyCollection<LambdaExpression> GetRequires() { ... }
  public ReadOnlyCollection<LambdaExpression> GetEnsures() { ... }
}

Lambda Expressions and Sets

Preconditions, postconditions or effects, or constraints, constraint logic programming systems, can utilize lambda expressions returning type Boolean. I'd like to present a computational approach to sets, where parameter constraints, aforementioned, can be viewed as resulting in described subsets.

A view of a computational set theory:

public partial class Set
{
  // see also: Expression<Func<object[], bool>>
  private Expression<Func<object, bool>> m_lambda;
  private Func<object, bool> m_compiled;

  public Expression<Func<object, bool>> Expression { get { return m_lambda; } }
  public bool Contains(object value) { return m_compiled(value); }
}

shows a fundamental relationship between set theory and the theory of computation. Preconditions or parameter constraints upon methods' parameters can be viewed as intersecting with, subsetting, the Cartesian products of methods' parameters' sets.

The topics pertain to language features, specifically multiple types or methods which differ in the constraints on their parameters, as the expressiveness of constraints systems move towards that of the languages, as per lambda expressions (see also: C++ templates system).

Compile-time and Runtime Method Binding, Type Binding and Type Inference

I would like to express interest in compile-time and runtime method and type binding, compile-time and runtime type inference, including for multiple types and methods which differ in terms of constraints on parameters.

Code Contracts Engine API and Logic Programming

I would like to express interest in the interoperability between runtime and static program analysis and logic programming.

I would like to express interest in API to running code contracts engines, resembling System.Diagnostics.Debugger for scenarios including programming language design, IDE features or IDE extensions such as language services (e.g. logic programming languages). Such API could resemble that of knowledgebases, possibly utilizing Expression, of use, for instance, for expanding the logical rule systems of the engines. We can envision various Visual Studio 2015 features for logic programming languages. The System.Diagnostics.Contracts.Contract static methods could expand to include methods more resembling a logic programming API.

Scopes

Resembling InvocationExpression or inline-expansion scenarios, requires and ensures could also be scoped:

{
  requires
  ensures
  ...
  {
    requires
    ensures
    ...
  }
  ...
  {
    requires
    ensures
    ...
  }
  ...
}

The System.Diagnostics.Contracts.Contract static methods could expand to include scoping.

using(var scope = Contracts.Scope(/*...*/))
{
   ...
}

Sorry, I was being glib there. It is difficult to get people to use a new language or even an experimental compiler for an existing language (since Spec# was a strict superset of C# 2.0). We also did not have the resources to keep updating it to remain a superset as C# evolved. We totally understood the limitations inherent in a library (API) solution as opposed to a language extension, but engineering is nothing other than dealing with such tradeoffs. We produced as much tooling as possible to make up for the limitations: for instance, if you install the Code Contracts Editor Extensions, then the contracts do indeed show up in the method signatures shown by Intellisense. We are sorry if you found our work to be pathetic.

@mike-barnett, please do not misunderstand me - I did not mean that as an offense! Developers at Microsoft did a great job on the tooling and related things supporting CodeContracts!

What I call "pathetic" here, is that Microsoft had absolutely fantastic idea with Spec#, working proof of concept, and even some public releases, and then Microsoft (organization, I believe in this case!), because of politics, schedule, whatever-other-irrelevant-wrong-reason-we-bring-here, dumped all that, and came up with half-thought and half-baked CodeContracts via API that no one uses, because they do not solve any real problems people have, or solve them in a completely wrong way.

I think Microsoft and its R&D fell into the same trap they very frequently fall into with their products and technologies (I hope you take me at my word here - the list would be too long to write out): do something that technically CAN be done, without really thinking about what the real PROBLEM is, what is NEEDED to solve it, and what HAS to be done.

Code contracts serve two masters, so to speak. First of all, they must be clearly visible and understandable by developers (without the need to read the whole function body searching for contracts, or to rely on tooling, e.g. IntelliSense in your example), and only then must they be usable by compilers and runtime environments to verify the code. The CodeContracts API serves only the second purpose, while Spec# did both excellently.

...if you install the Code Contracts Editor Extensions, then the contracts do indeed show up in the method signatures shown by Intellisense...
...
First of all, they must be clearly visible and understandable by developers...

Visual Studio lacks proper custom attribute support; that's true, and it is a general issue (#711).

I wonder if ensures could be made to work naturally with async method results.

It would be really nice if this could be worked in but it may be a bit awkward. I assume the ensures would only be expected to hold in situations where the task completed successfully but that seems like it could get complicated. This is definitely something that we should be thinking about though.
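To make the open question concrete, here is a minimal sketch, assuming the postcondition is checked only on a successfully computed result (CountItemsAsync and CountItemsCoreAsync are hypothetical names), of how an ensures on an async method might be lowered; faulted or cancelled tasks would bypass the check:

``` C#
using System;
using System.Threading.Tasks;

class Sample
{
    // Hypothetical surface syntax:
    //   public async Task<int> CountItemsAsync() ensures return >= 0 { ... }
    //
    // One possible lowering: await the real work, then validate the result
    // before handing it back. Exceptions and cancellation never reach the check.
    public async Task<int> CountItemsAsync()
    {
        int result = await CountItemsCoreAsync();
        if (!(result >= 0))
            Environment.FailFast("Postcondition failed: return >= 0");
        return result;
    }

    private Task<int> CountItemsCoreAsync() => Task.FromResult(42);
}
```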

Would it be feasible to just try this feature out? Code Contracts is starting to release early 2015 support, so couldn't a Roslyn extension be used to replace ccrewrite to try variations of this syntax out? Is this a crazy idea? I know it would be a lot of work, but when you think about a future where this language feature actually exists, Code Contracts should have some kind of interoperability with it anyway.

@aarondandy We don't have a prototype of this feature at this time, but we'd welcome the opportunity to review a contributed prototype. Otherwise I'm afraid you'll have to wait quite some time for us to prepare one.

Will this suggestion integrate with APIs that exist in the framework today, like Contract.Requires(...) and Contract.Ensures(...), or will we have to modify the code for the compiler to understand the contracts?

Sorry if it was answered before but it's a huge thread so I might have missed that. :)

@eyalsk

Sorry if it was answered before but it's a huge thread so I might have missed that. :)

The answer is in the first two paragraphs of the opening post :).

Will this suggestion integrate with APIs that exist in the framework today like Contract.Requires(...) and Contract.Ensures(...)

A proper flexible language-level support was proposed, not an API integration.

or will we have to modify the code for the compiler to understand the contracts?

Yes. It's a transition from "API + specific IDE Add-in" to "language + any modern IDE". I would imagine something that works in VS Online, VS Code and MonoDevelop in a performant way is the target.

@jaredpar said:

This language proposal is not an attempt to integrate all of those features into the language. Instead it is focused on a declarative and succinct form of argument validation.

A side goal of this feature though is to allow for tools like the CC addition to be more easily constructed. Having declarative contracts removes the need to grovel through IL to infer contracts based on heuristics and instead just hands the contract directly to the consumer.

@dsaf, haha.. I'm funny but thanks!

I know it's a proposal and that syntactically it makes sense to have the contracts as part of the method's signature, but it looks awful, especially with many clauses.

Many contracts can easily clutter the code and make it unpleasant to read and maintain, but I would imagine that you will allow us to put them on interfaces, so it won't be much of a problem - and this is one reason why it makes sense to put them at the signature level as opposed to in the body of the method.

Syntax

Interfaces

On the topic of interface syntax, a method-body syntax also conveniently covers property accessors. For readability, the new syntactic elements could be required, either syntactically or stylistically, to be the first statements in their scopes (see also: https://github.com/DotNetAnalyzers).

public interface Interface
{
  public void Function(...)
  {
    requires ...
    ensures ...
    ...
  }
  public float Scalar
  {
    get
    {
      requires ...
      ensures ...
      ...
    }
    set
    {
      requires ...
      ensures ...
      ...
    }
  }
}

@AdamSobieski can something be added between { and requires or ensures? This doesn't feel right to me regardless.

There already is a precedent for this kind of syntax in generics.

Syntax

Interfaces

The new statements could appear throughout scopes but, beyond the readability topics, the topics would include compiling, running (see also: F11, https://msdn.microsoft.com/en-us/library/y740d9d3.aspx), semantics, and processing the statements statically, including processing their expression trees. Often, the statements appearing beforehand exist to construct the arguments - expressions which evaluate to Booleans.

"This method call [Contract.Requires(bool)] must be at the beginning of a method or property, before any other code." (https://msdn.microsoft.com/en-us/library/dd412847(v=vs.110).aspx)

"This method call [Contract.Ensures(bool)] must be at the beginning of a method or property, before any other code." (https://msdn.microsoft.com/en-us/library/dd412868(v=vs.110).aspx)

Virtual Methods on Interfaces

Also interesting is what else could be in the { } on interfaces, e.g. default methods or, instead of abstract methods, virtual methods, on interfaces.

Assuming we get contracts at all, let alone on interfaces. Perhaps the syntax should be more like:

public interface Interface
{
  public void Function(...)
    requires ...
    ensures ...;
  public float Scalar
  {
    get ensures ...;
    set requires ...;
  }
}

@whoisj

Makes sense; otherwise, if we get both interface contracts and default methods, you'll end up with confusing syntax. The where clause for generic constraints provides some precedent for that form of syntax.

What about the syntax from #129?

@AdamSpeight2008 I don't know which part of the traits syntax you're referring to, but one thing is clear: it isn't as crystal clear as having requires and ensures.

Can you give an example of how the syntax of traits would come into play here? :)

Discussion

@stephentoub presented:

public int Insert(T item, int index)
    requires index >= 0 && index <= Count
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}

@whoisj presented:

public interface Interface
{
  public void Function(...)
    requires ...
    ensures ...;
  public float Scalar
  {
    get ensures ...;
    set requires ...;
  }
}

Regardless of the syntax, the new features as well as expanding the expressiveness of C# are exciting.

Scopes

On the topic of scoping (https://github.com/dotnet/roslyn/issues/119#issuecomment-126948282 § Scopes), we can consider the fuller set of new syntactic elements.

public void Function(...)
{
  requires ...;
  ensures ...;
  assert ...;
  assume ...;
  ...
  if (...)
  {
    assert ...;
    assume ...;
    ...
  }
  else
  {
     assert ...;
     assume ...;
     ...
  }
  ...
}

@eyalsk Traits are independent of how they are validated, so they are not tied to a particular implementation. E.g. they could use the Code Contracts methods, asserts, or a test harness.
They can be validated at compile time, at runtime, or both. Their internals are in { } so they can be collapsed and colorised (imagine a trait that isn't satisfied having a red cross beside it).
The premise is that a predicate is considered a requires if it is not inside an ensure block, _reducing the visual repetition of Requires and Ensures_.

``` c#
public int Insert(T item, int index)
trait
{
    // Anonymous trait
    range: 0 <= index < Count;
    ensure { range: 0 <= return < Count; }
}

{
    return InsertCore(item, index);
}
```

``` c#
public int Insert(T item, int index) 
trait [+]
{
    return InsertCore(item, index);
}
```

@AdamSpeight2008 Okay, thanks for the clarification. Yeah, this certainly works, but it bothers me that it's designed to do more than just contracts.

I don't like regions at all so I don't really like the fact that you can collapse it.

With CodeContracts I can do something like this.

[ContractClass(typeof(ITemplateIteratorContract))]
public interface ITemplateIterator
{
    char Current { get; }

    int Index { get; }

    int Length { get; }

    void Back();

    bool Next(int steps = 1);
}

[ContractClassFor(typeof(ITemplateIterator))]
internal abstract class ITemplateIteratorContract : ITemplateIterator
{
    public abstract char Current { get; }

    public int Index
    {
        get
        {
            Contract.Ensures(Contract.Result<int>() >= -1);

            return default(int);
        }
    }

    public int Length
    {
        get
        {
            Contract.Ensures(Contract.Result<int>() >= 0);

            return default(int);
        }
    }

    public abstract void Back();

    public bool Next(int steps = 1)
    {
        Contract.Requires(steps > 0);

        return default(bool);
    }
} 

This allows me to keep the code clean, especially in places where I have many contracts. I kinda like this approach, but it could be a lot more terse and powerful with language support.

@eyalsk I agree I'd want to see a robust implementation of the current contract feature set before looking to extend it to scope level. There is some value there but not as much as with the application to class members (where it helps document and constrain the API).

I must admit I was never a fan of the ContractClassFor construct. Obviously there are limited options when dealing with interfaces, and without drastic (and unlikely) language-support changes something like this will likely remain the implementation going forward.

I could see a Trait-style class being a backing model for this kind of thing (and it would make reuse and metadata reading easier), but it would seem to imply that the existing contracts namespace would be essentially abandoned. I'm not necessarily against that, but it implies a lot more work.

Language Support for Program Analysis

Language support for static and runtime program analysis could be the next LINQ and could expand to include some logic programming topics (https://github.com/dotnet/roslyn/issues/119#issuecomment-126948282 § Code Contracts Engine API and Logic Programming) (https://github.com/dotnet/roslyn/issues/119#issuecomment-126948282 § Scopes) (https://github.com/dotnet/roslyn/issues/119#issuecomment-132294418 § Scopes).

Syntax

Postfix Attributes

Some of you interested in metadata might enjoy the idea of postfix attributes. Runtime-serializable expressions (expression trees) could also serve as constructor parameters of attributes. In theory, postfix attributes with expressions as constructor parameters could reference the syntactic elements of interfaces, classes, fields, properties, methods or parameters.

@Lexcess Yeah ContractClassFor is kinda ugly. :)

@AdamSobieski How does your suggestion relate to this proposal? :)

Virtual Methods on Interfaces

Virtual methods on interfaces (https://github.com/dotnet/roslyn/issues/119#issuecomment-132176669 § Virtual Methods on Interfaces) (https://github.com/dotnet/roslyn/issues/119#issuecomment-131911526 § Syntax § Interfaces) (https://github.com/dotnet/roslyn/issues/258) (https://github.com/dotnet/roslyn/issues/73) could also be of use for traits (https://github.com/dotnet/roslyn/issues/129) (https://github.com/dotnet/roslyn/issues/73) and mixins (https://github.com/dotnet/roslyn/issues/73).

@AdamSobieski I got you, thanks.

Discussion

@stephentoub presented:

public int Insert(T item, int index)
    requires index >= 0 && index <= Count
    ensures return >= 0 && return < Count
{
    return InsertCore(item, index);
}

@AdamSpeight2008 presented (https://github.com/dotnet/roslyn/issues/129):

public int Insert(T item, int index) 
trait
{
   // Anonymous trait
    range:  0 <= index  < Count;
   ensure { range:  0 <= return < Count;}
}

{
    return InsertCore(item, index);
}

Syntax

Method Delegation

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
    : <<{1}>>, <<{2}>>
  {
    return InsertCore(item, index);
  }

  ...
}

Optimizations

Such a syntax can facilitate compiler or runtime optimizations beyond:

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
  {
    <<{1}>>;
    <<{2}>>;
    return InsertCore(item, index);
  }

  ...
}

Example 1

public class List<T> : IList<T>
{
  ...

  int Function(T item, int index)
  {
    requires 0 <= index & index < this.Count;
    ensures 0 <= return & return < this.Count;
    return default(int);
  }

  public int Insert(T item, int index)
    : Function(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Example 2

public class List<T> : IList<T>
{
  ...

  int Function1(T item, int index)
    : Function3(item, index), Function4(item, index)
  {
    ...
  }

  int Function2(T item, int index)
    : Function5(item, index), Function6(item, index)
  {
    ...
  }

  int Function3(T item, int index)
  {
    ...
  }

  int Function4(T item, int index)
  {
    ...
  }

  int Function5(T item, int index)
  {
    ...
  }

  int Function6(T item, int index)
  {
    ...
  }

  public int Insert(T item, int index)
    : Function1(item, index), Function2(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Inheritance Example 1

public class List<T> : BaseClassList<T>, IList<T>
{
  ...

  public int Insert(T item, int index)
    : base.Function(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Type-casting Example

public class List<T> : IList<T>, IInterface<T>
{
  int IInterface<T>.Function(T item, int index)
  {
    requires 0 <= index & index < this.Count;
    ensures 0 <= return & return < this.Count;
    return default(int);
  }

  ...

  public int Insert(T item, int index)
    : ((IInterface<T>)this).Function(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Type-casting Inheritance Example

public class List<T> : BaseClassList<T>, IList<T>, IInterface<T>
{

  ...

  public int Insert(T item, int index)
    : ((IInterface<T>)base).Function(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Static Method Examples

Static Method Syntax

public static class Extensions
{
  public static int Function<T>(this IList<T> list, T item, int index)
  {
    requires 0 <= index & index < list.Count;
    ensures 0 <= return & return < list.Count;
    return default(int);
  }
}

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
    : Extensions.Function<T>(this, item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Extension Method Syntax

public static class Extensions
{
  public static int Function<T>(this IList<T> list, T item, int index)
  {
    requires 0 <= index & index < list.Count;
    ensures 0 <= return & return < list.Count;
    return default(int);
  }
}

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
    : this.Function<T>(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Inheritance Extension Method Syntax

public static class Extensions
{
  public static int Function<T>(this IList<T> list, T item, int index)
  {
    requires 0 <= index & index < list.Count;
    ensures 0 <= return & return < list.Count;
    return default(int);
  }
}

public class List<T> : BaseClassList<T>, IList<T>
{
  ...

  public int Insert(T item, int index)
    : base.Function<T>(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Type-casting Method Syntax

public static class Extensions
{
  public static int Function<T>(this IList<T> list, T item, int index)
  {
    requires 0 <= index & index < list.Count;
    ensures 0 <= return & return < list.Count;
    return default(int);
  }
}

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
    : Extensions.Function<T>((IList<T>)this, item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Type-casting Extension Method Syntax

public static class Extensions
{
  public static int Function<T>(this IList<T> list, T item, int index)
  {
    requires 0 <= index & index < list.Count;
    ensures 0 <= return & return < list.Count;
    return default(int);
  }
}

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
    : ((IList<T>)this).Function<T>(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Lambda Expressions

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
    : ((IList<T> x1, int x2) =>
    {
      requires 0 <= x2 & x2 < x1.Count;
      ensures 0 <= return & return < x1.Count;
      return default(int);
    })(this, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Interfaces, Virtual Interface Methods

[...]
public interface Trait<T>
{
  [...]
  int Function(T item, int index)
  {
    requires 0 <= index & index < ((IList<T>)this).Count;
    ensures 0 <= return & return < ((IList<T>)this).Count;
    return default(int);
  }
}

public class List<T> : IList<T>, Trait<T>
{
  ...

  public override int Insert(T item, int index)
    : ((Trait<T>)this).Function(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Inheritance Example 2

public interface IList<T>
{
  ...

  int Insert(T item, int index)
  {
    requires 0 <= index & index < this.Count;
    ensures 0 <= return & return < this.Count;
    return default(int);
  }

  ...
}

public class List<T> : IList<T>
{
  ...

  /* See also: http://research.microsoft.com/en-us/projects/contracts/userdoc.pdf, Chapter 3 */
  public override int Insert(T item, int index)
    : base(IList<T>).Insert(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Inheritance Example 3

public interface IList1<T>
{
  ...

  int Insert(T item, int index)
  {
    requires 0 <= index & index < this.Count;
    return default(int);
  }

  ...
}

public interface IList2<T>
{
  ...

  int Insert(T item, int index)
  {
    ensures 0 <= return & return < this.Count;
    return default(int);
  }

  ...
}


public class List<T> : IList1<T>, IList2<T>
{
  ...

  /* See also: http://research.microsoft.com/en-us/projects/contracts/userdoc.pdf, Chapter 3 */
  public override int Insert(T item, int index)
    : base(IList1<T>).Insert(item, index), base(IList2<T>).Insert(item, index)
  {
    return InsertCore(item, index);
  }

  ...
}

Notes

Reusability

The syntax presented by @stephentoub, with a different style of static methods and processing, can facilitate reusability:

public static class Extensions
{
  public static bool Range<T>(IList<T> list, int x)
  {
    return 0 <= x & x < list.Count;
  }
}

public class List<T> : IList<T>
{
  ...

  public int Insert(T item, int index)
    requires Extensions.Range(this, index)
    ensures Extensions.Range(this, return)
  {
    ...
  }

  ...
}

Method delegation that calls static target methods also facilitates reusability.

public void Function(string x1, int x2)
  : Extensions.NotNull(x1), Extensions.Range(this, x2)
{
  ...
}

Scoping

The syntax can provide expressiveness to refer to a delegating or calling method.

A return keyword would seem to require either inline method-expansion semantics on the target methods (otherwise return would refer to the target methods rather than to the delegating or calling methods) or new syntax for referring to the delegating or calling methods from the target methods, including new functionality along the lines of System.Diagnostics.Contracts.Contract.Result<T>(/*...*/).

Parametric Polymorphism

Method delegation could expand parametric polymorphism, expanding the expressiveness of methods, including the method MakeGenericType.

public override Type MakeGenericType(params Type[] typeArguments)
  : Extensions.ArrayCount(typeArguments, ...), ...
{
  ...
}

See also templates:

public override Type MakeGenericType(params object[] arguments)
  : Extensions.ArrayCount(arguments, ...), ...
{
  ...
}

nit:

ensures 0 <= return & return < this.Count;

you are using bitwise & over and over in these, I'm assuming you mean logical && instead...

@bbarry , thanks for the comment.

The & operator is a logical as well as a bitwise operator: & evaluates both operands regardless of the first operand's value, whereas && uses short-circuit evaluation, in which the second operand is not evaluated if the first is false. X1 && X2 is equivalent to if (X1) { return X2; } else { return false; }.
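A small standalone example of the difference, which matters when one contract clause guards the evaluation of the next (e.g. a null check before a member access):

``` C#
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        List<int> list = null;

        // && short-circuits: list.Count is never evaluated, so this prints False.
        Console.WriteLine(list != null && list.Count > 0);

        // & evaluates both operands: dereferencing the null list throws.
        try
        {
            Console.WriteLine(list != null & list.Count > 0);
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("NullReferenceException from the & form");
        }
    }
}
```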

On the matter of ensures semantics, the options include ensures X1 && X2, ensures X1 & X2, or ensures X1, X2. In the notes about reusability - the example with the syntax presented by @stephentoub - we see that the compiler or runtime could obtain expression trees from functions such as Extensions.Range.

Envision & and && and various possible compiler optimizations, including runtime compiler optimizations, of large logical expression trees, including those from scenarios resembling Example 2 (see also: https://github.com/dotnet/roslyn/issues/119#issuecomment-126948282 § Lambda Expressions and Sets). In such optimization processes, separating the clauses is made more convenient by the & or comma forms. The topics also pertain to the semantics of collections of expressions, whether sets or sequences.

Reading the current discussion, there is one thing that's unclear to me:

Will it be valid to strengthen postconditions (by adding more ensures clauses) or weaken preconditions (by omitting/retracting some requires clauses) when specializing a virtual method?

@TChatzigiannakis , thanks for the question. Your question is applicable to each syntax.

“If a subclass overrides the behavior of a method of a base class (or implements an interface), the preconditions it specifies can only be equivalent to or weaker than the base class.” [2]

“If a subclass overrides the behavior of a method of a base class (or implements an interface), the postconditions it specifies can only be equivalent to or stronger than the base class.” [2]

[1] [Code Contracts User Manual](http://research.microsoft.com/en-us/projects/contracts/userdoc.pdf)
[2] [Code Contracts and Inheritance](http://stefanoricciardi.com/2009/07/17/code-contracts-and-inheritance/)
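With the existing Contract API the rule plays out roughly as follows (a simplified sketch; Base, Derived and IndexOf are illustrative names): an override inherits the base method's preconditions and may only add postconditions, which strengthens what it promises to callers.

``` C#
using System.Diagnostics.Contracts;

class Base
{
    public virtual int IndexOf(int[] data, int value)
    {
        Contract.Requires(data != null);                  // precondition
        Contract.Ensures(Contract.Result<int>() >= -1);   // postcondition
        return -1;
    }
}

class Derived : Base
{
    public override int IndexOf(int[] data, int value)
    {
        // The base precondition (data != null) is inherited; adding new Requires
        // here would narrow the accepted inputs and is flagged by the tools.
        // Adding Ensures is allowed: it strengthens the promise to callers.
        Contract.Ensures(Contract.Result<int>() >= 0);
        return 0;
    }
}
```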

Logic Programming

There are exciting possibilities pertaining to interoperability between runtime and static program analysis and logic programming (https://github.com/dotnet/roslyn/issues/119#issuecomment-126948282 § Code Contracts Engine API and Logic Programming).

The following example indicates the method delegation syntax and static methods with a context-related or logic engine parameter.

public static class LogicProgramming
{
  public static bool Function1(this LogicEngineContext engine, Variable x1, Variable x2)
    : Extensions.Extension1(engine, x1), Extensions.Extension2(engine, x2)
  {
    ...
  }

  public static bool Function2(this LogicEngineContext engine, Variable x1, Variable x2)
    : LogicProgramming.Function1(engine, x1, x2), ...
  {
    ...
  }
}

I rather like this idea; although I feel that it would be nice if the syntax highlighting of C# made the contracts appear much fainter so that they don't "overpower" the method signature/body when glancing through code.

By fainter I mean, the way in which unused namespaces are sort of ghosted.

But yeah; a big +1 for this feature :)

@kruncher Either that or maybe introduce a generally more granular syntax coloring configuration.

We need support for at least runtime-enforced contracts (simply turned into checks that throw exceptions) to clean the shit (Contract.Requires, Contract.Ensures) out of our code. requires would also be a better replacement for the if (...) { throw ... } construction used in most projects (a small before/after sketch follows). So, I think, the Roslyn team should give high priority to (simple) contracts support in C# 7. Also, I think it's the Code Contracts team that must change their tools to support this language feature; the language feature must not have to be backward-compatible with the existing Code Contracts tools.
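For reference, a small before/after sketch of the pattern being replaced (Repository and Save are illustrative names); the second form is the syntax proposed in this thread:

``` C#
using System;

class Repository
{
    // Today: imperative argument validation at the top of the method.
    public void Save(string name, int count)
    {
        if (name == null) throw new ArgumentNullException(nameof(name));
        if (count < 0) throw new ArgumentOutOfRangeException(nameof(count));
        // ... do the actual work ...
    }

    // With the proposed syntax, the same intent becomes declarative:
    //
    // public void Save(string name, int count)
    //     requires name != null
    //     requires count >= 0
    // {
    //     ... do the actual work ...
    // }
}
```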

I've made a simple proof-of-concept implementation which allows part of the proposed syntax to be tried out. It is available at https://github.com/Daniel-Svensson/roslyn
Currently I have no plans to take this further, but feel free to use it as a base for further exploration of the syntax.

The concept implementation is quite basic; it is neither a complete implementation nor a specification of how I think it should be. For example:

  • It does not allow "return" to be used in _ensures_; you must use Contract.Result() instead
  • It relies on the code contract rewriter to implement logic such as what is a valid pre/post condition etc.
  • It requires code contracts to be used in all projects using the syntax, or the Contract calls will be optimised away
  • It does not allow you to throw exceptions (since the implementation relies on code contracts, you would only partially be able to specify the exception arguments)
  • It does not allow a distinction between "release" and "debug" contracts (such as the proposed "assert")

The concept allows you to use "requires expression" and "ensures expression" syntax between the method declaration and its implementation. The syntax is translated to simple calls to Contract.Requires(expression) and Contract.Ensures(expression).

Example:

The following code

``` C#
private static void Test(int a)
requires a >= 0
requires a <= 10
{
Console.WriteLine(a.ToString());
}
```

will be translated to

``` C#
 private static void Test(int a)
 {
            Contract.Requires(a >= 0);
            Contract.Requires(a <= 10);
            Console.WriteLine(a.ToString());
 }
```

I am thinking about changing what the rewritten code looks like in order to support throwing exceptions or supporting "result" in ensures, but I have not decided whether I will do that.

@Daniel-Svensson thank you! it's definitely a start even though it is just a proof of concept, good job! :)

I'm really looking forward to this feature being implemented in C# 7.

If I write something like this,

void Test(string a) requires Regex.IsMatch(a, "...")  { ... }

Do I get a compile-time error if I pass a _string literal_ that doesn't match?

And also, do I need to repeat the requires clause for each of these methods,

void F(int a) requires a > 0 { ... }
void G(int b) { F(b); }

or would G infer them from usage?

Joe Duffy talks about the Midori team's experience of designing and using contracts.

http://joeduffyblog.com/2016/02/07/the-error-model/

@omariom I was just going to mention this, as it seems like a lot of what people are wanting is/was implemented within this C# variant (Midori).

@omariom Great read! thanks.

@jaredpar is there anything new in this space? will we ever see this as part of the language? I know it won't be part of C# 7 but is there any progress in this area? like discussions? different way to tackle this? what are your thoughts?

  • I think contracts should follow the same resolution as nullability checking. Nullability checking for non-nullable classes can change a bit (I think I should mention @gafter here):

    • Warnings should be shown at compile time if the compiler cannot determine through static analysis whether the caller fully complies with the contracts. Some if-condition checking on the caller's side should be enough to satisfy the compiler's static analysis and remove the warnings in this case.

    • If static analysis can determine a sure contract violation (for example, in the case of literals or constant values), errors should be shown at compile time.

    • Otherwise ok. :smile:

  • Invariants should be implemented for records, if not for all classes. As records are immutable, an invariant is just contracts attached to the constructor (a minimal sketch follows this list).
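A minimal sketch of that last point, using the existing Contract API on an immutable type (Fraction is an illustrative name): because nothing can mutate the instance after construction, checking the invariant once in the constructor is enough.

``` C#
using System.Diagnostics.Contracts;

public sealed class Fraction
{
    public int Numerator { get; }
    public int Denominator { get; }

    public Fraction(int numerator, int denominator)
    {
        // The invariant "Denominator != 0" is established here and can never
        // be broken later, since the properties are read-only.
        Contract.Requires(denominator != 0);
        Numerator = numerator;
        Denominator = denominator;
    }
}
```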

Mads Torgersen - MSFT at https://blogs.msdn.microsoft.com/dotnet/2016/08/24/whats-new-in-csharp-7-0/

Nothing on design by contract this time around, and I’m not sure if there will be anytime soon. We’ve looked a lot at contract-style features, and they tend to get quite verbose and complicated. I am not sure that they would carry their weight in added value as language features. But maybe, one day, if we can get the design right.
While people do use CodeContracts for many things, the 90% case seems to be dealing with null. We are eager to tackle the language integration of that in a more type-system-oriented way, allowing you to distinguish between nullable and nonnullable reference types and track correct usage based on that. I hope this can happen in the next major release of C#.

@mcetkovsky Thank you for that. Range checks are also valuable, so it's not only null checks, but they are right that DbC tends to be verbose and to complicate things.

Maybe someone can narrow the use-cases and come up with a better solution.

We just need compiler extension support. Then the Code Contracts team can apply the language extension step by step, and after their work is complete, the Roslyn team may start a discussion about including that code in the compiler.

I lend my support to the final solution Midori came up with:

Joe Duffy said about contracts in Midori:

We ended up with a single kind of contract: one that was part of an API’s signature and checked all the time. If a compiler could prove the contract was satisfied at compile-time – something ... it was free to elide the check altogether. But code was guaranteed it would never execute if its preconditions weren’t satisfied. For cases where you wanted conditional checks, you always had the assertion system...

Perhaps we could introduce a syntax for the developer to indicate that he/she wants the checks to be debug-only? For example, prefixing with @:

int NextPowerOfTwo(int value) @requires value > 0
{
}

This particular example would kind of line up with how you prefix commands with @ in Batch scripting and Makefiles to 'quiet' the normal echo you get from the command. (Although C# is a very different language from both of those.)

Alternatively, we could introduce an attribute such as [DebugOnlyContract], but that would take away much of the convenience of using this feature for debug checks I guess.

I would personally prefer the more verbose (but readable!) attribute over the obscure syntax.

@kruncher The meaning would be clear after the first time you learn what it means, so that would only be an issue for newcomers.

I guess another option though would be to simply have a debug contextual keyword like this:

int NextPowerOfTwo(int value) debug requires value > 0
{
}

It would be less typing than adding an attribute, and admittedly more readable than the @ syntax for newcomers, though it does require a few extra keystrokes compared to @.

What is the advantage in having a debug only contract; surely a contract is a contract?

@kruncher We may not want extra checks we know are true in performance-sensitive code. For example, they may prevent the JIT compiler from inlining a particular method. And if you look at corefx, where the BCL lives, you'll see that basically all of the checks in internal/private methods are Debug.Asserts.
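For reference, the Debug.Assert pattern being described looks roughly like this (a minimal sketch reusing the NextPowerOfTwo example from above): Debug.Assert is marked [Conditional("DEBUG")], so release builds omit the call and its condition entirely.

``` C#
using System.Diagnostics;

static class Internals
{
    internal static int NextPowerOfTwo(int value)
    {
        // Compiled out of release builds, so the check adds no cost where
        // callers are trusted to have validated the argument already.
        Debug.Assert(value > 0, "value must be positive");

        int result = 1;
        while (result < value) result <<= 1;
        return result;
    }
}
```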

Hmmm, would it be useful to have a build option to simply disable all contracts for internal/private methods (i.e. an option that can be configured on/off for 'Debug', 'Release' or 'Custom')?

@kruncher Hmm... might be a good idea, actually. A problem though is that you might want to enable the contracts for only certain internal/private methods.

In the Code Contracts implementation there is an explicit option for only checking 'public surface contracts', which exhibits this behaviour. The clear win here is that if a contract is asserted on a private method, the Code Contracts implementation will require another assertion or assumption of it on any caller, meaning that if all callers have already validated the input you don't take the perf hit of checking again.

I'd say this behaviour is the same as (or a slight subset of) the statement _"If a compiler could prove the contract was satisfied at compile-time – something ... it was free to elide the check altogether"_ from earlier in this thread.

@Lexcess

meaning that if all callers have already validated the input you don't take the perf hit of checking again.

Would not always work. For example, consider this code sample:

T Single<T>(IList<T> list)
{
    if (list == null || list.Count != 1) throw something;

    return SingleInternal(list);
}

T SingleInternal<T>(IList<T> list)
{
    Debug.Assert(list != null && list.Count == 1);

    return list[0];
}

A bad implementation of IList<T> could return different values of Count for the 2 invocations, even though it was not actually modified. So the compiler could not prove that the list.Count == 1 constraint would be satisfied, and would have to check the Count twice.

I see what you are saying but I think it is a separate point. I think the point that you could have untrustworthy implementations (note that the MS Code Contracts has contracts for interfaces like IList so you could solve this) is different from needlessly reapplying the same checks all the way down a call stack.

It'd be interesting to see what Code Contracts does around non-predictable return values such as your example and arbitrary yield returns. I've often found its analyser was quite good at unravelling such problems.

For that example, Code Contracts seems to take the "if it hurts, don't do that" strategy and assumes you are not trying to intentionally harm the stability of your system. It does this by requiring methods used in a contract to be marked [Pure], but it does not enforce that the method or property is actually pure.
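For example (a small sketch; Buffer and IsValidIndex are illustrative names), a helper referenced from a contract carries the [Pure] annotation, and the tools trust the annotation rather than verifying the absence of side effects:

``` C#
using System.Diagnostics.Contracts;

public class Buffer
{
    private readonly int[] _data = new int[16];

    [Pure] // required so the member may appear inside a contract; purity itself is not verified
    public bool IsValidIndex(int index) => index >= 0 && index < _data.Length;

    public int Read(int index)
    {
        Contract.Requires(IsValidIndex(index));
        return _data[index];
    }
}
```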

As I explained in dotnet/csharplang#1198, this syntax would be shorter:

public <0,Count>int Insert(T item, <0,Count>int index)   {
    return InsertCore(item, index);
}

@MohammadHamdyGhanem

It's also not particularly generic. What if you wanted to convey a constraint that doesn't involve ranges?

Or even within ranges, disjointed ranges?

e.g. value must be between 0 and 100, or between 200 and 300.

What you ask about is already a complex condition, and regular if statements can do it better.
If you want to complicate things, I suggest using a regular-expression syntax such as:
[reg exp]int x;
This would allow writing constraints on string data types as well.
If we think at the data-type level, anything is possible!
Thanks.

@MohammadHamdyGhanem If statements only give you runtime checks, not compile-time safety.

@yaakov-h
So what? This is already the case with any assignment that is evaluated at run time.
int i = x/somefunction();
is not runtime safe.
We can handle exceptions easily, but business-logic errors will pass undetected unless we write extra validation code, which is what we end up doing more often than not.
In data binding, if a value raises an exception, the bound control will reject it automatically, preventing the user from entering bad values without writing extra code.

Also: method contracts are not runtime safe and will raise exceptions. This is why we use try/catch.

So, there are 4 possible ways to do it:

  1. Method contracts. They are verbose but can express arbitrary conditions.
  2. Constrained data types: a compact syntax that can be extended to use regular expressions.
  3. Validation attributes, like those used in ASP.NET MVC (see the sketch after this list). They have no effect on the C# syntax itself, and we can use parameters that indicate the error message or other actions. But using attributes for individual method parameters and return values can make the code long and ugly!
  4. Writing an analyzer, which I don't exactly understand!
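As an illustration of option 3, the existing System.ComponentModel.DataAnnotations attributes already express this style declaratively, with error messages, at the cost of some verbosity (InsertRequest is an illustrative name):

``` C#
using System.ComponentModel.DataAnnotations;

public class InsertRequest
{
    // Checked by the framework (or explicitly via Validator.ValidateObject),
    // not by the compiler.
    [Range(0, int.MaxValue, ErrorMessage = "Index must be non-negative.")]
    public int Index { get; set; }

    [Required, RegularExpression("^[A-Za-z]+$", ErrorMessage = "Letters only.")]
    public string Name { get; set; }
}
```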

Sure, we all want some syntax to simplify validation code.
We can make compromises. For example:
If constrained types can cause problems, they could be allowed only in the definition of properties and methods (parameters and return values). They are not true types but just syntactic annotations that tell the compiler how to validate values, so the compiler can convert them to contracts under the hood.
Attributes can also be used with properties and methods to supply an error message and additional options telling the compiler what to do when the data is not valid.
This approach means that contracts can still be used in more complicated situations (though I find it easier to use regular code statements).

In previous research we have shied away from invariants, in part because it's actually incredibly difficult for developers to predict the effect they will have on the code. Typically invariant checks are validated to be true at every entry/exit point of a given method. Given that, can you point out all of the places in the following method where that happens?

Maybe invariants could be qualified like private and protected, applying them to all methods with the same or higher visibility than the invariant.

Thus, a private method could violate a public invariant, as it is always called from a context with higher visibility, which has to ensure the invariants at its boundaries anyway.

This allows refactoring common code into private methods while still upholding the invariants to the outside.
