NUnit: Test Dependency Attribute

Created on 3 Nov 2013 · 59 comments · Source: nunit/nunit

Hi,

I have a web app with extensive automated testing. I have some installation tests (delete the DB tables and reinstall from scratch), upgrade tests (from older to newer schema), and then normal web tests (get this page, click this, etc.)

I switched from NUnit to MbUnit because it allowed me to specify test orders via dependency (depend on a test method or test fixture). I switched back to NUnit, and would still like this feature.

The current work-around (since I only use the NUnit GUI) is to order test names alphabetically, and run them fixture by fixture, with the installation/first ones in their own assembly.

Labels: feature, normal

Most helpful comment

@aolszowka
This remains an accepted feature, at least as far as the issue labels go. @nunit/framework-team Am I correct there?

Nobody has assigned it to themselves, which is supposed to mean that nobody is working on it. Smart of you to ask, none the less! If you want to work on it, some team member will probably assign it to themselves and "keep an eye" on you, since GitHub won't let us assign issues to non-members.

I made this a feature and gave it its "normal" priority back when I was project lead. I intended to work on it "some day" but never did and never will now that I'm not active in the project. I'm glad to correspond with you over any issues you find if you take it on.

My advice is to NOT do what I tried to do: write a complete spec and then work toward it. As you can read in the comments, we kept finding things to disagree about in the spec and nobody ever moved it to implementation. AFAIK (or remember) the prerequisite work in how tests are dispatched has been done already. I would pick one of the three types of dependency (see my two+ years ago comment) and just one use case and work on it. We won't want to release something until we are sure the API is correct, so you should probably count on a long-running feature branch that has to be periodically rebased or merged from master. Big job!

All 59 comments

This bug duplicates and replaces https://bugs.launchpad.net/nunit-3.0/+bug/740539 which has some discussion.

While dependency and ordering are not identical, they are related in that ordering of tests is one way to model dependency. However, other things may impact ordering, such as the level of importance of a test. At any rate, the two problems need to be addressed together.

I like the MbUnit model a lot:

  • Dependency on another test suite: annotate test (or test fixture) with [DependsOn(typeof(AnotherFixtureType))]
  • Dependency on another test: annotate test with [DependsOn("TestInThisFixture")]
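
For illustration, the MbUnit usage described in the two bullets above might look like this (fixture and test names are made up):

using MbUnit.Framework;

[TestFixture]
public class InstallTests
{
    [Test]
    public void TablesAreCreated() { /* ... */ }
}

[TestFixture]
public class UpgradeTests
{
    // Fixture-level dependency: runs only after the InstallTests fixture.
    [Test, DependsOn(typeof(InstallTests))]
    public void SchemaUpgrades() { /* ... */ }

    // Test-level dependency: runs only after the named test in this fixture.
    [Test, DependsOn("SchemaUpgrades")]
    public void DataSurvivesUpgrade() { /* ... */ }
}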

What does MbUnit do if you set up a cyclic "dependency"?

On Sun, Nov 3, 2013 at 12:25 PM, ashes999 [email protected] wrote:

I like the MbUnit model a lot:

  • Dependency on another test suite: annotate test (or test fixture)
    with [DependsOn(typeof(AnotherFixtureType))]
  • Dependency on another test: annotate test with
    [DependsOn("TestInThisFixture")]

—
Reply to this email directly or view it on GitHubhttps://github.com/nunit/nunit-framework/issues/51#issuecomment-27653262
.

@CharliePoole if you create a cycle or specify a non-existent test method dependency, MbUnit throws a runtime exception. Since depending on a class requires the Type, that case would either be similar (depending on a non-test class) or a compile-time error (the type doesn't exist).

Any update on this issue? I chose MbUnit because of the ordering. Now that it's on indefinite hiatus, I need to look for an alternative. It would be nice if NUnit could support this essential feature for integration testing.

Sorry, no update yet, although I have also come from MbUnit and use this for some of our integration tests. If you just want ordering of tests within a test class, NUnit will run the tests in alphabetical order within a test fixture. This is unsupported and undocumented, but it works until we have an alternative.
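
A sketch of that unsupported workaround (fixture and method names are made up):

using NUnit.Framework;

[TestFixture]
public class InstallationTests
{
    // Numeric prefixes exploit the (unsupported, undocumented) alphabetical
    // run order within a fixture described above.
    [Test]
    public void Test01_DropTables() { /* ... */ }

    [Test]
    public void Test02_InstallSchema() { /* ... */ }

    [Test]
    public void Test03_SmokeTestPages() { /* ... */ }
}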

I thought this was coming in v3?

Yes, but as yet it isn't being worked on. After the first 3.0 alpha, we'll
add further features.
On Jun 13, 2014 8:28 PM, "fraser addison" [email protected] wrote:

I thought this was coming in v3?

—
Reply to this email directly or view it on GitHub
https://github.com/nunit/nunit-framework/issues/51#issuecomment-46044623
.

For 3.0, we will implement a test order attribute, rather than an MbUnit-like dependency attribute. See issue #170

See my comment in #170. Ordering is very limited and a maintenance burden (unless you use multiples of ten so you can insert tests in the middle without reordering everything). MbUnit has arbitrary dependency graphs, which I (or maybe I should say "we," since I'm not the only one) really need.

Depending on an alphabetic order is a crutch, and a pretty weak one considering this could change at any time.
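
For reference, NUnit later added an OrderAttribute (the ordering feature tracked in #170); the "multiples of ten" convention mentioned above looks roughly like this, with made-up fixture and test names:

using NUnit.Framework;

[TestFixture]
public class InstallAndUpgradeFlow
{
    // Gaps of ten leave room to insert a new step later without renumbering.
    [Test, Order(10)]
    public void InstallSchema() { /* ... */ }

    [Test, Order(20)]
    public void UpgradeSchema() { /* ... */ }

    [Test, Order(30)]
    public void BrowseHomePage() { /* ... */ }
}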

Hi,

On Thu, Aug 7, 2014 at 6:10 AM, Ashiq A. [email protected] wrote:

See my comment in #170. Ordering is very limited and a maintenance burden (unless you use multiples of ten so you can insert tests in the middle without reordering everything). MbUnit has arbitrary dependency graphs, which I (or maybe I should say "we," since I'm not the only one) really need.

Yes. The idea to use ordering is based on an unstated assumption: that
it will hardly be used at all. Trying to control the runtime order of
all your tests is a really bad idea. However, in _rare_ cases, it may
be desirable to ensure some test runs first. For such limited use, an
integer ordering is fine and the difficulty of inserting new items in
the order might well serve as a discouragement to unnecessarily
ordering tests.

Note that this issue relates to the ordering of Test methods, not
test fixtures. Issue #170 applies to test fixtures and not methods as
written, since it uses a Type as the dependent item. That said, the
examples in #170 seem to imply that ordering of methods is desired.

Basically, we decided that #170 requires too much design work to
include in the 3.0 release without further delaying it. We elected -
in this and other cases - to limit new features in favor of a quicker
release. Assigning #170 to the "Future" milestone doesn't mean it
won't happen. Most likely we will address it in a point release.

The use of an OrderAttribute was viewed as a way of quickly giving
"something" to those who want to control order of test method
execution. We felt we could get it in quickly. In fact, we may have
been wrong. Thinking about it further, I can see that it may introduce
a capability that is difficult to maintain in the face of parallel
test execution. In fact, a general dependency approach may be what we
need. For the moment, I'm moving both issues out of the 3.0 milestone
until we can look into them further.

Depending on an alphabetic order is a crutch, and a pretty weak one considering this could change at any time.

Indeed. We have always advised people not to use that for exactly that
reason. In fact, it is not guaranteed in NUnit 3.0.

Charlie

I have not used MbUnit in years but would like to add to this discussion if my memory serves me correctly.

Assuming [Test Z] depends on [Test A] and I then run [Test Z]: it appears that MbUnit evaluates the result of [Test A] first. If there is no available result, then MbUnit will automatically run [Test A] before attempting to run the requested [Test Z]. MbUnit will only run [Test Z] if [Test A] passes. Otherwise, [Test Z] will be marked as inconclusive and will indicate its dependency on the failed [Test A].

These might provide some insight:
https://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/DependsOnAssemblyAttribute.cs?spec=svn3066&r=1570

https://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/DependsOnAssemblyAttribute.cs?r=1570

@circa1741: We will work on this in the "future" by which I mean the release after 3.0. Full-on dependency is really a pretty complex beast to implement and we are already taking on a lot in 3.0.

Ordering of tests is a bandaid if what you want is true dependency, but it's pretty easy to implement.

By doing ordering (#170) in 3.0 we do run a risk: some users will treat it as the answer to their dependency problems and come to depend on it in the wrong context. Still, it seems better than doing nothing.

I'd like to find the time to write an article on the various kinds of dependency and ordering and how they differ in usage and implementation... maybe... ;-)

Correction: after I wrote the preceding comment, I noticed that #170 is also scheduled for 'future'.

We'll continue to discuss whether it's possible to include some ordering feature in 3.0 as opposed to 3.2, but at the moment they are not planned.

(Samples taken from http://blog.bits-in-motion.com/search?q=mbunit)

When writing integration tests, it is sometimes useful to chain several tests together that capture a logical sequence of operations.

MbUnit can capture this chaining either with dependencies:
(code sample image: DependsOn)
It also allows [DependsOn(typeof(ClassName.MethodName))].

Or with explicit ordering:
(code sample image: Order)

Thanks for the example code. It gives something to aim for. Your first example is relevant to this issue. The second is exactly what we are planning for #170.

I have an idea that is more of a twist for dependency and ordering.

The discussion so far regarding dependency is "go ahead and run Test H (and possibly 8 other tests) only if Test 8 passes." In other words, there is no point in running Test H, because if Test 8 fails then I know that Test H will also fail.

How about a dependency when a test fails?

Scenario:
I need a Smoke Test that covers a lot of ground. So, I am planning a Test Fixture that is an end-to-end test that has basic coverage of many of the SUT's features. The tests on said test fixture will use Test Ordering and are "not independent." The test order will be Test A then Test B then Test C, etc.

Now, because the tests are "not independent" I know that if Test C fails then all the following tests will also fail. Therefore, I need more tests to run in order to get a bigger picture of the Smoke Test.

I need to be able to configure to run Test 1 if Test A fails, Test 2 if Test B fails, Test 3 if Test C fails, etc.

My Test 3 is designed to be independent of Test B, so if it fails then I have a better understanding of why Test C failed earlier. As it turns out, my Tests 4 (for Test D), 5 (for E), 6 (for F), etc. all pass. I then know that only the feature that was covered by Test C is the issue.

Why not just run Tests 1, 2, 3, etc. instead? Because those are isolated and independent tests, so I would not be doing integration tests. Again, I need a Smoke Test that covers a lot of ground.

Maybe something like:

  • [DependsOnPassing("Test so and so")]
  • [DependsOnFailing("Test blah blah blah")]

This will allow finer control in my automation test design.

How about something like these attributes instead:

  • [DependsOnPassing("Test so and so")]
  • [RunOnFail("Test blah blah blah")]

Please note which test each of these is attached to. These attributes should be usable at any of several levels: assembly, test fixture, or test.

[DependsOnPassing("Test E")]
Test F

  • Test E will automatically be executed if its result is unknown.
  • Only then is it determined whether Test F should run.

[RunOnFail("Test N")]
Test I

  • If Test I fails then Test N will automatically be executed.

Test N

  • Test N may be executed on its own.
  • But it will also automatically be executed if Test I fails.

Test E

  • Test E may be executed on its own.
  • But it will also automatically be executed if its result is unknown because Test F depends on this test passing first.
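
To make the proposal concrete, here is a rough sketch of how such attributes and their usage might read. Nothing below exists in NUnit; both attributes and all test names are purely hypothetical.

using System;
using NUnit.Framework;

// Hypothetical attributes only; they simply give the proposal above a shape in code.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]
public sealed class DependsOnPassingAttribute : Attribute
{
    public DependsOnPassingAttribute(string testName) { TestName = testName; }
    public string TestName { get; }
}

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]
public sealed class RunOnFailAttribute : Attribute
{
    public RunOnFailAttribute(string testName) { TestName = testName; }
    public string TestName { get; }
}

[TestFixture]
public class SmokeTest
{
    [Test]
    public void TestE() { /* ... */ }

    // Runs only if TestE passed; TestE is executed first if it has no result yet.
    [Test, DependsOnPassing(nameof(TestE))]
    public void TestF() { /* ... */ }

    // If TestI fails, the independent diagnostic TestN is run automatically.
    [Test, RunOnFail(nameof(TestN))]
    public void TestI() { /* ... */ }

    [Test]
    public void TestN() { /* ... */ }
}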

Copied from #1031, which obviously duplicates this. Hopefully the example helps and the keywords direct others here...

I have a couple of cases where I would like to mark a test as a prerequisite of another test. It would be nice to have an attribute that indicated tests that were prerequisites of the current test.

In the case where routines depend on one another, it is possible to know that a given test is going to fail because a sub-routine failed its test. If the test is a long running one, there really isn't any point in running the test if the sub-routine is broken anyway.

Contrived Example:

public static class Statistics
{
    public static double Average(IEnumerable<double> values)
    {
        double sum = 0;
        double count = 0;
        foreach (var v in values)
        {
            sum += v;
            count++;
        }
        return sum / count;
    }

    public static double MeanVariance(IEnumerable<double> values)
    {
        var avg = Average(values);
        var variance = new List<double>();
        foreach (var v in values)
        {
            variance.Add(Math.Abs(avg - v));
        }
        avg = Average(variance);
        return avg;
    }
}

[TestFixture]
public class TestStatistics
{
    [Test]
    public void Average()
    {
        var list = new List<double> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
        var avg = Statistics.Average(list);
        Assert.AreEqual(4.5, avg);
    }

    [Test]
    //[Prerequisite("TestStatistics.Average")]
    public void MeanVariance()
    {
        //try { this.Average(); } catch { Assert.Ignore("Pre-requisite test 'Average' failed."); }
        var list = new List<double> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
        var variance = Statistics.MeanVariance(list);
        Assert.AreEqual(0, variance);
    }
}

Given the example, if the test Average fails, it makes sense to not bother testing MeanVariance.

I would conceive of this working by chaining the tests:

  • if MeanVariance is run, Average is forced to run first.
  • If Average has already been run, the results can be reused.
  • If Average fails, MeanVariance is skipped.

Is there any forecast of when this feature will be available? I haven't found this information in other threads; maybe this is a duplicate question.

It's in the 'Future' milestone, that means after 3.2, which is the latest actual milestone we have. However, we are about to reallocate issues to milestones, so watch for changes.

In the spirit of extending NUnit to cover testing needs other than just unit tests, I'll weigh in on this conversation. The conversation so far has been a step in the right direction, but I feel there is a key aspect missing.

Unit tests are inherently flat (i.e. setup, run one test, teardown), and should not have any dependency on any other test having been run. Integration testing adds the idea of ordered testing (or dependent tests for some definition of dependent) in which say the setup connects to the system, test A creates some object, test B modifies the object created by A, test C deletes the object created by A, and the tear-down disconnects from the system. This last example seems to be what most people are talking about in this thread.

Consider UI testing of a website, though: we have a page that can navigate to several other pages, and some actions can be performed on that page which may affect the usage of other pages. In the unit testing structure, I need to navigate to page A, create object A.1, navigate to page B, and create object B.1. That's one test, which reimplements the test of creating object A.1 (from a different page). Now consider I need a test that modifies B.1 to point to A.1: I have to redo everything I just did to write this test. If we modify this example to fit the integration test analogy, I have to do slightly less work for the last step, in that I can rely on B.1 being in the same class and thus can use some sort of [DependsOn("Create_B1_Object")] attribute; however, if all the testing for page A is in a separate class, how could I link the two together?

The point here is that UI testing often involves a sort of tree structure for your test suite. All child tests under node B require that B completes successfully, and B depends on node A completing successfully, but neighbors to B do not depend on B being run at all. The tree structure allows you test all the functionality of page A independently of anything else, but since the layout and objects available on page B rely on some functionality of page A, it can depend on all those actions having been done already (by means of PageATests.CreateObject_A1 having been run as a parent).

In the tree structure, there is nothing that means you cannot have flat (unit) testing, or ordered (integration) testing, you can have the best of all of them, but you add on the ability to do UI testing.

Just my $.02 for what it's worth

Exactly my thoughts. You'd want series of tests in which each test may have a dependency on another. Series may or may not have a dependency on one another. More or less what is currently possible with MSTest Ordered Tests, except that MSTest also allows continuation of tests even if a failure occurs.

A perfect example are UI tests which are often (always?) order and context sensitive.

Other than MSTest there is no actively developed testing framework that allows developing such types of tests.

@fgsahoward I'm in complete agreement with you at least at a high level - as a careful reading of past discussions will show.

I say "careful" because I've disagreed with many specific details of proposals that have been made. As you can see, it is now listed as a feature on our backlog. In my view, we'll have to be careful when we implement it for two main reasons:

  1. The feature is often requested by people writing unit tests.
  2. Some folks (usually the same folks) mean something different by dependency than you do.

I'm generally of the opinion that true sequencing of tests, as a series of steps, is essential for many kinds of testing, including those you mention. I think we need to make that available in a specific way that minimizes the chance folks will use it for other purposes. That probably means one attribute to annotate the class or namespace that represents the overall test, as well as another (different) attribute annotating the steps within that test.

IOW, I don't want to stick this feature on top of TestFixture and Test.

The second issue will probably need to be dealt with by a dependency attribute of some kind, possibly including an indication of the relative "strength" of the dependency. IOW, if what you mean by dependency is "It's a waste of time to run B if A has failed, because it will certainly fail" then running test B or not can be at the option of the framework.

In fact, what you have described is what I call ordering. Ordering can be stricter than dependency. As a gui developer (my main focus for many years) I don't want to specify dependencies and have the framework figure out the order. I want to specify the ordering explicitly.

Another way I have talked about this in the past is to call what you are talking about "strict dependency" and the other kind "indirect dependency." That hasn't turned out to be very clear for most people. Perhaps when we produce some test implementations it will get clearer. :-)

@Sebazzz Thanks for the tip about MsTest I may see if we can make that fit our needs in some fashion until this feature has been developed.

@CharliePoole I completely agree, there needs to be a very clear distinction between the types of tests you are about to start writing. I would not want all these different features rolled into one attribute with different configurations. I was more or less suggesting that the underlying implementation could use a tree structure for all types of tests, the attributes simply determine the ordering and dependencies.

I think it is very clear that no one would want to muddle the lines between the different styles of tests, so that you don't end up with a hacked-up mix of testing styles being used for unit testing. Thank you for your notes, and I look forward to the solution you all come up with in the future. :)

@oznetmaster Continuing the discussion from #170...

We have had extended discussions on the nunit lists in the past about what dependency means. Your comments would have fit in there quite well and describe what I have often called "strict dependency."

The thing is that we are driven by users and users who ask for dependency often mean something much less precise than what you describe. I did an analysis once and came up with three primary definitions of dependency. Like much of my past writing, it got lost in moving the project from one place to another, but if we are getting close to implementing something, I should try to recover it.

Basically - and possibly too simply - I found that people saying Test B depends on Test A meant one of three things:

  1. Test A establishes pre-conditions that are needed for Test B to run or to succeed.
  2. If test A fails, then test B will fail, so there is no need to run it. (different from 1 because the correlation is not causality, but due to both tests requiring the same feature in order to work.)
  3. If test A fails, then that's all we need to know - test B will provide no useful information.

A key difference with types 2 and 3 is that the framework may choose to run them anyway if that's more convenient. With type 1, the framework should not run Test B.

Dependencies in case of failure rather than success are a nice extension.

When we had this discussion, everyone with a view thought that their view was "the" definition of dependency and everyone else had to use a different word. :-) I'm less concerned with knowing the right definition than with giving each group what they are asking for.

I fully agree with all of that.

However, "TestB runs after TestA" only establishes an ordering, exactly as if TestA had an order of 1 and TestB had an order of 2. There is no other implied dependency between the two tests. TestB will run regardless of whether TestA succeeds or fails, and there is no implicit assumption that TestA "sets up" TestB. This is different from any of the three cases you listed. This is why it is really an ordering semantic, not a dependency one. It is exactly the same as using ordinals for ordering, but much more intuitive, easier to maintain, and self-documenting.

"TestB runs after TestA" is ambiguous. After TestA starts? finishes? succeeds?

However, I get what you are saying. It's possible to use a dependency syntax that simply translates into the sort of ordering we are doing. I suppose that could be a fourth meaning of "depends on" if we wanted to go there.

Are you suggesting we change what we are doing in this PR? Or do something else in future?

@oznetmaster In the end we will likely have a max of one "OrderAttribute" (or similar) and one "DependsOnAttribute." We have to either pick a clear meaning for each one (if we have both) or provided some sort of property that allows the user to select from several meanings. Would you agree?

Seems reasonable to me. If we go with a dependency tree for all of them, then a single attribute would work, with different conditions for what its dependency means.

It seems that ordinal ordering is equally ambiguous. What does ordering 1, 2, 3 actually mean? Do they start in that order? Do they wait for the earlier ones to finish before starting?

I think that this PR could do it all. Both ordering and dependency require building a logical dependency tree. That is the first step.

What is done with that tree could be two PR's, or this single one. I know you like smaller PRs, but I do not have a good grasp of how big doing it all in one PR would be.

Issues of cross-TestFixture dependency (or TestFixture dependency) also come into play, as do cross-assembly dependencies. I am not suggesting we do them all, but we should at least consider them. This applies to simple ordering as well.

Except this isn't a PR. That's not just a semantic quibble: no work has been done on it, nor is it even scheduled or assigned. OTOH, the other one is something we have code for right now.

I think this is a pretty big deal unless some preliminary work is done separately first. We need to have an arbitrary way to specify dependencies between work items and schedule them accordingly. That's not how we do it now. We have no notion of items waiting for other items to finish. Issue #1096 could lead up to that. It's currently scheduled in milestone 3.4.

Of course, we could do some kind of ad hoc dependency without that, but we would probably end up redoing it in the end. IMO, a generalized dependency graph between work items is what we really want to have. Then it becomes pretty easy to implement various user layers on top of it.

BTW, it feels like we often get into broad discussions like this at the point when we are trying to bring home a release target. I wish we could do it at the beginning of an iteration!

I am very interested in working on this item. I would start by proposing a "syntax" for specifying dependency. Test ordering (non-dependent) would be part of it. I would also like to make it work with the same syntax on TestFixture as well as Test.

As has been noted, this has been around for a long time. Is it still too soon to actually start on this?

No, not too soon to figure out the syntax anyway. I suggest you create a Specification on the dev wiki. I have some ideas that I would like to contribute... as very briefly outlined in one of the comments above. I think the key distinction is between "hard" dependencies, which NUnit must follow, and "softer" ones, which are basically just hints to the framework.

I have a pretty good idea how to implement this as well, including using it to drive the SetUp and TearDown phases discussed in issue #1096.

It would mean changing the test "dispatching" from its current "push" mechanism to a "pull" one.

"Create a Specification on the dev wiki"? No idea how to do that :(

Not sure why you refer to test dispatching as a "push" mechanism. Workers pull items from the queues when they are ready to execute them. I'm planning to work on #1096 and I welcome any suggestions.

Can you edit a wiki? I'll create an empty page in the right place if you like. Otherwise, it can be text here, of course, but doing it on the wiki would give us a head start on documenting it later.

CompositeWorkItem basically iterates across its children, and "pushes" each one to be executed.

I see an implementation of dependency which builds a linear "queue" of every work item in the test, sort of like a scheduler queue in an operating system, which specifies the conditions under which each item can be allowed to run. The dispatcher removes the top "runnable" work item from the queue to run (if parallel, then each worker "thread" would remove the next runnable work item that can be parallelized). When a work item completes, it then toggles the "runnability" of other items in the queue, perhaps even discarding those that will now never be run due to their dependency.

It may even be possible to express parallelizability as a dependency condition.

I see #1096 as part and parcel of the same process. Once multiple work items are created, they would each be assigned a dependency property which will control when they are executed.
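
To make that concrete, here is a rough sketch of a dependency-aware queue that workers pull from. This is illustrative only, not NUnit's actual dispatcher code, and the type names are made up.

using System.Collections.Generic;
using System.Linq;

// A work item becomes runnable once every item it depends on has completed.
class WorkItem
{
    public string Name;
    public List<WorkItem> DependsOn = new List<WorkItem>();
    public bool Completed;
    public bool IsRunnable => !Completed && DependsOn.All(d => d.Completed);
}

class DependencyQueue
{
    private readonly List<WorkItem> _pending = new List<WorkItem>();

    public void Add(WorkItem item) => _pending.Add(item);

    // Each worker "pulls" the next runnable item; returns null when nothing
    // is currently eligible (e.g. all remaining items are blocked).
    public WorkItem TakeNextRunnable()
    {
        var next = _pending.FirstOrDefault(i => i.IsRunnable);
        if (next != null) _pending.Remove(next);
        return next;
    }

    // Completing an item may unblock items that depend on it; items whose
    // dependencies failed could be discarded here instead of run.
    public void Complete(WorkItem item) => item.Completed = true;
}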

I have no idea if I can edit a wiki. Never tried, and have no idea how to. Can you give me a starting "push"? :)

@oznetmaster, editing the wiki is pretty easy,

  1. Pick a page you want to add a link to your new wiki page from, probably the Specifications page,
  2. Edit the page by clicking the button
  3. Add a link by surrounding text with double square brackets like [[My NUnit Spec]]
  4. Save the page
  5. Viewing the new link, it is red indicating the page does not exist
  6. Click on the red link, it will take you to a create new page
  7. Edit the page as you would an issue using GitHub markdown and save

@oznetmaster What you describe is pretty much how the parallel dispatcher already works. Items are assigned to individual queues depending on their characteristics. Currently, it takes into account parallelizability and Apartment requirements. All items, once queued, are ready to be executed.

It was always planned that dependency queues would be added as a further step. I plan to use #1096 as an "excuse" to implement that infrastructure. Once it's implemented, it can then be further exposed to users as called for in #51. I'll be preparing a spec for the underlying scheduling mechanism (dispatcher) as well and I'd like your comments on it.

@oznetmaster I created an empty page for you: https://github.com/nunit/dev/wiki/Test-Dependency-Attribute

Are we committed to calling the attribute DependsOnAttribute? How about something more general like "DependenciesAttribute"?

It is niggling, I know :(

You should write it up as you think it should be. Then we'll all fight about it. :-)

So I have :)

Suggestion: add a section that explains motivation for each type of dependency. For example, when would a user typically want to use AfterAny, etc.

As a developer, it's always tempting to add things for "completeness." Trying to imagine a user needing each feature is a useful restraint on this tendency. Unfortunately, users generally only tell us when something is missing, not when something is not useful to them.

For what its worth, my input:

I'd rather not define a dependency _per test method_; that becomes rather tedious (and hard to maintain) if you have more than a few tests. Instead I want to establish an order between test fixtures. This comes from the following case we currently have: we are using MSTest for ordered tests. Except for the fact that it is MSTest, it works great, because with the test ordering I can express two things about a test: certain tests _may not execute before_ another test, and other tests have a _dependency_ on another test and may only _be executed after_ the other test(s) have been executed.

Let's say that the integration test:

  • Uses a test database with several user accounts in it
  • The first few tests execute some tests using the test data, and also create test data themselves to be used in a later test. Note we have a _dependency_ relationship here. Some tests may not execute if earlier tests fail.
  • Then some browser-automation tests happen. They should be executed as late as possible, because they take a lot of time and we want to have feedback from earlier (faster) tests first.
  • Finally, some logic is tested which deletes an entire user account. Note we have a _must not execute before_ relationship here: If this test were to be done before the other tests, the other tests would fail.

With MSTest I can express this case fine: each 'ordered test' in MSTest can contain ordered tests themselves. Also, ordered tests can have a flag set to abort if one of the tests fails.

             MyPrimaryOrderedTest
              /      |         \
   DomainTests  BrowserTests  DestructiveTests
    /   |  \       /  |   \      |   \
   A    B   C     D   E    F     G    H

For example, MyPrimaryOrderedTest has the flag 'abort on failure' set to false, so there is nothing preventing BrowserTests from executing if DomainTests fail. However, DomainTests itself has the flag set to true, so test C is not executed if A or B fail. Note that A through H can each be either an ordered test definition itself or a test fixture.

To be concrete, I was thinking of an interface like this to express the test fixture ordering:

interface ITestCollection {
    IEnumerable<Type> GetFixtureTypes();
    bool ContinueOnFailure { get; }
}

This is much more maintainable (and obvious) than having dependency attributes on every fixture, and it scales much better as the number of fixtures increases.
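
A hypothetical implementation of that interface, using the illustrative DomainTests/BrowserTests/DestructiveTests fixtures from the tree above (all names are stand-ins):

using System;
using System.Collections.Generic;

// Empty stand-ins for the fixtures in the tree sketched earlier.
class DomainTests { }
class BrowserTests { }
class DestructiveTests { }

class PrimaryTestCollection : ITestCollection
{
    public IEnumerable<Type> GetFixtureTypes()
    {
        yield return typeof(DomainTests);       // fast tests first
        yield return typeof(BrowserTests);      // slow browser automation later
        yield return typeof(DestructiveTests);  // account deletion last
    }

    // Keep running later fixtures even if an earlier one fails,
    // mirroring the MSTest 'abort on failure = false' flag above.
    public bool ContinueOnFailure => true;
}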

Note for test ordering _within_ fixtures, I would simply use the existing OrderAttribute for that. I think test methods should not have inter-fixture test dependencies, because that makes the test structure too complex and unmaintainable.

For test ordering between fixtures I have set up a prototype, and I have found that expressing dependencies between fixtures by using attributes becomes messy, even with only a few tests. Please also note that the prototype wouldn't allow ordering beyond the namespace the fixture is defined in, because each fixture is part of a parent test with the name of the namespace. I would need to implement my own ITestAssemblyBuilder to work around that, but NUnit is hardcoded to use the current DefaultTestAssemblyBuilder.

Update from my side: in the meantime I've managed to implement test ordering without the need to fork NUnit. It is "good enough" for me, so I use it now. It is already a lot better than the fragile state of many MSTest ordered tests.

Out of curiosity -- any chance the Dependency feature is planned for the next release?

No plans at the moment. FYI, you can see that here on GitHub by virtue of the fact that it's not assigned to anyone and has no milestone specified.

For normal priority items, like this one, we don't usually pre-plan it for a particular release. We reserve that for high and critical items. This one will only get into a release plan when somebody decides they want to do it, self-assigns it and brings it to a point where it's ready for merging.

In fact, although not actually dependent on it, this issue does need a bunch of stuff from #164 to be effectively worked on. I'm working on that and expect to push it into the next release.

Relevant: https://stackoverflow.com/questions/44112739/nunit-using-a-separate-test-as-a-setup-for-a-subsequent-test

Got referred here from there. I'm a firm believer that any fault should only ever cause one unit test to fail; having a whole bunch of others fail because they're dependent on that fault not being there is undesirable at best, and it's more than a little time-consuming to track down which of the failing tests is the relevant one.

Edit: Just looking at my existing code, I've got a 'Prerequisite(Action action)' method in many of my test fixtures that wraps the call to action in a try/catch AssertionException/throw Inconclusive, but it also does some cleanup stuff like 'substitute.ClearReceivedCalls' (from NSubstitute) and empties a list populated by 'testObj.PropertyChanged += (sender,e) => receivedEvents.Add(e.PropertyName)'; otherwise past actions potentially contaminate calls to 'substitute.Received..'

Might be necessary to also include some sort of 'Cleanup' method in the dependency attribute to support things like this.

@Flynn1179 - I agree with you when it comes to _unit_ tests. However, NUnit is also a great tool for other kinds of tests. For example, we use it for testing embedded firmware and are really missing this feature...

@Flynn1179 Completely agree with you. There are techniques to prevent spurious failures such as you describe that don't "depend" on having a test dependency feature. In general, use an assumption to test those things that are actually tested in a different test and are required for your test to make sense.
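
A minimal sketch of that assumption technique, reusing the Statistics class from the earlier example in this thread (the fixture and test names are made up). If the precondition fails, the test is reported as Inconclusive rather than Failed:

using NUnit.Framework;

[TestFixture]
public class MeanVarianceTests
{
    [Test]
    public void MeanVariance_IsZeroForIdenticalValues()
    {
        // Re-check the precondition that Average() works. If it does not,
        // this test goes Inconclusive, so only the Average test itself
        // shows up as the real failure.
        Assume.That(Statistics.Average(new[] { 2.0, 4.0 }), Is.EqualTo(3.0));

        var variance = Statistics.MeanVariance(new[] { 5.0, 5.0, 5.0 });
        Assert.That(variance, Is.EqualTo(0));
    }
}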

It was a goal of NUnit 3 to extend NUnit for effective use with non-unit tests. We really have not done that yet - it may await another major release. OTOH, users are continuing to use it for those other purposes and trying to find clever ways to deal with the limitations. Here and there we have added small features and enhancements to help them, but it's really still primarily a unit-test framework.

Personally, I doubt I would want to use dependency as part of high-level testing myself. Rather, I'd prefer to have a different kind of fixture that executed a series of steps in a certain order, reporting success or failure of each step and either continuing or stopping based on some property of the step. That, however, is a whole other issue.

@espenalb I'd be interested to know what you feel is needed particularly for testing embedded firmware.

We are actually very happy about what NUnit offers.

We use a combination of FixtureSetup/Setup/test attributes for configuring the device (Including flashing firmware)

Then we use different interfaces (serial port, jtag, ethernet) to interact with the devices, typically we send some commands and then observe results. Results can be command response, or in advanced tests we use dedicated hardware equipment for measuring device behavior.

The NUnit assertion macros, and FluentAssertions are then used to verify that everything is ok.
By definition, these are all integration tests - and by nature a lot slower than regular unit tests. The test dependency feature is therefore sorely missed - there is no point in verifying, for example, sensor performance if the command to enable the sensor was rejected. The ability to pick up one test where another completed is therefore very valuable.

With the test dependency attribute, we would have _one_ failing test, then _n_ ignored/skipped tests where the skipped tests could clearly state that they were not executed because the other tests failed...

Another difference from regular unit testing is heavy use of the log writer. There is one issue there regarding multithreading and logging, which I will create a separate issue for if it does not already exist.

Bottom line from us - we are _very_ happy with NUnit as a test harness for integration testing. It gives us excellent support for a lot of advanced scenarios by using C# to interact with Device Under Test and other lab equipment.

With ReportUnit we then get nice HTML reports, and we also get Jenkins integration by using the rev2 nunit test output.

we also get Jenkins integration by using the rev2 nunit test output.

@espenalb - Complete aside, but the Jenkins plugin has recently been updated to read NUnit 3 output. 🙂

Can someone give a small update about the status of this feature? Is it planned?
In my department we are doing very long-running tests which logically depend on each other.
Some kind of "test dependency" would be really interesting and helpful for us...

I heard that you are in general planning to "open" NUnit for "non-unit test" tests as well (which is basically our case...). I think this attribute would be one step towards it :-)

This feature is still in design phase, so other than using external libraries there is no built-in support currently.

I am sorry to pull up an old thread, but during the course of working through NUnit with a friend we stumbled into a case where, if we had such a feature, we could start to create integration tests (I realize NUnit is a unit testing framework, but it seems like we could get what we want if we had test dependency).

First here's an updated link to the proposed spec (the link from CharliePoole here https://github.com/nunit/nunit/issues/51#issuecomment-188056417 was dead): https://github.com/nunit/docs/wiki/Test-Dependency-Attribute-Spec

Now for a use case; consider the following toy Program and Associated Tests

namespace ExampleProgram
{
    using System.Collections;
    using NUnit.Framework;

    public static class ExampleClass
    {
        public static int Add(int a, int b)
        {
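            // Note: this returns a - b, so the Add test cases below will fail;
            // that appears to be the point of the example (the dependent Increment tests become noise).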
            return a - b;
        }

        public static int Increment(int a)
        {
            return Add(a, 1);
        }
    }

    public class ExampleClassTests
    {
        [TestCaseSource(typeof(AddTestCases))]
        public void Add_Tests(int a, int b, int expected)
        {
            int actual = ExampleClass.Add(a, b);
            Assert.That(actual, Is.EqualTo(expected));
        }

        [TestCaseSource(typeof(IncrementTestCases))]
        public void Increment_Tests(int a, int expected)
        {
            int actual = ExampleClass.Increment(a);
            Assert.That(actual, Is.EqualTo(expected));
        }
    }

    internal class IncrementTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 1);
            yield return new TestCaseData(-1, 0);
            yield return new TestCaseData(1, 2);
        }
    }

    internal class AddTestCases : IEnumerable
    {
        public IEnumerator GetEnumerator()
        {
            yield return new TestCaseData(0, 0, 0);
            yield return new TestCaseData(0, 2, 2);
            yield return new TestCaseData(2, 0, 2);
            yield return new TestCaseData(1, 1, 2);
        }
    }
}

As an implementer I know that if any unit tests around Add(int,int) fail, there is absolutely no point in running all the additional tests around Increment(int); they would only add noise. However, there does not appear to be a way (short of test dependency) to specify this to NUnit (at least in my searches).

Doing a lot of research online, it seems like others have worked around this by using a combination of approaches (none of which make it explicitly clear that Increment(int) depends on Add(int,int)), including:

  • Using Categories
  • Using a Naming Convention To Control the Ordering of Tests

Neither of these seems to scale well, or even works for that matter when you use other features such as Parallel, and both require some external "post-processing" after the NUnit run has completed.

Is this the best path forward (if we were to use pure NUnit)? Is this feature still being worked on? (In other words, if a PR were submitted, would it jam up anyone else working on something related?)

There is lots of good discussion in this thread about cyclic dependencies and other potential issues with this feature; it is obviously not easy to fix, otherwise someone would have done it already. I am sure adding Parallel and TestCaseSource into the mix also increases the complexity. I intend to dig more at some point, but before doing so I wanted to make sure that this was not a solved problem or already planned and in the works.

@aolszowka
This remains an accepted feature, at least as far as the issue labels go. @nunit/framework-team Am I correct there?

Nobody has assigned it to themselves, which is supposed to mean that nobody is working on it. Smart of you to ask, none the less! If you want to work on it, some team member will probably assign it to themselves and "keep an eye" on you, since GitHub won't let us assign issues to non-members.

I made this a feature and gave it its "normal" priority back when I was project lead. I intended to work on it "some day" but never did and never will now that I'm not active in the project. I'm glad to correspond with you over any issues you find if you take it on.

My advice is to NOT do what I tried to do: write a complete spec and then work toward it. As you can read in the comments, we kept finding things to disagree about in the spec and nobody ever moved it to implementation. AFAIK (or remember) the prerequisite work in how tests are dispatched has been done already. I would pick one of the three types of dependency (see my two+ years ago comment) and just one use case and work on it. We won't want to release something until we are sure the API is correct, so you should probably count on a long-running feature branch that has to be periodically rebased or merged from master. Big job!

This remains an accepted feature, at least as far as the issue labels go. @nunit/framework-team Am I correct there?

Yes - as far as I'm concerned!
