Junit5: New annotation "@Requirement"

Created on 25 May 2019 · 12 comments · Source: junit-team/junit5

_On the last JUG I met @sormuras and he motivated me to participate for the first time by presenting a feature-request. So here it is._

Motivation:
When we write tests we do this mostly because of two reasons:

  1. Avoid technical errors (like NPE)
  2. Ensure our code fulfills the defined requirements / avoid functional errors

But as of today (_as far as I know_) the only way to "link" a test to a requirement is by using its (displayed) name, for example putting the id of the requirement (_I'll call this "req-id" in the following lines_) in front of the name, like req123_testSomethingInThisMethod. I think you all agree that this is not a good way to show that this method is a test to ensure the requirement with the id req123.

Suggestion:
Therefore I would like to suggest adding a new annotation, @Requirement.

This annotation should be used to show that a particular test is written to ensure a specific requirement. The annotation takes the req-id as a string parameter, e.g. @Requirement("REQ-123"). In the test result this req-id is then published as an attribute of the test case. This allows tools which parse the result to show whether all tests annotated with a given req-id have passed and therefore whether the requirement is fulfilled.
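A minimal sketch of what the suggested annotation could look like. Both the annotation type and the test class are hypothetical illustrations, not an existing JUnit API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation as suggested: carries the req-id as a plain string.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Requirement {
    String value();
}

// Illustrative test class: this method verifies requirement REQ-123.
class OrderServiceTest {
    @Requirement("REQ-123")
    void totalIncludesTax() {
        // ... actual test logic would go here ...
    }
}
```

A reporting tool could then read the req-id from the annotation via reflection and attach it to the test case in the result output.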

I don't see the need for making this annotation repeatable on a method. In my opinion each test method should only verify one aspect. I also don't see the need for using this annotation on class level, as in my opinion a test class - especially for small methods - may contain both technical and functional tests.

Maybe this can also be taken into the considerations about a standard test result format (https://github.com/ota4j-team/opentest4j/issues/9)

Outlook:

To be honest, printing an additional attribute containing the id of a requirement is only one part of a useful functionality to check whether all requirements are fulfilled: without a list of defined requirements, the report only contains information about tests which are annotated with a requirement, but the report (tool) doesn't know whether there were tests for all requirements. While I'm quite sure the suggested annotation should be a feature of JUnit, I'm not sure whether the JUnit team sees the "linking functionality" as one too. I'll give a short description of what I mean, then maybe it's clearer:

To compare a list of requirements to the list of test results, a comparator needs this list as an input. So I see the need for some module which takes this list (e.g. from a file, a service call, etc.) as an input, compares it with the test results, and then creates a report which shows the reader which requirements don't have tests at all, or which tests of a given requirement have passed/failed.
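Such a comparator could be sketched roughly as follows. All names here are illustrative, and for simplicity each req-id maps to a single aggregated pass/fail result rather than a list of individual test outcomes:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;

// Illustrative coverage check: compare the defined requirement list
// against the req-ids attached to test results.
class RequirementCoverage {

    /** Requirements that have no linked test at all. */
    static Set<String> uncovered(Set<String> defined, Map<String, Boolean> resultsByReqId) {
        Set<String> missing = new TreeSet<>(defined);
        missing.removeAll(resultsByReqId.keySet());
        return missing;
    }

    /** Requirements whose linked tests did not all pass. */
    static Set<String> failed(Map<String, Boolean> resultsByReqId) {
        return resultsByReqId.entrySet().stream()
                .filter(entry -> !entry.getValue())
                .map(Map.Entry::getKey)
                .collect(Collectors.toCollection(TreeSet::new));
    }
}
```

A report generator could then render `uncovered` as "no test exists" and `failed` as "requirement not fulfilled".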

But maybe this is just a second step and - as mentioned - maybe not even in focus of JUnit.

Labels: jupiter programming model, reporting


All 12 comments

FWIW Spock has an @Issue annotation.

@Bukama Have you considered using @Tag() instead?

Thank you for your response.

According to the JUnit documentation, @Tag exists to only run specific tests. So in the given context I could use a @Tag to only run the tests for a specific requirement - but that's not what I suggested.

Spock's @Issue annotation is really what I suggested, at least for marking that a test belongs to a specific issue. I didn't find anything about using the annotation in reports yet, but to be honest I haven't had much time to investigate that. The "problem" with using the @Issue annotation is - well, it's Spock and not JUnit, and would therefore force everyone to write their tests in Groovy and in a specific way (if I understand the documentation correctly).

For myself, I know Groovy because I've written some Jenkins pipelines with it, but none of my colleagues knows it. Yeah, it's not that hard to learn as it's close to Java, but I would be happy if my colleagues at least wrote more tests, and I'm sure they will write even fewer if they have to learn a "new" language and way of working for it. Sure, this is a "personal dilemma".

I was only looking to Spock for inspiration. You could define your own @Requirement annotation for now. We'll look into a new reporting format in #373. I think it might be a good idea to include all (or whitelisted) annotations in there.

Thank you. I am preparing my work area so this should be ready before I go to the hospital in a few days. When back home I'll try to start working - which will be slow as I have never worked with Gradle and Kotlin (aside from feature-related things like annotation processors). But there will always be new challenges :)

I do think that this annotation is useful, especially whenever using requirement/test management tools, so you can easily trace the automated tests to the requirements they cover.
Whatever annotation is used for this purpose, it is important that it is persisted in the generated JUnit XML report, whose format is still to be decided.
If I may, I would also suggest another annotation (e.g. "Automates") - I can create another thread for it - having the aim of identifying some existing "test case" in the test management tool that is being automated by a certain JUnit based test. What do you think?

The fact that we're coming up with lots of different names tells me that we'll ultimately need a mechanism to use your own annotations for this.

@marcphilipp, well, perhaps. I do see the need behind the @Requirement annotation as the most important one, though, so if it is easy to support at a first stage and to make something more generic and "pluggable" afterwards, all the better.

@marcphilipp I agree with @bitcoder.

In junit-pioneer/junit-pioneer#135 I wrote something that may pertain to the discussion here:

[Gathering all test results for an issue and report them] sounds much more interesting! I'm imagining a dynamically generated test tree where each issue is a node and each test belonging to that issue is a subnode. By storing the results of the original tests when they were run, the generated tests could trivially pass or throw the same exception, effectively giving a second view on the same test results.

Just to inform here (and as I'm the opener of this issue, I'll close it): we released JUnit Pioneer version 1.1.0 with an @Issue annotation.

@Bukama thanks for the feedback; how will that be persisted in the XML results report? Or it won't?


It won't. The extension passes the results to registered services. I started to create an IssueReport for that, but have only just set up the repo, because I was unsure how we (the Pioneer team) will name the "API classes". Now that the extension is released I'll move on with it (most probably on Friday / over the weekend).
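To illustrate the "results passed to registered services" idea, here is a rough sketch of what such a service could look like. The interface, record, and class names are illustrative guesses only, not the actual junit-pioneer API:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: one test's outcome, linked to an issue id.
record IssueTestCase(String testId, String issueId, boolean passed) {}

// Illustrative only: a "registered service" that receives all
// issue-annotated test results after the run.
interface IssueResultConsumer {
    void accept(List<IssueTestCase> results);
}

// One possible consumer: group the results by issue id for a report.
class IssueReportBuilder implements IssueResultConsumer {
    Map<String, List<IssueTestCase>> byIssue;

    @Override
    public void accept(List<IssueTestCase> results) {
        byIssue = results.stream()
                .collect(Collectors.groupingBy(IssueTestCase::issueId));
    }
}
```

A consumer like this could feed a report showing, per issue, which linked tests passed or failed - independently of the XML report format.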
