JUnit 5: ConsoleLauncher returns 0 although selected class could not be loaded

Created on 17 Feb 2018  ·  27 comments  ·  Source: junit-team/junit5

We run a set of tests in a CI pipeline and call our test classes like this:

java -classpath junit-jupiter-api-5.0.1.jar:junit-platform-console-standalone-1.0.1.jar org.junit.platform.console.ConsoleLauncher --select-class xy.Test

If class xy.Test cannot be found on the classpath, an error message appears but ConsoleLauncher's exit code is 0! Since our CI system runs unattended, the exit code is the only signal we can act on!
As far as I can tell this behaviour was changed in JUnit 5.0.0 M2, but I regard that as a mistake: if I select a class via --select-class and the class cannot be found, then something has gone wrong!

As a countermeasure I hacked (by means of reflection) org.junit.platform.commons.util.BlacklistedExceptions, overwriting the blacklist field so that it contains OutOfMemoryError (the default) and PreconditionViolationException (the exception thrown when the class cannot be found).

If the standard behaviour is not going to be changed, I think there should at least be a better way to get this behaviour!

(Originates from a post on Stack Overflow.)

Labels: platform, reporting, up-for-grabs

Most helpful comment

No offence, but adding PreconditionViolationException to BlacklistedExceptions does not sound like a solution to me but a hack that works only by coincidence.

I think we could sanity check selectors and introduce a --strict mode. The sanity checks could verify the following:

  • --select-class: class exists
  • --select-method: method exists
  • --select-package: package exists

etc.

@junit-team/junit-lambda Thoughts?

All 27 comments

Thanks for creating the issue! There are use cases in which build tools or IDEs pass in test classes and discovering an empty test plan is not considered an error.

Do you execute test classes one by one? If so, we could add a --fail-if-no-tests option (similar to what the Maven Surefire plugin has) to ConsoleLauncher.

During the build step we build up a list of test classes to be executed. During the test step we call ConsoleLauncher, passing each class to test via --select-class.

You can define the classes / methods to be executed in several ways - looked up via classpath scanning or explicitly specified. I consider it an error if explicitly specified test artifacts cannot be found. This means (IMHO) I would expect an exit code != 0 when:

  • --select-class: if class could not be found
  • --select-method: if method could not be found
  • --select-package: if package does not exist

About your suggestion:
--fail-if-no-tests would not really help in this situation, since some classes may be found and some not.

Possible solutions:

  • If you don't want to change the current behavior, a --strict (or --not-lenient) switch may help.
  • Since I call ConsoleLauncher at the Java level, I could live with a solution where I configure the BlacklistedExceptions via a static method. For this, the class would have to be opened up and accessor method(s) added.

Now I'm curious: Why do you build up the list of test classes yourself instead of using JUnit's classpath scanning?

This has historical reasons from a time when ConsoleLauncher was not yet available.
We have different types of tests which get categorized by a shell script. Thus I wanted to stick with the --select-class approach. (Otherwise selection would happen in two different places.)

_Coming back to my suggestion:_ What exit code would you expect when running:
ConsoleLauncher --select-class non.existing.Klass

For comparison: if you try to compile an unknown class, javac returns 1:

javac non.existing.Klass
error: Class names, 'non.existing.Klass', are only accepted if annotation processing is explicitly requested
1 error
echo $?
1

Same if you try to delete a non-existing directory or file in Bash (rm nonExisting).

What about extending the proposed --fail-if-no-tests into an option that takes an integer as a lower bound? If the number of found tests is below that number, the exit code is non-zero.

Something like: --assert-minimum-tests-found 123 or --assert-maximum-tests-failed 0

This could be applied to more/all numbers that are gathered by the test execution summary:

Test run finished after 2376 ms
[        70 containers found      ]
[         1 containers skipped    ]
[        69 containers started    ]
[         0 containers aborted    ]
[        69 containers successful ]
[         0 containers failed     ]
[       225 tests found           ]
[        14 tests skipped         ]
[       211 tests started         ]
[         2 tests aborted         ]
[       209 tests successful      ]
[         0 tests failed          ]

These could be guarded by the following strict assertions:

--assert-containers-found 70
--assert-containers-started 69
--assert-containers-successful 69
--assert-containers-skipped 1
--assert-containers-aborted 0
--assert-containers-failed 0

--assert-tests-found 225
--assert-tests-started 211
--assert-tests-successful 209
--assert-tests-skipped 14
--assert-tests-aborted 2
--assert-tests-failed 0
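Until flags like these exist, a wrapper script could approximate them by parsing the summary output. Here is a minimal sketch; the summary text is inlined for illustration (in a real pipeline it would be ConsoleLauncher's captured stdout), and the assertions stand in for the hypothetical --assert-tests-found / --assert-tests-failed options:

```shell
# Sketch: extract counters from the ConsoleLauncher summary and assert on them.
# In a real pipeline "$summary" would hold the captured ConsoleLauncher output.
summary='[       225 tests found           ]
[         0 tests failed          ]'

# Pull the number preceding a given label, e.g. "tests found".
count() {
  printf '%s\n' "$summary" | sed -n "s/^\[ *\([0-9][0-9]*\) $1 .*/\1/p"
}

found=$(count "tests found")
failed=$(count "tests failed")

# Hypothetical equivalents of --assert-tests-found 1 / --assert-tests-failed 0:
[ "$found" -ge 1 ] || { echo "ASSERT FAILED: no tests found"; exit 1; }
[ "$failed" -eq 0 ] || { echo "ASSERT FAILED: $failed tests failed"; exit 1; }
echo "assertions passed: found=$found failed=$failed"
```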

Thanks @sormuras for your thoughts.
If I compare this variant with my current workaround, then (sorry to say) I still prefer mine: adding PreconditionViolationException to org.junit.platform.commons.util.BlacklistedExceptions does exactly what I want: an exit code != 0 when the class could not be loaded.
Only two things about it are ugly:

  • I have to use Java reflection.
  • It is not possible to apply this from the command line.

What would be the problem with adding PreconditionViolationException to org.junit.platform.commons.util.BlacklistedExceptions by default (as has been done for OutOfMemoryError)?

I could use --fail-if-no-tests, but I think --assert-minimum-tests-found is even better. I imagine a scenario where some tests are always run (e.g. tests that verify certain conventions) and it is known that they can be discovered. Yet it happens from time to time that a dev makes a mistake in configuring the discovery of specific tests. In such a case --fail-if-no-tests would not help to detect the mistake, and --assert-minimum-tests-found would be handy.

Remotely related to #542 and marking this issue as "up-for-grabs" too.

@pfuerholz or @robstoll or s/o else want to file a PR for an --assert-minimum-tests-found x option?

Btw, I think you can drop the Jupiter API jar from your command's classpath in the initial issue description. The standalone jar already contains it.

I'm not convinced --assert-minimum-tests-found is the way to go. How would this work? Someone would set a minimum number in some build script and never update it again?

Depending on one's needs that would suffice, and one would increase the number from time to time when there are major changes in the discovery mechanism - for instance, if one switches from --scan-classpath to --include-...
Yet I can also imagine that someone wants even more control/assurance and would like to know whether certain new tests have been included. In that case it would not be enough, and we would need something entirely different - perhaps a mechanism to check whether certain tests/packages (maybe identified by their UniqueId) are included or not.

I think --assert-minimum-tests-found might be useful when test classes get looked up via --scan-class-path. In that case I rely on ConsoleLauncher to find all the tests I want executed. In contrast, in my case I tell ConsoleLauncher via --select-class exactly which test classes to execute. If a selected class cannot be found, I thought it common practice for ConsoleLauncher to return an exit code != 0. Don't you think? (This is even something that can be implemented very easily!)

If the defined class cannot be found I thought it to be common practice that ConsoleLauncher returned an exit != 0. Don't you think?

I'm not convinced that such a use case is all that common.

I imagine people would typically implement a custom TestExecutionListener if they want to know about specific outcomes.

For your particular use case, you could implement a TestExecutionListener that checks that at least one test class was executed and have that TestExecutionListener registered automatically via Java's ServiceLoader mechanism. You could then bundle it in a stand-alone JAR and include that JAR on the classpath when executing the ConsoleLauncher from the command line.
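For the ServiceLoader registration, the JAR needs a provider-configuration file named after the listener interface's fully qualified name. A minimal sketch of that layout (com.example.FailIfNothingExecutedListener is a hypothetical class name standing in for your own implementation):

```shell
# Sketch: directory layout that, once packaged into a JAR, lets the JUnit
# Platform discover a custom TestExecutionListener via java.util.ServiceLoader.
# The file name must be the fully qualified interface name; its content is the
# fully qualified name of your (hypothetical) implementation class.
mkdir -p listener-jar/META-INF/services
printf 'com.example.FailIfNothingExecutedListener\n' \
  > listener-jar/META-INF/services/org.junit.platform.launcher.TestExecutionListener
cat listener-jar/META-INF/services/org.junit.platform.launcher.TestExecutionListener
```

The compiled listener class would live next to META-INF inside the same JAR, and the JAR goes on the ConsoleLauncher classpath.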

Another option would be to enable the _summary_ details output of the ConsoleLauncher and then simply grep the SYSOUT output. That's very common for scenarios like the one you describe.
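A sketch of that grep approach, using the summary line format shown earlier in this thread (the summary file is faked here; in practice it would be ConsoleLauncher's redirected output):

```shell
# Sketch: fail the CI step if the captured summary reports 0 tests found.
# summary.txt stands in for ConsoleLauncher's output; we fake it here.
printf '[         0 tests found           ]\n' > summary.txt

if grep -q '\[ *0 tests found' summary.txt; then
  echo "no tests found - failing the build"
  exit_code=1
else
  exit_code=0
fi
echo "exit code would be: $exit_code"
```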

@sbrannen Have you seen my findings (a few posts above) that if you try to remove a non-existing file or javac a non-existing class, the return code is non-zero? Hence, when you select code to test via:

  • --select-class
  • --select-method
  • --select-package

I would expect the return code to be non-zero if the class/method/package could not be found!
Realizing this change is (at least for --select-class) really easy: just add PreconditionViolationException to org.junit.platform.commons.util.BlacklistedExceptions and you are done! (This is exactly what my hack does.)
If the default behavior cannot be changed, a --strict flag (or something similar) could be introduced...
(I still don't understand why this idea is not taken up!)

No offence, but adding PreconditionViolationException to BlacklistedExceptions does not sound like a solution to me but a hack that works only by coincidence.

I think we could sanity check selectors and introduce a --strict mode. The sanity checks could verify the following:

  • --select-class: class exists
  • --select-method: method exists
  • --select-package: package exists

etc.

@junit-team/junit-lambda Thoughts?

I think providing such an opt-in feature via an explicit "strict" mode should be fine.

Personally I would do it the other way round: be strict and provide an option to be loose. I think it is better to have an "aha, JUnit has implemented a check" moment than an "aha, JUnit does not care" moment -- well yes, I have chosen biased terms; maybe it's just personal taste. As long as it is documented properly, it should not matter.

Personally I would do it the other way round, Be strict and provide an option to be loose.

Well, that would be a breaking change.

So regardless of whether it may be preferential to be strict by default, that ship has already sailed: the JUnit Platform has not performed such a check since its initial release. Consequently, we generally have to err on the side of caution with regard to backwards compatibility.

unless you declare it as bug in which case you can fix it ;)

true... but I'm not _convinced_ that it's a 🐛.

Tentatively slated for 5.2 M1 for _team discussion_

Thanks for taking my point seriously! Whether you change the default behavior or add a "strict" mode does not matter to me.

Team Decision:

  • The Launcher cannot reliably decide whether a selector was resolved into a TestDescriptor that was actually executed. The sanity checks proposed above would only work for Java classes, but you could use a ClassSelector for a JavaScript class, etc.
  • We will implement --fail-if-no-tests to check whether the number of found tests is greater than 0.
  • For any additional validation logic, please implement your own TestExecutionListener and register it (see https://junit.org/junit5/docs/current/user-guide/#launcher-api-listeners-custom)

Thanks for your information though I am still a bit puzzled by your explanation:

  • You are on the way to using JavaScript (and other scripting languages) in annotations, as I have seen here.
  • Do you want JUnit to test JavaScript code? (IMHO there already exist some tools for this.)

If you wanted to let --select-class be followed by a JavaScript clause, it would be even more helpful to ensure that its evaluation returns a class/method which can be found on the classpath. In the case that JUnit shall run JavaScript code, there would still be the case that the code under test cannot be found! (Showing an error when the code under test cannot be found on the classpath (in Java) or in the filesystem (for JavaScript) would be valid either way...)

What you do in this case is up to you. (We can live with the status quo...) My point is that swallowing an error case (here: not finding the artifact to test) is a bad habit.

Do you want JUnit to test JavaScript code? (IMHO there exist already some tools for this.)

No, that was not Marc's point.

The topic of _this_ issue is the ConsoleLauncher which is part of the JUnit Platform.

The ConsoleLauncher has nothing to do with the JUnit Jupiter programming model: they are separate subprojects of "JUnit 5".

The _Platform_ is generic and can support testing frameworks written in various programming languages and test formats.

But... somebody _could_ introduce a TestEngine that runs on the _JUnit Platform_ that executes tests in JavaScript (for example to test JavaScript with JavaScript).

Third-party test engines already exist for Kotlin, Scala, Groovy, etc.

When Marc wrote, "but you could use a ClassSelector for a Javascript class etc.", that was just an example.

He could just as easily have said Ruby, Groovy, Kotlin, Scala, or any other language that runs on the JVM.

The point is that the ConsoleLauncher will never have intimate knowledge of the test engines currently being executed on the JUnit Platform. Consequently, the ConsoleLauncher "cannot reliably decide whether a selector was resolved into a TestDescriptor that was actually executed."

Does that make more sense now? 😉

_in progress_ by @gaganis in #1399
