Right now, there are some test applications and unit tests in the Godot tools application. For example:
./godot-tools -test string
This is great, but you have to run each test individually. In addition, there are tests that aren't even listed under the tools application's help. Some of the unit tests are broken, and some are more general demo applications. I think it would be nice to have a separate application that only runs the unit tests and can run all of them with one call. This would make it easier to write tests and could help identify problems in Godot's growing codebase.
There are a wide variety of C++ libraries that handle unit tests. Some of the more popular ones are:
The benefit of using a library like this is that they're widely used and can be very powerful. The downside is that it's yet another dependency that developers need to install.
My proposal would be to use a header-only unit testing library. This would mean it could be included in the Godot source code in the drivers folder. I've been playing with Catch (YouTube demo), a library that has these features and seems to be stable and maintained. It's licensed under the Boost Software License.
So, what do people think?
Godot is really missing unit tests, and that's why so many things break without anyone noticing until much later. I understand why programmers run away from automated tests (they're boring and seem to be a waste of time).
Unit testing would also be great for documentation, and both could be done simultaneously, since it pretty much involves figuring out how a certain function works. Also, this would be a start toward higher-level automated tests, maybe even ones that simulate clicks on the screen.
I'm not familiar with any C++ unit testing framework, but this Catch seems very good.
There are a lot of tests for Godot functionality, but unit testing is not magical. If a bug was not found, it simply was not found.
I agree, it's not magic, you need people to find and fix bugs. However, I still think it would be nice to at least improve the unit tests that are already there and make them easier to run.
Ahh, you mean the ones with -test. Those are more useful for porting Godot to new platforms than as unit tests.
I believe actual unit tests should be done using script.
I believe actual Unit tests should be done using script..
Do you mean with GDScript or something else?
Yup, GDScript, since the APIs in C++ and GDScript are pretty much the same.
I was thinking C++ might be better since it could link against and test functionality in the editor and supporting tools as well. It would provide the ability to test the entire codebase instead of just what GDScript exposes.
GDScript really has access to pretty much all of the codebase.
If you want to do unit tests, maybe the 1% that GDScript doesn't have access to can be done in C++, but it's pointless to do everything in C++.
OK fair enough, you're the expert on the codebase. :)
Given that https://github.com/godotengine/godot-tests is now a thing, should we open issues about this/move this issue there?
Yes.
How about testing GDScript and making tools like Unity Test Tools (https://www.assetstore.unity3d.com/en/#!/content/13802), integrated into the editor?
Hi!
Is godot-tests still the way to go to write unit tests?
I am currently working on a really small pull request, but not having a test suite makes me afraid of breaking something :fearful:
I really like @reduz's idea of writing unit tests in GDScript, by the way :)
So did anything change in the meantime? Is there any unit testing/integration testing framework being used?
Having unit tests, integration tests and the like would greatly help day-to-day development (by giving more confidence that what you're writing is not going to break something now or in the future), improve the stability of the program, avoid a great deal of regressions, and improve the general quality of the code.
Moreover, I would not suggest using scripting for unit tests; those should have as few layers as possible between the test itself and what you're trying to test (normally a function), because otherwise you're testing more than one thing.
While this might seem like a time saver when writing tests, you're potentially hiding bugs that cannot be triggered from the scripting interface.
Tests written through the scripting interface are actually integration tests, because they put two different modules/layers in communication; there you can test the communication layer and the result of the module interaction, knowing (from unit tests) where the problem is when something breaks.
@Smjert no, nothing changed. Godot does not have unit tests.
I still think it's a valid point, as tests can catch regressions quite well.
I also agree. It's certainly not easy to do, especially not when having to retro-fit it to an application or library that wasn't written with tests in mind, but I believe doing it incrementally is possible and it wouldn't take long for us to start reaping the benefits.
Unit tests and integration tests tend to target different classes of bugs, and in my experience you ideally want a mixture of both (though the majority should be unit tests). Integration tests tend to be more enticing since they can often simulate actual user interactions, which is really all we care about, but they tend to be slower, more expensive to run, and harder to diagnose when they fail.
In contrast, unit tests should in general be very fast and cheap to run, and when they fail it should be fairly straightforward to determine what went wrong. A good unit test suite, plus a handful of integration tests to cover higher-level interactions and integration points, can often provide a great safety net against regressions and allow writing new features and refactorings with confidence.
And as for impact: besides just actually preventing regressions, it might make it easier for new contributors to get started and learn the codebase. Personally, for the (very few) times that I've sent a PR so far, I've been a bit worried that the changes might cause a breakage unknown to me, and/or mildly frustrated that, after the feature or fix gets merged, it sometimes immediately gets broken accidentally. Having an automated test suite (and automatically running it before merging PRs) seems like it could make things more stable and less anxiety-inducing.
I agree, it would be very good for the long term sustainability of Godot. It would be good to start with something like the math library, then maybe the string library, etc. Then to nodes, then eventually one day on to more complex things like physics and rendering.
edit: It looks like some of these tests are already there. It would be great if this was all integrated into the CI system somehow if not already :)
I opened a PR with the intent of inciting further discussion. I'm just tickling the String class here, but I'm already running into some issues. Feedback welcome.
I definitely like the idea of it being written in GDScript so that just about anybody can contribute. I think the main portion of making the tests actually mean anything is to automate it somehow as part of the godot-engine PR/CI system. Meaning:
1) Person makes a PR
2) Project is built to various platforms (like it currently does)
3) These tests are automatically run
4) If any test fails it will show as an error (like it does when a build fails)
5) If all tests pass and all builds pass it will show as passing (like it does when a build succeeds)
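The steps above could be sketched as a CI config fragment along these lines (the build target names and the test flag are assumptions for illustration, not Godot's actual build options):

```yaml
# Hypothetical .travis.yml fragment -- placeholder commands only.
script:
  - scons platform=x11 tools=yes tests=yes   # 2) build the project
  - ./bin/godot --test                       # 3) run the tests; a non-zero
                                             #    exit code fails the build (4),
                                             #    otherwise the check shows green (5)
```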
Or, my secondary preference would be having a system that builds once or twice per day and runs these tests, and bisects the repo if an error is found.
I may be wrong, but I believe this isn't currently being done, right?
If I'm right, then I think this would make the godot-tests project mean a lot more than it does now. If it's not automatically run on a regular basis, then right now errors are only caught when a person feels like checking it.
Finally, whenever a bug is found, (when it makes sense) a test needs to be added to this project as a requirement of their godot-engine PR. Obviously, some bugs (especially UI) require more than a test to reproduce and wouldn't necessarily make sense to add a test for.
A couple of years ago I made a PR to also allow correctly displaying camel-cased exported variables from GDScript (as opposed to just snake-cased ones) in the inspector. The code for that was correct, but it broke something else and had to be reverted. It could have easily been found and fixed before the merge if the right tests and an automated system like this had been in place. :)
Definitely agree that, ideally, tests would be executed as part of the normal PR workflow (I don't know if it's already being done, but I'd guess not, judging by the Travis CI output). Writing some higher-level (integration-level) tests in GDScript is probably a good idea, but I'm not sure if it'd be a good fit for unit-level testing, since we'd essentially be exercising the entire script parsing/execution subsystem for every test. This would not only have a larger scope, but also likely run slower, and having fast tests would be especially important if they were incorporated into the PR workflow and (for example) were required to pass before merging.
I suspect that such a workflow would be easiest to set up and trigger if the tests were actually part of the godot repo itself, instead of being siloed under godot-tests (curious what the reasoning behind that was?).
I think we may need to fork the discussion here.
This issue, and much of the discussion herein, has been focused on unit testing. I strongly agree with those arguing that unit tests belong in the same repository as the source, should be developed alongside the code by core developers, and can only be properly realized when added to the existing continuous integration systems.
That said, I believe that the core team has made two decisions thus far. First, that they welcome tests written in GDScript. Second, they have established a separate repository for those tests. This appears to be the beginning of a quality assurance discipline. If it quacks like a duck ...
Unit testing and quality assurance are two very different things. I might summarize the most important differences as:
If I'm reading the situation correctly, I might suggest that the quality assurance discussion be continued here. This issue should then be reserved for further unit testing discussion.
My 2 cents here
I strongly agree with those arguing that unit tests belong in the same repository as the source
I also consider it really important to have the tests and the code of a project in the same repo.
From my point of view, splitting things into two repositories means more work for contributors and reviewers, because we end up with two deeply related PRs.
Long story short, nobody will bother adding tests :(
On a more practical subject, the Godot-Python project has a good number of unit tests that come as a regular Godot project.
The idea is simple: the project only contains a main scene with a single node attached to a script. The script's _ready function then bootstraps the test framework (pytest in my case); once the test framework has done its job, OS.set_exit_code(result) and get_tree().quit() are called to close the project.
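The bootstrap described above might look roughly like this in GDScript (run_all_tests() is a placeholder for whatever framework gets plugged in, not a real function):

```gdscript
# Sketch of the bootstrap node's script (Godot 3.x API).
extends Node

func _ready():
    var result = run_all_tests()  # 0 on success, non-zero on failure
    OS.set_exit_code(result)      # propagate the result to the shell/CI
    get_tree().quit()             # close the project once tests are done

func run_all_tests():
    # Placeholder: in Godot-Python this delegates to pytest.
    return 0
```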
Good thing about this:
So Godot still doesn't have any (unit or integration) tests?
Is this discussion dead? It seems like there's some resistance to adding 'appropriate' unit testing (NOT GDScript, NOT a separate repo). I'd like to contribute to this project but sending a PR with no test coverage is like standing up from the toilet before wiping; I'd rather not do it.
It's not about resistance. It's more about not having a clue about how to do it.
So far, I have not heard one single proposal (as in, specifically what would you like to test and how) explaining how a unit testing system for a game engine like this would work.
Rendering and physics are not deterministic to begin with, so unit testing those is kind of pointless. What else would you like to test?
Basic, discrete behavior. An example: I want to change pushed vs. hover behavior (base_button.cpp). Since there are no existing unit tests that exercise get_draw_mode() (that I could find), it's speculation why the implementation exists as it does. I would have to rely on commit messages, and there's no guard against future changes introducing defects; the only way to know whether I'm breaking expected behavior is manual discovery.
In the get_draw_mode() example, the behavior should be defined to a specification in the form of a set of unit tests, which also helps document code where it is difficult to self-document, and becomes part of a growing regression strategy. I want to write the specification for this function so it doesn't break in the future, ideally becoming part of the CI build pipeline and PR process later down the road.
It could be as simple as including a lightweight test framework such as Catch and adding a few tests as pavement to start off with. Current issues should be prime candidates for 'what to test', where reasonable. If it's as simple as putting up a PR with an example, I'm happy to do so.
I'm not advocating unit testing graphics-layer stuff (non-determinism is another discussion); that firmly belongs in the realm of functional/acceptance or manual-style testing.
Another example would be the ResourceSaver. I've run into an issue with it recently (#18801). A unit test could be used to document the issue and, when it's fixed, make sure it doesn't occur again.
@timmbrooks Given the size and complexity of the project, having and maintaining unit tests for every single one of those functions that will never change would take more work than maintaining the project. I also don't think contributors would want to contribute if this was a requirement for every single thing.
Also, at this point, we are still changing core APIs pretty often, so most work spent on creating tests would be wasted after some months.
@cmfcmf that does seem like a regular bug, though. If it makes you feel better, having bugs recur is very rare so far. Most of these happen because of the huge amount of core API changes we made for 3.0.
@reduz My perception is very different. I don't want to contribute without having some means to protect code changes in place. Based on some of the dialog here, I think others may tend to agree.
I also can't agree that these functions will never change (base_button.cpp). The illusion of 'maintaining the project' seems to lean more towards break/fix than towards new features.
People are asking for unit testing. I don't see how you can get out from under the mountain of issues that exist if you don't stop the bleeding. Why would you not want regression tests for any of the currently tracked issues as a starting point? I'm not advocating writing tests just for the sake of writing tests.
@reduz I think you have a very good point regarding how the frequency of specification changes for large parts of the project might increase the burden of having automated tests, potentially making the tests much more of a hassle than they are worth. I do believe that one could cherry-pick certain parts of the system for writing tests, namely those parts where the tests would give the greatest value in regards to debugging, quality and saving time, and where the specification has been stable and is expected to continue being stable. But depending on the approach, it might not be that easy to figure out which tests fit that description.
I have two main proposals for more concrete kinds of tests that I believe might be well worth it, and that also address this point. Tests for both would be written in GDScript. Common to both proposals is that their value depends on the test framework/harness supporting as many platforms as possible.
Deterministic automated tests for functionality where the implementation may differ depending on context, such as platform and driver, or where behaviour may (wrongly) change depending on context.
These kinds of tests would fit very well with CI support that runs the tests automatically as part of pull requests on all relevant platforms. For instance, if you are developing some new functionality or fixing a bug, it can be laborious and error-prone to test the changes across all relevant platforms/drivers, especially if you want to test combinations as well. If you would like to test on both Windows, Linux, macOS, Android, iOS, Web (Firefox), Web (Chrome), etc., it would likely be too laborious to test on all the combinations that you would like to test for, and creating a short and quick automated test would likely be well worth it (assuming the required test infrastructure is up and running), even if you delete the test immediately afterwards.
Furthermore, it ought to be fairly straightforward to write guidelines for this case: in order to include an automated test for a given part of the system, the test would have to either cover functionality that is implemented in at least two different drivers or the like, or there would have to be a risk that the implementation might well work in one context and not in another. The tests would also be required to be deterministic.
This is inspired by another game engine: https://github.com/enigma-dev/enigma-dev . See the example image there (the screenshots were, AFAIK, automatically generated by a "test").
While such "tests" would not be fully automatic, and would require developers to look at the generated screenshots, I think they could otherwise be very useful, especially in regards to different contexts. For instance: If a developer implements some functionality in both GLES2 and GLES3, it might be very nice to have all the combinations of GLES2/GLES3 and the different relevant platforms (Windows, Linux, macOS, Android, iOS, Web (Firefox), Web (Chrome), etc.) yield screenshots automatically, and have them show up in some way that would be easy to access, view and compare from the relevant pull request, such that the developer could look through the screenshots and see if there are any obvious regressions as well as see whether the new functionality seems to work as desired across all platforms.
These screenshots would be somewhat limited in what could be "tested", though potentially very well worth it as far as I can see, including in regards to discovering regressions early.
See more examples in some of the merged pull requests reg. rendering here:
https://github.com/enigma-dev/enigma-dev/pulls?q=is%3Apr+is%3Aclosed
One drawback to this is that it would require somewhat more infrastructure in some regards, despite sharing a lot with a general test harness/framework. For instance, it would require being able to specify somehow which "tests" should run for which combinations. An example of this would be "tests" for particles, which you would like to run with GLES3 and not GLES2, and (currently?) not on web browsers. Also, a way to view the screenshots for the "tests" for the two commits to be compared (one usually being the head of the master-branch) would be needed. Furthermore, some way to keep the number of screenshots low would also be needed, so as to not waste lots of storage on automatically generated screenshots.
I believe this might also be very helpful if implementing support for a new platform or driver, for instance a new rendering driver based on Vulkan.
I've made a small testing framework for testing errors/regressions in HTTPClient/HTTPRequests.
You can have a look at it here:
https://github.com/Faless/gd-tests
The tests are based on a simple base class that makes it easy to write tests that run across multiple frames. Any function/behaviour exposed to GDScript could potentially be tested with it.
The system could be rewritten as a C++ module to allow accessing functions not exposed to GDScript (while still allowing GDScript tests for those functions that are exposed).
Any feedback is welcome.
Anyone here seen this? https://github.com/bitwes/Gut
To come back to the subject of testing C++ code, someone told me about doctest as a faster alternative to Catch2. It might be worth a look too :slightly_smiling_face:
Just looked into doctest; it seems much better for compile times.
I have not verified this information, so take it with a pinch of salt.
First pull request with a demo implementation provided: it's not production-ready, but it demonstrates doctest as a third-party tool which could be used to test the engine.
It could also be ported to GDScript to allow native unit testing.
Catch2 would add a lot of time to the build process, and I don't believe that would be welcome, so I chose to use doctest.
Anyone here seen this? https://github.com/bitwes/Gut
See my proof-of-concept discussed at godotengine/godot-tests#8 on how to use Travis CI and GUT together and some workflow ideas around that.
I've been using GUT with great success; it caught various engine-induced regressions even though the plugin was meant for creating project-specific tests. 🙂
As we're in the process of moving feature proposals to godot-proposals, it would be nice to take the opportunity to reopen this proposal there with a summary of the key takeaways from the long discussion (and taking into account past and current PRs that aim at improving the testing situation).
If that could be done by a contributor that was present at past Godot Sprints where we discussed this IRL, that'd be even better :) (CC @RevoluPowered @Faless @vnen maybe?)
Superseded by https://github.com/godotengine/godot-proposals/issues/918.
Note that the unit testing we're interested in here is to cover the core API of Godot from an engine development point of view.
For unit testing of game code/scenes, it's advised to go with GDScript-based solutions like https://github.com/bitwes/Gut
Further down the road, we also plan to use a GDScript/scene-based framework to test the Node API, but that likely needs a new proposal in itself, separate from the C++ unit testing framework.
For those stumbling upon this issue from a search engine, note that unit testing using doctest has been integrated into the Godot development process.