Requesting a feature
You can execute tests concurrently without restrictions
Although you guys encourage people to make test cases atomic, there are instances where multiple test cases need to execute in a specific order. For instance: first "Creating a user", then "Removing a user", and so on.
If you keep them in order in your fixture, this works well as long as you run them in one browser; but if you run them with concurrency, there's no guarantee that the tests will execute in that specific order.
Can you give us an option to prevent TestCafe from running some tests in parallel?
I was thinking maybe a fixture option like:

```js
fixture('Fixture name')
    .page(page)
    .parallel(false);
```
Or something along those lines. The idea is that tests in that specific fixture would run serially.
If that's not feasible, maybe we could specify the order ourselves and prevent test case X from entering the pool until some other test case Y is done; that way we could force tests to execute in order.
I hope that makes sense.
Thanks a lot!
I looked up how other test frameworks do it. Nightwatch apparently runs test files in alphabetical order, and runs the tests within a file in the order they were specified. I recall Selenium WebDriver was similar. I wonder if TestCafe is the same.
https://stackoverflow.com/questions/32703989/how-can-i-run-nightwatch-tests-in-a-specific-order
@charlieg-nuco, AFAIK Nightwatch and Selenium don't have a feature that is similar to the concurrency mode. In the regular mode, TestCafe executes tests preserving the order they were described in fixture files.
I have also run into cases where I would like something like this. My tests are usually grouped into suites (groups of files, each file with its own fixture). Each suite must run in order, but different suites can run in parallel with each other.
I think a good way to accomplish this would be to let the user specify atomic groups of files that cannot be run in parallel. For example, if I say run all tests in `tests/**`, but pass an atomic block of `['tests/a.js', 'tests/b.js']`, one browser window might run all of the tests in a.js and then b.js, while the other browser windows run the remaining tests in whatever way is most efficient.
This might be a little awkward to specify on the command line API, but I think this is a more advanced case and it's fine to require that you use the node API if you need this functionality.
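The "atomic group" semantics proposed above can be sketched independently of TestCafe in plain Node.js. Everything here is hypothetical illustration, not TestCafe API: the `runGroups` helper and the `runFile` callback are made-up names standing in for whatever actually executes one test file.

```javascript
// Hypothetical sketch (not TestCafe API): run "atomic" groups of test files.
// Files inside a group run serially; separate groups run concurrently.

async function runGroups (groups, runFile) {
    const runGroupSerially = async files => {
        const results = [];

        for (const file of files)
            results.push(await runFile(file)); // awaiting here preserves in-group order

        return results;
    };

    // Groups are independent of each other, so launch them all at once.
    return Promise.all(groups.map(runGroupSerially));
}

// Demo with a fake runner that just records the execution order.
const executed = [];

const fakeRunFile = async file => {
    executed.push(file);
    return `${file}: ok`;
};

runGroups([['tests/a.js', 'tests/b.js'], ['tests/c.js']], fakeRunFile)
    .then(() => console.log(executed.join(' -> ')));
```

With a scheduler like this, a.js is always finished before b.js starts, while c.js is free to run in another browser window at the same time.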
We have the same issue. We have an online file system of sorts and are testing the trash functionality. There are tests for deleting and restoring single files etc. and there is also a test to empty the entire trash. The problem is that without being able to control the order in a fixture, the empty entire trash test will randomly interfere with the other tests causing them to fail.
@alexschwantes thank you for sharing your use case. Currently, you can execute tests that can't work in the concurrency mode in a separate TestCafe session.
Thanks @AndreyBelym. We run a continuous integration server that reports all the results together via a JUnit report plugin. Can the different TestCafe sessions join their results, or do you mean just running testcafe twice, separately? And if so, how do you prevent the first session from running a specific test that you want to run in the second session, without hardcoding the files you want to run?
@alexschwantes
I think you can try using the programming interface to run tests separately. Please see the example in this answer on StackOverflow.
In this case, you need to write some custom code to join the reporter results, or implement your own reporter.
As for hardcoding the files, you can use the metadata mechanism, which is more suitable for this purpose.
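One way to combine the metadata mechanism with separate sessions could look like the sketch below. The `serial` metadata key and the `isSerialOnly` helper are my own invention for illustration; the commented TestCafe wiring assumes the documented `runner.filter(testName, fixtureName, fixturePath, testMeta, fixtureMeta)` signature and the `fixture.meta()` API.

```javascript
// Sketch: split tests by metadata into a "serial" and a "parallel" session.
// The predicate is plain JS; the TestCafe wiring below is shown in comments.

function isSerialOnly (testMeta, fixtureMeta) {
    // A test is serial-only if it, or its fixture, is tagged serial: true.
    // The "serial" key is an arbitrary convention chosen for this example.
    return testMeta.serial === true || fixtureMeta.serial === true;
}

console.log(isSerialOnly({ serial: true }, {})); // true
console.log(isSerialOnly({}, { serial: true })); // true
console.log(isSerialOnly({}, {}));               // false

// In a test file, you would tag the order-sensitive fixture, e.g.:
//
//   fixture('Trash').meta({ serial: true }).page('https://example.com');
//
// Then run two sessions with the programmatic API:
//
//   // Session 1: everything that is safe to parallelize.
//   await testcafe.createRunner()
//       .src('tests/**')
//       .filter((t, f, p, testMeta, fixtureMeta) => !isSerialOnly(testMeta, fixtureMeta))
//       .browsers('chrome')
//       .concurrency(4)
//       .run();
//
//   // Session 2: the serial-only tests, one browser, no concurrency.
//   await testcafe.createRunner()
//       .src('tests/**')
//       .filter((t, f, p, testMeta, fixtureMeta) => isSerialOnly(testMeta, fixtureMeta))
//       .browsers('chrome')
//       .run();
```

This avoids hardcoding file paths: new order-sensitive tests only need the metadata tag to land in the serial session.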
Thanks @AlexKamaev.
Yes, I can see that writing my own harness with the programming interface to filter tests with the new metadata functionality, run them separately, and then join the results could be a way to do it. But it gets messy very quickly, even just updating the correct values in the JUnit results for the number of tests run, the time taken, and so on. Additionally, it doesn't solve the issue others have described, where it would be useful to simply run tests sequentially within a fixture. But it does come part of the way.
@alexschwantes
I understand that it would be better for you to have the functionality out of the box than using workarounds. However, I cannot give you any estimates on when the feature will be implemented, so at this moment I recommend you use the workaround described above to achieve the desired behavior.
I'm having the same issue. I know Protractor shards (their term for concurrency) tests at the _file_ level, i.e. tests in a single file run in one browser instance, and in the order they are written in the file (though this can also be randomized). Having some kind of flag that would allow concurrency at the file level would likely solve this issue in TestCafe...
I agree with @qualityshepherd. Concurrency at the _file_ level gives you better control: tests are guaranteed to execute in a certain order inside a file, while multiple files execute in parallel.
So if users want to execute tests serially, they just need to put them in a single file.