As an end-to-end testing framework, it would be nice if Protractor provided a way to execute only the failed tests from a test suite run.
Sounds nice - this would require integration with the test runner (Jasmine, Mocha, etc) and I'm not sure how it would know which ones failed before (it would have to store state somewhere). It's not likely that this will be added soon, but it's a neat idea.
Maybe it could use the previous xunit report or any other report format as input to know which tests failed in the previous execution...
I have the same problem with some tests: they pass most of the time but fail occasionally, so it would be a nice feature to have protractor retry those failed tests.
I have the same problem. I'm looking forward to this feature!
Does anybody know when it will be ready to use?
+1
+1
Can you clarify which of the following use cases is the one you want?
1) Automatically rerun failed tests as part of the initial protractor run to deflake. i.e.
> protractor config.js --retry-failed
running...A,B,D passed, C,E failed
running C,E again...C passed, E failed
running E again...E passed
2) Manually rerun failed tests after edits. i.e.
> protractor config.js
running...A,B,D passed, C,E failed
> vim test.js // try to fix C and E
> protractor config.js --run-last-failed
running only C,E...C passed, E failed
> vim test.js // try to fix E
> protractor config.js --run-last-failed
running only E...E passed
the first one, so "1) Automatically rerun failed tests as part of the initial protractor run to deflake"
Also, it would be nice if there were a configurable value for the maximum retry count.
e.g. in your example there were two reruns; if you set the limit to 1, the last one won't run.
I hope that's clear.
@hankduan @iamisti The first one for sure! In my case, when running tests on a CI server, having to redo an entire build/push because of a test timing out is frustrating.
+1 for first one
+1
+1
+1, any updates on this?
Not right now, but given the amount of feedback, I'll see if I can do something about it in the near future.
Please do it as soon as possible. This is just about the most important feature we're missing.
+1
+1
+1, must-have feature. We have a suite of 1300+ TCs; 15-20 fail in TeamCity and cause build failures, yet all pass when run individually.
:+1:
Hi @hankduan, could you please let me know a possible ETA for this feature? I need to decide whether I should implement a workaround or wait for this implementation from the protractor team.
Likely some time in the next 1-2 months. I'm trying to get to it as soon as possible, but there are a few other pressing issues ahead of this.
We desperately need a way to retry tests in our pipeline, as we have reached a critical mass of 100+ tests where at least one is likely to fail. Is there an alternative workaround for retries, or is this the only way? @hankduan
You would need to write a script to retry from the top level (i.e. examine the output for which tests passed or failed, and retry those using grep). This is probably the approach I'll need to take (well, without the script part) when I do start investigating a way to do this, as there is no way for protractor to tell jasmine or mocha to retry a test directly.
hmm ok I was actually thinking about doing something like this myself. I might hack something together as a proof of concept to solve our immediate needs.
@ericedem
Or at least you could share your changes and we can help you. I don't have too much experience with that, but I can review your code, maybe have suggestions, etc...
+1
+1
The majority of the time all of our tests pass. Then, at random, a test will fail; not always the same one, and when checked manually, it passes. So a feature that would re-run those failed tests during CI so the deploy/build doesn't fail would be great!
+1
+1
+1
@hankduan and @juliemr, is this something that is in the near roadmap for Protractor?
Has anyone in this thread implemented such logic in a script that they can share? +1 is obvious on this one. Since E2E tests tend to be more brittle than unit tests, not having to re-run them all is a must.
Thank you for your ongoing communication with your user community. Other open source projects should take notes from you guys.
Hi folks, here's our current status update. We realize that this is an important feature request for many of you, but it's also a really tricky one for us. We're thinking about it, but there are a lot of other features we'd also like to add, so this might not happen in the near future.
Ideally, the logic to retry only failed tests would be implemented by the test framework itself (jasmine, mocha, cucumber, etc). Since it is not, Protractor would have to write its own shim for each framework. This worries me, since it seems like a maintenance nightmare and it breaks all sorts of walls around encapsulation.
@juliemr thanks for letting us know. I just wanted to follow up and let everyone know that our team implemented a shim on top of protractor that parses its output, leveraging process id tags. Given that we are using the sharding feature for our tests, we can pick out the per-spec id tags (e.g. #A2) from the output. We restrict our test files to a minimal number of actual tests, which we were doing anyway because having too many tests in one file was making saucelabs debugging very difficult.
Essentially we parse the output for /\[launcher\].*(#\w+.)*failed /. Then we rerun whatever spec files the ids corresponded to in the next run: protractor ... --specs=$listOfSpecFiles.
So far this has been working. It is obviously brittle if protractor's output changes, and certainly not mergeable, but it was pretty easy to do from outside of protractor for our use case. Maybe if protractor provided this sort of functionality for the sharding case, rather than trying to rerun individual tests, that would remove the need to understand individual frameworks; you could just rerun the spec files as you please, which may be enough for some people.
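For anyone curious, here's a rough sketch of the kind of wrapper we run around protractor. The file names and the id-to-spec mapping are made up for illustration; in our real setup the mapping comes from our sharding config, so treat this as a sketch rather than our actual code:

```js
// rerun-failed.js - hypothetical wrapper: run protractor, look for failed
// sharded tasks in the [launcher] output, then rerun only their spec files.
const { spawnSync } = require('child_process');

function runProtractor(args) {
  const result = spawnSync('node_modules/.bin/protractor', args, { encoding: 'utf8' });
  return { status: result.status, output: (result.stdout || '') + (result.stderr || '') };
}

// Task id -> spec file mapping, hardcoded here purely for illustration.
const specsByTaskId = {
  '#A1': 'specs/login.spec.js',
  '#B1': 'specs/checkout.spec.js',
};

const first = runProtractor(['config.js']);
process.stdout.write(first.output);

// Collect specs whose launcher line looks like "[launcher] ... #A1 failed".
const failedSpecs = Object.keys(specsByTaskId)
  .filter(function (taskId) {
    return new RegExp('\\[launcher\\].*' + taskId + '.*failed').test(first.output);
  })
  .map(function (taskId) { return specsByTaskId[taskId]; });

if (first.status !== 0 && failedSpecs.length > 0) {
  console.log('Retrying failed specs: ' + failedSpecs.join(', '));
  const retry = runProtractor(['config.js', '--specs', failedSpecs.join(',')]);
  process.stdout.write(retry.output);
  process.exit(retry.status);
}

process.exit(first.status);
```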
It would not be a bad idea to extend Jasmine, as Protractor already overrides some of the framework's standard behavior (for example, resolving promises in expect). There is an interesting attempt that forks Jasmine-to-WebDriverJS to add rdescribe and rit to Jasmine as an option to retry failed tests.
Mini update. I took a quick look at this and it's much more complicated than I had originally expected when it comes to the details of implementation.
On the Jasmine side: the Jasmine runner isn't designed to be called more than once. https://github.com/jasmine/jasmine-npm/blob/master/lib/jasmine.js#L145. If you call execute() twice, the second call won't run any tests (I'm guessing it somehow keeps track of what was run before, although I haven't investigated too much). Part of this has to do with the fact that Jasmine first loads all the test files and then runs the specs as a whole.
On the Protractor side: It is also not designed with retry in mind and there are many edge conditions, like:
- If 1/5 tests fails the first time, Protractor will print an error message for the 1 failed test, even if the test passes on a retry. Ideally, we want to hide the error message, or only print it with --verbose, if the test eventually passes.
- The reporters also need some restructuring to note the fact that there have been retries.
Folks, I'm trying to pinpoint when these intermittent situations happen and here are my findings so far:
We are heavily utilizing Angular Material, which has rich animations and transitions. It seems to be more of an issue because of the asynchronous nature of these renderings.
I'm not using Angular Material, but I do use a few transitions here and there, and most of the random test failures happen in those places, with animations.
I have a minimal browser.sleep just in case, but even that isn't always enough.
@ericmdantas browser.sleep may look tempting, but it makes the tests more fragile, slow and unreliable. Protractor has to behave in a more "natural" way with animations and transitions.
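For example, instead of a fixed sleep you can wait on an explicit condition with ExpectedConditions (the selector below is made up for illustration):

```js
var EC = protractor.ExpectedConditions;
var panel = element(by.css('.animated-panel')); // hypothetical element under animation

// Wait up to 5 seconds for the panel to become visible once the transition ends,
// instead of guessing a duration with browser.sleep().
browser.wait(EC.visibilityOf(panel), 5000, 'Panel did not finish animating in time');
```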
@alecxe, definitely. But, at least for now, it's either that or some tests wouldn't pass at all.
On top of the excellent solution proposed by @ericedem, we built something that works for us.
Instead of relying on mocha's built-in reporters (whose output can change in the future and needs quite a bit of parsing logic to figure out what failed), we created a tiny custom reporter that writes all failed tests to a text file. The parent process then reads that file and spawns another protractor process with --specs=$listOfFailedSpecs.
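For anyone wanting to do the same, a stripped-down sketch of such a reporter might look like this (the file name failed-specs.txt and the wiring are assumptions, not our exact code):

```js
// failed-spec-reporter.js - tiny mocha reporter that records the spec files of
// failing tests so a parent process can rerun them with --specs=<list>.
var fs = require('fs');

function FailedSpecReporter(runner) {
  var failedFiles = {};

  runner.on('fail', function (test) {
    // test.file holds the path of the spec file the failing test came from.
    if (test.file) {
      failedFiles[test.file] = true;
    }
  });

  runner.on('end', function () {
    fs.writeFileSync('failed-specs.txt', Object.keys(failedFiles).join('\n'));
  });
}

module.exports = FailedSpecReporter;
```

The reporter gets wired in through mochaOpts in the protractor config, and the parent process reads failed-specs.txt to build the --specs list for the second run.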
+1
+1
@hankduan @juliemr as far as I can see we should implement it somewhere around here:
Mocha: https://github.com/SeleniumHQ/selenium/blob/master/javascript/node/selenium-webdriver/testing/index.js#L128-L146
Jasmine: https://github.com/angular/jasminewd/blob/jasminewd2/index.js#L81-L102
Cucumber: haven't looked it up (I would even say it wouldn't make sense to repeat steps)
I've got a simple tweak working. We just need to repeat selenium-webdriver's control flow as often as we want to rerun flaky tests. Fortunately this wouldn't require any additional changes in the reporters, as we don't trigger any information or events up the chain. For Mocha we would probably need to fork the adapter and make it its own project (like jasminewd), as I don't think the maintainers would want protractor-specific stuff in their code.
What do you guys think? I would be happy to contribute because I also need it for a project.
Please let me know.
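To make the idea concrete, here's a very rough sketch of retrying a promise-returning spec function at the adapter level. The names are made up and this glosses over how each framework surfaces expectation failures, so it's a sketch of the technique rather than the actual patch:

```js
// withRetries(specFn, maxAttempts) - wrap a spec body so that a rejected run
// is re-executed until it passes or the attempt limit is reached.
function withRetries(specFn, maxAttempts) {
  return function () {
    var self = this;
    var attempt = 0;

    function runOnce() {
      attempt++;
      return Promise.resolve()
        .then(function () { return specFn.call(self); })
        .catch(function (err) {
          if (attempt >= maxAttempts) {
            throw err; // out of attempts: let the real failure propagate
          }
          return runOnce(); // swallow the failure and try again
        });
    }

    return runOnce();
  };
}

module.exports = withRetries;
```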
Feel free to create a PR. I haven't looked at this for a while, but from the last time that I took a look, I couldn't find any good solution unless we integrate directly with the frameworks (jasmine/mocha/etc) themselves.
Inspired by Eric's solution, I've created a rudimentary wrapper to rerun failed specs. It's very much beta at the moment, but hopefully it will be helpful until we can find a better way to get this into protractor core.
+1
+1
+1
+1
Tested out the wrapper by @NickTomlin -- so far a great workaround.
I recently heard about how Walmart Labs built a way for flaky tests to be re-run here: https://medium.com/@geek_dave/zombies-and-soup-e346f0c8064f
Whatever the community in Protractor could do to implement this, would greatly help people who want to do continuous delivery.
I will try @NickTomlin 's solution as well
@tonytamsf I remember reading that article a while back. I don't remember them having a plugin when I read it though.
Seems like we should be able to just create a magellan plugin similar to the Nightwatch one and use Magellan as the Test Runner?
+1 this is really kind of a necessity...
+1
+1
+1
+1
+1
+1
+1
+1
Any details on the progress of this improvement?
The communication could be a little bit better, since this thread is getting more and more popular.
+1, I need this for times when webdriver timeouts for no particular reason. Very frustrating.
+1 It would be really nice to have this feature!
+1
We need this so much. My team is almost giving up on e2e tests.
Not sure who is interested, but you can contact me at [email protected] if you would like to do this for a Continuous Integration environment, like Jenkins, using Protractor. The idea behind it is that you keep track of failures using:
var failures = jasmine.getEnv().currentSpec.results_.failedCount;
for each spec individually. You can use environment variables to keep track of the failures for an individual spec, but since each node process is broken up individually per test, I suggest using a memory storage mechanism like Redis to keep track of failures across all tests, or you can just write to/read from a file.
At the end of all the tests running, you use the File System library to write out environment variables to a properties file, which are then "injected" into a rerun of the same job, but with different individual tests running.
I am pretty happy with it and it does take some time to set up, but it's worth it. If I get enough people, I'll write a guide, or maybe even a module.
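For the write/read-to-a-file variant, a stripped-down sketch looks roughly like this (it uses the same Jasmine 1.x internals as the snippet above; the failures.properties name is just an example):

```js
// In a helper loaded from the protractor config (e.g. via onPrepare): after
// each spec, append any failure to a properties file the CI job can read back.
var fs = require('fs');

afterEach(function () {
  var spec = jasmine.getEnv().currentSpec; // Jasmine 1.x internal, as above
  if (spec.results_.failedCount > 0) {
    // Record enough information to decide what to rerun on the next job run.
    fs.appendFileSync('failures.properties',
        'FAILED_SPEC=' + spec.getFullName() + '\n');
  }
});
```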
Thanks @NickTomlin for this!
https://github.com/NickTomlin/protractor-flake
I'm using grunt-protractor-runner to kick off protractor from Grunt, so now I'm trying to hack the two ideas together into a single grunt task.
@jeffsheets were you able to run protractor-flake with grunt-protractor-runner? How can we integrate both? Thanks!
@rohitkadam19 sorry but I never did get that running, and have since moved on to a different project. Would be great if someone could get that hooked up though.
+1
We need this so much. My team is almost giving up on e2e tests.
@premkh9, are you using gulp? If so I suggest using protractor flake mentioned above:
https://www.npmjs.com/package/protractor-flake
I've actually had to make a script that finds all the failures and re-runs them, because sometimes one or two fail simply because communication with a service like BrowserStack or Sauce Labs took too long, or some other factor that wasn't a genuine failure.
Protractor-flake resolved this issue for me.
@gkohen, can you please help me with how to implement protractor-flake in a protractor configuration file? I am not using gulp.
+1
If you guys are using Mocha, it already has retry built in.
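Roughly like this (a generic example, not from any particular suite):

```js
describe('flaky dashboard suite', function () {
  // Retry each failing test in this block up to 2 more times before reporting
  // it as failed. Note: an arrow function here would break `this.retries`.
  this.retries(2);

  it('loads the dashboard', function () {
    return browser.get('/dashboard'); // hypothetical navigation / assertions
  });
});
```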
@longlho
Mocha's retry is mostly good, but there are still some issues: it doesn't handle failures in beforeEach, including both things like browser.wait timeouts and expect. I probably don't use expect at all in beforeEach, but I use browser.wait a lot in beforeEach to make sure each test starts from the same common conditions. It would be great if it could support that.
Can someone please help me with how to implement protractor-flake in a protractor configuration file? I am not using gulp.
+1
+1
@juliemr @NickTomlin - It's really hard to configure protractor-flake from the existing documentation. Can you please explain it in detail? It would be really helpful to many of us.
@premkh9 if you have an issue please raise it on the protractor flake repository. Protractor flake is not an official angular/protractor project.
Re-running failed tests is a very useful feature from a framework perspective. I tried flake but it did not help. Can you please suggest any other way to achieve this?
+1
For folks having trouble setting up protractor-flake, I've added it to my protractor-example tests on GitHub. Take a look at the flake file, which I use to kick off tests... e.g. ./flake conf.js.
You could also always use it directly... for example, assuming you have protractor-flake installed as a dev dependency (--save-dev), you could then run it with node_modules/.bin/protractor-flake -- conf.js.
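If you'd rather not type the binary path, you can also wrap that same command in an npm script (assuming protractor-flake is installed as a dev dependency; the script names here are arbitrary):

```json
{
  "scripts": {
    "e2e": "protractor conf.js",
    "e2e:flake": "protractor-flake -- conf.js"
  }
}
```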
We could do this in jasminewd - no way to do this directly in protractor though. If you need this for mocha or cucumber you'll have to ask them for this feature.
Jasmine wd issue: https://github.com/angular/jasminewd/issues/73
+1
+1
+1
+1
Hello,
I have just found a library. I think it solves our problem.
Protractor-retry
"Windows as an environment to launch & use this package is unfortunately not yet supported."
+1
This is an older issue, but if you are looking for this, you can achieve it using different npm packages. I'm using:
https://www.npmjs.com/package/protractor-flake
I've changed some stuff because I'm using TypeScript: when a test fails, protractor shows the stack trace for the TypeScript files, and flake looks at the trace to decide what to rerun, so you need to point it at the transpiled JS files to be able to rerun failed tests. But the implementation should be pretty straightforward.
It also works out of the box on Windows/Mac/Linux; we are using it in a cross-platform development team (=