Maybe this is already possible, but I'm not seeing it in the docs or through Googling.
Basically, I have a suite of tests for a method, and I use beforeEach() to call the method for each test case. The problem is that I need to manipulate things before certain test cases run, and before the beforeEach() fires.
It would be _really_ nice to have something like:
import mything from 'my-thing'
import {expect} from 'chai'

describe('my test suite', () => {
  let opts

  before(() => {
    opts = {
      foo: 'foo',
      bar: 'bar'
    }
  })

  beforeEach(() => {
    mything(opts)
  })

  it('Should do the default thing', () => {
    expect(defaultThing).to.have.happened
  })

  it('Should do something different w/ different foo', () => {
    expect(differentFooThing).to.have.happened
  }).before(() => {
    opts.foo = 'differentFoo'
  }).after(() => {
    opts.foo = 'foo'
  })

  it('Should do something else w/o bar', () => {
    expect(noBarThing).to.have.happened
  }).before(() => {
    delete opts.bar
  }).after(() => {
    opts.bar = 'bar'
  })
})
So...
A) Is there a way to accomplish this functionality in a test suite already?
B) if (A) { How can I do it? }
C) if (!A) { Does this sound like a probable feature that could be implemented? }
For B, how about...
describe("overridable behavior", function() {
  var overridable = "default"

  beforeEach(function() {
    doStuffWith(overridable)
  })

  it("should be default", function() {
    assert(defaultBehavior.stuff)
  })

  describe("overrides behavior for some few (even just one) test", function() {
    var original = overridable

    before(function() {
      overridable = "test-specific"
    })

    after(function() {
      overridable = original
    })

    it("should be overridden", function() {
      assert(testSpecific.stuff)
    })
  })
})
(If this is common, there may be ways to simplify it by bundling the pattern into a function. It's JS after all.)
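For instance, the pattern above could be wrapped in a helper along these lines (just a sketch; withOverride is a made-up name, and the getter/setter callbacks are one way to let the helper read and restore a variable it doesn't own):

// Hypothetical helper bundling the "override in before, restore in after" pattern.
function withOverride(getValue, setValue, overrideValue, registerTests) {
  describe('with override "' + overrideValue + '"', function() {
    var original
    before(function() {
      original = getValue()
      setValue(overrideValue)
    })
    after(function() {
      setValue(original)
    })
    registerTests()
  })
}

// Usage, reusing the placeholder names from the example above:
withOverride(
  function() { return overridable },
  function(value) { overridable = value },
  "test-specific",
  function() {
    it("should be overridden", function() {
      assert(testSpecific.stuff)
    })
  }
)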
That solution admittedly isn't pretty, but my gut instinct is that the behavior in question may be a code smell no matter how it's implemented ("we need to do this before each test -- but wait, we need to start making exceptions..."). On the other hand, it is actually the sort of thing you'd use to share similar behavior between entire suites and just configure it differently for nested suites -- at least for simple cases like this; for more complex cases we may need to revisit making this behave more like RSpec at some point, and we have a couple of issues open about that. The odd thing here is wrapping just one test.
Alternatively, if you needed to customize the setup not just for a few tests but for every test (or very nearly every test), it might be more elegant to forego hooks and do something like this:
describe("individually customizable setup", function() {
  function setUpOneTest(customization) { doStuffWith(customization) }

  it("uses one version", function() {
    setUpOneTest("one")
    assert(one.stuff)
  })

  it("uses another version", function() {
    setUpOneTest("another")
    assert(another.stuff)
  })
})
(The weakness of this approach, obviously, is cleanup afterward even if the test throws. If the cleanup is the same no matter what it can still be put in afterEach. If not... well, I think we've got another feature request open somewhere about registering a one-time post-test cleanup function.)
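For what it's worth, here's roughly what that might look like with per-test setup plus a shared afterEach cleanup (a sketch; cleanUpStuff is an assumed counterpart to doStuffWith):

describe("individually customizable setup, shared cleanup", function() {
  function setUpOneTest(customization) { doStuffWith(customization) }

  // Runs after every test, even if the test body throws.
  afterEach(function() {
    cleanUpStuff() // assumed cleanup counterpart to doStuffWith
  })

  it("uses one version", function() {
    setUpOneTest("one")
    assert(one.stuff)
  })

  it("uses another version", function() {
    setUpOneTest("another")
    assert(another.stuff)
  })
})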
Thanks @ScottFreeCode for the workaround. I'll try that; it looks like it would most likely work.
As for the "code smell" is there some technical reason why just chaining a .before() off of a given .it() definition wouldn't be possible, or would break some other use case?
I doubt it would be impossible, but it might be a lot less trivial than it looks in terms of design choices and implementation, given Mocha's existing paradigms, design and internal details. Among other things, Mocha's interfaces (which can be swapped out for custom or third-party ones) are kept separate from its internal objects, and some of the interfaces' test-creating functions currently return the internal test object for some obscure reason. In terms of feasibility it's less hairy than some things we'd really like to change, although it might have to wait for a major release (for compatibility with the fact that those functions already return the test object). And we'd certainly like to encourage people to do setup and cleanup on a more per-test basis to begin with; but if you want my ballpark estimate, I expect the most bang for our buck in fitting per-test setup/cleanup into Mocha's hook system would come from the aforementioned "do setup in the test and register a guaranteed-to-run cleanup function from within the test" idea, because that would let people avoid variables scoped outside any one test without requiring more drastic changes to how Mocha envisions tests and hooks.
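In the meantime, that idea can be approximated today with an ordinary array of cleanup callbacks drained in an afterEach hook (a sketch reusing the placeholder names from the earlier examples; nothing here is a built-in Mocha feature):

describe("per-test cleanup registration, approximated", function() {
  var cleanups = []

  // Runs after every test, whether it passed or failed, and undoes
  // whatever the test registered, most recent first.
  afterEach(function() {
    while (cleanups.length) {
      cleanups.pop()()
    }
  })

  it("overrides shared state and registers its own undo", function() {
    var original = overridable
    overridable = "test-specific"
    cleanups.push(function() { overridable = original })
    doStuffWith(overridable)
    assert(testSpecific.stuff)
  })
})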
(I'd also be curious whether there's an existing solution in RSpec-like test runners out there. We have a complaint every once in a while that this or that detail differs from RSpec-established expectations; and while there's always the possibility of standing out in a good way as offering something others don't, reinventing the wheel usually stands out in a bad way. But I didn't happen to get into RSpec when I dabbled in Ruby, and it's been a while since I compared Mocha with other JS test runners in a really in-depth way. This might not matter that much; but if anybody reading this happens to know, do chime in please!)
As for code smells, I wasn't thinking of anything on Mocha's side so much as that one-off pre-emptions of behavior (setup, in this case) that's supposed to apply to everything (every test) in a group (a suite) might be a telltale sign that the group isn't as well defined or organized as it ought to be -- a possible "XY problem", if you're familiar with that concept. I might be entirely off-base about that -- I've only got a general abstract idea rather than your real use case, and as alluded to in my previous reply there are lots of well-defined ways to customize and/or override behaviors. Plus, as with all smells, they're not sure indicators of a problem, just hints to reexamine the thing and make sure it's right (and perhaps to look at what would make the rightness most obvious if it is).
Both of those assessments are just my two cents' worth, of course, and merely at a glance too.
I also found myself wanting this feature today.
Why couldn't you just do it like so?
describe('foo', function() {
  beforeEach(() => {
  });

  it('boring test - only gets a beforeEach', () => {
  });

  describe('special tests in here', function() {
    before('***this runs before the above beforeEach and only for the test below***', () => {
    });

    it('special test - gets a before and beforeEach yay', () => {
    });
  });
});
I'm running into this myself right now: a code smell has recently emerged in my tests because I have to run rollbacks after certain tests against my test database (in order to avoid dependencies on DB state changes between tests).
What has happened is basically the following:
describe('as user type foo', function () {
  it('do a GET thing')
  it('do a different GET thing')

  describe('pretest prep', function () {
    let originalStatus: UserStatus

    before(async function () {
      // get the user model for the user we will modify, then...
      originalStatus = user.status
    })

    it('modify user status', async function () {
      // test wherein user.status gets changed in an API, the API response verified for structure, etc.
    })

    after(async function () {
      // we get the user model again, and then...
      user.status = originalStatus
      // and the user model changes are flushed back to the database, to prevent database changes cross-contaminating other tests
    })
  })
})
This then results in an odd structure to tests (anything that tests modification to data has to have a describe wrapper structure like above). Not sure where to go with it honestly, but I recognize that a test-specific before function is basically what I need and would help immensely.
@damianb That isn't a code smell in and of itself. Some tests need setup, some tests need cleanup, some tests need both. Suites allow you to group tests logically and/or functionally. It's neither unusual nor incorrect to have a suite with a single test in it.
Fair enough. It certainly does make it less pleasant to look at the test results afterwards, but that's a nit that I'll have to get over. Kinda wish I could set something on the describe to just make it not appear in results and pretend that the it() is higher up, but that's probably not worth trying to implement.
In fact, it might make sense to put your code which causes the DB to be modified in a "before all" or "before each" hook, then only perform assertions within the test itself. Each suite can then describe the assumptions the tests make:
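Something along these lines, for example (a sketch only; modifyUserStatus and user.save stand in for whatever your API client and model layer actually provide):

describe('when the user status is modified', function () {
  let originalStatus
  let response

  before(async function () {
    // perform the DB-modifying call once, up front
    originalStatus = user.status
    response = await modifyUserStatus(user, 'suspended') // assumed API helper
  })

  after(async function () {
    // roll the change back so other tests start from a clean state
    user.status = originalStatus
    await user.save() // assumed flush back to the database
  })

  it('responds with the expected structure', function () {
    // assertions against `response` only
  })

  it('persists the new status', async function () {
    // re-fetch the user and assert on its status
  })
})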
This style has the added benefit of making it easy to add further assertions based on a set of assumptions.
A big secret is that you don't need a top-level suite, because one exists already:
it('something')
describe('something else')
it('another test')
But given the assumptions your tests are making, it seems reasonable to rename the suite to something like "when user is modified", or put the other two in a suite called "when user is read" or something like that. The fact that you need a describe means the tests make an assumption; document it in the suite title.
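For instance, the earlier example regrouped so the suite titles document the assumptions (structure only; the hook bodies are elided):

describe('as user type foo', function () {
  describe('when user is read', function () {
    it('do a GET thing')
    it('do a different GET thing')
  })

  describe('when user is modified', function () {
    before(async function () { /* save original status, modify the user */ })
    after(async function () { /* restore original status */ })

    it('modify user status')
  })
})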
Is there some technical reason for not supporting before() and after() within an it()?
Yes. before, in particular, cannot be run, because by the time the test body executes the test is already underway -- there's nothing left for it to run before.
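The closest you can get today is to call the setup yourself at the top of the test body and guarantee cleanup with try/finally or an afterEach hook, something like this (prepareSpecialState and restoreState are placeholders):

it('special test does its own setup', async function () {
  await prepareSpecialState() // placeholder setup helper, called explicitly
  try {
    // ...assertions...
  } finally {
    await restoreState() // placeholder cleanup; runs even if an assertion throws
  }
})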