(..or maybe Mocha 4. I don't know.)
@mochajs/mocha + everyone,
I mentioned this in the mochajs/maintainers room on Gitter, but since it appears people aren't using Gitter much, I'll repeat it here:
Mocha's old. What's cool about that is that we know what's wrong with it. And indeed, it has problems which make certain issues difficult to address. The major issue is "plugins". Others include (but are not limited to):
- node-able tests

It's my opinion that any attempt to address these problems in an iterative fashion is a fool's errand. Components are too tightly coupled; each item above, if taken in the context of the current codebase, is a major undertaking. I propose we rewrite Mocha from scratch.
Mocha should be made of plugins--all the way down. It should come with a default interface and a default reporter, but little else--Mocha's business is _running_ tests and _reporting_ the output. This is what it does well, and this is what the core should be.
From the current version of Mocha, we'd retain:
- .mocharc file(s) instead of mocha.opts. We would retain the mocha.opts functionality in a plugin; package mocha-opts-plugin, for example.
- --watch, which should be handled by a plugin or another executable entirely.
- node should be handled by node itself--either by executing mocha with node, or using node-able tests.
- mocha-cli, as not everyone uses it.

For libraries and tools which consume Mocha (I'm thinking stuff like the JetBrains Mocha reporter, Wallaby.js, mochify, karma-mocha, any Grunt/Gulp plugin which executes _mocha), we must keep lines of communication open to ensure a smooth transition. Ideally, these tools should use the resulting programmatic interface instead of forking processes!
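For reference, this is roughly what the existing programmatic path looks like today (the file path is just an example); consumers could build on something like it instead of spawning _mocha:

const Mocha = require('mocha');

const mocha = new Mocha({ reporter: 'spec', timeout: 5000 });
mocha.addFile('test/example.spec.js');

mocha.run(failures => {
  // non-zero exit code if any test failed
  process.exitCode = failures ? 1 : 0;
});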
We should supply a Yeoman generator for Mocha plugins, which would provide starting points and boilerplate for Browserify (since certain plugins will need to run in a browser).
If you guys are buying what I'm selling, I think the best thing to do is _just start coding_. The direction is clear, and the requirements are known. Let's create a prototype and take it from there. When the dust settles, we can address specific areas of concern, and begin to deprecate whatever needs deprecating. And start documenting. An upgrade to v3 shouldn't require the user to do much more than install an extra package or two.
cc @segrey @mantoni @ArtemGovorov @vojtajina @maksimr @dignifiedquire
The interface APIs. Users should not have to modify their tests.
Is this a hard requirement for a new major release? For example, would you consider dropping support of this and playing with the context? It would be awesome if instead of:
it('foo', function(done) {
this.timeout(10000);
});
we could do:
it('foo', (test) => {
test.timeout(10000);
});
That's a good question. I would be very much in favor of transitioning to something without globals in the default setup, so the usage would be something like this:
const {describe, it, timeout} = require('mocha')
describe('my test', () => {
it('my slow test', () => {
timeout(10000)
})
})
and for backwards compat it would be easy to add something like this
const mocha = require('mocha')
global.describe = mocha.describe
global.it = mocha.it
@Dignifiedquire What's wrong with globals by default? And for tests in general? Why bloat every node test file with const {describe, it, timeout} = require('mocha'), making it even harder for non-CommonJS-based browser tests?
@ArtemGovorov: making it even harder for non-CommonJS-based browser tests?
I was imagining backwards compatibility via config or executing something like mocha.deployGlobals() / mocha.deployInterface(window).
@danielstjules: dropping support of this and playing with context
I like this change a lot. Doing test.timeout(5), test.skip(), or test.done() is more readable and straightforward to me than fiddling with the context. And you can easily extend functionality by adding more methods/props to that test object.
@danielstjules: Is this a hard requirement for a new major release?
No, but I imagine we are likely to change quite a lot along the way, and old hacks might no longer work (or maybe we don't want them to work anymore), like coverage via _mocha. I wouldn't like to limit the changes just because I'd prefer not to bump the major version.
@ArtemGovorov yes, something like @dasilvacontin suggests for the non-CommonJS-based environments. Simply namespace everything under one global mocha in that case, so you would have:
const {describe, it} = window.mocha
// and as a convenience method
window.mocha.deployGlobals = () => {
window.describe = window.mocha.describe
window.it = window.mocha.it
}
@dasilvacontin @Dignifiedquire I understand there could be a function to set globals. I don't understand why no globals should be the default option. Those who'd like to use globals can do it now without doing anything. Those who want to write const {describe, it} = require('mocha') can do it and will have to do it anyway. So why change the status quo?
Because with the current version you are forced to have globals in the namespace instead of being able to opt in: the global namespace is already polluted as soon as Mocha starts, even if I use require('mocha') to assign them. The change I suggest would make it a real choice between using globals or not.
@Dignifiedquire I don't see anything bad in having a couple of globals for a test environment. Pretty sure many people don't mind it either. Perhaps we could consider an option to remove globals for those who're strongly against them:
window.mocha.undeployGlobals = () => {
delete global.describe;
delete global.it
}
From my observations there're way more people using globals for mocha than those who are not using them.
My point is to make it easier for the majority of existing users to migrate to the new version by not changing configuration defaults without a good reason.
It could be an option in .mocharc:
{
"interface": {
"bdd": true
}
}
// if "bdd": true, globals will be deployed
From my observations there're way more people using globals for mocha than those who are not using them.
Well, the 'mocha' way has always(?) been using globals, so I would put my bets on that statement too. It's only recently that we finally have browserify support, and not with all the features, I believe.
It's wiser IMO to focus on having node-able tests – you can add 'globals-only' support easily. Whereas if you don't even consider having node-able tests, you are going to have trouble adding them later... which is the situation we found ourselves in. I picture this situation like someone trying to dig sand out of a hole while a sandstorm is going on.
That's why a rewrite is being suggested – hacking on the current codebase won't lead to good code _and_ it will take ages. So it's not very motivating, and motivation for maintainers is very important.
Related: http://www.macwright.org/2014/03/11/tape-is-cool.html
It's wiser IMO to focus on having node-able tests
Leaving globals by default doesn't prevent one from writing node-able tests. Given that Mocha adds support for node-able tests, one can use const {describe, it} = require('mocha'); and not even know/care that there are any globals. And those who do worry about the global space could use a config option that controls assigning those globals (config will still be applied in node-able tests, right?).
This way node-able tests are supported, the majority using globals will not have to add config options, and only the few who are somehow affected by globals will have to use the option (they must already have something in place anyway if they're affected in the current version; they will just have to use the official option instead of whatever workaround they use now). Does that make sense?
That's why a rewrite is being suggested
I understand, but the rewrite doesn't necessarily have to change certain previously established contract/defaults suitable for the majority of users.
IMHO, for browser tests it seems reasonable to deploy globals by default, but for server-side node tests, deploying globals breaks CommonJS style, leaving people a bit confused about the environment they are developing their tests in.
JFYI: the jasmine spec runner for Node.js always deploys globals. Do people expect compatibility between mocha's bdd style and jasmine?
@ArtemGovorov I don't see any problem with globals for the test environment, personally, however some people do. I'd like to support those people by allowing Mocha tests to be run via node or by a separate mocha binary. I think this is much like how tape works.
I understand, but the rewrite doesn't necessarily have to change certain previously established contract/defaults suitable for the majority of users.
To be clear, I'm not suggesting eliminating a global describe() or what have you--simply that users should have the option. Tests run with the mocha executable would still have this behavior.
Perhaps that means the cli should still be within Mocha's core instead of a separate package (like mocha-cli). Indeed, this would probably be less jarring than needing users to install both packages.
@ArtemGovorov Please let me know if I haven't addressed your concerns.
It could be an option in .mocharc:
Not so hot on this idea. To recap, the idea is this:
- If using mocha test.spec.js to run tests, you get globals (but can disable these with a flag).
- If running node test.spec.js, you obviously need to require('mocha') and get whatever you need from it.
- Even if Browserify is used and require() is present, the tests may still be written using the globals.

I see the role of the mocha CLI as having three main responsibilities:
- Applying .mocharc settings, which would otherwise have to be overridden in the tests themselves.
- Globbing: you can node test.spec.js, but you can't node test/**/*.spec.js. Sure, you can use find -exec or something, but it should be easy (see the sketch below).
- .mocharc generation. Perhaps just dump a .mocharc file in the cwd with the defaults populated, but later maybe leverage inquirer to guide people through it. Similar to npm init, karma init, etc.

I'm thinking node test.spec.js should still search for .mocharc and use it. Does that sound fair?
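A minimal sketch of the globbing responsibility, assuming the glob package (the pattern is just an example):

const path = require('path');
const glob = require('glob');

// expand the pattern and load each spec file; the interface (globals or not)
// would already be set up by this point
glob.sync('test/**/*.spec.js').forEach(file => {
  require(path.resolve(file));
});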
I do like the idea of a function call to populate the globals.
I _would_ like to eliminate bin/_mocha, so that means node flags would no longer be supported. If you need them, run node --flags path/to/mocha test.spec.js or node --flags test.spec.js. This can trivially be configured in the scripts property of package.json.
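For example, a package.json sketch (the flag and paths are illustrative, and assume a single remaining mocha binary):

{
  "scripts": {
    "test": "node --expose-gc node_modules/mocha/bin/mocha test/*.spec.js"
  }
}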
This is a breaking change for sure, but also not unreasonable. It's a maintenance headache to have to manage the flags and two binaries, even if we have gotten better about it...
It could be an option in .mocharc:
Not so hot on this idea. To recap, the idea is this:
Actually this isn't a _bad_ idea, but it does mean tests run with node would still have to require('mocha') to make that happen. There's an opportunity for confusion there; I'm curious how we should avoid it.
there should also be better support for dynamic tests in the "new Mocha"
- If using mocha test.spec.js to run tests, you get globals (but can disable these with a flag).
- If running node test.spec.js, you obviously need to require('mocha') and get whatever you need from it.
- A browser context probably should use globals by default, but could be disabled via configuration.
This makes sense to me and addresses my concerns. Every group of users gets the most predictable and reasonable defaults.
Even if Browserify is used and require() is present, the tests may still be written using the globals.
Agreed. From what I'm observing, most people prefer globals in tests even with Browserify/Webpack. It's just faster than making Browserify/Webpack compile your testing framework apart from everything else, and easier than having to configure Browserify/Webpack to avoid that compilation.
cc'ing @gotwarlost
@ORESoftware what do you mean by this? If you have a specific use case or expected behavior, that'd be very helpful.
Keep in mind we don't want to break existing tests.
So with .mocharc, what's the planned format? YAML sounds a bit alien; between JSON and JS, I'd personally prefer JS, like:
module.exports = mocha => {
  mocha.option1();  // placeholder option setters
  mocha.option2();
  // ...
};
It provides more flexibility; for example, one could define --require code right in the .mocharc to avoid creating a separate file just for a couple lines of code (I often see people having to create some fixture.js file just to initialize a chai style).
@danielstjules that's true. It sucks. But I think the worst thing we could do is break thousands of tests which rely on this--major or no.
Hmm. If the interfaces were all plugins in their own packages, we could simply version _those_.
For Mocha 3 we would retain the old BDD API. We could then develop the newer BDD API unencumbered by Mocha. Users could install it if they wish. At the next major version of Mocha it could become the default. Users could always pin their version of the BDD interface and still use the new major of Mocha.
Is this sane?
Cc @danielstjules
Also, since I'm probably talking about a fair number of individual packages, a monorepo might make sense for us.
@sheerun does Bower play nice with monorepos? I'm worried about tags.
@ArtemGovorov I prefer YAML over JSON. A .js file is likely overkill for trivial usages of Mocha.
I'd envision a .js file working pretty much like what you propose. You could call setup functions in Mocha, set global variables--maybe even define global hooks--and return a configuration object if you wish. Kinda like wallaby. :)
(I'd want to support JSON too of course)
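A hypothetical .mocharc.js along those lines; none of these setter names exist today, they're only illustrating the shape:

module.exports = mocha => {
  // hypothetical setters, not an existing API
  mocha.timeout(5000);
  mocha.globals(true);

  // a global hook defined right in the config, instead of a --require'd fixture file
  mocha.beforeEach(() => {
    process.env.NODE_ENV = 'test';
  });

  // ...and/or return a plain configuration object
  return { reporter: 'spec' };
};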
mocha is written in node.js => JS or JSON config
I'll just mention that JSON is a subset of YAML, so any YAML parser can also read any JSON file.
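For instance, assuming js-yaml as the parser:

const yaml = require('js-yaml');

yaml.load('{"timeout": 5000, "reporter": "spec"}'); // JSON input parses fine
yaml.load('timeout: 5000\nreporter: spec');         // so does the YAML equivalent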
@travisjeffery looking for your blessing here
re: the BDD interface (and others); @danielstjules's idea about how the BDD API should look is relevant because you cannot use this.slow() if you want to leverage arrow functions. I see no reason why the functions should run with a context. There are more reasons to eliminate it than to keep it.
If we're picking nits, I'd probably just pass a single object parameter to the function and use destructuring. I'm thinking I'd like to _keep_ tests synchronous by default. However, this would break non-Promise async detection:
it('should do something async', ({test, done}) => {
  // we use "Function.prototype.length" to check for the existence of the "done"
  // parameter. this will no longer work, because the length will be 1 here if
  // you destructure anything at all...
});
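To make that concrete, a simplified sketch of the length-based check (not Mocha's actual code):

// a declared parameter is what currently signals a callback-style async test
const looksAsync = fn => fn.length > 0;

looksAsync(function () {});           // false -- sync test
looksAsync(function (done) {});       // true  -- async test
looksAsync(({ test, done }) => {});   // true for *any* destructured test, even a sync one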
We can't use Function.prototype.toString() to detect whether done has been used (yet), because at this point, if you're using destructuring, you're transpiling. Also, this is no good either:
it('should do something async', ({test}, done) => {
// this test uses "test" object
});
it('should do something async', ({}, done) => {
// this one doesn't. nobody wants to write this.
});
Another idea, though somewhat fiddly, is:
it('should do something async', ({test, done, promise}) => {
test.slow(10000);
doMyAsyncThing(function(err, value) {
assert.ok(value, true);
done();
});
return promise; // this Promise's "resolve()" function is "done"
});
We can move interface API discussion elsewhere if necessary.
@boneskull configuration-wise I'd go with JSON since JS people are comfortable with it and it's most common for config files like that, e.g. eslint et al.
@boneskull How about this:
describe('my thing', (test) => {
it('does stuff as before', () => {
});
});
@travisjeffery well, ESLint also supports YAML. we get JS and JSON for free, so it's not a big deal to allow YAML as well.
Interface-wise, do we really need to pass an object? Would it not be possible to do something like this:
describe.config({timeout: 1000})('my thing', () => {
it.config({timeout: 5000})('does stuff as before', () => {
})
})
This way we can handle async tests in a similar way to how we do now.
Bower versions at the repository level (tags), so all packages in a monorepo would need to have the same version.
I am working on this myself, and I am using no globals. Mocha has bugs and they partly arise from the use of globals.
https://github.com/ORESoftware/suman/blob/master/test/test1.js
That is what my API looks like in action, and I have to say it's pretty nice. I have been working on it for a while and it seems to work very well, and it conforms to Mocha 3 about 80-90%.
Mocha has bugs, so complete backwards compatibility would mean keeping those bugs, which I am not willing to do. Part of the reason for a rewrite is to get rid of a lot of small bugs, which are really unacceptable in a testing library, after all. The major reason for a rewrite, IMO, is to run each test file in parallel (in separate procs) and, for a given file, to have tests that can be run in parallel.
I like the usage of 'this' in my API because it helps prevent the developer from making mistakes, and it nicely allows for chainable it(), befores(), afters(), etc.
@boneskull by "better dynamic test support" - I meant that we need to allow a way to make dynamic tests run in parallel
This is how you do a dynamic test with Mocha 3:
[1,2,3].forEach(function(val){
it('does things in series unfortunately', function(){
});
});
unfortunately, even though the above allows for dynamic tests, they still run in series, no matter what you do!
I allow the user of my lib to create dynamic tests that can run in parallel, using this construct:
this.loop([1,2,3],function(val){
it('runs in parallel', function(done){
return done();
});
});
That's what I meant by better dynamic test support: it would be specifically for supporting loops where it's known that running in parallel is desirable. To run things in series in my lib, you just use the same construct as Mocha 3.
@ORESoftware
Mocha has bugs, so complete backwards compatibility would mean keeping those bugs, which I am not willing to do.
Out of curiosity, what bugs are these, and why would fixing them need to break backwards compatibility?
The major reason for a rewrite IMO is to run each test file in parallel (separate procs) and for a given file, have tests that can be run in parallel.
We had no problem implementing parallel test runs with the existing mocha API in wallaby.js.
With your loop example, I don't see any reason why the API has to change to be able to run tests in parallel. I understand mocha will run the tests serially in your example, but to run them in a non-serial manner you don't have to break compatibility and change the API. It's just a matter of extending the internal runner.
What may need a change in the API for parallel test runs is adding some way to provide a hint that some tests should run in a non-blocking manner and some should run serially/sequentially. But the change can be backwards compatible, like adding it.parallel(...) and it.serial(...), where it.serial(...) = it(...) by default (but configurable). describe.parallel could make all nested it calls parallel. Having said that, I'm not a big fan of such changes; it doesn't seem right to introduce purely runner-related concerns into the API.
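A sketch of what that hypothetical, backwards-compatible addition could look like (none of this exists today):

describe('mixed suite', function () {
  // hypothetical: may overlap with other parallel tests in the suite
  it.parallel('fetches user A', function (done) { setTimeout(done, 100); });
  it.parallel('fetches user B', function (done) { setTimeout(done, 100); });

  // hypothetical: behaves like plain it() unless configured otherwise
  it.serial('migrates the database', function () { /* runs on its own */ });
});

// hypothetical: opts every nested it() into parallel execution
describe.parallel('IO-bound suite', function () {
  it('request 1', function (done) { setTimeout(done, 100); });
  it('request 2', function (done) { setTimeout(done, 100); });
});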
I'm a fan of describe.parallel, but there are different types of "parallel" execution. There's parallel like what this hack does: https://github.com/danielstjules/mocha.parallel From the notes there:
use of the word "parallel" is in the same spirit as other nodejs async control flow libraries, such as https://github.com/caolan/async#parallel, https://github.com/creationix/step and https://github.com/tj/co#yieldables This library does not offer true parallelism using multiple threads/workers/fibers, or by spawning multiple processes.
I'd imagine something like describe.parallel would work in that manner, which would help for some subset of tests (IO-bound, for example). But any Rob Pike fan will be keen to point out that it's concurrency at best, not true parallelism. If we want to speed up test suites, a forking process model or something similar would be ideal. I like the approach ava took here: https://github.com/sindresorhus/ava#isolated-environment
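For reference, the mocha.parallel hack is used roughly like this (going from its README as I recall it; treat the exact API as an assumption):

const parallel = require('mocha.parallel');

parallel('IO-bound specs', function () {
  it('first request', function (done) {
    setTimeout(done, 500);
  });

  it('second request', function (done) {
    setTimeout(done, 500); // both timers overlap, so the suite takes ~500ms, not ~1s
  });
});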
There are several bugs. You can look through the Mocha issues to see them. They mostly have to do with globals, and the fact that tests interact with each other.
The worst bug that I heard about was how a before/beforeEach hook was invoked across tests. I mean, come on, that's horrible.
One simple bug I found was this:
describe('foo',function(){
before(function(){
console.log('before');
});
beforeEach(function(){
console.log('before each');
});
describe('inner',function(){
it('stuff',function(){
console.log('test');
});
});
});
the beforeEach gets logged... that shouldn't happen. If something as simple as this is not correct, then I am not sure what to say. It's possible that the beforeEach is intended to run this way in current Mocha, but really, it shouldn't run!
This is the kind of bug that I would want to fix, that would break backward compatibility.
yup.
the beforeEach gets logged...that shouldn't happen.
That is 100% intended behaviour... Jasmine does the same thing.
That could be, but it doesn't make sense, does it? The beforeEach should just run before each test in its own block, not before tests in nested describes. Is there a good reason for the current design that I am missing?
It does. rspec and jasmine also behave in the same way; here's a jasmine description. This highlights that we should document the behavior, but it's a common pattern for BDD testing frameworks, and it makes sense.
@ORESoftware It totally does make sense. You can set up some context in the outer describe's beforeEach and then, for example, adjust it in an inner describe's beforeEach.
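For example (a small illustration of that pattern, not code from the thread):

const assert = require('assert');

describe('outer', function () {
  beforeEach(function () {
    this.user = { role: 'guest' };   // runs before *every* test below, including nested ones
  });

  describe('as admin', function () {
    beforeEach(function () {
      this.user.role = 'admin';      // refines the context set up by the outer hook
    });

    it('sees the adjusted context', function () {
      assert.strictEqual(this.user.role, 'admin');
    });
  });
});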
There are several bugs. You can look through the Mocha issues to see them. They mostly have to do with globals
Could you please link some of them?
Fair enough, but I think that should happen in the outer before, not the outer beforeEach. Think about nested tests: how confusing it is if every beforeEach runs before every test. It seems like it would cause more problems than not.
if this bug is really a bug, then I can't help but be amused
how confusing it is if every beforeEach runs before every test
I think you are misunderstanding how mocha works and has always worked.
Yes, I am no Mocha expert. However, there is one thing that is very clear: it is difficult, if not impossible, to run it()s in parallel in the same mocha file. So even if you run different Mocha files in different processes (and it's not clear how to do that with Mocha out of the box), within a given file all the tests must run in series.
The loop thing that was discussed is correct. I know that much about Mocha.
#1949
I don't think it's a bug.
It's the same as if you call beforeEach outside any describe: it will be executed before each test declared in any test file.
Adding the root suite name will make the beforeEach execute just for that suite and not for every test in every suite:
module.exports = {
rootSuiteName: {
beforeEach: function () {
console.log('beforeEach');
},
test2: function () {
console.log('test2');
}
}
};
it is difficult if not impossible to run it()s in parallel in the same mocha file
You may need to isolate non-sequential tests somehow, but it's not impossible; see mocha.parallel from @danielstjules.
@danielstjules We run test files in parallel in wallaby.js, however we don't run each file in a separate process. We split all test files into N groups (where N is configurable and <= the number of available cores) and run the groups in parallel in separate processes.
The way we do it in wallaby is more performant in node (and significantly more performant in the browser) than one test file per process. It's also less isolated, but in my opinion isolation is a concern that should be controlled by the user, not by the test runner. The runner should make tests run faster, not try to hide their problems. In other words, users should write tests in such a way that no matter which runner runs them, and in what order, they still pass.
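A minimal sketch of that grouping idea (not wallaby's actual code; worker.js is a hypothetical runner script that loads Mocha and executes the files passed to it):

const os = require('os');
const { fork } = require('child_process');

function runInGroups(files, workers = os.cpus().length) {
  // split the test files into one group per worker, round-robin
  const groups = Array.from({ length: workers }, () => []);
  files.forEach((file, i) => groups[i % workers].push(file));

  // run each non-empty group in its own (reused) process
  return Promise.all(
    groups
      .filter(group => group.length > 0)
      .map(group => new Promise((resolve, reject) => {
        const child = fork('worker.js', group);
        child.on('exit', code => (code === 0 ? resolve() : reject(new Error('exit ' + code))));
      }))
  );
}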
@ArtemGovorov you have a fundamental misunderstanding of node if you think one node process is going to benefit from using N cores. My understanding is that libuv has a threadpool available to it, and those threads could potentially run on separate cores depending on the OS and hardware, but that threadpool is only used for certain blocking operations, not for general async I/O. So running a single Node.js process and having N tests run to match N cores is not really doing anything. You really need to use more Node.js processes to get more CPU power, if that's your bottleneck.
What would be of benefit to you, however, and something you are probably doing, is running each test using a promise library or async, and that will allow you to parallelize tests by using async I/O.
something like this (pseudo-syntax):
var async = require('async');
var mocha = require('mocha');
var tests = [];

async.each(tests, function (test, cb) {
  mocha.runTheTest(test, cb); // "runTheTest" is hypothetical pseudo-API
});
This would run the tests in parallel if they use asynchronous I/O, and in series if they do not; scaling this type of thing to the number of cores on your machine makes very little sense.
The benefits of running each test in a separate process are so painfully obvious that it's hardly worth arguing about at all.
@ORESoftware Sorry, I must have not made it clear - we run tests in parallel processes but not each file in a separate process. If you have 80 files and 8 cores, we'll run 8 processes with 10 files in each in parallel.
The benefits of running each test in a separate process are so painfully obvious that it's hardly worth arguing about at all.
Well, let's argue. Running 80 processes for 80 files on an 8-core machine is not going to be faster than what we do, and not only because of the unnecessary parallelization. Each process takes time to spin up, and then each test file has a number of dependencies to load: mocha, an assertion library, required source files, and node modules. It's more performant to reuse processes when possible than to use a process per file.
@ORESoftware I'm not sure where @ArtemGovorov said anything about the tests running under one process.
Also, I think forking processes to run tests is a given; I'm not sure who is saying we shouldn't? I imagine Mocha 3 would take the same strategy wallaby does, which is to fork _n_ processes and attempt to reuse them.
We're not going to remove fundamental concepts (which are arguably universal to test frameworks) like beforeEach(). Again, it's my hope we can release Mocha 3 so that nobody will need to modify a test file.
If your aim is to tell us how you do not like the BDD API, then mission accomplished. I'm finding your tone and participation in this issue a little less-than-helpful, however.
Fair enough, good luck
@ArtemGovorov - sorry, I misread your post. I see now that you are using as many processes as cores and then running multiple test files per process, splitting them up in one way or another.
My recommendation, for ease of implementation, understanding on the part of devs, and maximum correctness, is to just run one file per process, no exceptions. The outliers at 40+ test files in a suite can roll their own solution. The slowdown in tests is usually I/O, so having more processes than cores really is not that big of a deal. I don't know what the average test suite size is, but I assume fewer than 30 files. Trying to match process count to core count is not that useful; the most useful part of separating into different processes is correctness, and that is best accomplished by a 1:1 ratio.
@ORESoftware
Implementation simplicity is subjective. The value of test isolation (if that's what you mean by "correctness") is tricky to measure; as I have said, I think it's the user's responsibility to isolate tests, not the test runner's. Besides, a test runner cannot guarantee full isolation unless it runs every test in a separate process (which is not really a good idea); otherwise, there are still tons of ways to accidentally write non-isolated tests, even within a single file.
So let's measure something objective, talk numbers. On my 2014 MBP:
require('mocha');
require('chai');
require('babel-core');
require('express');
I have taken those as an example; people may have other libs, like TypeScript or CoffeeScript instead of Babel, some other web framework, or other libs, plus other node modules their code needs. I can safely assume +140 more milliseconds per process with those.
So, ~700 milliseconds per process spawn. And that's optimistic; for example, the AVA guys have spotted that it takes ~1 second just to start their runner.
If you have 4 logical cores and 20 test files (and I have seen a few projects with 300 test files, but let's take your number), then with the process per file approach you'll spend:
~20 * (time_to_run_a_test_file + 700) / 4
while if we split tests into 4 groups of 5 tests and run them in 4 processes with process reuse, we'll roughly spend:
~20 * time_to_run_a_test_file / 4 + (700 * 4) / 4
So your approach is roughly 3 seconds slower on 20 files. It's a very rough approximation and depends on the modules you use, etc., but I hope you get the idea.
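Plugging in an arbitrary per-file runtime to sanity-check that figure:

// all numbers are placeholders taken from the estimate above
const files = 20, cores = 4, spawnMs = 700, runMs = 1000;

const processPerFile  = files * (runMs + spawnMs) / cores;                  // 8500 ms
const reusedProcesses = files * runMs / cores + (spawnMs * cores) / cores;  // 5700 ms

console.log(processPerFile - reusedProcesses); // 2800 ms, i.e. roughly 3 seconds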
@ArtemGovorov good analysis... I generally think we need to help out our QA folks by reducing test runs from 30 seconds down to 10 seconds or less. An extra 3 or 4 seconds to launch 5-15 extra processes is OK, I think.
Furthermore, as for memory issues: Node has an upper limit of ~512MB on 32-bit systems and ~1700MB on 64-bit systems.
In my brief research, you can reasonably limit a Node.js process that runs tests to 100MB of memory. Of course, there is a chance you will blow the memory limit no matter what, and hopefully you'd have some sort of hook that can report that(?). You could also put memory limits in configuration.
At 20 processes, that's 2GB of memory, which is OK for most modern machines. At 40+ processes, you'd probably have to be on a high-performance machine that is not your local workstation.
You can limit Node.js memory by using this argument:
node --max_old_space_size=100 file.js
This option can be seen using:
node --v8-options
Also, I think forking processes to run tests is a given; I'm not sure who is saying we shouldn't? I imagine Mocha 3 would take the same strategy wallaby does, which is to fork n processes and attempt to reuse them.
@boneskull Using the wallaby way in mocha will definitely speed things up. It should also be configurable (to be able to run all tests in a single process, like now), because the multi-process run may break some code coverage tools' reports (like istanbul or blanket; nyc should be ok).
What about allowing extensions on .mocharc files? That's what ESLint does
You can have .mocharc.json, .mocharc.js, .mocharc.yaml, or even .mocharc.babel.js. These formats can be enabled by plugins, with Mocha's core only implementing JSON and/or JS.
Editor syntax highlighting will work out-of-the-box. It bugs me when my editor doesn't understand an .rc file because it has no extension.
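A minimal lookup sketch, assuming that set of extensions and this order of precedence:

const fs = require('fs');
const path = require('path');

const CANDIDATES = ['.mocharc.js', '.mocharc.json', '.mocharc.yaml', '.mocharc'];

function findMochaRc(cwd) {
  for (const name of CANDIDATES) {
    const file = path.join(cwd || process.cwd(), name);
    if (fs.existsSync(file)) return file;   // first match wins
  }
  return null;                              // fall back to defaults
}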
the other bug that I really don't like in Mocha is this one
//file1.js
describe('xxx', function () {
  console.log('xxx');
});

//file2.js
describe('yyy', function () {
  console.log('yyy');
});
mocha --grep "xxx"
will log
yyy
Now, I don't know why that bug exists. I doubt it's because it couldn't be fixed easily with the existing code organization, but on the small chance that it is because of this ---> rewrite
@ORESoftware I'm unclear why you're talking about bugs in this issue. Please stop commenting in it unless you have anything helpful to offer.
What about allowing extensions on .mocharc files? That's what ESLint does
Indeed, that's the idea
Randomization as a plugin using the API, I guess.
As a comment on the idea of mocha being unopinionated: I think docker's approach has much merit: 'batteries included but removable'
Make it so that mocha does everything it _needs_ to do, but it can be swapped out. That way beginner users have a full setup, but those with more advanced needs can swap in more advanced plugins.
I'm unsure whether to push this code to the v3.0.0 branch or call it v4.0.0. I'd love it if someone could look at the stuff in the v3.0.0 branch and see if it's important enough to have its own release...
@boneskull - what did you mean in the OP about Mocha not being able to leverage domains? Are you referring to the domain core module, and if so, how do you mean that Mocha can't leverage them?
domain core module and if so how do you mean that Mocha can't leverage them?
They're not available in the browser, and the domain API is being deprecated. See https://github.com/nodejs/node/issues/66
And something like https://github.com/nodejs/node-v0.x-archive/issues/5243 wouldn't be quite as beneficial as we can't bind contexts to arbitrary functions within a test's function.
What I meant was: if your code under test is running in a domain and throws or emits an error, mocha cannot catch it.
I see domain support as a necessary evil, but it doesn't need to be part of core.
Look at lab for a solution; you basically just listen on process.domain for errors.
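Roughly the shape of the domain-based approach (a sketch of the general idea, not lab's actual implementation; note the domain module is deprecated):

const domain = require('domain');

function runTestInDomain(testFn, done) {
  const d = domain.create();
  d.on('error', err => done(err));  // async throws/'error' events inside the test land here
  d.run(() => testFn(done));
}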
The only big thing, which is actually not so big, is proper browser/HTML report support, which can generate diffs. See the still-unsolved https://github.com/mochajs/mocha/issues/1348.
Nothing has happened here for quite a long time, so I'm closing this; we can readdress it in another issue or discussion medium.