Vue-test-utils: [Feature Request] Better Documentation for Unit Testing

Created on 20 Jul 2017 · 13 comments · Source: vuejs/vue-test-utils

Current unit-testing resources for Vue are pretty scarce.
The documentation for avoriaz (thank you, @eddyerburgh, for bringing this framework up!) is great and really helpful, but not all developers are aware of the better ways to approach unit testing in Vue. Currently, I'm trying to introduce avoriaz to the JS devs in my company, and most of them are not doing Vue unit testing the right way (in my opinion).

Take, for example, this issue, where we have to wrap the unit test in an extended Vue instance, which I believe most people are not aware of: https://github.com/vuejs/vue-test-utils/issues/2

My proposal is to document the "right" way to unit test in Vue's documentation in order to establish a standard approach, which, I believe, will be to use vue-test-utils as the unit-test framework.

Can this request be categorized as a feature request?
Thanks! 😄

p.s.: I don't intend to push developers to use one specific unit-testing framework. Every developer has the freedom to use or create something better; otherwise we wouldn't be using webpack or rollup now.

enhancement


All 13 comments

@willypt You're right, we need better documentation.

I think @chrisvfritz will be updating the docs once vue-test-utils is released.

There's a guide section in vue-test-utils. We will add examples there in the future that might help define the right way to unit test Vue.

@eddyerburgh I'm now leaning towards changing it to a general Testing page with a Unit Testing section, which would provide:

  • a couple simple examples of how to test Vue components without vue-test-utils
  • an example of where unit tests can get trickier (as a segue into vue-test-utils)
  • a brief description of the problems vue-test-utils solves, with a link to a getting started guide in vue-test-utils' own docs

What do you think? And since you're _literally_ writing the book on testing Vue applications, 🙂 would you like to author that page, or at least the unit testing section?

Haha, yes I'll write the page 😛

I think what would be very helpful is understanding _what_ to test, and how to make sure tests are not brittle. For example, a big question mark for me is how to test component rendering. If I use selectors, I feel that the future of my tests is now in the hands of my designers. Yet just about all the introductions to testing in Vue show things like `wrapper.find('.selector')[0].text()`.

@blocka I test all the logic inside my components. Often that means using selectors to assert the output. I understand that it can seem brittle. You could use general selectors to avoid having to rewrite tests if your designers decide to rename classes, but tests on the VDOM are always going to be tightly coupled to the markup.

Apart from that, you can test component methods (although if you aren't asserting against the VDOM, you can do this without mounting the component). But most of my tests assert against the VDOM/DOM.
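One common way to soften that coupling between tests and designer-owned class names is to select on a dedicated test attribute instead. This is a sketch of that convention, not a vue-test-utils API; the `data-test` attribute name and the `byTestId` helper are assumptions made for illustration:

```javascript
// Sketch: build selectors from a dedicated test attribute, so designers
// can freely rename CSS classes without breaking the test suite.
// The `data-test` attribute name is a convention, not a framework feature.
function byTestId(id) {
  return `[data-test="${id}"]`;
}

// In a template you would write:
//   <button data-test="submit" class="whatever-the-designers-want">Save</button>
// and in a test you would then do something like:
//   wrapper.find(byTestId('submit')).trigger('click')
```

The trade-off is that templates accumulate test-only attributes, but the tests no longer depend on styling hooks.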

Although it's a little out of the scope of this project, I feel snapshot testing could work a little better for testing renders. With snapshot testing, the designers have the ability to "fix" the broken tests themselves. They are merely shifting focus from switching to the browser to see the result, to using the console to see the result (which could even be in the same application, such as with VS Code).

Of course, snapshot testing is a really sharp double-edged sword... it actually does make the tests brittle in a different way, as now any insignificant change will cause the test to "fail" (although technically, to the designers, these changes are not insignificant at all... every class and every DOM element counts). And of course, there is always the 2am emergency bug fix scenario.
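The mechanics being described here can be illustrated without any particular tool: a snapshot test just serializes the rendered output, stores it, and fails on any later difference until someone deliberately re-records the snapshot. This is a minimal in-memory sketch of that idea, not the Jest API (real tools like Jest's `toMatchSnapshot` persist snapshots to disk and offer an update flag):

```javascript
// Minimal illustration of how snapshot testing works under the hood.
// Snapshots are kept in memory here purely for clarity.
const snapshots = {};

function matchSnapshot(name, rendered) {
  if (!(name in snapshots)) {
    snapshots[name] = rendered; // first run: record the snapshot
    return { pass: true, firstRun: true };
  }
  // later runs: ANY markup change fails, even a renamed class
  return { pass: snapshots[name] === rendered, firstRun: false };
}

// "Fixing" an intended change means deliberately re-recording:
function updateSnapshot(name, rendered) {
  snapshots[name] = rendered;
}
```

This makes the double edge visible: nothing distinguishes a bug from an intentional redesign; a human has to decide which it is before updating the snapshot.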

I still feel a little queasy about using DOM selectors for the purposes of code... this is something I thought we ditched from the jQuery age ([data-please-dont-touch-this-attribute]) when we started using declarative templates.

However, it's starting to sound like there is no golden rule here, and I will just have to try and see how it goes. At the moment the designers have not even stepped into the current project I'm working on, so I don't even have any classes at all, the markup will likely change wildly, etc., so I'm just trying to think ahead while I'm writing tests.

@blocka Yep, I totally understand your concern. Testing UI components can be tricky and, as you said, there is no golden rule.

I'd be interested to see what solutions you come up with for your project at the moment 😀

And agreed, snapshot testing is definitely useful. I'm going to look into that once vue-test-utils is released. Currently we're having problems with shallow and snapshot testing in Jest.

By the way, here is a technique that I didn't see anywhere... perhaps because it's overkill.

Let's say you have a button:

```html
<button class="button" @click="doSomething">
```

so to test it, we would have to do:

```js
wrapper.find('.button')[0].trigger('click')
```

Instead, I'm toying with:

```html
<Button @click="doSomething" />
```

and:

```js
wrapper.find(Button)[0].trigger('click')
```

@blocka Do you mean that you wrap an already simple button into a component just so that you can test it? Yes, it is overkill IMO 🤣

I agree with @eddyerburgh that we can't provide users a simple answer on what to test. I also find that what to test in Vue isn't really different from other applications. However, since I sadly don't know of a single resource that provides a really good answer to this question (though I'm sure one must exist somewhere), I think we _can_ provide some guidelines.

Personally, I like to prioritize tests by asking 2 categories of questions:

  • __Importance__: How bad would it be if it broke?
  • __Urgency__: How likely is it to break?

Let's apply these to a hypothetical component wrapper for font-awesome and other icons:

  • Importance: If it broke and icons weren't rendering anymore, it would decrease the quality of the experience and hurt the brand image. For a chain of restaurants, this would probably just be mildly annoying. For an online store, where trust needs to be high, it could permanently alienate customers.
  • Urgency: If it's a really simple component that is not frequently changing, it's probably extremely unlikely that it will break. If we're adding new icons all the time and making changes to the interface of the component, there's more risk.

Then I use these answers to generally prioritize tests in this order:

  1. When important _and_ urgent, always include robust tests before shipping, erring towards over-testing.
  2. When important, but not urgent, write the 20% of tests that will catch 80% of bugs before shipping, but catch the last 20% of bugs before any significant refactors that touch this code.
  3. When urgent, but not important, write the 20% of tests that will catch 80% of bugs, _ideally_ before shipping, then only add more if something breaks.
  4. When not important and not urgent, don't test.
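The four priorities above can be condensed into a tiny lookup, purely as an illustration of the rubric (the function and its return strings are made up for this sketch, and both axes are treated as booleans even though, as noted below, they are really a continuum):

```javascript
// Sketch of the importance/urgency testing rubric.
// important: how bad would it be if it broke?
// urgent:    how likely is it to break?
function testingStrategy(important, urgent) {
  if (important && urgent) return 'robust tests before shipping';
  if (important) return '20% of tests now, the remaining 80% before big refactors';
  if (urgent) return '20% of tests, ideally before shipping';
  return "don't test";
}
```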

This is a slight over-simplification because importance and urgency aren't binary, but on a continuum. Their evaluation can also be pretty complex and answers are inevitably subjective. That means these are _tools_ to be used in conversations, not a simple rubric that can be mindlessly followed.

The team and development process for each project also have to be considered. For example, if the plan is to eventually hand things off to a different team, tests serve a larger role. If there's a strict deadline, so developers have to move fast, it will be more important not to over-test code that is frequently changing. Usually, there are a lot of conflicting priorities and we don't always have all the information we'd like.

These tools for prioritization can also be used at the micro level: when you know you want to test a feature, figuring out _which parts/levels_ to test. I've seen a lot of tests that check that basic Vue functionality works (e.g. that the starting value of a data property is set correctly). These are examples of testing something of extremely low urgency, since such tests _very_ rarely catch any bugs.

I think more often, they lead to developers not wanting to touch code, for fear of having to hunt down and fix breaking tests when nothing's actually broken. Broken tests become just a sign that "we changed some code today." This leads to the opposite of the intended effect, where devs see broken tests during CI and think, "Oh, it's probably nothing. We want to get this feature out, so let's ship to prod and we'll fix the tests later." Inevitably, it turns out something really _was_ broken - you just didn't trust your tests, because they're usually lying. (To be fair, I think the docs I wrote are to some extent to blame for this pattern, since we currently use only very simple and contrived examples.)

@blocka Is that the kind of information you're looking for?

Sounds good. Great stuff to put in the manual :smile:
Right now we're writing our first app with actual tests. We've written scores of apps with Vue, and many more in pre-Vue days, but never with tests. We know we've been burned in the past by code that accidentally broke something else we weren't thinking of, so we know we definitely need some kind of tests.

At the moment I'm settling on a strategy of "let's see how long I can get away with interpreting, in code, what I would be testing manually by looking at my browser." Practically speaking, this means writing classical unit tests (especially when testing state management stuff) and "light" integration tests, where a mocked system is set up but there is no specific SUT... we are testing a specific "story" (maybe these are acceptance tests?).

We've done a huge overhaul of the docs. There's lots of info on what to test, which test runners to use, and how to set up Vue Test Utils:

https://vue-test-utils.vuejs.org/en/

Closing this issue, but happy to reopen if you think anything is missing from the docs 🙂

https://vue-test-utils.vuejs.org/ is the updated link, I think. /en/ is not working on my side.
