Description
I would like to be able to:
Why?
While in a loading state, waiting for the API call to resolve, the call can resolve with different data.
Let's say I was checking the boolean `isVerified`:
- `isVerified: true`, then I want to go to the verified state.
- `isVerified: false`, then I want to go to the ineligible state.

Expected Result
When executing:
testModel.testCoverage({ /* code here */ })
I would expect to be able to provide different contexts to iterate over my test paths with, perhaps. Not sure, this is more of a feature request.
Actual Result
I am unable to cover all test paths when an API call has to return a boolean value that I'm using in a guard function for my machine.
Reproduction
https://codesandbox.io/s/interesting-brattain-lbyue
Additional context
xstate version: 4.9.1
@xstate/test version: 0.4.0
So, there actually is a solution for this.
When creating your test machine, if you need to decide which path to take (`isVerified: true` or `isVerified: false`), you can simply add an `actions` property to your `verifiedStatus` state.
In this `actions` property, you can use `assign` to update the context so the walk goes down whichever path you want. In this case, use `assign` to set `isVerified` to `true` when you want to go to your `isVerified` state, and set it to `false` when you want to go to your `isNotVerified` state.
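To illustrate the idea in plain JavaScript (this is a toy sketch, not the real xstate API; `chooseNext` and the state names are stand-ins I made up): whichever value an assign-style action wrote into context decides the branch a guard takes.

```javascript
// Sketch: the guard reads context.isVerified, so the value an
// assign-style action wrote beforehand determines the branch taken.
// `chooseNext` is a hypothetical stand-in for xstate's guard evaluation.
function chooseNext(context) {
  return context.isVerified ? 'isVerified' : 'isNotVerified';
}

// Simulate the two assigns described above:
console.log(chooseNext({ isVerified: true }));  // 'isVerified'
console.log(chooseNext({ isVerified: false })); // 'isNotVerified'
```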
Then when using @xstate/test:
```tsx
import axios from 'axios';
import { act, cleanup, render } from '@testing-library/react';

jest.mock('axios');
const mockedAxios = axios as jest.Mocked<typeof axios>;

describe('YOUR TEST MACHINE HERE', (): void => {
  const testPlans = YOUR_TEST_MACHINE_HERE.getSimplePathPlans();

  testPlans.forEach((plan): void => {
    describe(plan.description, (): void => {
      afterEach(cleanup);

      plan.paths.forEach((path): void => {
        it(path.description, async (): Promise<void> => {
          await act(async (): Promise<void> => {
            mockedAxios.get.mockResolvedValue({
              data: {
                isVerified: plan.state.context.isVerified,
              },
            });
            const rendered = render(<YOUR_CUSTOM_REACT_COMPONENT />);
            await path.test(rendered);
          });
        });
      });
    });
  });

  it('should have full coverage', (): void => {
    return YOUR_TEST_MACHINE_HERE.testCoverage({
      // Don't check states without a meta object
      filter: (stateNode): boolean => !!stateNode.meta,
    });
  });
});
```
I hope this helps someone else who is mocking an API call with axios and trying to use XState's testing library.
@davidkpiano -- If you feel like this is a good enough use case for some documentation, I can write something up I suppose or we can modify it from here. However, if I approached the problem wrong, or if this was already covered in the docs and I missed it, let me know. Thanks!
You can do this in two ways. Based on what you are describing, I highly recommend the first option: using two separate states, one for each situation the API can put you in.
```js
const testModelMachine = Machine({
  initial: 'BlankForm',
  states: {
    BlankForm: {
      meta: { test() {…} },
      on: {
        ENTER_INVALID_DATA: 'InvalidEntry',
        ENTER_VALID_DATA: 'ValidEntry'
      }
    },
    InvalidEntry: {
      meta: { test() {…} }
    },
    ValidEntry: {
      meta: { test() {…} },
      on: {
        SUBMIT: 'ThankYouMessage'
      }
    },
    ThankYouMessage: {
      meta: { test() {…} }
    }
  }
});

const testPlans = createModel(testModelMachine).withEvents({
  async ENTER_INVALID_DATA() { … },
  async ENTER_VALID_DATA() { … },
  async SUBMIT() { … }
});
```
If you find that the choice needs to happen later down the line, based on previous situations, you can avoid duplicating nodes by saving a value to the machine's context and then guarding on that value later. A context change is considered another path by the path-walking algorithm, so more than one walk down the same path can be done with different context values.
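The point about a context change counting as another path can be sketched in plain JavaScript (a toy walker I wrote for illustration, not xstate's actual algorithm; the transition table and `assign` field are made up): visited nodes are keyed on state plus serialized context, so the same target state reached with different context values yields distinct paths.

```javascript
// Toy path walker: a node is (state, context), not just state, so two
// transitions into the same state with different assigned context
// produce two separate paths.
function walk(transitions, state, context, path = [], seen = new Set()) {
  const key = state + JSON.stringify(context);
  if (seen.has(key)) return [path];
  const next = transitions[state] || [];
  if (next.length === 0) return [[...path, state]];
  return next.flatMap(({ target, assign }) =>
    walk(
      transitions,
      target,
      assign ? { ...context, ...assign } : context,
      [...path, state],
      new Set([...seen, key])
    )
  );
}

const transitions = {
  form: [
    { event: 'SUBMIT_OK', target: 'done', assign: { ok: true } },
    { event: 'SUBMIT_BAD', target: 'done', assign: { ok: false } },
  ],
  done: [],
};

// Both paths end in 'done', distinguished only by context:
console.log(walk(transitions, 'form', {}).length); // 2
```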
Now suppose you are testing a set of identical actions, for example the ability to filter a table by different terms. You could have two states (`BeforeFilter`, `AfterFilter`) and then use the `cases` semantics to run through each filter option. Your `AfterFilter.meta.test` will need to know how to handle several cases, but the case (and the values for it) are provided by the event object.
```js
const testModelMachine = Machine({
  initial: 'Unfiltered',
  context: {},
  states: {
    Unfiltered: {
      meta: { test() { … } },
      on: {
        FILTER: {
          target: 'Filtered',
          actions: assign({ filter: (_, event) => event.filterBy })
        }
      }
    },
    Filtered: {
      meta: {
        test(testContext, state) {
          assert(pageFilteredBy(state.context.filter));
        }
      }
    }
  }
});

const testPlans = createModel(testModelMachine).withEvents({
  FILTER: {
    async exec(testContext, event) {
      await page.filterField(event.filterBy);
    },
    cases: [
      { filterBy: 'field1' },
      { filterBy: 'field2' },
      { filterBy: 'field3' }
    ]
  }
});
```
That said, your specific need can be accomplished with API mocking; you just have to redesign the mocking a little. Instead of defining it in the test itself, you manage it in the events. Here is a basic promise-based version.
```js
const testModelMachine = Machine({
  initial: 'idle',
  states: {
    idle: {
      meta: { test() { … } },
      on: {
        GOOD_API_RESULT: 'HappyPath',
        BAD_API_RESULT: 'SadPath'
      }
    },
    HappyPath: {
      meta: { test() { … } }
    },
    SadPath: {
      meta: { test() { … } }
    }
  }
});

const testPlans = createModel(testModelMachine).withEvents({
  async GOOD_API_RESULT({ continueMockApi }) {
    continueMockApi.resolve({ panda: 'happy' });
    await page.loadingDone();
  },
  async BAD_API_RESULT({ continueMockApi }) {
    continueMockApi.reject(new APIError({ panda: 'sad' }));
    await page.loadingDone();
  }
});

describe('foo', function() {
  beforeEach(function() {
    let mockingPromise = new Promise((resolve, reject) => {
      this.continueMockApi = { resolve, reject };
    });
    // However you manage mocking
    this.mockApi = myService.mockReturn(mockingPromise);
  });

  testPlans.getSimplePathPlans().forEach(plan => {
    describe(plan.description, function() {
      plan.paths.forEach(path => {
        it(path.description, async function() {
          await path.test(this);
        });
      });
    });
  });
});
```
@sukima -- Thank you very much for the quick and detailed response.
Follow-up question: is there anything inherently wrong with the way I implemented it, accessing the context in the `testPlans.getSimplePathPlans()` loop and providing that context to my mock API call?
I do get the desired result (all test paths having coverage now) but I want to make sure I'm not abusing xstate in some way that I do not currently see. :)
You can.

> When creating your test machine, if you need to decide which path to take (`isVerified: true` or `isVerified: false`), you can simply add an `actions` property to your `verifiedStatus` state.
Doing it this way means you have two separate test paths. It might be right for you, but then again it might remove some opportunity to have the computer build those branching paths for you. One of the advantages of model-based testing is the ability to model your intent and let the computer do the work of generating the tests. In your case you are taking some of that generation on yourself.
I don't see any reason this is wrong just different. One of the best things about XState and statecharts in general is there are so many ways to skin a cat and still gain the benefits of using them.
Ah okay, that makes sense. I'll look, either in this implementation or in the next machine I author, at taking a more "hands-off" approach to test generation. I think you make good points.
Thank you again for taking the time to provide examples and also explain what's going on with them!