Actual tests are here: https://github.com/srph/axios-response-logger/blob/master/tests/spec.js
I would assume ordering might explain the behavior, but I'm not sure explicit sandbox creation is the right thing to do anyway. I'll have to investigate later.
Looks like it still doesn't work with axios.
Not sure why I did not close this the first time: upon inspecting this, I see the tests are being run in Node. That is _naturally_ unsupported for an XHR implementation. If the tests were run in a browser, say using mochify or Karma, it would be a different matter, since the XHR stubs target the browser environment.
@brunolm: I still haven't seen a failing test that runs in a browser, so there won't be any looking into this until that happens. Getting a test library's XHR stub to work with an HTTP library that targets both Node and the browser, while running on Node, is definitely out of scope for us 😄 If you are doing something like that, I would try creating an axios equivalent of supertest (for SuperAgent).
FYI: axios already has its own stubbing library, moxios.
I solved this by using axios's request method aliases instead of passing a config object to initiate the request. That is, instead of
axios(config)
I used
axios[method](url, data, config)
So now:
sandbox.stub(axios, "get").returns(Promise.resolve({status:200, body: searchResponse}));
This is cleaner than using the moxios library.