ImageSharp: Improve unit test speed

Created on 17 Dec 2016 · 10 comments · Source: SixLabors/ImageSharp

I noticed the unit test suite of the project takes about 4 mins to run, which is pretty slow for unit tests.

Martin Fowler has some recommendations on unit testing speed, but generally a single test should run in the range of milliseconds and the whole suite in seconds.

We can look into how to speed them up; perhaps some expensive initialization can be moved out of individual tests. Currently there are 119 tests that run for more than 1s. Here's the breakdown.

[Screenshot: breakdown of the 119 unit tests running longer than 1s]

All 10 comments

A big part of why this is slow is the performance of the image decoders and encoders. We are working on improving them, and help with this would be really appreciated.

I see 😄 I had a quick look through the tests in the slow bucket; most of them read all test files from disk. I know MSTest has an AssemblyInitializeAttribute that we could use to read the files once per test suite. Not sure if xUnit has it, but that could go a long way even with the decoders/encoders being slow.

cc @antonfirsov since I see he has a PR out for refactoring some of the testing #42

Yeah... Ideally we should load the images once and then clone them when testing, using the new Image(image) constructor. This should greatly reduce the overall testing time.
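A minimal sketch of that load-once, clone-per-test idea. The Image(image) copy constructor is the one mentioned above; the TestImages class name, the file path, and the Image(Stream) constructor are illustrative assumptions, not actual ImageSharp test utilities:

```csharp
using System;
using System.IO;

public static class TestImages
{
    // Decoded once per test run, on first access (Lazy<T> is thread-safe by default).
    private static readonly Lazy<Image> CachedCar = new Lazy<Image>(
        () => new Image(File.OpenRead("TestImages/Car.bmp")));

    // Each test gets an independent copy via the Image(image) copy constructor,
    // so mutations made by one test cannot leak into another.
    public static Image Car()
    {
        return new Image(CachedCar.Value);
    }
}
```

Cloning an already-decoded image is a memory copy, so the per-test cost drops from a full decode to a buffer copy.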

However, I'm not sure how we would then test output appropriately. Certain tests we can run and mathematically ensure output accuracy but others require human interaction to test the output visually.

I think we should definitely try to improve the test performance but also understand that testing this library is a different beast to testing many libraries.

What's everyone's thoughts on this?

On AssemblyInitializeAttribute: xUnit doesn't have a similar attribute, nor is one being added (https://github.com/xunit/xunit/issues/431), though they demonstrate achieving the same effect through their extensibility model.

https://github.com/xunit/samples.xunit/tree/27d39b6833c3bd1ed607054db99bb925f810d456/AssemblyFixtureExample
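The sample linked above boils down to sharing one fixture instance across many test classes. A sketch using xUnit 2's collection fixtures (the fixture contents and names here are hypothetical; only the CollectionDefinition/ICollectionFixture mechanism is xUnit's real API):

```csharp
using System;
using System.IO;
using Xunit;

// Constructed once for all tests in the "TestImages" collection: roughly the
// equivalent of MSTest's AssemblyInitialize, built from xUnit primitives.
public class TestImageFixture : IDisposable
{
    public byte[] CarBytes { get; private set; }

    public TestImageFixture()
    {
        // Expensive one-time setup: read the test files from disk here.
        this.CarBytes = File.ReadAllBytes("TestImages/Car.bmp");
    }

    public void Dispose()
    {
        // One-time teardown, runs after the whole collection finishes.
    }
}

[CollectionDefinition("TestImages")]
public class TestImageCollection : ICollectionFixture<TestImageFixture>
{
    // Marker class: never instantiated, just binds the fixture to the collection name.
}

[Collection("TestImages")]
public class DecoderTests
{
    private readonly TestImageFixture fixture;

    public DecoderTests(TestImageFixture fixture)
    {
        this.fixture = fixture;
    }

    [Fact]
    public void CanDecodeCar()
    {
        // Operate on the cached bytes instead of re-reading from disk.
        Assert.NotEmpty(this.fixture.CarBytes);
    }
}
```

xUnit injects the shared fixture through the test class constructor, so every class tagged [Collection("TestImages")] sees the same instance.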

@olivif @JimBobSquarePants the most significant (and kind-of desired) outcome of my PR could be an increased number of unit tests, which, well... makes the execution time even longer. The standard approach to unit tests is "make more of them, run them all!". I think the main issue is that most of the test cases in the library are not actual unit tests.

In my opinion, discouraging testing and not increasing (or decreasing) the current coverage would be a very bad solution. If the pixel type Rgba1010102 and the Jpeg codec are both supposed to be working & supported library features, we need to prove with automated tests that the Jpeg codec works for Rgba1010102 with different codec configurations. I think we need more tests, but maybe we don't want to run all of them in all scenarios. We need to ask questions like:

  • How should we manage the growing {pixel types} × {input images} × {features} test space while keeping the whole set under coverage? We can try optimizations and caching, but we can't win against a K × N × M × Width × Height-sized space.
  • If it's technically possible to disable/enable tests by specific testing configurations (it is!), shouldn't we reorganize the tests & categorize them by different testing scenarios?
  • Which tests are actual unit tests (operating on a small amount of data, running quickly, having assertions)?
  • What tests are intended to be verified visually?
  • What tests should be executed by CI, can we ignore some of them on AppVeyor?
  • Do developers working on specific features need to run all the tests when they apply modifications?
  • With the current developer capacity, is it possible to increase the set of "true" unit tests in order to reduce the other (functional/manual/visual) set? (I think this is practically impossible now :( )
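The reorganize-and-categorize idea from the questions above can be expressed with xUnit traits, which test runners can filter on. The category names and test bodies here are hypothetical; [Trait] itself is xUnit's real attribute:

```csharp
using Xunit;

public class JpegTests
{
    // Tag slow end-to-end tests so they can be excluded from fast local runs
    // and enabled only where we want them (e.g. a nightly CI job).
    [Fact]
    [Trait("Category", "EndToEnd")]
    public void EncodeDecodeRoundTrip()
    {
        // ...expensive encode/decode over real test images on disk...
    }

    // True unit test: small data, in-memory, asserts a result.
    [Fact]
    [Trait("Category", "Unit")]
    public void HuffmanTableLookupIsCorrect()
    {
        // ...fast, purely computational assertion...
    }
}
```

With the 2016-era xUnit console runner, `xunit.console.exe Tests.dll -notrait "Category=EndToEnd"` skips the slow bucket; with the modern .NET SDK the equivalent is `dotnet test --filter "Category!=EndToEnd"`.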

@antonfirsov I think you're right on the money there; the slow tests we have are mostly end-to-end tests, actually. You also touched on a great point: we cannot win against a k × n × m space.

I remember reading about this exact problem on the Google Testing Blog - "Just Say No to More End-to-End Tests". What they recommend is the following test pyramid pattern:

  • 10% end-to-end tests
  • 20% integration tests
  • 70% unit tests

[Image: test pyramid diagram]

Sounds a bit counter-intuitive, but testing all possible combinations of the product - end to end - is a losing battle. We could also go down the route of only enabling some tests in some configurations, but then we would run into test lab/flakiness problems.

"With the current developer capacity, is it possible to increase the set of 'true' unit tests in order to reduce the other (functional/manual/visual) set? (I think this is practically impossible now :( )"

I think this would provide immense value to the team; imagine how much faster we could iterate and focus on the product. For instance, while I was making the StyleCop changes I had 3 commits; for every commit I ran the tests about 2 times, plus once before submitting the PR. A test run is 4 mins, so that was 28 mins in total.

I did a little test today to see how much coverage the slow tests are giving us; the good news is it's not that much, actually. Or at least not as much as I expected 😄

Coverage with all tests (slow, medium, fast): 63%

[Screenshot: coverage_all]

Coverage with fast tests only (medium, fast): 42%

[Screenshot: coverage_fast]

The biggest gaps I saw were in the Processors and Quantizers.

@olivif are you running the tests in release mode? I'm sure I've found that they run quicker in release mode (but I could be making that up).

@tocsoft They'll run an order of magnitude quicker in release mode. No bounds checks on the indexer, for one.

wow, thanks @tocsoft, that was an awesome tip. They run in a minute in release mode.

I'm gonna close this. Our tests are a different beast after two years.
