Automating checks for "good" unit tests

by thegreendroid   Last Updated May 25, 2017 01:05 AM

There's already a question about How to write good unit tests.

Based on the answers provided there, the key properties of a good unit test are:

  • Short
  • Readable and conveys intent instantly
  • Independent
  • Fast
  • Repeatable
  • Tests one thing (SRP)
  • Has a good name

Keeping those properties in mind, how would one go about automating checks for ensuring only "good" unit tests are merged back to the main codebase?

I am absolutely of the opinion that automating these checks is the way to go, if it can be reasonably achieved. There are so many things a reviewer needs to watch out for when accepting a merge request - clean code, design, architecture, clean tests, etc. - so reducing the burden by automating checks that used to be manual is always welcome.



3 Answers


Let's sort your properties by ease of automated checking:

  • Fast - Most IDEs already tell you this

  • Short - Your line count tells you this (so long as you don't abuse white space)

  • Repeatable - Rerunning already tells you this

  • Independent - This could be done by listing what files the test requires, and what they require ...

  • Tests one thing (SRP) - Count your asserts. Is there more than one? (A rough way to automate this check, along with the line count, is sketched after this list.)

  • Readable - Simple, write an AI that writes code and invite it to the code review. Ask what it thinks.

  • Has a good name - Hope the AI is smarter than humans because even we suck at this.
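To make the "Short" and "Tests one thing" checks concrete, here is a rough sketch of what such a gate could look like, assuming pytest-style test files parsed with Python's ast module; the thresholds and the command-line usage are arbitrary assumptions, not an established tool:

    import ast
    import sys

    MAX_LINES = 15     # "Short" threshold - an assumed value
    MAX_ASSERTS = 1    # "Tests one thing" threshold - an assumed value

    def check_test_file(path):
        """Flag test functions that are too long or assert more than one thing."""
        tree = ast.parse(open(path).read())
        problems = []
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
                length = node.end_lineno - node.lineno + 1
                asserts = sum(isinstance(n, ast.Assert) for n in ast.walk(node))
                if length > MAX_LINES:
                    problems.append("%s: %d lines (max %d)" % (node.name, length, MAX_LINES))
                if asserts > MAX_ASSERTS:
                    problems.append("%s: %d asserts (max %d)" % (node.name, asserts, MAX_ASSERTS))
        return problems

    if __name__ == "__main__":
        issues = check_test_file(sys.argv[1])   # e.g. python check_tests.py test_example.py
        for issue in issues:
            print("WARN:", issue)
        sys.exit(1 if issues else 0)

A check like this misses unittest-style self.assert* calls and can be gamed, so it works better as a warning on the merge request than as a hard gate.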

candied_orange
May 25, 2017 00:39 AM

Your characteristics of unit tests are missing some important features, in my opinion:

  1. Reflects, and is traceable to, the requirements
  2. Tests all of the requirements for that unit under test
  3. Covers all corner cases
  4. Tests every line of the code & possibly every decision path

The main point of a good test is that it fails when something is wrong, does not fail when nothing is wrong, and lets you find out what was wrong. So look for:

  1. Comprehensive
  2. Accurate
  3. Complete
  4. Good, clear failure messages stating what failed and how.
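On that last point, a tiny illustration of the difference a failure message makes; the parse_config helper is a made-up stand-in for whatever unit is under test:

    def parse_config(text):
        # Stand-in unit under test: parses "key=value" pairs separated by spaces.
        pairs = (item.split("=", 1) for item in text.split())
        return {key: value for key, value in pairs}

    def test_parse_config_reads_port():
        cfg = parse_config("host=db port=5432")
        # Weak: on failure this reports only a bare AssertionError.
        # assert cfg.get("port") == "5432"
        # Better: the failure message states what failed and how.
        assert cfg.get("port") == "5432", "expected port '5432', got %r" % cfg.get("port")

    if __name__ == "__main__":
        test_parse_config_reads_port()
        print("ok")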
Steve Barnes
May 25, 2017 05:17 AM

As already mentioned, a good test fails when the system under test experiences "breaking" changes.

To automatically evaluate new unit tests based on the above criteria, you could try to implement mutation testing:

  • Determine what parts of the project are covered by the new test.
  • Generate some mutants by applying (one or more) small modifications (switching operators and such) in those parts.
  • Run the new test on each mutant. If the test fails, that's good (the test could be too strict, but that's not so much an issue compared with a test that's too weak). If the test doesn't fail, then you probably need some human review of the modifications of the corresponding mutant; it could be an indication that the test is too weak or doesn't cover all cases.

You'll probably get lots of false negatives at first. This will improve with careful selection of mutation operations that actually lead to failures. For example, swapping adjacent declarations of local variables is rather unlikely to yield significant errors.
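A minimal, hand-rolled sketch of that procedure, assuming Python and a single mutation operator (turning + into -); the module calculator.py and its add function are hypothetical, and in practice an existing tool such as mutmut or Cosmic Ray (Python), PIT (Java) or Stryker (JS/.NET) would generate and run mutants far more thoroughly:

    import ast

    class SwapAddSub(ast.NodeTransformer):
        # Mutation operator: one small modification, replacing every + with -.
        def visit_BinOp(self, node):
            self.generic_visit(node)
            if isinstance(node.op, ast.Add):
                node.op = ast.Sub()
            return node

    def run_test_against_mutant(source_path, test):
        # Generate a mutant of the unit under test.
        tree = ast.parse(open(source_path).read())
        mutant = ast.fix_missing_locations(SwapAddSub().visit(tree))

        # Execute the mutated module in a fresh namespace.
        namespace = {}
        exec(compile(mutant, source_path, "exec"), namespace)

        # A good test should fail (raise AssertionError) against the mutant.
        try:
            test(namespace)
            print("Test did NOT fail on the mutant -> it may be too weak, review it")
        except AssertionError:
            print("Test failed on the mutant -> good, it detects the change")

    def test_add(module):
        # Hypothetical unit test, run against the (possibly mutated) module namespace.
        assert module["add"](2, 3) == 5

    if __name__ == "__main__":
        run_test_against_mutant("calculator.py", test_add)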

Daniel Jour
May 25, 2017 11:05 AM
