Should I have unit tests for known defects?

by Martijn, Last Updated May 23, 2018 15:05

If my code contains a known defect which should be fixed but isn't yet, won't be fixed for the current release, and might not be fixed in the foreseeable future, should there be a failing unit test for that bug in the test suite? If I add the unit test, it will (obviously) fail, and getting used to having failing tests seems like a bad idea. On the other hand, if it is a known defect with a known failing case, it seems odd to keep it out of the test suite: it should be fixed at some point, and the test is already available.

Tags : unit-testing tdd


7 Answers


If the bug is fresh in your mind and you have the time to write the unit test now, then I would write it now and flag it as a known failure so it doesn't fail the build itself. Your bug tracker should be updated to note that a currently failing unit test exists for this bug, so the person eventually assigned to fix it doesn't write it all over again. This assumes that the buggy code doesn't need a lot of refactoring and that the API won't change significantly; if it will, you might be better off not writing the unit test until you have a better idea of how the test should be written.
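A framework-free sketch of that idea (the `KNOWN_FAILING` set, the test names, and the ticket number are all illustrative, not from the answer): tests flagged as known failures are skipped and reported instead of run, so they don't fail the build, while everything else runs normally.

```java
import java.util.Set;

public class KnownFailureSuite {
    // mirrors the bug tracker: tests listed here are expected to fail
    static final Set<String> KNOWN_FAILING = Set.of("testPlus_BUG12345");

    // returns false only for an *unexpected* failure, i.e. one that should break the build
    static boolean runOrSkip(String name, Runnable test) {
        if (KNOWN_FAILING.contains(name)) {
            System.out.println("SKIP " + name + " (known failure, see bug tracker)");
            return true;
        }
        try {
            test.run();
            System.out.println("PASS " + name);
            return true;
        } catch (AssertionError e) {
            System.out.println("FAIL " + name + ": " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        boolean ok = true;
        ok &= runOrSkip("testPlus_BUG12345",
                () -> { throw new AssertionError("plus(2, 2) returned 2, expected 4"); });
        ok &= runOrSkip("testMinus", () -> { /* passes */ });
        System.out.println(ok ? "BUILD GREEN" : "BUILD BROKEN");
    }
}
```

In a real suite you would use your framework's own mechanism for this (JUnit 4 has @Ignore and test categories, for instance); the point is only that the flag lives next to the test while the bug tracker holds the authoritative record.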

kaared
January 31, 2014 12:47 PM

The answer is yes, you should write them and you should run them.

Your testing framework needs a category of "known failing tests" and you should mark these tests as falling into that category. How you do that depends on the framework.

Curiously, a failing test that suddenly passes can be just as interesting as a passing test that unexpectedly fails.
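One way to make that concrete without framework support (the `knownFailure` helper and the ticket number are illustrative): swallow the expected failure, but throw the moment the assertion starts passing, so a bug that gets fixed as a side effect of other work surfaces immediately.

```java
public class KnownFailure {
    /** Passes while the tracked bug persists; fails loudly once the assertion starts passing. */
    static void knownFailure(String ticket, Runnable assertion) {
        try {
            assertion.run();
        } catch (AssertionError expected) {
            System.out.println(ticket + " still fails, as expected");
            return;
        }
        throw new AssertionError(ticket + " unexpectedly passes: promote it to a normal test");
    }

    public static void main(String[] args) {
        // the Calculator bug used elsewhere on this page: plus(2, 2) currently yields 2
        int buggyResult = 2;
        knownFailure("BUG-12345", () -> {
            if (buggyResult != 4) throw new AssertionError("expected 4, got " + buggyResult);
        });
    }
}
```

This is essentially what pytest calls a strict xfail: the known failure keeps the suite green, and an unexpected pass breaks the build so the test gets cleaned up.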

david.pfx
January 31, 2014 12:47 PM

I think you should have a unit test asserting the current behaviour, and add the correct test and expected behaviour in comments. Example:

@Test
public void test() {
  // this is wrong, it should be fixed some time
  Assert.assertEquals(2, new Calculator().plus(2, 2));
  // this is the expected behaviour, replace the above test when the fix is available
  // Assert.assertEquals(4, new Calculator().plus(2, 2));
}

This way, when the fix is available, the build will fail, notifying you of the failing test. When you look at the test, you will see that the behaviour has changed and that the test must be updated.

EDIT: As Captain Man said, in large projects this will not get fixed anytime soon, but for documentation's sake the original answer is better than nothing.

A better way is to duplicate the current test, make the clone assert the right thing, and @Ignore it with a message, e.g.

@Test
public void test() {
  Assert.assertEquals(2, new Calculator().plus(2, 2));
}

@Ignore("fix me, Calculator is giving the wrong result, see ticket BUG-12345 and delete #test() when fixed")
@Test
public void fixMe() {
  Assert.assertEquals(4, new Calculator().plus(2, 2));
}

This relies on a team convention of keeping the number of @Ignored tests down. It works the same way as introducing or changing a test to reflect the bug, except that it doesn't fail the build, which matters when, as the OP said, the bugfix won't be included in the current release.

Silviu Burcea
January 31, 2014 12:50 PM

I suppose the answer really is: it depends. Be pragmatic about it. What does writing it now gain you? Maybe the bug is fresh in your mind?

When fixing the bug, it makes perfect sense to prove it exists by writing a unit test that exposes the bug. You then fix the bug, and the unit test should pass.

Do you have time to write the failing unit test just now? Are there more pressing features or bugs that need to be written or fixed?

Assuming you have competent bug tracking software with the bug logged in it, there is really no need to write the failing unit test right now.

Arguably you might introduce some confusion if you introduce a failing unit test before a release that is happening without the bug fix.

ozz
January 31, 2014 13:31 PM

I usually feel uneasy about having known failures in test suites, because it's too easy for the list to grow over time, or for unrelated failures in the same tests to be dismissed as "expected". The same thing goes for intermittent failures: there could be something evil lurking in the code. I'd vote for writing the test both for the code as it is now and for how it should behave once fixed, with the latter commented out or disabled somehow.

Rory Hunter
January 31, 2014 13:36 PM

The answer is NO, IMHO. You should not add a unit test for the bug until you start working on the fix. At that point you write the test(s) that prove the bug; once they fail in accordance with the bug report(s), you correct the actual code to make them pass, and the bug is solved and covered from then on.

In my world we would have a manual test case that the QEs keep failing until the bug gets fixed. We as developers would be aware of it via the failing manual TC and via the bug tracker.

The reason for not adding failing UTs is simple. UTs are for direct feedback and validation of what I as a developer am currently working on. And UTs are used in the CI system to make sure I didn't break something unintentionally in some other area of code for that module. Having UTs failing intentionally for a known bug would, IMHO, be counterproductive and just plain wrong.

grenangen
January 31, 2014 14:36 PM

Depending on the test tool you may use an omit or pend function.

Example in ruby:

gem 'test-unit', '>= 2.1.1'
require 'test/unit'

MYVERSION = '0.9.0' # version of the class under test


class Test_omit < Test::Unit::TestCase
  def test_omit
    omit('The following assertion fails - it will be corrected in the next release')
    assert_equal(1,2)
  end

  def test_omit_if
    omit_if(MYVERSION < '1.0.0', "Test skipped for version #{MYVERSION}")
    assert_equal(1,2)
  end

end

The omit command skips a test; omit_if combines the skip with a condition. In my example I test the version number (a plain string comparison, which is fine for these simple version strings) and execute the test only for versions where I expect the error to be solved.

The output of my example is:

Loaded suite test
Started
O
===============================================================================
The following assertion fails - it will be corrected in the next release [test_omit(Test_omit)]
test.rb:10:in `test_omit'
===============================================================================
O
===============================================================================
Test skipped for version 0.9.0 [test_omit_if(Test_omit)]
test.rb:15:in `test_omit_if'
===============================================================================


Finished in 0.0 seconds.

2 tests, 0 assertions, 0 failures, 0 errors, 0 pendings, 2 omissions, 0 notifications
0% passed

So my answer: yes, implement the test. But don't confuse a tester with errors you already know will occur.

knut
January 31, 2014 19:17 PM
