
Monday, January 28, 2013

Top 5 Pragmatic TDD mistakes

Today, on Telerik's blog, Bradley Braithwaite published his post "Top 5 TDD Mistakes". I like to comment on "top N mistakes"-kind of posts, so read Bradley's post first and then let me add my three cents about this one, too.

I'm pretty satisfied to read someone who is passionate and pragmatic about Test-Driven Development at the same time. Let me just quote two important sentences from Bradley's article that I totally agree with:

  1. It's better to have no unit tests than to have unit tests done badly.
  2. Unit Tests make me more productive and my code more reliable.

Let me comment on each of the mistakes mentioned in the article.

Not using a Mocking Framework vs. Too Much Test Setup

One may say that these two are a bit contradictory, as using a mocking framework may lead to too much test setup. This is, of course, a matter of good balance we need to find and maintain.

If we're testing code that has some dependencies and not using a mocking framework, we're not doing unit testing in its pure form, for sure. But let's take my favourite example of testing code that talks to the database - is there really anything better than testing against the real database, run in-memory? Call it an integration test, a half-arsed test or whatever; that's the only reliable method of testing the queries themselves.
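Just to illustrate what I mean with a sketch of my own (not taken from Bradley's post), assuming NUnit and the System.Data.SQLite provider with an in-memory database; the Users table and the query are made up for the example:

[Test]
public void Should_count_only_active_users()
{
    // assumes: using NUnit.Framework; using System.Data.SQLite;
    using (var connection = new SQLiteConnection("Data Source=:memory:"))
    {
        connection.Open();

        // arrange - real schema and data (hypothetical Users table)
        new SQLiteCommand("CREATE TABLE Users (Name TEXT, IsActive INTEGER)", connection).ExecuteNonQuery();
        new SQLiteCommand("INSERT INTO Users VALUES ('Alice', 1), ('Bob', 0)", connection).ExecuteNonQuery();

        // act - the query under test runs against a real (though in-memory) database engine
        var activeUsers = new SQLiteCommand("SELECT COUNT(*) FROM Users WHERE IsActive = 1", connection).ExecuteScalar();

        // assert
        Assert.AreEqual(1, Convert.ToInt32(activeUsers));
    }
}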

But in most cases, sure, let's use stubs, but avoid mocks - the difference is very important. Mocks, as a behaviour testing tool, are often overused to check whether the implementation is exactly as defined in the test, written by the same person at the same time - it's like checking whether the design we've just intentionally created was really intentional. Let's focus on verifying the actual outcome of the method (in which stubs may help a lot) instead of coupling to the implementation details.
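A sketch of what I mean (IPriceProvider and OrderCalculator are hypothetical names; Rhino Mocks and NUnit assumed): the dependency is stubbed only to feed data in, and the assertion targets the computed result, not the calls made along the way.

[Test]
public void Should_apply_discount_to_the_total_price()
{
    // arrange - a stub only feeds data in; no expectations about how it gets called
    var priceProvider = MockRepository.GenerateStub<IPriceProvider>();   // hypothetical interface
    priceProvider.Stub(x => x.GetPrice("book")).Return(100m);

    var sut = new OrderCalculator(priceProvider);                        // hypothetical class under test

    // act
    var total = sut.CalculateTotal("book", quantity: 2, discount: 0.1m);

    // assert - verify the actual outcome, not the implementation details
    Assert.AreEqual(180m, total);
}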

And, agreed, there's nothing worse to maintain than a test suite with those spaghetti-flavoured mock setups.

Asserting Too Many Elements

Agreed. There's a good rule of having a single logical assert per test, even if it physically means several Assert calls. And for me, having any asserts at all is a rule of equal importance.
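For example (my own sketch, with a hypothetical NameParser), a single logical assertion - "the name was parsed correctly" - may be expressed as two physical Assert calls:

[Test]
public void Should_parse_first_and_last_name()
{
    // act - NameParser is a hypothetical class used only for illustration
    var name = NameParser.Parse("John Smith");

    // assert - one logical assertion, two physical Assert calls
    Assert.AreEqual("John", name.FirstName);
    Assert.AreEqual("Smith", name.LastName);
}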

Writing Tests Retrospectively

Agreed again; most of TDD's gain comes from having to think about the implementation before it comes into existence. It makes the developer think not only about happy paths, but also about negative cases and boundary values. It also strongly supports the KISS and YAGNI rules, which are vital for long-living codebases.

Also, I like using TDD to cope with reported bugs. Writing a failing unit (or - again - probably more integration-like) test by recreating the failure's conditions makes analysis easier, helps isolate the root cause of the failure and is often much easier than reproducing the bug in a real-life scenario. A test written retrospectively for a bug serves only as a regression test.
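Such a bug-driven test could look like the sketch below (my own illustration again, reusing the hypothetical NameParser; the bug itself is made up):

[Test]
public void Should_keep_apostrophes_in_last_names() // recreates a (made-up) reported bug
{
    // act - the conditions from the bug report, distilled into a unit test
    var name = NameParser.Parse("Patrick O'Brian");

    // assert - fails until the root cause is fixed, then guards against regression
    Assert.AreEqual("O'Brian", name.LastName);
}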

Testing Too Much Code

Not a universal truth, but I feel quite the same. I see a lot of value in unit-testing my workhorse parts of the code, and I do create them more or less according to the TDD principles (especially having no production code without a failing test).

But I don't think having 100% code coverage as an ultimate goal makes any sense. I think there's always quite a lot of code that is just not suitable for valuable unit testing, i.e. coordinating/organizing kind of code (let's call it composition nodes, as a reference to the composition root), that just takes some dependencies, calls several methods and passes things from here to there, not adding any logic and not really interfering with the data. Tests written for that kind of code are often much more complicated than the code itself, due to heavy mock and stub utilization. Bradley's rule of thumb is what works for me, too: write a separate test for every If, And, Or, Case, For and While condition within a method. I'd consider the code fully covered when all those branch/conditional statements are covered.
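As an illustration of that rule of thumb (my own sketch, with a hypothetical ShippingCalculator): a method with a single If gets at least two tests, one per branch, and that's all the coverage I'd ask for.

public class ShippingCalculator
{
    public decimal GetShippingCost(decimal orderTotal)
    {
        // a single If - so two tests below, one per branch
        if (orderTotal >= 100m)
            return 0m;
        return 10m;
    }
}

[Test]
public void Should_ship_for_free_when_order_total_reaches_the_threshold()
{
    Assert.AreEqual(0m, new ShippingCalculator().GetShippingCost(150m));
}

[Test]
public void Should_charge_for_shipping_below_the_threshold()
{
    Assert.AreEqual(10m, new ShippingCalculator().GetShippingCost(50m));
}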

Wednesday, January 23, 2013

Does it make sense to AssertWasNotCalled?

In the recent post about the importance of assertions in unit tests I mentioned that we should not test our code to prove what it's not doing, but focus on testing what it is actually doing. Regarding that point of view, I was asked whether I find it useful to use AssertWasNotCalled or similar assertion methods provided by the mocking frameworks, as they generally exist to ensure no work was done, and an empty method passes that kind of test. This is an interesting question; let's analyze it.

I don't blindly believe in TDD and all its rules in every scenario, but let's stick to that context. As an example, let's write and test a Work method that takes a boolean parameter to decide whether the actual work should be done.
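Before the tests, here's a minimal sketch of the assumed collaborators and test fixture (the IWorker interface and the Service class are just illustrative names; Rhino Mocks and NUnit assumed), so that the workerFake used below has a home:

// assumes: using NUnit.Framework; using Rhino.Mocks;
public interface IWorker
{
    void DoWork();
}

public class Service
{
    private readonly IWorker worker;

    public Service(IWorker worker)
    {
        this.worker = worker;
    }

    public void Work(bool really)
    {
        // intentionally empty for now - to be driven out by the tests
    }
}

[TestFixture]
public class ServiceTests
{
    private IWorker workerFake;

    [SetUp]
    public void SetUp()
    {
        workerFake = MockRepository.GenerateStub<IWorker>();
    }

    // the tests discussed below live here
}

Let's start with the test in question - the one asserting that no work was done: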

[Test]
public void Should_do_no_work_when_false_given()
{
    // arrange
    var sut = new Service(workerFake);

    // act
    sut.Work(false);

    // assert
    workerFake.AssertWasNotCalled(x => x.DoWork());
}

I've discouraged testing what the code is not doing in the context of the third test-driven development rule, which says "you may not write more production code than is sufficient to pass the current failing test". According to that rule, for the test above we may only end up with an empty method as the implementation - it will be just enough for the test to pass.

But before that comes the first rule of TDD - "you may not write production code until you have written a failing unit test". We've never seen this test fail.

Let's then approach the problem differently - starting from the "positive" path, where we can actually verify an outcome, instead of verifying the lack of it.

[Test]
public void Should_do_the_work_when_true_given()
{
    // arrange
    var sut = new Service(workerFake);

    // act
    sut.Work(true);

    // assert
    workerFake.AssertWasCalled(x => x.DoWork());
}

We can easily see this test failing for the empty Work method. Let's then implement it according to TDD rule number three (no single line of code not enforced by the tests).

public void Work(bool really)
{
     worker.DoWork();
}

The test is now passing and the rules are kept. We may now add back our "negative" test case - verifying that no work is done when false is passed. This time the test will fail, so rule number one is satisfied. We may now legally (according to TDD) add the condition and finish our pretty sophisticated implementation:

public void Work(bool really)
{
     if (really)
         worker.DoWork();
}

The conclusion from this oversimplified example is that the AssertWasNotCalled test alone probably makes little sense. But when considered as a part of a larger test suite for the method, and when implemented in the correct order (outcomes verified first, lack of outcome last), it may even be the only TDD-compliant way of testing the intended behaviour.

The same rule applies to the validation method example I mentioned in the previous post. Even though I'm against unit tests that just expect no exception to be thrown, if throwing an exception when needed is the method's single responsibility, there's no other way to test the "positive" (non-throwing) behaviour than verifying that no exception was thrown.
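A sketch of what I mean (hypothetical Validator class; NUnit's Assert.Throws and Assert.DoesNotThrow assumed):

[Test]
public void Should_reject_an_empty_name()
{
    // the "negative" path has a real, observable outcome - the exception
    Assert.Throws<ArgumentException>(() => Validator.ValidateName(""));
}

[Test]
public void Should_accept_a_proper_name()
{
    // the "positive" path can only be verified by stating explicitly that nothing is thrown
    Assert.DoesNotThrow(() => Validator.ValidateName("John"));
}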

As a side note - the AssertWasNotCalled assertion can actually be rewritten (in Rhino Mocks syntax) into:

workerFake.Stub(x => x.DoWork()).Throw(new Exception());

And when the production code actually calls the method it should not, the exception is thrown and the test fails. So the distinction between a test I consider assertionless and a valid one can be a bit blurry. But the personal opinion I'll stick with is to always have the assertions as explicit as possible, even if it is just syntactic sugar or a no-op like AssertDoesNotThrow. This allows us to express the intent of the test clearly, making it readable and maintainable. The smoke-testing approach - "if it throws, it will fail anyway" - doesn't.

Image borrowed from http://www.yesodweb.com.