Recently a colleague told me about the problems they run into in their project (a web service). Among them, one that struck me as very interesting was the diminishing return of a certain kind of test.
Apparently, the setup and maintenance costs of writing unit-tests for some thin-layered “glue” code were starting to outweigh the benefits. They had to add complicated mock-setup code in order to satisfy the behavior of mocked dependencies, and they didn’t feel they were getting much out of this.
Instead, they are shifting to testing the application as a whole in a “black box” way, using end-to-end tests. Since this is a web service, that means exercising HTTP requests, analyzing responses, and inspecting the resulting database state.
They are testing the system as a whole, and from the outside.
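To make that concrete, here is a minimal sketch of such a black-box test. The endpoint (`/users`), the payload, and the schema are all hypothetical, and the in-process server below is just a stand-in so the sketch is self-contained; in a real setup the service would be deployed separately and the test would only drive it over HTTP and inspect the database.

```python
import json
import sqlite3
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# --- Hypothetical service under test (stand-in so this sketch runs) ---
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE users (name TEXT)")

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        db.execute("INSERT INTO users (name) VALUES (?)", (body["name"],))
        db.commit()
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# --- The end-to-end test: drive the system from the outside ---
req = urllib.request.Request(
    f"{base}/users",
    data=json.dumps({"name": "alice"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    assert resp.status == 201  # check the HTTP response

# ...then check the resulting database state.
rows = db.execute("SELECT name FROM users").fetchall()
assert rows == [("alice",)]
server.shutdown()
```

Note that the test knows nothing about the service’s internal layers; it only sees what any external client would see, plus the database.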
From now on that would be their focus, and they would write no new unit-tests / integration-tests.
This got me thinking: could it be that unit-tests are overrated? That it pays more to test the system as a whole, from the outside?
This of course reminded me of the debate / conversation between Robert C. Martin and James Coplien from some years ago. I had found it while investigating different points of view after DHH’s controversial post and stance on how TDD was “dead”.
Could it be that some tests are just “not worth it”? Any experienced developer will say that some things are not worth testing (like getters / setters). But … could it be that entire kinds of tests don’t deliver on their promise? Like unit-tests that heavily rely on mocking, and break at the slightest refactoring or design change?
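The brittleness is easy to reproduce. Below is a hypothetical example (the `UserService` and its repository are invented for illustration) of a mock-heavy unit-test that asserts on *how* the code talks to its dependency rather than on the outcome:

```python
from unittest.mock import Mock

# Hypothetical thin "glue" layer over a repository.
class UserService:
    def __init__(self, repo):
        self.repo = repo

    def rename(self, user_id, new_name):
        user = self.repo.get(user_id)
        user["name"] = new_name
        self.repo.save(user)

# A mock-heavy unit-test: it pins down the exact call sequence.
repo = Mock()
repo.get.return_value = {"id": 1, "name": "old"}
service = UserService(repo)
service.rename(1, "new")

repo.get.assert_called_once_with(1)
repo.save.assert_called_once_with({"id": 1, "name": "new"})
# If rename() is refactored to call, say, repo.update(user_id, new_name)
# instead -- same observable behavior -- both assertions fail,
# even though nothing is actually broken.
```

The test passes today, but it encodes the current implementation, not the contract, which is exactly why such tests shatter at the slightest refactoring.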
An interesting question.