Author’s note: it has come to my attention that the above illustration by Jones and Capers might be controversial. I am in the process of redacting it and replacing it with a newer source. For now, I appreciate your feedback, though there is no need to be alarmed. A 2010 study by NIST and the book How Google Tests Software back up these numbers in magnitude.
“I’ll do it later”
“I’ll do it later” is one of the most common antitheses to improving engineering culture. Testing becomes exponentially more expensive the longer you wait, because of how much rework and re-understanding needs to be done after the fact if you don’t start with a failing test before you implement the code.
Let’s explore all parts of that statement.
The Cost - Problem 1
Let’s say it costs you $1000 to write a test today. This implies that you understand the problem and the solution, and can answer questions like “How will the user know this works?” and “How do we automate that check?”
Once you’re past this point, going back into the system to write a test after implementation will immediately get you in trouble. Is the code testable? Do you need to change it in order to test it? Are you in a hurry? Did you spend all your allotted and estimated time writing the implementation? This is what increases the cost of writing tests significantly.
Tell your CFO that you can invest $1000 now to save $10000 of debugging later. You’d likely be scolded for not having applied that approach everywhere already.
These are common pitfalls that trip even senior developers into thinking they’re doing the business a favor by skipping tests, only to discover that after it “works”, writing a test will take them more time than the feature did.
Testing only becomes harder over time, never easier. You cannot “inspect” quality into a system. No amount of analysis will outperform having written tests at the very beginning.
So they skip it.
At their own peril.
The next stage of cost increase is something going wrong and you have to bugfix and debug.
The next stage beyond that is someone else having to do so.
6 months pass and you reach the next stage: someone has accidentally coupled to your untested code and introduced additional behavior.
And the ultimate stage: the untested codebase is so old that the language, framework and dependent modules have become unsupported, and you don’t know whether you can safely upgrade it, add features to it, or even leave it alone, since you can’t be sure it works in the first place.
It is very expensive to build a house first and then decide you want a bathroom on the second floor after you have painted everything.
Testability - Problem 2
A tricky caveat for testing-after is that in order to test the code once it is written, you need to retain testability. The best forcing function for testability is modularity, driven by a test. It’s very hard to write testable code without having a test. This should be obvious, but something about the idea is unintuitive to most devs, even seasoned ones.
Here’s the problem: to make the unit testable, you have to transform it. Not refactor it, but actually change how it works and what it does, reshaping it into a form that can be tested.
Which means that by writing a test after, you are not really testing the first iteration; you are essentially doing test-before on the second one. This, too, is not obvious to most tech teams.
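A minimal sketch of that transformation in Python (the function and scenario are hypothetical, not from the article): the first version hides a dependency on the system clock, so no test can pin its behavior down. The testable version takes the date as a parameter, which is a change in shape and behavior of the unit’s interface, not a mere refactor.

```python
from datetime import datetime

# Hard to test: the behavior depends on the real system clock.
def weekend_price_untestable(price: float) -> float:
    if datetime.now().weekday() >= 5:  # Saturday or Sunday
        return price * 0.5
    return price

# Testable: the same logic, reshaped so the caller (and the test)
# controls the date instead of the system clock.
def weekend_price(price: float, today: datetime) -> float:
    if today.weekday() >= 5:
        return price * 0.5
    return price

# A test can now exercise both branches deterministically.
saturday = datetime(2024, 1, 6)
monday = datetime(2024, 1, 8)
assert weekend_price(100.0, saturday) == 50.0
assert weekend_price(100.0, monday) == 100.0
```

Had the test come first, it would have demanded the injectable shape from the start; writing it after forces this second iteration.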
Failing Test - Problem 3
‘When there is a gas leak in your house, you don’t want the answer to your obvious question to be “somewhere”’. This is an excerpt from my conversation with Jason Gorman on the Technologist Podcast.
For a test to be of high quality, it needs to tell you why something breaks, not just that it does. A test written after the implementation will naturally pass and be green from the start. To make it high quality, you will have to revert some of your code to find the scenarios where it fails and gives you a strong signal.
This again, comes with additional unnecessary costs compared to test-first.
TDD may as well be a money-printer
For how easy it is to write and modify TDD-written code, it should be an industry standard. The hard part of building software, however, is understanding the business context for the behavior being requested.
Being in that mindset all the time can be taxing, especially if your environment scoffs at inquisitive knowledge gathering in favor of mindlessly churning out code in a feature factory.
That said, I have not seen any other practice that makes significant progress on checking all the problem boxes above, and then some!
"Tell your CFO that you can invest $1000 now to save $10000 of debugging later. You’d likely be scolded for not having applied that approach everywhere already."
A great way to put it! We need, as an industry, to get better at explaining it in these terms.