Use Test-driven Development to Retain Modularity
TDD is not about testing, but about exploration
Have you ever solved a Rubik’s cube?
It’s a fun game, until you realise you can get stuck in a loop trying to solve a corner or a single-color side. You will “eventually” solve it, but chances are you’ll give up first.
I want to highlight this attitude of
“Eventually we’ll get it right, but we may give up if it takes too long.”
You want to solve a side, so you speculate about how to get there. You try a few things and realise you were wrong. But going back is also confusing and uncharted: you can make mistakes undoing your steps as well.
I love Rubik’s cubes.
Software design is very similar to solving a Rubik’s cube. You want to “solve a side”, but taking too large a step can lead to speculative abstraction. You know the kind: “oh, we might need this”, but then you never do and it’s just in the way.
Undoing is also difficult because of inter-coupled little dependencies that prevent you from making a clean undo. You didn’t really set any checkpoints.
What does this have to do with TDD?
There is an algorithm for solving a Rubik’s cube. It is surprisingly simple and the steps gradually increase in complexity or difficulty. Here’s a video.
But it’s no longer fun if it’s a math puzzle instead of a game. Unless you’re competing professionally.
TDD gets the same reputation. It has steps of increasing complexity for getting to a minimal working state, and only then to design. It requires a somewhat formulaic understanding of those steps, and a willingness to defer the emotional payoff of design.
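Those steps are the classic red-green-refactor loop. Here is a minimal sketch of one turn of that loop; the price formatter and every name in it are hypothetical, invented purely for illustration.

```python
# Step 1 (red): write the smallest test that fails for the right reason.
def test_formats_whole_euros():
    assert format_price(300) == "EUR 3.00"  # 300 cents should read as "EUR 3.00"

# Step 2 (green): the minimal code that makes that one test pass.
def format_price(cents: int) -> str:
    return f"EUR {cents / 100:.2f}"

# Step 3 (refactor): only now improve the design, with the test as a
# safety net. Nothing to clean up yet, so we stop and pick a harder test.

test_formats_whole_euros()
```

Each turn of the loop is deliberately small: one failing test, one minimal change, one cleanup, and you never move on without a green bar.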
Developers don’t get stuck debugging. They get stuck when they run out of confidence in their design and the next step requires a big leap of faith.
Non-TDD developers will often design first and then write code, only to realise they need to adjust something. They’ll think on it, redesign, and change the code. They may still write tests before coding, but the code loses modularity and testability with each iteration.
I’ve observed this process, and even the best engineer, one with 30+ years of competitive programming experience, generally gets stuck after 5-6 repetitions.
What does being stuck look like?
Staring at the screen trying to debug in their head
Tests de-synced from the working code, and incompatible with “where I want to go”
Reverse-engineering a refactor that is just slightly too large to hold in their head
TDD practitioners start with exploring the solution space
Asking questions like:
How can I write a test that captures what a user does?
How far away — how many layers — is the public UI or API from the unit I’m working on?
What failure message should the test give me to encourage me to write code for where I want to go?
How can I describe the behavior of the unit without leaking implementation bias?
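One way to phrase those questions in code: the test below talks only about what a user observes, never about internal data structures, and its assertion message points at where the code should go next. The shopping-cart API and all names are assumptions of mine, not something from this post.

```python
class Cart:
    """A toy cart, standing in for whatever unit you are exploring."""

    def __init__(self):
        self._items = []  # implementation detail the test never mentions

    def add(self, name: str, price_cents: int) -> None:
        self._items.append(price_cents)

    def total_cents(self) -> int:
        return sum(self._items)


def test_cart_totals_what_the_user_added():
    cart = Cart()
    cart.add("coffee", 350)
    cart.add("croissant", 280)
    # The failure message is written for future-me, describing the behavior,
    # not the data structure behind it.
    assert cart.total_cents() == 630, "cart should sum the prices of added items"


test_cart_totals_what_the_user_added()
```

If `Cart` later swaps its list for a dict or a database row, this test keeps passing, which is exactly the point of describing behavior instead of implementation.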
For beginners in my coaching workshops, I recommend writing tests in increasing levels of complexity, so that the last test reaches the happy path of the final goal.
It’s okay to throw away some of the trivial steps after refactoring starts.
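A sketch of such a ladder, using a FizzBuzz-style labeller as a stand-in exercise (the example and its names are mine, not from the post). Each test is one small rung; the last one is the happy path of the final goal, and the trivial early rungs are candidates for deletion once refactoring starts.

```python
def label(n: int) -> str:
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)


def test_step_1_plain_number():        # trivial rung, may be thrown away later
    assert label(1) == "1"

def test_step_2_multiples_of_three():
    assert label(3) == "fizz"

def test_step_3_multiples_of_five():
    assert label(5) == "buzz"

def test_step_4_happy_path():          # the final goal
    assert label(15) == "fizzbuzz"


for test in (test_step_1_plain_number, test_step_2_multiples_of_three,
             test_step_3_multiples_of_five, test_step_4_happy_path):
    test()
```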
But why does this work? Where is the magic?
I wondered this for a long time. Where does the TDD productivity boost come from? Why isn’t there one sometimes? What’s the difference?
I spent the past few years interviewing various practitioners, including guests on my Technologist Podcast. Here are a few highlights:
Developers stop exploring not when they’ve exhausted all options, but when they’ve exhausted their confidence in the next step working.
Building something in a large step without seeing it compile or work requires a leap of faith. This is emotionally very taxing. This is what makes us feel tired when debugging something annoying.
TDD retains clear incremental progress by flattening out the confidence curve, making sure you don’t run out, or at least that you drain your energy very slowly.
There’s more typing — sometimes. But you get more done because you are taxed less mentally.
Software that is easy to change is also easy to test.
Software that is easy to test is easy to change.
Confidence is not an input of a great developer. Confidence is the output of good coding practices.
"We tend to find that things that make code easier to change tend to also make it more reliable." [TP#9]
"Code that gets used tends to get changed. [So...] If you are intending for your code to be used, you should be planning for it to be easy and safe to change." [TP#9]
When does it not work?
As you’ve learned from this post, TDD is about exploring a solution space. It follows that TDD is not very useful when the solution space is already fixed, for algorithmic or academic reasons.
These constraints range from non-technical requirements and subjective elements to the inability to repeatedly retry and change the source code (think single-use execution hardware).
This is often why overly simplistic TDD and Domain-driven Design examples, the usual Todo and Booking applications, fail to convince anyone: no real business concern is being explored.
I would love to hear from you.
What is your experience?
How often does your team stare at their screen seeking confidence?