How to Break out of the Never-Tester Mindset and Build the Engineering Practices You Deserve
Learn to break free of the angst and FUD around testing-adjacent investments and practices with a simple mindset shift
I work day to day with overworked engineering leaders and disgruntled tech leads. A lot of our conversations centre on what to do about the company’s engineering culture, where to take it, and how to get everyone’s attention and creativity to play along.
I can talk for hours on this topic, and sometimes we do with my best clients, preferably after a successful product launch celebration. But it means little without a mindset shift from the leader: stop holding your teams’ hands and instead support them in holding themselves accountable.
The Business Is Testing. Everything. All the time.
test
noun [ C ]
a way of discovering, by questions or practical activities, what someone knows, or what someone or something can do
an act of using something to find out if it is working correctly or how effective it is
a situation that shows how good something is
Recall these phrases from your board members, CEO and CMO:
How long will it take…
How are we doing on…
What’s the latest on…
How did we do last quarter…
From the perspective of the business, there is nothing other than testing. Not testing in the engineer’s sense of whether something is up to spec, but in a more primal, basic sense: is it good for the business?
The outcomes of those tests dictate the next decisions, the next week, the next quarter, the next budget. That has been a staple of good business practice for centuries.
Continuous Delivery is nothing other than bringing these practices into software. The complaint that your team doesn’t want to write tests stems from the notion that they have no agency over which features to add, remove or keep, and when to deploy them. Tests imply the capacity to make and automate decisions, not adherence to a spec.
1. Express the desired outcome, given choices A and B. Forget the spec. Focus on “good for business”
Our industry has this perverse notion about unit and integration testing, especially in TDD communities, that tests are a way of defining “the situation”, and that once the spec is known or codified, it can be used as a stamp on your PR to showcase exemplary quality.
Nothing could be further from the truth. The sole purpose of writing tests, in the broader engineering sense, is to make the software written by your organization test itself. The goal is self-testing code, under the assumption that the underlying behavior the software enables is good for the business. This simplifies the decision-making process (a code sketch follows the steps below):
Do you understand the “good for business” outcome? → Write it down for others
Translate the outcome to software behavior → Test for the presence of the behavior (or data transformation)
Write software until the presence is self-evident → Deploy it to learn new things
A: The team learns they were wrong about the business outcome or the test → Iterate or remove the feature
B: The team learns they were right, and learned new things along the way → Measure, then plan the next improvement
Generally, the team will be on the A-side of things. It takes decades of experience to consistently land on successful features.
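To make “test for the presence of the behavior” concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the Cart, the complete_purchase function, the guest-checkout scenario); the point is that the assertion states a business-visible outcome, not an implementation detail.

```python
# Hypothetical behavior test: "guests can complete checkout" is the business
# outcome we want to be true. The test only checks that the behavior is
# present, not how it is implemented.

from dataclasses import dataclass, field


@dataclass
class Cart:
    items: list = field(default_factory=list)

    def add(self, sku: str, price_cents: int) -> None:
        self.items.append((sku, price_cents))

    def total_cents(self) -> int:
        return sum(price for _, price in self.items)


def complete_purchase(cart: Cart, guest: bool = False) -> dict:
    """Stand-in for the real checkout flow; returns an order receipt."""
    return {"status": "paid", "guest": guest, "total_cents": cart.total_cents()}


def test_guest_can_complete_checkout():
    # Business outcome: a visitor without an account can still pay us.
    cart = Cart()
    cart.add("sku-123", 4999)

    receipt = complete_purchase(cart, guest=True)

    assert receipt["status"] == "paid"
    assert receipt["total_cents"] == 4999
```

If the team later learns that guest checkout was the wrong bet (the A-side above), the test gets deleted together with the behavior: it is a record of a decision, not a stamp of spec adherence.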
2. Most of your features will be failed experiments. Test for an easy out, don’t go All In.
Software teams with endless backlogs are on a feature death march. The backlog serves as a test for the JIRA Overlords:
TODO? We need it.
Done. We have it.
Why a death march? Because there is no notion of experimentation. When features are only ever planned to be built, your team neglects the option of eventually removing or changing them. Most of the business cost of home-grown software is in maintenance and in changing the behavior of legacy functionality.
When done with care and consideration for all future outcomes, feature flags enable a well-mannered, mature and calm working environment. For every “business test” (i.e. every wild idea your CMO has), write clear acceptance criteria in technical language, but also cases for failed or neutral outcomes, for example (sketched in code after this list):
Feature will be accepted when …
We want this change to improve conversion on checkout by X%
If we cannot prove a positive change within 2 weeks, disable feature flag
Add feature removal / iteration to backlog
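As a sketch of how those criteria can live right next to the flag itself, here is a hypothetical in-house FeatureFlag class in Python; the names, percentages and dates are made up for illustration, not a prescription for any particular flagging tool.

```python
# Hypothetical feature flag with its acceptance criteria attached, so the
# "easy out" is decided by data, not by a meeting.

from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class FeatureFlag:
    name: str
    enabled: bool
    started_on: date
    review_after_days: int = 14       # "within 2 weeks" from the criteria
    target_lift_pct: float = 2.0      # "improve conversion on checkout by X%"

    def review(self, baseline_conversion: float, variant_conversion: float,
               today: date) -> str:
        """Accept, keep waiting, or disable based on the agreed criteria."""
        if today < self.started_on + timedelta(days=self.review_after_days):
            return "keep running"

        lift_pct = (variant_conversion - baseline_conversion) / baseline_conversion * 100
        if lift_pct >= self.target_lift_pct:
            return "accepted: remove the flag, keep the feature"

        self.enabled = False
        return "disabled: add feature removal / iteration to the backlog"


# Usage sketch with made-up numbers:
flag = FeatureFlag("one_click_checkout", enabled=True, started_on=date(2024, 3, 1))
print(flag.review(baseline_conversion=0.031, variant_conversion=0.030,
                  today=date(2024, 3, 18)))
# -> "disabled: add feature removal / iteration to the backlog"
```

The design choice worth copying is not the class itself but the coupling: the same artifact that turns the feature on also records when and how it will be judged, so disabling it is routine rather than a defeat.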
3. This Is How You Get Your Team to Write More Tests THIS WEEK
I’ve done this exercise successfully with dozens of teams. You give them a goal, simple rules that they can follow, rules they WILL follow, rules that will paint them in a positive light.
Especially when the team has been sporadically working on 10 things at the same time within the same week, they need clear expectations about what the primary focus is at any given moment.
Not individually, but for everyone.
This is how you ensure collaboration, instead of feature factory busyness: