Optimise for Change when Pressured for Features
Businesses that see small outcomes for big budgets are often stuck in a negative feedback cycle: when most of their investment is perceived as ineffective, owners respond with frugality and pressure.
Pressure for features
Here is a cost breakdown from a 2008 case study of HP's LaserJet firmware division, presented by Jez Humble.
Costs
10% on code integration
20% on detailed planning
25% porting code
25% support
15% manual testing
5% innovation capacity
This breakdown screams "quality problems". Point your attention to the bottom of the list: innovation capacity. You guessed it, that's the new features.
At the end of the quarter, if stakeholders pour $1 million into the project, only $50k goes towards new features. Everything else is waste or avoidable rework.
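To make the maths concrete, here is a minimal sketch that turns the percentages above into dollar figures for a $1M quarterly budget (the percentages come from the breakdown; the variable names and formatting are mine):

```python
# A minimal sketch: turn the cost breakdown above into dollar figures
# for a $1M quarterly budget. Names and formatting are illustrative.
BUDGET = 1_000_000

breakdown = {
    "code integration": 0.10,
    "detailed planning": 0.20,
    "porting code": 0.25,
    "support": 0.25,
    "manual testing": 0.15,
    "innovation capacity": 0.05,
}

for activity, share in breakdown.items():
    print(f"{activity:>20}: ${BUDGET * share:,.0f}")

# innovation capacity: $50,000 -- everything above it is overhead or rework
```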
This also creates a vicious cycle: such teams never show evidence of their potential, only the 5%. Senior management might well believe that the entire budget was spent on a handful of features that really only required $50k (plus all the rework).
To cut the story short (sorry, spoilers): they managed to 8x their innovation capacity, up to 40%. The remaining 60% they split roughly into two areas: agile/lean planning together with continuous delivery practices, and about 23% for maintaining their test automation suite.
Cost-optimise for change
Change is the only factor you need to get right as an organisation. Extreme Programming and continuous delivery are evidently the best approaches to this on the "process market".
The cost of change is the primary business driver underlying your engineering team's operational expenses.
Tests leverage everyone's time and money best when placed in the areas of your codebase and architecture that are:
most likely to change by your hand (new features)
most likely to break due to external integrators (outside team or service)
We will divide these into internal and external forces. Notice how both dimensions have a historic and a predictive element.
Internal
Historic: modules that changed often recently (see the churn sketch after this list)
Predictive: features on roadmap or backlog that received priority recently
External
Historic: outsourced development efforts that were chaotic during recent releases
Predictive: upcoming breaking changes in a 3rd-party API
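One way to surface the internal-historic signal is to measure churn straight from version control. The sketch below is my own illustration, not a tool from the original story: it counts how many commits touched each file over a recent window using plain `git log`, which is usually enough to spot the modules most likely to change again.

```python
# Sketch: rank files by recent churn to find the "internal, historic" hotspots.
# Assumes it runs from the root of a git repository; the 90-day window is arbitrary.
import subprocess
from collections import Counter

def churn_by_file(since: str = "90 days ago") -> Counter:
    """Count how many commits touched each file since the given date."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    files = [line.strip() for line in log.splitlines() if line.strip()]
    return Counter(files)

if __name__ == "__main__":
    for path, changes in churn_by_file().most_common(10):
        print(f"{changes:>4} changes  {path}")
```

Cross-reference the top of that list with your roadmap (the predictive side) and you have a short list of places where tests pay for themselves fastest.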
Levers of focus
Optimise the ROI on your testing investment by matching your testing efforts to these change forces. Here are a few examples, the first one being the typical legacy setup; a rough scoring sketch follows the two lists.
Increase and add testing
Untested legacy software that requires changes soon
Untested, profitable critical user journeys that happen to work for now
Valuable services that have E2E coverage, but no unit tests
Software systems or data streams that will benefit from refactoring to meet near-term goals
Decrease and remove testing
End-to-end tests that break often and have outlived their purpose
Flaky tests that are being ignored by developers
Stable software that doesn’t meet product expectations
Buggy features that no one uses
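To turn these levers into something a team can argue about, a rough scoring heuristic helps. The function below is purely illustrative; the signal names, weights, and thresholds are my assumptions rather than anything from the HP study. It combines change likelihood, business value, and current coverage into an "add tests" versus "prune tests" recommendation.

```python
# Illustrative heuristic: decide where to add or remove tests.
# All weights and thresholds are assumptions made for the example.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    change_likelihood: float  # 0..1, from churn history and roadmap priority
    business_value: float     # 0..1, e.g. share of revenue the user journey carries
    coverage: float           # 0..1, existing automated test coverage
    flaky: bool = False       # tests here break often for non-product reasons

def recommend(area: Area) -> str:
    risk = area.change_likelihood * area.business_value
    if risk > 0.3 and area.coverage < 0.5:
        return "increase/add testing"
    if area.flaky or (risk < 0.1 and area.coverage > 0.5):
        return "decrease/remove testing"
    return "leave as is"

areas = [
    Area("legacy billing module", change_likelihood=0.9, business_value=0.8, coverage=0.1),
    Area("checkout user journey", change_likelihood=0.4, business_value=0.9, coverage=0.2),
    Area("deprecated report export", change_likelihood=0.05, business_value=0.1, coverage=0.7, flaky=True),
]
for a in areas:
    print(f"{a.name}: {recommend(a)}")
```

The exact numbers matter far less than the conversation they force: agree as a team on which areas are most exposed to change and point the testing budget there.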