Then there will be no failure. —Lao Tzu, Tao Te Ching, translated by Gia-Fu Feng and Jane English
[In computer simulation games] The good participants differed from the bad ones . . . in how often they tested their hypotheses. The bad participants failed to do this. For them, to propose a hypothesis was to understand reality; testing that hypothesis was unnecessary. Instead of generating hypotheses, they generated "truths." —Dietrich Dörner, The Logic of Failure: Why Things Go Wrong and What We Can Do to Make Them Right
We often dream of that one world-changing idea: the product with our name on it, immortalized. But that fantasy plays out like a romantic comedy: the couple meets, kisses once, and the screen fades to gold. Real life keeps rolling after the credits. What happens after the great idea dawns is what truly matters. The actions you take, or avoid, once the system is built ultimately determine your success.
Here’s the uncomfortable truth: we’re wired to conserve effort. Our own plans always make perfect sense to us, so we see no need to test them. We love thinking about how other people can benefit us, but when asked to think about how we can benefit others, our focus narrows. Our enthusiasm quickly flags, if it doesn’t die altogether. I’ve seen this play out in business settings: instead of putting the customer first, we shrink from the ongoing work of helping the customer.
I recently spoke with a developer who was launching an app designed to simplify product purchases, a promising idea for boosting sales. I asked, “Who’s going to test this every day, indefinitely?” He didn’t understand the question. I explained: networks, operating systems, and integrations will constantly change. What works today will break tomorrow. If you’re testing regularly, you’ll catch it. If not, frustrated users will quietly delete the app and move on.
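What does “testing every day, indefinitely” look like in practice? It can start as small as a scheduled smoke test. Here is a minimal sketch in Python, using only the standard library; the endpoints are hypothetical, and it assumes each one simply answers HTTP 200 when healthy:

```python
# Minimal daily smoke-test sketch. The URLs below are placeholders, and the
# "fetch" parameter exists so the checks can be exercised against a fake
# server in tests as well as the real network.
from urllib.request import urlopen

CHECKS = [
    ("home page", "https://example.com/"),
    ("purchase API", "https://example.com/api/health"),
]

def check(url, fetch=urlopen):
    """Return True if the endpoint responds with HTTP 200."""
    try:
        with fetch(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:  # urllib network errors subclass OSError
        return False

def run_checks(checks, fetch=urlopen):
    """Run every check and return the names of the ones that failed."""
    return [name for name, url in checks if not check(url, fetch)]
```

Run on a schedule (cron, CI, or a cloud task) and alert on any non-empty result, this is the difference between you discovering the breakage and your users discovering it.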
Stated that way, it’s hard to argue against constant testing with real-life users, but there’s a big problem: No one wants to do this. No one wants to pay for this. We want to launch a website, an app, or an AI chatbot—then walk away and expect it to serve customers flawlessly while we nap.
The companies that overcome this natural lethargy will attract more customers than those who “let it ride.” Will anyone at the failing company notice? Probably not. By the time those lost customers are counted and show up on someone’s metrics dashboard, nothing will bring them back. Here’s how that happens.
I once opened a tax-advantaged college savings account for my son. I assumed my regular bank, Company 1, could set me up, and they did. A week later, I logged into Company 1’s site and couldn’t find the account. A representative explained that it was actually managed by a partner, Company 2, and that I’d need to use their platform.
“But I bought it from you. Don’t you want to be my go-to bank?”
“That would be great,” he said, “but it’s not set up that way.”
“Is anyone working on this?”
“Not that I know of.”
The story doesn’t end there. Forced to log in and do business directly with Company 2 each week, I noticed something: Company 2 did everything better than Company 1. Their interest rate on savings was much higher, and they answered the phone faster. Today, all my paper assets are managed at Company 2 and none at Company 1. Company 1 lost me, not because of one missing feature, but because they stopped at the sale instead of thinking about the experience.
Right now, I’m looking for work, and I submit applications via online forms. Many of them fail to save or submit properly. Either no one is testing them, or their testing is insufficient. And there’s no technical support—just silence.
What kind of candidate gives up and moves on? The high-value one. The candidate with multiple offers. The one who could have made a transformational difference. They disappear quietly and permanently. Metrics don’t always tell this part of the story. Once we account for the quality of the opportunities lost, that annoying, expensive testing becomes imperative.
Some business goals—such as hiring only the best people—will not happen without it.
Security engineering expert Ross Anderson offers a further distinction, a valuable insight into collaborative testing and design thinking:
Requirements engineering, like software testing, is susceptible to a useful degree of parallelization. So if your target system is something novel, then instead of paying a single consultant to think about it for twenty days, consider getting fifteen people with diverse backgrounds to think about it for a day each. Have brainstorming sessions with a variety of staff, from your client company, its suppliers, and industry bodies.
But beware—people will naturally think of the problems that might make their own lives difficult, and will care less about things that inconvenience others. We learned this the hard way at our university where a number of administrative systems were overseen by project boards made up largely of staff from administrative departments. This was simply because the professors were too busy doing research and teaching to bother. We ended up with systems that are convenient for a small number of administrators, and inconvenient for a much larger number of professors. — Ross Anderson, Security Engineering, Second Edition
Businesses that take testing seriously don’t just avoid failures; they unlock growth others never see. If you're looking to build that kind of resilience, I’d love to help.