What Gets Measured Gets Managed. What Gets Mismeasured Gets Mismanaged.
Rory Sutherland puts it well in Alchemy: what gets measured gets managed, but what gets mismeasured gets mismanaged. For years, companies had a built-in excuse for mismeasurement. Data was hard to get. Systems were fragmented. Modeling took time. Rigor was expensive. So we lived with proxies, shortcuts, and “good enough” numbers, even when we knew they were imperfect. AI is driving that excuse toward extinction.
With AI assistance, organizations of all sizes can build better metrics faster than ever before. More variables. Stronger math. Larger data sets. Continuous refinement. From a measurement standpoint, the bar has been raised.
Ironically, that’s creating a new problem. The metrics are getting better, but they’re being trusted less.
When new, more sophisticated metrics show up, the reaction is predictable. Someone says, “I don’t really understand how that was calculated.” What they usually mean is that they don’t trust it. AI has made this more common. The math is stronger, but the intuition gap is wider.
That reaction is understandable. It’s also inconsistent with how we operate everywhere else.
You don’t need to understand electricity to hire an electrician. You don’t ask them to explain the physics before they wire your house. You judge the work by whether the lights turn on and the place doesn’t burn down. You don’t second-guess your doctor by demanding a walkthrough of biochemistry. You trust the diagnosis because it’s grounded in training, data, and experience—and because it works better than guessing.
For some reason, data gets treated differently.
Early in my career, I worked as a professional options trader and was a member of a U.S. listed options exchange, holding Series 7 and Series 56 licenses. At various points, I was responsible for options positions with notional values regularly exceeding $300 million. Every day, we relied on pricing models like Black-Scholes. I never derived the equation. Almost no one does. The math is advanced stochastic calculus—the kind of work that earned its creators a Nobel Prize in Economics.
That wasn’t the job. The job was understanding the inputs and validating the output against reality. When the model disagreed with the market, you revisited the inputs and assumptions. That’s professional judgment.
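For the curious, here is a minimal sketch of that workflow in Python: the Black-Scholes price of a European call, computed from observable inputs (spot, strike, rate, volatility, time). The input values below are hypothetical; the point is that the practitioner’s job is choosing sensible inputs and sanity-checking the output against the market, not re-deriving the formula.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, rate: float,
                       vol: float, years: float) -> float:
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# Hypothetical inputs: $100 stock, $105 strike, 5% rate, 20% vol, 6 months.
# Prints roughly 4.58. If the market quotes something very different,
# you revisit the inputs and assumptions, not the calculus.
print(round(black_scholes_call(100.0, 105.0, 0.05, 0.20, 0.5), 2))
```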
In organizations, we often flip that logic. We trust intuition because it’s familiar. We distrust metrics because they’re uncomfortable. But metrics aren’t opinions. They’re diagnoses. They’re your data doctor telling you where the system is healthy, where it isn’t, and where intervention matters. You don’t need to understand every equation behind the diagnosis. You need confidence that the inputs are right, the methodology is sound, and the recommendation improves outcomes.
This is where AI actually helps—and where leadership becomes the constraint. AI strengthens the math and accelerates validation. It reduces the cost of rigor and exposes weak metrics quickly. With AI assistance, there is increasingly little excuse to mismeasure.
But better measurement doesn’t automatically create trust. The more accurate a metric becomes, the less intuitive it often feels. That gap isn’t a data problem. It’s a management problem.
Leaders don’t need simpler metrics. They need better ones—and the discipline to use them. You don’t need to understand the math any more than you need to understand electrical engineering or medicine. The right question is whether the metric outperforms gut feel. If it does, the job isn’t to debate it. It’s to manage to it.
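To make “does it outperform gut feel?” concrete, here is a minimal sketch with entirely hypothetical numbers: score both the metric’s forecasts and the gut-feel forecasts against realized outcomes on held-out history, then compare the average miss.

```python
# Hypothetical holdout data: realized outcomes vs. two sets of forecasts.
actuals         = [112, 98, 105, 120, 101, 95]   # what actually happened
metric_forecast = [110, 101, 104, 117, 103, 97]  # the new metric's calls
gut_forecast    = [120, 90, 100, 110, 110, 100]  # the experienced guess

def mean_abs_error(forecast, actual):
    """Average absolute miss; lower means more accurate forecasts."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

print("metric MAE:", mean_abs_error(metric_forecast, actuals))  # ~2.17
print("gut MAE:   ", mean_abs_error(gut_forecast, actuals))     # 7.5
# If the metric's error is consistently lower, the job is to manage to it.
```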