What Gets Measured Gets Managed. What Gets Mismeasured Gets Mismanaged.


Rory Sutherland puts it well in his book Alchemy: “What gets measured gets managed. What gets mismeasured gets mismanaged.” For years, companies had a built-in excuse for mismeasurement—data was hard to get, systems were fragmented, modeling took time. In short, rigor was expensive. As a consequence, we lived with proxies, shortcuts, and “good enough” numbers even when we knew they were imperfect. Fortunately, AI is ushering that compromise toward its inevitable, and welcome, extinction.

AI reduces the cost of rigor and exposes weak metrics quickly. With AI assistance, there is increasingly little excuse to mismeasure. It is now possible for organizations of all sizes to build metrics faster and better than ever before—more variables, stronger math, larger data sets, and (most importantly) continuous refinement. The bar has been raised. Ironically, that’s creating a new problem.

The metrics are getting better, but they’re being trusted less. When new, more sophisticated metrics show up, the reaction is predictable. Someone says, “I don’t really understand how that was calculated.” What they usually mean is that they don’t trust it. AI has made this more common. The math is stronger, but the intuition gap is wider. That reaction is understandable. It’s also inconsistent with how we operate everywhere else.

You don’t need to understand electricity to hire an electrician. You don’t ask them to explain the physics before they wire your house. You judge the work by whether the lights turn on and the place doesn’t burn down. You don’t second-guess your doctor by demanding a walkthrough of biochemistry. You trust the diagnosis because it’s grounded in training, data, and experience—and because it works better than guessing. For some reason, data is treated differently.

In organizations, we often flip that logic. We trust intuition because it’s familiar. We distrust metrics because they’re uncomfortable. Metrics, however, are not opinions. They’re diagnoses. They’re your data doctor telling you where the system is healthy, where it isn’t, and where intervention matters. You don’t need to understand every equation behind the diagnosis. You need confidence that the inputs are right and the general methodology is sound.

Early in my career, I worked as a professional options trader and was a member of a U.S. listed options exchange, holding Series 7 and Series 56 licenses. At various points, I was responsible for options positions with notional values regularly exceeding $300 million. Every day, we relied on pricing models like Black-Scholes. I never derived the equation—almost no one does. The math is advanced stochastic calculus, and its creators won a Nobel Prize in Economics. PhD-level mathematics wasn’t the job. The job was understanding the inputs and validating the output against reality. When the model disagreed with the market, you revisited the inputs and assumptions. The model wasn’t wrong; the inputs were.
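To make the point concrete, here is a minimal sketch of the Black-Scholes formula for a European call (function names and example numbers are illustrative, not from the trading desk described above). Notice that the code itself is short and settled; everything that matters is in the five inputs you feed it.

```python
# Minimal Black-Scholes sketch: the math is fixed; the judgment lives in the inputs.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Same option, two volatility assumptions: the output moves because
# an input moved, not because the model changed.
print(bs_call(100, 100, 0.5, 0.02, 0.20))
print(bs_call(100, 100, 0.5, 0.02, 0.30))
```

When the printed price disagrees with where the market is trading, the productive question is not “is the formula right?” but “is my volatility input right?”—which is exactly the discipline the rest of this piece argues for with business metrics.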

The gap is no longer a data problem. It’s a management problem. Leaders don’t need simpler metrics—they need the discipline to use the complicated ones.