Every time a crisis hits, the headlines sound familiar: the models failed. In 2008, when the financial system collapsed, Value-at-Risk models were accused of being dangerously wrong. During COVID, infection forecasts were mocked as "wildly off." In business, when sales or market projections miss badly, the explanation is often: "the model didn't work."
The implication is that modeling itself is unreliable, maybe even useless. But here's the paradox: in most of these cases, the models didn't actually fail. They did exactly what models are designed to do — simplify a complex world into a structured picture of possible futures. The failure was in interpretation. We asked models to deliver certainty, when all they could ever deliver was a structured expression of uncertainty.
This misunderstanding starts with how we interpret probabilities. A model spits out a number — "60% chance of success," "15% risk of default," etc. — and we instinctively treat it like a measurement, as solid as the weight of a package or the temperature outside. But probabilities aren't facts. They're beliefs. As statistician David Spiegelhalter reminds us, probabilities are "make-believe." Not in the sense of being false, but in the sense of being constructed: they reflect what we currently believe about the future, given the evidence we have and the assumptions we make.
But not all beliefs are created equal. A 70% probability built on decades of rich, stable data is very different from a 70% probability stitched together from one year of noisy observations and a handful of expert hunches. Both may be expressed as "70%," but the strength of belief behind them is worlds apart. That difference — between the probability of an outcome and the confidence we can reasonably place in that probability — is what I call the two layers of uncertainty.
The first layer is the one most people recognize: outcome probabilities. This is the distribution of possible futures, given assumptions and data. For example: a model produces a distribution of revenue for the next three years with most of the outcomes clustering between $9M and $13M. From that distribution, we find that there is at least a 60% chance revenue will exceed $10M. If 60% meets our risk tolerance, we might proceed with the investment.
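This first layer can be sketched with a small Monte Carlo simulation. The numbers here are hypothetical, and the revenue model (a normal distribution in $M) is a stand-in for whatever model actually produced the forecast; the point is only that "60%" is read off a distribution of simulated futures.

```python
import random

random.seed(42)

# Hypothetical Layer-1 model: revenue ~ Normal(mean=10.4, sd=1.5), in $M.
# These parameters are illustrative, not from any real dataset.
N = 100_000
revenues = [random.gauss(10.4, 1.5) for _ in range(N)]

# Probability that revenue exceeds $10M, estimated from the distribution.
p_exceed = sum(r > 10.0 for r in revenues) / N
print(f"P(revenue > $10M) is roughly {p_exceed:.0%}")
```

If that probability clears the decision-maker's risk threshold, the investment looks attractive — at least at this layer.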
The second layer is different. It's not another interval inside that distribution, but our confidence that the distribution itself is reliable. In other words, how much trust should we place in the model that generated it? If the model was built on decades of comparable data, we might place high confidence in the shape of the curve. But if we discover it was built on only 18 months of thin data in a brand-new market, our confidence in the entire distribution would be shaken. In that case, the "60% chance" is less like a solid measurement and more like a rough guess — the true probability could plausibly lie anywhere between 40% and 80%. That uncertainty about the estimate itself is the second layer. And the real question becomes: would we still invest if our belief in the 60% forecast were that fragile?
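The second layer can be made concrete the same way. In this hypothetical sketch, we ask: if the model's parameters had been estimated from only 18 observations of thin data, how much would the "60%" itself move from one plausible dataset to the next? Repeating the fit many times gives a distribution over the probability estimate — the second layer.

```python
import random
import statistics
from math import erf, sqrt

random.seed(0)

def normal_cdf(x, mu, sigma):
    # P(X <= x) for a Normal(mu, sigma) variable.
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Hypothetical "true" process -- unknowable in practice.
TRUE_MU, TRUE_SD = 10.4, 1.5

def estimated_probability(n_obs):
    # Fit the model on a small sample, then read off P(revenue > $10M).
    sample = [random.gauss(TRUE_MU, TRUE_SD) for _ in range(n_obs)]
    mu, sd = statistics.mean(sample), statistics.stdev(sample)
    return 1 - normal_cdf(10.0, mu, sd)

# Layer 2: with only 18 observations, the estimated probability
# itself scatters widely from one plausible dataset to the next.
estimates = sorted(estimated_probability(18) for _ in range(2000))
lo, hi = estimates[100], estimates[1899]  # central ~90% of estimates
print(f"The '60%' could plausibly be anywhere from {lo:.0%} to {hi:.0%}")
```

With rich data the spread of estimates collapses toward the true probability; with thin data it yawns open — which is exactly the difference between trusting a forecast and merely hoping it holds.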
Layer 1 tells you the odds; Layer 2 tells you how much you should trust the casino.
Executives don't act on numbers — they act on beliefs. Do we believe the probability of success is above our threshold? And how strong is our belief in that probability? Most decision processes only answer the first question. The second is where the real risk hides. And this is where models often get unfairly maligned. When outcomes go badly, people blame the forecast: the model was wrong. But the real issue wasn't that the model produced the wrong distribution of outcomes. It's that decision-makers treated a weak, uncertain model as if it were a strong one.
Models will never be perfect, and they don't need to be. The failure is not in modeling but in how we interpret and communicate it. Instead of asking for "the number" — the single forecast, the single probability — decision-makers should demand two things: a distribution of possible outcomes, and an honest statement of how much confidence can be placed in that distribution. That's the real upgrade. Better decisions come not from eliminating uncertainty, but from understanding both layers of it.
In the end, every decision is a bet. And the difference between a smart bet and a reckless one isn't the number — it's how well you understand both layers of uncertainty behind it.