
Precision without accuracy: the model trap

A sell-side analyst publishes a DCF valuation model that spits out a target price of $127.43 per share. The model has seven pages of assumptions: revenue projected to the decimal, margins forecast to the tenth of a percent, a calculated tax rate of 18.7%, a terminal growth rate of 2.3%. The analyst is asked on a call whether the stock at $125 is a buy. The answer, delivered with the confidence of someone reading a precisely calculated number, is yes: it is 1.94% undervalued.

This is the precision trap. A model that calculates to the second decimal place creates the illusion of certainty. But the forecast was built using assumptions with confidence bands of plus-or-minus 200 basis points. The precision is a mirage.

The distinction between precision and accuracy is not semantic. Precision is how finely you can calculate. Accuracy is whether you are calculating the right thing. It is entirely possible—and common—to have a model that is highly precise and entirely inaccurate.

Quick definition

Precision is the degree of detail or fineness in a calculation or estimate. A margin forecast of "22.5%" is more precise than "low-20s." Accuracy is whether that estimate reflects reality. A margin forecast of 22.5% is accurate only if the actual outcome is near 22.5%; if the true margin is 18%, precision is worthless.

Analysts often conflate the two, building detailed financial models with many decimal places and treating the output precision as evidence of model accuracy. It is not.

Key takeaways

  • Precision creates false confidence: When a model outputs a specific number like $127.43, investors interpret that as more reliable than $125–130, even though the precision adds no informational value.
  • Accuracy lives in assumptions, not formulas: A model is only as accurate as its input assumptions. Even a perfectly built model using bad assumptions will produce bad output.
  • The appearance of control: Detailed models give both builders and users the sensation of control—the feeling that the future is knowable because it has been calculated. This is illusory.
  • Compound precision is fatal: When multiple estimates, each with ±20% confidence bands, are multiplied together, the resulting range explodes. A model with ten such estimates has a combined range spanning a factor of nearly sixty (0.8^10 ≈ 0.11 to 1.2^10 ≈ 6.2).
  • Benchmarking masks inaccuracy: When most analysts build similarly precise (but inaccurate) models, the published target prices cluster, creating the false impression that they are accurate because they agree.

The machinery of false precision

Financial models are built with formulas. A revenue forecast for year 3 is calculated by taking year 2 revenue, assuming a growth rate, and computing the result. The formula is deterministic: it produces a specific number. But the assumptions underlying that growth rate are not deterministic—they are educated guesses.

Yet something in human psychology causes us to treat the output of a formula differently from the inputs. If you tell an investor "revenue growth will be in the range 8% to 14%," they are prepared for uncertainty. If you tell them "based on my model, revenue in year 3 will be $4.8 billion," they hear a specific prediction.
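
A minimal sketch of this asymmetry, assuming a hypothetical year-2 revenue of $4.3 billion: the same deterministic formula, fed once with a point estimate and once with the honest range.

    # The formula is deterministic; only the assumption differs.
    year2_revenue = 4.3e9              # hypothetical year-2 revenue, $
    growth_point = 0.12                # single growth assumption

    year3_point = year2_revenue * (1 + growth_point)
    print(f"point forecast: ${year3_point / 1e9:.1f}B")        # $4.8B

    growth_low, growth_high = 0.08, 0.14    # the honest range
    year3_low = year2_revenue * (1 + growth_low)
    year3_high = year2_revenue * (1 + growth_high)
    print(f"range forecast: ${year3_low / 1e9:.1f}B to ${year3_high / 1e9:.1f}B")
    # range forecast: $4.6B to $4.9B -- same model, same work, honest output

The point forecast and the range forecast rest on identical machinery; only the presentation of uncertainty changes.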

This mistake is amplified in institutional settings. A portfolio manager receives a two-page summary of an 80-page DCF model. The summary shows the implied share price and a sensitivity table. The manager does not see the assumptions; they see the calculated range of outcomes. Over time, the manager begins to treat the calculated range as the forecast uncertainty, when the real uncertainty is much larger.

Professional models do include sensitivity analysis—varying the discount rate by ±100 basis points, changing the terminal growth rate by ±50 basis points. But sensitivity analysis is only as good as the dimensions you vary. If you do not sensitize to a 30% competitive share loss (which might reduce margin by 300 basis points), the model is not reflecting the real downside.

Furthermore, the presence of a sensitivity table creates a false sense of comprehensiveness. An analyst has tested the sensitivity to discount rate, terminal growth, and margin. They have not tested the sensitivity to margin reversion timelines, to the probability of a new competitor entering the market, or to the likelihood that management's guidance is systematically optimistic. The model is precise on the inputs it was tested for, but inaccurate on the ones that matter.
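
To make the limitation concrete, here is a toy sensitivity grid over the two dimensions analysts usually vary, using a growing-perpetuity value with hypothetical inputs (free cash flow of $500 million; the rates are illustrative). The grid is tidy; what matters is what it cannot show.

    # Terminal value of a growing perpetuity: V = FCF * (1 + g) / (r - g).
    FCF = 500e6   # hypothetical free cash flow, $

    def gordon_value(r, g):
        return FCF * (1 + g) / (r - g)

    print("           g=1.8%   g=2.3%   g=2.8%")
    for r in (0.08, 0.09, 0.10):
        row = "   ".join(
            f"${gordon_value(r, g) / 1e9:5.1f}B" for g in (0.018, 0.023, 0.028)
        )
        print(f"r={r:.0%}     {row}")

    # The grid never prices the scenario that matters: a competitive shock
    # that cuts FCF itself, which moves every cell at once.
    print(f"base cell with FCF -20%: ${gordon_value(0.09, 0.023) * 0.8 / 1e9:.1f}B")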

The compounding problem

When a financial model strings together ten separate assumptions—revenue growth, margin expansion, capex-to-revenue ratio, tax rate, working capital change, terminal growth rate, cost of equity, risk-free rate, equity risk premium, beta—each assumption has a confidence range. Even if each individual assumption has only a 20% confidence range (meaning the true value is within ±20% of the forecast), the combined uncertainty explodes.

Consider a simple example: a five-year cash flow projection depends on revenue growth and margin. If revenue growth is forecast at 12% ±2% (so the true growth is between 10% and 14%), and margin is forecast at 18% ±3% (true margin between 15% and 21%), then year-5 free cash flow, as a share of today's revenue, ranges from roughly 0.24 to 0.40 against a base case of 0.32. That is a band of about -24% to +27% around the point estimate, half again as wide as the roughly ±17% band on either input, and it widens with every additional assumption.
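
A short computation makes the arithmetic checkable. The figures are the ones above, with today's revenue indexed to 1.0; the code is a sketch, not a valuation model.

    # Year-5 free cash flow band from two ranged assumptions,
    # with today's revenue indexed to 1.0.
    def year5_fcf(growth, margin):
        return (1 + growth) ** 5 * margin

    low  = year5_fcf(0.10, 0.15)   # both assumptions at the bottom
    base = year5_fcf(0.12, 0.18)   # the point estimates
    high = year5_fcf(0.14, 0.21)   # both assumptions at the top

    print(f"low {low:.3f}  base {base:.3f}  high {high:.3f}")
    # low 0.242  base 0.317  high 0.404
    print(f"band: -{1 - low / base:.0%} / +{high / base - 1:.0%} around base")
    # band: -24% / +27% around base, versus about +/-17% on each input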

Professional models with fifteen or twenty assumptions have uncertainty bands so wide that the precision of the output becomes meaningless. Yet the output is still presented as $127.43, not "somewhere between $90 and $160 with a $125 midpoint." Precision is reported; accuracy is not.

The mathematical reality: if your model combines N independent, additive inputs, each with standard deviation σ, the output standard deviation scales approximately as √N · σ. With fifteen assumptions, the output uncertainty is nearly four times as large as any individual input's (√15 ≈ 3.9). With thirty assumptions, it is about five and a half times as large (√30 ≈ 5.5).
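
The scaling is easy to check by simulation. The sketch below sums N independent normal inputs with unit standard deviation, under the same additivity and independence assumptions as above, and measures the spread of the total.

    import random
    import statistics

    def output_std(n_inputs, sigma=1.0, trials=50_000):
        """Std dev of a sum of n independent inputs, each with std sigma."""
        totals = [
            sum(random.gauss(0, sigma) for _ in range(n_inputs))
            for _ in range(trials)
        ]
        return statistics.pstdev(totals)

    for n in (1, 15, 30):
        print(f"{n:2d} inputs: simulated {output_std(n):.2f}, "
              f"sqrt(N) rule {n ** 0.5:.2f}")
    # 15 inputs -> ~3.9x one input's sigma; 30 inputs -> ~5.5x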

The accuracy-precision matrix

Not all models are equally misaligned. Some analysts build simple models with few assumptions; these are imprecise (output might be $120–130) but can be accurate if the assumptions are sound. Others build elaborate models with many inputs; these are highly precise but often inaccurate because the compounding problem is ignored.

The ideal—but rare—analyst builds a model with moderate precision and high accuracy. They forecast revenue growth as "high single digits, probably 8–11%," understanding that this range reflects true uncertainty. They do not pin down the margin forecast to 18.7%; they say "mid-high teens, 17–20%, depending on the cycle." The output is a range, $115–140 with $125 as the midpoint. This model is neither highly precise nor delusionally confident.

The worst case is high precision with low accuracy: the consultant's DCF model that projects revenue to the dollar and margin to the tenth of a percent, built on assumptions that were never validated against reality, presented to a client with the confidence of someone reading an equation.
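
A minimal sketch of the moderate-precision, high-accuracy shape described above, with entirely invented inputs (revenue of $10 billion, 370 million shares, and an exit-multiple shorthand standing in for a full DCF): ranges go in, a range comes out.

    # Range-first mini-model: value = year-5 earnings * exit multiple / shares.
    # Every number here is hypothetical; the shape of the output is the point.
    def fair_value(growth, margin, multiple, revenue=10e9, shares=370e6):
        earnings = revenue * (1 + growth) ** 5 * margin
        return earnings * multiple / shares

    low  = fair_value(growth=0.085, margin=0.175, multiple=15)  # cautious
    mid  = fair_value(growth=0.095, margin=0.185, multiple=16)  # central
    high = fair_value(growth=0.105, margin=0.195, multiple=17)  # favorable

    print(f"fair value: ${low:.0f} to ${high:.0f}, midpoint ${mid:.0f}")
    # fair value: $107 to $148, midpoint $126

Note that taking every input at its cautious or favorable end at once produces a wider band than any single sensitivity; that width is the honest product of the model.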

How management guidance enables false precision

Chief financial officers are incentivized to provide guidance that is specific and achievable. A CFO who says "we expect earnings per share to grow 12% plus or minus 200 basis points" is being honest about uncertainty. A CFO who guides to "EPS of $3.45" for the coming fiscal year is being precise, which is not the same thing.

Analysts then anchor to that precise guidance. When the company guides to EPS of $3.45, analysts build models that start with that number, then extend it forward using assumptions about growth, margin changes, and capital structure. The guidance becomes the foundation of the model, anchoring the entire structure to a precision it never earned.

The problem is that management guidance is often a negotiating tool, not a forecast. Management wants to guide to a number they are confident of beating (managing expectations downward) or one that looks impressive (managing expectations upward). This is not the same as the true expectation. Yet the analyst's model treats the guided number as accurate, then compounds that error into the forecast.
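
One way to question the anchor, sketched below with invented numbers: re-center the guided figure on the company's own beat-or-miss history before letting it into the model.

    # De-anchor from guidance using the historical gap between guided
    # and actual EPS. All figures are hypothetical.
    guided_eps = 3.45
    surprises = [0.04, 0.06, 0.03, 0.05]  # (actual - guided) / guided, last 4 yrs

    avg = sum(surprises) / len(surprises)
    adjusted = guided_eps * (1 + avg)
    lo = guided_eps * (1 + min(surprises))
    hi = guided_eps * (1 + max(surprises))

    print(f"guided ${guided_eps:.2f} -> adjusted ${adjusted:.2f} "
          f"(range ${lo:.2f} to ${hi:.2f})")
    # guided $3.45 -> adjusted $3.61 (range $3.55 to $3.66)
    # A consistent beater is likely sandbagging; the adjusted range,
    # not the guided point, should anchor the model.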

Real-world examples

Uber's path to profitability: From 2016 to 2020, equity analysts built detailed DCF models of Uber. Each model forecast the year that Uber would reach sustained EBITDA profitability. The models were highly precise: specific years when the company would break even, specific margin levels in the terminal period. Yet the accuracy was poor. The actual path to profitability was slower and less certain than the models implied. Analysts who published models showing 20%+ EBITDA margin at steady state were being precise about an outcome that was fundamentally unknowable.

Intel's manufacturing transition: In 2020–2021, Intel began a major transition to advanced manufacturing nodes. Sell-side models projected specific yield curves, specific capex timelines, and specific market-share recovery paths. These models were extremely detailed, with quarter-by-quarter assumptions about fab productivity. But the transition faced technical delays not captured in the precision of the models. Analysts had high confidence in claims (e.g., that Intel would reclaim process leadership by 2023) that were inaccurate because the underlying assumptions about manufacturing complexity were wrong.

Tesla's production ramps: Every time Tesla opened a new factory, analysts built models with precise production and margin forecasts. The models calculated output to the thousands of vehicles. Yet actual ramps were messier: supply-chain disruptions, learning-curve delays, and competitive pressures meant that the precise forecasts were consistently wrong in both timing and magnitude. The precision of the models did not translate into accuracy about the outcomes.

Common mistakes

Mistake 1: Presenting output precision as forecast precision. A DCF model outputs $127.43. The analyst presents this as the "fair value," implying the estimate is good to the cent. The analyst should present the output as $125–130 or $115–140, depending on reasonable assumption variations.

Mistake 2: Building models with more detail than the underlying assumptions warrant. If you are forecasting revenue growth as "mid-single digits," you do not need to project margins to the tenth of a percent. The model precision should match the assumption confidence.

Mistake 3: Ignoring correlation between assumptions. Most DCF models treat each assumption (revenue growth, margin, tax rate) as independent. But they are not. If revenue growth comes in below forecast, it is often because competitive pressure increased, which also pressures margins. A model that does not capture this correlation will understate the downside, because bad outcomes arrive together; the simulation below shows the effect.
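
A small simulation, reusing the growth and margin figures from the compounding example and an assumed correlation of 0.8 between them, shows how independence flatters the downside tail.

    import random

    def year5_fcf(growth, margin):
        return (1 + growth) ** 5 * margin

    def downside(rho, trials=50_000):
        """5th-percentile year-5 FCF; rho is the growth-margin correlation."""
        outcomes = []
        for _ in range(trials):
            z_g = random.gauss(0, 1)
            z_m = rho * z_g + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
            outcomes.append(year5_fcf(0.12 + 0.02 * z_g, 0.18 + 0.03 * z_m))
        outcomes.sort()
        return outcomes[int(0.05 * trials)]

    print(f"5th percentile, independent (rho=0.0): {downside(0.0):.3f}")
    print(f"5th percentile, correlated  (rho=0.8): {downside(0.8):.3f}")
    # Bad growth and bad margin arrive together under correlation,
    # so the independent model understates the true downside.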

Mistake 4: Using historical precision as a guide to forward precision. That a metric came in at 18.4% in the past does not mean next year's level is knowable to one decimal place. Time-series volatility and structural changes mean forward precision is lower than historical precision.

Mistake 5: Anchoring to management guidance without adjustment. Management guides to EPS of $3.45. Analysts use this as the baseline. But if management is sandbagging (guiding low to beat), the actual path is higher; if they are being optimistic, the actual path is lower. The precision of guidance should not be inherited by the model; instead, it should be questioned and potentially adjusted.

FAQ

Q: Should I stop building detailed financial models?

A: No. Financial models are useful tools for organizing assumptions and testing their implications. But build them with the precision that matches your assumption confidence. If you forecast revenue growth as 8–12%, do not then forecast margins to 18.7%. Keep your precision consistent with your underlying uncertainty.

Q: How do I communicate forecast uncertainty without appearing uncertain?

A: By presenting a range and explaining the drivers. "Based on our assumptions about competitive intensity and pricing power, we believe fair value is in the range of $115–140 per share, with a midpoint of $127." This is more honest than "$127.43" and actually communicates more information.

Q: Is it better to build a simple, imprecise model or a complex, detailed one?

A: Neither. It is better to build a model that is as simple as the problem requires, with assumptions that are grounded in evidence and ranges that reflect true uncertainty. Complexity is useful when it reflects real business drivers; it is harmful when it is decorative.

Q: How can I validate whether my assumptions are accurate?

A: By checking them against reality after the forecast period. If you forecast margin at 18% in year 3, did the company's actual margin in year 3 land near 18%? Over many forecasts, you should see your midpoint forecasts tracking actual outcomes. If they do not, your assumptions are inaccurate, and precision is irrelevant.
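
A sketch of what that check can look like, with an invented forecast history. Each record pairs a (low, mid, high) margin forecast with the realized value.

    # Calibration check over past forecasts: does the midpoint track
    # reality, and does the stated range contain it often enough?
    history = [
        ((0.15, 0.18, 0.21), 0.17),   # (low, mid, high), actual
        ((0.10, 0.12, 0.14), 0.15),
        ((0.20, 0.23, 0.26), 0.19),
        ((0.08, 0.10, 0.12), 0.11),
    ]

    bias = sum(actual - mid for (lo, mid, hi), actual in history) / len(history)
    hits = sum(lo <= actual <= hi for (lo, mid, hi), actual in history)

    print(f"average midpoint error: {bias:+.4f}")
    print(f"range hit rate: {hits} of {len(history)}")
    # A persistent bias means the assumptions are wrong; a low hit rate
    # means the stated ranges are narrower than the real uncertainty.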

Q: If I reduce my model's precision, won't that disadvantage me versus other analysts?

A: Possibly in the short term. But accuracy compounds. An analyst who produces honest ranges and is right 60% of the time will outperform an analyst who produces precise numbers and is right 40% of the time. Institutional investors eventually notice.

Q: What role should sensitivity analysis play in combating this problem?

A: Sensitivity analysis is essential, but only if it tests the right dimensions. Vary the inputs that are most uncertain (not just the ones that are easiest to vary). Include tail scenarios that are low-probability but high-impact. And present the sensitivity results as part of the final output, not as an afterthought.

Related concepts

  • Anchoring bias: The tendency to rely too heavily on an initial piece of information (like management guidance) and treat it as more precise than it actually is.
  • Overconfidence bias: The tendency to overestimate the accuracy of one's beliefs and forecasts, exacerbated by the precision of the tools used to generate them.
  • Epistemic humility: The recognition that some futures are unknowable, and that precision is a mathematical property, not a predictive property.
  • Scenario analysis: An alternative to point estimates that explicitly models multiple plausible futures and their probabilities.
  • Uncertainty quantification: The discipline of characterizing and communicating the range of possible outcomes, not just the midpoint.

Summary

Precision and accuracy are not the same. A financial model that calculates a share price of $127.43 is highly precise but not necessarily accurate. The precision creates the illusion that the future is knowable and calculable, when the underlying assumptions are educated guesses with wide confidence bands.

The antidote is building models where the output precision matches the input assumption confidence. Forecast revenue growth as a range, not a point estimate. Express margin forecasts as ranges. Present fair value as a range, with the range reflecting the true compounded uncertainty of the inputs.

Investors and clients who see the honest range ($115–140) are actually receiving better information than those who see the false precision ($127.43). Ranges are harder to interpret—they do not give you a single action—but they are more useful for actual decision-making. An analyst who is comfortable presenting uncertainty is also an analyst who has thought clearly about what they actually know.

Next

Read the next article: Mistaking correlation for causation.