Building models with too many precise assumptions
An analyst sits down to build a five-year DCF model for a software company. She assumes:
- Revenue growth rates of 18%, 22%, 26%, 20%, and 12% in years 1–5
- Gross margins that expand from 68% to 74%
- Operating margin that improves from 12% to 22%
- Tax rate of 18%
- CapEx at 3% of revenue
- Working capital at 2% of incremental revenue
- Terminal growth of 3%
- WACC of 8.2%
The analyst plugs these 13 assumptions into Excel, runs the model, and arrives at an intrinsic value of $157.43 per share. The model produces a specific number, which feels authoritative. The analyst publishes the forecast and a price target.
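To make the setup concrete, here is a minimal sketch of such a single-point DCF in Python. The starting revenue and share count are invented placeholders, the gross-margin path is folded into the operating-margin assumption, and the cash-flow build is deliberately simplified, so it will not reproduce $157.43; the point is only that a long chain of point estimates collapses into one exact-looking number.

```python
# Minimal single-point DCF sketch. Inputs follow the example above;
# REV0 and SHARES are hypothetical placeholders.
GROWTH = [0.18, 0.22, 0.26, 0.20, 0.12]       # revenue growth, years 1-5
OP_MARGIN = [0.12, 0.145, 0.17, 0.195, 0.22]  # linear path from 12% to 22%
TAX, CAPEX_PCT, WC_PCT = 0.18, 0.03, 0.02
WACC, TERMINAL_G = 0.082, 0.03
REV0, SHARES = 500e6, 50e6                    # hypothetical starting point

def dcf_per_share():
    revenue, pv, fcf = REV0, 0.0, 0.0
    for year, (g, m) in enumerate(zip(GROWTH, OP_MARGIN), start=1):
        prior = revenue
        revenue *= 1 + g
        nopat = revenue * m * (1 - TAX)
        # FCF = after-tax operating profit less CapEx and incremental WC
        fcf = nopat - CAPEX_PCT * revenue - WC_PCT * (revenue - prior)
        pv += fcf / (1 + WACC) ** year
    # Gordon-growth terminal value on the final year's FCF
    terminal = fcf * (1 + TERMINAL_G) / (WACC - TERMINAL_G)
    pv += terminal / (1 + WACC) ** len(GROWTH)
    return pv / SHARES

print(f"intrinsic value: ${dcf_per_share():.2f} per share")
```

Run it and you get one precise-looking figure, with no trace of the uncertainty buried in each input.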
What the analyst has done is construct a model whose output precision ($157.43) does not reflect the precision of its inputs. Each assumption is individually uncertain by 2–5 percentage points. Compounded across 13 assumptions over five years, the uncertainty is enormous: plausibly a ±40–60% band around the point estimate. Yet the model presents a false impression of precision.
Quick definition
Building models with too many precise assumptions occurs when an analyst constructs a valuation model incorporating numerous point estimates (growth rate of exactly 20%, margin of exactly 68%, tax rate of exactly 18%) without acknowledging or stress-testing the compounding uncertainty in those assumptions. The result is a model that appears precise but is actually highly uncertain.
Key takeaways
- Each assumption in a financial model carries uncertainty; when you compound 10–20 assumptions across five years, the error bounds multiply rather than add (each ±5% factor scales the next), so output uncertainty can exceed 50–100% even if every input is only ±5% uncertain.
- Analysts use detailed models to create an illusion of precision, which biases stakeholders toward false confidence in a specific point estimate. This feeds anchoring bias: the published price target becomes the reference point even though it carries no more information than the wide range it sits within.
- The most dangerous models are the ones that appear most precise. A model with 30 line items and 50 assumptions feels more rigorous than a simple three-statement model, but it is often less honest about uncertainty.
- Sensitivity analysis (showing how the output changes if one assumption changes) is better than a point estimate, but it is not sufficient; assumptions are correlated, so a change in growth rate is typically accompanied by a change in margin, and varying inputs one at a time understates the joint uncertainty.
- Scenario analysis (building best-case, base-case, and worst-case models) is more honest than a single point estimate, but many analysts still publish the point estimate as though it is more likely than scenarios, when in fact it is an arbitrary middle ground.
Why analysts build precise models
The practice of building overly precise models is incentivized by several factors:
Perceived rigor: A detailed model with 40 line items feels more rigorous than a simple model with 5 line items. The analyst and stakeholders confuse complexity with rigor. The detailed model does not actually reflect more rigorous thinking; it just reflects more arithmetic.
Anchor creation: Sell-side analysts build detailed models partly to create an anchor (a specific price target) that can be used in client interactions. A recommendation of "fair value is somewhere between $120 and $180" is less useful in conversation than "fair value is $157." The detailed model creates the anchor, even if the precision is false.
Fit to narrative: If the analyst has a narrative ("this company will grow 20% annually and achieve 25% operating margins"), the model becomes the arithmetic expression of that narrative. Rather than stress-test the narrative, the analyst builds a model that confirms it.
Competitive dynamics: Sell-side analysts compete with each other. If a competitor publishes a specific price target ($165), your price target ($157) looks less confident. There is pressure to state assumptions and outputs with matching apparent precision in order to compete for visibility.
Spreadsheet automation: Excel makes it easy to build complex models with hundreds of cells and formulas. The analyst may not have deliberately chosen complexity; it emerges from the tool's affordances. A model that grows to 50 assumptions often does so incrementally and almost unconsciously.
The mathematics of assumption compounding
When you chain assumptions together in a model, uncertainty compounds in a non-obvious way. Consider a simple example:
Assume a company's intrinsic value depends on two independent variables:
- Revenue in year 5: $100M ±30% (range: $70M to $130M)
- Operating margin: 20% ±5 percentage points (range: 15% to 25%)
Taking the extremes of both assumptions together, operating profit in year 5 ranges from $70M × 15% = $10.5M to $130M × 25% = $32.5M. Relative to the base case ($100M × 20% = $20M), that is a range of -47.5% to +62.5%. Even though each input is only modestly uncertain (±30% and ±5 pp), the output range is far wider than either input's alone.
Now compound this across five years with 13 different assumptions (as in our earlier DCF example). Assume each assumption is independently uncertain by ±5%, and take the worst-case bound in which every error breaks the same way: the compounded output is approximately (1.05^13) = 1.89x on the upside and (0.95^13) = 0.51x on the downside relative to the point estimate. That is a range of roughly -50% to +90% around the point estimate, even though each input was only ±5% uncertain.
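These worst-case bounds are trivial to verify, under the same all-errors-one-way convention:

```python
# Worst-case compounding of 13 point estimates, each off by +/-5%.
n_assumptions, err = 13, 0.05
print(f"upside:   {(1 + err) ** n_assumptions:.2f}x")  # ~1.89x, i.e. +89%
print(f"downside: {(1 - err) ** n_assumptions:.2f}x")  # ~0.51x, i.e. -49%
```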
This is before incorporating correlation between assumptions. In reality, if revenue grows faster than expected, margins often expand more slowly (or vice versa). If the economy strengthens, growth accelerates but also increases discount rates and cost of capital. These correlations mean the actual uncertainty is often larger than the independent-assumption calculation suggests.
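Correlation effects are easiest to see by simulation. The sketch below is hypothetical (a ±5 pp normal shock on each year's growth assumption, not calibrated to any company); it compares the year-5 revenue spread when the five errors are independent versus strongly correlated, i.e., when the analyst's misjudgment persists across the forecast:

```python
import numpy as np

rng = np.random.default_rng(0)
N, YEARS = 100_000, 5
BASE_G = np.array([0.18, 0.22, 0.26, 0.20, 0.12])  # growth path from the example
SIGMA = 0.05  # hypothetical +/-5 pp standard deviation per year's growth

def year5_revenue_multiple(corr):
    # Covariance matrix: SIGMA^2 on the diagonal, corr * SIGMA^2 off-diagonal.
    cov = SIGMA**2 * (np.full((YEARS, YEARS), corr) + (1 - corr) * np.eye(YEARS))
    shocks = rng.multivariate_normal(np.zeros(YEARS), cov, size=N)
    growth = BASE_G + shocks
    return np.prod(1 + growth, axis=1)  # year-5 revenue as a multiple of today's

for corr in (0.0, 0.8):
    rev = year5_revenue_multiple(corr)
    p10, p90 = np.percentile(rev, [10, 90])
    print(f"corr={corr}: 10th-90th pct revenue multiple {p10:.2f}x-{p90:.2f}x")
```

With correlated errors the 10th–90th percentile band widens noticeably, which is the quantitative version of the point above: assumption errors that move together compound instead of cancelling.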
A framework for bounding precision
A practical way to acknowledge and bound precision has four steps, each developed in the sections that follow:
- Identify the driver assumptions. Use sensitivity analysis (a tornado-style sweep) to find the handful of inputs that move the output most, and focus precision there rather than on late-stage details.
- State explicit ranges for those drivers instead of point estimates, and flag which assumptions are correlated.
- Translate ranges into scenarios. Build bull, base, and bear cases with stated probabilities rather than treating the base case as the likely outcome.
- Publish the range, not just the point. Accompany any price target with a confidence interval and the key assumptions that would change it.
A sketch of the first step follows.
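A one-at-a-time sweep ranks assumptions by how far they swing the output. The `tornado` helper and the toy two-variable model below are hypothetical illustrations, not a standard library API:

```python
def tornado(model, base_inputs, bounds):
    """Rank assumptions by output swing when varied one at a time.

    model: callable taking a dict of inputs and returning a value.
    bounds: {name: (low, high)} absolute bounds per assumption.
    """
    base = model(base_inputs)
    swings = []
    for name, (low, high) in bounds.items():
        lo = model({**base_inputs, name: low}) - base
        hi = model({**base_inputs, name: high}) - base
        swings.append((name, lo, hi))
    # Widest total swing first: these are the drivers worth arguing about.
    return sorted(swings, key=lambda s: abs(s[2] - s[1]), reverse=True)

# Toy usage: value = revenue x margin, per the two-variable example above.
toy = lambda p: p["revenue"] * p["margin"]
print(tornado(toy,
              {"revenue": 100e6, "margin": 0.20},
              {"revenue": (70e6, 130e6), "margin": (0.15, 0.25)}))
```

The output puts revenue first here, which matches the arithmetic in the earlier example: the revenue range moves the base-case profit by ±$6M while the margin range moves it by ±$5M.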
Real-world examples
Amazon intrinsic value models (2010–2015) During Amazon's high-growth phase, sell-side analysts built detailed DCF models assuming specific revenue-growth rates (25–35% annually), margin expansion paths, and terminal values. Each assumption was plausible individually, but compounded across 10 years, the models' outputs ranged from $50 to $300 per share depending on minor assumption tweaks. Most analysts published a single price target ($150, $180, $200) without acknowledging the wide range in their own models. Amazon's actual stock price rose from $140 to $650, vindicating high-end assumptions in retrospect, but that outcome was far from certain ex-ante. The precision of the published targets was false.
Tesla valuation models (2018–2021) Tesla's valuation attracted extreme model variation among analysts. Some assumed Tesla's automotive business would scale to millions of units annually with 25%+ operating margins; others assumed more modest growth. Secondary assumptions about energy storage, Full Self-Driving profitability, and manufacturing efficiency varied wildly. A detailed Tesla model built on midpoint assumptions might value the stock at $400; shift the margin assumptions by 2 percentage points and the valuation becomes $600 or $250. Analysts publishing specific price targets ($500, $550, $600) created an illusion of precision that was not supported by the underlying assumption uncertainty.
Boeing's recovery valuation (2019–2020) After the 737 MAX crisis, sell-side analysts attempted to value Boeing under scenarios involving specific return-to-service dates, production ramp rates, and market share recovery. Models built in early 2020 assumed the MAX would return to operation in Q3 2020, production would ramp to pre-crisis rates by 2021, and market share would recover to 40% by 2022. Each assumption was individually plausible, but the model as a whole projected a very specific recovery path. When recovery proved far more complex and delayed, the models' precision appeared naive. The problem was not that the analysts got the facts wrong; it was that they built models with false precision about events that were genuinely uncertain.
WeWork's profitability model (2019) WeWork's pre-IPO valuation relied on models that assumed specific unit economics (profitability per location), near-term path to positive operating margin, and continued expansion at a specific growth rate. The models' precision suggested that a $47 billion valuation was the right price. In reality, the underlying assumptions were highly uncertain; WeWork's unit economics were worse than modeled, the path to profitability was longer, and the cost structure was more rigid than models assumed. The detailed model created false confidence in a precise valuation.
Pharmaceutical R&D models Analysts building DCF models for pharmaceutical companies must assume probability-weighted cash flows from research pipelines. A typical model might assume specific Phase III success rates (60–70%), regulatory approval timing (2–3 years post-Phase III), peak sales for successful drugs ($2B–$5B), and patent cliff dates. Compound 20+ drugs in pipeline with individual uncertainties of 30–50% each, and the total uncertainty in future cash flows is enormous (likely ±60–80%). Yet analysts often publish valuation ranges of ±20% based on these models. The precision is false.
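The probability-weighting itself is simple arithmetic; the fragility comes from stacking dozens of uncertain inputs. A toy sketch with invented figures:

```python
# Hypothetical probability-weighted pipeline value (all figures invented).
pipeline = [   # (Phase III success probability, PV if approved, $B)
    (0.65, 4.0),
    (0.60, 2.5),
    (0.70, 3.2),
]
expected = sum(p * pv for p, pv in pipeline)
print(f"probability-weighted pipeline PV: ${expected:.1f}B")
```

Each probability and PV in such a model carries its own wide error band, so the exact-looking sum inherits all of them.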
Common mistakes arising from false precision
Mistake 1: Publishing a point estimate without a confidence interval A price target of $157.43 appears precise. Publishing it with an 80% confidence interval ($130–$185) is more honest. Yet analysts almost never publish confidence intervals; they publish points. Stakeholders anchor to the point, even though the analyst knows it is uncertain.
Mistake 2: Using sensitivity analysis as a substitute for honest scenario analysis A sensitivity table showing "if growth drops 2%, value is $145; if growth rises 2%, value is $169" is useful, but it masks the correlations and joint distribution of outcomes. A scenario table showing "in 20% of cases, value is $120–$140; in 60% of cases, $140–$170; in 20% of cases, $170–$200" is more honest.
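One way to produce that more honest table is to bin a simulated valuation distribution into probability buckets. In this minimal sketch the lognormal spread is a stand-in for a real model's Monte Carlo output, not a fitted distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a simulated valuation distribution (illustrative only).
values = 155 * rng.lognormal(mean=0.0, sigma=0.18, size=100_000)

# Split into 20% / 60% / 20% probability buckets by percentile.
edges = np.percentile(values, [0, 20, 80, 100])
for lo, hi, p in zip(edges[:-1], edges[1:], (20, 60, 20)):
    print(f"~{p}% of cases: ${lo:.0f}-${hi:.0f}")
```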
Mistake 3: Assuming each assumption is independent when they are correlated If growth is faster than expected, margins often expand more slowly. If the discount rate is lower than expected, growth is often lower too (low rates and low growth tend to arrive together). Analysts often assume independence, which understates total uncertainty.
Mistake 4: Micro-optimizing late-stage assumptions when early assumptions are uncertain An analyst might debate whether the tax rate in year 5 will be 18% or 18.5%, while being fundamentally uncertain about whether revenue growth will be 15% or 25%. This is backwards. Focus precision where uncertainty matters most (near-term growth, market share), not on late-stage details that are far off.
Mistake 5: Treating the base-case model as more likely than scenarios, when it is actually an arbitrary midpoint A model often assumes a "base case" scenario with specific assumptions and derives a value. The analyst then builds upside and downside cases by tweaking a few assumptions. But the base case is not actually more likely; it is just the starting point. Many actual outcomes will fall in the upside or downside ranges, not the base case.
FAQ
Q: How many assumptions are too many in a model? A: There is no fixed number, but as a rule of thumb, if you have more than 8–10 key driver assumptions that compound across time, your output uncertainty is likely ±40%+ even if each input is ±5% uncertain. At that point, acknowledge it. A model with 20 detailed assumptions is not automatically better than a model with 8.
Q: Should I stop building detailed models? A: No. Detailed models are valuable for understanding the business and testing narratives. But distinguish between models for understanding (internal tools) and models for forecasting (external outputs). Your internal model may have 50 assumptions; your published forecast should acknowledge that this leads to ±50% output uncertainty.
Q: What is the right way to publish a valuation when the model has high uncertainty? A: Use scenarios. Publish a bull case ($200), a base case ($150), and a bear case ($100) with stated probabilities (30% bull, 50% base, 20% bear). This produces an expected value of $155 but communicates that $155 is not highly likely; it is a probability-weighted outcome. This is more honest than a point estimate.
Q: If I use more conservative assumptions, does that reduce false precision? A: Not directly. A model with conservative assumptions (low growth, tight margins) is still false precision if it does not acknowledge the uncertainty in those conservative assumptions. False precision is about failing to acknowledge the range of outcomes, not about the level of the point estimate.
Q: Can sensitivity analysis fix the false precision problem? A: Partially. A sensitivity table showing how value changes across different growth and margin assumptions is helpful. But it does not address correlations between assumptions or the fact that you do not know which scenario will occur. Scenario analysis is better.
Q: Should I ever publish a specific price target, or always use ranges? A: Ranges are more honest, but specific price targets are useful as anchors for portfolio managers. A compromise: publish a price target but always accompany it with a stated confidence interval and key assumptions that, if they change, would alter the target significantly.
Related concepts
- Sensitivity analysis and tornado diagrams: How to identify which assumptions drive most of the value uncertainty and where to focus precision.
- Scenario analysis and probability weighting: How to build bull/base/bear cases and assign probabilities based on historical base rates rather than analyst optimism.
- Monte Carlo simulation in valuation: Using simulation to model correlated assumptions and generate output distributions rather than point estimates.
- Anchoring and price targets: The behavioral finance concept of how published price targets create anchors that bias investor perception.
- Calibration in analyst forecasts: Whether analysts' confidence levels match historical accuracy; false-precision models typically show poor calibration.
Summary
Precision in financial models is comforting but often false. When an analyst builds a model with 10+ assumptions, each uncertain by ±5%, the worst-case output bounds are roughly -40% to +60% even if the math is correct. Compound this across multiple time periods or add correlation between assumptions, and uncertainty often exceeds ±60%.
Yet analysts publish specific price targets ($157.43) that create the impression of precision that the underlying model does not support. Stakeholders anchor to these price targets and treat them as forecasts of the likely outcome, when in fact they are arbitrary point estimates within a range.
For fundamental investors, the lesson is: be skeptical of highly specific price targets, especially those based on multi-year DCF models. Instead, demand scenario analysis that acknowledges key assumptions and their uncertainty. Ask the analyst what would need to change for her view to be wrong by 30–50%. If the answer is "only a few things," the model probably embodies false precision. If the answer is "many things, each of which has a 20–40% probability," the model is appropriately uncertain.
The best analysts build detailed models internally to understand the business, but externally communicate in scenarios and ranges. This honest approach to uncertainty is less satisfying than a specific price target, but it is more accurate and more useful for actual investment decisions.
Next
Read on to Picking the wrong discount rate, which examines a specific and common application of false precision: assuming a precise WACC or cost of equity when the true rate carries substantial uncertainty.