
Monte Carlo vs. Manual Scenarios: Which Should You Use?

When building probability-weighted valuations, investors face a critical methodological choice: run thousands of random simulations through a mathematical model (Monte Carlo), or construct a handful of carefully reasoned scenarios and weight them by judgment (manual scenarios). Both approaches claim to capture uncertainty. Both generate distributions of possible outcomes. Yet they produce fundamentally different results and require fundamentally different skills.

Monte Carlo appears more scientific. Thousands of iterations, sensitivity matrices, distributions—the machinery of randomness suggests objectivity. Manual scenarios feel more arbitrary: "I think the base case happens 60% of the time"—that's human judgment, fallible and subjective. Yet subjective judgment sometimes captures reality that random mathematics misses. This chapter helps you navigate the choice.

Quick Definition

Monte Carlo simulation runs a valuation model thousands of times, each with randomly sampled input values drawn from specified probability distributions. It produces a statistical distribution of output values and confidence intervals. Manual scenarios combine a small number of discrete storylines (base case, bull case, bear case) with explicit probability weights assigned through judgment. Monte Carlo is stochastic (random); manual scenarios are narrative (reasoned).

Key Takeaways

  • Monte Carlo generates false precision when underlying distributions are unknowable; manual scenarios acknowledge uncertainty more honestly but sacrifice mathematical rigor.
  • Monte Carlo works best for modeling independent, measurable variables (commodity prices, interest rates); manual scenarios work best for modeling correlated, structural breaks (competitive disruption, regulatory change).
  • The "gold standard" for Monte Carlo—normally distributed inputs—rarely applies to equity valuations, where tail risks and regime shifts matter most.
  • Manual scenarios force you to articulate why each scenario might occur and what triggers would validate or invalidate it; Monte Carlo can hide weak assumptions behind computational complexity.
  • Hybrid approaches (scenarios with Monte Carlo sensitivity within each scenario) often outperform pure approaches; pure Monte Carlo often beats pure scenarios only when input variables are well-understood and uncorrelated.

Monte Carlo: The Machinery of Randomness

Monte Carlo simulation was born in nuclear physics and applied to finance in the 1970s. The basic process is elegant: encode your valuation model as a formula, define probability distributions for each input variable, then run the formula thousands of times with random samples from those distributions. Plot the outputs, and you get a probability distribution of intrinsic values.

For a stock with a discounted cash flow model, this might mean:

  • Revenue growth: normally distributed, 5% ± 2%
  • Operating margin: normally distributed, 8% ± 1.5%
  • Terminal growth rate: normally distributed, 2.5% ± 0.5%
  • Discount rate: normally distributed, 7.5% ± 0.75%

Run the model 10,000 times with random draws from these distributions, and you get 10,000 possible intrinsic values. Plot them, and the distribution tells you: fair value probably falls between $40 and $80, with a mean of $55, a 25th percentile of $48, and a 75th percentile of $65.
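The process above can be sketched in a few lines of Python. The starting revenue, share count, and 10-year horizon are hypothetical, and the DCF is deliberately simplified to a constant growth rate and margin per draw:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Illustrative input distributions from the text: normal(mean, std)
growth = rng.normal(0.05, 0.02, N)     # revenue growth
margin = rng.normal(0.08, 0.015, N)    # operating margin
term_g = rng.normal(0.025, 0.005, N)   # terminal growth rate
disc   = rng.normal(0.075, 0.0075, N)  # discount rate

revenue0, shares = 1_000.0, 100.0  # hypothetical starting revenue ($M) and share count
years = np.arange(1, 11)

values = np.empty(N)
for i in range(N):
    # Simplified 10-year DCF: constant growth and margin, then a Gordon terminal value
    fcf = revenue0 * (1 + growth[i]) ** years * margin[i]
    pv = (fcf / (1 + disc[i]) ** years).sum()
    tv = fcf[-1] * (1 + term_g[i]) / (disc[i] - term_g[i])
    values[i] = (pv + tv / (1 + disc[i]) ** 10) / shares

print(np.percentile(values, [25, 50, 75]).round(2))
```

Plotting `values` as a histogram gives exactly the kind of fair-value distribution described above.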

The appeal is obvious: you've quantified uncertainty statistically. You have confidence intervals. You have probabilities. It feels scientific, rigorous, and objective.

The problem is that it almost never is.

The Normal Distribution Fantasy

Monte Carlo assumes you can assign probability distributions to unknowable future events. For commodity prices or interest rates, which have historical data and observable markets, this is defensible. For equity valuations, it's fantasy.

Academic finance has long recognized this problem. Nassim Taleb's work on tail risk and "black swan" events demonstrates that equity markets exhibit fat-tail distributions—rare catastrophic events occur far more frequently than normal distributions predict. Financial engineering models that assume normality systematically underestimate the probability of crashes exceeding 3–4 standard deviations.

Ask yourself: what is the probability distribution of Apple's operating margin 10 years from now? You have no idea. You might think it'll be 28% ± 2%, but that's a guess dressed up as a distribution. The true distribution probably has fat tails—uncommon but possible scenarios where Apple either dominates (32%+ margins) or gets disrupted (18% margins). The standard deviation you chose is arbitrary. The shape of the distribution is arbitrary.

Yet once you've encoded these assumptions in your Monte Carlo model, they feel real. The output says "mean fair value $45, 90% confidence interval $35–$55." But that confidence interval is only valid if your input distributions are correct. If your margin distribution is 0.5% too narrow, your output confidence interval is dangerously narrow. If you've systematically underestimated downside scenarios, you've created a false sense of precision on the upside.

Worse: most equity inputs are not normally distributed. Operating margins, market share changes, and disruption probabilities have heavy tails and regime shifts. A company can bump along at 25% margins for years, then lose 8 percentage points in a single year if competitive disruption accelerates. Normal distributions can't capture that tail risk. Your model generates thousands of simulations that all miss the real catastrophic scenarios.
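A small simulation makes the point concrete. Here a purely normal margin model is compared against an illustrative two-regime mixture in which a 10% chance of disruption knocks roughly 8 points off margins; the disruption probability and spreads are assumptions for the sketch, not estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Normal model of next-year operating margin: 25% +/- 2%
normal_margins = rng.normal(0.25, 0.02, N)

# Regime mixture (illustrative): 90% "business as usual" at 25% +/- 2%,
# 10% "disruption" where margins drop about 8 points, to 17% +/- 3%
disrupted = rng.random(N) < 0.10
mixture_margins = np.where(
    disrupted,
    rng.normal(0.17, 0.03, N),
    rng.normal(0.25, 0.02, N),
)

# Probability of margins falling below 20% under each model
p_normal = (normal_margins < 0.20).mean()
p_mixture = (mixture_margins < 0.20).mean()
print(f"P(margin < 20%): normal {p_normal:.4f}, mixture {p_mixture:.4f}")
```

The mixture assigns roughly an order of magnitude more probability to the sub-20% tail than the fitted normal does, even though both models share the same "business as usual" behavior.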

Correlation and Dependency

Monte Carlo models often assume variables are independent. Revenue growth is independent of operating margin, independent of discount rates. This is mathematically tractable. It's also false. When growth slows, margins often compress (competitive pressure increases). When interest rates rise, growth expectations fall.

Sophisticated Monte Carlo models try to build in correlations. But now you have a new problem: which correlations? If you specify a 0.4 correlation between growth and margins, where does that come from? Historical data? That correlation was specific to a particular economic regime. If the regime changes, the correlation changes. You're still guessing.

And there's a subtler problem: correlation assumptions work fine in normal markets where variables move smoothly. But in crisis scenarios, correlations break down. Everything correlates to 1 (crashes together). Your model designed around normal-case correlations systematically underestimates how badly things can go wrong simultaneously.
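To see how much independence assumptions understate joint downside, compare correlated and independent draws of growth and margin. The 0.4 correlation and the downside thresholds below are illustrative choices, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

mean = [0.05, 0.08]               # growth, operating margin
stds = np.array([0.02, 0.015])
rho = 0.4                         # assumed growth-margin correlation

cov = np.outer(stds, stds) * np.array([[1, rho], [rho, 1]])
g_corr, m_corr = rng.multivariate_normal(mean, cov, N).T

# Independent draws for comparison
g_ind = rng.normal(mean[0], stds[0], N)
m_ind = rng.normal(mean[1], stds[1], N)

# Joint downside: growth below 3% AND margin below 6.5% in the same draw
joint_corr = ((g_corr < 0.03) & (m_corr < 0.065)).mean()
joint_ind = ((g_ind < 0.03) & (m_ind < 0.065)).mean()
print(f"joint downside: correlated {joint_corr:.4f}, independent {joint_ind:.4f}")
```

Even a modest 0.4 correlation roughly doubles the probability of the joint bad outcome, and crisis-level correlations make the gap far worse.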

The Seduction of Precision

The most dangerous feature of Monte Carlo is that it generates numbers to many decimal places. Your spreadsheet outputs "fair value: $47.32, standard error: $0.84." That precision is entirely artificial. It comes from running the model 10,000 times, not from genuinely understanding the company's value. But investors look at $47.32 and think: this analyst did serious work. The precision suggests rigor.

Compare this to a manual scenario that says: "Base case $45 (60% probability), bull case $65 (20%), bear case $30 (20%)—expected value $46." A nearly identical expected value, but the second presentation is honest about its uncertainty. The first presentation generates false confidence.

Manual Scenarios: The Power of Narrative

Manual scenario building forces you to tell stories. Each scenario isn't a random walk through possibility space; it's a coherent narrative about how the company's future might unfold. The base case might be: "Management executes moderately well, margins hold, growth moderates to 8% annually. Competitive position remains stable." The bull case: "New product launch succeeds, market expands faster than expected, margins expand as scale grows. Competitive moats strengthen." The bear case: "Key customer concentration risk materializes, or competitive disruption accelerates. Margins compress, growth stalls."

These stories force you to articulate causal chains. You're not just assigning random numbers to variables; you're explaining why certain combinations of outcomes happen together and why others don't. This narrative structure is powerful because it reflects how the world actually works. Companies don't move to lower margins in isolation; they move to lower margins because something changes: competition, product mix shift, pricing power loss. The narrative connects causes to effects.

Narrative Discipline

Building scenarios requires you to answer hard questions:

  • What would have to change for the bull case to occur?
  • What's the time horizon for that change?
  • What evidence would convince you the bull case is unfolding?
  • What would make you exit the bear case early?
  • Do your probability weights reflect evidence, or just intuition?

Monte Carlo allows you to skip these questions. You just specify distributions and let the math run. Manual scenarios force methodological rigor on you: you can't hide weak reasoning in computation.

Handling Structural Breaks

Manual scenarios naturally handle scenarios where the business model changes fundamentally. A company transitions from hardware-focused to software-focused. A market opens up that wasn't previously addressable. Regulatory change alters the competitive landscape. These aren't small parameter shifts; they're regime changes where the historical relationships between variables no longer hold.

Monte Carlo models based on historical data miss these regime changes entirely. You're sampling from the distribution of a past that won't repeat. Manual scenarios can directly address this: "If Product X launches successfully, the revenue model shifts from annual subscription to per-use pricing, and margins expand by 4 percentage points because the revenue base no longer requires customer support capacity."

The Probability Weights

The biggest criticism of manual scenarios is that probability weights are arbitrary. You assign 60% to the base case, 20% to bull, 20% to bear. Why not 50/30/20? Why not 55/35/10? The criticism has merit. But consider the alternative: Monte Carlo also assigns probabilities (your distributions have implicit probabilities), and those probabilities are equally arbitrary—they're just hidden inside statistical assumptions you made without thinking carefully about them.

At least with manual scenarios, you're forced to justify your weights. You have to ask: "Given what I know about this company and this market, how confident am I in each scenario?" You might revise your weights as new information arrives. Conversely, Monte Carlo probability weights are baked into the model and rarely revisited. A 10% standard deviation assumed at the start of your analysis often persists unchanged even if your understanding of the company deepens.

When Monte Carlo Wins

Monte Carlo excels in four specific contexts, all of which share a common feature: the inputs can be grounded in observable markets rather than judgment.

Derivatives and options: If you're valuing an employee stock option or a warrant (both have optionality built into their structure), Monte Carlo can simulate thousands of possible stock price paths and calculate the option value under each path. This is legitimate and valuable because the option's payoff function is well-defined.

Independent measurable variables: If you're modeling a company that operates in a commodity business and your main uncertainty is commodity price fluctuation, Monte Carlo can work. Crude oil prices, copper prices, lumber prices—these have observable markets, historical distributions, and futures markets that reveal forward expectations. You can build reasonable distributions.

Interest rate sensitivity: A company highly sensitive to interest rate movements can be modeled with interest rate distributions drawn from swaption markets or historical volatility. Interest rates are liquid, observable, and (relatively) well-behaved statistically.

Portfolio-level aggregation: If you have many independent positions, Monte Carlo across all of them can usefully model portfolio volatility. Individual company valuations might be hard to model, but portfolio-level risks from diversified holdings can be illuminating.

In these cases, the inputs to your Monte Carlo model are grounded in observable reality: market prices, historical volatility, liquid derivatives that reveal market expectations.

When Manual Scenarios Win

Manual scenarios excel in the contexts where valuation actually matters:

Structural uncertainty: Will Apple's Services segment grow to 50% of revenue? Will Tesla's autonomous driving capability actually work? These aren't statistical questions; they're narrative questions. Either the business model fundamentally changes or it doesn't. Manual scenarios directly address this binary.

Competitive dynamics: How much market share will Amazon capture in a new category? How durable are Netflix's competitive advantages as competition increases? These depend on behavioral and strategic factors that aren't normally distributed. They're regime-dependent. Manual scenarios let you tell stories about competitive outcomes.

Regulatory and political risk: Changes in tax policy, healthcare regulation, or antitrust enforcement aren't statistically normal. They're discontinuous, event-driven. Manual scenarios let you price in the possibility of significant regulatory changes alongside lower-probability status quo paths.

Management execution: Will management successfully execute a turnaround? Will the founder's departure destabilize the company? These are binary or strongly regime-dependent outcomes. They're not normally distributed around a mean.

Hybrid Approaches: The Best Compromise

Many sophisticated investors use hybrid models: they build two or three discrete scenarios (base, bull, bear) with hand-assigned probabilities, then within each scenario, use Monte Carlo or sensitivity analysis to understand how parameter variation affects that scenario's valuation.

For example:

  • Base case (60% probability): Modest growth continues, margins stable. Use Monte Carlo on commodity price fluctuations and interest rate sensitivity within this scenario. Output: base case fair value is $45, with a distribution from $40–$52 depending on interest rates and commodity moves.
  • Bull case (25% probability): New market opens, growth accelerates, margins expand. Use Monte Carlo within this case. Output: bull case fair value is $65, with a distribution from $58–$75.
  • Bear case (15% probability): Disruption materializes, margins compress. Use Monte Carlo within this case. Output: bear case fair value is $28, with a distribution from $22–$35.

Then aggregate: 0.60×[distribution of base case] + 0.25×[distribution of bull case] + 0.15×[distribution of bear case] = overall valuation distribution.

This approach honors both the narrative power of scenarios and the computational rigor of Monte Carlo. You acknowledge that each scenario has internal uncertainty (Monte Carlo within each), while acknowledging that which scenario actually occurs is fundamentally unpredictable and best addressed through judgment (manual probability weighting of scenarios).
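Under these assumptions, the aggregation is a mixture: sample which scenario occurs using the hand-assigned weights, then sample within that scenario. A sketch, treating each within-scenario Monte Carlo output as a normal spread around the fair values quoted above (the spreads are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000

# Within-scenario distributions, sketched as normal spreads around each fair value
base = rng.normal(45, 3, N)   # base case: roughly the $40-$52 range
bull = rng.normal(65, 4, N)   # bull case: roughly the $58-$75 range
bear = rng.normal(28, 3, N)   # bear case: roughly the $22-$35 range

# Sample which scenario occurs using the hand-assigned probability weights
weights = [0.60, 0.25, 0.15]
pick = rng.choice(3, size=N, p=weights)
overall = np.select([pick == 0, pick == 1, pick == 2], [base, bull, bear])

# The mean converges to 0.60*45 + 0.25*65 + 0.15*28 = 47.45
print(f"expected value ~ {overall.mean():.1f}")
```

The resulting `overall` distribution is multimodal, which is exactly the honest shape: it shows three distinct outcomes rather than a smooth bell curve that blurs them together.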

Real-World Comparison

Hypothetical Tech Stock: Pure Monte Carlo vs. Manual Scenarios

Assume we're valuing a growth software company.

Pure Monte Carlo approach:

  • Revenue growth: normal distribution, 18% ± 4% annually for 10 years
  • Operating margin: normal distribution, 8% ± 2%
  • Terminal growth rate: normal distribution, 2.5% ± 0.5%
  • Discount rate: normal distribution, 8% ± 1%
  • Run 10,000 simulations
  • Output: Mean intrinsic value $85, 25th percentile $72, 75th percentile $102

The output feels precise. But notice what it's missing:

  • The 18% ± 4% growth assumes the distribution is normal. But what if the company faces a 30% probability of disruption that cuts growth to 2%? That fat tail isn't captured.
  • The margin distribution doesn't capture correlated risks: if growth slows sharply due to competition, margins probably compress too. The normal distribution model assumes margins are independent.
  • The percentile outputs suggest false precision: the 25th percentile isn't really 72; it's "somewhere around 72 if our distribution assumptions are correct, which they probably aren't."

Manual scenario approach:

  • Base case (60%): Strong execution, growth moderates to 12% annually by year 5 (from initial 20%), margins expand to 10% as scale increases. Discount rate 8%. Fair value: $78
  • Bull case (20%): Disruption doesn't materialize, product expansion succeeds, growth stays 15% for 8 years, margins expand to 12%. Discount rate 7% (lower risk due to sustained success). Fair value: $115
  • Bear case (20%): Key customer concentration realized, growth moderates to 5% by year 3, margins stay at 6%. Discount rate 9% (higher risk). Fair value: $42

Expected value: 0.60×$78 + 0.20×$115 + 0.20×$42 = $78.20

The outputs differ ($85 vs. $78). The Monte Carlo model is more optimistic, reflecting its normal-distribution bias toward middle outcomes and underestimation of tail risks. The scenario model, by explicitly thinking through what could go wrong, arrives at a lower fair value.

Over a 5-year period, which was more useful? The bear case scenario correctly warned that customer concentration was a real risk. When the company did lose a key customer, the bear case proved predictive. The Monte Carlo model never developed that narrative; it was embedded in distribution tails that the model underestimated.

Scenario vs. Monte Carlo Decision Tree


Common Mistakes

Treating Monte Carlo outputs as true probabilities. The 25th percentile from your 10,000 simulations isn't a real 25th percentile of future outcomes unless your distributions are perfectly calibrated. They're not. Use Monte Carlo outputs as ranges, not as calibrated probabilities.

Assuming independence when variables are correlated. Running Monte Carlo without modeling correlations between growth, margins, and discount rates systematically underestimates both upside and downside tails. Either build correlations explicitly or use Monte Carlo as a sensitivity tool, not a probability estimator.

Anchoring scenario probabilities to the starting price. If a stock is near its intrinsic value, it's tempting to assign 50% probability to each case (up or down). But probabilities should reflect business reality and path dependencies, not starting prices. A company with a durable moat might be 70% base case, 20% bull, 10% bear regardless of current price.

Ignoring base rate information. When assigning scenario probabilities, ask: historically, how often do growth companies successfully expand to new markets (bull case)? How often do they face disruption (bear case)? Base rates provide anchors for your probability assignments.

Hiding weak assumptions in computational complexity. The most dangerous Monte Carlo models are those that run thousands of simulations on weak distribution assumptions. The complexity creates an illusion of rigor. Be suspicious of models that claim to output a fair value to the penny.

FAQ

Q: Should I use Monte Carlo or manual scenarios?

A: Use manual scenarios for the high-level business outcomes (will the moat persist? Will disruption occur? Will the market expand?), then use Monte Carlo within each scenario to model smaller fluctuations in interest rates, commodity prices, or other measurable variables. This hybrid approach is more honest about what you know and don't know.

Q: Isn't Monte Carlo more scientific?

A: No. Science requires calibration against reality. Monte Carlo applied to equity valuations is pseudoscience: it looks rigorous, but the inputs (future margin distributions, terminal growth rates) are fundamentally unknowable. Manual scenarios are more honest about this uncertainty.

Q: What if my manual scenario probabilities are wrong?

A: That's the right question. They probably are wrong. But at least you'll notice when they are, because you've committed to specific narratives that you can validate against evidence. Monte Carlo lets you hide behind distributions you never revisit.

Q: Can I use past volatility to calibrate my Monte Carlo distributions?

A: Past volatility is a starting point, but it's not a reliable predictor of future volatility. Periods of calm volatility are followed by spikes (volatility clustering). Volatility is regime-dependent. Past 5-year volatility might underestimate tail risks in the next 5 years. Always stress-test your volatility assumptions.

Q: Should I assign equal probabilities to base/bull/bear scenarios?

A: Only if you have no information. Use your judgment. A company with a history of execution might be 65% base, 25% bull, 10% bear. A company entering a new, uncertain market might be 45% base, 35% bull, 20% bear. The probabilities should reflect your genuine beliefs about the likelihood of each outcome, informed by base rates and company-specific factors.

Q: Is Monte Carlo useful for portfolio risk management?

A: Yes. Monte Carlo simulations of portfolio returns (given historical volatility and correlations) are useful for estimating value at risk and stress testing. This is different from using Monte Carlo for individual company valuation, and it works better because portfolio-level volatility is more stable and measurable than individual company narratives.
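As a sketch of that portfolio-level use, the following simulates one-year returns for a hypothetical three-asset portfolio and reads off a 95% value at risk. All means, volatilities, correlations, and weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

# Hypothetical three-asset portfolio: annual return means, vols, correlations
mu = np.array([0.07, 0.05, 0.09])
vol = np.array([0.18, 0.10, 0.25])
corr = np.array([[1.0, 0.3, 0.6],
                 [0.3, 1.0, 0.2],
                 [0.6, 0.2, 1.0]])
w = np.array([0.4, 0.4, 0.2])  # portfolio weights

# Simulate correlated asset returns, then combine into portfolio returns
cov = np.outer(vol, vol) * corr
returns = rng.multivariate_normal(mu, cov, N) @ w

# 95% one-year value at risk: the loss exceeded in the worst 5% of simulations
var_95 = -np.percentile(returns, 5)
print(f"95% one-year VaR: {var_95:.1%}")
```

Note that this works precisely because the inputs (volatilities and correlations) are measurable from market data, unlike the narrative uncertainties of a single-company valuation.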

Related Chapters

  • Introduction to Probability-Weighted Scenarios — Understand why and how to model multiple outcomes with explicit probabilities.
  • Building Three-Scenario Models — Master the practical art of base/bull/bear case construction.
  • Aggregating Scenarios to Portfolio — Learn how to combine scenario analysis across multiple holdings to manage portfolio risk.
  • Discounted Cash Flow Fundamentals — Understanding the valuation engine that scenarios and Monte Carlo feed into.

Summary

Monte Carlo simulation creates an illusion of scientific precision where none exists. By running thousands of simulations with randomly sampled inputs, it produces confidence intervals and percentiles that feel authoritative. But these outputs are only as good as the underlying distribution assumptions, and for equity valuations, those assumptions are guesses at best and systematically biased at worst.

Manual scenarios acknowledge this uncertainty explicitly. They force you to articulate causal narratives, identify key drivers, and commit to specific outcome probabilities. They handle regime shifts and structural breaks more naturally than Monte Carlo. They're transparent about their limitations.

The most powerful approach is hybrid: use scenario-level analysis to identify which business outcomes matter most and assign probabilities to them, then use Monte Carlo within each scenario to model minor parameter fluctuations around that scenario's core narrative. This respects both what you can quantify (interest rate sensitivity, commodity exposure) and what you can only narrate (disruption probability, management execution).

Statistical sophistication is valuable. But it's only valuable when it's grounded in reality. For equity valuation, that ground is narrative, not normal distributions.

Next

Continue to Aggregating Scenarios to Portfolio to learn how scenario analysis scales from individual stocks to portfolio-level risk management.