Monte Carlo Simulation Setup

A single-point valuation gives you false certainty. Monte Carlo simulation in Excel replaces that fixed number with a probability distribution, revealing not just what you think a stock is worth, but how confident you should be in that estimate and what downside you face if your assumptions slip.

Quick Definition

Monte Carlo simulation is a computational method that repeatedly samples from probability distributions you assign to key valuation inputs (revenue growth, margins, discount rate, terminal value), calculates the resulting intrinsic value for each iteration, and displays the full range of outcomes as a histogram. Instead of "this stock is worth $50," you get "there's an 80% probability it's worth between $35 and $70."

Key Takeaways

  • Monte Carlo transforms point estimates into probability distributions, exposing hidden risk in your valuation
  • Excel-based simulations using data tables or add-ins like Crystal Ball or @RISK can run 10,000–50,000 iterations in seconds
  • Defining realistic input distributions (anchored to historical data, peer ranges, management guidance) is more important than iteration count
  • Modeling correlation between inputs (high growth typically comes with margin improvement) keeps simulations realistic
  • Output percentiles (10th, 25th, 50th, 75th, 90th) replace false precision with honest uncertainty bands

Building a Monte Carlo Model in Excel: The Architecture

Your Monte Carlo model sits on top of your existing DCF. You keep the three-statement model, but instead of fixed assumptions, you'll define probability distributions for key drivers.

Step 1: Choose Your Input Variables

Not every assumption needs randomization. Focus on 4–6 drivers that move the needle:

  • Revenue growth (Year 1–5): Largest uncertainty for growth companies
  • Operating margin (EBIT margin): Major value lever; historical range shows natural volatility
  • Terminal growth rate: Long-run growth assumption (typically 2–3%)
  • Discount rate (WACC): Less variable for stable companies; more variable if beta or leverage assumptions shift
  • Terminal multiple or exit assumption: If using exit multiple, add variance based on historical P/E or EV/EBITDA range

Leave other inputs fixed if they're less sensitive or better estimated (tax rate, shares outstanding, depreciation methods).

Step 2: Assign Probability Distributions

For each input, define a distribution shape and parameters. Common choices:

| Input | Distribution Type | Example Parameters | Rationale |
| --- | --- | --- | --- |
| Revenue Growth | Normal | Mean 12%, Std Dev 3% | Symmetric; constrained by industry trends |
| Operating Margin | Beta | Mode 22%, Min 15%, Max 30% | Bounded variable; skew toward historical average |
| Terminal Growth | Normal | Mean 2.5%, Std Dev 0.3% | Tight; anchored to GDP growth |
| WACC | Normal | Mean 8.5%, Std Dev 0.6% | Moderate variance from cost-of-capital shifts |
| Exit Multiple | Empirical | Based on historical high/low | Use actual historical trading range |

Normal distribution works for most inputs if you've validated the range. Beta distribution is better for variables with natural floors and ceilings (margins can't exceed 100%, often have industry minimums). Empirical distribution uses actual historical observations (useful for multiples).
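As a sanity check on a bounded distribution, here is a minimal Python sketch of the Beta row from the table above (operating margin: mode 22%, min 15%, max 30%). The shape parameters a=2.4 and b=2.6 are an assumption, chosen so the mode (a-1)/(a+b-2) ≈ 0.467 lands at 22% after rescaling to the 15%–30% range:

```python
import random

def bounded_beta(a, b, lo, hi):
    """Draw from Beta(a, b) rescaled to the interval [lo, hi]."""
    return lo + random.betavariate(a, b) * (hi - lo)

# a=2.4, b=2.6 are assumed shape parameters, not from the article;
# they place the mode at roughly 22% within the 15%-30% bounds
random.seed(3)
margins = [bounded_beta(2.4, 2.6, 0.15, 0.30) for _ in range(10_000)]
```

Every draw stays inside the bounds by construction, which is exactly the property that makes Beta preferable to Normal for margins.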

Step 3: Set Up Random Number Generation

In Excel, you'll use the RAND() function or a dedicated add-in to generate random numbers from your specified distributions. Each simulation iteration pulls a random value for each input from its distribution.

For a normal distribution (mean μ, std dev σ), use the inverse normal function (NORM.INV in current Excel; the legacy NORMINV also works):

=NORMINV(RAND(), mean, std_dev)

For bounded ranges, truncate:

=MAX(min_value, MIN(max_value, NORMINV(RAND(), mean, std_dev)))
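The same clipped-normal draw can be mirrored outside Excel. The Python sketch below uses illustrative clip bounds of 0%–25% (an assumption); note that clipping, like the MAX/MIN formula above, piles probability mass at the bounds, which is negligible here because the bounds sit about four standard deviations from the mean:

```python
import random

def clipped_normal(mean, sd, lo, hi):
    """One draw mirroring =MAX(min, MIN(max, NORMINV(RAND(), mean, sd)))."""
    return max(lo, min(hi, random.gauss(mean, sd)))

# Illustrative: revenue growth, mean 12%, sd 3%, clipped to 0%-25%
random.seed(1)
draws = [clipped_normal(0.12, 0.03, 0.0, 0.25) for _ in range(10_000)]
```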

Excel add-ins like Crystal Ball, @RISK, or even free tools like SimulAlt automate this. They'll generate random inputs, recalculate your DCF, and store results without manual formula work.

Step 4: Run Iterations and Capture Output

Set your tool to run 10,000–25,000 iterations. Each iteration:

  1. Generates one random value for each distribution
  2. Plugs those values into your three-statement DCF
  3. Calculates intrinsic value per share
  4. Records the result

After completion, you have a column of 10,000 intrinsic values. From that, you compute percentiles.
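The iterate-then-take-percentiles loop can be sketched end to end in Python. The DCF here is a deliberately tiny stand-in: the starting revenue, flat 25% tax rate, and share count are illustrative placeholders, not figures from the article:

```python
import random

def intrinsic_value(growth, margin, wacc, tg, rev0=100.0, shares=10.0):
    """Toy 5-year DCF per share. rev0, the flat 25% tax, and the
    share count are illustrative placeholders."""
    value, rev, fcf = 0.0, rev0, 0.0
    for year in range(1, 6):
        rev *= 1 + growth
        fcf = rev * margin * 0.75           # after-tax EBIT as an FCF proxy
        value += fcf / (1 + wacc) ** year   # discount each year's FCF
    terminal = fcf * (1 + tg) / (wacc - tg)  # Gordon growth terminal value
    value += terminal / (1 + wacc) ** 5
    return value / shares

random.seed(42)
results = sorted(
    intrinsic_value(
        random.gauss(0.12, 0.03),               # revenue growth
        random.gauss(0.22, 0.03),               # EBIT margin (normal stand-in)
        max(0.06, random.gauss(0.085, 0.006)),  # WACC, floored above terminal growth
        random.gauss(0.025, 0.003),             # terminal growth
    )
    for _ in range(10_000)
)

def pct(q):
    """Percentile by rank lookup on the sorted results."""
    return results[int(q * (len(results) - 1))]

print(f"P10={pct(0.10):.1f}  median={pct(0.50):.1f}  P90={pct(0.90):.1f}")
```

The sorted `results` list plays the role of the Excel output column; in Excel you would use PERCENTILE.INC on that column instead of the rank lookup.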

Practical Example: Building a Monte Carlo Model in Native Excel (Data Table Method)

If you don't want to install an add-in, you can simulate Monte Carlo using Excel's Data Table feature:

Setup:

  • Column A: Iteration numbers (1 to 10,000)
  • Column B: Your intrinsic value formula (=NPV(WACC, forecasts) + terminal value)
  • Columns C–G: Your five input distributions

Method:

  1. Drive each DCF input with a random-draw formula, e.g. =NORMINV(RAND(), mean, std_dev), so the intrinsic value cell changes on every recalculation
  2. Select the iteration column together with a cell referencing the intrinsic value, open Data > What-If Analysis > Data Table, and point the column input cell at any blank cell; each table row then forces a fresh recalculation with new random draws
  3. Alternatively, use a VBA loop (if comfortable with macros) to:
    • Generate random draws from each distribution
    • Update the DCF inputs
    • Record the intrinsic value
    • Repeat 10,000 times

Example VBA macro (simplified, but runnable once the named ranges exist):

Sub MonteCarlo()
    Const ITERATIONS As Long = 10000
    Dim i As Long
    Dim wf As WorksheetFunction
    Set wf = Application.WorksheetFunction

    Randomize
    Application.ScreenUpdating = False
    For i = 1 To ITERATIONS
        ' Draw each input from its distribution (decimals, not % literals)
        Sheet1.Range("RevGrowth").Value = wf.Norm_Inv(Rnd, 0.12, 0.03)
        ' Beta draw bounded between 15% and 30% (min/max from the table above)
        Sheet1.Range("Margin").Value = wf.Beta_Inv(Rnd, 2, 3, 0.15, 0.3)
        Sheet1.Range("WACC").Value = wf.Norm_Inv(Rnd, 0.085, 0.006)
        Sheet1.Range("TerminalGrowth").Value = wf.Norm_Inv(Rnd, 0.025, 0.003)
        ' The DCF recalculates on assignment; record the result
        Sheet2.Cells(i, 1).Value = Sheet1.Range("IntrinsicValuePerShare").Value
    Next i
    Application.ScreenUpdating = True
End Sub

Interpreting the Histogram

Once you have 10,000 outcomes, plot them as a histogram. Key statistics:

Mean vs. Median

  • Mean: Simple average of all outcomes
  • Median: 50th percentile (50% of outcomes above, 50% below)
  • If distribution is skewed, median is often more intuitive

Percentile Bands

  • 10th percentile: 1-in-10 downside case
  • 90th percentile: 1-in-10 upside case
  • 25th to 75th percentile: middle 50% of outcomes
  • The gap reveals valuation uncertainty; narrow spreads = high confidence, wide spreads = fragile assumptions

Standard Deviation

  • Measures outcome volatility
  • Higher std dev suggests assumptions are sensitive or ranges are wide
  • Compare it to the mean: a std dev equal to 20% of the mean (a coefficient of variation of 0.2) means typical outcomes swing by about a fifth of your central estimate

Skewness

  • Right-skew: Long tail of upside outcomes (more often seen when downside is capped but growth is open-ended)
  • Left-skew: Long tail of downside (margin compression, competitive disruption)
  • Symmetric: Balanced bull and bear cases (rare in reality)
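All of these statistics are simple to compute on the results column. The sketch below uses a tiny hypothetical outcome set (assumed numbers standing in for your 10,000 Excel outputs) and the adjusted Fisher-Pearson sample skewness, which is the formula Excel's SKEW() uses:

```python
import statistics

# Hypothetical per-share values (assumed numbers); right-skewed on purpose
outcomes = [30, 38, 42, 45, 47, 48, 50, 54, 62, 84]

mean = statistics.mean(outcomes)
median = statistics.median(outcomes)
sd = statistics.stdev(outcomes)

# Adjusted Fisher-Pearson sample skewness, as computed by Excel's SKEW()
n = len(outcomes)
skew = (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3 for x in outcomes)
```

Here the mean (50) sits above the median (47.5) and the skewness is positive: a right-skewed distribution, the pattern described above where upside is open-ended.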

Example Output (SaaS Company):

Interpretation: The median fair value is $48/share. There's an 80% probability the true value lies between $28–$78. The right skew suggests upside scenarios are more stretched than downside, a common pattern when growth could exceed expectations but margins are capped by competition.

Handling Correlation Between Inputs

Independent distributions are unrealistic. In reality, inputs move together:

  • High growth + margin expansion: Scale advantages, negotiating power with suppliers
  • Rising interest rates + higher WACC: Mechanical correlation
  • Recession scenario: Low growth + margin compression + higher discount rate all hit simultaneously

Building a Correlation Matrix:

If you're using an add-in, most allow you to specify pairwise correlations. For native Excel, define discrete scenarios (Bull, Base, Bear) where correlated inputs shift together:

| Scenario | Probability | Revenue Growth | EBIT Margin | WACC | Terminal Growth |
| --- | --- | --- | --- | --- | --- |
| Bull | 25% | 18% | 28% | 7.5% | 3.2% |
| Base | 50% | 12% | 23% | 8.5% | 2.5% |
| Bear | 25% | 6% | 18% | 9.5% | 1.8% |

Run 5,000 iterations of Bull, 10,000 of Base, and 5,000 of Bear. The resulting histogram naturally reflects correlation and tail risk.
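In a scripted simulation, the same scenario-based correlation can be sketched by sampling the scenario first, then drawing every input around that scenario's centre (the jitter sizes below are assumptions added to smooth the histogram):

```python
import random

# Scenario table from the article: correlated inputs shift together
SCENARIOS = {
    "Bull": {"p": 0.25, "growth": 0.18, "margin": 0.28, "wacc": 0.075, "tg": 0.032},
    "Base": {"p": 0.50, "growth": 0.12, "margin": 0.23, "wacc": 0.085, "tg": 0.025},
    "Bear": {"p": 0.25, "growth": 0.06, "margin": 0.18, "wacc": 0.095, "tg": 0.018},
}

random.seed(7)
names = list(SCENARIOS)
weights = [SCENARIOS[n]["p"] for n in names]

draws = []
for _ in range(20_000):
    s = SCENARIOS[random.choices(names, weights=weights)[0]]
    # Small jitter (assumed sizes) around each scenario's centre keeps the
    # histogram smooth while growth, margin, WACC and terminal growth
    # remain correlated through the shared scenario
    draws.append({
        "growth": random.gauss(s["growth"], 0.01),
        "margin": random.gauss(s["margin"], 0.01),
        "wacc":   random.gauss(s["wacc"], 0.002),
        "tg":     random.gauss(s["tg"], 0.001),
    })
```

Because growth and margin move together through the scenario, the simulated inputs come out strongly positively correlated without any explicit covariance matrix.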

Practical Excel Implementation Checklist

  • Define 4–6 input distributions anchored to historical data and peer ranges
  • Set up a three-statement DCF with formulas (not hard-coded numbers) for all inputs
  • Use an add-in (Crystal Ball, @RISK) or VBA to automate iteration
  • Run 10,000+ iterations; save results in a dedicated sheet
  • Calculate mean, median, std dev, and percentiles (10th, 25th, 75th, 90th)
  • Plot histogram; overlay percentile bands
  • Stress-test: Set one input to 1 std dev worse and rerun; note impact on median
  • Document all distribution assumptions in a reference table
  • Compare median outcome to current market price and your margin of safety threshold

Common Mistakes

Starting with Arbitrary Ranges If you assume revenue could grow anywhere from 5% to 25%, you're not running a simulation—you're guessing. Anchor ranges in historical company data, analyst consensus, and peer benchmarks.

Ignoring Correlation Treating inputs as independent will underestimate downside (when one thing goes wrong, others usually do too) and overestimate upside. Use scenario-based correlation or specify covariance matrices.

Running Too Few Iterations Fewer than 5,000 iterations leaves gaps, and tail percentiles bounce around from run to run. Aim for at least 10,000; use 25,000+ when the tails drive your decision.

Confusing Percentiles with Probability The 10th percentile is not "worst case"; it's a 1-in-10 outcome. True worst-case (existential disruption) may lie further left. Use percentiles as guidance, not gospel.

Forgetting That Garbage In = Garbage Out If your underlying three-statement model is broken (forecasts don't roll forward correctly, formulas are inconsistent, WACC calculation is wrong), no amount of Monte Carlo sophistication fixes it. Validate the base model first.

FAQ

Q: Should I use 10,000 or 50,000 iterations? A: 10,000 is sufficient for stable percentiles. 25,000–50,000 reduces noise but runs slower. For personal Excel models, 10,000–20,000 is practical.

Q: Can I do Monte Carlo without an add-in? A: Yes, using VBA loops or data tables. It's more manual but free. Add-ins are worth it if you run valuations frequently.

Q: What if my histogram is bimodal (two peaks)? A: Suggests two distinct regimes (e.g., industry disruption vs. steady-state growth). Consider splitting into discrete Bull/Base/Bear scenarios for clarity instead of one continuous distribution.

Q: How do I set terminal growth when I don't know long-run GDP? A: Use 2–3% (typical long-run nominal GDP growth in developed markets). Tighten the distribution (low std dev) to reflect that this is constrained by macro reality. If you're modeling an emerging market, use 3–4%.

Q: Should I adjust WACC based on scenario? A: In theory, WACC is fixed (your cost of capital). In practice, if a scenario materially changes firm risk (e.g., high-leverage recession case), you can adjust. But first, make sure revenue, margin, and FCF variance are doing the heavy lifting.

Q: How do I know if my distributions are realistic? A: Back-test: Pull 10 years of historical data for your inputs, compute empirical percentiles, and compare to your assumed distribution. Does your distribution's 10th percentile match history? If not, adjust.
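That back-test is easy to script. The ten years of revenue-growth history below are hypothetical figures for illustration:

```python
import statistics

# Hypothetical 10 years of observed revenue growth (assumed figures)
history = [0.06, 0.08, 0.09, 0.10, 0.11, 0.12, 0.13, 0.14, 0.16, 0.19]

mu = statistics.mean(history)
sigma = statistics.stdev(history)

# 10th percentile implied by an assumed Normal(mu, sigma): z(0.10) = -1.2816
assumed_p10 = mu - 1.2816 * sigma
# Rough empirical 10th percentile: second-lowest of ten observations
empirical_p10 = sorted(history)[1]

print(f"assumed P10 = {assumed_p10:.3f}, empirical P10 = {empirical_p10:.3f}")
```

If the assumed distribution's 10th percentile sits well below (or above) what history actually produced, tighten or widen the distribution accordingly.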

Related Concepts

  • Sensitivity Analysis: Single-variable "what-if"; foundation for defining Monte Carlo ranges
  • Scenario Analysis: Discrete Bull/Base/Bear cases; can feed into Monte Carlo
  • Value at Risk (VaR): Risk management metric; percentile-based, same structure as Monte Carlo output
  • Stochastic Modeling: General framework for modeling randomness; Monte Carlo is one method
  • Bootstrap Simulation: Uses historical data to generate distributions without assuming a shape

Summary

Monte Carlo simulation in Excel bridges the gap between false precision (a single intrinsic value) and honest uncertainty (a probability distribution). By defining realistic input distributions, running thousands of iterations, and analyzing the resulting histogram, you replace "I think this stock is worth $50" with "I'm 80% confident it's worth between $35 and $70, with median $48."

This layer of rigor—quantifying not just your valuation but your confidence in it—separates disciplined investors from gamblers. Position sizing, conviction scores, and margin-of-safety thresholds all flow from understanding your outcome distribution, not from a single-point estimate that's almost certainly wrong.

Next Steps

Your simulation reveals where outcomes cluster and where tail risk lives. Now make that distribution actionable: Excel What-If Analysis Tools shows you how to use Goal Seek and Solver to reverse-engineer market assumptions and validate whether your downside is real or a modeling artifact.