Historical Accuracy of Scenarios: Learning From Past Forecasts
Scenario analysis looks good in theory. You build a base case, a bull case, and a bear case. You assign probabilities. You calculate a probability-weighted fair value. It feels disciplined, thoughtful, systematic. But does it actually work? Do scenario forecasts that seemed reasonable in 2019 look reasonable in 2024? Do they predict the crises before they happen or only explain them after?
The honest answer is: scenarios are accurate at predicting the base case and inaccurate at predicting the direction and magnitude of departures from it. This chapter examines four historical periods—the financial crisis, the pandemic, the inflation shock, and tech disruption—to understand when scenarios work and when they fail catastrophically.
Quick Definition
Scenario forecast accuracy measures how well scenarios constructed at one point in time predicted actual outcomes in subsequent years. High accuracy means base cases, bull cases, and bear cases contained outcomes that actually occurred. Low accuracy means scenarios anchored to wrong baseline assumptions or failed to anticipate the direction of change. Perfect accuracy is impossible; the goal is honest recalibration to understand your systematic blindspots.
Key Takeaways
- Scenarios accurately capture base-case outcomes roughly 60% of the time, but rarely predict directional shifts (which way you're wrong when you're wrong).
- Bear cases almost always underestimate downside: the actual catastrophe exceeds the "worst case" scenario. This is a systematic bias in scenario construction.
- Bull cases are more accurate than bear cases, but tend to underestimate the time required for upside scenarios to materialize.
- The largest forecast failures come from regime shifts (structural changes) that were labeled as "bear cases" but shouldn't have been—they should have been re-assigned as "the new base case."
- Scenario accuracy improves when you include base rates (historical frequencies) and stress-test for assumptions that might break down under regime shifts.
The Financial Crisis, 2007–2009: The Bear Case That Wasn't
In 2007, most institutional investors had three scenarios:
Base case (65%): Moderate growth, stable housing market, credit spreads normal, financial sector profits steady. Fair value for bank stocks: X.
Bull case (20%): Stronger economic growth, credit cycle extends, margin expansion. Fair value: X × 1.2.
Bear case (15%): Recession, housing downturn, credit spreads widen, financial sector struggles. Fair value: X × 0.75 to X × 0.6.
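The probability-weighted fair value these scenarios imply can be sketched in a few lines. All figures are the chapter's illustrative multiples of the base-case value X; the bear-range midpoint of 0.675 is an assumption made here for illustration.

```python
# Probability-weighted fair value for the three 2007-style scenarios above,
# expressed as multiples of the base-case fair value X. The 0.675 bear
# multiple (midpoint of 0.75 and 0.60) is an assumption for this sketch.

scenarios = {
    "base": {"prob": 0.65, "multiple": 1.0},
    "bull": {"prob": 0.20, "multiple": 1.2},
    "bear": {"prob": 0.15, "multiple": (0.75 + 0.60) / 2},
}

# Probabilities must sum to 1 across mutually exclusive scenarios.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

weighted = sum(s["prob"] * s["multiple"] for s in scenarios.values())
print(f"Probability-weighted fair value: {weighted:.4f} x X")
```

Note how close the blended value sits to 1.0 × X: with only 15% weight on a modest bear case, the "disciplined" weighted average barely differs from the base case, which is exactly why it offered so little protection in 2008.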
Actual outcome: The bear case's X × 0.6 proved too optimistic. Major banks traded down to X × 0.15. Some traded at X × 0.05. Lehman Brothers, valued at $32 billion in 2007, became worthless in 2008. The actual crisis exceeded every reasonable bear case.
Why the Scenarios Failed
1. Regime shift wasn't contemplated. Scenarios assumed the credit system remained functional but stressed. They didn't contemplate a scenario where credit essentially seized up—where the financial system itself became unstable rather than just unprofitable. This wasn't a bad recession; it was a potential systemic collapse. The bear case had no category for that.
Post-crisis research by the Financial Crisis Inquiry Commission and academic studies by Carmen Reinhart and Kenneth Rogoff documented that systemic financial crises follow different dynamics than cyclical recessions. Scenarios built on post-WWII stability without financial system instability miss the tail risks that occur once per generation.
2. Correlation assumptions were wrong. Scenarios assumed that if housing fell 20%, mortgages would perform okay because borrowers still had jobs. But in a recession, housing and employment both fall. Mortgage defaults were correlated not just to housing prices but to unemployment. The scenario didn't model simultaneous shocks to housing, employment, and credit.
3. Tail risk was underestimated. Every scenario analyst knew that a full credit collapse was possible. But folding it into a 15% "recession" bucket made it feel far less likely than it was. The bear case carried 15% probability; a 5% chance, conditional on the bear case, that the financial system itself would fail implied a joint tail risk of only 0.75%. In reality, that 0.75% tail risk is the outcome that happened.
4. Analysts were anchored to "normal" historical ranges. Historical real estate downturns were 15–25%. Scenarios assumed downturn: -15%. What if it's -30%? Most analysts didn't ask this question because -30% housing declines hadn't occurred in living memory. The scenarios were bounded by recent history.
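The arithmetic in point 3 is a simple joint probability, sketched here with the chapter's figures:

```python
# Point 3's arithmetic: a 15% bear case, combined with a 5% chance
# (conditional on the bear case) that the financial system itself fails,
# yields the 0.75% joint tail probability the chapter cites.

p_bear = 0.15                  # probability assigned to the bear case
p_systemic_given_bear = 0.05   # conditional chance the bear case understates systemic risk

p_tail = p_bear * p_systemic_given_bear
print(f"Joint tail risk: {p_tail:.4%}")
```

The trap is psychological as much as mathematical: a sub-1% number gets rounded to "won't happen," even though once-per-generation events are, by definition, roughly that rare.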
What This Taught
By 2009, investors learned: your bear case is probably too optimistic. It should be your "stressed base case." Your actual bear case should be significantly worse. Smart investors started assigning 30% probability to truly catastrophic outcomes, not 15% to moderate recessions.
This overcorrection lasted a few years, then gradually reverted. By 2019, scenarios had begun to assume financial systems were stable again. This was probably wise—the probability of a 2008-style crisis recurring had genuinely fallen thanks to regulatory reforms. But the lesson remained: bear cases systematically underestimate tail risk.
The Pandemic, 2019–2020: The Bear Case That Happened Faster Than Expected
In late 2019, typical investor scenarios looked like:
Base case (70%): GDP growth 2–3%, earnings growth 4–6%, valuations stable to modestly expanding.
Bull case (15%): Lower interest rates, stronger earnings growth, multiple expansion. Markets up 10–15%.
Bear case (15%): Recession, earnings down 10–20%, multiple compression. Markets down 20–30%.
Then in March 2020, reality delivered something strange: a combination of the bear case and the bull case simultaneously.
The bear case happened: global supply chains broke, corporate earnings collapsed, unemployment spiked from 3.5% to 14.7% in two months. The S&P 500 fell 34% from peak to trough. On paper, this was the bear case materializing.
But the bull case also happened: interest rates fell to zero, central banks deployed unlimited stimulus, governments passed multi-trillion-dollar spending packages. Asset prices, which should have fallen further as earnings collapsed, stabilized and then recovered. By August 2020 (five months later), the S&P 500 had recovered to all-time highs.
Why Scenarios Missed This
1. They didn't anticipate simultaneous supply and demand shocks that would trigger policy response. The bear case contemplated recession. But recession scenarios assume a normal monetary policy response (modest rate cuts). The pandemic created a unique scenario: recession-level demand collapse combined with inflation-level supply collapse, which triggered policy responses that would have been inappropriate for either alone.
2. They underestimated policy response speed. The Fed moved from "hold rates steady" to "unlimited stimulus" in two weeks. Congress passed a $2 trillion relief package in three weeks. Scenarios typically assume policy changes over months or years, not days.
3. They didn't model the regime shift: from "gradual cyclical adjustment" to "emergency extraordinary measures." The pandemic didn't feel like a recession scenario. It felt like a systemic crisis, but not a financial system failure. It was a novel tail risk that previous scenarios couldn't easily categorize.
What This Taught
Investors learned that policy response matters as much as the initial shock. A recession that triggers emergency stimulus might be less harmful to equity investors than a recession that triggers policy tightening. Scenarios now more explicitly model central bank behavior: not just assuming it's reactive, but asking: what are the political constraints on policy response? How fast can policy move?
The second lesson: diversification across asset classes (stocks and bonds) proved effective. Stocks crashed, but bonds held up, and the combination (60/40 portfolio) fell only about 20%. This validated portfolio-level scenario analysis: downside protection came from diversification, not from predicting the crash.
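That portfolio-level point can be sketched with the text's -34% equity drawdown. The +2% bond return over the same window is an illustrative assumption, chosen so the blend lands near the roughly 20% portfolio decline the chapter cites.

```python
# Portfolio-level sketch of the 2020 drawdown described above. The -34%
# equity figure is from the text; the +2% bond return over the same
# peak-to-trough window is an illustrative assumption.

w_stocks, w_bonds = 0.60, 0.40
r_stocks, r_bonds = -0.34, 0.02   # peak-to-trough returns (bond figure assumed)

portfolio_return = w_stocks * r_stocks + w_bonds * r_bonds
print(f"60/40 drawdown: {portfolio_return:.1%}")
```

The diversification benefit here required no forecast of the crash—only holding an asset whose scenario behavior differed from equities.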
The Inflation Shock, 2021–2023: Bull Case That Became Bear Case
In late 2020, as vaccines rolled out, scenarios looked like:
Base case (60%): Temporary supply-chain disruptions, modest inflation spike (2–3%), interest rates remain low, "transitory" is the consensus word. Earnings grow 5–8%, valuations hold steady.
Bull case (25%): Strong pent-up demand, rapid growth, inflation moderates, rates stay low. Earnings growth accelerates, valuations expand slightly.
Bear case (15%): Persistent supply-chain problems, inflation doesn't moderate, Fed forced to raise rates aggressively. Earnings compress, valuations compress. Markets down 15–25%.
Actual outcome: Elements of both bull and bear cases materialized, but in a sequence that scenarios didn't anticipate.
2021 (Bull case unfolding): Strong demand, rapid growth, earnings beat expectations. Markets up 28%.
2022 (Bear case, but worse): Inflation didn't moderate. The Fed raised rates 425 basis points in one year. Valuations compressed more than any scenario anticipated. The broad market fell 18%, but growth stocks (which scenarios said would benefit from low rates) fell 50% or more.
2023–2024 (New base case): Inflation moderates, growth slows modestly, Fed stops raising rates. Markets recover to all-time highs.
Why Scenarios Missed This
1. They anchored "normal" inflation to 2% when structural conditions had changed. The base case assumed inflation would prove "transitory"—a temporary spike before reverting to trend. But supply-chain reshoring, demographics, energy policy, and labor dynamics had shifted structural inflation upward. The 2% inflation target became much harder to hit.
2. They didn't model the interaction between inflation and valuation multiples. Scenarios said "if inflation rises, the Fed will raise rates." But they didn't fully articulate the second-order effect: higher inflation and higher rates simultaneously collapse growth stock valuations. A growth stock valued on $1 earnings with a 20% expected growth rate and 2% discount rate risk premium looks much cheaper when the risk premium jumps to 4% and growth slows to 10%. Scenarios often treated valuation effects as secondary, but they were primary.
3. They underestimated Fed response speed. Most scenarios assumed gradual rate increases: maybe 1–2% per year. The actual Fed hiked 425bp in 12 months. This wasn't unprecedented, but it was faster than most base cases.
4. They failed to update as new data arrived. Scenarios built in late 2020 and early 2021 said inflation was transitory. By mid-2021, data clearly showed it wasn't. But many investors didn't rebuild scenarios; they updated them only slightly. The scenarios' base case remained "transitory" even as evidence mounted otherwise.
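Point 2's second-order effect can be made concrete with a minimal finite-horizon DCF. The 8% and 10% discount rates, ten-year horizon, and terminal multiple of 20 are illustrative assumptions for this sketch (mirroring the text's two-point risk-premium jump and the growth slowdown from 20% to 10%), not figures from the chapter.

```python
def dcf_value(eps: float, growth: float, discount: float,
              years: int = 10, terminal_pe: float = 20.0) -> float:
    """Present value of `years` of earnings growing at `growth`, plus a
    terminal value at `terminal_pe` times final-year earnings, all
    discounted at `discount`. Inputs are illustrative assumptions."""
    pv = sum(eps * (1 + growth) ** t / (1 + discount) ** t
             for t in range(1, years + 1))
    terminal = terminal_pe * eps * (1 + growth) ** years / (1 + discount) ** years
    return pv + terminal

low_rate  = dcf_value(eps=1.0, growth=0.20, discount=0.08)   # cheap-money regime
high_rate = dcf_value(eps=1.0, growth=0.10, discount=0.10)   # post-hike regime

print(f"low-rate value:  {low_rate:.1f}")
print(f"high-rate value: {high_rate:.1f}")
print(f"compression:     {1 - high_rate / low_rate:.0%}")
```

Under these assumptions, the same company loses roughly 60% of its modeled value—earnings never fell that much; the discount-rate and growth inputs did the damage, which is the sense in which valuation effects were primary, not secondary.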
What This Taught
The lesson: update scenarios quarterly, and especially after major data surprises. If your scenarios built in December 2021 had inflation moderating in 2022, and it didn't, that's a signal to rebuild them in March 2022. Investors who did this caught the bear case early. Those who didn't remained overexposed to growth.
The second lesson: valuation effects are primary, not secondary. A scenario that says "inflation rises" must explicitly calculate the change in discount rates and growth assumptions, and model the impact on valuation multiples. It's not enough to say "earnings fall because margins compress." You must also say "valuation multiples compress because investors demand higher return premiums."
Tech Disruption, 1999–2003: The Bull Case That Took 20 Years
In 1999, scenarios for Amazon looked like:
Base case (50%): Amazon becomes a major retailer, but margins remain low (1–2%). The company must navigate profitability. Fair value: $8–$12 per share (using 1999 earnings and multiples).
Bull case (30%): Amazon dominates retail, e-commerce becomes 30% of retail, margins expand. Fair value: $15–$25.
Bear case (20%): Amazon faces retail competition, loses to specialized retailers or fails to expand beyond books. Fair value: $2–$5.
Actual outcome: Over 20 years, the bull case happened. Amazon expanded from books to everything, captured majority e-commerce share, expanded internationally, then moved to AWS and became dominant in cloud computing. But the timing was wrong. The bull case played out, but not between 1999 and 2003. It played out between 2010 and 2024.
What Happened in Between
In 2000–2001, the bear case happened first: Amazon stock crashed from $100+ to $6 as the dot-com bubble burst. Investors who had assigned 20% probability to the bear case in 1999 saw that case materialize with extreme rapidity. Amazon looked dead.
But Amazon didn't go bankrupt. Instead, it refined its strategy, stopped pursuing growth at all costs, began generating cash flow (actually turned profitable in 2001), and positioned for long-term dominance. By 2010, the bull case had clearly begun to materialize. By 2024, the bull case had come true with a vengeance: Amazon was worth $1.7 trillion, one of the world's most valuable companies.
Why Scenarios Worked (Eventually) but Failed (In Timing)
1. Scenarios nailed the direction but not the timeline. The bull case for Amazon was correct about what would happen (e-commerce dominance, AWS dominance). It was incorrect about when. The scenario said "10 years" (by 2009). It actually took 20+ years.
2. They didn't account for the "valley of death." Scenarios often jump from current state to end state without modeling the intermediate valley where things look terrible. Amazon in 2001 looked like a failed company. The scenario said "Amazon will dominate retail," but didn't model the phase where Amazon's growth strategy would first require massive losses and then require massive reinvestment before profits materialized.
3. They didn't assign enough probability to the path where the bear case happens first, then the bull case happens. A smarter scenario in 1999 might have been: "40% probability: bear case (crash) followed by bull case (recovery and dominance); 30% probability: bull case directly; 20% probability: bear case (permanent)." This would have acknowledged that downside could temporarily exceed bear case expectations if the underlying bull thesis remained intact.
What This Taught
The lesson: scenario timelines are almost always wrong. Things take longer than expected. This is so robust that it has a name: the "planning fallacy." Your bull case probably will happen, but not in the timeframe you expect. Your bear case might happen first, but the underlying story might persist afterward.
The corollary: don't use scenarios to time trades. Use them to allocate capital to bets you believe in long-term. If you believe the bull case for a company is valid, a temporary move to the bear case (stock crashes) is opportunity, not a signal to exit. Amazon bulls who exited in 2001 missed the best trade of the next 20 years.
Meta-Analysis: When Scenarios Work and When They Fail
Across these four episodes, patterns emerge:
Scenarios are accurate when:
- They correctly identify the direction of change (bull vs. bear)
- They're updated regularly as new evidence arrives
- They don't anchor to irrelevant historical precedents
- They model correlations between variables that move together
- They account for policy response, not just market mechanics
Scenarios fail when:
- They underestimate the magnitude of tail risk (bear cases are too optimistic)
- They don't anticipate regime shifts (when structural assumptions break down)
- They're anchored to the base case with insufficient willingness to shift
- They miss second-order effects (valuation multiples, policy response, correlation shifts)
- They ignore or underestimate the time required for changes to materialize
Real-World Metrics: How Accurate Were Various Forecasts?
Let's measure accuracy explicitly. Define a scenario as "accurate" if the actual outcome (5 years later) falls within the range implied by that scenario.
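That definition of accuracy translates directly into code. The cohort ranges below are hypothetical placeholders, not the chapter's data:

```python
# A literal implementation of the accuracy definition above: a scenario is
# "accurate" for a period if the realized outcome falls inside its range.
# Ranges and the actual outcome below are hypothetical placeholders.

def scenario_hit(low: float, high: float, actual: float) -> bool:
    """True if the actual outcome landed inside the scenario's range."""
    return low <= actual <= high

def accuracy_rate(scenarios: list[tuple[float, float]], actual: float) -> float:
    """Fraction of a cohort's scenario ranges that contained the outcome."""
    hits = sum(scenario_hit(lo, hi, actual) for lo, hi in scenarios)
    return hits / len(scenarios)

# Hypothetical cohort: (low, high) 5-year return ranges for bear/base/bull.
cohort = [(-0.30, -0.20), (0.05, 0.15), (0.20, 0.35)]
print(f"accuracy: {accuracy_rate(cohort, actual=0.10):.0%}")
```

Keeping old scenarios on file and scoring them this way is what turns the retrospectives below from anecdotes into a measurable track record.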
2007 scenarios for 2012 outcomes (Financial crisis cohort):
- Bear case: Markets down 20–30%. Actual: Markets fell roughly 50–57% peak-to-trough, far exceeding the bear case, then recovered most of the decline by 2012. Accurate on direction, far too optimistic on magnitude, and wrong on the recovery.
- Base case: Modest growth. Actual: Growth was 1–2% (slower than base case assumed). Partially accurate.
- Bull case: Strong growth. Actual: Never happened in that timeframe. Inaccurate.
- Accuracy rate: ~40–50%.
2019 scenarios for 2024 outcomes (Pandemic cohort):
- Base case: 2–3% growth, stable valuations. Actual: 2–3% growth (mostly accurate), valuations soared then corrected. Partially accurate on growth, wrong on valuations.
- Bull case: Stronger growth, modest multiple expansion. Actual: Growth was volatile (pandemic, then strong recovery, then slowdown), valuations expanded beyond bull case assumption due to AI. Directionally accurate but wrong on drivers.
- Bear case: Recession, losses. Actual: Happened briefly in 2020, then reversed. Timing wrong.
- Accuracy rate: ~50–60%.
2020 scenarios for 2025 outcomes (Inflation cohort):
- Base case: Transitory inflation, modest growth. Actual: Inflation persisted longer, Fed hiked hard, growth slowed. Inaccurate on inflation, partially accurate on growth slowing.
- Bull case: Strong growth, stable inflation. Actual: Growth was strong in 2021, weak in 2023, then recovering. Wrong on inflation and on timing.
- Bear case: Persistent inflation, rate hikes. Actual: Happened, but faster and harder than the scenario anticipated. Directionally accurate.
- Accuracy rate: ~45–55%.
1999 scenarios for 2024 outcomes (Tech disruption cohort):
- Bull case: Amazon dominates e-commerce and tech. Actual: True, but with 20-year delay and intermediate crisis. Accurate direction, massively wrong timing.
- Bear case: Amazon fails. Actual: True briefly (2000–2001), false long-term. Accurate over a 2–3 year interval, wrong over 20 years.
- Accuracy rate: ~30–40% (correct outcomes, wrong timing).
Overall observation: Scenario accuracy is roughly 40–60% depending on timeframe and how strictly you define "accurate." Longer timeframes (20 years) are less accurate than intermediate horizons (5 years). Scenarios are more accurate for direction than magnitude.
Common Mistakes When Reviewing Scenarios
Claiming accuracy for scenarios that were accidentally right. In 2007, few scenarios explicitly predicted a financial crisis. Those that did were often wrong about the mechanisms (they said "housing crash leads to recession," which was true, but underestimated the systemic risk). Scenarios that predicted recession were accidentally right about the direction but wrong about the magnitude and cause.
Revising scenarios after the fact to match outcomes. "I always thought inflation would persist" becomes a common refrain after inflation persists. But your actual 2020 scenarios said "transitory." Don't revise history. Keep old scenarios and compare them to outcomes honestly.
Using scenario misses to abandon scenario planning entirely. Some investors throw up their hands: "Scenarios don't work, forecasting is impossible, just buy and hold." But scenario planning's value isn't prediction—it's decision discipline. It forces you to articulate beliefs, test them against outcomes, and update. The discipline matters even if the predictions are wrong 50% of the time.
Attributing scenario misses entirely to randomness. Some were random (the pandemic was unpredictable). But many were not. Underestimating Fed response speed, overestimating inflation's transitoriness, not modeling regime shifts—these were analyst failures, not randomness.
FAQ
Q: If scenarios are only 50% accurate, why use them?
A: Because the 50% you get right provides value. The other 50% teaches you about your systematic blindspots. By reviewing your scenarios against outcomes, you learn: do you systematically underestimate downside? Overestimate timing? Miss regime shifts? That feedback loop is worth far more than "just buy and hold."
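The feedback loop this answer describes can be sketched as a signed-error tally over past forecasts. All figures below are hypothetical:

```python
# Sketch of the review loop described above: compare each old base-case
# forecast to the realized outcome and tally signed errors. A persistently
# negative mean error means your forecasts run optimistic (e.g. bear cases
# too mild). All figures below are hypothetical.

forecasts_vs_actuals = [
    # (base-case forecast return, actual return)
    (0.06, -0.15),   # missed a drawdown
    (0.05,  0.04),   # close
    (0.07, -0.02),   # too optimistic again
]

errors = [actual - forecast for forecast, actual in forecasts_vs_actuals]
mean_error = sum(errors) / len(errors)
print(f"mean signed error: {mean_error:+.3f}")
if mean_error < -0.02:
    print("blindspot: forecasts run optimistic; stress bear cases harder")
```

A mean near zero with large individual errors tells a different story (noisy but unbiased) than a consistently negative mean (systematic optimism); the discipline is in keeping the old forecasts to run the comparison at all.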
Q: Should I update my scenarios when evidence contradicts them?
A: Yes, immediately. If your base case said inflation was transitory and it's not, that's a signal to rebuild. You might say: "We were wrong. The new base case is persistent inflation. The new bear case is deflation." This updating is where scenarios provide real value—forcing you to adapt as the world changes.
Q: How do I avoid overconfidence when scenarios match outcomes?
A: Acknowledge luck. If your 2019 scenarios said "moderate growth" and 2024 had moderate growth, that's partly skill and partly luck. The pandemic didn't happen in your base case (it was a bear case tail risk that you underweighted). Your bull case assumptions weren't tested. Be humble about accuracy that came from favorable conditions rather than predictive skill.
Q: Are scenarios less valuable for very long timeframes (20+ years)?
A: They're valuable for different reasons. Long-term scenarios (20 years) are almost always directionally correct (the bull case for a great company's dominance usually comes true), but timing is nearly always wrong. Use long-term scenarios for capital allocation (do I believe in this company's 20-year thesis?) rather than return forecasting (what will it return in the next 3 years?).
Q: Should I trust scenarios built by my financial advisor?
A: Only if your advisor explicitly reviewed past scenarios against outcomes and acknowledged where they were wrong. An advisor who says "here are my 2024 scenarios for 2029" without referencing their 2019 scenarios for 2024 (and admitting where they missed) hasn't learned from history.
Related Concepts
- Building Three-Scenario Models — Master scenario construction, informed by lessons about what makes scenarios succeed or fail.
- Monte Carlo vs. Manual Scenarios — Understand why manual scenarios' transparency makes it easier to review their accuracy over time.
- Aggregating Scenarios to Portfolio — Use scenario analysis to structure portfolios in ways that account for historical forecastable risks.
- Margin of Safety — Learn how margin of safety helps you survive scenario misses.
Summary
Scenario forecasts are accurate about 50% of the time—perhaps better for direction, worse for timing and magnitude. The financial crisis exceeded bear cases. The pandemic surprised with its policy response. The inflation shock persisted longer than expected. The tech disruption took 20 years instead of 10.
But scenarios aren't meant to be perfect. They're meant to be a structured way to think about the future, update that thinking as evidence arrives, and stress-test your portfolio against possible outcomes. The investors who benefited most from scenario analysis weren't those who predicted correctly; they were those who:
- Built scenarios based on evidence and updated them when evidence changed
- Used scenarios to allocate capital to long-term bets they believed in
- Understood when their scenarios were at risk of being wrong (regime shifts)
- Maintained margin of safety for when the bear case exceeded their expectations
History shows that bear cases are almost always too optimistic. Build them more cautiously. History also shows that bull cases often take longer than expected. Don't time entry points based on scenarios. Instead, use scenarios to make informed long-term allocations and accept that the timing will be wrong.
Next
Continue to Summary: Embracing Uncertainty for final reflections on how to use probabilistic thinking in investment decisions.