Documenting Your Model
A spreadsheet without documentation is a black box. Six months later, you won't remember why you chose 12% growth (was it management guidance? peer average? historical trend?), and anyone reviewing your work will wonder whether your assumptions are grounded or arbitrary. Disciplined documentation transforms a clever one-off calculation into a repeatable, defensible process.
Quick Definition
Model documentation is a comprehensive record of every assumption in your valuation—its value, justification, data source, and date of validation. This includes not just the numbers but the reasoning: historical precedent, peer comparisons, management guidance, industry reports, and the explicit trade-offs you made. Documentation answers "Why 12% growth, not 10%?" with data and logic, not intuition.
Key Takeaways
- A model you can't explain to a skeptic is a model you don't understand well enough to act on
- Structured documentation (assumption register, source tracker, change log) makes audits fast and refinements traceable
- Separating point estimates from ranges (plus historical data validation) shields you from false precision
- Clear labeling (input cells, formula cells, output cells) prevents accidental overwrites and formula errors and makes models far easier to audit
- Version control (date stamps, "as of" dates, assumption change tracking) lets you walk back mistakes and learn from past errors
Building an Assumption Register
Create a separate sheet in your model called "Assumptions" or "Model Inputs." This is your single source of truth.
Structure:
| Category | Input | Value | Unit | Justification | Source | Data | Validation |
|---|---|---|---|---|---|---|---|
| Revenue | Growth Yr 1 | 12% | % | Mgmt guidance midpoint | Oct 2024 earnings call | "We expect 10–14% growth" | Peer average 11%, so 12% is conservative |
| Revenue | Growth Yr 2–5 | 10% | % | Deceleration toward industry | Historical company analysis | Avg past 5 yrs: 9.8% | Industry long-term: 8–12%, so 10% reasonable |
| Margin | EBIT Margin Yr 1–3 | 22% | % | Current run-rate | Company quarterly filings (last 4 Qs) | Trailing twelve months: 21.8% | Peer median: 23%, company at low end but trending up |
| Margin | EBIT Margin Yr 4–5 | 24% | % | Peer average, scale leverage | Comparable companies analysis | Peers: 22–26%, median 24% | Historical peak: 24% in 2019, likely repeatable |
| Margin | Terminal EBIT Margin | 25% | % | Mature-state competitive equilibrium | Peer analysis + industry structure | Median 5-year peer EBIT: 24.5% | Assume modest premium to peers (scale, brand) |
| Tax | Effective Tax Rate | 21% | % | Blended statutory | Company 10-K, schedule J | Federal 21% + 1% state | Rate stable; no one-time benefits |
| CapEx | CapEx % of Revenue | 3.5% | % | Asset-light model | Historical CapEx/Revenue | Trailing 3-yr avg: 3.4% | Consistent with SaaS archetype; no acceleration |
| WC | Working Capital Change | 1% | % of growth | Normal for recurring revenue | Historical analysis | Change as % of revenue growth: 0.8–1.2% | Conservative; reflects customer collections |
| Discount | WACC | 8.2% | % | CAPM + after-tax cost of debt | Calculated below; see WACC tab | Risk-free rate (10Y Tsy) 4.2%, Beta 1.1, MRP 6% | Within historical range 7–9% |
| Terminal | Terminal Value Method | Gordon Growth | -- | Perpetuity; stable growth | Standard for mature firms | N/A | Alternative: Exit multiple = 18x EBITDA checks to ~$65/sh |
Columns Explained:
- Category: Revenue, Margin, CapEx, Tax, Discount, Terminal (groups related inputs)
- Input: Specific line item (Growth Yr 1, EBIT Margin Yr 4–5)
- Value: The number you plugged into the model (12%, 22%, etc.)
- Unit: Percent, dollars, ratio
- Justification: Why this number? (Management guidance, peer benchmark, historical average, forward estimate)
- Source: Where you got the data (earnings call transcript, company 10-K, Bloomberg, CapitalIQ, analyst report)
- Data: The actual supporting numbers (e.g., "Peers 22–26%, median 24%")
- Validation: Cross-check from another angle; does the number pass the sniff test? (Historical range, peer outlier check, management credibility)
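The register above is just structured data, so it can also live in code. Below is a minimal Python sketch of the same table; the field names mirror the columns, and the two sample rows are taken from the register. A structured register makes automated completeness checks (every assumption has a source and a validation) trivial.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One row of the assumption register (field names mirror the columns above)."""
    category: str       # Revenue, Margin, Discount, ...
    input_name: str     # specific line item
    value: float        # the number plugged into the model
    unit: str           # "%", "$", ratio
    justification: str  # why this number
    source: str         # where the data came from
    data: str           # the actual supporting numbers
    validation: str     # cross-check from another angle

register = [
    Assumption("Revenue", "Growth Yr 1", 0.12, "%",
               "Mgmt guidance midpoint", "Oct 2024 earnings call",
               '"We expect 10-14% growth"', "Peer average 11%, so 12% is conservative"),
    Assumption("Discount", "WACC", 0.082, "%",
               "CAPM + after-tax cost of debt", "WACC tab",
               "Rf 4.2%, Beta 1.1, MRP 6%", "Within historical range 7-9%"),
]

# Completeness check: flag any assumption missing a source or a validation note.
undocumented = [a.input_name for a in register if not a.source or not a.validation]
assert not undocumented, f"Undocumented assumptions: {undocumented}"
```

The same check can run on every model save, so an assumption can never silently enter the model without a cited source.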
Diagram: Documentation Hierarchy
Color-Coding and Cell Labeling
Professional models use consistent color conventions:
- Light blue: Input cells (you enter data here; formulas never touch these)
- Light yellow: Formula cells (calculations; you don't edit directly)
- Light green: Output cells (intrinsic value, key results; what you report)
- Gray: Disused cells or scratchwork
Excel Example:
B3: 0.12 [Light blue: input cell — revenue growth rate, entered by hand; no formula]
B4: =B2 * (1 + B3) [Light yellow: formula cell — Year 1 revenue derived from prior revenue in B2 and the growth input in B3]
B50: =D27 / B6 [Light green: output cell — intrinsic value per share, equity value over diluted shares]
Cell Naming: Instead of using $B$50, use meaningful names:
IntrinsicValuePerShare, RevenueGrowthYr1, TerminalWACC
This makes formulas readable: =SUM(DiscountedCashFlows) / SharesOutstanding instead of =SUM($D$15:$D$24) / $B$6
Formula Documentation
For complex formulas, add comments:
' WACC Calculation
' Cost of Equity = Risk-free rate + Beta * Market Risk Premium
' WACC = (Equity Weight * CoE) + (Debt Weight * After-tax CoD)
=RiskFreeRate + Beta * MarketRiskPremium
=(E_Weight * CoE) + (D_Weight * CoD_AfterTax)
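The two commented formulas above translate directly into code. Here is a minimal Python sketch of the same calculation, using the register's inputs (risk-free 4.2%, beta 1.1, MRP 6%); the capital-structure weights and pre-tax cost of debt are illustrative assumptions chosen so the blended rate lands near the register's 8.2%.

```python
def cost_of_equity(risk_free: float, beta: float, mrp: float) -> float:
    """CAPM: CoE = risk-free rate + beta * market risk premium."""
    return risk_free + beta * mrp

def wacc(e_weight: float, coe: float,
         d_weight: float, cod_pretax: float, tax_rate: float) -> float:
    """WACC = (equity weight * CoE) + (debt weight * after-tax cost of debt)."""
    return e_weight * coe + d_weight * cod_pretax * (1 - tax_rate)

coe = cost_of_equity(risk_free=0.042, beta=1.1, mrp=0.06)   # 10.8%
# Assumed: 60/40 capital structure, 5.5% pre-tax cost of debt, 21% tax rate.
w = wacc(e_weight=0.60, coe=coe,
         d_weight=0.40, cod_pretax=0.055, tax_rate=0.21)    # ~8.2%
```

Documenting the calculation as named inputs rather than a hardcoded 8.2% means a change in the risk-free rate flows through automatically, and the change log can cite exactly which input moved.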
Create a separate "Formulas" sheet listing each calculation:
| Formula Name | Location | Calculation | Notes |
|---|---|---|---|
| Free Cash Flow | D15:D24 | NOPAT + D&A - CapEx - Δ WC | NOPAT = EBIT * (1 - Tax Rate) |
| Terminal Value | D25 | FCF_Terminal * (1+g) / (WACC - g) | Gordon growth; g = 2.5% |
| Enterprise Value | D26 | PV(FCF Yr 1–10) + PV(Terminal Value) | Discounted at WACC |
| Equity Value | D27 | EV - Net Debt | Net Debt = Total Debt - Cash |
| Intrinsic Value per Share | D28 | Equity Value / Diluted Shares Outstanding | Includes in-the-money options (dilution) |
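The formula index above chains together in a fixed order: FCF, terminal value, enterprise value, equity value, per-share value. The sketch below implements that chain in Python. The 8.2% WACC and 2.5% terminal growth come from the tables above; the cash flows, net debt, and share count are illustrative placeholders, not the model's actual figures.

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Present value of year-1..N cash flows discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

def terminal_value(fcf_terminal: float, g: float, wacc: float) -> float:
    """Gordon growth: TV = FCF_terminal * (1 + g) / (WACC - g)."""
    return fcf_terminal * (1 + g) / (wacc - g)

wacc_rate, g = 0.082, 0.025
# Illustrative: ten years of free cash flow ($M) growing 10% off a $120M base.
fcfs = [120 * 1.10 ** t for t in range(1, 11)]

tv = terminal_value(fcfs[-1], g, wacc_rate)
enterprise_value = npv(wacc_rate, fcfs) + tv / (1 + wacc_rate) ** 10
equity_value = enterprise_value - 300      # less net debt ($M, assumed)
per_share = equity_value / 50              # diluted shares outstanding (M, assumed)
```

Because each step is a named intermediate, the "Formulas" sheet's Location column maps one-to-one onto a variable, which keeps the spreadsheet and its documentation in sync.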
The Change Log
Maintain a log of every model version and what changed:
| Version | Date | Change | Reason | Impact on Valuation |
|---|---|---|---|---|
| 1.0 | 2024-10-15 | Initial DCF build | Q3 earnings release | Base case: $58/sh |
| 1.1 | 2024-10-22 | Raised Yr 1–2 growth to 13% | Oct 20 earnings call; raised guidance | Fair value → $62/sh (+6.9%) |
| 1.2 | 2024-11-05 | Margin assumptions: 22% → 21% Yr 1–3 | 3Q results showed margin compression vs. expectations | Fair value → $60/sh (-3.2%) |
| 1.3 | 2024-11-15 | Terminal growth 2.5% → 2.3%; WACC +40bps | Risk-free rate rose 60bps (10Y Tsy 4.2% → 4.8%); de-rating | Fair value → $54/sh (-10%) |
This log is crucial. When you're reviewing a valuation 6 months later, you can see exactly what changed and why the value moved.
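If the change log lives outside the spreadsheet, a few lines of code can enforce its shape. This is a minimal sketch, assuming a simple append-only list of dicts; the field names mirror the table's columns and the sample entry is row 1.1 above.

```python
import datetime

changelog: list[dict] = []

def log_change(version: str, change: str, reason: str, impact: str) -> None:
    """Append one row to the append-only change log, stamped with today's date."""
    changelog.append({
        "version": version,
        "date": datetime.date.today().isoformat(),
        "change": change,
        "reason": reason,
        "impact": impact,
    })

log_change("1.1", "Raised Yr 1-2 growth to 13%",
           "Oct 20 earnings call; raised guidance", "Fair value -> $62/sh (+6.9%)")
```

Append-only is the point: old entries are never edited or deleted, so the trail of how the valuation moved stays intact.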
Source Documentation
For each assumption, record the source file and page:
- Earnings call transcript: "10-21-24 Goldman Sachs Conference Call, Minute 23:45, CEO quote"
- 10-K filing: "FY 2023 10-K, Item 1.A (Risk Factors), page 12"
- Bloomberg terminal: "MSFT Historical Snapshot, Oct 22, 2024; Beta 1.12; 3-year monthly returns"
- CapitalIQ: "Comparable Companies: EV/EBITDA multiples, Oct 22, 2024 snapshot; peer list in Appendix A"
- Historical company data: "Historical Free Cash Flow computed from 10-Ks FY2019–2023, shown on 'Historical Data' tab"
This makes it trivial to update. When the company reports new results, you know exactly which cells to revisit.
Sensitivity Documentation
Include a summary of sensitivity ranges in your documentation:
| Input | Low Case | Base | High Case | Range | Notes |
|---|---|---|---|---|---|
| Revenue Growth Yr 1 | 9% | 12% | 15% | 6% | Mgmt guidance 10–14%; plus/minus 2% |
| Terminal EBIT Margin | 22% | 25% | 27% | 5% | Peer range 22–26%; 25% is median |
| WACC | 7.5% | 8.2% | 9.0% | 1.5% | CoE 7.5–9.5%, CoD shifts 50–100bps with leverage |
| Resulting Valuation | $48 | $58 | $71 | $23 | Spread = 47% from low to high |
This shows your margin of safety at a glance. If the stock trades at $65, your base case of $58 implies roughly 11% downside to fair value; only the high case ($71) offers upside (~9%). If new information pushes you to the low case ($48), you're roughly 26% below the market price. Is that loss tolerable for your position size?
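The margin-of-safety arithmetic is worth making explicit. This short Python sketch computes, from the table's fair values and an assumed $65 market price, the return to fair value in each scenario and the low-to-high valuation spread.

```python
# Fair values from the sensitivity table; market price is the assumed $65.
cases = {"low": 48.0, "base": 58.0, "high": 71.0}
market_price = 65.0

# Return from today's price to each scenario's fair value.
returns = {name: fv / market_price - 1 for name, fv in cases.items()}

# Low-to-high valuation spread (how wide the plausible range is).
spread = cases["high"] / cases["low"] - 1
```

Here `returns["base"]` is about -11% and `returns["low"]` about -26%, which is the position-sizing question the table is meant to surface.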
Real-World Example: Documented Model Structure
Sheet: "Cover"
- Company name, model date, preparer, version
- Executive summary: Fair value, price target, recommendation
Sheet: "Assumptions"
- All inputs with justifications (as shown above)
Sheet: "Financials" (3-statement model)
- P&L, Balance Sheet, Cash Flow, Year 1–10
- All formulas clearly labeled (blue for input, yellow for calc)
Sheet: "DCF Calculation"
- Free cash flows derived
- Terminal value methods
- Enterprise value, equity value, intrinsic value per share
Sheet: "Sensitivity"
- Data tables for 1- and 2-variable scenarios
- Summary stats (median, std dev, percentiles)
Sheet: "Historical Data"
- Last 10 years of actual financials (from 10-Ks)
- Peer comparables and multiples
- Validation of assumptions against history
Sheet: "Formulas"
- Index of all complex formulas
- Explanation of calculation logic
Sheet: "Changes"
- Change log with version history
- What changed, why, impact on valuation
Sheet: "Sources"
- URL links, file paths, document references
- Dated citations (so you know when data was pulled)
Common Mistakes
Writing Vague Justifications: "12% growth" with no reasoning is worse than no documentation. Write: "Management guided 10–14% growth on Oct 21 call; I used 12% as conservative midpoint, 100 bps below 5-year median. Peers average 11%."
Not Separating Inputs from Calculations: If a cell looks like an input but contains a formula, future edits will break your model. Use color-coding and cell names to make this crystal clear.
Forgetting the Data Source: "Margin assumption 22%" is useless if you can't remember whether that came from management, historical data, or peer average. Always cite: "10-K average FY2019–2023: 21.8%; adjusted for one-time items: 22.0%."
Skipping Validation: Document not just your assumption but how you validated it. "Growth 12%: Mgmt guidance 10–14% (conservative midpoint). Check: Peers 8–15%, company historically 9%, industry CAGR 7%. So 12% reasonable but not stretched."
Never Updating Documentation: If you change an assumption from 12% to 13% growth, update the Assumptions sheet immediately. A model where the docs don't match the formulas is worse than no docs at all.
Assuming Future-You Will Remember: You will not. Document like you're explaining to someone hostile to your thesis who wants to poke holes. Be thorough.
FAQ
Q: How much documentation is too much? A: Enough to explain every assumption to a skeptic. Usually: Assumptions tab + 1-page narrative + change log. For peer review: add Formulas tab + Sources tab.
Q: Should I document my biases? A: Yes. Write: "I'm bullish on management's ability to expand margins; this is subjective. Conservative case assumes no margin expansion. Compare both."
Q: How do I document uncertainty? A: Use ranges. "Growth: 10–14% (mgmt guidance); I modeled 12% base, +1% upside case, -2% downside case." Makes uncertainty explicit.
Q: Can I use a shared spreadsheet template? A: Yes, standardize across your team. One template for SaaS, one for industrial, etc. Include sample documentation showing your standard of rigor.
Q: What if I discover an assumption was wrong? A: Create a new version. Document the change in the log. Never hide old work; let the trail show how your thinking evolved.
Related Concepts
- Model Audit Trail: Detailed record of every formula and its purpose
- Assumption Validation: Testing assumptions against historical data and peer benchmarks
- Scenario Analysis: Discrete cases (Bull/Base/Bear) with explicit assumption variations
- Version Control: Git or similar tracking changes over time
- Peer Review: External review of model structure, assumptions, and outputs
Summary
The difference between a professional valuation and a lucky guess lies in documentation. A well-documented model answers every question a skeptic would ask: Where did this number come from? Does history support it? How does it compare to peers? What happens if I'm wrong? How did you arrive at this conclusion?
Documentation transforms a spreadsheet from a hidden artifact into a transparent, auditable, defensible process. It lets you walk back mistakes, learn from past errors, update efficiently, and—most importantly—explain your work to others (or to yourself six months later).
If you can't explain an assumption in writing with supporting data, you shouldn't be using it in a model.
Next Steps
Your model is now documented, auditable, and defensible. The final test: Backtesting Historical Accuracy shows you how to validate whether your modeling process is genuinely predictive or just fitting the past. Test your assumptions against reality before you trust them with real money.