Common comp analysis pitfalls
Every investor who has built a comparable-company analysis believes it is rigorous and objective. You gather a peer set, calculate median multiples, apply them to your target company, and arrive at a valuation. It feels scientific. In practice, comps analysis is riddled with hidden choices—some explicit, many subconscious—that steer you toward whatever answer you want. The peer set is "curated," the multiple is "adjusted," the outliers are "removed," and at the end you have dressed up your bias in the language of market multiples. This chapter catalogues the most dangerous pitfalls and how to avoid them.
Quick definition: Pitfalls in comps analysis are methodological errors—wrong peer selection, mean vs median confusion, anchoring to outliers, or timing mismatches—that make comps output misleading even when executed technically correctly.
Key takeaways
- The peer set is destiny: a poorly chosen peer set will produce a misleading valuation no matter how carefully you calculate multiples
- Anchoring to a single high or low multiple in the set and excluding it as an "outlier" is confirmation bias dressed as rigor
- Mean and median produce different messages; using one when the other is appropriate hides important dispersion in the peer set
- Comparing a business at cycle peak to peer-set medians built across the cycle guarantees mispricing
- Mixing business models, growth rates, and market segments in a single peer set collapses the entire analysis
Building a defensible peer set
The wrong peer set trap
The single largest source of bad comps valuations is the wrong peer set. And the insidious part is that you can be technically rigorous about calculating the multiples while the foundation is rotten.
A peer set is wrong when:
The businesses are not actually comparable. You want to value Shopify (a SaaS platform enabling third-party sellers to build storefronts). Your peer set includes Wix (a website-builder SaaS), Squarespace (another website-builder SaaS with e-commerce features), PayPal (payments infrastructure), and Stripe (payments processing). Do these belong together? Shopify, Wix, and Squarespace share the same go-to-market (SMB self-serve) and charge based on GMV or transaction volume. PayPal and Stripe are different: they are payments-infrastructure businesses with different unit economics and competitive dynamics. Mixing them distorts the analysis. You should build two separate peer sets: one for seller-SaaS (Wix, Squarespace) and one for payments (PayPal, Stripe). If you must compare across, acknowledge you are mixing business models and adjust multiples explicitly.
The growth trajectories are wildly different. You want to value a 40% revenue-growth SaaS company. Your peer set includes both 50% growth SaaS companies and 5% growth "mature" SaaS companies. The median multiples hide the fact that the peer set spans two economic universes. A 50% growth company may trade at 8x EV/Revenue; a 5% growth company may trade at 2x EV/Revenue. The median of 5x is meaningless for a 40% growth company—it should be compared only to peers growing at similar rates. If the peer set does span different growth rates, segment them, calculate cohort multiples (e.g., peers growing 30–50% trade at median 7x), and apply the appropriate cohort multiple to your target.
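The cohort logic above can be sketched in a few lines. All peer names, growth rates, and multiples below are hypothetical, chosen only to mirror the 50%-vs-5% example:

```python
from statistics import median

# Hypothetical peer data: (name, revenue growth, EV/Revenue multiple).
peers = [
    ("PeerA", 0.50, 8.0), ("PeerB", 0.45, 7.5), ("PeerC", 0.35, 6.5),
    ("PeerD", 0.30, 5.5), ("PeerE", 0.08, 2.5), ("PeerF", 0.05, 2.0),
]

def cohort_median(peers, lo, hi):
    """Median EV/Revenue for peers whose growth falls in [lo, hi)."""
    cohort = [m for _, g, m in peers if lo <= g < hi]
    return median(cohort) if cohort else None

high_growth = cohort_median(peers, 0.30, 0.60)   # the cohort for a ~40%-growth target
mature = cohort_median(peers, 0.00, 0.15)
naive = median(m for _, _, m in peers)           # blends two economic universes

print(f"30-60% growth cohort median: {high_growth}x")
print(f"0-15% growth cohort median:  {mature}x")
print(f"naive blended median:        {naive}x")
```

With this toy data the high-growth cohort median (7.0x) and the mature cohort median (2.25x) straddle the blended median (6.0x), which applies to nobody.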
The market segments are mismatched. You are valuing an enterprise SaaS company selling to large Fortune 500 companies. Your peer set includes mid-market SaaS (serving 50–5000 person companies) and SMB SaaS (serving under 50 person companies). These have radically different sales cycles, churn rates, and unit economics. Enterprise SaaS takes 6–18 months to sell and has low churn; mid-market takes 3–6 months and has higher churn; SMB is self-serve and fast to close but high-churn. Multiples will differ materially. Do not mix them without explicit segmentation.
Company size is unconstrained. You are valuing a $5 billion market-cap company. Your peer set includes both $500 million micro-caps and $200 billion mega-caps. Size matters: mega-cap software trades at lower multiples than emerging-growth software because of lower growth expectations and higher operational stability. A $5 billion target company should be compared to peers in the $2–10 billion range, not a set spanning 400x in market cap. As a rule, include only peers within 1/3 to 3x the target's size.
The business models are fundamentally different. You are valuing a high-margin SaaS business. Your peer set includes a lower-margin marketplace (which takes a commission on transactions) and a lower-margin cloud infrastructure business (which competes on price). These do not belong together. Each business model carries different margin potential and should be valued relative to its own model, not some average across mixed models. If you include them, note the business model and gross-margin differences explicitly.
The solution: before you calculate a single multiple, write down your peer criteria. What growth stage? What market segment? What business model? What geographic focus? Then screen for peers meeting those criteria. Your peer set should be 6–15 names with demonstrable similarity on the axes that matter most.
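Writing the criteria down first can literally mean encoding them as a screen before any multiple is computed. A minimal sketch, with entirely hypothetical company data, combining the model, segment, growth, and 1/3x–3x size criteria from above:

```python
# Hypothetical candidate universe; in practice this comes from a screener.
candidates = [
    {"name": "A", "model": "saas", "segment": "enterprise", "growth": 0.35, "mcap_bn": 6.0},
    {"name": "B", "model": "saas", "segment": "smb",        "growth": 0.40, "mcap_bn": 4.0},
    {"name": "C", "model": "marketplace", "segment": "enterprise", "growth": 0.30, "mcap_bn": 5.0},
    {"name": "D", "model": "saas", "segment": "enterprise", "growth": 0.32, "mcap_bn": 9.0},
    {"name": "E", "model": "saas", "segment": "enterprise", "growth": 0.05, "mcap_bn": 150.0},
]

TARGET_MCAP_BN = 5.0  # $5B target; size band is 1/3x to 3x

def in_peer_set(c):
    return (
        c["model"] == "saas"                       # same business model
        and c["segment"] == "enterprise"           # same market segment
        and 0.25 <= c["growth"] <= 0.50            # similar growth stage
        and TARGET_MCAP_BN / 3 <= c["mcap_bn"] <= TARGET_MCAP_BN * 3
    )

peer_set = [c["name"] for c in candidates if in_peer_set(c)]
print(peer_set)  # B fails segment, C fails model, E fails growth and size
```

The point is not the code but the discipline: the criteria are fixed before any multiple is seen, so you cannot quietly relax them to admit a name that flatters your answer.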
The outlier exclusion trap
You build a peer set and calculate median EV/Revenue as 6.5x. But one company in the set trades at 2.0x (a struggling competitor or a special situation) and another trades at 12.0x (a growth leader). Your instinct: exclude these as outliers, recalculate median as 6.8x, feel confident. You have just committed confirmation bias disguised as rigor.
Here is the danger: that 2.0x company is not an outlier to be excluded. It is a signal. Why does it trade at 2.0x while peers trade at 6x+? Either (a) it is broken or at structural risk, (b) the peer set is wrong and it deserves a lower multiple, or (c) it is a deep value opportunity. Excluding it blinds you to all three. Similarly, the 12.0x company is not inflated fantasy—it may have higher margins, better growth, stronger moats. Excluding it blinds you to the fact that your peer set includes meaningful dispersion.
The right approach: do not exclude outliers. Instead, investigate them. Why is that company trading at 2x? Is it a debt restructuring? Is it losing market share? Is it in a different segment? If it is genuinely in your peer set, the low multiple is information—it may suggest your target company deserves a lower multiple than the median. If it is not, it should not be in the peer set in the first place. Excluding it is lazy. Similarly, investigate the high-multiple outlier. If it has better margins and growth, maybe your target deserves a premium to the median if it has similar traits.
An alternative approach: use quartiles instead of excluding. Report the 25th percentile, median, and 75th percentile valuation. Show the range. This forces you to acknowledge that your peer set has meaningful dispersion and that your target could fall anywhere within it.
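The quartile reporting above is mechanical. A small sketch, with a hypothetical peer set (outliers deliberately retained) and an assumed target revenue figure:

```python
from statistics import quantiles

# Full hypothetical peer set, low and high outliers kept in.
multiples = [2.0, 4.8, 5.5, 6.2, 6.5, 6.8, 7.2, 12.0]

q1, q2, q3 = quantiles(multiples, n=4)  # 25th / 50th / 75th percentiles
target_revenue_mm = 400                 # assumed target revenue, $mm

print(f"25th pct: {q1:.2f}x -> EV ${q1 * target_revenue_mm:,.0f}mm")
print(f"median:   {q2:.2f}x -> EV ${q2 * target_revenue_mm:,.0f}mm")
print(f"75th pct: {q3:.2f}x -> EV ${q3 * target_revenue_mm:,.0f}mm")
```

Presenting the range forces the conversation onto where in the distribution the target belongs, rather than on a single point estimate.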
The mean vs median confusion
You have seven peer-set companies. Their EV/Revenue multiples are: 3.2x, 4.1x, 5.8x, 6.2x, 6.5x, 7.1x, 18.0x. What is the median? 6.2x. What is the mean? 7.27x (the 18.0x outlier pulls the average up). Which should you use?
This sounds like a technical detail, but it affects your valuation. If you use the mean, you are implicitly treating all companies equally, even though one is priced very differently. If you use the median, you are saying the "typical" peer is at 6.2x, and the high outlier is informational but not the base case.
For most peer sets in equity analysis, the median is more defensible. It is less sensitive to outliers and better represents the "typical" comparable company. However, the median can hide important information. If your peer set has six companies at roughly 3–7x and one at 18x, the mean of 7.27x is telling you something: that one high-multiple company is dragging the group up. Why? Is it higher growth, higher margin, better moat? If so, your target might deserve a premium.
The right approach: report both mean and median. And then dig into the dispersion. If the mean and median are close (say, 6.2x mean and 6.0x median), the peer set is relatively tight and you can be confident in your valuation. If the mean is materially higher than the median, the peer set has outliers pulling the average up—investigate why. If the median is materially higher than the mean, low-multiple names are dragging the average down—investigate that too. Dispersion itself is informational.
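Using the seven-company peer set from above, the mean/median gap and a crude dispersion check look like this:

```python
from statistics import mean, median

multiples = [3.2, 4.1, 5.8, 6.2, 6.5, 7.1, 18.0]  # the peer set from the text

print(f"median: {median(multiples):.1f}x")   # the 'typical' peer
print(f"mean:   {mean(multiples):.2f}x")     # pulled up by the 18.0x name

# A simple dispersion flag: if mean and median diverge by more than ~10%,
# identify which names are responsible before picking a multiple.
gap = mean(multiples) / median(multiples) - 1
print(f"mean/median gap: {gap:.1%}")
```

Here the gap is roughly 17%, which is exactly the kind of signal that should send you back to the 18.0x name rather than straight to a valuation.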
The cycle timing trap
You are valuing a cyclical company (an industrial manufacturer, a semiconductor company, an oil major) and want to use comps. You gather peer trading multiples today. You calculate median P/E as 8.5x. You apply it to forward earnings and conclude the company is fairly valued. But what point in the cycle is the company at? And what point in the cycle were the peer multiples measured at?
If your target is at peak-cycle earnings (high profitability, low competition, fat margins) and you compare it to a peer set that was priced during the last cycle trough (when earnings were depressed and multiples were elevated to reflect recovery expectations), you will misprice. Peak-cycle earnings are higher than trough-cycle earnings, but the multiple should be lower because the market is more pessimistic about forward returns from the peak.
Example: a cyclical software-licensing company's revenue and earnings peaked in 2007–2008, just before the financial crisis. A peer-set analysis using 2008 multiples (which were inflated because the market expected recovery) would have suggested the company was cheap in 2010, when in fact earnings were in a sustained secular and cyclical decline.
The right approach: for cyclical companies, calculate historical peer multiples at similar points in the cycle, not just the trailing or forward multiple. Ask: where was my target company last cycle at this point, and what multiple did it trade at? Compare to where its peers traded at a similar point. Or use a normalized earnings approach: calculate earnings across a full cycle (peak to trough), normalize to a mid-cycle level, and apply multiples based on that normalized earnings.
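The normalized-earnings approach can be illustrated with a toy calculation. The EPS history and the 12x mid-cycle multiple below are assumptions for illustration, and a simple full-cycle average stands in for more careful normalization:

```python
from statistics import mean

# Hypothetical full-cycle EPS history for a cyclical company (trough to peak).
eps_by_year = {2017: 2.10, 2018: 3.40, 2019: 4.80, 2020: 1.20, 2021: 3.00, 2022: 5.50}

mid_cycle_eps = mean(eps_by_year.values())   # crude normalization: full-cycle average
peak_eps = max(eps_by_year.values())
normalized_pe = 12.0                         # assumed mid-cycle peer multiple

value_on_peak = normalized_pe * peak_eps     # overstates fair value at the peak
value_on_mid = normalized_pe * mid_cycle_eps # anchored to normalized earnings

print(f"mid-cycle EPS: {mid_cycle_eps:.2f}")
print(f"value on peak EPS:      {value_on_peak:.2f}")
print(f"value on mid-cycle EPS: {value_on_mid:.2f}")
```

In this toy case, applying the same multiple to peak rather than normalized earnings inflates the value by roughly 65%—which is the whole cycle-timing trap in one number.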
The growth-rate assumption error
You build a peer set of companies all trading at 15x P/E. You assume your target company will grow earnings at 15% annually, in line with what you believe the peer set is pricing. You conclude the company is fairly valued. But you have made an assumption about what growth the peer-set multiples are pricing, and you have not validated it.
Here is the problem: a 15x P/E on a company with 15% earnings growth might be cheap, fairly valued, or expensive depending on the cost of equity and long-term growth assumptions embedded in that multiple. If the peer set is pricing 15% growth for 5 years then 8% terminal growth, and your target company can only achieve 15% growth for 3 years then 5% terminal growth, your target should trade at a lower multiple than the peer set.
Alternatively, if the peer set is a collection of companies with mixed growth profiles, the median 15x could be averaging a 25% growth company (which justifies 15x) and a 10% growth company (which is expensive at 15x). Which is your target more like?
The right approach: reverse-engineer the growth assumptions embedded in the peer-set multiples. If the peer set trades at 15x P/E on average, what growth rate is embedded in that multiple? (This requires a quick earnings-growth assumption and a cost-of-equity estimate.) Compare that embedded growth to your target company's growth profile. If they are aligned, use the multiple. If your target has lower growth, apply a lower multiple. If your target has higher growth, maybe it deserves a premium.
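One quick way to back out embedded growth is a one-stage Gordon growth model. This is a deliberate simplification (it treats the multiple as pricing a single perpetual growth rate, not the two-stage profiles discussed above), and the 10% cost of equity and 50% payout ratio below are assumptions for illustration:

```python
def implied_growth(pe, cost_of_equity, payout_ratio):
    """Solve the Gordon growth relation P/E = payout*(1+g)/(r-g) for g.

    Rearranging: pe*r - payout = g*(pe + payout), so
    g = (pe*r - payout) / (pe + payout).
    """
    return (pe * cost_of_equity - payout_ratio) / (pe + payout_ratio)

# Assumed inputs: 15x peer P/E, 10% cost of equity, 50% payout ratio.
g = implied_growth(pe=15.0, cost_of_equity=0.10, payout_ratio=0.50)
print(f"perpetual growth embedded in a 15x P/E: {g:.2%}")
```

With these assumptions the answer is about 6.5% perpetual growth. The number itself matters less than the comparison: if your target cannot sustain growth near what the peer multiple implies, the peer multiple is too rich for it.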
The "adjusted multiple" trap
You identify that your target company has higher margins, lower leverage, or better growth than the median peer. You adjust the multiple upward: "Peers trade at 6x, but our target is higher-quality, so we apply 7x." You have just introduced discretionary art into what was supposed to be a data-driven analysis. How much is "higher quality" worth? 10%? 20%? You have no framework. This is a recipe for anchoring on whatever answer you want.
The other adjustment trap: "Peers trade at 6x EV/Revenue on a trailing basis, but our target is at an inflection where forward revenue will grow 30% next year, so we use its forward multiple of 4x and call it cheap against the peers' trailing 6x." You have just compared a forward multiple to trailing multiples. Are you actually comparing apples to apples? Pick one basis—trailing or forward—and apply it consistently across the peer set and the target.
The right approach: if you believe the target deserves an adjustment to the peer multiple, say so explicitly and quantify it. For example: "Peers trade at a median 6x EV/Revenue. Our target has a 5-percentage-point gross-margin advantage, which should support a 10% premium (6.0x × 1.10 = 6.6x). Given the target's higher growth (40% vs the peer median of 28%), we apply a further 5% premium (6.6x × 1.05 = 6.93x)." Now you have made your assumptions explicit and someone can challenge them. The alternative—subjective adjustment—obscures your reasoning.
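The worked adjustment above reduces to a few explicit lines. The premium sizes (10% for margin, 5% for growth) are themselves judgment calls, but writing them this way makes each one individually challengeable:

```python
base_multiple = 6.0        # peer-set median EV/Revenue

# Each adjustment is stated and quantified so it can be challenged.
margin_premium = 0.10      # 5pp gross-margin advantage -> assumed 10% premium
growth_premium = 0.05      # 40% growth vs 28% peer median -> assumed 5% premium

adjusted = base_multiple * (1 + margin_premium) * (1 + growth_premium)
print(f"adjusted multiple: {adjusted:.2f}x")
```

The output is 6.93x, matching the worked example; the value of the exercise is that a reviewer can dispute the 10% or the 5% directly instead of arguing about an opaque "quality premium."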
The precedent transaction trap
You find that a peer company was acquired two years ago at 8x EV/Revenue. You apply the same multiple to your target company and feel anchored to real market data. But M&A multiples and trading multiples are not interchangeable. M&A multiples embed:
- Control premium (20–30% above pre-announcement trading price)
- Synergy value (the buyer expects to extract cost savings or revenue upside)
- Competition among bidders
- Deal timing and risk premium
A deal done at 8x EV/Revenue does not mean the company should trade at 8x in the market. It might have traded at 5.5x before the deal was announced. And the buyer might have expected $500M in synergies, which represents half the valuation premium. For a standalone company or one without the same synergies, the multiple should be lower.
The right approach: if using precedent M&A comps, back out the control premium (typically 20–30%) and synergy value (estimate as a percentage of EBITDA or revenue). The trading-comparable multiple is more relevant. Alternatively, use M&A multiples as a ceiling, not a base case. If a peer was acquired at 8x EV/Revenue, the trading comp should be lower, and that is your fair-value zone for a similar standalone company.
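Backing out the deal economics can be sketched as follows. The revenue and synergy figures are hypothetical, and dividing the whole EV by (1 + premium) is a simplification that is only exact for an all-equity target (control premiums are quoted against the equity price, not EV):

```python
deal_ev_rev = 8.0          # precedent transaction EV/Revenue from the text
control_premium = 0.30     # assumed top of the typical 20-30% range

# Simplification: apply the premium to the whole EV (exact only if all-equity).
trading_equiv = deal_ev_rev / (1 + control_premium)
print(f"trading-equivalent multiple: {trading_equiv:.2f}x")

# Alternatively, strip an estimated synergy value from the deal EV.
target_revenue_mm = 1_000            # hypothetical target revenue, $mm
synergy_value_mm = 500               # hypothetical capitalized synergy value, $mm
standalone_ev = deal_ev_rev * target_revenue_mm - synergy_value_mm
print(f"ex-synergy multiple: {standalone_ev / target_revenue_mm:.2f}x")
```

Either route lands well below the headline 8x (about 6.15x and 7.5x here), which is the point: the deal print is a ceiling for a standalone comparable, not a base case.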
The survivorship bias trap
You are building a peer set of software companies. You screen for the top 10 largest software companies by market cap. You notice they all trade at premium multiples (10–15x EV/Revenue) and seem to have structural advantages (global scale, strong margins, durable competitive advantages). You conclude that software, as a whole, trades at high multiples. You have just succumbed to survivorship bias.
The largest software companies are the winners of the last 20 years. Smaller, failed, or merged competitors are not in your peer set. A complete peer set would include companies that grew fast and then stalled (whose multiples compressed sharply), companies that were disrupted (acquired at distressed multiples, if at all), and companies that merged (no longer independently listed). The survivors are the tip of the iceberg.
Similarly, if you build a peer set using current constituents of the S&P 500, you miss the companies that fell out of the index due to poor performance. Your peer set is biased toward winners.
The right approach: supplement current trading comps with historical precedent transactions, including failed acquisitions, mergers, and down-round funding. Look at how software companies that decelerated from 40% growth to 10% growth were repriced by the market. Use that information to stress-test your comps.
Real-world examples
Comcast and cable peers (2010–2015). An investor building comps on Comcast might include other cable operators (Charter, Cox) as peers. But cable businesses are cyclical and capital-intensive. During the 2010–2015 period, cord-cutting accelerated and investors repriced the entire sector. Comps anchored to old multiples were misleading. A better approach would have been to (a) segment the business—video, internet, telephone—(b) compare video to media/entertainment, internet to telecom, and (c) stress-test the model for continued cord-cutting. Comps alone could not have captured the structural change.
Amazon as a retailer vs a tech platform (2008–2014). Early comps for Amazon compared it to traditional retailers (Walmart, Target) based on revenue. But Amazon was building a platform (AWS, marketplace) that had higher margins and better growth. Comps to retailers told you Amazon was expensive; comps to cloud/platform companies would have suggested fair value or discount. The peer set was chosen based on revenue size, not business model. A better approach would have been to separate retail and platform business lines for valuation purposes.
Tesla (2015–2017). Comps to traditional automakers (Ford, GM, Toyota) suggested Tesla was wildly overvalued: it traded at a revenue multiple several times that of legacy automakers while generating no meaningful earnings to support a P/E at all. But Tesla was not a traditional automaker—it was a software-enabled automotive company with a very different growth and margin trajectory. Better comps might have been luxury automakers (Ferrari, Porsche) or consumer tech companies (Apple, Nvidia). Still, the rapid growth meant even those comps would have struggled. A hybrid approach combining comps to luxury auto, comps to high-growth tech, and a DCF would have been more robust.
Common mistakes
Mixing cyclical and non-cyclical companies in the same peer set. Do not put semiconductor manufacturers next to software companies without explicitly noting the cyclicality difference.
Assuming the median is the answer. The median is a starting point, not a conclusion. Investigate why peers trade at different multiples.
Excluding data points that do not fit your narrative. If a competitor is trading at a much lower multiple, that is information. Do not discard it.
Calculating multiples on trailing earnings without considering where the company is in the cycle. For cyclical companies, use normalized earnings or multiples at equivalent cycle points.
Using precedent M&A multiples without backing out the control premium and synergies. M&A multiples are ceilings, not comps.
FAQ
How many peers should I include? Minimum six, maximum 15. With fewer than six, the median is hostage to any single name; with more than 15, you are probably including non-comparables and diluting the set.
Should I weight peers equally or by size? Equal weighting is fine for introductory analysis. If you want to weight by size (on the theory that larger companies' multiples are more liquid and representative), do so explicitly.
What if there are no true peers? Build a peer set from adjacent business models or geographies. Or use historical precedent transactions. Or supplement with a DCF. Do not force a bad peer set and present its output as if it were meaningful.
How do I know if my peer set is good? Low dispersion (mean and median are close) suggests a tight, comparable set. High dispersion suggests heterogeneity—investigate whether the set is wrong or whether dispersion itself is meaningful.
Related concepts
- Building the peer set that actually compares — Deep dive into peer-set construction methodology
- Comps vs DCF: when each wins — Understanding the limitations of comps and when to rely on DCF instead
- Valuation multiples checklist — A systematic approach to avoiding these pitfalls
Summary
Comps analysis is a powerful tool, but it is riddled with hidden choices that steer you toward whatever answer you want. The peer set is destiny—choose it carefully, define your selection criteria before you calculate multiples, and investigate why peers trade at different multiples rather than excluding outliers. Mean and median serve different purposes; use both and investigate dispersion. Adjust multiples only when you can quantify the adjustment explicitly. For cyclical companies, time the comparison to the cycle. And remember: comps show you what the market is paying, not necessarily what you should pay. Layer comps with DCF and you will be far more robust.