The Competitive Intelligence Enablement Playbook
How to build a CI program that reps actually use — from mapping your competitive landscape to enabling sales in the moment.
What's inside.
Here's how competitive intelligence usually gets assigned: someone on PMM gets a tap on the shoulder — “Can you own CI?” What that actually means is: build battlecards for every competitor, keep them updated, make sure sales can find them — and this is maybe 20% of your job. Six months in, either the docs are outdated and no one trusts them, or the person owning CI is constantly playing catch-up. The problem isn't the person. It's the model.
The teams that make this work don't try to cover everything. They build a small set of high-quality core assets, the positioning logic that actually matters, and use AI to scale everything else. That means deeper coverage, faster answers, and much less maintenance. That's what this playbook is about.
What we've learned about CI programs that last.
The insight matters more than the asset. Anyone can format a battlecard. What makes it useful is the positioning logic behind it — why you win against this type of competitor, what buyers are actually weighing, which objections are real versus noise. That logic comes from deal data and buyer conversations. AI can help you find it and turn it into content. It cannot replace the PMM who synthesizes it.
Most competitors don't need their own battlecard. If something shows up twice a year, reps don't need a full breakdown — they need a pattern. Treating every competitor as tier 1 is how CI programs collapse under their own weight.
Distribution is where CI programs die. If your battlecards live in a wiki, you don't have a CI program — you have documentation. Real CI shows up where reps are already working: in Slack, in the CRM, inside the deal flow. If they have to go searching for it, most won't.
Map Your Competitive Landscape
The most common mistake in CI is starting with the wrong competitors. You build a battlecard for the competitor your CEO mentioned last quarter and miss the three that are quietly showing up in 40% of your deals.
Start with data, not intuition.
Pull competitor frequency from your CRM
Before you decide which competitors get resources, find out which ones are actually in your deals. Pull your CRM data for the last 6–12 months and ask:
- Which competitors show up most often?
- Which ones are tied to your biggest losses?
- Where are you consistently winning versus losing?
- Does the competitive mix change by segment, deal size, or vertical?
This is your ground truth. It should drive your prioritization, not market noise or brand recognition. If your CRM data is messy — which it usually is — supplement it with call transcripts. Tools like Gong are especially useful for catching competitor mentions that never made it into structured fields.
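If you'd rather script this than click through CRM reports, here's a minimal sketch of the analysis — assuming you can export opportunities to a CSV with competitor, stage, and close_date columns (the column names are illustrative; adjust to your schema):

```python
# Sketch: competitor frequency and win/loss from a CRM export.
# Assumes a CSV with one row per opportunity and columns named
# competitor, stage, and close_date — adjust to your actual schema.
import pandas as pd

deals = pd.read_csv("opportunities.csv", parse_dates=["close_date"])
recent = deals[deals["close_date"] >= pd.Timestamp.now() - pd.DateOffset(months=12)]

# Which competitors show up most often?
frequency = recent["competitor"].value_counts()

# Where are you winning versus losing against each one?
closed = recent[recent["stage"].isin(["Closed Won", "Closed Lost"])]
win_loss = closed.groupby(["competitor", "stage"]).size().unstack(fill_value=0)
win_loss["win_rate"] = win_loss.get("Closed Won", 0) / win_loss.sum(axis=1)

print(frequency.head(15))
print(win_loss.sort_values("win_rate"))
```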
Define your tier 1 threshold
Tier 1 competitors are the ones that show up frequently enough to warrant a named battlecard and ongoing maintenance. A reasonable heuristic: any competitor that appears in more than 10 deals per quarter deserves a named card.
The number will vary by your deal volume. The point is to set a threshold based on frequency — not on how well-known the competitor is or how much noise they're making in the market. A venture-backed startup getting press coverage might show up in 2 deals a quarter. A quiet incumbent might show up in 30. Resource accordingly.
Most teams end up with 5–10 tier 1 competitors. Don't cap it arbitrarily — use the frequency threshold and let the data tell you where to draw the line.
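Continuing the sketch above, the threshold can be applied mechanically — the 10-deals-per-quarter number is just the heuristic from this section, so tune it to your volume:

```python
# Sketch (continuing the frequency analysis above): assign tiers from a
# deals-per-quarter threshold rather than a fixed count of named cards.
TIER_1_DEALS_PER_QUARTER = 10   # the heuristic above — tune to your deal volume
QUARTERS_IN_WINDOW = 4          # matches the 12-month pull

deals_per_quarter = frequency / QUARTERS_IN_WINDOW
tier_1 = deals_per_quarter[deals_per_quarter >= TIER_1_DEALS_PER_QUARTER].index.tolist()
long_tail = deals_per_quarter[deals_per_quarter < TIER_1_DEALS_PER_QUARTER].index.tolist()

print(f"Tier 1 ({len(tier_1)}): {tier_1}")
print(f"Long tail ({len(long_tail)}): {long_tail}")
```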
Group everything else into buckets
Every competitor that doesn't meet your tier 1 threshold goes into a category bucket. This is not a consolation prize. It's a deliberate design choice that makes your CI more scalable and often more useful for reps.
The insight: if you get down to what truly makes your company different, a lot of your competitors start to look the same. They create the same friction in deals. Your response to them is essentially the same. If that's true, one well-built category card is more useful than ten thin named cards with nothing meaningful in them.
The trick to bucketing is to start from the buyer's pain, not the market label — not "what category does this competitor belong to?" but "what problem does this type of competitor create for buyers who are evaluating us?"
How to define your buckets in practice:
- Start with the pain, not the label. Instead of asking "what category is this competitor in?" ask "what problem does this type of competitor create for buyers evaluating us?"
- Anchor on your own positioning first. Buckets are drawn relative to how you win, so you need a clear center before you can group what surrounds it.
- Group competitors by deal patterns: objections, buyer behavior, win/loss reasons.
- Sanity check with someone newer. If they can't quickly classify competitors, your buckets aren't clear yet.
- Keep a running list. Once something is categorized, reps shouldn't have to figure it out again.
3–5 buckets is the right range. Fewer than 3 is too broad. More than 5 usually means you're over-segmenting.
Example buckets:
- Incumbent suite modules
- Spreadsheet & in-house
- Point-tool challengers
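The running list mentioned above can be as simple as a mapping file kept in version control; here's a sketch with illustrative bucket and competitor names:

```python
# Sketch: a running competitor-to-bucket map kept in version control, so a
# competitor only has to be classified once. All names are illustrative.
COMPETITOR_BUCKETS = {
    "incumbent-suite-modules": ["LegacySuite CPQ", "BigVendor Module X"],
    "spreadsheet-and-in-house": ["Excel / Sheets", "Homegrown tooling"],
    "point-tool-challengers": ["NicheTool A", "NicheTool B"],
}

def bucket_for(competitor: str) -> str | None:
    """Return the bucket a competitor has already been assigned to, if any."""
    for bucket, members in COMPETITOR_BUCKETS.items():
        if competitor in members:
            return bucket
    return None  # unclassified — route to the PMM who owns CI
```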
Example prompt for AI with deal context: "Summarize closed-lost deals from the last 6 months grouped by competitor. For each competitor, what were the top objections and what did buyers say they valued about their solution? Then identify which competitors seem to create similar patterns in deals and suggest how they might be grouped."
Where the Insight Comes From
Do this before you build anything. A battlecard is only as good as the thinking behind it. If the inputs are weak, the asset will be too.
The sources that actually matter
Buyer interviews on competitive deals are the highest-signal source available. When a buyer tells you, post-decision, what they were evaluating and why they chose or didn't choose you — that's primary intelligence. It reflects how real buyers perceive real tradeoffs in the context of an actual decision. It's more valuable than anything you'll find on a competitor's website or G2.
This is the source most CI programs don't have systematic access to. Win-loss interviews on competitive deals are the single best input you can feed into your battlecards.
Deal data at scale tells you what patterns look like across dozens of competitive deals. What objections come up most when a specific competitor is in the deal? What win rates do you see when you're displacing an incumbent vs. competing head-to-head? You need volume to see patterns. This is where a structured win-loss program becomes a CI asset.
Rep intelligence is scattered and perishable. Reps hear things in deals — what competitors are saying about you, pricing changes, new features buyers are asking about — that don't make it into any system and evaporate within weeks. Build a lightweight intake process: a Slack channel, a CRM field, a standing agenda item in your weekly sales meeting. Make it easy to surface, not mandatory to document.
External sources — G2 reviews, competitor websites, pricing pages, job postings, press — are useful for context and for catching major changes. They are not useful as the primary source for positioning decisions. A competitor's website tells you what they want buyers to think. Deal data tells you what buyers actually think.
The human-in-the-loop requirement
AI can pull all of these sources together, surface patterns, flag changes, and draft battlecard content. What it cannot do is make the positioning judgment call.
When buyer data says "buyers keep citing our implementation timeline as a reason they chose the competitor" — a human has to decide whether that's a product problem, a sales problem, a messaging problem, or a pricing problem. That decision is the insight. Everything downstream of it — the battlecard copy, the objection handling, the landmine questions — can be AI-assisted.
Humans own the insight. AI scales the output. You are responsible for the positioning logic. Don't hand that off.
Humans
- Set the positioning logic
- Map buyer language to messaging
- Name the real tradeoffs
- Sign off on every update
- Kill content that has gone stale
AI
- Ingests calls, docs, and deals
- Surfaces patterns and flags drift
- Drafts copy from your direction
- Answers reps in Slack and CRM
- Watches monitored competitor URLs
Build the Asset Layer
Now you can actually build. There are two types of assets — one for each tier. They serve different purposes and require different levels of depth.
A note on depth in the age of AI
The old pressure to keep battlecards short and scannable was real when reps were expected to read them directly. That constraint is mostly gone.
Now, your battlecards act as a knowledge base for AI. A rep asks a question in Slack, the AI pulls the relevant answer, and surfaces only what they need in that moment. The rep never reads the full document. That changes how you should think about depth. More detail is not a problem — it's an advantage. You can include nuance, edge cases, and the reasoning behind your positioning.
Think of this less as a one-pager and more as a system your AI can draw from. Your assets should live somewhere your AI can reliably access in a structured, queryable format — clear headers and sections, consistent tagging, markdown, stored where your AI tools can connect to it.
Named battlecards (Tier 1)
A named battlecard is not a feature comparison. Feature comparisons go stale immediately and tell reps almost nothing about how to win a deal. The point is to give reps — and your AI assistant — the positioning logic they need to handle this competitor in a live conversation.
Every named battlecard should answer:
Why do buyers consider them? Not from your perspective — from the buyer's. What legitimate problem does this competitor solve? What's their pitch? If you can't articulate why a rational buyer would choose them, your battlecard is going to read as propaganda and reps won't trust it.
What are buyers actually choosing between? In most deals, the decision isn't "your product vs. their product." It's a set of tradeoffs the buyer is navigating — implementation speed vs. depth, total cost vs. switching risk, proven solution vs. newer approach. Name the real tradeoff.
Where do you win and why? Be specific. "Better product" is not an answer. "Faster time to first insight because we don't require a dedicated implementation team" is an answer. This should come from real deal data and buyer language, not internal assumptions.
Landmine questions — questions a rep can ask early that reveal whether this competitor is in the deal and shift how the buyer evaluates options.
Objection handling — what actually comes up in deals when this competitor is involved. Write responses that sound like something a real person would say.
Where they can't go — what can't they do, or what requires significant workarounds? Be concrete. This is not about saying "we're better" — it's about showing where the tradeoffs break down in real situations.
Category battlecards (Long tail)
A category battlecard gives reps a pattern to recognize and respond to. The rep may have never heard of the specific competitor they're facing. What they need is a framework, not specific intel they don't have.
Every category battlecard should answer:
- What is this type of competitor? A crisp description a rep can absorb in 60 seconds. Main characteristics. How to recognize one.
- Why would a buyer choose them? Articulate the legitimate case before you argue against it.
- Known examples — a running list of competitors that fall into this bucket. Update it as new ones get categorized.
- Key message and proof points — your positioning logic against this category, tied back to your core value props.
- Landmine questions and objection handling — generalized versions that work across the category.
Layer AI on Your Battlecards
Your battlecards are the foundation. They are not meant to cover everything. You will never anticipate every question a rep asks in a live deal, and you don't need to. The goal is to give your AI a strong starting point, then let it fill in the gaps in a controlled way.
Without that structure, things break quickly. If you let AI pull from anywhere, you get a mix of sources — some accurate, some outdated, some just marketing. The output might sound confident, but reps won't trust it for long.
The fix is simple: define where the AI is allowed to look, and in what order.
The source hierarchy
Define this explicitly for your setup. The order matters.
1. Named battlecard — your highest-trust source. Your positioning logic, grounded in deal data and buyer interviews. The AI should lead with this when it exists.
2. Category battlecard — if there's no named card, fall back to the category. This gives the AI a strategic frame. It understands the type of competitor and how you typically win, then fills in specifics using other sources.
3. Explicitly monitored pages — for each tier 1 competitor, define a set of URLs the AI is allowed to search: their documentation, changelog, pricing page, help center. These are curated, authoritative sources. The AI can use them to answer questions about specific features, pricing details, or recent announcements your battlecard may not cover.
4. Scoped web search — the fallback. If something isn't covered in your assets or monitored pages, the AI can search — but only within the competitor's own properties (e.g., docs.company.com, company.com/pricing). Not the open web. This keeps answers grounded while still giving you flexibility.
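What "define this explicitly" can look like in practice — a sketch of a per-competitor configuration, with competitor names, file paths, and URLs as illustrative placeholders:

```python
# Sketch: an explicit, per-competitor source hierarchy for the AI layer.
# Competitor names, file paths, and URLs are illustrative placeholders.
SOURCE_ORDER = ["named_battlecard", "category_battlecard", "monitored_pages", "scoped_web_search"]

COMPETITOR_CONFIG = {
    "Competitor X": {
        "named_battlecard": "battlecards/competitor-x.md",
        "category": "point-tool-challengers",
        "monitored_pages": [
            "https://docs.competitor-x.example.com/changelog",
            "https://competitor-x.example.com/pricing",
        ],
        "search_scope": ["competitor-x.example.com"],  # scoped search stays on their own properties
    },
    "Some Long-Tail Vendor": {
        "named_battlecard": None,   # no named card — the category card is the fallback
        "category": "spreadsheet-and-in-house",
        "monitored_pages": [],
        "search_scope": [],
    },
}
```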
What this prevents
Without a clear hierarchy, AI pulls from everywhere: analyst reports with outdated information, G2 reviews that don't reflect current reality, press coverage that reflects competitor marketing rather than product truth. Garbage sources produce garbage answers that erode rep trust in the whole system.
What this looks like in practice
When a rep asks "what do we say when a prospect is evaluating Competitor X?" the AI works down the hierarchy:
- Is there a named battlecard for Competitor X? If yes, lead with that positioning logic.
- Is Competitor X assigned to a category? Pull the category battlecard for the strategic framework.
- Does Competitor X have monitored URLs configured? Supplement with specific details from those sources.
- Is a scoped web search needed to fill the gap? Use that as a last resort, scoped to their domain.
The rep gets an answer grounded in your positioning, augmented by current specifics, without requiring you to have anticipated every possible question in advance.
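A minimal sketch of that walk-down, assuming the configuration above plus hypothetical helpers (load_card, fetch_pages, scoped_search) for loading content and searching:

```python
# Sketch: walk the hierarchy top-down to assemble context for an answer.
# load_card, fetch_pages, and scoped_search are hypothetical helpers; a real
# implementation would also check whether the question is actually answered
# before falling through to the next source.
def gather_context(competitor: str, question: str) -> list[str]:
    cfg = COMPETITOR_CONFIG.get(competitor, {})
    context: list[str] = []

    if cfg.get("named_battlecard"):                    # 1. named card leads
        context.append(load_card(cfg["named_battlecard"]))
    elif cfg.get("category"):                          # 2. otherwise, the category card
        context.append(load_card(f"battlecards/categories/{cfg['category']}.md"))

    if cfg.get("monitored_pages"):                     # 3. curated, authoritative pages
        context.append(fetch_pages(cfg["monitored_pages"], question))

    if not context and cfg.get("search_scope"):        # 4. last resort: scoped web search
        context.append(scoped_search(question, sites=cfg["search_scope"]))

    return context
```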
Get Intelligence to Reps When It Matters
A battlecard sitting in a wiki isn't competitive intelligence. It's documentation. The difference comes down to timing. Does the insight show up when a rep is in the middle of a deal, or only when they remember to go looking for it? Most of the time, they won't go looking. So distribution matters just as much as the content itself.
The Slack agent as primary distribution channel
Reps already live in Slack. An AI agent that answers competitive questions there removes the navigation cost entirely. No wiki to find, no battlecard to open, no search to run.
The agent should handle questions like:
- "What do we say when a prospect is evaluating [Competitor X]?"
- "We just found out [new competitor] is in a deal — what do we know about them?"
- "Buyer is pushing back on pricing compared to [Competitor]. What's our response?"
- "What's our win rate against [Competitor] in enterprise deals?"
For the agent to be useful, it needs to be grounded in your actual intelligence using the source hierarchy — not generic web search. The category layer is particularly important here: even when a rep asks about a competitor you haven't built a named card for, the agent can give a useful, strategy-grounded answer by drawing on the relevant category battlecard plus scoped web research.
Teach reps how to ask good questions. "Tell me about Competitor X" is a bad query. "What objections do buyers usually raise when evaluating Competitor X for enterprise deals?" is a good one. A short onboarding session with examples pays dividends for months.
One instruction worth adding to your agent configuration: an "on a call" mode. If a rep signals they're mid-conversation, the agent responds in 3 bullets or fewer, leading with the most important thing to say — not background context, not a full battlecard summary. The one sentence that matters most right now.
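Here's a sketch of the Slack side using the slack_bolt SDK in Socket Mode — answer_question stands in for whatever AI layer you've grounded in the source hierarchy, and the "on a call" behavior is passed down as an instruction:

```python
# Sketch: a Slack entry point for the agent using slack_bolt (Socket Mode).
# answer_question is a hypothetical wrapper around your grounded AI layer.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

ON_A_CALL_INSTRUCTION = (
    "The rep is mid-conversation. Respond in 3 bullets or fewer, leading with "
    "the single most important thing to say right now — no background context."
)

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def handle_mention(event, say):
    question = event["text"]
    on_a_call = "on a call" in question.lower()   # simple signal; a slash command also works
    answer = answer_question(
        question,
        extra_instructions=ON_A_CALL_INSTRUCTION if on_a_call else "",
    )
    say(answer, thread_ts=event.get("ts"))

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```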
Connecting to Claude, other AI tools, and workflows via MCP
Your competitive content doesn't have to stay siloed in a single tool. With MCP (Model Context Protocol), your battlecards and deal intelligence can become a data source for the AI tools and workflows your team already uses.
This matters because teams are increasingly building their own AI workflows — for writing outbound emails, preparing for customer calls, drafting proposals, building slides. If your competitive intelligence is available via MCP, it can be embedded in any of those workflows without anyone having to copy-paste from a battlecard.
A few practical examples: a rep building a competitive comparison slide can have their AI tool pull from your approved battlecard content rather than making up positioning. A PMM drafting new messaging can query your deal intelligence to see what buyers have said about specific competitors. A sales leader preparing for forecast review can ask about competitive patterns in a specific segment.
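If you want to prototype this yourself, here's a hedged sketch using the official MCP Python SDK, assuming your approved cards live as markdown files on disk:

```python
# Sketch: expose approved battlecard content as an MCP tool so other AI
# workflows pull from it instead of inventing positioning.
# Assumes cards live as markdown files under ./battlecards/.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("competitive-intelligence")

@mcp.tool()
def get_battlecard(competitor: str) -> str:
    """Return the approved battlecard for a named competitor, if one exists."""
    path = Path("battlecards") / f"{competitor.lower().replace(' ', '-')}.md"
    if path.exists():
        return path.read_text()
    return f"No named battlecard for {competitor}. Check the category battlecards."

if __name__ == "__main__":
    mcp.run()
```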
With Hindsight, the competitive enablement agent, Slack assistant, and deal intelligence are available via MCP — so any Claude-based or other AI workflow your team builds can pull from your CI foundation automatically.
Push vs. pull
Most CI distribution is pull: reps go find what they need. The Slack agent makes pull easier. But proactive push distribution is more reliable.
A few push workflows worth setting up:
New deal competitive alert — when a competitor is logged on a new opportunity above a certain ACV, send the rep a 1-paragraph competitive brief via Slack or CRM notification. They didn't ask for it. It's there before they need it.
Weekly competitive digest for sales leaders — every Monday, a summary of which competitors appeared in deals last week, win/loss rates by competitor, and any notable patterns. Automated, not manually compiled.
Battlecard update notification — when a battlecard is updated, notify the reps who have active deals involving that competitor. Targeted, not a mass announcement.
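A sketch of the first workflow — the new deal alert — assuming your CRM can call a webhook or trigger a function when a competitor is logged; the ACV threshold, Slack webhook, and build_brief helper are all illustrative:

```python
# Sketch: new-deal competitive alert, triggered when the CRM reports a
# competitor on a new opportunity. build_brief is a hypothetical function
# that produces a one-paragraph summary from the named or category card.
import os
import requests

ACV_THRESHOLD = 50_000  # illustrative — only alert on deals worth the interruption

def handle_new_competitive_opp(opportunity: dict) -> None:
    if opportunity.get("acv", 0) < ACV_THRESHOLD:
        return
    competitor = opportunity["competitor"]
    brief = build_brief(competitor)
    requests.post(
        os.environ["SLACK_WEBHOOK_URL"],
        json={"text": f"Heads up — {competitor} is in your new deal "
                      f"'{opportunity['name']}'.\n{brief}"},
        timeout=10,
    )
```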
Keep CI Current
Stale battlecards are worse than no battlecards. A rep who trusts outdated intelligence and uses it in a deal can do more damage than one who says "let me find out and get back to you."
The failure mode of most CI maintenance processes: a quarterly review where a PMM goes through every battlecard and tries to update what they can. This creates a massive effort spike that consistently gets deprioritized, and quarterly is too slow anyway — a lot changes in 90 days.
The better model: automated monitoring that flags changes, AI that suggests updates, and a quarterly review that handles structure and strategy rather than catching up on everything that should have been updated months ago.
Step 1: Set up automated monitoring per competitor
For each tier 1 competitor, define a scheduled monitoring workflow. Don't try to do all competitors at once — set one up, see how it runs, then add more. The sources that matter most: their pricing page, changelog, documentation, and blog.
Most AI tools can be configured to check a URL on a schedule and summarize changes. The goal is to have a workflow that surfaces relevant changes to you via Slack or email without requiring you to go looking.
What you want from a monitoring alert is not just "something changed on competitor.com/pricing" but "the pricing page now shows a new enterprise tier starting at $X — previously there was no enterprise pricing listed." The more specific the alert, the faster you can assess whether it warrants a battlecard update.
Hindsight has this built in — you can configure monitoring workflows per competitor that run on a cadence and feed into your update queue. But even without dedicated CI tooling, you can set this up with a combination of tools your team likely already uses.
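Even a general-purpose scheduler plus a short script covers the basics: fetch the page, diff it against the last snapshot, and summarize only when something changed. A sketch, with summarize_diff standing in for whatever LLM call you use:

```python
# Sketch: scheduled page monitoring. Keeps the last snapshot on disk, diffs
# new content against it, and asks an LLM to summarize only when something
# changed. summarize_diff is a hypothetical wrapper around your LLM of choice;
# in practice, strip the HTML down to text before diffing to cut noise.
import difflib
import hashlib
from pathlib import Path

import requests

def check_page(url: str, snapshot_dir: Path = Path("snapshots")) -> str | None:
    snapshot_dir.mkdir(exist_ok=True)
    snapshot = snapshot_dir / (hashlib.sha256(url.encode()).hexdigest() + ".txt")

    current = requests.get(url, timeout=30).text
    previous = snapshot.read_text() if snapshot.exists() else ""
    snapshot.write_text(current)

    if previous and previous != current:
        diff = "\n".join(
            difflib.unified_diff(previous.splitlines(), current.splitlines(), lineterm="")
        )
        return summarize_diff(url, diff)  # e.g. "pricing page now lists an enterprise tier"
    return None
```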
Step 2: Let AI suggest updates — don't let it write them
When monitoring surfaces a change, or when deal data shows a pattern worth capturing, the right workflow is:
- AI flags the potential update with supporting evidence
- You review it and decide whether it represents a meaningful shift
- If yes, you define the updated positioning logic — that's your job
- AI drafts the updated section based on your direction
- You review and approve before anything changes
This is the human-in-the-loop principle applied to maintenance. The AI handles detection and drafting. You handle judgment. Nothing overwrites your approved content without sign-off.
The reason this matters: an AI that automatically updates your battlecards based on what it finds on competitor websites will eventually surface something inaccurate, out of context, or reflecting competitor marketing rather than deal reality. Your credibility with reps depends on content they can trust.
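One way to make that sign-off concrete is to treat every AI suggestion as a pending record that cannot touch approved content until a human approves it — a rough sketch:

```python
# Sketch: treat every AI suggestion as a pending record. Nothing touches an
# approved battlecard until a human flips the status to "approved".
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SuggestedUpdate:
    battlecard: str             # e.g. "battlecards/competitor-x.md"
    section: str                # e.g. "Objection handling"
    evidence: str               # the monitoring alert or deal pattern behind it
    drafted_text: str           # AI-drafted copy, written from PMM direction
    status: str = "pending"     # pending -> approved | rejected
    created_at: datetime = field(default_factory=datetime.utcnow)

def apply_update(update: SuggestedUpdate) -> None:
    if update.status != "approved":
        raise ValueError("Refusing to change approved content without sign-off.")
    # ...write update.drafted_text into the battlecard section...
```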
Step 3: Quarterly review — structure and strategy, not catch-up
With automated monitoring handling the tactical layer continuously, your quarterly review becomes a short, focused session on structural questions:
- Has any long-tail competitor crossed your frequency threshold and earned a named card?
- Has any tier 1 competitor's deal frequency dropped enough to reconsider their status?
- Are there new competitive patterns emerging that warrant a new bucket?
- Is the core positioning logic for any named competitor still accurate?
- Are there new URLs to add for competitors you've started tracking more closely?
This review should take 2 hours, not a week. Because the continuous layer has been handling updates, you're not trying to rebuild everything from scratch. You're adjusting the system.
The last question to ask yourself: is this getting more manageable or less manageable over time? A well-maintained CI system should get easier to run as it matures. If it's getting harder, something in the setup is wrong — either tier 1 is too broad, monitoring is generating too much noise, or the distribution layer isn't working and you're compensating with manual effort.
Why Programs Fail
CI programs fail in predictable ways. Here are the four most common failure modes.
Too many Tier 1 cards.
Signs: 20+ named cards. Most haven't been updated in 6+ months. Maintenance consumes PMM time and nothing is actually current.
Root cause: No frequency threshold. Battlecards were created on request rather than driven by deal data.
Fix: Pull the last 6 months of deal data and apply a frequency threshold. Archive or convert to bucket cards any competitor that doesn't meet it. Redirect maintenance time to the ones that actually matter.
Built from the wrong sources.
Signs: Reps say battlecards don't reflect what they're hearing in deals. Buyers raise objections that aren't covered. Win rates aren't improving despite updated cards.
Root cause: Battlecards were built from external sources — competitor websites, G2, analyst reports — rather than deal data and buyer intelligence. They reflect what competitors say about themselves, not what buyers think when choosing.
Fix: Audit your battlecard sources. For each key claim, ask: is this grounded in what buyers have actually said, or in what the competitor says? Replace external-source claims with deal data and buyer interview evidence.
Reps aren't using it.
Signs: Battlecards are up to date. Reps still lose competitive deals they should win. Post-loss analysis shows reps weren't aware of relevant intelligence.
Root cause: Distribution failure. Intelligence lives somewhere reps don't naturally go, or in a format that requires too much reading mid-deal.
Fix: Audit where reps actually go for competitive help — usually Slack or a senior rep. Meet them there. Set up the Slack agent. Use push workflows to get intelligence to reps before they need to ask.
No feedback loop from deals.
Signs: The CI program produces output but doesn't improve over time. The same objections keep showing up without better answers. Win rates against key competitors aren't moving.
Root cause: No mechanism for deal outcomes to feed back into the CI assets. Battlecards get updated based on monitoring but not based on what's actually happening in deals.
Fix: Close the loop. Structured win-loss interviews on competitive deals. Deal analysis that flags competitive patterns automatically. Rep debriefs after significant competitive losses. The CI program should get smarter with every deal, not just every quarter.
Putting It Together
The CI programs that work at scale have a common structure.
A small, well-maintained asset layer — named battlecards for tier 1 competitors, category cards for the long tail — built on positioning logic that came from real deal data, not competitor marketing.
A defined source hierarchy that tells your AI assistant where to look and in what order — so reps get answers grounded in your approved content, supplemented by curated sources, without garbage from the open web contaminating the context.
A distribution model built around the moment of need — Slack, CRM, proactive alerts, MCP connections to other tools your team uses — not a wiki that reps have to remember to visit.
A maintenance model that runs continuously in the background rather than requiring a heroic quarterly effort — automated monitoring, AI-suggested updates, human sign-off on anything that changes.
The goal isn't comprehensive coverage of every competitor. It's a system where the intelligence you actually need is current, accessible, and improving over time — without requiring a full-time analyst to keep it running.
The order that works
- Pull deal data and set your tier 1 frequency threshold.
- Map the competitive landscape — named competitors, substitutes, no-decision.
- Build category buckets before you build named cards.
- Source your insight from deal data and buyer interviews first, external sources second.
- Build your named battlecards with positioning logic, not feature comparisons.
- Define your AI source hierarchy and configure it explicitly.
- Set up the Slack agent and push workflows before you announce the program.
- Configure automated monitoring for tier 1 competitors.
- Run the first quarterly review 90 days in — adjust the tier structure, not just the content.
Start Now
Ready to see your competitive position in your own deal data?
Connect your CRM and get your first competitive deal analysis within hours. No setup fees.