Last quarter, I walked a hospitality group through their first AI visibility audit. They were confident going in. Strong Google rankings, solid review scores, a well-known brand in their market. Then we ran 40 prompts across ChatGPT, Claude, Perplexity, and Gemini. Their brand appeared in seven. A boutique competitor with half their properties showed up in twenty-two.
That gap did not exist because the competitor had better rooms or better marketing. It existed because the competitor's content and third-party signals happened to align with what AI platforms pull from when generating recommendations. The hospitality group had no idea because they had never measured it.
Why the Audit Comes First
Most brands skip straight to optimization without knowing where they actually stand. That means investing in fixes that may not address real gaps, with no way to measure whether anything worked. Research from AirOps found that only 30% of brands remain visible in consecutive AI-generated answers, and just one in five sustains visibility across five repeated runs of the same prompt. That inconsistency makes structured baseline measurement essential; a single spot-check tells you almost nothing.
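You can see this instability for yourself before committing to a full audit: re-run a single prompt a handful of times and count how often your brand persists. Here is a minimal sketch; the `query_fn` argument is a stand-in for whatever API client or monitoring tool you actually use.

```python
from typing import Callable

def brand_persistence(query_fn: Callable[[str], str], prompt: str,
                      brand: str, runs: int = 5) -> float:
    """Fraction of repeated runs of the same prompt that mention the brand.

    query_fn is a stand-in for your own API client or monitoring tool:
    it takes a prompt and returns the platform's response text.
    """
    hits = sum(brand.lower() in query_fn(prompt).lower() for _ in range(runs))
    return hits / runs

# Per the AirOps finding, scores below 1.0 are the norm, not the exception.
```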
Step One: Build Your Prompt Library
Your audit is only as useful as the prompts you test, and the biggest mistake is testing branded queries exclusively. As the Wellows audit checklist notes, appearing for branded queries is table stakes; the real test is non-branded queries, where users are looking for solutions rather than for your company by name.
Build a library of 25 to 40 prompts across three categories. Category prompts are broad: "best enterprise device management solutions" or "top luxury hotels in Napa Valley." Comparison prompts pit options against each other: "Company A vs. Company B for enterprise procurement." Problem-solution prompts mirror actual buyer questions: "how do I manage Apple devices across a distributed workforce."
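A plain mapping from category to prompts is enough structure to start with. The entries below are illustrative placeholders, not a recommended set:

```python
# Prompt library sketch: entries are illustrative placeholders.
PROMPT_LIBRARY: dict[str, list[str]] = {
    "category": [
        "best enterprise device management solutions",
        "top luxury hotels in Napa Valley",
    ],
    "comparison": [
        "Company A vs. Company B for enterprise procurement",
    ],
    "problem_solution": [
        "how do I manage Apple devices across a distributed workforce",
    ],
}

total = sum(len(prompts) for prompts in PROMPT_LIBRARY.values())
print(f"{total} prompts in library")  # aim for 25 to 40 before testing
```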
Pull language from real sources. Google Search Console shows the queries driving impressions. Sales call recordings reveal how prospects phrase their needs. RevenueZen recommends keeping prompt phrasing natural: if you would not ask a coworker the question that way, do not use it.
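If you want to pull that query language programmatically, the Search Console API exposes the same data as the web UI. A minimal sketch, assuming `creds` is an already-authorized Google API credentials object with access to your verified property:

```python
# Pull top query language from Google Search Console.
# Assumes `creds` is an authorized google-auth credentials object.
from googleapiclient.discovery import build

service = build("searchconsole", "v1", credentials=creds)
response = service.searchanalytics().query(
    siteUrl="sc-domain:example.com",   # your verified property
    body={
        "startDate": "2025-01-01",
        "endDate": "2025-03-31",
        "dimensions": ["query"],
        "rowLimit": 200,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], int(row["impressions"]))
```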
Step Two: Test Across Platforms
Run every prompt across at least four AI platforms: ChatGPT, Claude, Perplexity, and Gemini. Each draws from different data sources and applies different logic to determine which brands surface. A brand that appears consistently on Perplexity might be invisible on Claude. Testing on a single platform gives you a fragment, not a baseline.
For each prompt and platform combination, record five data points: whether your brand appears, its position in the response, the language AI uses to describe you, which competitors appear alongside you, and what sources the platform cites. Keep testing conditions consistent: use the same account settings, test at similar times of day, and avoid prompts that reference browsing history.
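A flat record per prompt-platform pair keeps the scoring in Step Three trivial. One way to capture the five data points, sketched as a Python dataclass:

```python
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    """One row per prompt-platform combination: the five data points."""
    prompt: str
    category: str            # category / comparison / problem_solution
    platform: str            # chatgpt / claude / perplexity / gemini
    brand_appears: bool
    position: int | None     # rank within the response, None if absent
    framing: str             # verbatim language used to describe the brand
    competitors: list[str] = field(default_factory=list)
    sources_cited: list[str] = field(default_factory=list)

# Example row (values are illustrative):
record = AuditRecord(
    prompt="top luxury hotels in Napa Valley",
    category="category",
    platform="perplexity",
    brand_appears=True,
    position=3,
    framing="a strong option for business travelers",
    competitors=["Competitor A", "Competitor B"],
    sources_cited=["tripadvisor.com", "cntraveler.com"],
)
```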
Step Three: Score Your Visibility
Calculate your visibility score: the percentage of prompts where your brand appeared in at least one platform's response. If you tested 40 prompts and appeared in 10, that is a 25% visibility score. Simple, but powerful as a tracking metric.
Then go deeper by prompt category. You may find you appear in 60% of comparison prompts but only 5% of non-branded category prompts. Search Engine Journal's analysis found that brands typically achieve citation visibility for a small percentage of relevant queries, with dramatic variation by funnel stage. Knowing where your gaps concentrate is more actionable than knowing your aggregate number.
Score your competitors using the same prompt set. If a competitor holds 45% visibility to your 15%, that 30-point gap represents every prompt where a buyer heard their name instead of yours.
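All three numbers (aggregate score, category breakdown, and competitor share) fall out of the same records. A sketch building on the AuditRecord rows from Step Two:

```python
from collections import defaultdict

def visibility_score(records: list[AuditRecord]) -> float:
    """Percent of distinct prompts where the brand appeared on >= 1 platform."""
    prompts = {r.prompt for r in records}
    visible = {r.prompt for r in records if r.brand_appears}
    return 100 * len(visible) / len(prompts)

def score_by_category(records: list[AuditRecord]) -> dict[str, float]:
    """Visibility score computed separately for each prompt category."""
    by_cat: dict[str, list[AuditRecord]] = defaultdict(list)
    for r in records:
        by_cat[r.category].append(r)
    return {cat: visibility_score(rs) for cat, rs in by_cat.items()}

def competitor_share(records: list[AuditRecord]) -> dict[str, float]:
    """Percent of distinct prompts where each competitor was named."""
    prompts = {r.prompt for r in records}
    seen: dict[str, set] = defaultdict(set)
    for r in records:
        for c in r.competitors:
            seen[c].add(r.prompt)
    return {c: 100 * len(ps) / len(prompts) for c, ps in seen.items()}
```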
Step Four: Analyze the Qualifiers
Position matters, but framing matters more. A brand described as "a strong option for enterprise teams" occupies a different competitive position than one labeled "suitable for smaller organizations." Those qualifiers shape buyer perception before they ever visit your website.
Review every mention and tag the language AI uses. Look for patterns. Are you consistently framed as a specialist in one area but invisible in another? Does AI attach qualifiers that do not match your current positioning? One B2B client we audited discovered that ChatGPT repeatedly described them as "best for startups" despite having pivoted to enterprise two years earlier. The outdated framing persisted because their most-cited content still reflected the old positioning.
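Tagging can start as simple keyword matching before you invest in anything more sophisticated. A rough sketch; the patterns below are illustrative and should be replaced with qualifier language pulled from your own audit:

```python
import re

# Illustrative qualifier patterns; replace with language from your own audit.
QUALIFIER_TAGS = {
    "enterprise": r"\benterprise\b",
    "smb_startup": r"\b(startups?|small(er)? (teams|organizations|businesses))\b",
    "budget": r"\b(affordable|budget|low[- ]cost)\b",
    "premium": r"\b(premium|luxury|high[- ]end)\b",
}

def tag_framing(framing: str) -> list[str]:
    """Return every qualifier tag whose pattern matches the framing text."""
    return [tag for tag, pattern in QUALIFIER_TAGS.items()
            if re.search(pattern, framing, flags=re.IGNORECASE)]

print(tag_framing("best for startups and smaller teams"))  # ['smb_startup']
```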
Step Five: Document and Distribute
Structure your baseline report around three sections: your visibility score broken down by prompt category and platform, a competitive comparison showing share of recommendation for each tracked competitor, and a gap analysis identifying the highest-value prompts where you are absent but competitors are present.
My Web Audit's agency walkthrough recommends including an ROI section connecting visibility gaps to estimated business impact. If your average customer value is known, even a rough estimate of missed AI-driven discovery moments gives leadership a number to react to.
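The arithmetic can stay deliberately rough and still be useful. A back-of-envelope sketch; every number below is a placeholder to replace with your own:

```python
# Back-of-envelope ROI sketch; every number below is a placeholder.
monthly_ai_discovery_queries = 2_000   # estimated relevant AI prompts per month
visibility_gap = 0.30                  # competitor share minus yours (30 points)
discovery_to_lead_rate = 0.02          # rough: mention seen -> lead
lead_to_customer_rate = 0.20
avg_customer_value = 8_000             # dollars

missed = (monthly_ai_discovery_queries * visibility_gap
          * discovery_to_lead_rate * lead_to_customer_rate)
print(f"~{missed:.1f} missed customers/month, "
      f"~${missed * avg_customer_value:,.0f}/month in missed value")
```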
This report is not a one-time deliverable. Re-run the same prompt set monthly, track the trend lines, and tie changes back to specific optimization work. The brands that treat AI visibility as a measurable, trackable metric are the ones that improve it.
Sources: AirOps, "How to Measure AI Search Visibility" (2025); Wellows, "The Ultimate AI Search Visibility Audit Checklist" (2025); Search Engine Journal / IQRush, "The AI Search Visibility Audit: 15 Questions Every CMO Should Ask" (2025); RevenueZen, "Top 5 AI Brand Visibility Monitoring Tools" (2025); My Web Audit, "AI Visibility Audit Example: Full Walkthrough for Agencies" (2026); Trysight, "Brand Monitoring in Perplexity AI: Complete Guide" (2026)