Is your site visible to AI?

Large language models process web content differently from traditional search engines. Our audit assesses how well your site is optimized for AI search and discovery.

Features

Precise AI Visibility Scoring

Our scoring system provides a clear assessment of your site's visibility to LLMs — giving you a single, actionable metric to measure and track your AI discoverability over time.

Key LLM Coverage

Our analysis covers ChatGPT, Claude, and Gemini — so you know exactly how your site performs across the AI platforms your audience is already using.

Actionable Insights

Receive specific, prioritized fixes for each failing check — so you can improve your AI visibility in hours, not weeks, without guessing what to change.

How It Works

10 checks across 3 dimensions of AI readiness.

We crawl your site's HTML, robots.txt, and llms.txt, then run 10 heuristic checks to measure how well AI systems can index, understand, and recommend your content.

  • Indexability — crawler access & llms.txt presence
  • Understandability — schema.org, signal-to-noise & structure
  • Recommendability — E-E-A-T, consistency & citation readiness
Understanding AI Visibility

The new frontier of digital discoverability.

By Benjamin J. Schütz, Founder of AI Visibility Checker


What is AI Visibility?

AI visibility is the degree to which a website can be discovered, understood, and recommended by AI assistants and LLM-powered search engines such as ChatGPT, Claude, and Gemini. Unlike traditional SEO, which optimizes for search engine rankings and click-through rates, AI visibility focuses on whether large language models can access your content, parse its meaning, and cite it as a trustworthy source in their responses. As AI-driven search becomes the primary way people find information, AI visibility is emerging as a critical metric alongside traditional SEO.


What is an AI Visibility Score?

An AI Visibility Score is a metric from 0 to 100 that quantifies how discoverable, understandable, and citable your website is to large language models. It is calculated by running 10 heuristic checks grouped into three categories:

  • Indexability — Can AI crawlers access and navigate your content?
  • Understandability — Can AI parse your structured data, semantics, and content?
  • Recommendability — Will AI cite and recommend your content to users?

Scores of 0–39 indicate weak visibility, 40–69 moderate, and 70–100 good. Your AI Visibility Score serves as a KPI for the AI era — a single number that tells you whether AI systems can find you, understand you, and recommend you.


How Do LLMs Evaluate Websites?

Large language models evaluate websites through three lenses, each corresponding to a dimension of your AI Visibility Score:

1. Technical Access

LLMs first check whether they are allowed to crawl your site. This means examining your robots.txt for AI-specific user agents (GPTBot, ChatGPT-User, ClaudeBot, Google-Extended, PerplexityBot, CCBot) and looking for an llms.txt file at the site root that provides structured context about your site.
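Under the access rules described above, a quick per-agent check can be sketched with Python's standard library (the function name and agent list here are our own illustration, not the checker's actual code):

```python
from urllib.robotparser import RobotFileParser

# AI-specific user agents mentioned above
AI_AGENTS = ["GPTBot", "ChatGPT-User", "ClaudeBot",
             "Google-Extended", "PerplexityBot", "CCBot"]

def ai_crawler_access(robots_txt: str, path: str = "/") -> dict:
    """Return an allow/deny decision per AI agent for a robots.txt body."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {agent: rp.can_fetch(agent, path) for agent in AI_AGENTS}
```

A wildcard `Disallow: /` rule denies every agent in the list, while a group that names a specific agent overrides the wildcard for that agent only.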

A common failure mode is adding User-agent: * Disallow: / to block scrapers, which inadvertently locks out every AI crawler. Another is blocking CCBot to prevent Common Crawl inclusion, forgetting that Common Crawl is a primary training-data source for most open-weight and commercial LLMs. If an AI crawler cannot fetch your page, nothing else matters — no amount of structured data can compensate for a hard block at the HTTP level.
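A safer pattern is to admit the AI crawlers you want explicitly instead of relying on the wildcard rule. The sketch below is illustrative; adjust the agent list and paths to your own policy:

```
# Allow named AI crawlers, then apply stricter defaults to everyone else
User-agent: GPTBot
User-agent: ChatGPT-User
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: PerplexityBot
User-agent: CCBot
Allow: /

User-agent: *
Disallow: /private/
```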

2. Content Quality

Next, LLMs assess whether your content is machine-readable. They look for JSON-LD structured data (schema.org types such as Article, Product, Organization, FAQPage, BreadcrumbList), evaluate the signal-to-noise ratio of your rendered HTML, check whether your content converts cleanly to markdown (preserving headings, lists, and semantic structure), and determine whether topic coverage matches what a reader would expect for your vertical.
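For illustration, a minimal JSON-LD block of the kind these checks look for might be embedded like this (all names, dates, and values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How llms.txt Improves AI Visibility",
  "author": { "@type": "Person", "name": "Jane Doe", "jobTitle": "Technical SEO Lead" },
  "publisher": { "@type": "Organization", "name": "Example Corp" },
  "datePublished": "2025-01-15",
  "dateModified": "2025-03-02"
}
</script>
```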

Pages rendered entirely client-side with JavaScript, pages with a text-to-HTML ratio below 10%, and pages that collapse into walls of unstructured text when stripped of styling all score poorly here. LLMs prefer server-rendered HTML with clear heading hierarchy (H1 → H2 → H3), explicit lists, tables for tabular data, and valid JSON-LD that declares the page type and its key facts.
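As a rough sketch of the text-to-HTML ratio idea (our own illustration, not the checker's actual implementation), visible text can be extracted and compared against total markup size with the standard library alone:

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self._chunks = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip:
            self._chunks.append(data)

def signal_to_noise(html: str) -> float:
    """Visible-text length as a fraction of total HTML length (0.0-1.0)."""
    parser = _TextExtractor()
    parser.feed(html)
    text = "".join(parser._chunks).strip()
    return len(text) / max(len(html), 1)
```

A page whose ratio falls below roughly 0.10 would fail the threshold described above.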

3. Trustworthiness

Finally, LLMs evaluate whether your content is worth citing. They look for E-E-A-T signals — Expertise (named authors with credentials), Experience (first-hand evidence, original data), Authoritativeness (domain reputation, external citations), and Trustworthiness (publisher identity, contact information, legal transparency).

Specific markers that raise citation likelihood: a named author with job title and LinkedIn, visible publication and last-modified dates, explicit outbound links to authoritative sources (Wikidata, government sites, peer-reviewed research, primary vendor docs), clear About and Contact pages with a real legal entity, and quotable sentences containing concrete claims and numbers that an LLM can lift verbatim into an answer. Sites lacking these signals may rank well in search but get skipped by LLMs choosing whom to cite.

FAQ

Frequently Asked Questions

What is AI visibility?

AI visibility is the degree to which a website can be discovered, understood, and recommended by AI assistants and LLM-powered search engines such as ChatGPT, Claude, Gemini, and Perplexity. Unlike traditional SEO, which optimizes for Google's link-based ranking algorithm, AI visibility focuses on whether large language models can access your content, parse its meaning, and cite it as a trustworthy source when generating answers for users.

LLMs learn about your site through three channels: training-data scraping (Common Crawl, curated web corpora), real-time retrieval when a user asks a question (live fetching by ChatGPT search, Perplexity, Gemini), and retrieval-augmented generation pipelines built by third parties. All three channels depend on your site being crawlable, your content being machine-parseable, and your authority being verifiable — the exact three dimensions the AI Visibility Score measures.

A site can rank #1 in Google and still be completely invisible to ChatGPT if its robots.txt blocks GPTBot, or if its content is rendered entirely client-side and the crawler sees an empty HTML shell. AI visibility is a separate, parallel concern to search-engine visibility — and it is increasingly the one that determines whether your brand gets named in an AI answer.

How is the AI Visibility Score calculated?

The AI Visibility Score aggregates 10 heuristic checks grouped into three weighted categories. Each check produces a 0–100 sub-score; the sub-scores are combined into the final score.

Indexability (can AI reach your site): (1) Crawler access — does robots.txt allow GPTBot, ClaudeBot, CCBot, Google-Extended, PerplexityBot? (2) llms.txt presence — is there a valid llms.txt file at the root?

Understandability (can AI parse your content): (3) schema.org structured data — are JSON-LD blocks present and valid? (4) Signal-to-noise ratio — what percentage of the HTML is actual textual content? (5) Markdown compatibility — does the page convert cleanly to markdown? (6) Semantic coverage — does the content cover the topics an LLM expects for your domain?

Recommendability (will AI cite you): (7) Entity linking — do you link out to authoritative entities like Wikidata or official sources? (8) E-E-A-T evidence — are authors named, credentials visible, publisher identified? (9) Factual consistency — do claims across pages agree? (10) Citation likelihood — are there quotable definitions and data points?
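The aggregation step described above can be sketched as follows; the equal weights and function names are our illustrative assumptions, not the product's actual formula:

```python
def combine(check_scores: dict, weights: dict) -> float:
    """Weighted average of per-category means; each check scores 0-100."""
    total = sum(weights.values())
    return sum(
        weights[cat] * (sum(scores) / len(scores))
        for cat, scores in check_scores.items()
    ) / total

def band(score: float) -> str:
    """Map a 0-100 score to the bands used in the report."""
    return "weak" if score < 40 else "moderate" if score < 70 else "good"
```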

Scores of 0–39 indicate weak visibility, 40–69 moderate, and 70–100 good. A perfect 100 is rare — most well-optimized sites land in the 70–85 range.

What is llms.txt and why does my site need one?

llms.txt is a plain-text file placed at the root of a website (at /llms.txt, similar to /robots.txt or /sitemap.xml) that provides a structured, markdown-formatted summary specifically for large language models. It was proposed by Jeremy Howard of Answer.AI in September 2024 and has been adopted by Anthropic, Stripe, Cloudflare, Perplexity, and thousands of other sites.

A valid llms.txt starts with an H1 containing the site name, a short blockquote describing what the site does, then one or more H2 sections listing key pages as markdown links with short descriptions — for example: an Acme Corp file with a one-line summary “Acme builds payments infrastructure” and a Docs section listing the getting-started and API-reference URLs.
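Rendered out, the hypothetical Acme Corp example above would look something like this (the domain and page URLs are placeholders):

```
# Acme Corp

> Acme builds payments infrastructure.

## Docs

- [Getting started](https://acme.example/docs/getting-started): Set up your first payment.
- [API reference](https://acme.example/docs/api): Endpoint and webhook documentation.
```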

Without an llms.txt, an LLM that wants to understand your site has to crawl many pages and infer the structure itself — slow, error-prone, and often skipped entirely. With one, the LLM gets an authoritative, concise map of your site in a single request, which improves the accuracy of answers it generates about you and increases the likelihood it will cite specific pages. Adding llms.txt is one of the highest-ROI AI visibility improvements — it takes ten minutes and is directly checked by the AI Visibility Checker.

Is the free scan really free?

Yes — the full AI Visibility Score, all 10 checks, and high-level recommendations are completely free for any public URL. No signup, no account, no credit card, no rate limiting beyond basic abuse prevention. You can run the free check as often as you like on as many domains as you like.

The optional Premium Fixing Guide costs $19.99 as a one-time payment (no subscription, no renewal). It delivers AI-generated, domain-specific fixes for every failing check: a ready-to-paste llms.txt tailored to your site, an updated robots.txt that allows AI crawlers, JSON-LD structured data blocks you can drop into your pages, and step-by-step written implementation instructions. The report link remains accessible for 100 days after purchase.

Payments are processed by Lemon Squeezy (acting as Merchant of Record), which handles sales tax and invoicing. We do not store card details. Refunds are handled in line with Lemon Squeezy’s policy and applicable consumer-protection law.

How is this different from traditional SEO tools?

Traditional SEO tools — Ahrefs, SEMrush, Moz, Screaming Frog — measure metrics optimized for Google's link-based algorithm: keyword rankings, backlink profiles, domain authority, Core Web Vitals, crawl errors, meta-tag quality. These metrics matter for search-engine ranking, but they do not measure whether a large language model can actually ingest and cite your content.

AI Visibility Checker measures a different set of factors: AI crawler access (GPTBot, ClaudeBot, CCBot, Google-Extended, PerplexityBot user-agents in robots.txt), llms.txt presence, JSON-LD structured-data validity, signal-to-noise ratio of rendered HTML, markdown convertibility, semantic coverage, entity linking to authoritative sources, and E-E-A-T signals at the page level. None of these are core KPIs in any mainstream SEO tool.

The two sets of metrics diverge often. A site can rank #1 for a keyword on Google and be completely absent from ChatGPT answers because its robots.txt blocks GPTBot, its content is client-rendered, or it lacks structured data. Conversely, a site with mediocre Google rankings can become a frequent LLM citation source if it publishes clear, structured, machine-readable content with strong E-E-A-T signals. AI visibility is a parallel discipline, not a subset of SEO.

Which AI assistants does this cover?

The analysis checks compatibility with all major AI assistants and LLM-powered search engines: ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), Perplexity, and Microsoft Copilot. Specifically, we check your robots.txt for the following crawler user-agents: GPTBot, ChatGPT-User, OAI-SearchBot (OpenAI); ClaudeBot, anthropic-ai, Claude-Web (Anthropic); Google-Extended (Google for Gemini training); PerplexityBot (Perplexity); CCBot (Common Crawl, used by many LLMs for training); and Bytespider (ByteDance / Doubao).

Beyond crawler access, the content-quality and trustworthiness checks are provider-agnostic. JSON-LD, llms.txt, E-E-A-T signals, markdown compatibility, and semantic coverage are universal factors that influence how any LLM ingests, parses, and cites your content — regardless of which model or assistant the user is interacting with.

We update the crawler user-agent list as new AI crawlers emerge. If a major AI provider announces a new user-agent, the check is usually updated within days.

Have a different question? Contact us →

Pricing

Transparent Pricing for Global Impact.

Scoring Analysis

Instant AI visibility baseline

$0
  • 100% Free for any URL
  • Global LLM visibility score
  • 10 checks across 3 categories
SPECIAL OFFER

AI Fixing Guide

Custom AI-generated improvements for your site

$19.99 one-time (was $29.99)
  • AI-generated fix for every failing check
  • Ready-to-use llms.txt, robots.txt & JSON-LD
  • Tailored implementation instructions per check

No subscription required. One-time payment for report access.
