TL;DR

There are already plenty of tools out there to measure "AI visibility", but to understand the fundamentals I start at the beginning. For me, that means lifting the hood to see what actually happens — from a technical perspective — after you type a question or search query "into AI".

How Fast Can a New Site Become Visible in ChatGPT? My Second Fan-Out Measurement

Last updated: April 20, 2026

Fan-out baseline: singer surrounded by microphones as a metaphor for building AI visibility

Why I wanted to measure this

I wanted a concrete answer to one question: does ChatGPT pick up handsongeo.com? And more importantly: does it find me when someone asks broadly about Generative Engine Optimization, without mentioning my brand name?

That second point is the real test. An AI system that finds you when someone types your name is expected. An AI system that mentions you when someone asks for “the best GEO resources” or “experts in the Netherlands” is far more valuable.

So on April 18, I ran fan-out queries using the Quolity Chrome extension. The setup: 90 planned runs across 30 prompt variants, built from 26 unique prompts in five clusters (brand-specific, generic GEO, comparative/experts, practical how-to, and Dutch experts), with each prompt run three times to test consistency. The short version of the result: branded visibility is good, generic visibility is not.

What are fan-out queries?

When you ask ChatGPT a question, the system executes multiple search queries to gather information. A prompt like “Which websites write about GEO?” can trigger queries like site:ahrefs.com generative engine optimization, site:semrush.com AI search visibility GEO, and original Generative Engine Optimization paper arxiv.

These underlying search queries determine which sources ChatGPT sees and ultimately cites. They reveal not just whether you appear in the answer, but how the system searches for information. Across my 77 runs, ChatGPT executed 584 fan-out queries in total. That’s where the real insights are: not in the question “am I mentioned?” but in the question “am I even being searched for?”
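The extraction step can be sketched in a few lines. Below is a minimal Python example of pulling the site:-targeted domains out of recorded fan-out query strings; the query strings are illustrative, modeled on the examples in this section, and the actual Quolity export format is not assumed here:

```python
import re
from collections import Counter

def site_operator_domains(queries):
    """Count the domains explicitly targeted via site: operators."""
    return Counter(
        m.group(1)
        for q in queries
        for m in re.finditer(r"site:([\w.-]+)", q)
    )

# Illustrative fan-out queries, modeled on the examples above
queries = [
    "site:ahrefs.com generative engine optimization",
    "site:semrush.com AI search visibility GEO",
    "original Generative Engine Optimization paper arxiv",
]

domains = site_operator_domains(queries)
print(domains)                       # which domains the model explicitly targeted
print("handsongeo.com" in domains)   # in the model's search space at all?
```

Run over a full set of exports, this answers the "am I even being searched for?" question directly, independent of whether you show up in any answer.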

The numbers

Of the 90 planned runs, 77 were successful. The breakdown:

Segment   Runs   As source   Cited   Brand mentioned
Branded     18          16      17                18
Generic     57           1       1                 2

When someone explicitly asks about Hands on GEO, I’m found. When someone asks broadly about GEO, AI visibility, or experts, I’m virtually absent. The consistency across runs was notably high: 23 out of 26 prompts produced the same result three times. This isn’t noise — the pattern is stable.
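Producing this breakdown from the raw runs is a small aggregation. A sketch, assuming hypothetical per-run records (the field names here are my own, not Quolity's schema):

```python
from collections import defaultdict

# Hypothetical per-run records; field names are illustrative, not Quolity's schema.
runs = [
    {"segment": "branded", "as_source": True,  "cited": True,  "mentioned": True},
    {"segment": "branded", "as_source": False, "cited": True,  "mentioned": True},
    {"segment": "generic", "as_source": False, "cited": False, "mentioned": False},
    {"segment": "generic", "as_source": True,  "cited": True,  "mentioned": True},
]

def segment_breakdown(runs):
    """Tally runs, source retrievals, citations, and brand mentions per segment."""
    table = defaultdict(lambda: {"runs": 0, "as_source": 0, "cited": 0, "mentioned": 0})
    for r in runs:
        row = table[r["segment"]]
        row["runs"] += 1
        for key in ("as_source", "cited", "mentioned"):
            row[key] += int(r[key])
    return dict(table)

print(segment_breakdown(runs))
```

The same grouping, keyed by prompt instead of segment, yields the consistency check (how many prompts produced the same result in all three runs).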

What ChatGPT does understand about my site

For branded queries, ChatGPT was already able to retrieve content published on April 17 — one day before my measurement. It recognized that Hands on GEO is a Dutch website about GEO, focused on B2B marketing, run by me as an independent internet professional. It cited strengths like transparent author information, source references, and original experiments. It described the site as “a useful and fairly credible niche source.”

One unexpected finding: ChatGPT partly verified my identity through my privacy policy. That page states that Hands on GEO is managed by “Hans Schepers, an independent internet professional.” A page most marketers treat as a legal checkbox functioned as an identity check for the model. The privacy page was retrieved as a source 14 times — more than any content page except the homepage.

The homepage itself was retrieved 16 times, followed by the pillar article on GEO for B2B marketers (4 times) and the piece on measuring AI visibility (4 times). What stood out: half of my published articles weren’t retrieved at all, even for branded prompts. ChatGPT selects sharply — it picks the pages most relevant to the specific question, not everything on your site. For anyone working on GEO: every page is a potential identity signal, but not every page carries equal weight. Your homepage and author information do the heavy lifting.

The tipping point for generic queries

The first real generic mention came for “Which websites write about GEO and AI visibility?” In run 1, I was cited as a source. In run 2, ChatGPT mentioned me in the text but without a source link. In run 3, I didn’t appear at all.

That fluctuating pattern may be the most telling result. It suggests my site is right at the tipping point of generic visibility. The model knows me, but doesn’t trust me enough to cite me consistently. For the remaining 23 generic prompts — spanning definition questions, how-to’s, expert lists, and comparative questions — the result was consistent: absent all three times.

Who wins the generic GEO queries

To put the competition in perspective, I tallied all sources across the 77 runs. The top 5 domains for generic prompts:

#   Domain                  Times as source
1   developers.google.com               236
2   arxiv.org                           136
3   searchengineland.com                112
4   semrush.com                          78
5   ahrefs.com                           75

Handsongeo.com ranked 9th overall with 63 mentions — but those come almost entirely from branded prompts. Without brand queries, my site would land at 1–2 mentions.

These are established English-language domains with years of authority behind them. Notably, in the overall tally across all 77 runs, LinkedIn ranked 3rd and 5th (nl.linkedin.com 118 times, linkedin.com 111 times). For expert prompts, ChatGPT pulls profile information primarily from LinkedIn, not from personal websites or trade media. A three-week-old niche site doesn't compete on authority here. It competes on even being part of the search space the model looks at.
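The domain tally itself is straightforward to reproduce: count the domain of every retrieved source URL across all runs. A minimal sketch with illustrative URLs (the real list comes from the per-run JSON exports):

```python
from collections import Counter
from urllib.parse import urlparse

def domain_tally(source_urls):
    """Count how often each domain appears among retrieved sources."""
    return Counter(urlparse(u).netloc for u in source_urls)

# Illustrative URLs, not actual export data
sources = [
    "https://developers.google.com/search/docs",
    "https://arxiv.org/abs/1234.56789",
    "https://developers.google.com/search/updates",
]

print(domain_tally(sources).most_common(5))
```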

How ChatGPT searches behind the scenes

The most instructive insight came from the fan-out queries themselves.

For generic prompts, ChatGPT searches with site: operators targeting specific domains: site:arxiv.org, site:searchengineland.com, site:semrush.com, site:moz.com. It doesn’t search the whole web. It starts with a set of domains it considers authoritative. Handsongeo.com isn’t in that set.

That’s a different insight from “I don’t have enough authority.” It means I’m not even in the search space. ChatGPT isn’t looking at my site and choosing something better. It isn’t looking at all.

Of the 584 fan-out queries, 92 contained my domain name or brand name — but only for prompts where my name was already in the question. For any B2B marketer looking to apply this: you’re not just competing for the best content. You’re competing for a spot in the source set the model even considers.
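Measuring that branded share is a simple filter over the fan-out query strings. A sketch, again with illustrative queries rather than actual export data:

```python
def branded_query_share(queries, brand_terms):
    """Return (branded_count, total) for a list of fan-out query strings."""
    hits = sum(
        1 for q in queries
        if any(term.lower() in q.lower() for term in brand_terms)
    )
    return hits, len(queries)

# Illustrative fan-out queries; brand terms are matched case-insensitively
queries = [
    "handsongeo.com review",
    "Hands on GEO experiment",
    "best GEO resources 2026",
]
hits, total = branded_query_share(queries, ["handsongeo", "hands on geo"])
print(f"{hits}/{total} queries were already branded")
```

Tracking this ratio over repeated measurements shows whether the model ever starts searching for you unprompted.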

Four types of generic invisibility

In the data, I found that “generic” is actually four different problems, each requiring a different approach.

Definition questions (“What is GEO?”): ChatGPT picks arXiv and Search Engine Land. ArXiv appeared 76 times as a source in this cluster, Search Engine Land 50 times. You’re competing with knowledge authority — original data and experiments are your way in.

How-to’s (“How do you optimize content for AI?”): ChatGPT picks official platform documentation. Google Developers appeared 104 times as a source, Microsoft Learn 40 times. The opportunity for a niche site lies in translating those technical docs into practical advice for marketers who aren’t developers.

Expert lists (“Who are GEO experts in the Netherlands?”): ChatGPT pulls expert information primarily from LinkedIn. The domain nl.linkedin.com appeared 97 times as a source — more than all other domains combined. Competitors like Chantal Smink, Jarik Oosting, and Jaap Jacobs are named as GEO experts. If you’re not visibly claiming GEO expertise on LinkedIn, you don’t exist for this type of prompt.

Comparative questions (“Compare the best GEO resources”): ChatGPT builds its own overviews from established sources. iPullRank (21x), Marketing AI Institute (20x), and Frankwatching (18x) dominate here. The way in is being cited externally by those sources — when they link to you, you become part of the source set the model synthesizes.

“Publishing more content” helps with definition questions. “Optimizing LinkedIn” helps with expert lists. “Getting published externally” helps with comparative questions. “Translating platform documentation into practical advice” helps with how-to’s. No single approach solves all four — and that may be the most practical insight from this entire measurement.

The sources are English — even for Dutch prompts

A pattern I only noticed on closer analysis: for generic GEO queries, 98% of retrieved sources were English-language, even when the prompt was in Dutch. Search Engine Land, arXiv, Semrush, Ahrefs — all English. The model treats GEO as an English-language discipline.

Only when the prompt explicitly asks about the Netherlands does ChatGPT switch to Dutch sources. For expert prompts, 53% were Dutch-language: Emerce, Marketingfacts, Frankwatching, and dozens of agency sites.

Three implications. First: my Dutch content doesn’t even compete for generic GEO queries, because ChatGPT searches in a different language segment. Second: the English versions of my articles are strategically more important than I thought. They’re not just translations — they’re my only chance to compete for generic GEO queries, even when those queries are asked in Dutch.

Third: the cluster where Dutch sources dominate — expert prompts — is exactly where I’m absent. That’s the cluster where my Dutch-language positioning would deliver the most value. The Dutch trade media appearing there (Emerce 24 times, Frankwatching 20 times, Marketingfacts 18 times) are precisely the platforms where a guest publication would strengthen the “Hans Schepers ↔ GEO” association in the model.

What B2B marketers can take from this

From 77 runs and 584 fan-out queries, I distill seven action points relevant to any B2B marketer — not just me.

  1. Measure fan-out queries, not just end results. Whether you appear in the answer is the final picture. The fan-out queries show why you’re in or out. That’s where you can steer.
  2. Treat your homepage and author information as primary identity signals. ChatGPT retrieved my homepage 16 times and my privacy policy 14 times — more often than my best articles. The pages you consider administrative, the model uses to determine who you are.
  3. Claim your expertise on LinkedIn. For expert prompts, LinkedIn was by far the dominant source (nl.linkedin.com alone appeared 97 times in that cluster). If your profile doesn’t explicitly claim your field of expertise, you don’t exist for this type of query.
  4. Publish in English, even if your audience is local. For generic GEO queries, 98% of retrieved sources were English-language, regardless of the prompt language. Your English content isn’t a translation — it’s your ticket into the search space.
  5. Invest in external validation over more owned content. Guest articles in trade media that ChatGPT already uses as sources (Emerce, Frankwatching, Search Engine Land) carry more weight than ten additional blog posts on your own site.
  6. Choose your strategy per query type. Definition questions require original data. How-to’s require translation of platform documentation. Expert lists require LinkedIn. Comparative questions require external citations. One approach doesn’t work for all four.
  7. Repeat your measurement. A single snapshot tells you about this moment. Only through repetition do you know whether you’re moving forward or backward.

What I’m doing with this

Branded visibility can emerge quickly. Generic visibility is a different story. AI systems play it safe and choose established sources over a niche site that just launched.

But the first generic mention is there. That proves my site can, in principle, compete. The question now is: how do I make it structural?

The roadmap is clear. Content that matches the queries ChatGPT executes internally — not the questions I think users ask, but the fan-out queries the model actually runs. External signals through guest articles at Emerce and Frankwatching, the Dutch trade media ChatGPT already uses as sources. Strengthening LinkedIn as an entity source, specifically for the expert prompts where I’m currently absent.

And there are two dimensions I want to add in the next measurement round. Perplexity and Gemini as platforms, to see if this pattern is ChatGPT-specific or applies more broadly. And English-language variants across all prompt clusters, not just branded and generic: the expert and how-to prompts in English are currently blind spots in my data.

What this measurement doesn’t prove

This is one measurement, on one platform, on one day, with one model (gpt-5-4-thinking). The consistency within runs was high, but that says nothing about stability over weeks. I have no control group and don’t know if 1 out of 57 is bad, normal, or good for a three-week-old site. There are no benchmarks available for this type of measurement either — which is precisely why I’m publishing it.

This is a test case, not a final conclusion. What I can say with certainty: the difference between branded and generic visibility isn’t a gradual spectrum. It’s a binary difference — you’re in the search space or you’re not. And the path from one to the other isn’t a matter of “publishing more content.” It’s a matter of the right signals in the right places.

That’s exactly why I started Hands on GEO: not to claim how GEO works, but to measure it and show what happens. This measurement shows the foundation is there. Now the real work begins: moving from being found to being chosen.


FAQ

How does the Quolity Chrome extension work?
Quolity records which search queries ChatGPT executes internally, which sources it retrieves, and which of those are cited in the answer. The export is one JSON file per run.

Is this representative of all AI platforms?
No. This covers only ChatGPT with gpt-5-4-thinking. Perplexity and Gemini may choose different sources. Platform comparison is a logical next step.

What are site: operators in fan-out queries?
ChatGPT searches some prompts by targeting specific domains: site:arxiv.org, site:searchengineland.com. If your domain isn’t in that set, your content won’t be considered for that type of query.

What’s the difference between “source” and “mention”?
A source means ChatGPT retrieved your page. A mention means you appear in the answer text. You can have a mention without a source — the model knows you but doesn’t fetch your page.

How long had handsongeo.com been live at the time of measurement?
Three weeks, with 14 articles published in Dutch and English.

Can I measure this myself?
Yes. The Quolity Chrome extension is available as a browser add-on. You need a ChatGPT account and must enter each prompt manually. The extension automatically records the fan-out queries and sources. Budget 2–3 hours for a set of 30 prompts with three runs each.


Sources: 77 Quolity fan-out JSON exports, ChatGPT gpt-5-4-thinking, April 18, 2026.
