TL;DR

When someone asks an AI whether your brand is trustworthy, it doesn’t look at what you write — it looks at what others say about you. I saw it happen in real time, in the internal search queries Perplexity ran. That means external mentions — guest articles, citations, Reddit — carry more weight than your own content calendar. That’s something I still have a lot of work to do on myself.

This Is What Happens Inside Perplexity When You Ask a Question


Expert Take

I asked Perplexity whether my site was trustworthy and then looked under the hood. Perplexity wasn’t interested in my site or my blog posts. What it really wanted to know was what other people say about me. That’s a completely different way of thinking from SEO — and honestly, I’d never looked at it that way before. But at least now I know what I need to build.


Last updated: April 14, 2026


The Experiment

I asked Perplexity one question: “Is Hands on GEO a reliable source about GEO?”

Above the answer it said: “Completed 2 steps.” Those two steps are what’s interesting. Step one was a web search — three internal queries running at the same time. Step two was a direct visit to my homepage. Only then did Perplexity write its answer.

The answer itself was positive: Hands on GEO is a reliable source for B2B marketers who want to learn more about GEO. Fine. But what kept my attention was what happened in those two steps.

Every browser has built-in developer tools that let you see the requests a site sends in the background. I had the Network tab open while Perplexity was working, and could see exactly what it was doing internally. It wasn’t what I expected.


What Query Fan-Out Is and Why It Matters

When you ask an AI search engine a question, it doesn’t just pass that one question to the internet. It splits your question into several smaller search queries that run at the same time. That’s called query fan-out — think of a hand of cards fanning open. The results from all those separate queries are then combined into one answer.

As a user, you normally don’t see any of this. But if you know where to look, it’s readable.

For my question, Perplexity ran three internal queries:

  • “Hands on GEO reliable source”
  • “Hands on GEO credibility”
  • “Hands on GEO reputation”

Then it visited my homepage directly and read the first text it found: “GEO Expert & B2B Strategist — Your B2B buyers are asking AI. Are you showing up in the answer?”

That’s it. Three questions about my reputation and a visit to my homepage. No questions about my area of expertise, no questions about what I write.
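For readers who think in code: the fan-out mechanic can be sketched roughly like this. The `web_search` function is a placeholder stand-in, not Perplexity’s actual implementation — the only real data here are the three sub-queries I saw in the Network tab.

```python
from concurrent.futures import ThreadPoolExecutor

def web_search(query: str) -> list[str]:
    # Placeholder: a real engine would hit a search index here.
    return [f"result for '{query}'"]

def fan_out(question: str, sub_queries: list[str]) -> list[str]:
    """Run several narrower queries in parallel and merge the results,
    the way an AI search engine fans a question out.
    A real engine derives sub_queries from the question itself;
    here we pass in the ones observed in the Network tab."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(web_search, sub_queries)
    merged: list[str] = []
    for results in result_lists:
        merged.extend(results)
    return merged

# The three internal queries Perplexity ran for my question:
answers = fan_out(
    "Is Hands on GEO a reliable source about GEO?",
    ["Hands on GEO reliable source",
     "Hands on GEO credibility",
     "Hands on GEO reputation"],
)
```

The point of the sketch: one user question, several simultaneous searches, one merged result set — and none of the sub-queries here ask about content.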


AI Doesn’t Ask What You Know — It Asks What Others Think of You

Perplexity translated my question internally into: what does the internet say about this brand? All three search queries were about whether I’m trustworthy and credible in the eyes of others.

That’s different from how SEO works. In Google, your visibility is largely based on what you publish yourself — how good your articles are, how you optimise them. In an AI trustworthiness assessment, it’s about what others say about you. Third-party mentions, citations in trade publications, discussions about your brand on Reddit.

One external mention with context probably carries more weight than five well-written articles on your own site. Different logic, different approach.


Still: Name Confusion in the Source List

The three search queries were correctly aimed at my brand. But among the fifteen sources Perplexity consulted, there was josephkerski.com — a site about GIS education and geographic data analysis. Not GEO as a marketing discipline, but a site where “GEO” appears prominently in the geographical sense.

That site also showed up in an earlier test I ran on the same platform. In that first test, there were even more geography sources in the list: cartography, geography education, GIS systems.

Perplexity recognised my brand correctly. But the sources it consulted still contained confusion about the name. Brand recognition and source retrieval apparently don’t always stay in sync.

This is noise you don’t see in regular search behaviour. A geography PDF has no effect on your position in Google. On the judgement Perplexity makes about my credibility? Possibly it does.


What You Can Actually Do With This

1. Make sure other sites write about you

Guest articles, mentions in industry roundups, citations by other publications — that’s the raw material an AI uses when assessing your brand’s trustworthiness. One strong article on your own site counts for less in a trustworthiness assessment than one external mention with a bit of context around it.

2. Make your homepage unambiguous

Perplexity visited my homepage directly as part of its assessment. Make sure your first paragraph clearly explains what you do, for whom, and in which field — so an AI can’t read your name or key terms two different ways.

3. Add invisible metadata that names your field

Schema markup is information that search engines and AI systems read without visitors ever seeing it. A description there that clearly names your area of expertise helps AI systems make the right connections. In-text citations increase AI visibility by up to 40%, according to research (Aggarwal et al., KDD 2024) — but that only works if an AI has correctly identified you first.
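To make that concrete, here is a minimal sketch of what such markup could look like, built as a Python dict and serialised to JSON-LD. The field values are illustrative assumptions for my own site, not a template Perplexity is known to read in exactly this way.

```python
import json

# Illustrative JSON-LD for an Organization, with a description
# that names the field explicitly to head off name confusion.
schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Hands on GEO",
    "url": "https://handsongeo.com",
    "description": (
        "Hands on GEO covers Generative Engine Optimization (GEO) "
        "for B2B marketers — not geography or GIS."
    ),
}

# The serialised output would go inside a
# <script type="application/ld+json"> tag in the page's <head>.
print(json.dumps(schema, indent=2))
```

The description line does the disambiguation work: it says what “GEO” means here and, just as importantly, what it does not mean.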

4. Check your own AI reputation

Type into Perplexity: “Is [your brand] a reliable source about [your field]?” Then open the source list. Check whether the sources Perplexity consulted are in the same field you think you’re in. If there’s noise in there — sites using your key terms in a different way — you know name confusion is happening. You don’t need any technical knowledge for this check: the source list is visible to everyone.

Honestly, I haven’t done all four of these myself. I’ve only been running handsongeo.com for a few months. The guest articles are on the list, not in the calendar. My metadata is better than it was six months ago, but far from where it needs to be. I’m writing this not as someone who has it all figured out — but as someone who’s working out how this works and occasionally finds something worth sharing.


What I Don’t Know Yet — and Haven’t Done Yet

I’ve run two tests on the same platform, with the same question, at different moments. That’s not much. I haven’t looked at ChatGPT and Google AI Mode the same way. I haven’t fully implemented the steps I describe above on my own site. No guest articles published, no systematic external mentions built up, and the name confusion around “Hands on GEO” is something I haven’t actively tackled yet.

This isn’t a guide from someone who knows. It’s a writeup of one observation that showed something concrete about how AI works — and the question of what you can do with that as a marketer. A question I’m still answering myself.


FAQ

What is query fan-out?
When you ask an AI search engine a question, that question gets split internally into several smaller queries that run simultaneously. The results are then combined into one answer. As a user you only see the final result — the individual queries are normally not visible.

Which questions did Perplexity ask internally?
Three: “Hands on GEO reliable source”, “Hands on GEO credibility” and “Hands on GEO reputation”. None of them asked about my field or the content of my site. Perplexity also visited my homepage directly as a second step.

How do you see which sources Perplexity used?
Click on the source list after a Perplexity answer. That shows all the sites that were consulted. If you want to see the internal queries themselves, you’ll need your browser’s developer tools — a bit more technical, but not impossible.

Why does Perplexity look for reputation rather than content?
When you ask whether something is trustworthy, Perplexity treats that as a question about what others think — not what the source claims about itself. Hence queries focused on external signals: “reliable source”, “credibility”, “reputation”. For an informational question — “what is GEO?” — the internal queries are focused on the content instead.

Does the name “Hands on GEO” create a disadvantage with AI?
“Hands on GEO” also resembles “hands-on geography” — geography through direct experience. Perplexity recognised my brand correctly, but josephkerski.com (a GIS education site) still appeared in the source list — in both tests I ran. Brand recognition worked fine; source retrieval still brought in geographical noise.

Does Perplexity always return the same results?
No. Results and sources can vary per session. What I describe here are snapshots — not fixed truths.


Sources

Read also: Everything a B2B Marketer Should Know About GEO in 2026 · What is AI Search? · What is GEO?
