
September 22, 2025

The GEO Illusion: What Actually Drives AI Visibility

Author: Jasmine Rienecker, SR AI Engineer

MSc Mathematics and Computer Science, Oxford University



AI assistants like ChatGPT, Gemini and Meta AI are rapidly becoming primary gatekeepers of brand discovery. When users turn to these systems for recommendations, being invisible in their responses means your brand risks disappearing from consumer awareness. Companies that have doubled down on traditional SEO and GEO tactics are finding fewer citations in AI-generated answers, while competitors with strong cross-domain authority show up more often, even when their own websites are technically weaker.



The Limits of SEO and GEO Strategies


This isn’t a failure of execution; it’s a mismatch in strategy. Both SEO and GEO target single-domain signals such as page structure and on-site content, while AI-driven systems prioritise multi-domain validation patterns. Analysis of citation patterns across major AI platforms reveals that claims often require either multiple independent domain confirmations or exceptionally high single-source trust scores before earning citation status.



How AI Systems Really Choose Sources


To understand this pattern, you need to follow how AI systems transform web content into authoritative answers.


AI systems begin in familiar territory: crawling, indexing, and parsing web content for use in both training and search databases. Traditional SEO hygiene still matters here: clean markup, proper headings, XML sitemaps and canonical tags all help ensure your content is discovered and processed correctly.



Surviving the Chunking Process


The divergence from traditional search engines begins during content processing, where websites are chunked into digestible pieces. Here, GEO techniques briefly take centre stage, as signals like DOIs, clear attributions, and JSON-LD make a chunk easier for the system to parse and index. A short, well-tagged claim wrapped in JSON-LD is much more likely to survive the chunking process intact than the same claim buried in a long, unstructured page. This means GEO optimisation shouldn't be overlooked, as it ensures your content maintains its structure and authority markers even when broken down into fragments for AI processing.
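The chunking step above can be sketched in code. This is a minimal illustration, not the implementation of any specific AI system: the class and function names, the character budget, and the idea of copying page-level metadata (such as parsed JSON-LD) onto every chunk are all assumptions made for the example.

```python
# Hypothetical sketch: split a page into chunks while attaching page-level
# structured metadata (e.g. parsed JSON-LD) to each chunk, so attribution
# markers survive fragmentation. All names and limits are assumptions.

from dataclasses import dataclass, field


@dataclass
class Chunk:
    text: str
    metadata: dict = field(default_factory=dict)  # e.g. author, DOI, JSON-LD claims


def chunk_page(paragraphs, structured_data, max_chars=500):
    """Greedily pack paragraphs into chunks of at most ~max_chars,
    copying the page's structured metadata onto every chunk."""
    chunks, buf = [], ""
    for para in paragraphs:
        if buf and len(buf) + len(para) > max_chars:
            chunks.append(Chunk(buf.strip(), dict(structured_data)))
            buf = ""
        buf += para + "\n"
    if buf.strip():
        chunks.append(Chunk(buf.strip(), dict(structured_data)))
    return chunks


page_meta = {"author": "Example Org", "doi": "10.1234/example"}  # hypothetical values
chunks = chunk_page(["Claim one. " * 30, "Claim two. " * 30], page_meta)
```

The point of the sketch is the metadata copy: a short, well-tagged claim keeps its authority markers in every fragment, while a claim buried in an untagged page loses them at this step.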


However, surviving chunking is only the first step. When users ask open-ended questions about brands or products (queries that trigger external web retrieval rather than relying solely on the model's training data), your content faces two more hurdles.


The system first matches the user’s query to the content chunks using a combination of lexical and semantic matching. Passing this filter gives a chunk the right to compete, but it doesn’t guarantee visibility.
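Combined lexical and semantic matching can be illustrated with a toy scorer. This is a sketch under stated assumptions: real systems use BM25-style ranking functions and learned embeddings, whereas here a simple term-overlap score and a cosine similarity over hypothetical vectors stand in for them, blended with an invented `alpha` weight.

```python
# Toy hybrid matcher: a term-overlap score stands in for BM25 (lexical),
# cosine similarity over hypothetical embedding vectors stands in for
# semantic matching. The blend weight `alpha` is an invented parameter.

import math
from collections import Counter


def lexical_score(query, chunk):
    """Fraction of query terms found in the chunk (BM25 stand-in)."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    return sum((q & c).values()) / (1 + len(query.split()))


def semantic_score(query_vec, chunk_vec):
    """Cosine similarity between (hypothetical) embedding vectors."""
    dot = sum(a * b for a, b in zip(query_vec, chunk_vec))
    norm = math.sqrt(sum(a * a for a in query_vec)) * math.sqrt(sum(b * b for b in chunk_vec))
    return dot / norm if norm else 0.0


def hybrid_score(query, chunk, query_vec, chunk_vec, alpha=0.5):
    """Blend both signals; chunks below some cutoff never reach re-ranking."""
    return alpha * lexical_score(query, chunk) + (1 - alpha) * semantic_score(query_vec, chunk_vec)
```

A chunk that scores well here has only earned the right to compete in re-ranking, which is where the decisive filtering happens.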


The decisive stage is re-ranking: a multi-step process that turns hundreds of candidate chunks into the select few that will actually get cited. During re-ranking, the system clusters candidate passages by their source domain and runs domain-trust evaluation.



Why Domain Trust Matters


This evaluation looks beyond on-page SEO and GEO by considering factors like a publisher’s reputation in academic indexes, the quality and authority of inbound links, whether the content can be found in institutional archives, and how often respected outlets reference your domain. The system is scoring entire domains for credibility, not just individual pages for polish.


Next comes support aggregation, where the re-ranker counts how many distinct domains support the same claim and then weighs those counts against each domain’s trust score. Most systems only treat a claim as corroborated when it appears across multiple independent domains, or when it’s backed by a single source with exceptional authority.
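The aggregation logic described above can be sketched as follows. Every number here is an assumption made for illustration (the trust scores, the weighted threshold, the single-source cutoff); real systems derive these from signals like academic indexing and inbound link authority. The key behaviour is that supporting domains are collected into a set, so repetition within one domain counts once.

```python
# Sketch of trust-weighted support aggregation. Trust scores and
# thresholds are invented for illustration; the deduplication into a
# set per claim is the behaviour described in the text.

from collections import defaultdict


def corroboration(passages, trust, multi_domain_min=3.0, single_source_min=0.9):
    """passages: list of (claim, domain) pairs from re-ranking candidates.
    A claim is treated as corroborated when the trust-weighted count of
    *distinct* supporting domains clears a threshold, or when a single
    supporting domain is exceptionally trusted."""
    support = defaultdict(set)
    for claim, domain in passages:
        support[claim].add(domain)  # ten pages on one domain still count once

    corroborated = set()
    for claim, domains in support.items():
        weighted = sum(trust.get(d, 0.1) for d in domains)
        strongest = max(trust.get(d, 0.1) for d in domains)
        if weighted >= multi_domain_min or strongest >= single_source_min:
            corroborated.add(claim)
    return corroborated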


This re-ranking stage creates a hard limit for site-focused strategies: publishing the same claim across ten pages of your website still counts as just one supporting voice. No amount of internal duplication boosts the number of independent confirmations the AI cares about.



Building Cross-Domain Authority


Breaking through this validation barrier requires a fundamental shift in approach. Rather than focusing solely on optimising your own digital properties, successful companies are building systematic programs that cultivate independent confirmations across authoritative domains.


01 Monitoring

Establish baseline visibility into the AI-generated answers by running industry-relevant questions that potential customers might ask. Documenting which companies, sources, and publications appear consistently in responses allows identifying trends into the types of sources being referenced and measuring your progress over time. The goal is not vanity metrics but creating a working map of the evidence AI systems actually use in your vertical.


02 Source Intelligence

Profile the authoritative sources that show up in your monitoring to understand what each domain values. Understanding these preferences, whether that be original datasets, short expert commentary or long-form research, is crucial for positioning your insights effectively and therefore gives you the best chance of being cited independently.


03 Systematic Experimentation

Design targeted experiments to establish thought leadership across identified authoritative sources. This involves creating content specifically designed to be referenced by independent publications rather than optimising your own website. This could be in the form of original research, proprietary frameworks that consultants and analysts might adopt, or educational resources that academic institutions could cite. For every experiment, measuring distinct domain references, assessing the trust quality of those domains, and tracking changes in AI citation patterns provides actionable insights for refinement.



The New Definition of Visibility


The fundamental shift here is to stop treating visibility as a property of your site alone and start treating it as a property of the network of independent, referenceable confirmations that AI re-rankers actually count.


As AI systems handle growing shares of business research, invisibility in AI answers increasingly means invisibility to prospects. Companies succeeding in AI visibility understand that optimisation and authority operate on different planes. They maintain technical excellence while building something more valuable: independent validation of their expertise across multiple domains.


These organisations focus on making their insights referenceable rather than just discoverable and cultivate mentions through genuine thought leadership. SEO and GEO get you into the candidate pool and help you survive initial processing, but independent authority determines whether AI systems trust you enough to quote.


In this new landscape, the winners won't be those with the most optimised websites, they'll be the organisations whose expertise is so well-established across independent sources that AI systems naturally recognise them as trusted authorities.


Contact Author:

jasmine@stupidhuman.ai



Access Full Research

For free access to full research, request below.

Other research

Competitive Advantage Through Regulation: Europe’s Vision for Responsible AI

University of Oxford Joins Stupid Human's Research Program

Research Paper: AI Assistants Push Brand and Government Biases Onto Users

AI on the run: AI's running shoe brand preference

AI on the rocks: AI's Brand Choices in Spirits

bottom of page