SEO & AI: Optimizing for answers, not just rankings
AI Overviews have reshuffled the SERP entirely. The question is no longer just "can Google find me?" — it's "will the model cite me?"
In May 2023, Google quietly began rolling out what it called SGE, the Search Generative Experience. In May 2024 it was renamed AI Overviews, and by 2025 it was live for over a billion users worldwide. The implications for SEO were immediate and, for many site owners, brutal.
Queries that once reliably funneled clicks to the top organic result now get answered directly in the SERP — a synthesized paragraph pulled from multiple sources, sitting above the fold, above paid ads, above everything. For informational queries, which had always been the lifeblood of content-driven businesses, organic click-through rates dropped sharply. Some studies reported falls of 15–25% on affected query types within months of full rollout.
The old playbook — target a keyword, write a 2,000-word post, earn a position — still works, but it's no longer sufficient. The game has a new layer: you now need to be the source the model reaches for.
How AI Overviews actually selects sources
Google hasn't published the weights, but through large-scale testing and academic analysis, a reasonably clear picture has emerged. AI Overviews strongly favors sources that score well on what the industry has taken to calling E-E-A-T signals: Experience, Expertise, Authoritativeness, and Trustworthiness. These aren't new. E-A-T has appeared in Google's Quality Rater Guidelines since 2014, and the extra Experience "E" was added in late 2022. But they've become far more load-bearing now that they feed a generative system, not just a ranking signal.
The model appears to favor content that is direct and structured. Fluffy introductions, buried answers, and SEO padding that stretches a 300-word answer across 2,000 words of prose are working against you. The system is skimming for extractable facts and clear declarative statements. If your answer is buried in paragraph seven, the model may find someone else's answer in paragraph one and cite them instead.
The new anatomy of a well-optimized page
If you're writing for AI Overviews citation, structural clarity is everything. The most defensible pattern looks like this: lead with a direct answer, follow with supporting evidence or nuance, use headings that mirror the exact phrasing a user would search. Schema markup — specifically FAQ, HowTo, and Article — remains relevant because it gives the model explicit semantic hooks to pull from.
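To make the schema point concrete: FAQ markup is just a JSON-LD payload embedded in the page. The helper below is an illustrative sketch, not part of any real CMS or plugin; the function name and page paths are invented for the example.

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD payload from (question, answer) pairs.

    Returns the JSON string you would embed in a
    <script type="application/ld+json"> tag in the page head.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)

print(faq_jsonld([
    ("What is E-E-A-T?",
     "Experience, Expertise, Authoritativeness, and Trustworthiness."),
]))
```

The point isn't the markup itself; it's that each question/answer pair becomes an explicit, machine-readable hook, which is exactly the kind of extractable structure the model is skimming for.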
Internal linking structure matters more than it used to as well. The model tends to favor authoritative pages that are well-referenced within a site's own content graph. A single standalone article, however good, is less likely to be cited than a page that sits at the center of a topic cluster where multiple surrounding pages point to it as a canonical reference.
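The topic-cluster idea above can be checked mechanically: treat your internal links as a directed graph and count in-links per page. This is a minimal sketch with a made-up link graph; the URLs are hypothetical and a real audit would pull edges from a site crawl.

```python
from collections import Counter

def inbound_counts(links):
    """Count internal in-links per page from (source, target) edge pairs."""
    return Counter(target for _source, target in links)

# Hypothetical internal link graph: each edge points from the linking
# page to the page it links to.
edges = [
    ("/blog/schema-basics", "/guides/eeat"),
    ("/blog/topic-clusters", "/guides/eeat"),
    ("/blog/zero-click", "/guides/eeat"),
    ("/guides/eeat", "/blog/schema-basics"),
]

counts = inbound_counts(edges)
# "/guides/eeat" collects three inbound links: it sits at the center
# of the cluster, the "canonical reference" pattern described above.
print(counts.most_common(1))  # → [('/guides/eeat', 3)]
```

A standalone article would show up here with zero or one in-links; a true cluster hub accumulates many, which is the shape you want your canonical pages to have.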
"The best SEO content in 2026 reads like it was written for a knowledgeable human — because that's exactly what the model is trying to simulate when it synthesizes an answer."
Zero-click doesn't mean zero value
The instinct to treat AI Overviews as purely adversarial misses something important. Being cited in an Overview — even when the user doesn't click — is a brand impression. Users see your domain name in the citation panel. For businesses selling higher-consideration products or services, that exposure compounds over time. The click-through loss is real, but it isn't the whole story.
There's also evidence that on queries where an AI Overview appears, the clicks that do happen further down the page carry higher commercial intent. The model handles the informational layer; users who still want to go deeper — to compare, to buy, to verify — are more qualified when they do click. Early conversion data from several content-heavy publishers suggests that while overall clicks fell, revenue per click rose meaningfully on the traffic that remained.
What this means practically
The content that survives — and thrives — in this new environment has a few things in common. It is genuinely authoritative: written by someone with real expertise, citing primary sources, making falsifiable claims. It is structurally generous: easy for a model to parse, with answers near the top and supporting detail below. And it is updated: stale content, even if historically well-ranked, loses ground to fresher sources as models increasingly weight recency for rapidly changing topics.
The sites that treated SEO as a keyword insertion exercise and outsourced content creation wholesale are the ones facing the steepest correction. The sites that treated SEO as a distribution layer for genuine expertise are, by and large, holding up.
That's not a coincidence. It's the system working as intended — finally.