You noticed organic CTR dropping across your top-performing queries over the past quarter. Your rankings held steady and impressions remained stable, but clicks fell 20%. The immediate assumption is AI Overviews, but during the same period Google also expanded People Also Ask boxes, adjusted featured snippet formatting, and rolled out new ad layouts. Seer Interactive’s September 2025 data showed that even queries without AI Overviews experienced a 41% CTR decline year-over-year, evidence that AI Overview expansion is not the only force suppressing organic clicks. Attributing a CTR decline to AI Overviews without isolating them from concurrent SERP changes produces a misdiagnosis that leads to the wrong strategic response. The diagnostic process requires separating AI Overview impact from the noise of continuous SERP evolution.
Step one: segment queries by confirmed AI Overview presence versus absence
The foundation of any CTR decline diagnosis is segmenting your query portfolio into two groups: queries where AI Overviews are confirmed present and queries where they are not. This segmentation requires third-party SERP monitoring tools because Google Search Console does not distinguish between AI Overview and non-AI-Overview impressions.
Major SEO platforms now include AI Overview tracking in their SERP feature reports. In Ahrefs’ Site Explorer, the Organic Keywords report includes an AI Overview filter in the SERP features menu. Semrush’s Position Tracking tool flags AI Overview presence for tracked keywords. seoClarity provides dedicated AI Overview monitoring across large keyword sets.
The segmentation methodology involves exporting your Search Console query data and matching each query against SERP monitoring data to tag AI Overview presence. For accuracy, use daily SERP checks rather than weekly snapshots, as AI Overviews appear intermittently for some queries, rotating in and out based on Google’s confidence thresholds.
Once segmented, compare CTR trends for the two groups over the same time period. If the AI Overview group shows a steeper CTR decline than the non-AI-Overview group, AI Overviews are a contributing factor. If both groups decline at similar rates, the primary cause is something other than AI Overviews, such as broader user behavior shifts, competitive changes, or other SERP features.
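The segment-and-compare step can be sketched in a few lines of Python. This is a minimal illustration, not a tool integration: `gsc_rows` stands in for a Search Console export and `aio_queries` for the set of queries your SERP monitor flagged as showing an AI Overview, both hypothetical structures.

```python
# Minimal segmentation sketch. `gsc_rows` is an illustrative export of
# (query, clicks, impressions) tuples; `aio_queries` is the set of
# queries a SERP monitor flagged as showing an AI Overview.

def segment_ctr(gsc_rows, aio_queries):
    totals = {"aio": [0, 0], "no_aio": [0, 0]}  # [clicks, impressions]
    for query, clicks, impressions in gsc_rows:
        bucket = "aio" if query in aio_queries else "no_aio"
        totals[bucket][0] += clicks
        totals[bucket][1] += impressions
    return {k: (c / i if i else 0.0) for k, (c, i) in totals.items()}

rows = [("tax brackets 2025", 12, 2000), ("crm pricing", 40, 1000)]
print(segment_ctr(rows, aio_queries={"tax brackets 2025"}))
# -> {'aio': 0.006, 'no_aio': 0.04}
```

Running the same computation over consecutive time windows produces the two CTR trend lines the comparison calls for.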
Sample size determines whether the comparison is statistically meaningful. Fewer than 100 queries per segment produces unreliable comparisons. For enterprise sites tracking thousands of queries, segment sizes are typically sufficient. For smaller sites, combine queries into topic clusters and compare cluster-level CTR trends between AI Overview-affected and unaffected clusters.
Seer Interactive’s data illustrates why this segmentation is critical. Their study found that AI Overview queries had an organic CTR of 0.61% compared to 1.62% for non-AI-Overview queries. But non-AI-Overview queries were also declining, dropping 41% year-over-year. Without the segmentation, a practitioner looking at blended CTR would attribute the entire decline to AI Overviews when nearly half the decline affects queries with no AI Overview at all.
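A small worked example shows how blended CTR hides this. It uses the Seer figures above (0.61% CTR with an AI Overview, 1.62% without, and a 41% year-over-year decline on the non-AIO segment); the 50/50 impression split is an invented input that exists only for illustration.

```python
# Blended-CTR decomposition sketch. CTR figures come from the Seer data
# cited in the text; the 50/50 impression split is an invented input.

def blended_ctr(segments):
    """segments: list of (impression_share, ctr) pairs."""
    return sum(share * ctr for share, ctr in segments)

# Last year: no AI Overviews yet; back out the non-AIO CTR before its
# 41% year-over-year decline.
prior_non_aio_ctr = 0.0162 / (1 - 0.41)
last_year = blended_ctr([(1.0, prior_non_aio_ctr)])

# This year: assume half of impressions now face an AI Overview.
this_year = blended_ctr([(0.5, 0.0061), (0.5, 0.0162)])
print(f"blended decline: {1 - this_year / last_year:.0%}")
# -> blended decline: 59%
```

A practitioner reading only the blended 59% drop would credit all of it to AI Overviews, even though the 41% non-AIO decline would have occurred regardless.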
Step two: control for concurrent SERP feature changes on the same queries
A query that gained an AI Overview may have simultaneously gained additional SERP features. People Also Ask expansions, featured snippet format changes, shopping carousels, local pack insertions, and ad layout modifications all independently suppress organic CTR. Attributing the entire CTR decline on a query to the AI Overview when three other features also appeared produces an inflated impact estimate.
The multi-feature control methodology involves tracking all SERP feature changes alongside AI Overview introduction for each query. SERP monitoring tools report which features are present for each tracked keyword. By logging feature changes over time, you can identify queries where the AI Overview was the only change versus queries where multiple features changed simultaneously.
Amsive’s research into AI Overview CTR found that keywords triggering both AI Overviews and featured snippets saw the steepest CTR declines, averaging -37%, a larger drop than for queries triggering an AI Overview alone. This SERP feature stacking effect means that multi-feature queries will overstate AI Overview impact if you do not control for the additional features.
The isolation technique uses queries where only the AI Overview changed as the cleanest diagnostic signal. These “AI-Overview-only change” queries provide the most reliable estimate of AI Overview-specific CTR impact. Queries where multiple features changed simultaneously should be flagged as ambiguous and excluded from the primary diagnostic calculation.
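The isolation rule can be encoded directly against two feature snapshots per query. The feature names and snapshot format below are illustrative, not any particular tracker's schema.

```python
# Classify each query by what changed between two SERP snapshots. A query
# is a clean diagnostic signal only when the AI Overview was the sole
# feature gained or lost.

def classify_change(before, after):
    gained, lost = after - before, before - after
    changed = gained | lost
    if changed == {"ai_overview"}:
        return "aio_only"       # cleanest diagnostic signal
    if "ai_overview" in changed:
        return "ambiguous"      # AIO changed alongside other features
    return "no_aio_change"

before = {"featured_snippet"}
after = {"featured_snippet", "ai_overview", "people_also_ask"}
print(classify_change(before, after))  # -> ambiguous
```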
For queries with concurrent feature changes, estimating per-feature CTR impact requires historical CTR curves for each feature type. Advanced Web Ranking publishes CTR curves segmented by SERP feature presence. By applying these baseline CTR adjustments for non-AI-Overview features, you can estimate the residual CTR decline attributable to the AI Overview specifically. This is an approximation, not a precise measurement, but it narrows the attribution range significantly.
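The residual estimate might look like the following sketch. The per-feature suppression factors are placeholders, not measured values; in practice you would substitute baselines derived from published feature-segmented CTR curves such as Advanced Web Ranking's.

```python
# Residual attribution sketch. The suppression factors below are
# invented placeholders, NOT measurements -- replace them with values
# derived from published feature-segmented CTR curves.

BASELINE_SUPPRESSION = {       # assumed fractional CTR reduction per feature
    "people_also_ask": 0.05,
    "featured_snippet": 0.10,
}

def residual_aio_decline(observed_decline, other_changed_features):
    """Subtract the expected impact of non-AIO features from the
    observed decline; the remainder approximates the AIO share."""
    expected_other = sum(
        BASELINE_SUPPRESSION.get(f, 0.0) for f in other_changed_features
    )
    return max(observed_decline - expected_other, 0.0)

# A query lost 37% of its CTR and also gained a featured snippet:
print(round(residual_aio_decline(0.37, ["featured_snippet"]), 2))  # -> 0.27
```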
Step three: apply seasonal and trend baselines to distinguish cyclical decline from structural decline
CTR fluctuates with seasonal search behavior, competitive entry, and query reformulation trends. A CTR decline that begins in Q4 may reflect holiday shopping pattern shifts rather than AI Overview expansion. Without a seasonal baseline, structural AI Overview impact becomes indistinguishable from cyclical variation.
The baseline methodology uses year-over-year CTR comparison adjusted for SERP feature changes. Pull the same query set’s CTR for the current period and the same period one year prior. If AI Overviews were not present for those queries one year ago, the year-over-year delta on AI-Overview-present queries, minus the year-over-year delta on AI-Overview-absent queries, isolates the AI Overview contribution.
This difference-in-differences approach accounts for any site-wide or SERP-wide trends affecting all queries. If your non-AI-Overview queries declined 15% year-over-year and your AI-Overview queries declined 45% year-over-year, the AI Overview-attributable decline is approximately 30 percentage points. The remaining 15 percentage points reflect broader trends affecting all queries.
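The calculation is simple enough to express directly. A sketch, with illustrative CTR values chosen to reproduce 45% and 15% year-over-year deltas:

```python
# Difference-in-differences sketch: treated = AIO-present queries,
# control = AIO-absent queries. CTR inputs are illustrative.

def yoy_delta(ctr_now, ctr_prior):
    """Fractional year-over-year CTR decline."""
    return (ctr_prior - ctr_now) / ctr_prior

def did_estimate(treated, control):
    """treated/control: (ctr_now, ctr_prior) pairs. Returns the
    decline attributable to AI Overviews, as a fraction."""
    return yoy_delta(*treated) - yoy_delta(*control)

# Treated CTR fell 2.0% -> 1.1% (45% decline); control 2.0% -> 1.7% (15%).
print(round(did_estimate((0.011, 0.020), (0.017, 0.020)), 2))  # -> 0.3
```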
Constructing a seasonal CTR baseline requires at least 12 months of historical data. For queries with strong seasonal patterns, such as tax-related or holiday-related queries, use two or three years of data to establish a reliable seasonal curve. Compare the current period’s deviation from the seasonal baseline against the expected AI Overview impact range from published studies.
The statistical test for distinguishing structural from cyclical decline involves tracking whether the CTR decline persists beyond the seasonal pattern. Cyclical declines reverse when the seasonal cycle turns. Structural declines persist through full seasonal cycles. If AI-Overview-affected queries show sustained CTR depression through a full seasonal cycle while non-affected queries recover, the structural diagnosis is confirmed.
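The persistence test reduces to checking whether observed CTR stays below the seasonal baseline across a full cycle. A sketch with illustrative 12-month series and an assumed 10% materiality threshold:

```python
# Structural-vs-cyclical check: treat a decline as structural only if
# CTR sits below the seasonal baseline by more than `threshold`
# (a fraction) in every month of a full cycle. Series are illustrative.

def structural_decline(monthly_ctr, seasonal_baseline, threshold=0.10):
    return all(
        (base - actual) / base > threshold
        for actual, base in zip(monthly_ctr, seasonal_baseline)
    )

baseline = [0.020] * 12      # expected seasonal CTR, Jan..Dec
observed = [0.013] * 12      # sustained 35% depression
print(structural_decline(observed, baseline))  # -> True
```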
Step four: validate with device-level and geographic segmentation where AI Overview rollout varies
AI Overviews rolled out at different rates across devices, countries, and query categories, creating natural experiments for isolating their impact. Comparing CTR trends for the same queries across segments with different AI Overview exposure levels provides an additional validation layer.
Device segmentation exploits the fact that AI Overviews expanded on mobile significantly faster than on desktop in some categories. Advanced Web Ranking data shows AI Overview frequency on mobile increased 474.9% year-over-year. If the same query shows a steeper CTR decline on mobile than desktop, and mobile has higher AI Overview presence for that query, the correlation strengthens the AI Overview attribution.
Geographic segmentation uses the staggered international rollout of AI Overviews. AI Overviews launched in the US first and expanded to 200+ countries over time. Comparing CTR for the same English-language queries between a market with full AI Overview deployment and one with limited deployment isolates the AI Overview variable while holding content quality and ranking constant.
Query category segmentation leverages the fact that AI Overviews appear at different rates across topic areas. Health, finance, and technology queries generally see higher AI Overview frequency than niche B2B queries. Comparing CTR trends between high-AI-Overview-frequency categories and low-frequency categories within the same site provides an internal control.
Each segmentation dimension adds a validation layer. If device, geographic, and category segmentation all point to AI Overviews as the primary contributor, the diagnostic confidence increases. If one dimension contradicts the others, additional investigation is warranted, as the conflicting signal may indicate a confounding factor that the primary analysis missed.
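A trivial consistency check captures this logic; the dimension names are illustrative.

```python
# Cross-dimension validation sketch. Each entry records whether that
# segmentation pointed to AI Overviews as the primary contributor.

def diagnostic_confidence(signals):
    agree = sum(signals.values())
    if agree == len(signals):
        return "high"
    if agree == 0:
        return "low"
    return "investigate"  # conflicting signals suggest a confounder

print(diagnostic_confidence({"device": True, "geo": True, "category": False}))
# -> investigate
```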
The diagnostic limitation: perfect attribution is impossible without Google-provided AI Overview impression data
No current diagnostic methodology can definitively attribute a specific percentage of CTR decline to AI Overviews versus other factors. All methods produce estimates with confidence intervals, not precise measurements. This limitation is structural, not methodological: Google Search Console does not provide AI Overview impression data separately.
The core gap is that Search Console aggregates all impressions without indicating whether the user saw an AI Overview on the SERP where the impression occurred. Without this data, every diagnostic approach relies on matching Search Console data against third-party SERP monitoring, which introduces timing mismatches, sampling differences, and coverage gaps.
Third-party SERP monitors check keyword presence periodically, typically daily, from specific locations. AI Overviews can appear for a query from one location and not another, or appear at one time of day and not another. A keyword flagged as having an AI Overview in the tool’s daily check may not have had one for the majority of impressions that Search Console recorded that day. This creates false-positive AI Overview attribution.
The practical accuracy of current diagnostic methods places AI Overview-specific CTR impact within a range rather than a point estimate. For most implementations, the attribution range spans approximately plus or minus 10-15 percentage points around the estimated impact. A diagnosis suggesting “AI Overviews caused a 30% CTR decline” actually means “AI Overviews likely caused a 15-45% CTR decline, with the remaining decline attributable to other factors.”
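When reporting results, it helps to mechanically widen the point estimate into a range. A sketch using the plus-or-minus-15-percentage-point margin described above:

```python
# Convert a point estimate into the attribution range the uncertainty
# warrants. The default margin reflects the +/-15 percentage points
# discussed in the text; results are clamped to the [0, 1] interval.

def attribution_range(point_estimate, margin=0.15):
    low = max(point_estimate - margin, 0.0)
    high = min(point_estimate + margin, 1.0)
    return round(low, 2), round(high, 2)

print(attribution_range(0.30))  # -> (0.15, 0.45)
```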
For definitive attribution, Google would need to provide three data points currently missing from Search Console: whether an AI Overview appeared for each impression, whether the user interacted with the AI Overview before clicking (or not clicking) an organic result, and whether the page was cited within the AI Overview. Until this data exists, practitioners should treat all AI Overview CTR attribution as directional rather than precise, and frame diagnostic conclusions accordingly.
What minimum sample size is needed to produce a reliable CTR decline diagnosis for AI Overview attribution?
A minimum of 100 queries per segment (AI Overview present versus absent) is required for statistically meaningful comparison. For smaller sites that cannot reach this threshold at the individual query level, aggregate queries into topic clusters and compare cluster-level CTR trends between affected and unaffected groups. Enterprise sites tracking thousands of queries typically have sufficient volume, but niche sites may need 60-90 days of data accumulation before segmented analysis becomes reliable.
Can Google Search Console data alone diagnose AI Overview CTR impact without third-party SERP monitoring tools?
No. Google Search Console does not distinguish between impressions that included an AI Overview and those that did not. Without this segmentation, all CTR analysis reflects blended data that conflates AI Overview impact with other SERP changes. Third-party SERP monitoring tools such as Ahrefs, Semrush, or seoClarity are required to tag queries by AI Overview presence and enable the differential analysis that isolates AI Overview-specific CTR effects.
How frequently should AI Overview CTR diagnostics be repeated to distinguish structural decline from temporary fluctuation?
Run the full diagnostic quarterly at minimum, with monthly monitoring of key segments between full analyses. Structural AI Overview CTR decline persists through complete seasonal cycles, while temporary fluctuations reverse within one to two months. Tracking the same segmented query sets across four consecutive quarters provides the longitudinal data needed to confirm whether the decline is permanent or cyclical, and whether mitigation efforts are producing measurable recovery.
Sources
- Seer Interactive: AIO Impact on Google CTR, September 2025 Update — Segmented CTR data showing 61% decline for AI Overview queries and 41% decline for non-AI-Overview queries
- Amsive: Google AI Overviews CTR Study — SERP feature stacking analysis showing compounded CTR decline when AI Overviews coincide with featured snippets
- Ahrefs: AI Overviews Reduce Clicks by 58% (December 2025 Update) — Position-level CTR impact measurement methodology and branded versus non-branded segmentation