The common assumption is that a passing origin-level CrUX assessment means the entire site has good page experience scores for ranking purposes. This is dangerously wrong for large sites. The origin-level score is a traffic-weighted average. A site with a fast homepage and fast blog generating 60% of traffic can show “good” at the origin level while product detail pages, category pages, or checkout flows fail individually. Since Google uses URL-level or URL-group-level data when available, the failing URL groups are evaluated on their own poor performance, not the origin’s passing aggregate. The diagnostic challenge is identifying which URL groups fail and why.
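The masking arithmetic can be sketched in a few lines. The numbers below are hypothetical, but the mechanism is the one described above: CrUX considers a metric "good" at the origin when roughly 75% or more of all page loads meet the threshold, so fast high-traffic templates can carry failing low-share templates over the line.

```python
# Sketch of the masking effect with hypothetical traffic shares and
# good-LCP rates per template. An assessment passes when the share of
# "good" loads reaches ~75% (equivalent to p75 meeting the threshold).

templates = {
    # template: (share of Chrome traffic, fraction of loads with good LCP)
    "homepage": (0.25, 0.97),
    "blog":     (0.35, 0.95),
    "product":  (0.30, 0.45),   # fails badly on its own
    "category": (0.10, 0.50),   # also failing
}

# Origin aggregate = traffic-weighted sum of each template's good-load share.
origin_good_share = sum(share * good for share, good in templates.values())
print(f"origin-wide good-LCP share: {origin_good_share:.2%}")

for name, (share, good) in templates.items():
    verdict = "good" if good >= 0.75 else "failing"
    print(f"{name:9s} good-LCP share {good:.0%} -> {verdict}")
```

Here the origin clears 75% and passes, while the product and category groups, evaluated on their own data, clearly fail.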
Step 1: Extract URL-Group-Level CrUX Data
Google Search Console’s Core Web Vitals report is the primary tool for identifying URL-group-level failures. The report groups URLs with similar performance characteristics and labels each group as “Poor,” “Needs Improvement,” or “Good” for each metric (LCP, CLS, INP). Each group includes example URLs and a count of affected pages.
Export the full URL group list from Search Console. For each failing group, note which specific metric fails (a group may fail LCP but pass CLS and INP, or fail multiple metrics). This metric-specific failure identification is essential because each metric has different root causes and different fixes.
Supplement Search Console data with the CrUX API for individual URL-level data on high-traffic pages within failing groups. The API returns metric distributions for specific URLs, providing per-page granularity that Search Console’s group-level view does not. PageSpeed Insights also provides URL-level CrUX data alongside Lighthouse lab results, enabling side-by-side comparison of field and lab performance for specific pages.
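A minimal CrUX API client for this step might look like the following sketch. It posts to the real `records:queryRecord` endpoint; the API key and example URL are placeholders, and the `p75` helper assumes the standard response shape (`record.metrics.<metric>.percentiles.p75`).

```python
# Hedged sketch: query the CrUX API for URL-level field data.
# Requires an API key from the Google Cloud console (placeholder below).
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def query_crux(api_key: str, url: str, form_factor: str = "PHONE") -> dict:
    """POST a queryRecord request for a single URL; returns the parsed response."""
    body = json.dumps({"url": url, "formFactor": form_factor}).encode()
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def p75(response: dict, metric: str) -> float:
    """Extract the p75 value for a metric (e.g. 'largest_contentful_paint')."""
    return float(response["record"]["metrics"][metric]["percentiles"]["p75"])

# Usage (requires a real API key):
# rec = query_crux("YOUR_KEY", "https://example.com/product/123")
# print(p75(rec, "largest_contentful_paint"))  # milliseconds
```

Running this against a handful of high-traffic URLs from each failing group gives the per-page granularity Search Console's grouped view lacks.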
For sites with BigQuery access, the monthly CrUX BigQuery dataset provides origin-level data that can be compared against the URL-level data from the API. The BigQuery dataset does not provide URL-level data directly, but it establishes the origin baseline against which URL-level divergences can be measured.
Step 2: Map Failed URL Groups to Page Templates and Traffic Volume
Each URL group in Search Console typically corresponds to a page template — a set of pages sharing the same rendering logic, layout structure, resource loading patterns, and third-party script configuration. The mapping from URL group to template reveals the engineering scope of the fix.
Common template-to-failure mappings include:
- Product detail pages failing LCP due to large hero images loaded without fetchpriority, unoptimized image formats, or slow database-backed server rendering.
- Category and listing pages failing CLS due to dynamically injected product cards, pagination elements that shift content, or ad slots within the product grid.
- Checkout and form pages failing INP due to heavy form validation libraries, payment provider scripts, and address autocomplete widgets competing for main-thread time.
- Article and blog pages failing CLS due to ad slot injection, lazy-loaded social embeds, or web font loading shifts.
Attach business metrics to each template: organic traffic volume, conversion rate, revenue per session, and average ranking position for target keywords. This business-value mapping transforms the performance diagnostic into a prioritized remediation plan. A failing URL group containing the site’s top 100 revenue-generating product pages demands immediate engineering attention. A failing group of legacy archive pages from 2018 can be deprioritized.
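One way to operationalize the prioritization is to rank failing groups by revenue at risk. The template names, session counts, and per-session revenue below are hypothetical placeholders for your own analytics export.

```python
# Sketch: rank failing URL groups by business impact.
# All figures are hypothetical; substitute your analytics data.

failing_groups = [
    {"template": "product-detail", "metric": "LCP",
     "sessions": 120_000, "revenue_per_session": 1.80},
    {"template": "category", "metric": "CLS",
     "sessions": 60_000, "revenue_per_session": 0.90},
    {"template": "legacy-archive", "metric": "LCP",
     "sessions": 2_000, "revenue_per_session": 0.05},
]

# Monthly organic revenue flowing through each failing template.
for g in failing_groups:
    g["monthly_revenue_at_risk"] = g["sessions"] * g["revenue_per_session"]

for g in sorted(failing_groups,
                key=lambda g: g["monthly_revenue_at_risk"], reverse=True):
    print(f'{g["template"]:15s} fails {g["metric"]}: '
          f'${g["monthly_revenue_at_risk"]:,.0f}/month at risk')
```

The sort order is the remediation queue: product detail pages first, legacy archives last.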
Step 3: Identify Template-Specific Performance Bottlenecks
Once the failing template is identified, the diagnostic narrows to that template’s specific performance characteristics. The workflow combines lab profiling with field data attribution.
Lab profiling: run Lighthouse and WebPageTest on 3-5 representative URLs from the failing group. Identify the LCP element, CLS shift sources, and Total Blocking Time (as an INP proxy) for each. Compare the lab results against representative URLs from passing groups on the same site. The comparison isolates template-specific differences: does the failing template load additional third-party scripts, use a larger hero image, include more complex JavaScript, or render a heavier DOM?
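The cross-template comparison can be partly automated by diffing Lighthouse JSON reports. The sketch below assumes reports saved with `lighthouse <url> --output=json`; the audit IDs (`largest-contentful-paint`, `cumulative-layout-shift`, `total-blocking-time`) are the standard Lighthouse ones, but the file names are placeholders.

```python
# Sketch: diff two Lighthouse JSON reports (failing vs. passing template).
import json

def extract(report: dict) -> dict:
    """Pull the three metrics this step cares about from a Lighthouse report."""
    audits = report["audits"]
    return {
        "LCP_ms": audits["largest-contentful-paint"]["numericValue"],
        "CLS":    audits["cumulative-layout-shift"]["numericValue"],
        "TBT_ms": audits["total-blocking-time"]["numericValue"],
    }

def compare(failing: dict, passing: dict) -> dict:
    """Positive deltas mean the failing template is worse on that metric."""
    f, p = extract(failing), extract(passing)
    return {k: f[k] - p[k] for k in f}

# Usage:
# failing = json.load(open("product-detail.report.json"))
# passing = json.load(open("homepage.report.json"))
# print(compare(failing, passing))
```

A large LCP delta with a small TBT delta points toward images or server timing; the reverse points toward script weight on the failing template.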
Field data attribution: deploy the web-vitals JavaScript library with the attribution build on the failing template. The attribution data reveals the specific LCP sub-parts (TTFB, resource load delay, resource load duration, element render delay), the specific elements causing CLS shifts (via the sources array in LayoutShift entries), and the specific interaction targets causing high INP. This field-specific data explains why real users on real devices experience poor performance that lab testing may not fully reproduce.
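Once attribution beacons are being collected, the LCP sub-parts can be summarized server-side. The sketch below assumes each beacon is a JSON object carrying the web-vitals attribution sub-part fields under the names shown; adapt the field names to whatever your collection pipeline actually stores.

```python
# Sketch: aggregate LCP attribution beacons from the web-vitals attribution
# build. Beacon shape is an assumption; the two beacons below are synthetic.
import statistics

SUBPARTS = ["timeToFirstByte", "resourceLoadDelay",
            "resourceLoadDuration", "elementRenderDelay"]

def summarize_lcp(beacons: list[dict]) -> dict:
    """Median of each LCP sub-part: shows where field time is actually spent."""
    return {part: statistics.median(b["attribution"][part] for b in beacons)
            for part in SUBPARTS}

beacons = [
    {"name": "LCP", "value": 4100, "attribution":
        {"timeToFirstByte": 900, "resourceLoadDelay": 1400,
         "resourceLoadDuration": 1200, "elementRenderDelay": 600}},
    {"name": "LCP", "value": 3800, "attribution":
        {"timeToFirstByte": 800, "resourceLoadDelay": 1500,
         "resourceLoadDuration": 1000, "elementRenderDelay": 500}},
]
print(summarize_lcp(beacons))
# A dominant resourceLoadDelay points at late discovery of the hero image
# (e.g. a missing preload or fetchpriority hint), not at slow servers.
```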
Cross-template comparison: the most diagnostic technique is comparing resource loading waterfalls between a failing template URL and a passing template URL on the same origin. The differences in the waterfall — additional script requests, larger image payloads, longer server processing time, more render-blocking resources — directly identify the bottleneck.
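The waterfall diff can be done programmatically from HAR exports (both WebPageTest and Chrome DevTools can save HAR files). This is a sketch with placeholder file names; it profiles each capture by request count and body bytes per MIME type so the extra payloads on the failing template stand out.

```python
# Sketch: compare two HAR captures (failing vs. passing template URL).
import json
from collections import Counter

def har_profile(har: dict) -> tuple[Counter, Counter]:
    """Request counts and body bytes per MIME type for one HAR capture."""
    counts, bytes_ = Counter(), Counter()
    for entry in har["log"]["entries"]:
        mime = entry["response"]["content"].get("mimeType", "other").split(";")[0]
        counts[mime] += 1
        # bodySize is -1 when unknown (e.g. served from cache); ignore those.
        bytes_[mime] += max(entry["response"].get("bodySize", 0), 0)
    return counts, bytes_

# Usage:
# failing = json.load(open("product.har"))
# passing = json.load(open("home.har"))
# fc, fb = har_profile(failing)
# pc, pb = har_profile(passing)
# for mime in set(fc) | set(pc):
#     print(mime, fc[mime] - pc[mime], "extra requests,",
#           fb[mime] - pb[mime], "extra bytes")
```

Positive deltas in script requests or image bytes are exactly the "additional script requests, larger image payloads" differences described above.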
Validating the Masking Effect and Low-Traffic Data Gaps
Confirm the masking effect by analyzing traffic distribution across templates. If the origin passes CWV, identify which pages drive that passing score:
- Export Google Analytics page-level traffic data for the past 28 days (matching CrUX’s collection window).
- Categorize each page by template type.
- Calculate each template’s share of total site traffic.
If 60-70% of total Chrome traffic goes to the homepage and blog (both fast), and 30-40% goes to product and category pages (slow), the origin aggregate is dominated by the fast templates. The origin passes, but the product and category pages fail at their group level. Google evaluates the product pages using their group-level data (failing), not the origin data (passing).
This masking pattern is extremely common on large sites with heterogeneous page types: news sites where the homepage and article pages are fast but the archives and tag pages are slow; e-commerce sites where the homepage and top categories are optimized but the long-tail product pages are not; SaaS sites where the marketing pages are fast but the documentation and support pages are slow.
The diagnostic confirmation requires establishing that the passing origin score and the failing group scores are both accurate representations of their respective populations, not measurement artifacts. Verify by checking individual URL data points in PageSpeed Insights for representative pages from both passing and failing groups.
Low-traffic URL groups that do not generate sufficient Chrome traffic for group-level CrUX data are evaluated at the origin level. This creates an asymmetry with significant strategic implications:
High-traffic failing pages receive their own URL-level or group-level data showing poor performance. Google evaluates them on their actual (poor) performance. These pages have a negative page experience signal despite the origin passing.
Low-traffic failing pages without their own data inherit the origin’s passing assessment. Google evaluates them on the origin’s (good) performance. These pages receive a positive page experience signal despite their actual poor performance.
This asymmetry means that increasing traffic to previously low-traffic pages — through SEO improvements, content promotion, or paid campaigns — can trigger a counterintuitive outcome. As the page gains traffic and crosses the CrUX URL-level data threshold, it transitions from origin-level evaluation (passing, inherited) to URL-level evaluation (failing, actual). The page’s ranking signal worsens even though its actual performance has not changed.
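Which evaluation a page currently receives can be checked directly, because the CrUX API responds with HTTP 404 when a URL lacks sufficient data, which is the same condition under which the page inherits the origin assessment. The sketch below separates the fallback logic (testable on its own) from the HTTP plumbing; the API key is a placeholder.

```python
# Sketch: determine whether a page is evaluated on URL-level data or
# inherits the origin-level assessment.
import json
import urllib.error
import urllib.request

ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def effective_record(fetch, url: str, origin: str):
    """Try URL-level data first; fall back to origin, mirroring how pages
    without their own CrUX data inherit the origin assessment.
    `fetch(params)` returns a record dict or raises LookupError when empty."""
    try:
        return "url", fetch({"url": url})
    except LookupError:
        return "origin", fetch({"origin": origin})

def crux_fetch_factory(api_key: str):
    """Real fetcher against the CrUX API; maps HTTP 404 to LookupError."""
    def fetch(params: dict) -> dict:
        req = urllib.request.Request(
            f"{ENDPOINT}?key={api_key}",
            data=json.dumps(params).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:  # 404 = insufficient data at this level
                raise LookupError("no CrUX data") from err
            raise
    return fetch

# Usage (requires a real API key):
# fetch = crux_fetch_factory("YOUR_KEY")
# level, record = effective_record(fetch, "https://example.com/p/123",
#                                  "https://example.com")
```

Re-running this check monthly on pages being promoted flags the moment a page crosses the threshold and flips from the inherited origin score to its own data.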
Anticipate this transition when scaling traffic to underperforming templates. Before promoting low-traffic pages expected to cross the URL-level data threshold, ensure their template’s performance is optimized. Otherwise, the traffic growth campaign may produce organic ranking declines that offset the traffic gains from other channels.
Can a fast homepage mask poor CWV performance across the rest of the site at the origin level?
Yes. The origin-level CrUX score aggregates all page views weighted by traffic volume. A high-traffic homepage with excellent CWV can dominate the origin aggregate, producing a passing score even when product pages, category pages, or blog posts fail individually. This masking effect is why URL-group-level analysis is essential for sites with heterogeneous page templates.
Does Search Console’s Core Web Vitals report show URL-group-level data or origin-level data?
Search Console displays URL-group-level CWV data, grouping URLs by similar performance characteristics and page structure. This is more granular than origin-level but less specific than individual URL data. The report categorizes URLs into groups sharing similar CWV status (Good, Needs Improvement, Poor), making it the most accessible tool for identifying which page templates have performance problems.
Can improving one page template’s CWV pull the origin score from failing to passing?
Yes, if that template accounts for a sufficient share of total origin traffic. Origin-level CrUX is traffic-weighted, so fixing the highest-traffic template produces disproportionate impact on the aggregate. A template serving 40% of all page views that moves from poor to good CWV can shift the entire origin assessment, even if lower-traffic templates remain unoptimized.