You improved Core Web Vitals on your top 50 landing pages to well within the “good” range. Your origin-level CrUX data still shows the site failing. You expected the improvements to lift the overall assessment, but the origin-level score reflects all pages on the domain, including the hundreds of unoptimized pages dragging the aggregate down. Understanding the difference between origin-level and URL-level CrUX data — and knowing which one Google uses for ranking each page — determines whether your optimization effort actually translates into ranking signal improvement for the pages that matter.
Origin-Level and URL-Level CrUX Data Collection
CrUX origin-level data aggregates performance metrics from all pages on a single origin, defined as the combination of protocol, hostname, and port (e.g., https://www.example.com). Every page view from every opted-in Chrome user on any page within that origin contributes to a single distribution per metric. The 75th percentile of this combined distribution determines the origin’s CWV assessment.
The aggregation is traffic-weighted by design. A homepage receiving 100,000 monthly page views contributes proportionally more to the origin distribution than a long-tail product page receiving 50 views. This traffic weighting means high-traffic pages with poor performance disproportionately drag down the origin score, while low-traffic pages with excellent performance contribute minimally to the aggregate.
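The effect of traffic weighting can be sketched with hypothetical numbers (the page-view counts and LCP values below are illustrative; real CrUX percentiles come from full per-session distributions, not a single value per page):

```python
# Sketch of traffic-weighted origin aggregation (hypothetical data).
# Each page contributes samples in proportion to its traffic; the origin
# assessment is the 75th percentile of the combined distribution.

def weighted_p75(pages):
    """pages: list of (page_views, lcp_seconds) pairs, one per page.
    Expands each page's typical LCP by its traffic weight, then takes
    the nearest-rank 75th percentile of the combined samples."""
    samples = []
    for views, lcp in pages:
        samples.extend([lcp] * views)
    samples.sort()
    idx = max(0, int(0.75 * len(samples)) - 1)
    return samples[idx]

pages = [
    (100_000, 1.8),  # homepage: heavy traffic, good LCP
    (50_000, 4.5),   # slow high-traffic category page
    (50, 1.2),       # fast long-tail page: contributes almost nothing
]
print(weighted_p75(pages))  # 4.5 — the slow high-traffic page sets the p75
```

Even though two of the three pages are fast, the slow page's 50,000 views occupy the top third of the combined distribution, so the origin p75 lands on it.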
Unlike URL-level data, origin-level data is available for virtually any origin, because the minimum traffic threshold applies to the origin’s combined Chrome traffic across the 28-day window rather than to any single page. Google Search Console’s Core Web Vitals report defaults to showing origin-level data, and PageSpeed Insights displays origin-level data under the “Origin” tab. The CrUX BigQuery dataset publishes monthly origin-level data, while the CrUX API provides daily origin-level data based on the rolling 28-day window.
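A minimal sketch of an origin-level lookup against the public CrUX API (`YOUR_API_KEY` is a placeholder for a real Google API key; the response path in the comment follows Chrome’s documented record format, but treat field names as something to verify against the current API reference):

```python
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_query(origin, form_factor="PHONE"):
    """Request body for an origin-level CrUX API lookup.
    Swap "origin" for "url" to request URL-level data instead."""
    return {"origin": origin, "formFactor": form_factor}

def query_crux(api_key, body):
    """POST the query; returns the parsed JSON record.
    The API responds with 404 when the target lacks sufficient data."""
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a real API key; not executed here):
# record = query_crux("YOUR_API_KEY", build_query("https://www.example.com"))
# p75 = record["record"]["metrics"]["largest_contentful_paint"]["percentiles"]["p75"]
```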
The origin-level assessment serves as the baseline evaluation for any page on the origin that lacks its own URL-level data. This makes the origin score the default page experience signal for the majority of URLs on most sites, because most individual URLs do not receive enough Chrome traffic to generate their own URL-level data.
CrUX also provides URL-level data for individual pages that receive sufficient Chrome traffic within the 28-day collection window. Google has not published the exact traffic threshold, but industry analysis and testing suggest URL-level data becomes available when a page reaches approximately 1,000 monthly page views from eligible Chrome users on a given device type (mobile or desktop).
When URL-level data exists, it reflects only the performance of that specific page, completely independent of the origin’s aggregate performance. A fast product page on a slow origin has its own URL-level data showing good CWV, even though the origin fails. Conversely, a slow page on a fast origin has its own URL-level data exposing its poor performance, even though the origin passes.
URL-level data availability creates an asymmetry across a site. High-traffic pages (homepage, top category pages, popular articles) typically have URL-level data and are evaluated individually. Long-tail pages (individual product pages, archive content, low-traffic blog posts) lack URL-level data and inherit the origin assessment. This means optimizing high-traffic pages produces two benefits: it improves those pages’ individual URL-level assessments and it improves the origin score (since those pages contribute heavily to the traffic-weighted aggregate), which in turn improves the assessment for all low-traffic pages evaluated at origin level.
URL Grouping: The Intermediate Aggregation Layer
Between origin-level and individual URL-level data, Google Search Console implements a URL grouping system that aggregates URLs with similar performance characteristics. These groups typically correspond to page templates — all product detail pages, all category pages, all blog articles — though the grouping logic is determined by Google’s systems and is not directly configurable by site operators.
Search Console groups URLs that provide similar user experiences into page groups, then reports CWV data at the group level. The groups are labeled by example URLs and identified as “Poor,” “Needs Improvement,” or “Good” for each metric. This grouping provides visibility into template-level performance patterns that neither origin-level nor individual URL-level data clearly reveals.
The URL grouping serves a diagnostic function in Search Console but also reflects how Google may evaluate page experience for pages that lack individual URL-level data but belong to a group with sufficient collective traffic. If a group of 5,000 product detail pages collectively generates enough Chrome traffic, the group’s aggregated performance may inform the page experience assessment for individual pages within that group, even though no single page in the group reaches the URL-level data threshold.
Google’s documentation does not explicitly confirm whether URL-group-level data is used as a fallback between URL-level and origin-level for ranking purposes. The position confidence on URL-group evaluation is reasoned rather than confirmed: the mechanism is architecturally consistent with how Google has described the fallback system, but Google has not published the exact hierarchy.
Which Granularity Google Uses for Ranking
Google has stated that the page experience ranking signal prefers the most granular data available. The fallback hierarchy operates as follows:
- URL-level data: if the specific page has sufficient Chrome traffic to generate its own CrUX data, Google evaluates that page using its individual URL-level assessment. The origin score is irrelevant for this page.
- URL-group-level data (probable): if the page lacks individual data but belongs to a URL group with sufficient collective data, the group assessment may apply.
- Origin-level data: if neither URL-level nor group-level data is available, the origin-level assessment applies as the default.
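The hierarchy above can be sketched as a simple selection function (the records here are hypothetical dictionaries; Google’s actual implementation is not public, and the group layer is the “reasoned” step, not a confirmed one):

```python
def page_experience_record(url_record, group_record, origin_record):
    """Return the most granular CrUX record available, mirroring the
    fallback hierarchy: URL-level, then (probably) group-level, then
    origin-level. Each argument is a record dict or None if no data
    exists at that layer."""
    for record, layer in ((url_record, "url"),
                          (group_record, "group"),
                          (origin_record, "origin")):
        if record is not None:
            return record, layer
    return None, None

# A low-traffic page with no URL- or group-level data inherits the origin:
record, layer = page_experience_record(None, None, {"lcp_p75_ms": 3200})
print(layer)  # "origin"
```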
This hierarchy has significant implications for optimization strategy. A page with its own URL-level data showing “good” CWV receives a positive page experience signal regardless of the origin score. A page without URL-level data on a failing origin receives a negative page experience signal regardless of its actual performance. The page experience signal is determined by which data layer Google accesses, not by the page’s true performance.
The position confidence on the URL-level preference is confirmed: Google’s documentation and public statements from search engineers consistently describe URL-level data as preferred when available. The position confidence on origin-level as the default fallback is also confirmed. The intermediate group-level fallback is reasoned.
Practical Implications for Optimization Prioritization
The aggregation hierarchy creates a specific optimization strategy:
For high-traffic pages with URL-level data: optimize each page individually. These pages are evaluated on their own performance, so improvements directly affect their ranking signals. Monitor each page’s CrUX data through PageSpeed Insights or the CrUX API to confirm URL-level data availability and track improvement.
For medium-traffic pages in URL groups: optimize at the template level. Fixing a performance bottleneck in the product detail page template improves the group’s aggregate performance, which benefits all pages in the group. Search Console’s CWV report identifies which groups are failing and which metrics are responsible.
For low-traffic pages without URL-level data: optimize the origin score. Since these pages inherit the origin assessment, improving the origin requires improving performance on the high-traffic pages that dominate the origin aggregate. This creates a counterintuitive priority: to improve a low-traffic page’s page experience signal, fix performance on entirely different, higher-traffic pages.
The 28-day rolling window: CrUX data uses a 28-day rolling window updated daily. After deploying a performance improvement, small changes begin appearing within 2-3 days, but the full impact requires 28 days to completely roll through the dataset. During this transition, the CrUX data reflects a blend of pre-improvement and post-improvement user experiences. Patience during the rollover period prevents premature conclusions about optimization effectiveness.
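Assuming roughly steady traffic, the share of the window reflecting post-fix sessions grows linearly with days since deployment (an illustrative approximation; real CrUX percentiles blend full histograms, not point values):

```python
# Fraction of the rolling 28-day window covered by post-fix sessions,
# assuming constant daily traffic (illustrative approximation only).

def post_fix_share(days_since_deploy, window_days=28):
    return min(days_since_deploy, window_days) / window_days

for d in (3, 14, 28):
    print(f"day {d}: {post_fix_share(d):.0%} of window is post-fix")
# day 3: 11%, day 14: 50%, day 28: 100%
```

This is why a fix visible in lab tools on day one can take weeks to move the field assessment: at day 3, roughly 89% of the window still reflects pre-fix sessions.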
The origin score matters most for sites with large long-tail URL populations. An e-commerce site with 200,000 product pages, of which only 500 have URL-level data, effectively has 199,500 pages evaluated at the origin level. Improving the origin score by fixing performance on the top 50 highest-traffic pages (which dominate the origin aggregate) propagates the benefit to all 199,500 long-tail pages.
How long does it take for CrUX data to reflect a performance improvement after deployment?
CrUX data is based on a rolling 28-day collection window, updated monthly in the public dataset and daily in the CrUX API. After deploying a performance fix, the improvement begins mixing into the 28-day window immediately but takes approximately 28 days to fully replace the older data. The CrUX API reflects changes faster than the BigQuery dataset because it updates daily rather than monthly.
Does CrUX aggregate data from logged-in and logged-out Chrome users separately?
No. CrUX does not distinguish between authenticated and unauthenticated sessions. All eligible Chrome page views from opted-in users contribute to the same origin-level and URL-level aggregations regardless of login state. This means performance differences between logged-in experiences (which may include heavier personalization) and anonymous visits are blended into a single distribution.
Can subdomains have separate CrUX origin-level scores from the root domain?
Yes. CrUX treats each origin (protocol + domain + port) as a separate entity. The origin https://blog.example.com has its own CrUX data independent of https://www.example.com. This means a slow blog subdomain does not contaminate the root domain’s origin-level CWV scores, but it also means each subdomain must independently meet data thresholds to have URL-level or origin-level data available.
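The origin key can be derived from any page URL with standard URL parsing (a sketch using Python’s standard library; CrUX keys its origin-level data on this scheme-plus-host tuple, with the port included when non-default):

```python
from urllib.parse import urlsplit

def crux_origin(page_url):
    """Origin = scheme + host (+ explicit port if present).
    Subdomains yield distinct origins, so their CrUX data is separate."""
    parts = urlsplit(page_url)
    return f"{parts.scheme}://{parts.netloc}"

print(crux_origin("https://blog.example.com/post/123"))  # https://blog.example.com
print(crux_origin("https://www.example.com/") ==
      crux_origin("https://blog.example.com/"))          # False: separate origins
```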
Sources
- https://developer.chrome.com/docs/crux/methodology
- https://www.debugbear.com/blog/chrome-user-experience-report
- https://www.debugbear.com/blog/interpret-chrome-ux-report-data
- https://gtmetrix.com/blog/what-is-crux-and-why-should-i-care/
- https://developers.google.com/search/docs/appearance/core-web-vitals