A site running synthetic monitoring from 10 global locations shows consistent sub-200ms TTFB everywhere. Meanwhile, CrUX data reveals TTFB exceeding 800ms at the 75th percentile for users in Southeast Asia and parts of South America. The synthetic monitors never detect it because they test from well-connected data center IPs with optimized routing, while real users in those regions experience ISP-level routing inefficiencies, CDN cache misses at under-provisioned points of presence, and DNS resolution delays through regional resolvers. Diagnosing these geographic TTFB spikes requires field data segmentation and network-layer analysis that synthetic tools are structurally incapable of providing.
Why Synthetic Monitoring Misses Geographic TTFB Spikes
Synthetic monitoring agents run from data centers on premium network backbone connections with direct peering arrangements to major CDN providers. These data center connections use optimized BGP routing, low-latency interconnects, and often bypass the congested last-mile infrastructure that residential users traverse. A synthetic test from a Singapore data center hits the CDN’s Singapore POP through a direct peering link with sub-5ms latency. A real user in rural Indonesia connects through a residential ISP that may route traffic through a congested international link to a Jakarta POP, traverse multiple ISP hops before reaching the CDN edge, and resolve DNS through an ISP resolver that returns a suboptimal CDN edge location.
CrUX’s TTFB measurement captures the full navigation-level TTFB as experienced by real Chrome users: redirect time, DNS resolution, TCP connection, TLS negotiation, and server response time combined. As noted in CrUX documentation, TTFB in CrUX includes cold page loads, cached page loads, and loads from established connections — it is not a direct measure of server response time alone. This breadth of measurement is precisely why CrUX reveals geographic problems that synthetic tools miss. A synthetic test that re-uses connections and resolves DNS through fast resolvers systematically underestimates the TTFB that first-time visitors experience from their regional ISP connections.
Additionally, synthetic tools typically test the final URL, bypassing redirect chains that real users encounter through marketing links, HTTP-to-HTTPS upgrades, or www-to-non-www normalization. These redirects add round trips that scale with the geographic distance to the redirect target, making them disproportionately costly for users in distant regions.
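In field data, the cost of these redirect chains is directly measurable from the Navigation Timing entry. A minimal sketch (the entry shape follows PerformanceNavigationTiming; note that cross-origin redirects report zeroed redirect timings unless the redirect responses carry Timing-Allow-Origin headers):

```javascript
// Compute redirect overhead from a PerformanceNavigationTiming-like entry.
// Cross-origin redirects without Timing-Allow-Origin expose zeros here,
// so a zero result does not prove the absence of redirects.
function redirectOverhead(t) {
  return {
    hops: t.redirectCount,               // same-origin redirects observed
    ms: t.redirectEnd - t.redirectStart, // total time spent in redirects
  };
}

// Browser usage:
// const [nav] = performance.getEntriesByType('navigation');
// console.log(redirectOverhead(nav));
```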
The gap between synthetic and field TTFB is not a measurement error in either system. It reflects a genuine difference in network conditions between data center connections and residential connections. For geographic TTFB diagnosis, CrUX provides the ground truth and synthetic monitoring provides the controlled comparison.
Step 1: Extract Geographic TTFB Distribution from CrUX BigQuery
The CrUX BigQuery dataset publishes monthly performance data for origins with sufficient traffic volume, segmented by country: each country has its own dataset (for example, chrome-ux-report.country_id for Indonesia), and the chrome-ux-report.materialized.country_summary table provides cross-country metric summaries. Querying the TTFB histogram per country identifies which regions show disproportionately poor performance relative to the global average.
A diagnostic query computes the TTFB distribution for one country; repeat it per country of interest (or UNION ALL the per-country results) and compare against the global baseline:
SELECT
  SUM(IF(bin.start < 800, bin.density, 0)) AS good_density,
  SUM(IF(bin.start >= 800 AND bin.start < 1800, bin.density, 0)) AS needs_improvement,
  SUM(IF(bin.start >= 1800, bin.density, 0)) AS poor_density
FROM
  -- The `all` dataset has no country dimension; country-level histograms live
  -- in per-country datasets such as country_id (Indonesia). Run the same query
  -- against `chrome-ux-report.all.202501` to get the global baseline.
  `chrome-ux-report.country_id.202501`,
  UNNEST(experimental.time_to_first_byte.histogram.bin) AS bin
WHERE
  origin = 'https://example.com'
Countries where the “poor” density exceeds the global average by a significant margin are geographic TTFB hotspots. For finer granularity below the country level, custom RUM instrumentation is required. The timezone reported by Intl.DateTimeFormat().resolvedOptions().timeZone gives a coarse approximation of user location without requiring IP geolocation services. Logging it alongside the Navigation Timing API’s TTFB decomposition (domainLookupEnd - domainLookupStart for DNS, connectEnd - connectStart for the connection phase, etc.) creates a sub-country diagnostic dataset.
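As a sketch of that sub-country instrumentation, the snippet below pairs the timezone approximation with a Navigation Timing TTFB decomposition and beacons it to a collection endpoint. The /rum path, the 10% sample rate, and the payload field names are illustrative assumptions, not a prescribed schema:

```javascript
// Decompose TTFB into sub-phases from a PerformanceNavigationTiming-like entry.
function decomposeTtfb(t) {
  const hasTls = t.secureConnectionStart > 0;
  return {
    redirect: t.redirectEnd - t.redirectStart,
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcp: (hasTls ? t.secureConnectionStart : t.connectEnd) - t.connectStart,
    tls: hasTls ? t.connectEnd - t.secureConnectionStart : 0,
    request: t.responseStart - t.requestStart, // server think time + one round trip
  };
}

// Browser wiring (guarded so the pure function above stays testable elsewhere).
if (typeof performance !== 'undefined' &&
    typeof navigator !== 'undefined' && navigator.sendBeacon) {
  const [nav] = performance.getEntriesByType('navigation');
  if (nav && Math.random() < 0.1) { // 10% sample -- tune to traffic volume
    navigator.sendBeacon('/rum', JSON.stringify({
      tz: Intl.DateTimeFormat().resolvedOptions().timeZone, // coarse location proxy
      ttfb: Math.round(nav.responseStart),
      phases: decomposeTtfb(nav),
    }));
  }
}
```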
CrUX BigQuery also supports filtering by form factor (mobile vs. desktop) and effective connection type, which can further isolate whether the geographic spike affects all users in a region or specifically mobile users on slower connections. The combination of country + form factor + connection type narrows the affected population and points toward likely root causes.
Step 2: Isolate the Network Layer Causing the Spike
TTFB is the sum of multiple sequential network phases, and each phase has different root causes and different mitigations. The Navigation Timing API decomposes TTFB into measurable sub-phases that, when captured per geographic segment in RUM, reveal which network layer is responsible.
DNS resolution time (domainLookupEnd - domainLookupStart): elevated DNS time in a specific region suggests the ISP’s DNS resolver is slow, caching TTLs are too short for low-traffic regions, or the DNS provider’s authoritative nameservers are geographically distant. Mitigation involves DNS prefetching, increasing DNS TTLs, or switching to a DNS provider with better regional coverage.
TCP connection time (secureConnectionStart - connectStart on HTTPS pages, since connectEnd - connectStart also includes TLS negotiation): elevated connection time indicates physical network distance or congestion between the user and the server/CDN edge. If the CDN POP in the region is geographically distant or under-provisioned, TCP connection setup requires more round trips across longer paths.
TLS negotiation time (connectEnd - secureConnectionStart): TLS adds one or two additional round trips depending on protocol version. For users connecting from distant regions, each round trip is costlier. TLS 1.3 reduces fresh connection setup to one round trip, and HTTP/3 with 0-RTT resumption eliminates handshake overhead on repeat connections.
Server response time (responseStart - requestStart, which covers server think time plus one network round trip): if this sub-phase is elevated for a specific region while other phases are normal, the CDN may be routing requests to a distant origin rather than serving from a regional edge cache. Cache miss rates and origin fetch patterns for the affected POP require CDN provider investigation.
Segmenting this decomposition by the geographic buckets identified in step 1 reveals whether the spike is DNS-driven (fixable with DNS infrastructure changes), connection-driven (fixable with CDN POP additions or protocol upgrades), or origin-driven (fixable with caching strategy or origin architecture changes).
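Once per-segment medians for those sub-phases exist, the classification is simple enough to automate. A hedged sketch mirroring the mapping above (the layer names and mitigation strings are illustrative):

```javascript
// Map each TTFB sub-phase to the mitigation category described above.
// Input: median sub-phase timings (ms) for one geographic segment.
const MITIGATIONS = {
  dns: 'DNS infrastructure: prefetch, longer TTLs, provider with regional coverage',
  tcp: 'CDN coverage: closer or better-provisioned POPs',
  tls: 'Protocol: TLS 1.3, HTTP/3 with 0-RTT resumption',
  request: 'Caching/origin: regional edge caching, origin architecture',
};

function diagnoseSegment(phases) {
  // Pick the sub-phase contributing the most time.
  const [layer, ms] = Object.entries(phases).sort((a, b) => b[1] - a[1])[0];
  return { layer, ms, mitigation: MITIGATIONS[layer] };
}
```

For example, diagnoseSegment({ dns: 220, tcp: 60, tls: 45, request: 110 }) flags the segment as DNS-driven and points at the DNS-infrastructure mitigations.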
Step 3: Evaluate CDN POP Coverage and Cache Hit Rates by Region
CDN providers offer analytics dashboards showing cache hit ratios and response times per point of presence. Two patterns emerge from POP-level analysis for geographic TTFB spikes:
Low cache hit rate at the regional POP: the POP serves insufficient traffic to maintain a warm cache, resulting in frequent origin fetches. Long-tail URLs that receive only a few requests per day from a given POP never achieve cache warmth. Each request incurs the full edge-to-origin round trip plus origin processing time. The mitigation is either increasing TTLs for long-tail content, implementing stale-while-revalidate to serve stale content while warming the cache, or using an origin shield architecture that concentrates origin fetches through a single intermediate cache layer.
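In header terms, the TTL and stale-while-revalidate mitigations look roughly like this (the values are illustrative, not recommendations):

```http
Cache-Control: public, max-age=86400, stale-while-revalidate=604800
```

Within the stale window, the edge serves its stale copy immediately while revalidating with the origin in the background, so long-tail URLs at low-traffic POPs avoid synchronous edge-to-origin round trips.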
High cache hit rate but high POP response time: the POP is serving cached content but the last-mile network between the POP and users in the region is slow. This may indicate the POP is physically distant from the user population (a POP in Sydney serving users in New Zealand via international links), the POP is overloaded relative to its capacity, or ISP peering with the CDN provider in that region is suboptimal. CDN providers can sometimes address this by establishing direct peering with regional ISPs or by deploying additional POP infrastructure.
Requesting POP-level diagnostics from the CDN provider yields definitive answers. Support teams at providers like Cloudflare, Fastly, and Akamai can supply per-POP cache hit rates, origin fetch latency, and connection metrics that are not exposed in their standard dashboards.
Step 4: Test with Region-Specific Conditions Using Real-User Replay
WebPageTest supports testing from region-specific locations (Mumbai, Jakarta, São Paulo, and many others) with configurable connection profiles. Testing from a regional location with a 4G mobile connection profile may reproduce what CrUX observes. If the WebPageTest result shows elevated TTFB matching the CrUX data, the problem is network distance, CDN coverage, or origin routing — all factors that data center location affects.
If WebPageTest from the regional location shows normal TTFB (because even regional test locations use data center connections), the problem is last-mile ISP-specific. The ISP may route traffic through an inefficient path, use a slow DNS resolver, or have congested peering with the CDN provider. Diagnosing ISP-specific problems requires deeper RUM analysis of affected users’ network conditions, specifically comparing TTFB sub-phase timing between users on different ISPs within the same geographic region.
For systematic reproduction, deploying lightweight RUM beacons that capture full Navigation Timing decomposition for users in the affected region, triggered on a sampling basis, provides the per-user data needed to distinguish between CDN-level problems (affecting all users in the region) and ISP-level problems (affecting users on specific networks within the region).
Limitations: Geographic TTFB Problems You Cannot Fix
Some geographic TTFB spikes originate from factors entirely outside the site operator’s and CDN provider’s control. Submarine cable latency between continents sets a physical minimum for inter-continental connections. ISP peering congestion during peak hours degrades connection quality for all destinations, not just the specific site. Government-level internet filtering in certain countries adds inspection latency to international traffic. Regional DNS resolver performance varies by ISP and cannot be controlled by the site operator.
CDN-side mitigations address CDN-related causes: adding POPs closer to affected populations, pre-warming caches for high-traffic content, serving authoritative DNS over anycast so queries resolve at the nearest nameserver, and upgrading to HTTP/3 (whose QUIC transport folds the TLS handshake into connection setup, with 0-RTT for resumed connections). ISP-level routing issues can sometimes be mitigated by working with the CDN provider to establish direct peering with regional ISPs, but this option is only viable for very high-traffic properties where the CDN and ISP both have a business incentive to optimize the interconnection.
For sites with significant traffic from regions where TTFB cannot be reduced through CDN optimization alone, the residual approach is ensuring that once the HTML arrives (however delayed), the LCP resource is discoverable and loadable as quickly as possible. Preloading the LCP image, inlining critical CSS, and minimizing the resource load delay sub-part of LCP compensate partially for elevated TTFB by optimizing the phases that follow it.
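For instance, if the LCP element is a hero image, a preload hint in the document head lets the browser start fetching it as soon as the (delayed) HTML arrives rather than after layout discovers it. The /hero.webp path is a placeholder:

```html
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high">
```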
Can a single underperforming CDN point of presence cause a regional TTFB spike in CrUX?
Yes. If a CDN POP serving a specific region has stale cache, misconfigured routing, or hardware issues, all users routed through that POP experience elevated TTFB. CrUX aggregates these experiences into the regional distribution. CDN analytics dashboards showing per-POP cache hit rates and response times identify the offending edge location, and the fix is either purging the POP cache or contacting the CDN provider about infrastructure issues.
Does CrUX BigQuery data allow filtering TTFB by connection type within a geographic region?
The standard CrUX BigQuery tables provide TTFB distributions by origin and by country, but they do not cross-tabulate connection type with geography. Custom RUM instrumentation using the Network Information API’s effectiveType property is required to determine whether regional TTFB spikes concentrate on specific connection types. This distinction matters because a spike affecting only 2G/3G users suggests a network-layer issue, while a spike across all connection types points to CDN or origin problems.
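A minimal sketch of that cross-tabulation is below. The /rum-ect endpoint and payload field names are assumptions, and navigator.connection is unavailable in some browsers, hence the fallback:

```javascript
// Pair TTFB with the Network Information API's effectiveType in RUM,
// since CrUX BigQuery does not join connection type with geography.
function connectionSample(nav, connection) {
  return {
    effectiveType:
      connection && connection.effectiveType ? connection.effectiveType : 'unknown',
    ttfb: Math.round(nav.responseStart),
  };
}

// Browser usage:
// const [nav] = performance.getEntriesByType('navigation');
// navigator.sendBeacon('/rum-ect',
//   JSON.stringify(connectionSample(nav, navigator.connection)));
```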
Can DNS resolver differences between regions explain TTFB spikes that CDN logs do not show?
Yes. Different regions may route through different DNS resolvers with varying cache TTLs and resolution speeds. A region where the dominant ISP uses a slow recursive resolver adds DNS lookup time to TTFB before the request even reaches the CDN. CDN logs record timing only after the connection arrives, so DNS resolution overhead is invisible in CDN analytics. Comparing DNS lookup times using the Navigation Timing API’s domainLookupEnd minus domainLookupStart in field data isolates this cause.