What is the mechanism by which total JavaScript payload size affects INP even when the initial page load appears fast and LCP is within the good threshold?

Sites loading 1MB+ of JavaScript routinely pass LCP with sub-2-second scores because the JavaScript loads asynchronously and does not block the initial paint. Yet these same sites fail INP at the 75th percentile. The total JavaScript payload affects INP through a mechanism that operates after page load: parsing, compilation, and execution of deferred scripts consume main-thread time that competes with event handler processing during user interactions. The more JavaScript on the page, the more likely that user interactions coincide with main-thread JavaScript work, increasing input delay and processing time for every interaction.

Parse and Compile Costs That Persist After Page Load

Every byte of JavaScript delivered to the browser must be parsed (converting source text to an abstract syntax tree) and compiled (generating executable bytecode or machine code) before execution. Modern JavaScript engines like V8 (Chrome) use background parsing and lazy compilation to defer some of this work, but significant parsing still occurs on the main thread, particularly for immediately-invoked modules and top-level code.

A 1MB JavaScript bundle requires approximately 150-300ms of main-thread parsing time on a mid-tier mobile device (Snapdragon 600-series, 4GB RAM), even when the script loads with the defer attribute and does not block rendering. This parsing work creates long tasks: stretches of main-thread work exceeding 50ms, which the Long Tasks API reports as blocking. Any user interaction that occurs during a parsing long task experiences elevated input delay, the first phase of INP measurement.

The parse cost scales roughly linearly with payload size but is amplified by code complexity. Dense, highly nested code with many closures and dynamic patterns requires more parsing work per byte than simple, flat code. Framework-generated bundles (React, Angular, Vue production builds) tend toward higher complexity per byte due to module system overhead, component wrappers, and state management abstractions.

V8’s background compilation can offload some work to background threads, but the compiled code must still be “finalized” on the main thread. For large bundles, this finalization step itself creates main-thread blocking that can coincide with user interactions. The net result is that even with modern engine optimizations, total JavaScript payload directly determines the total main-thread cost that competes with interaction processing.

Main-Thread Competition During Script Initialization

After parsing and compilation, JavaScript modules execute their top-level code: registering event listeners, initializing state management stores, running framework bootstrapping (React hydration, Vue mounting, Angular change detection initialization), establishing WebSocket connections, and fetching initial data. These initialization tasks execute on the main thread and can span hundreds of milliseconds to several seconds on complex single-page applications.

During this initialization window, every user interaction must wait for the current long task to complete before the browser can begin processing the interaction’s event handler. If a user taps a navigation link while React is hydrating a component tree, the tap’s event handler does not begin executing until hydration completes its current synchronous segment. The resulting input delay is the time remaining in the hydration task.
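The mechanism can be sketched in a few lines: a busy-wait stands in for a synchronous hydration segment, and a queued setTimeout callback stands in for the tap handler. This is a simplified model of main-thread blocking, not how hydration actually schedules its work:

```javascript
// Sketch of input delay: a handler queued during a long synchronous
// task cannot run until the task finishes, so its delay is roughly the
// remainder of the task at the moment the "input" arrived.
function simulateBlockedInput(blockMs = 200) {
  const start = Date.now();
  const inputDelay = new Promise((resolve) => {
    setTimeout(() => resolve(Date.now() - start), 0); // the "tap handler"
  });
  while (Date.now() - start < blockMs) { /* simulated hydration work */ }
  return inputDelay; // resolves to the handler's delay in ms (>= blockMs)
}

simulateBlockedInput().then((ms) => console.log(`handler delayed ~${ms}ms`));
```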

The initialization window’s duration, and the density of long tasks within it, scale with the total JavaScript payload. A page loading 200KB of JavaScript typically completes initialization in under 500ms on most devices, leaving few windows where interactions are blocked. A page loading 1.2MB of JavaScript may spend 2-4 seconds initializing, during which any interaction has a high probability of colliding with a long task.

SpeedCurve’s research has documented that conversion rates are approximately 10% higher at 100ms INP versus 250ms INP on mobile. redBus reported a 7% sales increase after optimizing their INP, demonstrating direct revenue impact from reducing main-thread contention during interactions.

Garbage Collection Pressure from Large JavaScript Heaps

Larger JavaScript payloads create larger runtime heap allocations. Every object, array, function closure, and string created during initialization and ongoing execution occupies heap memory. The browser’s garbage collector must periodically pause main-thread execution to identify and reclaim unused memory allocations.

On devices with limited RAM (2-4GB, common on mid-tier Android phones in the CrUX 75th percentile population), garbage collection runs more frequently because memory pressure is higher. Each GC pause is a main-thread blocking event that can range from 5ms (minor GC) to 100ms+ (major GC with compaction). These pauses are invisible in Lighthouse testing (which runs on a powerful machine with ample RAM) but contribute measurably to field INP on memory-constrained devices.

The relationship between payload size and GC frequency is not strictly linear — it depends on allocation patterns, object lifetimes, and memory management in the application code. But larger payloads consistently produce more GC pressure because more objects are created during initialization and more live references must be traversed during collection. The result is a higher baseline of main-thread interruptions throughout the page session, each one a potential INP contributor.

Why Async and Defer Do Not Solve the INP Problem

The async and defer attributes prevent JavaScript from blocking HTML parsing and initial rendering. This is why LCP can pass despite large payloads — the critical rendering path completes before deferred scripts execute. But these attributes only change when the script executes relative to DOM parsing. They do not reduce the main-thread work the script performs.

A deferred 500KB script still executes 500KB worth of code on the main thread after the DOM is ready. It still creates long tasks during execution. It still competes with interaction processing. The timing changes (post-initial-render instead of render-blocking), but the total main-thread cost is identical to a synchronously loaded script of the same size.

This distinction explains the common pattern of passing LCP with failing INP. The async/defer optimization successfully moved JavaScript work out of the LCP timeline but did not eliminate it from the page’s lifetime. The work now occurs during the interaction window — the exact timeframe INP measures.

Dynamic import() partially addresses this by loading code on demand rather than at page load. But dynamically imported code still must be parsed, compiled, and executed on the main thread when it loads. If a user interaction triggers a dynamic import of a 100KB module, the parsing and execution of that module contribute to the interaction’s processing time, directly inflating INP for that specific interaction.
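One common mitigation is to memoize the dynamic import and trigger it ahead of the committing interaction, for example on pointerenter. A minimal sketch, using a data: URL as a stand-in for a real chunk (the heavy-editor module name is hypothetical):

```javascript
// Memoize the import so a hover preload and the click handler share one
// load, keeping network latency and parse cost out of the interaction.
// The data: URL stands in for a real chunk such as './heavy-editor.js'.
const HEAVY_MODULE = 'data:text/javascript,export const init = () => "ready";';

let heavyPromise = null;
function loadHeavy() {
  return (heavyPromise ??= import(HEAVY_MODULE)); // reuse any in-flight load
}

// Browser wiring (illustrative):
//   button.addEventListener('pointerenter', loadHeavy);   // warm up early
//   button.addEventListener('click', async () => (await loadHeavy()).init());
loadHeavy().then((mod) => console.log(mod.init())); // "ready"
```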

The Practical Threshold: When JavaScript Size Becomes an INP Problem

The INP impact of JavaScript payload is device-dependent, making universal thresholds imprecise. However, empirical data from the HTTP Archive and performance monitoring platforms provides directional guidance.

Pages exceeding 500KB of JavaScript (transfer size, compressed) show significantly higher INP failure rates at the 75th percentile than pages under 300KB. The 43% INP failure rate across the web as of 2025 correlates strongly with the growth in average JavaScript payload, which now exceeds 500KB on the median mobile page.

On flagship devices (Snapdragon 8-series, Apple A15+, 8GB+ RAM), 1MB of JavaScript may produce acceptable INP because the fast CPU processes parsing and initialization quickly and the ample RAM reduces GC pressure. On mid-tier devices (Snapdragon 600-series, MediaTek Helio, 3-4GB RAM), the same payload creates systematic INP failures because every phase takes 2-4x longer and GC runs more frequently.

Since CrUX measures the 75th percentile, and the 75th percentile includes mid-tier devices, the practical JavaScript budget for INP compliance must target mid-tier device performance, not flagship device performance. A page that passes INP on a developer’s flagship phone may fail consistently on the devices that define the 75th percentile in CrUX.

Reducing JavaScript Payload as an INP Strategy

The highest-leverage INP optimization for JavaScript-heavy sites is reducing the total amount of JavaScript that executes on any given page:

Code splitting per route: modern bundlers (Webpack, Vite, esbuild) support splitting the application bundle into route-specific chunks. A product detail page loads only product-page JavaScript, not checkout logic, account management, or admin utilities. Each page loads the minimum JavaScript required for its functionality.

Tree shaking: build-time elimination of unused exports removes code that is imported but never called. Tree shaking is most effective with ES module syntax (import/export) and less effective with CommonJS (require).

Third-party script auditing: third-party scripts (analytics, advertising, chat, A/B testing) collectively add 200-500KB of JavaScript on the median page with significant third-party presence. Auditing each script’s business value against its main-thread cost, using the Long Animation Frames API for per-script attribution, identifies scripts that can be removed, deferred to post-first-interaction, or replaced with lighter alternatives.

Server Components and partial hydration: React Server Components, Astro islands, and similar architectures eliminate client-side JavaScript for UI sections that do not require interactivity. A product description rendered as a Server Component sends zero JavaScript to the client for that component, reducing the total payload and main-thread initialization cost.
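The code-splitting strategy above can be sketched as a route map of dynamic imports, which bundlers such as Webpack and Vite emit as separate chunks. The data: URLs below are stand-ins for real files such as a hypothetical ./pages/product.js:

```javascript
// Each route's code sits behind its own dynamic import; nothing is
// downloaded, parsed, or compiled until that route is actually visited.
const routes = {
  '/product': () => import('data:text/javascript,export const render = () => "product page";'),
  '/checkout': () => import('data:text/javascript,export const render = () => "checkout page";'),
};

async function navigate(path) {
  const loadPage = routes[path];
  if (!loadPage) throw new Error('unknown route: ' + path);
  const page = await loadPage(); // this chunk is parsed and run only now
  return page.render();
}

navigate('/product').then(console.log); // "product page"
```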

The goal is not zero JavaScript but a payload small enough that its main-thread cost does not systematically block user interactions at the 75th percentile of devices visiting the site.
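For the third-party auditing step, the Long Animation Frames API (Chrome 123+) exposes per-script attribution. A minimal, hedged observer sketch that simply no-ops where the entry type is unsupported:

```javascript
// Sketch: attribute slow frames to individual scripts via the Long
// Animation Frames API. Returns false where the entry type is not
// available (Node, older browsers), true once the observer is installed.
function observeLongFrames(report) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes?.includes('long-animation-frame')) {
    return false;
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      for (const script of entry.scripts ?? []) {
        // sourceURL / invoker / duration identify which (third-party)
        // script spent main-thread time inside this long frame.
        report({ source: script.sourceURL, invoker: script.invoker, ms: script.duration });
      }
    }
  });
  observer.observe({ type: 'long-animation-frame', buffered: true });
  return true;
}

console.log(observeLongFrames(console.log)); // false outside supporting browsers
```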

Does tree-shaking eliminate enough unused JavaScript to meaningfully improve INP?

It depends on the codebase. Tree-shaking removes exported functions that are never imported, but many JavaScript bundles contain side-effect-laden modules that tree-shaking cannot eliminate. Framework runtime code, polyfills, and utility libraries often ship their full payload regardless of which functions are actually called. Measuring the actual tree-shaking reduction using bundle analysis tools reveals whether the technique is producing meaningful savings or if manual code splitting is required.
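One concrete lever against side-effect-laden modules is the sideEffects field in package.json, which webpack and compatible bundlers read to decide whether never-imported modules can be dropped wholesale. A minimal, hypothetical library manifest:

```json
{
  "name": "my-lib",
  "sideEffects": false
}
```

Files with genuine side effects, such as polyfills or imported CSS, must instead be listed explicitly (for example "sideEffects": ["*.css"]), or the bundler may remove code the application depends on.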

Can dynamic import() reduce INP even though it adds network latency when a feature is first used?

Yes, in most cases. Dynamic import defers parsing and compilation of non-critical code until the user needs it, reducing the background main-thread cost that competes with interaction processing. The network latency for the dynamic chunk only affects the first use of that feature. For interactions with non-imported code, the main thread has more available time because the deferred code was never parsed, producing lower INP across all other interactions.

Does JavaScript compression (gzip/brotli) reduce INP since the transferred payload is smaller?

No. Compression reduces transfer size and network download time but does not reduce parse and compile time, which depend on the uncompressed JavaScript size. A 300KB gzip-compressed bundle that decompresses to 1.2MB still requires parsing and compiling 1.2MB of JavaScript on the main thread. Compression improves LCP by reducing download duration but does not address the post-download CPU costs that contribute to INP.
