You heard that Core Web Vitals only measure page load performance, so your dashboard application with its complex interactive features should be unaffected. Your team deprioritized INP optimization because the first interaction — a simple navigation click — was always fast. Then your CrUX data showed INP failures across the entire origin. INP does not measure the first interaction. That was FID, which Google retired in March 2024. INP measures every discrete interaction during the entire page session and reports the worst one (or near-worst, at the 98th percentile for pages with many interactions). For long-session applications, INP is the most consequential Core Web Vital precisely because it captures the slow interactions that accumulate during extended use.
INP’s Session-Wide Scope and the Persistent FID Mental Model
The Event Timing API underlying INP records every discrete user interaction throughout the page’s lifetime: clicks, taps, and keyboard inputs. Scrolling is excluded because scroll events are handled by the browser’s compositor thread and do not compete for main thread resources. Hover events are also excluded unless they trigger discrete pointer events.
Each qualifying interaction’s total duration is calculated as the sum of input delay, processing time, and presentation delay. INP selects from this complete set of interactions using a near-worst-case methodology. For pages with fewer than 50 interactions in a session, INP is the single worst interaction. For pages with 50 or more interactions, INP is the interaction at the 98th percentile, which effectively ignores one outlier per 50 interactions. This exclusion prevents a single anomalous interaction (such as one coinciding with a garbage collection pause) from defining the metric, while still capturing the consistently slow interactions that reflect genuine responsiveness problems.
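The selection rule above can be sketched as a small function. This is a simplified model of the methodology, not the browser's actual implementation; `durations` is a hypothetical array of per-interaction durations in milliseconds:

```javascript
// Simplified model of INP candidate selection: for every 50 interactions,
// one high outlier is ignored; the highest remaining duration is the INP.
function estimateINP(durations) {
  if (durations.length === 0) return undefined;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  // Skip one outlier per 50 interactions (0 skipped below 50 interactions).
  const skip = Math.floor(durations.length / 50);
  return sorted[Math.min(skip, sorted.length - 1)];
}
```

With fewer than 50 interactions, `skip` is 0 and the single worst interaction is reported; at 50 interactions, one outlier is discarded and the second-worst value becomes the INP.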
This session-wide scope means every interactive element on every page is a potential INP contributor: navigation links, buttons, form inputs, dropdown menus, accordion toggles, data table sort headers, search fields, filter checkboxes, and any other element that responds to clicks or key presses. The metric does not privilege any interaction position in the session. The hundredth interaction is measured with exactly the same rigor as the first.
The Cloudflare blog analysis of INP adoption found that many sites passing FID comfortably discovered INP scores 3-5x higher than their former FID values, specifically because mid-session interactions were now measured. The sites had not degraded; the measurement had expanded to capture responsiveness problems that always existed but were invisible to FID’s first-interaction-only scope.
Why Long-Session Applications Are More Exposed to INP Failures
Dashboards, single-page applications, web-based email clients, project management tools, content management interfaces, and data analytics platforms share a common trait: users remain on a single page for extended periods, accumulating hundreds of interactions per session. Each interaction is an opportunity for an INP failure, and several degradation patterns cause interactions to become progressively slower during long sessions.
Memory leaks are a primary degradation driver. Event listeners attached during SPA navigation that are not cleaned up during route changes accumulate over time. Each leaked listener adds processing overhead to subsequent interaction events. DOM nodes retained by closure references prevent garbage collection, growing the heap and increasing the frequency and duration of garbage collection pauses that coincide with user interactions.
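One common mitigation is to tie every listener a view attaches to a single `AbortController` that is aborted on route teardown, so nothing accumulates across navigations. A sketch, assuming a hypothetical `mountView` hook that a router calls on entering a route; the click handler's counter is a placeholder:

```javascript
// Sketch: tie all of a view's listeners to one AbortController so a single
// abort() on route teardown removes them, preventing accumulation.
function mountView(target) {
  const controller = new AbortController();
  const { signal } = controller;

  // Every listener the view attaches shares the same abort signal.
  target.addEventListener('click', () => {
    target.clickCount = (target.clickCount || 0) + 1; // placeholder handler
  }, { signal });
  target.addEventListener('keydown', () => { /* handle key input */ }, { signal });

  // The router calls this teardown when navigating away from the route.
  return () => controller.abort(); // removes every listener at once
}
```

The `{ signal }` option on `addEventListener` is standard; the advantage over individual `removeEventListener` calls is that teardown cannot silently miss a listener.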
Growing DOM trees from dynamically added content increase the cost of style recalculation and layout for every interaction. A dashboard that appends new data rows to a table without virtualizing the display accumulates DOM nodes throughout the session. An interaction that triggers a style change at the session start affects 50 nodes; the same interaction 30 minutes later affects 500 nodes, with proportionally longer processing time.
Background data fetching on periodic intervals (WebSocket updates, polling loops, periodic API calls) competes for main thread time with user interactions. When a fetch response handler executes synchronously on the main thread and coincides with a user interaction, the interaction’s input delay increases by the handler’s execution duration. This contention pattern worsens as more data sources are subscribed during a session.
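One mitigation is to break a response handler's work into chunks and yield the main thread between them, so a pending interaction can run before the next chunk. A sketch using a `setTimeout`-based yield; `scheduler.yield()` could replace it where supported, and the chunk size of 50 is an illustrative choice:

```javascript
// Yield to the main thread so queued user interactions can run before
// the next chunk of background work.
const yieldToMain = () => new Promise((resolve) => setTimeout(resolve, 0));

// Process a large batch (e.g. a WebSocket payload) in small chunks,
// yielding between chunks instead of blocking in one long task.
async function processInChunks(items, handleItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handleItem(item));
    }
    if (i + chunkSize < items.length) await yieldToMain();
  }
  return results;
}
```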
Event listener accumulation in SPA frameworks that do not properly tear down component-level listeners during virtual navigation creates compound handler chains. A button with one click listener at initial load may carry three stacked listeners after the user returns to the same route twice more, tripling the processing time for that interaction.
These degradation patterns never affected FID because FID measured only the first interaction, which occurred on a freshly initialized page with minimal DOM, no accumulated state, and no background task contention. INP exposes the full trajectory of responsiveness throughout the session.
First Input Delay explicitly measured only the first discrete interaction, by design. Google’s documentation for FID stated this clearly, and the metric was understood as a measure of initial page interactivity — how quickly the page became responsive after load.
When INP replaced FID in March 2024, many practitioners carried the “first interaction” mental model forward. Several factors reinforce this misconception. The metric name, “Interaction to Next Paint,” does not explicitly signal session-wide measurement the way “Cumulative Layout Shift” signals accumulation. SEO-focused documentation that has not been updated since the transition may still reference FID-era optimization strategies focused on initial load. And Lighthouse’s standard navigation audit does not measure INP, because INP requires real user interactions and is effectively a field metric, so teams that rely on lab testing never encounter it in their development workflow.
The practical consequence is that teams optimize initial page interactivity — reducing Total Blocking Time, deferring non-critical JavaScript, code-splitting the initial bundle — while ignoring the complex event handlers in features used later in the session. A dashboard that loads quickly but takes 400ms to re-sort a data table after the user has been working for 10 minutes will pass every lab test and fail INP in field data.
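One pattern for interactions like the table re-sort is to paint cheap visual feedback first and defer the expensive work until after the next frame. A sketch, assuming a hypothetical `table` state object; `requestAnimationFrame` exists only in the browser, hence the fallback:

```javascript
// Push expensive work past the interaction's next paint: rAF fires just
// before the paint, and the nested setTimeout runs just after it.
function deferAfterPaint(work) {
  if (typeof requestAnimationFrame === 'function') {
    requestAnimationFrame(() => setTimeout(work, 0));
  } else {
    setTimeout(work, 0); // non-browser fallback
  }
}

function onSortClick(table, column) {
  table.pendingSortColumn = column; // cheap state update: paint a spinner now
  deferAfterPaint(() => {
    // The expensive part runs after the paint, so the interaction stays fast.
    table.rows.sort((a, b) =>
      a[column] > b[column] ? 1 : a[column] < b[column] ? -1 : 0
    );
    table.pendingSortColumn = null; // clear the spinner
  });
}
```

The interaction's measured duration now covers only the state update and spinner paint, not the sort itself; the user still sees progress immediately.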
How CrUX Aggregation Makes Session-Wide INP Visible
CrUX collects one INP value per page view — the worst (or 98th percentile) interaction from that session — and aggregates these values across all page views for the URL or origin over a 28-day rolling period. The 75th percentile of these aggregated per-session INP values determines the assessment.
This two-level aggregation means the reported INP reflects a specific intersection: the worst interaction in 75% of sessions. If 70% of sessions involve only simple interactions (navigation clicks, link taps) with fast INP, but 30% of sessions involve power-user workflows (complex filtering, bulk operations, data manipulation) with slow INP, the 75th percentile captures the power-user sessions.
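The two-level aggregation can be illustrated with a small sketch. This is a simplified model of CrUX's reporting using a nearest-rank percentile; real CrUX buckets values rather than storing raw samples, and the session-level step here takes the plain worst interaction, ignoring the 98th-percentile rule:

```javascript
// Nearest-rank percentile over a list of numbers.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Level 1: each session reports one INP value (its worst interaction here).
const sessionINP = (interactionDurations) => Math.max(...interactionDurations);

// Level 2: the assessed INP is the 75th percentile across all sessions.
const assessINP = (sessions) => percentile(sessions.map(sessionINP), 75);
```

With seven fast sessions and three slow ones, the 75th percentile lands inside the slow group, mirroring the 70/30 split described above.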
Sites serving both casual visitors and power users must optimize for the power-user interaction patterns, not the casual visitor’s simple clicks. Each casual session contributes a low INP value that improves the distribution, but no volume of fast sessions can rescue the assessment once slow power-user sessions exceed a quarter of page views and push the 75th percentile past 200ms.
For origin-level assessment, CrUX aggregates across all URLs. A site with 100 pages where 95 pass INP and 5 fail (such as a dashboard page, a product configurator, a search results page with complex filters, an account settings page, and a data export page) may still fail at the origin level if those 5 pages receive sufficient traffic to influence the 75th percentile across the origin’s combined page view population.
The Actionable Implication: Audit Every Interactive Component, Not Just Page Load
INP auditing must cover every interactive element on every page template, with particular attention to components that trigger expensive operations:
- Data table sorting and filtering: sorting 1,000 rows triggers array manipulation and full re-render
- Filter application on product listings: faceted search state changes re-render product grids
- Modal dialogs with dynamic content: opening a modal that fetches and renders content on demand
- Accordion expansion with lazy rendering: expanding a section that initializes a complex component
- Form validation on complex inputs: real-time validation that queries APIs or performs computation
- Search with autocomplete: each keystroke triggering suggestion computation and dropdown rendering
- Drag and drop reordering: continuous pointer events with DOM mutation per frame
The audit scope extends to interactions that occur minutes into a session. This requires either automated interaction testing (Playwright or Puppeteer scripts that simulate user workflows) combined with the web-vitals library’s INP attribution, or RUM instrumentation that captures per-interaction attribution across the full session lifecycle.
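With the web-vitals attribution build, that per-interaction detail can be shipped to a RUM endpoint. The payload-shaping helper below is a sketch: the attribution field names follow web-vitals v4 and should be verified against the version in use, and `/rum` is a hypothetical endpoint:

```javascript
// Shape a web-vitals INP metric (attribution build) into a RUM payload.
function buildINPPayload(metric) {
  const a = metric.attribution || {};
  return {
    metric: 'INP',
    value: metric.value,
    rating: metric.rating,           // 'good' | 'needs-improvement' | 'poor'
    target: a.interactionTarget,     // selector of the interacted element
    type: a.interactionType,         // 'pointer' or 'keyboard'
    inputDelay: a.inputDelay,
    processing: a.processingDuration,
    presentation: a.presentationDelay,
  };
}

// Browser wiring (not runnable outside a page):
// import { onINP } from 'web-vitals/attribution';
// onINP((metric) => {
//   navigator.sendBeacon('/rum', JSON.stringify(buildINPPayload(metric)));
// });
```

Capturing the target selector and the delay/processing/presentation breakdown is what turns the aggregate CrUX signal into a debuggable per-component finding.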
For long-session applications, session-scoped RUM monitoring is not optional. CrUX provides the aggregate signal that a problem exists, but identifying which interaction on which page during which phase of the user’s workflow produces the failing INP requires per-interaction field data that only custom RUM instrumentation can provide.
Does closing a browser tab before the session ends still report INP to CrUX?
Yes. The web-vitals library and Chrome’s internal reporting both use the visibilitychange event to finalize and report the INP value when a user navigates away or closes the tab. The INP score reflects all interactions that occurred up to that point. Tabs closed mid-session report a valid INP based on the worst interaction observed during the active portion.
Do keyboard interactions on form inputs contribute to INP the same way as click interactions?
Yes. INP measures all discrete interaction types equally: clicks, taps, and key presses. Each keystroke in a form input is a separate interaction. If a keypress handler triggers expensive validation logic, autocomplete lookups, or state updates, each keystroke produces its own INP candidate. Form-heavy pages with synchronous validation on every keypress are particularly vulnerable to high INP.
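A common mitigation is to debounce the expensive validation so it runs once after typing pauses rather than on every keystroke, keeping each keypress handler cheap. A sketch; the delay values and the counter-based validator are illustrative:

```javascript
// Debounce: postpone the expensive work until input has been quiet
// for `delay` ms, so each keystroke's handler stays cheap.
function debounce(fn, delay = 200) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

// Hypothetical expensive validator; only the last value in a burst is checked.
let validations = 0;
const validate = debounce((value) => {
  validations++; // stand-in for an API lookup or heavy computation
}, 20);
```

Each keystroke still produces an INP candidate, but its handler now only resets a timer; the heavy work runs outside any measured interaction.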
Does prefetching the next page improve INP on the current page?
No. Prefetching (using speculation rules or link rel="prefetch") loads resources for a future navigation but does not affect the main-thread responsiveness of the current page. If prefetch requests compete for bandwidth or trigger main-thread JavaScript parsing, they can worsen INP on the current page. Ensuring prefetch operations use idle-time scheduling or are limited in scope prevents this negative interaction.
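Speculation rules let the browser schedule the prefetch itself at low priority instead of running application JavaScript to do it. A minimal list-rule sketch; the URL is hypothetical:

```html
<script type="speculationrules">
{
  "prefetch": [
    { "source": "list", "urls": ["/reports/monthly"] }
  ]
}
</script>
```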