The question is not whether server-side rendering improves LCP. The question is why the LCP element identity can change when you switch rendering strategies, and why that change can make the metric worse even though the page objectively loads faster. When hero text moves from client-rendered to server-rendered, the browser sees that text element earlier in the loading timeline. But LCP measures the largest contentful paint, not the first contentful paint. By surfacing the text earlier, you make it the initial LCP candidate, only for the browser to re-assign LCP when a later, larger element (like a hero image) eventually renders. The net effect can be a higher LCP timestamp despite a faster text appearance. This article explains the LCP candidate selection mechanism that creates this counterintuitive outcome.
How LCP Candidate Selection Creates Shifting Measurement Targets
The LCP API reports the render time of the largest content element painted so far, and it updates this candidate as new, larger elements render. Each time a larger element finishes painting, the browser dispatches a new largest-contentful-paint performance entry. The final entry before user interaction or the page becoming hidden determines the reported LCP value.
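The update loop can be modeled with a small reducer over paint events (an illustrative simulation, not the browser's algorithm; it ignores viewport clipping and the user-input cutoff):

```javascript
// Illustrative model of LCP candidate selection (not browser code).
// Each paint event carries a timestamp and the painted area of the element.
// A new event displaces the current candidate only if its area is larger,
// mirroring when the browser dispatches a new largest-contentful-paint entry.
function finalLcp(paintEvents) {
  let candidate = null;
  for (const event of paintEvents) {
    if (candidate === null || event.area > candidate.area) {
      candidate = event; // the browser would emit a new LCP entry here
    }
  }
  return candidate; // the last entry determines the reported LCP value
}
```

Under a CSR-like ordering (large image at 1800 ms, smaller text at 2400 ms) the image entry stands unchallenged; under an SSR-like ordering (text at 600 ms, image at 1900 ms) the text is briefly the candidate before the image displaces it, so the final timestamp belongs to the image in both cases.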
On a page with both text and a hero image, the sequence matters. When hero text is client-side rendered, it appears only after JavaScript execution — often after the hero image has already painted. In this scenario, if the image is the larger element and renders first, the image becomes the sole LCP candidate. The text, arriving later but smaller, does not displace it. The LCP timestamp captures the image paint time.
The Largest Contentful Paint Element May Change During Hydration
When the same text is server-rendered, it paints significantly earlier — before the image completes loading. If the text block occupies a larger visual area than the image at the moment of paint (which is common with large headings above a still-loading image placeholder), the text becomes the initial LCP candidate. When the hero image subsequently finishes loading and renders at a larger size, the browser dispatches a new LCP entry with the image’s paint timestamp. The final LCP value is the image’s render time, which has not changed from the CSR version. But in the CSR version, the image was already the LCP candidate at its original paint time, while in the SSR version, the LCP was “good” momentarily (the text) before being updated to a later timestamp (the image).
The web.dev LCP documentation confirms this behavior: only the element’s initial size and position in the viewport are considered, and changes to element size or position after initial render do not generate new LCP candidates. An element can only become a candidate after it has rendered and is visible. This means the order in which elements render directly determines whether the final LCP timestamp improves or worsens after an SSR migration, even when the user experience is objectively better.
Hydration Overhead and Its Impact on Render Timing
SSR frameworks deliver pre-rendered HTML from the server, but the page must still hydrate — the process of attaching JavaScript event listeners, re-initializing component state, and making the static HTML interactive. During hydration, the main thread is occupied with JavaScript execution, which can delay the rendering of subsequent elements.
If the hero image depends on a JavaScript-driven lazy loader, a framework component that only triggers image loading post-hydration, or a dynamically inserted src attribute, the image request does not begin until hydration completes. In a fully client-rendered approach, all rendering follows a single JavaScript-driven pipeline, and the image loading can be integrated into the same execution flow. With SSR, the HTML arrives early, but the image loading may be deferred until the framework finishes hydrating the component tree.
Angular 18 addressed this partially through partial hydration (introduced at ng-conf 2024), which allows non-critical components to defer hydration while above-the-fold content hydrates immediately. Lab testing showed 45% LCP improvements on real-world applications using this approach. Next.js streaming SSR and React Server Components similarly reduce the hydration cost by limiting the JavaScript that needs to execute before interactive rendering begins.
The practical implication is that SSR migrations must ensure the LCP image is either present in the server-rendered HTML as an <img> element with a src attribute (making it discoverable by the preload scanner before hydration) or explicitly preloaded via <link rel="preload">. If the image depends on client-side JavaScript to initiate loading, SSR provides no LCP benefit for that element and may worsen it by adding hydration overhead to the critical path.
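A hedged sketch of the contrast (file paths are hypothetical):

```html
<!-- Discoverable: the preload scanner finds this before hydration runs. -->
<img src="/hero.jpg" width="1200" height="600" alt="Hero" fetchpriority="high">

<!-- Not discoverable: the request waits for JavaScript to swap in the src. -->
<img data-src="/hero.jpg" class="lazy-load" alt="Hero">

<!-- Fallback when the markup must stay client-driven: preload explicitly. -->
<link rel="preload" as="image" href="/hero.jpg" fetchpriority="high">
```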
The Document Size Tradeoff: Larger HTML Means Slower First Byte to Parse Completion
Server-rendering the hero text and surrounding content adds HTML payload to the initial document response. For pages that were previously thin HTML shells with client-rendered content (a common SPA pattern), the increase in document size can be substantial — full server-rendered pages often contain 3-10x more HTML bytes than their shell counterparts.
This increased document size creates two secondary effects on LCP. First, the larger HTML response takes longer to transfer, particularly on bandwidth-constrained mobile connections. While TTFB (time to first byte) may remain similar if the server begins streaming the response promptly, the time to transfer the complete HTML document grows. The browser must receive and parse enough of the document to discover critical resources like CSS files, font files, and image elements referenced in the HTML.
Second, a larger document means the HTML parser and preload scanner process more bytes before reaching elements deep in the markup. If the LCP image is referenced later in the document, its discovery is delayed relative to the thinner shell approach where JavaScript initiated all resource loading simultaneously. The resource load delay sub-part of LCP may increase because CSS and font files compete for bandwidth with the larger HTML document during the initial loading phase.
Google’s guidance on SSR acknowledges this tradeoff explicitly: SSR’s primary advantage for LCP is that image resources become discoverable from the HTML source without waiting for JavaScript, but this benefit assumes the server responds quickly. Additional server processing time required to generate the full HTML can increase TTFB, and the recommendation is that this tradeoff is usually worth it because server processing times are within the site operator’s control.
Diagnosing LCP Element Identity Changes Between SSR and CSR
The web-vitals JavaScript library’s attribution build reports the specific DOM element selected as the LCP candidate through the attribution.lcpEntry.element property. Deploying this instrumentation before and after an SSR migration creates the dataset needed to determine whether the LCP regression is a measurement artifact or a genuine performance degradation.
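A minimal extraction sketch, assuming the field names of the web-vitals attribution build's LCP attribution object (timeToFirstByte, resourceLoadDelay, resourceLoadDuration, elementRenderDelay); sendToAnalytics in the comment is a hypothetical transport:

```javascript
// Sketch: shape a web-vitals/attribution LCP metric into a flat RUM record.
// In the browser this would be wired up roughly as:
//   import { onLCP } from 'web-vitals/attribution';
//   onLCP((metric) => sendToAnalytics(lcpRecord(metric)));
function lcpRecord(metric) {
  const a = metric.attribution;
  return {
    lcp: metric.value,
    element: a.element,                           // CSS selector of the LCP element
    ttfb: a.timeToFirstByte,                      // sub-part 1
    resourceLoadDelay: a.resourceLoadDelay,       // sub-part 2
    resourceLoadDuration: a.resourceLoadDuration, // sub-part 3
    elementRenderDelay: a.elementRenderDelay,     // sub-part 4
  };
}
```

The four sub-parts sum to the LCP value, which makes the record easy to sanity-check server-side.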
The diagnostic decision tree is straightforward:
- If the LCP element changed (for example, from an <h1> to an <img>), the regression is likely a measurement artifact of candidate selection timing. The user experience improved (text appeared earlier), but the metric worsened because the browser now assigns LCP to a larger element that renders later. This scenario does not require reverting the SSR migration — the optimization is working as intended, and the metric will stabilize or improve once the image loading is also optimized.
- If the LCP element stayed the same but the LCP timestamp increased, the regression is real. The most likely cause is increased TTFB from server rendering overhead or hydration-delayed image loading. Profiling the server rendering time and comparing the resource loading waterfall before and after migration identifies the specific bottleneck.
- If the LCP element stayed the same and LCP decreased, the migration succeeded for the intended reason — the server-rendered content provided the browser with earlier resource discovery.
Capturing both the LCP element identity and the four LCP sub-part timings (TTFB, resource load delay, resource load duration, element render delay) in RUM data makes this diagnosis deterministic rather than speculative.
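The decision tree can be sketched as a small classifier. The record shape is an assumption for illustration: each run is summarized as { element, lcp }, where element is a CSS selector for the LCP candidate and lcp is the reported value in milliseconds.

```javascript
// Illustrative classifier for a before/after SSR-migration LCP comparison.
// Each record: { element: 'css selector', lcp: milliseconds }.
function classifyRegression(before, after) {
  if (after.element !== before.element) {
    return 'measurement-artifact'; // candidate identity changed
  }
  if (after.lcp > before.lcp) {
    return 'genuine-regression';   // same element, later timestamp
  }
  return 'improvement';            // same element, earlier (or equal) timestamp
}
```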
Genuine SSR Performance Regressions and Their Root Causes
The LCP candidate selection artifact described above is the more common explanation for post-SSR regressions, but genuine performance degradation does occur in specific architectural patterns.
The primary cause is SSR implementations that block the response stream while generating dynamic content. If the server must query a database, call external APIs, or execute complex template logic before sending the first byte of HTML, TTFB increases by the duration of that server-side processing. A page that previously served a 2KB HTML shell in 50ms and then loaded everything client-side might now serve a 50KB server-rendered document that takes 500ms to generate. The TTFB sub-part of LCP absorbs this entire increase.
Streaming SSR architectures solve this by flushing the <head> section and above-the-fold HTML to the browser before completing the full page render. The browser receives the <head> early enough to discover and begin loading CSS, fonts, and preloaded images while the server continues generating the remainder of the document. Next.js App Router, React’s renderToPipeableStream, and Nuxt’s streaming mode all support this pattern. Without streaming, the browser receives nothing until the server completes the entire rendering pass.
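The ordering that matters can be illustrated with a framework-agnostic sketch (asset URLs are hypothetical): the <head> is flushed immediately so the browser can start fetching CSS, fonts, and the preloaded hero image while the server is still generating the body.

```javascript
// Illustrative streaming-SSR shape, not a framework implementation.
function streamPage(write, renderBody) {
  // Flushed first: resource discovery begins before the page is fully rendered.
  write(
    '<!doctype html><html><head>' +
      '<link rel="stylesheet" href="/app.css">' +
      '<link rel="preload" as="image" href="/hero.jpg">' +
      '</head><body>'
  );
  // renderBody may involve slow work: database queries, API calls, templates.
  return Promise.resolve(renderBody()).then((body) => {
    write(body);
    write('</body></html>');
  });
}
```

React's renderToPipeableStream achieves the same ordering via its onShellReady callback; the sketch above only shows why flushing the head early shortens resource discovery.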
A second genuine regression pattern involves client-side rendering that was already well-optimized. If a CSR implementation used aggressive preloading, inlined critical CSS, and initiated image loading early in the JavaScript execution pipeline, converting to SSR may not provide any discovery advantage while adding the costs of HTML generation, document transfer, and hydration. SSR is not a universal improvement — it provides the most benefit when replacing unoptimized CSR that delays resource discovery.
Does streaming SSR avoid the document size penalty that standard SSR introduces for LCP?
Yes, in most cases. Streaming SSR sends HTML chunks as they are generated, allowing the browser to begin parsing and discovering resources before the full document arrives. This shortens the gap between first byte and parse completion that inflates LCP in non-streaming SSR, where the browser receives no bytes at all until the server has generated the entire document. Streaming is particularly effective for large pages where full SSR produces document sizes exceeding 100KB.
Can partial hydration frameworks like Astro or Qwik eliminate the SSR hydration overhead that worsens LCP?
Partial hydration frameworks avoid hydrating static content sections entirely, which reduces the JavaScript execution cost that competes with LCP rendering on the main thread. Only interactive components hydrate, so the rendering budget previously consumed by full-page hydration becomes available for painting the LCP element sooner. The benefit is most measurable on pages where the LCP element is static text or an image surrounded by interactive widgets.
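In Astro, for example, the island pattern looks roughly like this (component names and paths are hypothetical; client:visible is Astro's hydration directive):

```astro
---
// Only InteractiveWidget ships and executes JavaScript in the browser.
import HeroText from '../components/HeroText.astro';
import InteractiveWidget from '../components/InteractiveWidget.jsx';
---
<HeroText />                         <!-- rendered to static HTML, never hydrates -->
<InteractiveWidget client:visible /> <!-- hydrates only when scrolled into view -->
```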
Does switching to SSR change which element the browser selects as the LCP candidate?
Frequently, yes. Under client-side rendering, the LCP element is often a placeholder or skeleton that displays before JavaScript injects the final content. SSR delivers the full content in initial HTML, causing the browser to select a different, typically larger element as the LCP candidate. This identity change means the measured LCP time reflects a different rendering pipeline, and direct before-and-after comparisons require verifying which element was measured in each scenario.