AVIF images can be 30-50% smaller than equivalent JPEGs at the same visual quality. On a flagship phone, this file size reduction translates directly to faster LCP: the smaller file downloads faster, and dedicated hardware decodes it quickly. On a low-end device with a Snapdragon 400-series chipset and no hardware AV1 decoder, the same file downloads faster but decodes slower — sometimes significantly slower than a larger JPEG that the hardware can decode natively. The net LCP impact can be negative: you save 200ms on download but lose 400ms on decode. Understanding the image decoding pipeline explains why file size reduction does not guarantee LCP improvement across the device spectrum.
The Image Decoding Pipeline and Where LCP Measurement Ends
LCP measures the time from navigation start to the moment the largest contentful element — frequently an image — is rendered (painted to the screen). For image elements, the rendering requires the image to be fully decoded from its compressed format into a raw bitmap in memory. The decoding pipeline operates in three sequential stages, all of which contribute to the total time LCP captures:
Stage 1 — Network transfer (download): the compressed image file travels from the CDN edge to the browser. Transfer duration depends on file size and available bandwidth. Smaller files (AVIF, WebP) complete this stage faster than larger files (JPEG, PNG) at equivalent visual quality.
Stage 2 — Decode (decompression): the browser converts the compressed bitstream into a raw pixel buffer. Decode duration depends on the format’s algorithmic complexity, the image dimensions, and the device’s hardware decoder availability. More sophisticated compression algorithms (AVIF’s AV1 intra-frame encoding) require more computation to decompress than simpler algorithms (JPEG’s DCT-based encoding).
Stage 3 — Render (compositing): the decoded bitmap is composited into the painted frame on screen. Render duration depends on image dimensions, layer complexity, and GPU compositing capacity.
LCP includes all three stages. Both the LargestContentfulPaint entry and the PerformanceElementTiming API report loadTime (when the network transfer completes) and renderTime (when the decoded image is painted). The delta between renderTime and loadTime captures the combined decode and render duration. Optimizing only Stage 1 (smaller files through better compression) while ignoring Stage 2 (format-specific decompression cost) can produce a net LCP regression if the added decode cost exceeds the transfer cost savings.
Position confidence: Confirmed through the LCP specification, which explicitly includes image decode and render time in the metric’s measurement scope, and through the PerformanceElementTiming API documentation that exposes the timing breakdown.
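The decode + render delta described above can be collected with an element-timing observer. A minimal sketch, assuming the LCP image carries an elementtiming attribute; the observer registration is browser-only, while the delta computation itself is a pure function:

```typescript
// Shape shared by LargestContentfulPaint and PerformanceElementTiming entries.
interface ImageTimingEntry {
  loadTime: number;   // network transfer complete
  renderTime: number; // decoded pixels painted
}

// Decode + render duration for one entry. renderTime is 0 for
// cross-origin images served without Timing-Allow-Origin, in which
// case the delta is meaningless and we return null.
function decodeRenderDelta(entry: ImageTimingEntry): number | null {
  if (entry.renderTime === 0) return null;
  return entry.renderTime - entry.loadTime;
}

// Browser-only registration, guarded so the sketch also runs outside a browser.
const g = globalThis as any;
if (typeof g.document !== "undefined" && typeof g.PerformanceObserver !== "undefined") {
  new g.PerformanceObserver((list: any) => {
    for (const entry of list.getEntries()) {
      const delta = decodeRenderDelta(entry);
      if (delta !== null) {
        // Ship { identifier: entry.identifier, delta } to the RUM endpoint here.
      }
    }
  }).observe({ type: "element", buffered: true });
}
```

The observed delta is what the later diagnostic thresholds are applied to, segmented by device tier.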
Why Next-Gen Format Decoding Is Computationally Expensive
AVIF uses AV1 intra-frame encoding, which employs significantly more sophisticated compression techniques than JPEG’s baseline DCT encoding. These include larger variable-size transform blocks (up to 128×128 pixels versus JPEG’s fixed 8×8), a wider range of intra-prediction modes (over 50 directional modes versus JPEG’s zero), loop filtering and constrained directional enhancement filtering (post-processing that improves visual quality but adds decode computation), and optional film grain synthesis (generating noise during decode rather than storing it in the file). Each technique reduces file size by encoding visual information more efficiently, but each adds computational work during decoding.
WebP uses VP8 intra-frame encoding, which sits between JPEG and AVIF in complexity. VP8 uses 4×4 and 16×16 transform blocks with four 16×16 intra-prediction modes (ten at the 4×4 level) and a simple loop filter. WebP’s decode complexity is lower than AVIF’s but higher than JPEG baseline’s. WebP lossy typically delivers files 25-34% smaller than equivalent JPEGs, while AVIF achieves approximately 50% reduction at comparable quality (speedvitals.com, 2025).
JPEG baseline uses a simple 8×8 DCT transform with no prediction modes and no post-processing filters. The algorithm is computationally straightforward, enabling extremely fast decoding even on minimal hardware. JPEG’s simplicity is its performance advantage: decades of hardware optimization have produced dedicated JPEG decode circuits in virtually every mobile SoC and GPU manufactured in the last 15 years.
The compression-versus-decode tradeoff is not symmetric. Moving from JPEG to WebP reduces file size by 25-34% while increasing decode complexity moderately. Moving from JPEG to AVIF reduces file size by 40-50% while increasing decode complexity substantially. The marginal file size savings diminish (the step from 34% to 50% is smaller than the step from 0% to 34%) while the marginal decode cost accelerates (from moderate to substantial). This asymmetry means AVIF’s advantage over WebP is smaller in file size savings but larger in decode cost penalty — a ratio that favors WebP for LCP-critical images on devices where decode speed matters.
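The tradeoff can be made concrete with a toy cost model: total image time = download time + decode time. All numbers below (file sizes, per-megapixel decode costs, bandwidth) are illustrative assumptions, not measurements:

```typescript
// Toy three-stage cost model; render cost is omitted for simplicity.
interface FormatProfile {
  bytes: number;         // file size at equivalent visual quality (assumed)
  decodeMsPerMp: number; // decode cost per megapixel on this device (assumed)
}

function totalImageMs(p: FormatProfile, megapixels: number, bytesPerMs: number): number {
  return p.bytes / bytesPerMs + p.decodeMsPerMp * megapixels;
}

// Hypothetical 2 MP hero image on a budget device with hardware JPEG
// decode but only software AV1 decode:
const jpeg = { bytes: 150_000, decodeMsPerMp: 25 };
const avif = { bytes: 75_000, decodeMsPerMp: 200 };

// Slow connection (100 bytes/ms ≈ 0.8 Mbps): AVIF wins on download.
totalImageMs(jpeg, 2, 100);  // 1500 + 50  = 1550 ms
totalImageMs(avif, 2, 100);  // 750  + 400 = 1150 ms

// Fast connection (2000 bytes/ms ≈ 16 Mbps): decode cost dominates.
totalImageMs(jpeg, 2, 2000); // 75   + 50  = 125 ms
totalImageMs(avif, 2, 2000); // 37.5 + 400 = 437.5 ms
```

The crossover between the two connection speeds is the same asymmetry the section describes: once download is cheap, AVIF's decode penalty can outweigh its size advantage.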
The Hardware Decoder Landscape Across Device Tiers
Whether a next-gen format helps or hurts LCP depends on whether the user’s device has a hardware decoder for that format. Hardware decoders process image decompression on dedicated silicon that operates independently of the CPU, completing the work in a fraction of the time software decoding requires.
AVIF hardware decoding requires AV1 decode capability in the device’s SoC. Hardware AV1 decoders arrived in flagship chipsets over roughly 2020-2023, depending on the vendor: MediaTek Dimensity 1000 and later, Samsung Exynos 2100 and later, Qualcomm Snapdragon 8 Gen 2 and later, and Apple A17 Pro and later. Mid-tier and budget chipsets — Snapdragon 600-series, MediaTek Helio series, and older Exynos models — lack hardware AV1 decoders and fall back to software decoding on the CPU. Software AV1 decoding is functional but slow, particularly on the limited CPU cores in budget devices.
WebP hardware decoding relies on VP8 hardware decode capability, which is significantly more widespread. VP8 hardware decoders have been present in most mobile SoCs from 2018 onward, including mid-tier and budget chipsets. This broader hardware support means WebP decoding is hardware-accelerated on a much larger proportion of the active device population than AVIF.
JPEG hardware decoding has near-universal support across all device tiers. Even the most budget-oriented chipsets include dedicated JPEG decode hardware because the format has been the web’s dominant image format for three decades.
The device-tier distribution of the site’s user base determines whether next-gen formats help or hurt LCP at the 75th percentile. If 75th percentile users are on devices from 2021+ with hardware AV1 decoders, AVIF’s file size reduction translates to LCP improvement. If 75th percentile users include significant numbers of devices from 2019-2020 without hardware AV1 support, AVIF’s decode penalty may negate or exceed its file size benefit.
Measuring Decode Time in the Field
Field measurement of decode time is essential for making data-driven format selection decisions. The PerformanceElementTiming API, which shares its loadTime/renderTime timing model with LCP, reports both values for elements annotated with the elementtiming attribute (for cross-origin images, renderTime is exposed only when the image response carries a Timing-Allow-Origin header). The delta (renderTime - loadTime) captures the combined decode and render duration. Collecting this delta in RUM, segmented by device tier (using navigator.deviceMemory and navigator.hardwareConcurrency as proxies), reveals whether decode time is a material LCP contributor for the site’s specific audience.
Diagnostic thresholds for the decode delta:
- Under 50ms across device tiers: decode time is not a meaningful LCP factor. Format selection should prioritize file size reduction (AVIF preferred).
- 50-150ms on mid-tier devices: decode time is a moderate LCP contributor. WebP may outperform AVIF on affected devices despite larger file sizes.
- Over 200ms on low-end devices: decode time is the dominant LCP factor for those users. Optimized JPEG or WebP should be served to those devices, with AVIF reserved for devices with confirmed hardware decoder support.
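The thresholds above can be sketched as a classifier applied to each RUM sample. The bucket names, and the treatment of the unspecified 150-200ms gap as "moderate", are assumptions:

```typescript
// Classify a field-measured decode delta (renderTime - loadTime, in ms)
// against the diagnostic thresholds.
type DecodeVerdict = "negligible" | "moderate" | "dominant";

function classifyDecodeDelta(deltaMs: number): DecodeVerdict {
  if (deltaMs < 50) return "negligible"; // prioritize file size: AVIF preferred
  if (deltaMs <= 200) return "moderate"; // WebP may outperform AVIF here
  return "dominant";                     // serve optimized JPEG or WebP instead
}
```

Aggregating verdicts per device-tier segment shows which segments should be excluded from AVIF delivery.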
Chrome DevTools’ Performance panel also shows image decode operations in the flame chart, labeled “Image Decode.” The duration of these entries during LCP measurement provides lab-based confirmation of field observations. Position confidence: Confirmed through the PerformanceElementTiming API specification and Chrome DevTools documentation.
The Optimal Strategy: Format Selection by Device Capability
The ideal approach serves different formats to different devices, matching the format’s decode requirements to the device’s hardware capabilities:
Content negotiation via Accept header: the browser’s Accept request header declares supported image formats. If Accept includes image/avif, the device’s browser supports AVIF (though this does not confirm hardware decode support). If it includes image/webp but not image/avif, the browser supports WebP but not AVIF. CDN-based content negotiation inspects the Accept header and serves the most efficient supported format. The Vary: Accept response header ensures CDN edge caches maintain separate cached versions per format.
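A minimal sketch of that negotiation logic as it might run at a CDN edge; the function name is hypothetical, and the response would also need to carry Vary: Accept so caches keep per-format variants:

```typescript
// Pick the most efficient image format the browser advertises via Accept.
function pickImageFormat(acceptHeader: string): "image/avif" | "image/webp" | "image/jpeg" {
  // Strip quality parameters (";q=0.8") and whitespace from each media type.
  const accepts = acceptHeader.split(",").map((t) => t.split(";")[0].trim());
  if (accepts.indexOf("image/avif") !== -1) return "image/avif";
  if (accepts.indexOf("image/webp") !== -1) return "image/webp";
  return "image/jpeg"; // universal fallback
}

// A Chrome-style Accept header selects AVIF:
pickImageFormat("image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8");
// → "image/avif"
```

Note the caveat from the text: an Accept match confirms format support, not hardware decode support.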
Client hints for device capability inference: the Sec-CH-UA-Platform and Sec-CH-UA-Platform-Version client hints provide operating system and version information that can be correlated with device capability databases. The Save-Data client hint indicates users who have opted for reduced data usage, suggesting network-constrained conditions where file size reduction (AVIF) may be more valuable despite decode cost.
The pragmatic default: for sites that cannot implement per-device format selection, serving WebP as the universal next-gen format with JPEG as the fallback provides the best risk-adjusted LCP outcome. WebP has broad hardware decoder support (2018+ devices), delivers meaningful file size reduction (25-34% over JPEG), and avoids the decode penalty risk that AVIF introduces on devices without hardware AV1 decoders. AVIF can be served selectively to users on confirmed-capable devices where the additional file size savings translate to faster LCP without decode cost risk.
Limitations: When File Size Wins Regardless of Decode Cost
On very slow networks (2G, slow 3G, congested cellular connections), the download time savings from smaller file sizes dominate even large decode cost penalties. A 50KB AVIF that takes 500ms to download on slow 3G and 300ms to decode in software still loads faster than a 150KB JPEG that takes 1,500ms to download and 50ms to decode. The total LCP for the AVIF (800ms) is half the JPEG’s (1,550ms).
This network-speed dependency means the optimal format choice is a function of both device capability (hardware decoder availability) and connection quality (bandwidth). High bandwidth + no hardware decoder favors JPEG or WebP (fast download anyway, fast decode). Low bandwidth + hardware decoder favors AVIF (maximum file size savings, fast hardware decode). Low bandwidth + no hardware decoder is the ambiguous case where the tradeoff depends on the specific file size difference and decode cost for the particular image.
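The dual-variable decision can be sketched as a small function combining connection quality (navigator.connection.effectiveType) with an inferred hardware-decoder flag. The mapping is an assumption for illustration, and it resolves the ambiguous low-bandwidth/no-decoder case in favor of download savings; a real implementation would tune this from field data:

```typescript
// Format choice as a function of bandwidth and hardware AV1 decode support.
type EffectiveType = "slow-2g" | "2g" | "3g" | "4g";

function chooseFormat(effectiveType: EffectiveType, hasHwAv1: boolean): "avif" | "webp" {
  const slowNetwork = effectiveType !== "4g";
  if (hasHwAv1) return "avif";    // fast decode: take the maximum file size savings
  if (slowNetwork) return "avif"; // download time dominates even a software decode penalty
  return "webp";                  // fast network + software AV1: avoid the decode penalty
}
```
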
This dual-variable optimization is why a one-size-fits-all format recommendation is technically unsound. Field data capturing both decode time (via PerformanceElementTiming) and network conditions (via navigator.connection.effectiveType) enables the data-driven format selection that maximizes LCP across the actual user population.
Does the decoding="async" attribute on img elements prevent image decode time from affecting LCP?
No. The decoding="async" attribute allows the browser to defer image decoding to avoid blocking other content rendering, but LCP is not recorded until the image is fully decoded and painted. Using decoding="async" on the LCP image may actually delay the LCP timestamp because the browser deprioritizes the decode operation. For LCP-critical images, omit decoding="async" or use decoding="sync" to ensure the browser decodes the image as soon as the data is available.
Can progressive JPEG encoding improve perceived LCP even if the total decode time is similar to baseline JPEG?
No, not for LCP measurement purposes. Progressive JPEGs render a low-quality preview during download and refine quality in subsequent passes. However, LCP measures the final render of the full-quality image, not the initial low-quality preview. The browser reports the LCP timestamp when the complete image is painted, which means progressive encoding does not accelerate the LCP metric. Progressive encoding may improve perceived visual loading but has no effect on the measured LCP value.
How can you determine whether a specific user’s device has a hardware AV1 decoder before serving AVIF?
Direct hardware decoder detection is not available through browser APIs. The Accept header confirms AVIF format support but does not distinguish hardware from software decoding. The practical approach uses device-tier proxies: navigator.deviceMemory and navigator.hardwareConcurrency identify low-end devices likely lacking hardware AV1 support. Alternatively, collect field decode-time data segmented by device characteristics and serve AVIF only to device segments where measured decode performance confirms fast decoding.
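The device-tier proxy approach can be sketched as a pure heuristic over the two signals mentioned. The thresholds are illustrative assumptions, meant to be replaced by cuts validated against field decode-time data:

```typescript
// Heuristic: infer "likely lacks hardware AV1 decode" from
// navigator.deviceMemory (GB) and navigator.hardwareConcurrency (cores).
function likelyLacksHwAv1(deviceMemoryGb: number | undefined, cores: number | undefined): boolean {
  // Signals may be absent (e.g. deviceMemory is Chromium-only);
  // treat missing data conservatively and assume no hardware decoder.
  if (deviceMemoryGb === undefined || cores === undefined) return true;
  return deviceMemoryGb <= 4 || cores <= 4; // assumed low-end cutoffs
}

// In the browser this would be called as:
// likelyLacksHwAv1((navigator as any).deviceMemory, navigator.hardwareConcurrency)
```

Segments flagged by this heuristic would receive WebP or optimized JPEG, with AVIF reserved for the remainder.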