You converted your hero images to AVIF and saw file sizes drop by 40%. Your Lighthouse LCP improved because the test ran on a fast machine with hardware AV1 decoding. Your CrUX LCP got worse. The problem was not the format itself but the encoding profile: your AVIF encoder used maximum compression (speed 0 or 1 in libaom), which produces the smallest files but creates images that are computationally expensive to decode. On devices without hardware AV1 decoders, the high-compression AVIF took longer to decode than the larger JPEG took to download and decode combined. The encoding profile — not just the format selection — determines whether AVIF helps or hurts LCP.
How AVIF Encoding Speed Settings Affect Decode Complexity
AVIF encoders expose parameters that control the tradeoff between compression efficiency (smaller files) and encoding/decoding complexity. The most influential parameter is the speed setting, which in libaom (the reference AV1 encoder) ranges from 0 to 10, where 0 produces the smallest files with the slowest encoding and most complex bitstreams, and 10 produces the largest files with the fastest encoding and simplest bitstreams.
Lower speed values enable the encoder to search more exhaustively for optimal compression decisions: evaluating more partition sizes, testing more intra-prediction modes, applying stronger loop filtering, and exploring more rate-distortion optimization paths. Each additional optimization the encoder applies produces a bitstream that requires the decoder to perform the corresponding inverse operation. The encoder’s exhaustive search at speed 0 creates a bitstream that is maximally compressed but also maximally complex to decompress.
The decode cost increase from lower speed settings is not proportional to the file size improvement. Moving from speed 6 to speed 4 in libaom typically reduces file size by 5-8% while increasing software decode time by 30-50%. Moving from speed 4 to speed 0 may reduce file size by an additional 10-15% while increasing software decode time by 100-200%. The marginal file size savings shrink while the marginal decode cost accelerates — a diminishing-returns curve that makes maximum-compression encoding profiles particularly poor choices for LCP-critical images served to diverse device populations.
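The diminishing-returns curve can be made concrete with a back-of-the-envelope model. The file sizes and software-decode times below are illustrative assumptions consistent with the ranges quoted above, not benchmark results:

```python
# Illustrative model: an image's LCP contribution = download time + decode time.
# Sizes and software-decode times per libaom speed setting are assumed values,
# not measurements.

PROFILES = {
    # speed: (file_size_kb, software_decode_ms)
    6: (120, 150),
    4: (111, 210),   # ~7% smaller file, ~40% slower decode
    0: (97, 500),    # ~12% smaller again, but 2-3x the decode time
}

def lcp_cost_ms(file_size_kb: float, decode_ms: float, bandwidth_kbps: float) -> float:
    """Download time (ms) at the given bandwidth, plus decode time (ms)."""
    download_ms = file_size_kb * 8 / bandwidth_kbps * 1000
    return download_ms + decode_ms

# On a 5 Mbps connection with a software decoder, the "most optimized"
# speed-0 file loses to speed 6 despite being ~20% smaller:
for speed, (size_kb, decode_ms) in sorted(PROFILES.items()):
    print(f"speed {speed}: {lcp_cost_ms(size_kb, decode_ms, 5000):.0f} ms")
```

Under these assumed numbers, the speed-6 encode costs about 342 ms total while the speed-0 encode costs about 655 ms: the decode penalty swamps the download savings on a software decoder.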
The asymmetry is documented in AV1 codec benchmarking: the highest compression presets save relatively few additional bytes compared to moderate presets, but the decode complexity increase is substantial because the encoder enables increasingly exotic coding tools that produce increasingly expensive-to-decode bitstreams. The libaom documentation and the AV1 codec specification both describe this relationship between encoding speed presets and decode complexity.
The Encoding Parameters That Most Affect Decode Speed
Beyond the general speed parameter, specific encoding decisions independently affect decode complexity:
Tile partitioning. AVIF supports dividing the image into tiles that can be decoded independently (potentially in parallel). More tiles add per-tile header overhead and reduce compression efficiency, but enable parallel decoding on multi-core devices. Fewer tiles maximize compression but force sequential decoding. For LCP-critical images on multi-core devices, moderate tile counts (2-4 tiles for hero-sized images) balance compression with decode parallelism.
Film grain synthesis. AV1 supports encoding film grain parameters as metadata rather than encoding the grain itself into the pixel data. The encoder analyzes the source image’s noise pattern, strips it during encoding (producing a cleaner, more compressible image), and stores grain synthesis parameters that the decoder uses to regenerate noise during decode. This technique produces significantly smaller files for photographic content with visible grain, but adds per-pixel computation during decode as the decoder generates and applies the noise pattern. On software decoders with limited CPU budget, film grain synthesis can add 50-100ms of decode time to a hero image.
Transform block size. AV1 supports transform blocks up to 64×64 pixels (and in some profiles, 128×128). Larger transform blocks are more efficient for compressing smooth image regions, reducing file size. But larger transforms require more memory during decode (the decoder must allocate buffers proportional to block size) and more computation per block. On memory-constrained devices (2-4GB RAM), large transform blocks increase both decode time and memory pressure.
Loop filter strength. AV1’s deblocking filter, constrained directional enhancement filter (CDEF), and loop restoration filter improve visual quality at lower bitrates by smoothing compression artifacts after initial decode. Stronger filtering improves quality-per-byte but adds per-pixel post-processing work during decode. Each additional filtering pass adds decode time proportional to image dimensions.
An AVIF encoding profile optimized purely for file size typically maximizes all of these parameters: maximum compression speed, film grain synthesis enabled, large transform blocks, and strong loop filtering. The resulting images are the smallest possible files but also the most expensive to decode — a combination that helps LCP on hardware-decoder-equipped devices and hurts LCP on software-decoder devices.
The Device-Tier Asymmetry of AVIF Decode Performance
On devices with hardware AV1 decoders (typically flagship SoCs from 2021 onward), the encoding profile produces minimal decode time variation. Hardware decoders process even complex bitstreams quickly because the decode operations are implemented in fixed-function silicon optimized for AV1's specific transform, prediction, and filtering algorithms. A maximally compressed AVIF might take a hardware decoder 15ms; the same image re-encoded at a moderate compression level might take 12ms, a 3ms difference that is irrelevant to LCP.
On devices relying on software decoding (mid-tier and budget devices, older flagships prior to 2021), the encoding profile dramatically affects decode time. Software decoding executes the same algorithms on general-purpose CPU cores that also handle JavaScript execution, layout computation, and event processing. A highly compressed AVIF that decodes in 20ms on a Snapdragon 8 Gen 1’s hardware decoder may take 300-500ms on a Snapdragon 665’s CPU-based software decoder. The same image at a moderate compression level might software-decode in 150-200ms — still slower than hardware but substantially faster than the maximum-compression version.
Since CrUX measures the 75th percentile, and the 75th percentile for many global sites includes mid-tier devices from 2019-2021 without hardware AV1 decoders, the software-decode performance of the AVIF encoding profile often determines the CWV outcome. A profile optimized for minimum file size produces the worst software-decode performance, creating a scenario where the “most optimized” image produces the worst LCP for the users who define the CrUX score.
This device-tier asymmetry explains a common diagnostic pattern: Lighthouse shows excellent LCP (because the developer’s machine or Google’s testing infrastructure has hardware AV1 decoders), but CrUX shows degraded LCP after AVIF migration (because 75th percentile real users are on software-decode devices). The lab-field discrepancy is not a measurement error — it reflects a real performance difference caused by the encoding profile’s decode cost on the actual device population.
Calibrating AVIF Encoding for Decode Speed
The optimal AVIF encoding for LCP-critical images prioritizes decode speed alongside file size, using a moderate compression profile that sacrifices some compression efficiency for significantly faster decoding.
Specific encoding parameter recommendations for LCP-critical hero images:
- Speed 4-6 in libaom (or the equivalent preset in other encoders and tools such as rav1e, cavif, or libavif-based pipelines): moderate compression that avoids the most complex coding tools while still producing a meaningful file size reduction over JPEG. Speed 6 typically produces files 35-40% smaller than JPEG; speed 0 produces files 45-50% smaller. The 5-10% additional savings from speed 0 are not worth the 2-3x decode time increase on software decoders.
- Film grain synthesis disabled: for hero images where decode speed is critical, encoding the grain into the pixel data rather than synthesizing it during decode eliminates the per-pixel decode computation. The file size increase from disabling grain synthesis is typically 5-15% for photographic content.
- Moderate tile count: 2-4 tiles for images over 500px in either dimension, enabling parallel decode on multi-core devices. Single-tile encoding maximizes compression but prevents decode parallelism.
- Transform block size limited to 32×32 or smaller: avoiding 64×64 blocks reduces memory pressure during decode with modest file size increase.
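These recommendations translate into an encoder invocation roughly like the sketch below, which builds an `avifenc` (libavif) command line. The `-s` speed and `--tilerowslog2`/`--tilecolslog2` flags are standard avifenc options; `enable-tx64=0` is a libaom codec option passed through `-a`, and its exact spelling should be verified against `avifenc --help` for your installed version:

```python
def avifenc_command(src: str, dst: str, speed: int = 6, tile_log2: int = 1) -> list:
    """Build an avifenc command implementing the decode-friendly profile above.

    Assumptions: avifenc from libavif with a libaom backend; flag names
    should be checked against your installed version.
    """
    return [
        "avifenc",
        "-s", str(speed),                  # moderate speed: skip the most complex coding tools
        "--tilerowslog2", str(tile_log2),  # 2^1 = 2 tile rows...
        "--tilecolslog2", str(tile_log2),  # ...x 2 tile cols = 4 independently decodable tiles
        "-a", "enable-tx64=0",             # libaom option: cap transforms at 32x32 (assumed spelling)
        # Film grain synthesis stays disabled: avifenc only enables it when
        # explicitly asked (via a codec-specific denoise option), so we
        # simply do not pass one.
        src, dst,
    ]

cmd = avifenc_command("hero.jpg", "hero.avif")
```

Run through `subprocess.run(cmd)` in an image pipeline, or copy the equivalent flags into a shell script.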
Validation through field measurement: encode the hero image at multiple speed settings (e.g., speed 2, 4, 6, 8), deploy each version to a subset of traffic, and measure the decode delta (renderTime - loadTime from PerformanceElementTiming) segmented by device tier. The speed setting that minimizes total LCP (download time + decode time) at the 75th percentile device tier is the optimal profile for that site’s audience.
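The decode-delta aggregation can be sketched as a small analysis step over collected RUM beacons. The `renderTime`/`loadTime` fields mirror PerformanceElementTiming; the `tier` and `profile` labels, and the nearest-rank p75, are assumptions about how your own pipeline segments traffic:

```python
import math
from collections import defaultdict

def p75(values):
    """Nearest-rank 75th percentile of a non-empty list."""
    ordered = sorted(values)
    return ordered[math.ceil(0.75 * len(ordered)) - 1]

def decode_delta_by_tier(beacons):
    """Group decode deltas (renderTime - loadTime, in ms, as reported by
    PerformanceElementTiming) by (device tier, encoding profile).
    Field names 'tier' and 'profile' are hypothetical RUM labels."""
    groups = defaultdict(list)
    for b in beacons:
        groups[(b["tier"], b["profile"])].append(b["renderTime"] - b["loadTime"])
    return {key: p75(deltas) for key, deltas in groups.items()}

results = decode_delta_by_tier([
    {"tier": "low", "profile": "speed0", "renderTime": 900, "loadTime": 400},
    {"tier": "low", "profile": "speed0", "renderTime": 700, "loadTime": 400},
    {"tier": "low", "profile": "speed6", "renderTime": 600, "loadTime": 450},
    {"tier": "low", "profile": "speed6", "renderTime": 560, "loadTime": 440},
])
```

With these sample beacons, the low-tier p75 decode delta is 500 ms for the speed-0 profile versus 150 ms for speed 6, the kind of gap that would justify the moderate profile.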
Alternatively, test decode speed on a representative mid-tier device by connecting it via USB to Chrome DevTools and profiling the page load with the Performance panel. The “Image Decode” entry in the flame chart shows the exact decode duration for each image, enabling direct comparison between encoding profiles.
When JPEG or WebP Remains the Better Choice for LCP Images
For hero images that are the LCP element on pages serving a global audience with significant low-end device representation, optimized JPEG or WebP may produce better LCP at the 75th percentile than any AVIF encoding profile.
Optimized JPEG (progressive encoding, quality 75-80, chroma subsampling 4:2:0, Huffman optimization) benefits from near-universal hardware decoder support across all device tiers. A well-optimized JPEG at quality 75 is typically 15-25% larger than a moderate-compression AVIF but decodes 5-10x faster on software decoders. For sites where the 75th percentile device tier lacks hardware AV1 decoders, the download time increase from the larger JPEG is smaller than the decode time savings from hardware-accelerated JPEG decoding.
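On software-decode devices this tradeoff reduces to simple arithmetic: the JPEG wins whenever its extra download time is smaller than the AVIF's extra decode time. A sketch with illustrative numbers (assumptions, not measurements):

```python
def jpeg_wins(jpeg_kb, avif_kb, jpeg_decode_ms, avif_decode_ms, bandwidth_kbps):
    """True when the JPEG's extra download time is smaller than the AVIF's
    extra software-decode time (all values illustrative inputs)."""
    extra_download_ms = (jpeg_kb - avif_kb) * 8 / bandwidth_kbps * 1000
    extra_decode_ms = avif_decode_ms - jpeg_decode_ms
    return extra_download_ms < extra_decode_ms

# A JPEG 20% larger than the AVIF (144 KB vs 120 KB) but with 5x faster
# software decode (60 ms vs 300 ms): on a 5 Mbps link the JPEG's ~38 ms
# of extra download beats the AVIF's 240 ms of extra decode.
print(jpeg_wins(144, 120, 60, 300, 5000))
```

The same function also shows where the balance flips: on a very slow link (say 100 kbps), the extra kilobytes cost more than the decode savings and the AVIF wins again, which is why the calibration has to be per audience.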
WebP (quality 75-80, with VP8 hardware decode support on devices from 2018+) provides a middle ground: files 25-34% smaller than JPEG with decode complexity lower than AVIF. VP8 hardware decoders are present on a broader range of devices than AV1 hardware decoders, making WebP the safer next-gen format choice for LCP-critical images. Performance testing data consistently shows WebP as the format that optimizes for real-time decoding on mobile devices, especially on lower-end hardware (crystallize.com, speedvitals.com, 2025).
AVIF’s advantage is clearest for non-LCP images below the fold where decode time does not affect CWV metrics, and for audiences predominantly on hardware-decoder-equipped devices where AVIF’s superior compression translates directly to faster LCP. The emerging best practice segments format selection by image role: WebP or optimized JPEG for LCP-critical hero images, AVIF for below-the-fold content images where compression savings reduce data transfer without CWV timing impact.
Limitations: No Universal Encoding Profile Exists
The optimal AVIF encoding profile depends on three variables that differ across sites:
- Image content type: photographs with complex textures benefit more from high compression than graphics with flat colors and sharp edges. Film grain synthesis helps photographs but is irrelevant for screenshots or illustrations.
- Target device population: a US-focused site with predominantly flagship device users can use more aggressive compression than a globally-focused site with significant budget-device traffic.
- Network conditions: audiences on fast connections (fiber, 5G) gain little from AVIF’s extra compression over WebP, while audiences on slow connections (3G, congested LTE) gain proportionally more from every kilobyte saved.
An encoding profile calibrated for one combination of these variables will underperform for a different combination. Site-specific testing with field data — measuring actual LCP outcomes per encoding profile per device tier per network condition — is the only reliable calibration method. Generic guidance to “use AVIF for everything” or “always use maximum compression” ignores the decode-speed variable that determines whether the format conversion actually improves LCP for the users who define the CrUX score.
Can you use different AVIF encoding profiles for the same image served to different device tiers?
Yes. The CDN or image processing pipeline can generate multiple AVIF derivatives of the same image at different speed settings, then serve the appropriate version based on device-tier signals. Use Client Hints (Sec-CH-UA-Platform, device-memory) to identify low-end devices and serve the faster-decoding moderate-compression AVIF. Serve maximum-compression AVIF to devices with known hardware AV1 decoders. This approach requires additional storage and CDN configuration but optimizes LCP across the full device spectrum.
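A variant-selection edge function might look like the sketch below. The Device-Memory Client Hint is a real request header (quantized values like "0.5", "2", "8", in GiB); the 8 GiB threshold, the variant filenames, and the use of low memory as a proxy for missing hardware AV1 decode are assumptions to calibrate against your own traffic:

```python
def pick_avif_variant(headers):
    """Choose an AVIF derivative from normalized (lowercase-key) request
    headers. Uses the Device-Memory client hint as a rough device-tier
    proxy; thresholds and filenames are hypothetical."""
    raw = headers.get("device-memory")
    try:
        memory_gib = float(raw)
    except (TypeError, ValueError):
        memory_gib = 0.0  # no hint sent: assume the weaker tier
    if memory_gib >= 8:
        return "hero.speed2.avif"  # likely flagship: smallest file
    return "hero.speed6.avif"      # likely software decode: cheapest decode
```

For the hint to be sent at all, the server must first opt in with `Accept-CH: Device-Memory`, and the response should include `Vary: Device-Memory` so caches keep the variants separate.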
Does AVIF encoding speed affect visual quality at the same file size, or only decode performance?
Both. Lower speed settings (higher compression) produce better visual quality at the same file size because the encoder searches more exhaustively for optimal compression decisions. At speed 0, the encoder finds coding choices that preserve more detail per byte than speed 6. However, the quality difference between speed 4 and speed 0 at the same target file size is typically imperceptible to human viewers, while the decode cost difference is substantial on software decoders.
How do image CDNs like Cloudinary and imgix handle AVIF encoding speed settings by default?
Most image CDNs use moderate encoding speed presets (typically speed 4-6 equivalent) that balance compression efficiency against processing cost and decode speed. CDN providers optimize for throughput since they encode millions of images daily, making maximum-compression speed 0 encoding impractical at scale. This default behavior generally produces AVIF files that decode reasonably fast on most devices. Override the default only after field-testing confirms that a different speed setting improves LCP for your specific audience.
Sources
- https://speedvitals.com/blog/webp-vs-avif/
- https://crystallize.com/blog/avif-vs-webp
- https://unifiedimagetools.com/en/articles/avif-webp-jpegxl-comparison-2025
- https://imagerobo.com/blogs/image-formats-comparison-jpeg-webp-avif
- https://coretoolshub.com/blog/avif-vs-webp-vs-jpeg-which-image-format-should-you-use-in-2025/