Measuring and Optimizing Web Site Performance: Metrics, Collection Techniques, and Evaluation
This article explains how ByteDance measures website performance, describing key front‑end metrics such as FP, FCP, LCP, TTI, TBT, FID, and MPFID, the APIs used to collect them, and how to evaluate overall site health and guide targeted optimizations.
Author: Peng Li, APM R&D Engineer at Volcano Engine. Joined ByteDance in 2020, responsible for front‑end monitoring SDK development and platform data consumption.
Background
Do you know how many users leave before the first screen appears? Poor performance harms business goals; for example, BBC found that each additional second of load time costs them 10% of users. High‑performance sites attract and retain users, which is crucial for conversion.
This article introduces how ByteDance measures site performance internally and how performance monitoring is used to locate online performance issues.
How to Measure Site Performance
Site performance cannot be judged solely by page load or render speed; it must consider the entire lifecycle from start of loading to page close, i.e., the user’s perceived performance. Even a fast‑rendering page feels slow if interactions are unresponsive.
Performance is generally divided into two categories: first‑screen performance (from load start to stable interaction) and runtime performance (from stable state to page close).
First‑Screen Performance
In 2012 the Web Performance Working Group defined a model of the page‑loading process that lets developers measure each stage of a load. Key timestamps include:
domLoading – start of HTML parsing
domInteractive – DOM parsed, start loading sub‑resources
domComplete – document parsing finished
loadEventStart – load event triggered
Developers can use these timestamps, but users only perceive four stages: when rendering starts, when main content appears, when the page becomes interactive, and whether there is interaction delay.
When Rendering Starts: FP & FCP
FP (First Paint): the moment the browser first paints anything to the screen, even just a background color.
FCP (First Contentful Paint): the moment the first piece of DOM content (text, an image, etc.) is painted.
Both metrics come from the Paint Timing standard.
When Main Content Is Rendered: FMP, LCP & SI
FMP (First Meaningful Paint): the moment the page's primary, most meaningful content is painted.
LCP (Largest Contentful Paint): when the largest element in the viewport becomes visible.
SI (Speed Index): measures how quickly content is visually populated during load; because it requires frame‑by‑frame visual analysis, it is rarely used in production monitoring.
Industry tests show LCP closely matches FMP, while FMP is costly and unstable, so LCP is recommended.
When the Page Becomes Interactive: TTI & TBT
TTI (Time to Interactive): point when the page is visually complete and can reliably respond to user input.
TBT (Total Blocking Time): time between FCP and TTI spent in long tasks, quantifying main‑thread busy‑ness.
TTI alone does not show thread load; combined with TBT it reveals how long the page cannot respond.
Interaction Delay: FID & MPFID
FID (First Input Delay): the time from a user's first interaction (e.g., a click) to the moment the browser can actually begin processing it.
MPFID (Max Potential First Input Delay): the longest possible delay for the first interaction during page load.
FID reflects the real user experience; MPFID is a theoretical worst‑case. FID is generally used for performance scoring.
Runtime Performance
Runtime performance is sensed via Long Tasks and Input Delay. Long Tasks are tasks longer than 50 ms on the main thread; tracking them helps locate stalls.
Long Tasks
If a task runs over 50 ms it is a Long Task. Correlating long tasks with user actions helps pinpoint the cause of stutter.
Input Delay
Input Delay originates from the Event Timing standard and is usually caused by long JavaScript execution.
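As a rough sketch of how the runtime signals above can be collected, the snippet below observes `longtask` entries and sums their blocking time (only the portion of each task beyond 50 ms counts as blocking). `summarizeLongTasks` is our own helper name, not a standard API:

```javascript
// Sketch: collect Long Tasks at runtime and summarize their blocking time.
// Only the excess over the 50 ms threshold counts as "blocking".
function summarizeLongTasks(entries) {
  return entries.reduce((total, e) => total + Math.max(0, e.duration - 50), 0);
}

// Browser-only: register an observer for long tasks.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver(list => {
    const blocking = summarizeLongTasks(list.getEntries());
    console.log('blocking time in this batch (ms):', blocking);
  }).observe({ type: 'longtask', buffered: true });
}
```

Correlating these entries' timestamps with user input events (from the Event Timing API) is what lets monitoring attribute a stall to a specific long task.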
How Performance Metrics Are Collected
Collecting Navigation Timing
Page‑load timestamps rely on the Navigation Timing API (now Navigation Timing 2). Example:
// Navigation Timing Level 1 (deprecated, but widely supported)
const timing = window.performance.timing;
// Navigation Timing 2 (preferred)
const [navigationEntry] = performance.getEntriesByType('navigation');
Collecting FP & FCP
FP and FCP can be obtained directly:
window.performance.getEntriesByType('paint');
// or
window.performance.getEntriesByName('first-paint');
window.performance.getEntriesByName('first-contentful-paint');
If the page has not painted yet, a PerformanceObserver can listen for paint entries:
const observer = new PerformanceObserver(list => {
const perfEntries = list.getEntries();
// process entries
});
observer.observe({entryTypes: ['paint']});
Collecting LCP
LCP is observed via PerformanceObserver:
new PerformanceObserver(entryList => {
for (const entry of entryList.getEntries()) {
console.log('LCP candidate:', entry.startTime, entry);
}
}).observe({type: 'largest-contentful-paint', buffered: true});
The final LCP is the last value reported before the first user interaction.
Collecting FMP
FMP requires algorithmic estimation; ByteDance approximates it by finding the moment with the most drastic DOM‑structure change using a MutationObserver.
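ByteDance's exact algorithm is not published here, but the idea can be sketched: score each batch of DOM mutations and keep the timestamp of the largest change. The scoring below (count of added nodes) is a stand‑in for illustration only, not the actual production heuristic:

```javascript
// Illustrative scoring: how "drastic" a batch of DOM mutations is.
// A real implementation would weight by element visibility, size, etc.
function mutationScore(mutations) {
  return mutations.reduce((n, m) => n + m.addedNodes.length, 0);
}

let fmpCandidate = { time: 0, score: 0 };

// Browser-only: track the moment of the most drastic DOM change.
if (typeof MutationObserver !== 'undefined' && typeof document !== 'undefined') {
  const mo = new MutationObserver(mutations => {
    const score = mutationScore(mutations);
    if (score > fmpCandidate.score) {
      fmpCandidate = { time: performance.now(), score };
    }
  });
  mo.observe(document.documentElement, { childList: true, subtree: true });
  // Once the page settles, fmpCandidate.time approximates FMP.
}
```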
Collecting TTI & TBT
TTI is derived by searching, after FCP, for a quiet window at least 5 s long with no long tasks and no more than 2 in‑flight GET requests. TTI is the end of the last long task before that window (or FCP itself if there is none). TBT is the sum of the blocking portions (time beyond 50 ms) of long tasks between FCP and TTI.
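The quiet‑window search can be sketched over simplified inputs, with long tasks given as `{ start, end }` pairs in milliseconds. `computeTTI` and `computeTBT` are our own names, not a standard API, and the in‑flight request condition is omitted for brevity:

```javascript
// Sketch of the TTI search: candidates are FCP and the end of each
// long task after FCP; TTI is the earliest candidate followed by a
// quiet window containing no long tasks. (The real definition also
// requires ≤2 in-flight GET requests, which is omitted here.)
function computeTTI(fcp, longTasks, quietWindow = 5000) {
  const candidates = [fcp, ...longTasks.filter(t => t.end > fcp).map(t => t.end)];
  return candidates.sort((a, b) => a - b).find(c =>
    !longTasks.some(t => t.start >= c && t.start < c + quietWindow)
  );
}

// TBT: sum the portion of each long task beyond 50 ms within [FCP, TTI].
function computeTBT(fcp, tti, longTasks) {
  return longTasks
    .filter(t => t.end > fcp && t.start < tti)
    .reduce((sum, t) => sum + Math.max(0, t.end - t.start - 50), 0);
}
```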
Collecting FID & MPFID
FID is observed with PerformanceObserver:
new PerformanceObserver((list, obs) => {
const firstInput = list.getEntries()[0];
const firstInputDelay = firstInput.processingStart - firstInput.startTime;
const firstInputDuration = firstInput.duration;
const targetId = firstInput.target ? firstInput.target.id : 'unknown-target';
// process delay and duration
obs.disconnect();
}).observe({type: 'first-input', buffered: true});
MPFID is the duration of the longest Long Task after FCP.
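Given long task entries (which expose `startTime` and `duration`), MPFID reduces to a simple maximum. `computeMPFID` is our own helper name:

```javascript
// Sketch: MPFID is the duration of the longest long task after FCP.
function computeMPFID(fcp, longTasks) {
  return longTasks
    .filter(t => t.startTime >= fcp)
    .reduce((max, t) => Math.max(max, t.duration), 0);
}

// In the browser, the entries would come from a PerformanceObserver:
// new PerformanceObserver(list => {
//   console.log('MPFID:', computeMPFID(fcpValue, list.getEntries()));
// }).observe({ type: 'longtask', buffered: true });
```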
Evaluating Overall Site Performance
Google provides baseline values for each metric, but they evolve over time and differ across platforms (e.g., iOS vs Android). ByteDance aligns its internal thresholds with Google’s recommendations.
Overall site satisfaction also considers visual‑stability metrics such as CLS. ByteDance’s satisfaction score removes SI and TBT from the Lighthouse weighting.
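A Lighthouse‑style satisfaction score is a weighted sum of normalized per‑metric scores. The weights below are illustrative only, not ByteDance's actual values; per the text, SI and TBT are simply excluded from the weighting:

```javascript
// Hypothetical weights for illustration; NOT ByteDance's actual values.
// SI and TBT are omitted, matching the approach described above.
const WEIGHTS = { fcp: 0.2, lcp: 0.35, tti: 0.25, fid: 0.1, cls: 0.1 };

// scores: per-metric scores already normalized to [0, 1].
function satisfactionScore(scores) {
  return Object.entries(WEIGHTS)
    .reduce((sum, [metric, w]) => sum + w * (scores[metric] ?? 0), 0);
}
```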
How to Optimize Site Performance
Optimizations target the dependencies of each metric. For example, improving TTI focuses on reducing FCP, minimizing request time, and eliminating long tasks.
Targeted, impact‑driven optimization—reproducing the user’s loading timeline to locate bottlenecks—yields the fastest results.
Using Online Monitoring to Locate Problems
Front‑end monitoring platforms collect performance metrics, resource waterfall data, and long‑task logs, enabling reconstruction of the user’s loading experience.
When multiple metrics are poor, the waterfall can reveal that most time is spent fetching resources, suggesting measures such as reducing JS bundle size or lazy‑loading unused code.
ByteDance’s monitoring solution is now available on Volcano Engine, offering real‑time data, alerts, clustering, and detailed diagnostics for white‑screen, performance bottlenecks, and slow queries.
Related Resources
Web Performance Working Group : https://www.w3.org/webperf/
Paint Timing : https://w3c.github.io/paint-timing/
Event Timing : https://w3c.github.io/event-timing/
Navigation Timing : https://www.w3.org/TR/navigation-timing/
Navigation Timing 2 : https://www.w3.org/TR/navigation-timing-2/
ByteDance Terminal Technology
Official account of ByteDance Terminal Technology, sharing technical insights and team updates.