Understanding Web Performance Metrics and Implementing Monitoring with Sentry
This article explains the key web performance indicators (FP, FCP, LCP, TTI, TBT, FID, and CLS), how to obtain them using the Performance API and PerformanceObserver, and provides step-by-step guidance on configuring Sentry for automated performance monitoring, including sample JavaScript code for custom calculations.
Measuring a website's performance typically focuses on two aspects: the first-paint metrics that shape the initial user experience, and the smoothness of interactions after the page loads. Good performance improves user acquisition and retention.
Traditional browser tools such as the Performance and Network panels and Lighthouse help analyze loading phases, but they cannot fully represent real-world user conditions such as varied devices, networks, and traffic patterns. To capture authentic user experiences, performance monitoring services such as Sentry or Fundebug can collect first-paint, interaction, and other metric data from actual users and visualize them for analysis.
Common Performance Optimization Metrics and How to Retrieve Them
The W3C defines a comprehensive set of performance data. Important user‑centric metrics include:
First Paint (FP) and First Contentful Paint (FCP): FP marks the moment the browser renders anything on the screen; FCP marks the first non-blank content (text, image, canvas, SVG). Both can be obtained via performance.getEntries(), performance.getEntriesByName(), or a PerformanceObserver observing the paint entry type.
First Meaningful Paint (FMP), Speed Index (SI) and Largest Contentful Paint (LCP): FMP (now deprecated) measured the first meaningful rendering, SI measures how quickly content is visually populated, and LCP records the time at which the largest visible element is painted. LCP can be observed with a PerformanceObserver on the largest-contentful-paint entry type.
Time to Interactive (TTI) and Total Blocking Time (TBT): TTI indicates when the page becomes reliably interactive; TBT measures the cumulative blocking time of long tasks between FCP and TTI. Both are reported by Lighthouse, but can be approximated manually by using a PerformanceObserver to collect long-task entries and applying the quiet-window algorithm.
First Input Delay (FID) and Max Potential FID (MPFID): FID measures the delay between the user's first interaction and the moment the browser can begin handling it; MPFID is a theoretical worst-case value (now deprecated). FID is observable via a PerformanceObserver on the first-input entry type; MPFID can be approximated from the longest long task after FCP.
Cumulative Layout Shift (CLS) : CLS quantifies unexpected layout movements throughout the page’s lifecycle. It can be observed with a PerformanceObserver on the layout-shift entry type.
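FID and CLS from the list above can be captured directly with a PerformanceObserver. A minimal sketch, with the per-entry arithmetic factored into plain functions (fidFromEntry and accumulateCls are names chosen here for illustration, not standard APIs):

```javascript
// FID = delay between the user's first input and the moment its handler starts
function fidFromEntry(entry) {
  return entry.processingStart - entry.startTime;
}

// CLS = running sum of layout-shift scores, skipping shifts caused by recent user input
function accumulateCls(total, entry) {
  return entry.hadRecentInput ? total : total + entry.value;
}

// Browser-only wiring
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver(list => {
    list.getEntries().forEach(entry => {
      console.log('FID:', fidFromEntry(entry));
    });
  }).observe({type: 'first-input', buffered: true});

  let cls = 0;
  new PerformanceObserver(list => {
    list.getEntries().forEach(entry => {
      cls = accumulateCls(cls, entry);
      console.log('CLS so far:', cls);
    });
  }).observe({type: 'layout-shift', buffered: true});
}
```

Note that a final CLS value should only be reported when the page is hidden or unloaded, since layout shifts accumulate over the whole lifecycle.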
Below are representative code snippets for retrieving these metrics.
```javascript
var timing = window.performance.timing;
// Returned data format:
{
  navigationStart,  // timestamp when navigation starts (the previous document begins to unload)
  unloadEventStart, // timestamp when the unload event started
  // ... other timing properties ...
  loadEventEnd      // timestamp when the load event finished
}
```

Since window.performance.timing is deprecated, the modern API is:
```javascript
var entries = window.performance.getEntriesByType('navigation');
// entries contain relative timestamps suitable for analysis
```

Fetching FP and FCP:
```javascript
performance.getEntries().filter(item => item.name === 'first-paint')[0];            // FP
performance.getEntries().filter(item => item.name === 'first-contentful-paint')[0]; // FCP

performance.getEntriesByName('first-paint');
performance.getEntriesByName('first-contentful-paint');

var observer = new PerformanceObserver(function(list) {
  list.getEntries().forEach(item => {
    if (item.name === 'first-paint') { /* handle FP */ }
    if (item.name === 'first-contentful-paint') { /* handle FCP */ }
  });
});
observer.observe({type: 'paint'});
```

Observing LCP:
```javascript
new PerformanceObserver(list => {
  // The callback receives an entry list, not a single entry
  list.getEntries().forEach(entry => {
    console.log('LCP candidate:', entry.startTime, entry);
  });
}).observe({type: 'largest-contentful-paint', buffered: true});
```

Calculating TTI manually (simplified):
```javascript
function calculateTTI() {
  // 1. Get FCP
  var fcp = performance.getEntriesByName('first-contentful-paint')[0];
  // 2. Find a 5-second quiet window with at most 2 concurrent network requests
  // 3. Locate the last long task before that window
  // 4. TTI = end time of that long task (or the FCP time if there is none)
}
```

Collecting long tasks:
```javascript
let longTask = [];
new PerformanceObserver(entryList => {
  entryList.getEntries().forEach(entry => {
    longTask.push({
      startTime: entry.startTime,
      duration: entry.duration,
      endTime: entry.startTime + entry.duration
    });
  });
}).observe({type: 'longtask', buffered: true});
```

Implementing a request pool to detect quiet windows (simplified):
```javascript
// Request pool tracking in-flight network requests
let pool = [];
let timer = null;
let tti;

function restartQuietWindowTimer() {
  clearTimeout(timer);
  // If the pool stays below 3 concurrent requests for 5 seconds, we have a quiet window
  timer = setTimeout(() => {
    let fcp = performance.getEntriesByName('first-contentful-paint')[0];
    // TTI = end time of the last long task, or the FCP time if there were none
    tti = longTask.length ? longTask[longTask.length - 1].endTime : fcp.startTime;
  }, 5000);
}

function push(id) {
  pool.push(id);
  if (pool.length < 3 && !tti) {
    restartQuietWindowTimer();
  } else {
    clearTimeout(timer);
  }
}

function pop(id) {
  pool = pool.filter(item => item !== id);
  if (pool.length < 3 && !tti) {
    restartQuietWindowTimer();
  } else {
    clearTimeout(timer);
  }
}

let uniqueId = 0;
```

Intercepting XHR requests:
```javascript
const proxyXhr = () => {
  const send = XMLHttpRequest.prototype.send;
  XMLHttpRequest.prototype.send = function(...args) {
    const requestId = uniqueId++;
    push(requestId);
    this.addEventListener('readystatechange', () => {
      if (this.readyState === 4) {
        pop(requestId);
      }
    });
    return send.apply(this, args);
  };
};
```

Intercepting fetch requests:
```javascript
function patchFetch() {
  const originalFetch = window.fetch;
  window.fetch = (...args) => {
    const requestId = uniqueId++;
    push(requestId);
    // Chain directly instead of wrapping in a new Promise
    return originalFetch(...args).then(
      value => { pop(requestId); return value; },
      err => { pop(requestId); throw err; }
    );
  };
}
```

Observing static-resource loading via MutationObserver and cleaning up with PerformanceObserver:
```javascript
const requestCreatingNodeNames = ['img', 'script', 'iframe', 'link', 'audio', 'video', 'source'];

function observeResourceFetchingMutations() {
  const mutationObserver = new MutationObserver(mutations => {
    mutations.forEach(mutation => {
      if (mutation.type === 'childList' &&
          mutation.addedNodes.length &&
          requestCreatingNodeNames.includes(mutation.addedNodes[0].nodeName.toLowerCase())) {
        push(mutation.addedNodes[0].href || mutation.addedNodes[0].src);
      } else if (mutation.type === 'attributes' &&
                 (mutation.attributeName === 'href' || mutation.attributeName === 'src') &&
                 requestCreatingNodeNames.includes(mutation.target.tagName.toLowerCase())) {
        push(mutation.target.href || mutation.target.src);
      }
    });
  });
  mutationObserver.observe(document, {
    attributes: true,
    childList: true,
    subtree: true,
    attributeFilter: ['href', 'src']
  });

  new PerformanceObserver(entryList => {
    entryList.getEntries().forEach(entry => {
      pop(entry.name);
    });
  }).observe({type: 'resource', buffered: true});
}
```

Sentry Performance Monitoring
To enable performance monitoring with Sentry, install the tracing integration:
```shell
yarn add @sentry/tracing
# or
npm install --save @sentry/tracing
```

Then configure Sentry during initialization:
```javascript
import * as Sentry from "@sentry/react";
import { BrowserTracing } from "@sentry/tracing";

Sentry.init({
  dsn: "https://[email protected]/0",
  integrations: [new BrowserTracing()],
  tracesSampleRate: 0.2 // adjust sampling rate (0-1)
});
```

The tracesSampleRate option controls how many transactions are sent: 0 disables reporting, while 1 reports everything. Lowering the rate reduces backend load.
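For finer control than a fixed rate, Sentry.init also accepts a tracesSampler callback that decides the rate per transaction. A hedged sketch (the route names are illustrative, not from this article):

```javascript
// Decide a sampling rate per transaction based on its name.
// samplingContext.transactionContext.name is supplied by the Sentry SDK.
function tracesSampler(samplingContext) {
  const name = samplingContext.transactionContext.name;
  if (name.includes('/checkout')) return 1.0;  // always trace a critical flow
  if (name.includes('/healthcheck')) return 0; // never trace noise
  return 0.2;                                  // default rate for everything else
}

// Passed instead of tracesSampleRate:
// Sentry.init({ dsn: "...", integrations: [new BrowserTracing()], tracesSampler });
```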
Sentry splits reported data into two categories:
Page‑load (pageload) : Collected after the initial load using window.performance.getEntries and PerformanceObserver . A timeout (default 1000 ms) ensures the first‑paint metrics are captured; the timeout can be customized via the idleTimeout option in BrowserTracing .
Navigation (navigation) : For SPA route changes, Sentry patches history.pushState , history.replaceState and window.onpopstate to detect navigation events, then reads new performance entries starting from the previous index.
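The history-patching technique described above can be sketched in a few lines. This is a simplified illustration of the idea rather than Sentry's actual implementation; the history and window objects are passed in as parameters so it can be exercised outside a browser, and instrumentHistory and onNavigation are names chosen here:

```javascript
// Wrap pushState/replaceState and listen for popstate so every SPA
// navigation invokes the onNavigation callback with the new URL.
function instrumentHistory(hist, win, onNavigation) {
  ['pushState', 'replaceState'].forEach(method => {
    const original = hist[method];
    hist[method] = function (...args) {
      onNavigation(args[2]); // third argument of pushState/replaceState is the new URL
      return original.apply(this, args);
    };
  });
  win.addEventListener('popstate', () => onNavigation(win.location.href));
}

// Browser usage:
// instrumentHistory(window.history, window, url => console.log('navigated to', url));
```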
These metrics are visualized in Sentry’s Performance panel, allowing developers to identify slow paints, long tasks, input delays, and layout shifts in real user sessions.
Conclusion
The article covered the essential web performance metrics, how to retrieve them via the Performance API, and detailed steps for configuring Sentry to automatically collect and visualize these metrics. Future articles will dive deeper into using Sentry’s Performance dashboard for actionable analysis.