Timing your code: the Performance API

Website performance is crucial for delivering a seamless user experience, and slow-loading sites can lead to user frustration and increased bounce rates. To address this, modern web browsers provide developers with the Performance API. With it, we can measure and track key performance indicators such as page load time, network latency, and resource timing.

The Performance API

The Performance API is actually a collection of several smaller APIs, so this article focuses on the basic features for performance monitoring.

The API is still evolving, with new features and deprecations to come, so regularly check MDN or the W3C website for the most recent updates.

Availability

Most modern browsers support the Performance API (IE10 and IE11 included, and even IE9 has limited support). We can detect the API’s presence using:

if ('performance' in window) {
  // use Performance API
}

The API can be used in Web Workers, which provide a way to execute complex calculations in a background thread without blocking the main thread.
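
For instance, a minimal sketch of timing work inside a worker (the worker.js file name and the message shape are just placeholders):

// worker.js (runs in a background thread)
self.onmessage = () => {
  const start = performance.now()

  // Placeholder for some expensive work
  let total = 0
  for (let i = 0; i < 1e7; i++) total += i

  self.postMessage({ total, duration: performance.now() - start })
}

// main thread
const worker = new Worker('worker.js')
worker.onmessage = (event) => {
  console.log(`Worker finished in ${event.data.duration} ms`)
}
worker.postMessage('start')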

Most API methods can be used in server-side Node.js with the standard perf_hooks module:

// Node.js performance
import { performance } from 'node:perf_hooks'
// or in CommonJS: const { performance } = require('node:perf_hooks')

console.log(performance.now())

How the process works

The basics

There are a couple of things to consider regarding the performance measuring process with the Performance API:

  • The Performance API includes a performance buffer (or performance timeline) that stores performance entries. It automatically records various timing information related to the navigation, resources, and painting of the webpage (see the quick sketch right after this list). Some examples are:
    • Navigation: reports document navigation timing such as how much time it takes to load or unload a document.
    • Paint: reports key moments of document rendering (first paint, first contentful paint).
    • Resource: reports timing information for resources in a document such as requests, images, or scripts.
    • First-input: reports the first input delay (FID).
  • Adding entries: we can add entries to the performance buffer, such as marks or measures. These can be specific timing points or measurements that we want to track for performance analysis.
  • Working with results: finally we can retrieve and analyze the results by getting the entries from the buffer.
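
As a quick sketch, we can peek at what the browser has already recorded automatically once a page has loaded (nothing here is added by us):

// Log which entry types the browser has recorded on its own
const recordedTypes = new Set(performance.getEntries().map((entry) => entry.entryType))
console.log([...recordedTypes])
// e.g. ['navigation', 'resource', 'paint'], depending on the page and browser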

Additionally, we can also utilize a Performance Observer (more on that later), which provides a programmatic way to observe performance entries.

Adding and getting entries

We can add entries (see PerformanceEntry) with:

  1. performance.mark(): adds a mark (a named timestamp) to the performance buffer (see PerformanceMark).
    mark(name: string): void;
  2. performance.measure(): measures the time between two marks and adds a measure entry to the performance buffer (see PerformanceMeasure).
    measure(name: string, startMark?: string, endMark?: string): PerformanceEntry;

    If startMark is not provided, the measure starts from the time origin (when navigation started)

    If endMark is not provided, the measure ends at the current time

We can retrieve entries with:

  1. performance.getEntries(): returns a list of entries currently present in the performance timeline.
    getEntries(): PerformanceEntry[];
  2. performance.getEntriesByName(): returns a list of PerformanceEntry objects based on the given name and entry type.
    getEntriesByName(name: string, entryType?: string): PerformanceEntry[];
  3. performance.getEntriesByType(): returns a list of PerformanceEntry objects of the given entry type.
    getEntriesByType(entryType: string): PerformanceEntry[];

    See this page for a list of valid performance entry type names.

All entries returned by the methods above will be in chronological order based on the entries’ startTime property.
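
For instance, a minimal sketch of filtering entries by name (the 'init' mark name is purely illustrative):

performance.mark('init')

// Retrieve only the entries named 'init' that are marks
const initMarks = performance.getEntriesByName('init', 'mark')
initMarks.forEach((entry) => {
  console.log(`${entry.name} at ${entry.startTime} ms`)
})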

Measuring

So basically, to measure the execution time of our code with the Performance API, we:

  1. Set marks (or use automatically generated ones)
  2. Measure time between them

performance.mark('START')

// Code to be timed

performance.mark('END')
performance.measure('MEASURE', 'START', 'END')

const marks = performance.getEntriesByType('mark')
marks.forEach((entry) => {
  console.log(`${entry.name}: ${entry.startTime} ms`)
  // START: 46.19999998807907 ms
  // END: 46.30000001192093 ms
})

// Or taking a measure
const measure = performance.measure('measure', 'START', 'END')
console.log(`Duration: ${measure.duration} ms`) // Duration:  0.10000002384185791 ms

Why not Date.now()?

The Performance API includes a performance.now() method that returns a high resolution timestamp (DOMHighResTimeStamp) measured from the time origin, which in a browser is roughly the moment navigation to the document started.

Both performance.now() and Date.now() are very similar, but with a few differences:

  1. performance.now() has floating-point microsecond precision, whereas Date.now() has millisecond precision.
  2. While Date.now() depends on the system clock, performance.now() is based on the performance.timeOrigin property, which relies on a monotonic clock.

    A system clock can be influenced by adjustments, but a monotonic clock remains unaffected, offering reliable and consistent time measurements. More on this here.

const startPerf = performance.now()
const startDate = Date.now()

// Code to be timed

const endPerf = performance.now()
const endDate = Date.now()

console.log(`${endPerf - startPerf} ms`) // 319.22939997911453 ms
console.log(`${endDate - startDate} ms`) // 319 ms
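
As a rough illustration of how the two clocks relate (the values will not match exactly, since the system clock can be adjusted):

// performance.timeOrigin marks the start of the timeline that performance.now() counts from
console.log(performance.timeOrigin + performance.now()) // approximately the current Unix time in ms
console.log(Date.now()) // the current Unix time in ms from the system clock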

Real-Time monitoring: the Performance Observer

What if we want to get the information of an entry instantly when that entry is added to the buffer? That’s when the PerformanceObserver comes into play.

As stated in MDN:

The PerformanceObserver interface is used to observe performance measurement events and be notified of new performance entries as they are recorded in the browser’s performance timeline.

A Performance Observer is particularly useful because it enables us to run a callback whenever new performance entries are added to the buffer. This allows us to have more control over when and how we collect performance data, enabling us to monitor specific events or metrics that are relevant to our application’s performance analysis.

How to

First, let’s create an instance of PerformanceObserver:

const observer = new PerformanceObserver(callback)

With:

callback(list: PerformanceObserverEntryList, observer: PerformanceObserver): void;

See PerformanceObserverEntryList and PerformanceObserver

On that instance we call its observe() method, which receives an options object:

observer.observe(options: PerformanceObserverInit): void;

type PerformanceObserverInit = {
  entryTypes?: string[];
  type?: string;
  buffered?: boolean;
  durationThreshold?: number;
}
  • buffered: indicates whether entries recorded before the observer was created should be delivered to the callback. May only be used with the type option.
  • durationThreshold: the minimum duration, in milliseconds, an entry must have to trigger the observer’s callback. Defaults to 104 ms and is rounded to the nearest multiple of 8 ms; the lowest possible threshold is 16 ms. May not be used together with the entryTypes option.
  • type: a single string specifying exactly one performance entry type to observe. May not be used together with the entryTypes option.
  • entryTypes: an array of strings, each specifying one performance entry type to observe. May not be used together with the type, buffered, or durationThreshold options.
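
As a rough sketch of how these options combine (assuming callback is a function like the one described above, and using the 'event' entry type as one example that supports durationThreshold):

// Observe several entry types at once (buffered and durationThreshold are not allowed here)
new PerformanceObserver(callback).observe({ entryTypes: ['mark', 'measure'] })

// Observe a single type, including entries recorded before the observer existed
new PerformanceObserver(callback).observe({ type: 'event', buffered: true, durationThreshold: 16 })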

See this page for a list of valid performance entry type names.

If no valid types are found, observe() has no effect.

Once the observe() method is called, it’ll keep on listening for entries to be added to the timeline.

For example:

// Resolves after 5 seconds
function resolveAfterFive() {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve(`It's been 5 seconds`)
    }, 1000 * 5)
  })
}

// Create the observer
const observer = new PerformanceObserver((list, observer) => {
  list.getEntries().forEach((entry) => {
    console.log(`${entry.startTime} ms - ${entry.name}`)
    // Or store the results in a database, for example
  })
})

observer.observe({ type: 'mark' }) // Tell it to observe for marks

performance.mark('START_ACTION') // Add first mark (observer callback gets invoked)
resolveAfterFive().then((res) => {
  performance.mark('END_ACTION') // After 5 seconds, add second mark (observer callback gets invoked again)
})

What can we measure?

Paint timings

Paint (or render) refers to the process of converting the render tree into on-screen pixels. In client-side JavaScript, we can measure rendering performance using the PerformancePaintTiming API.

Most browsers automatically add marks for two key moments related to painting:

  • First Paint (FP): This is the time when anything is rendered on the screen. It’s important to note that the marking of the first paint is optional, and not all browsers report it.
  • First Contentful Paint (FCP): This marks the time when the first bit of DOM text or image content is rendered on the screen. It indicates when the user sees meaningful content.

FP and FCP entries may not be available yet if our code runs before the page has rendered. We can either wait for the window load event or use a PerformanceObserver.

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(`The time to ${entry.name} was ${entry.startTime} milliseconds.`)
    // Logs "The time to first-paint was 386.7999999523163 milliseconds."
    // Logs "The time to first-contentful-paint was 400.6999999284744 milliseconds."
  })
})

observer.observe({ type: 'paint', buffered: true })

In addition to FP and FCP, there is another important metric called Largest Contentful Paint (LCP). The LCP is provided by the LargestContentfulPaint API.

  • Largest contentful paint (LCP): the render time of the largest image or text block that is visible within the viewport. It is recorded from the moment the page starts loading.

The LCP is a crucial metric as it represents the time it takes for the most significant content to become visible to the user.
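
A minimal sketch of reading the LCP with a PerformanceObserver (the metric can change as larger elements render, so the last entry received is the current candidate):

const lcpObserver = new PerformanceObserver((list) => {
  const entries = list.getEntries()
  const lastEntry = entries[entries.length - 1] // latest LCP candidate
  console.log(`LCP: ${lastEntry.startTime} ms`)
})

lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true })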

Resources timings

Browsers automatically record network timings for resources loaded by a webpage (such as images, scripts, stylesheets, etc.). The PerformanceResourceTiming interface provides detailed timing information about the loading of these resources.

// Get all resource timing entries
let resourceEntries = performance.getEntriesByType('resource')

// Loop through the resource entries
resourceEntries.forEach(function (entry) {
  console.log('Resource URL: ' + entry.name)
  console.log('Start time: ' + entry.startTime)
  console.log('Duration: ' + entry.duration)
})

The PerformanceNavigationTiming interface provides detailed timing information related to the navigation of a webpage, such as when a page is loaded or when the user navigates to a different URL.

This information can be used to measure page load times, identify performance issues, and optimize the navigation experience for users.

Here are some of the key properties available:

  • startTime: always 0 for a navigation entry; it marks the moment the navigation started (the time origin).
  • unloadEventStart, unloadEventEnd: timings related to the unloading of the previous document (if applicable).
  • redirectStart, redirectEnd: timings related to any redirects that occurred during navigation.
  • domInteractive: the time when the HTML document is parsed and the DOM (Document Object Model) is ready for manipulation.
  • domComplete: the time when the parsing of the HTML document is complete, and all resources (such as images, scripts, etc.) have finished loading.
  • loadEventStart, loadEventEnd: timings related to the load event, which occurs when all resources (including subresources) have finished loading.

Common problems to watch out for include:

  • A long delay between unloadEventEnd and domInteractive. This could indicate a slow server response.
  • A long delay between domContentLoadedEventStart and domComplete. This could indicate that page start-up scripts are too slow.
  • A long delay between domComplete and loadEventEnd. This could indicate the page has too many assets or several are taking too long to load.

let navigationEntry = performance.getEntriesByType('navigation')[0]
let serverResponseDelay = navigationEntry.domInteractive - navigationEntry.unloadEventEnd

console.log('Server Response Delay: ' + serverResponseDelay + ' milliseconds') // Server Response Delay: 44.10000002384186 milliseconds
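
In the same spirit, rough sketches of the other two checks from the list above, using the same navigation entry:

const startupScriptsTime = navigationEntry.domComplete - navigationEntry.domContentLoadedEventStart
console.log('Start-up scripts time: ' + startupScriptsTime + ' milliseconds')

const assetsLoadTime = navigationEntry.loadEventEnd - navigationEntry.domComplete
console.log('Assets load time: ' + assetsLoadTime + ' milliseconds')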

Bonus: timerify() method

In the context of a server-side Node application, the Performance API provides us with a timerify() method. It wraps a function in a new function that measures its execution time, so we don’t have to add marks or take measures ourselves. This can be useful when we want to track the performance of certain operations in our Node.js applications.

Let’s take a look at an example to see how it works:

const { performance, PerformanceObserver } = require('perf_hooks')

function slowOperation() {
  // Simulating a slow operation
  for (let i = 0; i < 1000000000; i++) {
    // Do some computations
  }
}

const timerifiedSlowOperation = performance.timerify(slowOperation)

// Subscribe to 'function' entries before calling the wrapped function,
// so the entry it produces is delivered to the callback
const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(`Execution time: ${entry.duration} milliseconds`)
  })
})

observer.observe({ entryTypes: ['function'] })

timerifiedSlowOperation()