
Monitoring latency: Vercel Serverless Function vs Vercel Edge Function

Mar 14, 2024 | by Thibault Le Ouay Ducasse | [education]

In our previous article, we compared the latency of various cloud providers but did not include Vercel. This article compares the latency of Vercel Serverless Functions with Vercel Edge Functions.

We will test a basic Next.js application using the App Router. Below is the code for the routes:

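// e.g. app/api/ping/route.ts (the other routes reuse this handler with a different maxDuration)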
import { NextResponse } from "next/server";

export const dynamic = "force-dynamic";

export const maxDuration = 25; // distinct per route, so Vercel doesn't bundle this with the other ping routes

export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}

export async function POST(req: Request) {
  const body = await req.json();
  return NextResponse.json({ ping: body }, { status: 200 });
}

We have four routes: three using the Node.js runtime and one using the Edge runtime.

  • /api/ping uses the Node.js runtime
  • /api/ping/warm uses the Node.js runtime
  • /api/ping/cold uses the Node.js runtime
  • /api/ping/edge uses the Edge runtime

Each route has a different maxDuration; this trick prevents Vercel from bundling the routes into the same physical function. See the sketch below.
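
For example, the warm route can reuse the exact same handler and only change maxDuration (the value here is illustrative; it only needs to differ from the other routes):

// app/api/ping/warm/route.ts: same handler as /api/ping; the distinct
// maxDuration keeps Vercel from bundling both routes into one physical function.
import { NextResponse } from "next/server";

export const dynamic = "force-dynamic";
export const maxDuration = 20; // illustrative; must simply differ from /api/ping

export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}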

Here is the repository of the application.

Vercel Serverless Function - NodeJS runtime

These functions use the Node.js 18 runtime and have access to all Node.js APIs. Our functions are deployed in a single region: iad1 (Washington, D.C., USA).

Upgrading to Node.js 20 could enhance cold start performance, but it's still in beta.

We analyzed the headers of each request and observed that every request is processed in a data center near the check's location before being routed to our serverless region:

  • ams -> fra1 -> iad1
  • gru -> gru1 -> iad1
  • hkg -> hkg1 -> iad1
  • iad -> iad1 -> iad1
  • jnb -> cpt1 -> iad1
  • syd -> syd1 -> iad1

We never encountered a request routed to a different data center, and we never hit the Vercel cache.
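
That routing information comes from the X-Vercel-Id response header (discussed again in the Edge section below). A minimal sketch of the check, with a placeholder deployment URL:

// Sketch: inspect the X-Vercel-Id response header to see how a request was routed.
// The URL below is a placeholder for our test deployment.
const res = await fetch("https://example-app.vercel.app/api/ping");

// For a check running in Amsterdam we would expect a value along the lines of
// "fra1::iad1::...": received at the fra1 edge, executed by the function in iad1.
console.log(res.headers.get("x-vercel-id"));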

Warm - /api/ping/warm

100% uptime · 0 fails · 12,090 pings
p50 246ms · p75 305ms · p90 442ms · p95 563ms · p99 855ms

Vercel warm p50 latency between Mar 10 and Mar 13, 2024, aggregated in 1-hour windows.

Region | P75 | P95 | P99
πŸ‡³πŸ‡± ams | 191ms | 782ms | 869ms
πŸ‡ΊπŸ‡Έ iad | 77ms | 358ms | 767ms
πŸ‡­πŸ‡° hkg | 308ms | 470ms | 959ms
πŸ‡ΏπŸ‡¦ jnb | 390ms | 522ms | 1003ms
πŸ‡¦πŸ‡Ί syd | 260ms | 347ms | 886ms
πŸ‡§πŸ‡· gru | 275ms | 369ms | 705ms
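
For reference, percentiles like the ones above can be computed from the raw samples of each window with a simple nearest-rank formula. A minimal sketch (the exact aggregation openstatus performs may differ):

// Nearest-rank percentile over a window of latency samples (in ms).
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Hypothetical 1-hour window of warm pings:
const latencies = [240, 251, 263, 305, 441, 560, 850];
console.log(percentile(latencies, 50)); // 305
console.log(percentile(latencies, 95)); // 850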

We ping this function every 5 minutes to keep it warm.
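
In production these checks come from openstatus monitors in the six regions above, but conceptually the warm pinger boils down to something like this (the deployment URL is a placeholder):

// Ping the warm route every 5 minutes and log the observed latency.
const TARGET = "https://example-app.vercel.app/api/ping/warm"; // placeholder

async function ping(): Promise<void> {
  const start = Date.now();
  const res = await fetch(TARGET);
  console.log(`${res.status} in ${Date.now() - start}ms`);
}

setInterval(ping, 5 * 60 * 1000); // a 5-minute interval keeps the function warm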

Cold - /api/ping/cold

100% uptime · 0 fails · 2,010 pings
p50 859ms · p75 933ms · p90 1,004ms · p95 1,046ms · p99 1,156ms

Vercel cold p50 latency between Mar 10 and Mar 13, 2024, aggregated in 1-hour windows.

Region | P75 | P95 | P99
πŸ‡³πŸ‡± ams | 870ms | 958ms | 1015ms
πŸ‡ΊπŸ‡Έ iad | 761ms | 822ms | 894ms
πŸ‡­πŸ‡° hkg | 934ms | 1024ms | 1073ms
πŸ‡ΏπŸ‡¦ jnb | 1016ms | 1128ms | 1211ms
πŸ‡¦πŸ‡Ί syd | 892ms | 996ms | 1044ms
πŸ‡§πŸ‡· gru | 894ms | 994ms | 1173ms

We ping this function every 30 minutes to ensure it has been scaled down before each check.

Cold Roulette - /api/ping

100% uptime · 0 fails · 6,036 pings
p50 305ms · p75 791ms · p90 914ms · p95 972ms · p99 1,086ms

Vercel roulette p50 latency between Mar 10 and Mar 13, 2024, aggregated in 1-hour windows.

Region | P75 | P95 | P99
πŸ‡³πŸ‡± ams | 338ms | 872ms | 986ms
πŸ‡ΊπŸ‡Έ iad | 216ms | 777ms | 831ms
πŸ‡­πŸ‡° hkg | 504ms | 948ms | 1063ms
πŸ‡ΏπŸ‡¦ jnb | 803ms | 1027ms | 1139ms
πŸ‡¦πŸ‡Ί syd | 516ms | 914ms | 1027ms
πŸ‡§πŸ‡· gru | 673ms | 916ms | 1040ms

We ping this function every 10 minutes. This interval is an inflection point: we never know whether the function will be warm or cold.

Vercel Edge Function

Vercel Edge Functions use the Edge Runtime. They are deployed globally and executed in a data center close to the user.

They have limitations compared to the Node.js runtime, but significantly faster cold starts.
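
Opting a route into the Edge Runtime is a one-line change in the App Router. A minimal sketch of what the edge route can look like:

// app/api/ping/edge/route.ts: same handler, opted into the Edge Runtime.
import { NextResponse } from "next/server";

export const runtime = "edge";
export const dynamic = "force-dynamic";

export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}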

We analyzed the request headers and found that the X-Vercel-Id header shows each request being processed in a data center near the user:

  • ams -> fra1
  • gru -> gru1
  • hkg -> hkg1
  • iad -> iad1
  • jnb -> cpt1
  • syd -> syd1

Edge - /api/ping/edge

100% uptime · 0 fails · 6,042 pings
p50 106ms · p75 124ms · p90 152ms · p95 178ms · p99 328ms

Vercel edge p50 latency between Mar 10 and Mar 13, 2024, aggregated in 1-hour windows.

Region | P75 | P95 | P99
πŸ‡³πŸ‡± ams | 152ms | 203ms | 373ms
πŸ‡ΊπŸ‡Έ iad | 133ms | 168ms | 259ms
πŸ‡­πŸ‡° hkg | 125ms | 162ms | 272ms
πŸ‡ΏπŸ‡¦ jnb | 148ms | 210ms | 349ms
πŸ‡¦πŸ‡Ί syd | 110ms | 146ms | 347ms
πŸ‡§πŸ‡· gru | 144ms | 240ms | 348ms

We ping this function every 10 minutes.

Conclusion

Runtime | p50 | p95 | p99
Serverless Cold Start | 859ms | 1,046ms | 1,156ms
Serverless Warm | 246ms | 563ms | 855ms
Edge | 106ms | 178ms | 328ms

Globally, Edge Functions are roughly 8 times faster than Serverless Functions during cold starts (106ms vs 859ms at p50), but only about 2 times faster when the function is warm (106ms vs 246ms).

Edge Functions also have similar latency regardless of the user's location. If you value your users' experience and serve a worldwide audience, you should consider Edge Functions.

Create an account on OpenStatus to monitor your API and get notified when your latency increases.
