Monitoring latency: Vercel Serverless Function vs Vercel Edge Function
Mar 14, 2024 | by Thibault Le Ouay Ducasse | [education]

In our previous article, we compared the latency of various cloud providers but did not include Vercel. This article will compare the latency of Vercel Serverless Function with Vercel Edge Function.
We will test a basic Next.js application with the app router. Below is the code for the routes:
import { NextResponse } from "next/server";

export const dynamic = "force-dynamic";
export const maxDuration = 25; // trick: a distinct maxDuration keeps this route out of the same physical function as the other ping routes

export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}

export async function POST(req: Request) {
  const body = await req.json();
  return NextResponse.json({ ping: body }, { status: 200 });
}
We have four routes: three use the Node.js runtime and one uses the Edge runtime.
- /api/ping uses the Node.js runtime
- /api/ping/warm uses the Node.js runtime
- /api/ping/cold uses the Node.js runtime
- /api/ping/edge uses the Edge runtime
Each route has a different maxDuration; this trick prevents Vercel from bundling the routes into the same physical function, as sketched below.
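For example, the warm route can reuse the same handler with a different value. This is a minimal sketch: the value 20 is an assumption, only the 25 above comes from the actual code.

```ts
// app/api/ping/warm/route.ts: same handler, different maxDuration.
import { NextResponse } from "next/server";

export const dynamic = "force-dynamic";
export const maxDuration = 20; // assumed value; it only needs to differ from the other ping routes

export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}
```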
Here is the repository of the application.
Vercel Serverless Function - NodeJS runtime
Vercel Serverless Functions use the Node.js 18 runtime, with access to all Node.js APIs. Our functions are deployed in a single region: iad1 (Washington, D.C., USA).
Upgrading to Node.js 20 could enhance cold start performance, but it's still in beta.
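If you want to try it, Vercel reads the Node.js version from the engines field of package.json (it can also be set in the project settings):

```json
{
  "engines": {
    "node": "20.x"
  }
}
```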
We analyzed the headers of each request and observed that all requests are processed in a data center near our probe location before being routed to the serverless region:
- ams -> fra1 -> iad1
- gru -> gru1 -> iad1
- hkg -> hkg1 -> iad1
- iad -> iad1 -> iad1
- jnb -> cpt1 -> iad1
- syd -> syd1 -> iad1
We never encountered a request routed to a different data center, and we never hit the Vercel cache.
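As a rough illustration (not our actual checker), a probe along these lines surfaces both the latency and the routing; the URL is a placeholder:

```ts
// Time a single request and log the x-vercel-id header, which lists the
// data centers that handled it (e.g. "fra1::iad1::<request-id>").
async function probe(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url, { cache: "no-store" });
  await res.arrayBuffer(); // drain the body so we time the full response
  const elapsed = performance.now() - start;
  console.log(`${res.status} in ${Math.round(elapsed)}ms via ${res.headers.get("x-vercel-id")}`);
  return elapsed;
}

await probe("https://your-app.vercel.app/api/ping");
```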
Warm - /api/ping/warm
100% UPTIME · 0 FAILS · 12,090 PINGS
P50 246ms · P75 305ms · P90 442ms · P95 563ms · P99 855ms
| Region | P75 | P95 | P99 |
|---|---|---|---|
| 🇳🇱 ams | 191ms | 782ms | 869ms |
| 🇺🇸 iad | 77ms | 358ms | 767ms |
| 🇭🇰 hkg | 308ms | 470ms | 959ms |
| 🇿🇦 jnb | 390ms | 522ms | 1003ms |
| 🇦🇺 syd | 260ms | 347ms | 886ms |
| 🇧🇷 gru | 275ms | 369ms | 705ms |
We ping this function every 5 minutes to keep it warm.
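For reference, the percentiles in these tables can be computed from the raw samples with a simple nearest-rank method. This is a minimal sketch, not necessarily the exact method our pipeline uses:

```ts
// Nearest-rank percentile over a batch of latency samples, in milliseconds.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [120, 95, 310, 240, 180, 150, 900, 210, 130, 175]; // illustrative values
console.log(percentile(latencies, 50)); // 175
console.log(percentile(latencies, 99)); // 900
```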
Cold - /api/ping/cold
100% UPTIME · 0 FAILS · 2,010 PINGS
P50 859ms · P75 933ms · P90 1,004ms · P95 1,046ms · P99 1,156ms
| Region | P75 | P95 | P99 |
|---|---|---|---|
| 🇳🇱 ams | 870ms | 958ms | 1015ms |
| 🇺🇸 iad | 761ms | 822ms | 894ms |
| 🇭🇰 hkg | 934ms | 1024ms | 1073ms |
| 🇿🇦 jnb | 1016ms | 1128ms | 1211ms |
| 🇦🇺 syd | 892ms | 996ms | 1044ms |
| 🇧🇷 gru | 894ms | 994ms | 1173ms |
We ping this function every 30 minutes, leaving enough time for it to be scaled down between pings.
Cold Roulette - /api/ping
100% UPTIME · 0 FAILS · 6,036 PINGS
P50 305ms · P75 791ms · P90 914ms · P95 972ms · P99 1,086ms
| Region | P75 | P95 | P99 |
|---|---|---|---|
| 🇳🇱 ams | 338ms | 872ms | 986ms |
| 🇺🇸 iad | 216ms | 777ms | 831ms |
| 🇭🇰 hkg | 504ms | 948ms | 1063ms |
| 🇿🇦 jnb | 803ms | 1027ms | 1139ms |
| 🇦🇺 syd | 516ms | 914ms | 1027ms |
| 🇧🇷 gru | 673ms | 916ms | 1040ms |
We ping this function every 10 minutes. It's an inflection point: we never know whether the function will be warm or cold.
Vercel Edge Function
Vercel Edge Functions use the Edge Runtime. They are deployed globally and executed in a data center close to the user.
The Edge Runtime has limitations compared to the Node.js runtime, but it offers much faster cold starts.
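For completeness, a minimal sketch of the edge route, assuming it only differs from the Node.js routes by the runtime segment config:

```ts
// app/api/ping/edge/route.ts: same handler, opted into the Edge Runtime.
import { NextResponse } from "next/server";

export const runtime = "edge"; // run in the Edge Runtime instead of Node.js
export const dynamic = "force-dynamic";

export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}
```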
We analyzed the request headers and found that the X-Vercel-Id header shows each request being processed in a data center near the user:
- ams -> fra1
- gru -> gru1
- hkg -> hkg1
- iad -> iad1
- jnb -> cpt1
- syd -> syd1
Edge - /api/ping/edge
100% UPTIME · 0 FAILS · 6,042 PINGS
P50 106ms · P75 124ms · P90 152ms · P95 178ms · P99 328ms
| Region | P75 | P95 | P99 |
|---|---|---|---|
| 🇳🇱 ams | 152ms | 203ms | 373ms |
| 🇺🇸 iad | 133ms | 168ms | 259ms |
| 🇭🇰 hkg | 125ms | 162ms | 272ms |
| 🇿🇦 jnb | 148ms | 210ms | 349ms |
| 🇦🇺 syd | 110ms | 146ms | 347ms |
| 🇧🇷 gru | 144ms | 240ms | 348ms |
We ping this function every 10 minutes.
Conclusion
| Runtime | P50 | P95 | P99 |
|---|---|---|---|
| Serverless Cold Start | 859ms | 1,046ms | 1,156ms |
| Serverless Warm | 246ms | 563ms | 855ms |
| Edge | 106ms | 178ms | 328ms |
Globally, Edge Functions are roughly 8 times faster than Serverless Functions at P50 during cold starts (106ms vs 859ms), but only about 2.3 times faster when the function is warm (106ms vs 246ms).
Edge functions have similar latency regardless of the user's location. If you value your users and have a worldwide audience, you should consider Edge Functions.
Create an account on OpenStatus to monitor your API and get notified when your latency increases.