Monitoring latency: Vercel Serverless Function vs Vercel Edge Function
Thibault Le Ouay Ducasse

Mar 14, 2024 · 5 min read

In our previous article, we compared the latency of various cloud providers but did not include Vercel. This article compares the latency of Vercel Serverless Functions with Vercel Edge Functions.

We will test a basic Next.js application with the app router. Below is the code for the routes:

import { NextResponse } from "next/server";
 
export const dynamic = "force-dynamic";
 
export const maxDuration = 25; // different value per route, so Vercel doesn't bundle them into the same function
 
export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}
 
export async function POST(req: Request) {
  const body = await req.json();
  return NextResponse.json({ ping: body }, { status: 200 });
}

We have four routes: three use the Node.js runtime and one uses the Edge runtime.

  • /api/ping uses the Node.js runtime
  • /api/ping/warm uses the Node.js runtime
  • /api/ping/cold uses the Node.js runtime
  • /api/ping/edge uses the Edge runtime

Each route has a different maxDuration; this trick prevents Vercel from bundling them into the same physical function.
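
For reference, here is a minimal sketch of what the Edge variant could look like (the handler body is assumed to match the one above); the runtime export is what switches the route to the Edge runtime:

import { NextResponse } from "next/server";
 
export const runtime = "edge"; // opt this route into the Edge runtime
 
export async function GET() {
  return NextResponse.json({ ping: "pong" }, { status: 200 });
}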

Here is the repository of the application.

Vercel Serverless Function - NodeJS runtime

The serverless functions use the Node.js 18 runtime, and we have access to all the Node.js APIs. Our functions are deployed in a single region: iad1 (Washington, D.C., USA).

Upgrading to Node.js 20 could improve cold start performance, but it is still in beta.
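
As a sketch of how you could opt in: Vercel reads the Node.js version from the engines field of package.json, so pinning it there (assuming Node 20 is available for your project) would look like this:

{
  "engines": {
    "node": "20.x"
  }
}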

We analyzed the headers of each request and observed that every request is processed in a data center near the checker's location before being routed to our serverless region (a sketch of how to inspect this follows the list).

  • ams -> fra1 -> iad1
  • gru -> gru1 -> iad1
  • hkg -> hkg1 -> iad1
  • iad -> iad1 -> iad1
  • jnb -> cpt1 -> iad1
  • syd -> syd1 -> iad1

We never encountered a request routed to a different data center, and we never hit the Vercel cache.
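
The routing path can be read from the x-vercel-id response header. A minimal sketch, assuming a hypothetical deployment URL:

// Fetch the endpoint and log the x-vercel-id header, which encodes
// the routing path (e.g. "fra1::iad1::<request-id>" for a checker in Europe).
const res = await fetch("https://your-app.vercel.app/api/ping"); // hypothetical URL
console.log(res.headers.get("x-vercel-id"));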

Warm - /api/ping/warm

uptime: 100%
fails: 0
total pings: 12,090
p50: 246ms
p75: 305ms
p90: 442ms
p95: 563ms
p99: 855ms

We ping this function every 5 minutes to keep it warm.
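
For illustration, here is a minimal sketch of the kind of timed probe behind these numbers, assuming a hypothetical deployment URL (OpenStatus runs comparable checks from multiple regions):

// Measure the round-trip latency of a single ping, in milliseconds.
const start = performance.now();
const res = await fetch("https://your-app.vercel.app/api/ping/warm"); // hypothetical URL
const latency = Math.round(performance.now() - start);
console.log(res.status, `${latency}ms`);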

Cold - /api/ping/cold

uptime: 100%
fails: 0
total pings: 2,010
p50: 859ms
p75: 933ms
p90: 1,004ms
p95: 1,046ms
p99: 1,156ms

We ping this function every 30 minutes to ensure it has scaled down between pings.

Cold Roulette - /api/ping

uptime: 100%
fails: 0
total pings: 6,036
p50: 305ms
p75: 791ms
p90: 914ms
p95: 972ms
p99: 1,086ms

We ping this function every 10 minutes. It is an inflection point where we never know whether the function will be warm or cold.
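
As an aside, percentiles like the ones reported in these blocks can be computed from the raw latency samples with a nearest-rank method; a minimal sketch, with made-up values:

// Nearest-rank percentile over a list of latencies (ms).
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}
 
const latencies = [212, 246, 305, 442, 563, 855, 901]; // made-up samples
console.log(percentile(latencies, 50), percentile(latencies, 95)); // 442 901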

Vercel Edge Function

Vercel Edge Functions use the Edge Runtime. They are deployed globally and executed in a data center close to the user.

They have limitations compared to the Node.js runtime, but they have faster cold starts.

We analyzed the request headers and found that the X-Vercel-Id header shows each request being processed in a data center near the user:

  • ams -> fra1
  • gru -> gru1
  • hkg -> hkg1
  • iad -> iad1
  • jnb -> cpt1
  • syd -> syd1

Edge - /api/ping/edge

uptime: 100%
fails: 0
total pings: 6,042
p50: 106ms
p75: 124ms
p90: 152ms
p95: 178ms
p99: 328ms

We ping this function every 10 minutes.

Conclusion

Runtime                  p50      p95        p99
Serverless Cold Start    859ms    1,046ms    1,156ms
Serverless Warm          246ms    563ms      855ms
Edge                     106ms    178ms      328ms

Globally, Edge Functions are roughly 8 times faster than Serverless Functions during cold starts (p50: 106ms vs 859ms), but only about 2 times faster when the function is warm (106ms vs 246ms).
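
The arithmetic behind those ratios, using the p50 values from the table above:

// Median speedup ratios, values in ms from the conclusion table.
const coldSpeedup = 859 / 106; // ≈ 8.1x during cold starts
const warmSpeedup = 246 / 106; // ≈ 2.3x when warm
console.log(coldSpeedup.toFixed(1), warmSpeedup.toFixed(1)); // "8.1" "2.3"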

Edge Functions have similar latency regardless of the user's location. If you value your users' experience and have a worldwide audience, you should consider Edge Functions.

Create an account on OpenStatus to monitor your API and get notified when your latency increases.