
Hono on Vercel: A Performance Deep Dive into Fluid Compute

Aug 27, 2025 | by Thibault Le Ouay Ducasse | [engineering]


This article details how to deploy a new Hono server on Vercel and monitor it using OpenStatus, with a focus on observing the impact of Vercel's Fluid Compute. We'll compare the performance of a "warm" server, which is regularly pinged, against a "cold" server that remains idle.

Our Setup


First, we set up our Hono server using Vercel's zero-configuration deployment:

  1. We created a new Hono project: pnpm create hono@latest
  2. We navigated into the new directory: cd new-directory
  3. We followed Vercel's zero-configuration deployment instructions for Hono backends.
  4. We deployed the application using vc deploy.
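
For reference, the entry point looks roughly like this (a minimal sketch using the hono/vercel adapter; the exact file name and routes depend on the template version you generate):

import { Hono } from 'hono'
import { handle } from 'hono/vercel'

const app = new Hono()

// A single route is enough for latency probing.
app.get('/', (c) => c.text('Hello from Hono on Vercel!'))

// The hono/vercel adapter wraps the app as a Vercel Function handler.
export default handle(app)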

We repeated this process to create two identical servers. One server is designated as "warm," receiving a request every minute to prevent it from going idle. The other is "cold," and we only send a request to it once per hour to observe the impact of cold starts. Both servers were hosted in the IAD1 region.

Next, we configured monitoring with OpenStatus by creating a new monitor with the following YAML configuration.

This is the configuration for the "cold" server:

# yaml-language-server: $schema=https://www.openstatus.dev/schema.json

"hono-cold":
  active: true
  assertions:
  - compare: eq
    kind: statusCode
    target: 200
  description: Monitoring Hono App on Vercel
  frequency: 1h
  kind: http
  name: Hono Vercel Cold
  public: true
  regions:
  - arn
  - ams
  - atl
  - bog
  - bom
  - bos
  - cdg
  - den
  - dfw
  - ewr
  - eze
  - fra
  - gdl
  - gig
  - gru
  - hkg
  - iad
  - jnb
  - lax
  - lhr
  - mad
  - mia
  - nrt
  - ord
  - otp
  - phx
  - scl
  - sea
  - sin
  - sjc
  - syd
  - yul
  - yyz
  request:
    headers:
      User-Agent: OpenStatus
    method: GET
    url: https://hono-cold.vercel.app/
  retry: 3
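
The configuration for the "warm" server is identical apart from its name, its check frequency, and its URL (the hostname below is illustrative):

"hono-warm":
  # same assertions, description, kind, regions, and retry as above
  active: true
  frequency: 1m
  name: Hono Vercel Warm
  public: true
  request:
    headers:
      User-Agent: OpenStatus
    method: GET
    url: https://hono-warm.vercel.app/ # illustrative hostname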

We deployed the monitors using the OpenStatus CLI:

openstatus monitors apply

Our Metrics

These are our metrics for both cold and warm deployments from the last 24 hours.

Warm

  • Uptime: 100%
  • Failing: 0
  • Pings: 47,520
  • P50: 171ms
  • P75: 275ms
  • P90: 343ms
  • P95: 417ms
  • P99: 524ms

Hono warm p50 latency between 18 Aug and 25 Aug 2025, aggregated in a 1h window.
Region   P75     P95     P99
🇳🇱 ams   168ms   192ms   276ms
🇸🇪 arn   182ms   207ms   318ms
🇺🇸 atl   129ms   198ms   355ms
🇨🇴 bog   378ms   461ms   576ms
🇮🇳 bom   278ms   306ms   399ms
🇺🇸 bos   100ms   123ms   203ms
🇫🇷 cdg   137ms   164ms   281ms
🇺🇸 den   218ms   264ms   331ms
🇺🇸 dfw   164ms   202ms   289ms
🇺🇸 ewr   122ms   162ms   221ms
🇦🇷 eze   277ms   455ms   496ms
🇩🇪 fra   148ms   172ms   264ms
🇲🇽 gdl   492ms   618ms   683ms
🇧🇷 gig   210ms   359ms   428ms
🇧🇷 gru   183ms   249ms   348ms
🇭🇰 hkg   315ms   398ms   586ms
🇺🇸 iad   127ms   201ms   399ms
🇿🇦 jnb   358ms   505ms   543ms
🇺🇸 lax   145ms   169ms   270ms
🇬🇧 lhr   133ms   159ms   279ms
🇪🇸 mad   319ms   358ms   451ms
🇺🇸 mia   153ms   185ms   251ms
🇯🇵 nrt   243ms   431ms   484ms
🇺🇸 ord   132ms   176ms   247ms
🇷🇴 otp   264ms   294ms   374ms
🇺🇸 phx   184ms   208ms   311ms
🇨🇱 scl   345ms   501ms   549ms
🇺🇸 sjc   124ms   146ms   237ms
🇺🇸 sea   153ms   179ms   278ms
🇸🇬 sin   403ms   469ms   571ms
🇦🇺 syd   285ms   416ms   463ms
🇨🇦 yul   101ms   123ms   186ms
🇨🇦 yyz   128ms   152ms   243ms

Cold

  • Uptime: 100%
  • Failing: 0
  • Pings: 792
  • P50: 212ms
  • P75: 333ms
  • P90: 439ms
  • P95: 529ms
  • P99: 639ms

Hono cold p50 latency between 18 Aug and 25 Aug 2025, aggregated in a 1h window.
Region   P75     P95     P99
🇳🇱 ams   212ms   249ms   310ms
🇸🇪 arn   246ms   396ms   550ms
🇺🇸 atl   178ms   236ms   376ms
🇨🇴 bog   488ms   571ms   732ms
🇮🇳 bom   328ms   589ms   650ms
🇺🇸 bos   131ms   186ms   401ms
🇫🇷 cdg   277ms   436ms   578ms
🇺🇸 den   270ms   458ms   552ms
🇺🇸 dfw   227ms   401ms   512ms
🇺🇸 ewr   181ms   246ms   370ms
🇦🇷 eze   415ms   535ms   649ms
🇩🇪 fra   163ms   338ms   408ms
🇲🇽 gdl   619ms   739ms   852ms
🇧🇷 gig   376ms   468ms   548ms
🇧🇷 gru   220ms   376ms   483ms
🇭🇰 hkg   357ms   535ms   679ms
🇺🇸 iad   183ms   374ms   507ms
🇿🇦 jnb   391ms   483ms   636ms
🇺🇸 lax   165ms   211ms   293ms
🇬🇧 lhr   170ms   307ms   431ms
🇪🇸 mad   353ms   440ms   683ms
🇺🇸 mia   187ms   238ms   337ms
🇯🇵 nrt   320ms   498ms   692ms
🇺🇸 ord   181ms   212ms   331ms
🇷🇴 otp   288ms   325ms   368ms
🇺🇸 phx   204ms   239ms   401ms
🇨🇱 scl   392ms   541ms   595ms
🇺🇸 sjc   139ms   181ms   316ms
🇺🇸 sea   202ms   232ms   364ms
🇸🇬 sin   574ms   794ms   958ms
🇦🇺 syd   318ms   365ms   472ms
🇨🇦 yul   119ms   140ms   182ms
🇨🇦 yyz   150ms   230ms   431ms

Analysis and Discussion

When we compare our results, the warm server is significantly faster, as expected: its p99 latency is 524ms, versus 639ms for the cold server. This 115ms difference is the overhead of a cold start. Compared to a similar test we ran with the previous Node.js runtime, however, the performance is notably better.
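
You can spot-check the gap yourself with curl's built-in timing (the warm hostname is illustrative, as above):

# Compare end-to-end request time against both deployments
curl -s -o /dev/null -w 'cold: %{time_total}s\n' https://hono-cold.vercel.app/
curl -s -o /dev/null -w 'warm: %{time_total}s\n' https://hono-warm.vercel.app/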

Read our blog post: Monitoring latency: Vercel Serverless Function vs Vercel Edge Function

The Good

  • Excellent Developer Experience (DX): Deploying a Hono server on Vercel is incredibly simple, requiring just a couple of commands. The zero-configuration setup is a major plus for developers.
  • Performance Improvements: Fluid Compute provides a tangible improvement over the previous Vercel Node.js runtime. It reduces the impact of cold starts and makes the serverless experience more efficient.

The Bad

  • Deprecation of Edge Functions: Vercel has deprecated its dedicated Edge Functions in favor of a unified Vercel Functions infrastructure that uses Fluid Compute. While this unifies the platform, it might force a transition for existing projects.
  • Cost Considerations: While Fluid Compute aims for efficiency, the "warm" server is active for roughly 8 minutes and 20 seconds per day, which translates to roughly 250 minutes of active compute per month (see the back-of-envelope calculation after this list). Depending on CPU time and memory usage, this could exceed the free tier's limits and require a paid plan.
  • Complexity: The new pricing model, which combines active CPU time, provisioned memory, and invocations, can be more complex to track and predict than the simpler invocation-based pricing of the past.
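
A back-of-envelope estimate for the warm server, assuming each invocation stays active for roughly 350 ms (about the measured p90):

1 request/min × 60 min × 24 h   = 1,440 invocations per day
1,440 invocations × ~0.35 s     ≈ 500 s ≈ 8 min 20 s of active compute per day
8 min 20 s × 30 days            ≈ 250 min of active compute per month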

Conclusion

In conclusion, deploying a Hono server on Vercel offers an excellent developer experience, and Fluid Compute measurably reduces the cold-start penalty. However, the deprecation of Edge Functions and the complexity of the new pricing model are potential drawbacks.

Create an account on OpenStatus to monitor your API and get notified when your latency increases.