Hono on Vercel: A Performance Deep Dive into Fluid Compute
Aug 27, 2025 | by Thibault Le Ouay Ducasse | [engineering]

This article details how to deploy a new Hono server on Vercel and monitor it using OpenStatus, with a focus on observing the impact of Vercel's Fluid Compute. We'll compare the performance of a "warm" server, which is regularly pinged, against a "cold" server that remains idle.
Our Setup
First, we set up our Hono server using Vercel's zero-configuration deployment:
- We created a new Hono project: `pnpm create hono@latest`
- We navigated into the new directory: `cd new-directory`
- We followed Vercel's zero-configuration deployment instructions for Hono backends.
- We deployed the application using `vc deploy`
We repeated this process to create two identical servers. One server is designated as "warm," receiving a request every minute to prevent it from going idle. The other is "cold," and we only send a request to it once per hour to observe the impact of cold starts. Both servers were hosted in the IAD1 region.
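Given the 33 probe regions in our monitor configuration, the monitoring schedule implies the following daily request volumes (they match the ping counts in the metrics section):

```python
# Daily request volume implied by the monitoring schedule.
REGIONS = 33  # number of OpenStatus probe regions in the monitor config

warm_pings_per_day = REGIONS * 24 * 60  # one request per region per minute
cold_pings_per_day = REGIONS * 24       # one request per region per hour

print(warm_pings_per_day)  # 47520
print(cold_pings_per_day)  # 792
```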
Next, we configured monitoring with OpenStatus, creating a new monitor with the following YAML configuration.
This is the configuration for the "cold" server:
```yaml
# yaml-language-server: $schema=https://www.openstatus.dev/schema.json
"hono-cold":
  active: true
  assertions:
    - compare: eq
      kind: statusCode
      target: 200
  description: Monitoring Hono App on Vercel
  frequency: 1h
  kind: http
  name: Hono Vercel Cold
  public: true
  regions:
    - arn
    - ams
    - atl
    - bog
    - bom
    - bos
    - cdg
    - den
    - dfw
    - ewr
    - eze
    - fra
    - gdl
    - gig
    - gru
    - hkg
    - iad
    - jnb
    - lax
    - lhr
    - mad
    - mia
    - nrt
    - ord
    - otp
    - phx
    - scl
    - sea
    - sin
    - sjc
    - syd
    - yul
    - yyz
  request:
    headers:
      User-Agent: OpenStatus
    method: GET
    url: https://hono-cold.vercel.app/
  retry: 3
```
We deployed it using the OpenStatus CLI:

```shell
openstatus monitors apply
```
Our Metrics
These are our metrics for both cold and warm deployments from the last 24 hours.
Warm
- Uptime: 100%
- Failing: 0
- Pings: 47,520
- p50: 171ms
- p75: 275ms
- p90: 343ms
- p95: 417ms
- p99: 524ms
| Region | P75 | P95 | P99 |
|---|---|---|---|
| 🇳🇱 ams | 168ms | 192ms | 276ms |
| 🇸🇪 arn | 182ms | 207ms | 318ms |
| 🇺🇸 atl | 129ms | 198ms | 355ms |
| 🇨🇴 bog | 378ms | 461ms | 576ms |
| 🇮🇳 bom | 278ms | 306ms | 399ms |
| 🇺🇸 bos | 100ms | 123ms | 203ms |
| 🇫🇷 cdg | 137ms | 164ms | 281ms |
| 🇺🇸 den | 218ms | 264ms | 331ms |
| 🇺🇸 dfw | 164ms | 202ms | 289ms |
| 🇺🇸 ewr | 122ms | 162ms | 221ms |
| 🇦🇷 eze | 277ms | 455ms | 496ms |
| 🇩🇪 fra | 148ms | 172ms | 264ms |
| 🇲🇽 gdl | 492ms | 618ms | 683ms |
| 🇧🇷 gig | 210ms | 359ms | 428ms |
| 🇧🇷 gru | 183ms | 249ms | 348ms |
| 🇭🇰 hkg | 315ms | 398ms | 586ms |
| 🇺🇸 iad | 127ms | 201ms | 399ms |
| 🇿🇦 jnb | 358ms | 505ms | 543ms |
| 🇺🇸 lax | 145ms | 169ms | 270ms |
| 🇬🇧 lhr | 133ms | 159ms | 279ms |
| 🇪🇸 mad | 319ms | 358ms | 451ms |
| 🇺🇸 mia | 153ms | 185ms | 251ms |
| 🇯🇵 nrt | 243ms | 431ms | 484ms |
| 🇺🇸 ord | 132ms | 176ms | 247ms |
| 🇷🇴 otp | 264ms | 294ms | 374ms |
| 🇺🇸 phx | 184ms | 208ms | 311ms |
| 🇨🇱 scl | 345ms | 501ms | 549ms |
| 🇺🇸 sjc | 124ms | 146ms | 237ms |
| 🇺🇸 sea | 153ms | 179ms | 278ms |
| 🇸🇬 sin | 403ms | 469ms | 571ms |
| 🇦🇺 syd | 285ms | 416ms | 463ms |
| 🇨🇦 yul | 101ms | 123ms | 186ms |
| 🇨🇦 yyz | 128ms | 152ms | 243ms |
Cold
- Uptime: 100%
- Failing: 0
- Pings: 792
- p50: 212ms
- p75: 333ms
- p90: 439ms
- p95: 529ms
- p99: 639ms
| Region | P75 | P95 | P99 |
|---|---|---|---|
| 🇳🇱 ams | 212ms | 249ms | 310ms |
| 🇸🇪 arn | 246ms | 396ms | 550ms |
| 🇺🇸 atl | 178ms | 236ms | 376ms |
| 🇨🇴 bog | 488ms | 571ms | 732ms |
| 🇮🇳 bom | 328ms | 589ms | 650ms |
| 🇺🇸 bos | 131ms | 186ms | 401ms |
| 🇫🇷 cdg | 277ms | 436ms | 578ms |
| 🇺🇸 den | 270ms | 458ms | 552ms |
| 🇺🇸 dfw | 227ms | 401ms | 512ms |
| 🇺🇸 ewr | 181ms | 246ms | 370ms |
| 🇦🇷 eze | 415ms | 535ms | 649ms |
| 🇩🇪 fra | 163ms | 338ms | 408ms |
| 🇲🇽 gdl | 619ms | 739ms | 852ms |
| 🇧🇷 gig | 376ms | 468ms | 548ms |
| 🇧🇷 gru | 220ms | 376ms | 483ms |
| 🇭🇰 hkg | 357ms | 535ms | 679ms |
| 🇺🇸 iad | 183ms | 374ms | 507ms |
| 🇿🇦 jnb | 391ms | 483ms | 636ms |
| 🇺🇸 lax | 165ms | 211ms | 293ms |
| 🇬🇧 lhr | 170ms | 307ms | 431ms |
| 🇪🇸 mad | 353ms | 440ms | 683ms |
| 🇺🇸 mia | 187ms | 238ms | 337ms |
| 🇯🇵 nrt | 320ms | 498ms | 692ms |
| 🇺🇸 ord | 181ms | 212ms | 331ms |
| 🇷🇴 otp | 288ms | 325ms | 368ms |
| 🇺🇸 phx | 204ms | 239ms | 401ms |
| 🇨🇱 scl | 392ms | 541ms | 595ms |
| 🇺🇸 sjc | 139ms | 181ms | 316ms |
| 🇺🇸 sea | 202ms | 232ms | 364ms |
| 🇸🇬 sin | 574ms | 794ms | 958ms |
| 🇦🇺 syd | 318ms | 365ms | 472ms |
| 🇨🇦 yul | 119ms | 140ms | 182ms |
| 🇨🇦 yyz | 150ms | 230ms | 431ms |
Analysis and Discussion
Comparing the results, the warm server is significantly faster, as expected: its p99 latency is 524ms versus 639ms for the cold server. This 115ms difference reflects the overhead of a cold start. Still, compared to a similar test we ran against the previous Node.js runtime, the performance is notably better.
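As a quick sanity check on the metrics above, the warm/cold gap at each percentile works out as follows:

```python
# Latency percentiles (ms) from the 24h metrics reported above.
warm = {"p50": 171, "p75": 275, "p90": 343, "p95": 417, "p99": 524}
cold = {"p50": 212, "p75": 333, "p90": 439, "p95": 529, "p99": 639}

# Cold-start overhead at each percentile.
overhead = {k: cold[k] - warm[k] for k in warm}
print(overhead)  # {'p50': 41, 'p75': 58, 'p90': 96, 'p95': 112, 'p99': 115}
```

The gap grows toward the tail, which is consistent with cold starts dominating the slowest requests.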
Read our blog post: Monitoring latency: Vercel Serverless Function vs Vercel Edge Function
The Good
- Excellent Developer Experience (DX): Deploying a Hono server on Vercel is incredibly simple, requiring just a couple of commands. The zero-configuration setup is a major plus for developers.
- Performance Improvements: Fluid Compute provides a tangible improvement over the previous Vercel Node.js runtime. It reduces the impact of cold starts and makes the serverless experience more efficient.
The Bad
- Deprecation of Edge Functions: Vercel has deprecated its dedicated Edge Functions in favor of a unified Vercel Functions infrastructure that uses Fluid Compute. While this unifies the platform, it might force a transition for existing projects.
- Cost Considerations: While Fluid Compute aims for efficiency, the "warm" server, which is active for roughly 8 minutes and 20 seconds per day, accumulates around 250 minutes of usage per month. This could exceed the free tier's limits, depending on the specific CPU time and memory usage, requiring a paid plan.
- Complexity: The new pricing model, which combines active CPU time, provisioned memory, and invocations, can be more complex to track and predict than the simpler invocation-based pricing of the past.
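The arithmetic behind that monthly usage estimate, assuming a 30-day month:

```python
# ~8 minutes 20 seconds of active time per day, extrapolated to a month.
daily_active_s = 8 * 60 + 20            # 500 seconds per day
monthly_active_min = daily_active_s * 30 / 60

print(monthly_active_min)  # 250.0 minutes (~4.2 hours) per month
```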
Conclusion
In conclusion, deploying a Hono server on Vercel offers excellent developer experience. However, the deprecation of Edge Functions and the complexity of the new pricing model are potential drawbacks.
Create an account on OpenStatus to monitor your API and get notified when your latency increases.