Hono on Vercel: A Performance Deep Dive into Fluid Compute

This article details how to deploy a new Hono server on Vercel and monitor it using OpenStatus, with a focus on observing the impact of Vercel's Fluid Compute. We'll compare the performance of a "warm" server, which is regularly pinged, against a "cold" server that remains idle.
Our Setup
First, we set up our Hono server using Vercel's zero-configuration deployment:
- We created a new Hono project: `pnpm create hono@latest`.
- We navigated into the new directory: `cd new-directory`.
- We followed Vercel's zero-configuration deployment instructions for Hono backends.
- We deployed the application using `vc deploy` (a sketch of the resulting entry point follows this list).
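For reference, here is roughly what the scaffolded entry point looks like. Treat it as a sketch: the exact file layout and adapter usage depend on the template version `create hono` generates, and the route below is simply the endpoint our monitors ping.

```ts
// api/index.ts (illustrative layout; your template may differ)
import { Hono } from 'hono'
import { handle } from 'hono/vercel'

const app = new Hono().basePath('/api')

// The endpoint our OpenStatus monitors ping
app.get('/', (c) => c.json({ message: 'Hello from Hono on Vercel!' }))

// Adapt the Hono app to Vercel's function signature
export default handle(app)
```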
We repeated this process to create two identical servers. One server is designated as "warm," receiving a request every minute to prevent it from going idle. The other is "cold," and we only send a request to it once per hour to observe the impact of cold starts. Both servers were hosted in the IAD1 region.
Next, we configured monitoring with OpenStatus, creating a new monitor for each server from a YAML configuration.
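For the "cold" server, the monitor definition looks roughly like this. It is a sketch following OpenStatus's monitors-as-code schema; field names may differ slightly from the current format, and the URL is a placeholder:

```yaml
# Monitor for the "cold" server; the warm monitor is identical
# except for its name, URL, and a "1m" frequency.
cold-hono-server:
  name: "Hono Cold Server"
  description: "Pings the idle deployment once per hour"
  active: true
  frequency: "1h"
  kind: http
  request:
    url: https://cold-hono.example.com/ # placeholder URL
    method: GET
  assertions:
    - kind: statusCode
      compare: eq
      target: 200
```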
We deployed both monitors using the OpenStatus CLI.
Our Metrics
These are our metrics for both cold and warm deployments from the last 24 hours.
| Metric | Warm | Cold |
| --- | --- | --- |
| Uptime | 100% | 100% |
| Failing | 0 | 0 |
| Total pings | 47,520 | 792 |
| p50 latency | 171 ms | 212 ms |
| p75 latency | 275 ms | 333 ms |
| p90 latency | 343 ms | 439 ms |
| p95 latency | 417 ms | 529 ms |
| p99 latency | 524 ms | 639 ms |
Analysis and Discussion
When we compare our results, the warm server is consistently faster, as expected. Its p99 latency is 524 ms, while the cold server's is 639 ms, a 115 ms difference that highlights the overhead of a cold start. The gap grows with the percentile: 41 ms at p50, 96 ms at p90, and 115 ms at p99. Still, compared to a similar test we ran on the previous Node.js runtime, both servers perform notably better.
Read our blog post: Monitoring latency: Vercel Serverless Function vs Vercel Edge Function
The Good
- Excellent Developer Experience (DX): Deploying a Hono server on Vercel is incredibly simple, requiring just a couple of commands. The zero-configuration setup is a major plus for developers.
- Performance Improvements: Fluid Compute provides a tangible improvement over the previous Vercel Node.js runtime. It reduces the impact of cold starts and makes the serverless experience more efficient.
The Bad
- Deprecation of Edge Functions: Vercel has deprecated its dedicated Edge Functions in favor of a unified Vercel Functions infrastructure that uses Fluid Compute. While this unifies the platform, it might force a transition for existing projects.
- Cost Considerations: While Fluid Compute aims for efficiency, the "warm" server is active for roughly 8 minutes and 20 seconds per day, which works out to roughly 250 minutes of usage per month (see the back-of-the-envelope sketch after this list). Depending on the specific CPU time and memory usage, this could exceed the free tier's limits and require a paid plan.
- Complexity: The new pricing model, which combines active CPU time, provisioned memory, and invocations, can be more complex to track and predict than the simpler invocation-based pricing of the past.
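To make the cost point concrete, here is the back-of-the-envelope arithmetic behind those numbers. The active-CPU time per request is our assumption (Fluid Compute bills active CPU time, which is far lower than wall-clock latency when concurrent requests share an instance), so treat this as a rough sketch rather than billing data:

```ts
// Rough monthly-usage estimate for the "warm" server.
// 47,520 pings/day comes from our metrics table; the ~10.5 ms of
// active CPU per request is an assumption, not a measured figure.
const pingsPerDay = 47_520
const activeCpuSecondsPerPing = 0.0105 // assumed

const activeSecondsPerDay = pingsPerDay * activeCpuSecondsPerPing // ≈ 499 s ≈ 8m19s
const activeMinutesPerMonth = (activeSecondsPerDay * 30) / 60 // ≈ 250 minutes

console.log(`~${Math.round(activeSecondsPerDay)} s/day, ~${Math.round(activeMinutesPerMonth)} min/month`)
```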
Conclusion
In conclusion, deploying a Hono server on Vercel offers an excellent developer experience, and Fluid Compute meaningfully reduces the impact of cold starts. However, the deprecation of Edge Functions and the complexity of the new pricing model are potential drawbacks.
Create an account on OpenStatus to monitor your API and get notified when your latency increases.