Monitoring latency: Cloudflare Workers vs Fly vs Koyeb vs Railway vs Render

Thibault Le Ouay Ducasse

Feb 19, 2024 · 13 min read

⚠️ We are using the default settings for each provider and conducting datacenter to datacenter requests. A real-world application's results are going to be different. ⚠️

Do you want to know which cloud provider offers the lowest latency?

In this post, I compare the latency of Cloudflare Workers, Fly, Koyeb, Railway and Render using OpenStatus.

I deployed the application on the cheapest or free tier offered by each provider.

For this test, I used a basic Hono server that returns a simple text response.

import { Hono } from "hono";
import { logger } from "hono/logger";
import { poweredBy } from "hono/powered-by";
 
const app = new Hono();
 
app.use("*", logger());
app.use("*", poweredBy());
 
app.get("/", (c) => {
  return c.text(
    "Just return the desired http status code, e.g. /404 🤯 \nhttps://www.openstatus.dev",
  );
});
 
// Bun and Cloudflare Workers both pick up the fetch handler from the default export.
export default app;

You can find the code here, it’s open source 😉.

OpenStatus monitored our endpoint every 10 minutes from 6 locations: Amsterdam, Ashburn, Hong Kong, Johannesburg, São Paulo, and Sydney.

It's a good way to test our own product and improve it.

Let's analyze the data from the past two weeks.

Cloudflare Workers

Cloudflare Workers is a serverless platform by Cloudflare. It lets you build new applications using JavaScript/TypeScript. You can deploy up to 100 worker scripts for free, running on more than 275 network locations.
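
Under the hood, a Worker is just a module that exports a fetch handler; the Hono app above already has that shape, so exporting it as the default export is all the runtime needs. A minimal hand-written Worker looks like this (a sketch for illustration, not our deployed code):

// A Worker module exports an object with a fetch handler.
export default {
  fetch(request: Request): Response {
    return new Response("Hello from the edge");
  },
};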

Latency metrics

uptime | fails | total pings | avg | p75 | p90 | p95 | p99
100% | 0 | 10,956 | 182ms | 138ms | 695ms | 778ms | 991ms

Timing metrics

RegionDNS (ms)Connection (ms)TLS Handshake (ms)TTFB (ms)Transfert (ms)
AMS17217270
GRU38213280
HKG19213290
IAD24114300
JNB1231681821850
SYD51111250

I can see that Johannesburg's latency is about ten times higher than that of the other locations.

Headers

From the Cloudflare response, I can get the location of the worker that handled each request via the Cf-Ray response header.
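
The Cf-Ray value ends with the IATA code of the data center that served the request, so a small script is enough to collect it (a sketch; the URL is a placeholder for our test endpoint):

// Read the colo code from the cf-ray response header.
const res = await fetch("https://example.workers.dev/");
const ray = res.headers.get("cf-ray"); // e.g. "85f3f5f0dd1531a3-AMS"
console.log(ray?.split("-").pop()); // "AMS"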

Checker region | Workers region | Requests
HKG | HKG | 1831
SYD | SYD | 1831
AMS | AMS | 1831
IAD | IAD | 1831
GRU | GRU | 1791
GRU | GIG | 40
JNB | AMS | 741
JNB | MUC | 4
JNB | HKG | 5
JNB | SIN | 6
JNB | NRT | 8
JNB | EWR | 10
JNB | CDG | 82
JNB | FRA | 276
JNB | LHR | 699

I can see that requests from JNB are never routed to a nearby data center.

Apart from the strange routing error in Johannesburg, Cloudflare Workers are fast worldwide.

I have not experienced any cold start issues.

Fly.io

Fly.io simplifies deploying and running server-side applications globally. Developers can deploy their applications near users worldwide for low latency and high performance. It uses lightweight Firecracker VMs to easily deploy Docker images.

Latency metrics

uptime | fails | total pings | avg | p75 | p90 | p95 | p99
100% | 0 | 10,952 | 1,471ms | 1,514ms | 1,555ms | 1,626ms | 2,547ms

Timing metrics

RegionDNS (ms)Connection (ms)TLS Handshake (ms)TTFB (ms)Transfert (ms)
AMS61814690
GRU50414310
HKG40514730
IAD30514700
JNB240514230
SYD30314890

DNS is fast and the connection is established quickly, but our machine's cold start slows everything down, leading to the high TTFB.

Here’s our config for Fly.io:

app = 'statuscode'
primary_region = 'ams'
 
[build]
  dockerfile = "./Dockerfile"
 
[http_service]
  internal_port = 3000
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0
  processes = ['app']
 
[[vm]]
  cpu_kind = 'shared'
  cpus = 1
  memory_mb = 256
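
The build section points to a Dockerfile that isn't shown above. For this Bun app it would look something like the following sketch (illustrative, not our exact file):

# Minimal image for the Bun + Hono server (assumed project layout).
FROM oven/bun:1
WORKDIR /app
COPY package.json bun.lockb ./
RUN bun install --production
COPY . .
EXPOSE 3000
CMD ["bun", "src/index.ts"]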

The primary region of our server is Amsterdam, and the Fly machine gets paused after a period of inactivity.

The machine starts slowly, as indicated by the logs showing a start time of 1.513643778s.

2024-02-14T11:24:16.107 proxy[286560ea703108] ams [info] Starting machine

2024-02-14T11:24:16.322 app[286560ea703108] ams [info] [ 0.035736] PCI: Fatal: No config space access function found

2024-02-14T11:24:16.533 app[286560ea703108] ams [info] INFO Starting init (commit: bfa79be)...

2024-02-14T11:24:16.546 app[286560ea703108] ams [info] INFO Preparing to run: `/usr/local/bin/docker-entrypoint.sh bun start` as root

2024-02-14T11:24:16.558 app[286560ea703108] ams [info] INFO [fly api proxy] listening at /.fly/api

2024-02-14T11:24:16.565 app[286560ea703108] ams [info] 2024/02/14 11:24:16 listening on [fdaa:3:2ef:a7b:10c:3c9a:5b4:2]:22 (DNS: [fdaa::3]:53)

2024-02-14T11:24:16.611 app[286560ea703108] ams [info] $ bun src/index.ts

2024-02-14T11:24:16.618 runner[286560ea703108] ams [info] Machine started in 460ms

2024-02-14T11:24:17.621 proxy[286560ea703108] ams [info] machine started in 1.513643778s

2024-02-14T11:24:17.628 proxy[286560ea703108] ams [info] machine became reachable in 7.03669ms

OpenStatus Prod metrics

If you update your fly.toml to include the following, you avoid cold starts and achieve much better latency.

  min_machines_running = 1

This is the data for our production server deployed on Fly.io.

uptime | fails | total pings | avg | p75 | p90 | p95 | p99
100% | 0 | 12,076 | 61ms | 67ms | 164ms | 198ms | 327ms

We use Fly.io in production, and the machine never sleeps, yielding much better results.

Koyeb

Koyeb is a developer-friendly serverless platform that allows for global app deployment without the need for operations, servers, or infrastructure management. Koyeb offers a free Starter plan that includes one Web Service and one Database service. The platform focuses on ease of deployment and scalability for developers.

Latency metrics

uptime | fails | total pings | avg | p75 | p90 | p95 | p99
100% | 0 | 10,955 | 539ms | 738ms | 881ms | 1,013ms | 1,525ms

Timing metrics

RegionDNS (ms)Connection (ms)TLS Handshake (ms)TTFB (ms)Transfert (ms)
AMS502171070
GRU13965754070
HKG482133210
IAD351121290
JNB2981117200
SYD971107110

Headers

The response headers show that none of our requests are cached: they contain cf-cache-status: dynamic. Cloudflare handles the Koyeb edge layer (see https://www.koyeb.com/blog/building-a-multi-region-service-mesh-with-kuma-envoy-anycast-bgp-and-mtls).
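
To check this yourself, dumping the response headers is enough; look at cf-cache-status and the cf-ray colo suffix (a sketch; the URL is a placeholder for the Koyeb deployment):

// Print every response header from the Koyeb-hosted endpoint.
const res = await fetch("https://example.koyeb.app/");
for (const [name, value] of res.headers) {
  console.log(`${name}: ${value}`);
}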

Our requests follow this route:

Cloudflare Workers -> Koyeb global load balancer -> Koyeb backend

Let's see where we hit the Cloudflare Workers:

Checker region | Workers region | Requests
AMS | AMS | 1866
GRU | GRU | 504
GRU | IAD | 38
GRU | MIA | 688
GRU | EWR | 337
GRU | CIG | 299
HKG | HKG | 1866
IAD | IAD | 1866
JNB | JNB | 1861
JNB | AMS | 1
SYD | SYD | 1866

The Koyeb global load balancer regions we hit:

Checker region | Load balancer region | Requests
AMS | FRA1 | 1866
GRU | WAS1 | 1866
HKG | SIN1 | 1866
IAD | WAS1 | 1866
JNB | PAR1 | 4
JNB | SIN1 | 1864
JNB | FRA1 | 1
JNB | SIN1 | 1866

I deployed our app in the Frankfurt data center.

Railway

Railway is a cloud platform designed for building, shipping, and monitoring applications without the need for Platform Engineers. It simplifies the application development process by offering seamless deployment and monitoring capabilities.

Latency metrics

uptime | fails | total pings | avg | p75 | p90 | p95 | p99
99.991% | 1 | 10,955 | 381ms | 469ms | 653ms | 661ms | 850ms

Timing metrics

RegionDNS (ms)Connection (ms)TLS Handshake (ms)TTFB (ms)Transfert (ms)
AMS921181580
GRU141151271780
HKG845542250
IAD7214650
JNB181931783190
SYD211081052800

Headers

The headers don't provide any information.

Railway runs on Google Cloud Platform. It's the only service that does not let us pick a specific region on the free plan; our test app is located in us-west1 (Portland, Oregon). We can see that the latency is lowest from IAD.

By default, our app did not scale down to zero; it was always running, so there were no cold starts.

Render

Render is a platform that simplifies deploying and scaling web applications and services. It offers features like automated SSL, automatic scaling, native support for popular frameworks, and one-click deployments from Git. The platform focuses on simplicity and developer productivity.

Latency metrics

uptime | fails | total pings | avg | p75 | p90 | p95 | p99
99.89% | 12 | 10,946 | 451ms | 447ms | 591ms | 707ms | 902ms

Timing metrics

RegionDNS (ms)Connection (ms)TLS Handshake (ms)TTFB (ms)Transfert (ms)
AMS20271070
GRU61264070
HKG76263210
IAD15151290
JNB361611677200
SYD103147110

Headers

The headers don't provide any information.

I deployed our app in the Frankfurt data center.

According to the Render docs, the free tier shuts the service down after 15 minutes of inactivity. However, our app is being hit by a monitor every 10 minutes, so it should never scale down to zero.

Render spins down a Free web service that goes 15 minutes without receiving inbound traffic. Render spins the service back up whenever it next receives a request to process.

I think the failures are due to the cold start of our app. We have a default timeout of 30s, and the Render app takes up to 50s to start. We might have hit an inflection point between cold and warm.
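
To illustrate, a 30-second client timeout aborts the request before a ~50-second cold start finishes, so the check is recorded as a failure (a sketch mirroring the timeout value, not our actual checker code):

// Abort the request after 30s; a cold instance that needs ~50s never answers in time.
try {
  const res = await fetch("https://example.onrender.com/", {
    signal: AbortSignal.timeout(30_000),
  });
  console.log("status:", res.status);
} catch (err) {
  console.log("check failed:", (err as Error).name); // "TimeoutError"
}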

Conclusion

Here are the results of our test:

Provider | Uptime | Failed pings | Total pings | Avg (ms) | p75 (ms) | p90 (ms) | p95 (ms) | p99 (ms)
CF Workers | 100% | 0 | 10,956 | 182 | 138 | 690 | 778 | 991
Fly.io | 100% | 0 | 10,952 | 1,471 | 1,514 | 1,555 | 1,626 | 2,547
Koyeb | 100% | 0 | 10,955 | 536 | 738 | 881 | 1,013 | 1,525
Railway | 99.991% | 1 | 10,955 | 381 | 469 | 653 | 661 | 850
Render | 99.89% | 12 | 10,946 | 451 | 447 | 591 | 707 | 902

If you value low latency, Cloudflare Workers are the best option for fast global performance without cold start issues. They deploy your app worldwide efficiently.

For multi-region deployment, check out Koyeb and Fly.io.

For specific region deployment, Railway and Render are good choices.

Choosing a cloud provider involves considering not just latency but also user experience and pricing.

We use Fly.io in production and are satisfied with it.

Vercel

I haven't included Vercel in this test, but we have a blog post comparing Vercel Serverless and Edge functions. You can find it here.

If you want to monitor your API or website, create an account on OpenStatus.