
What Is Synthetic Monitoring?

May 07, 2026 | by openstatus | [fundamentals]

Fake users, real signal. That's the trade synthetic monitoring makes - scripts that pretend to be real customers, executed on a schedule from external locations, so you find problems before your actual customers do.

It runs even when you have no traffic. It tests what you actually care about - the login flow, the checkout, the API endpoint your largest customer depends on - rather than waiting for someone to complain on Twitter.


How Synthetic Monitoring Works

A synthetic monitor is a script that runs on a schedule from your monitoring provider's infrastructure. The basic shape:

  1. Probe servers (in different regions) wake up on a schedule
  2. Each runs the configured check - hit a URL, drive a browser, run a sequence of API calls
  3. The result (success/failure, latency, response body) gets recorded
  4. If checks fail across multiple regions, alerts fire

The check itself can range from "GET /healthz, expect 200" to "load the homepage, fill in the signup form, complete payment, verify the receipt email" - all driven by a script.
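Steps 1-3 above can be sketched as a small probe loop. This is an illustrative shape, not any particular provider's API: `check` is whatever callable you configure, and the result record (status, latency, error) is an assumed schema.

```python
import time

def run_cycle(regions, check):
    """One scheduled probe cycle: run `check` from each region, record results.

    `check` is any callable taking a region name and returning True/False;
    exceptions (timeouts, connection errors) count as failures."""
    results = {}
    for region in regions:
        started = time.monotonic()
        try:
            ok = check(region)
            error = None
        except Exception as exc:
            ok, error = False, str(exc)
        results[region] = {
            "ok": ok,
            "latency_ms": (time.monotonic() - started) * 1000,
            "error": error,
        }
    return results
```

In a real system each region runs its probe independently; collapsing them into one loop here just keeps the record-keeping visible.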


Types of Synthetic Checks

1. HTTP / API Checks

The simplest and most common. Make a request to a URL, validate the response.

GET https://api.example.com/v1/health
Expect: 200, "ok" in body, response time < 500ms

Fast, cheap, easy to set up. The bread and butter of uptime monitoring. Doesn't catch frontend or JavaScript problems.
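The pass/fail rule behind a check like this is simple enough to write out. A minimal sketch, decoupled from the HTTP client that actually fetched the endpoint; the thresholds mirror the example above (200 status, "ok" in the body, under 500 ms):

```python
def evaluate_http_check(status_code, body, latency_ms,
                        expected_status=200, substring="ok", max_latency_ms=500):
    """Return (passed, reasons) for an HTTP check result."""
    reasons = []
    if status_code != expected_status:
        reasons.append(f"status {status_code} != {expected_status}")
    if substring not in body:
        reasons.append(f"body missing {substring!r}")
    if latency_ms >= max_latency_ms:
        reasons.append(f"latency {latency_ms:.0f}ms >= {max_latency_ms}ms")
    return (len(reasons) == 0, reasons)
```

Returning the list of reasons, rather than a bare boolean, is what makes the resulting alert actionable.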

2. Browser Checks

A real headless browser (typically Chromium) loads your page, executes JavaScript, and can interact with the DOM. You can script clicks, form fills, and assertions.

Visit /login
Type "test@example.com" in #email
Type "..." in #password
Click button[type="submit"]
Assert URL is /dashboard within 5s

Catches problems that pure HTTP checks miss: broken JavaScript, third-party script failures, layout breaks, slow client-side rendering, cookie/auth bugs. Slower and more expensive to run, so used for critical user paths rather than blanket coverage.
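The login-flow pseudo-script above could be implemented with a browser automation library. A hedged sketch using Playwright's sync API (requires `pip install playwright` plus `playwright install chromium`); the selectors (`#email`, `#password`) are assumptions about the page under test, carried over from the pseudo-script:

```python
def check_login_flow(base_url, email, password):
    """Drive a headless browser through the login flow; True if it reaches /dashboard."""
    # Import kept inside the function since Playwright is an optional dependency.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        try:
            page.goto(f"{base_url}/login")
            page.fill("#email", email)
            page.fill("#password", password)
            page.click('button[type="submit"]')
            # Fail the check if we don't land on the dashboard within 5s.
            page.wait_for_url(f"{base_url}/dashboard", timeout=5000)
            return True
        except Exception:
            return False
        finally:
            browser.close()
```

Because this launches a real browser, each run costs seconds rather than milliseconds, which is why browser checks run every few minutes instead of every few seconds.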

3. Multi-Step Transaction Checks

A sequence of API calls that depend on each other, executed as one logical unit.

1. POST /auth/login -> capture token
2. GET /api/me with token -> assert user_id
3. POST /api/orders with token -> capture order_id
4. GET /api/orders/{order_id} -> assert status="pending"

Tests the actual paths your customers integrate against. Catches problems where each individual endpoint works fine but the chain is broken (token format change, race condition, downstream dependency).
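The chain above can be scripted against any HTTP client. In this sketch, `api` is a hypothetical client object with `get`/`post` methods returning `(status, json_body)` tuples; the paths and field names come from the example, everything else is an assumption:

```python
def run_order_transaction(api, email, password):
    """Execute the 4-step order chain; raises AssertionError at the broken link."""
    # Step 1: log in and capture the token.
    status, body = api.post("/auth/login", {"email": email, "password": password})
    assert status == 200, "login failed"
    token = body["token"]

    # Step 2: identity check with the captured token.
    status, body = api.get("/api/me", token=token)
    assert status == 200 and "user_id" in body, "auth chain broken"

    # Step 3: create an order and capture its id.
    status, body = api.post("/api/orders", {"sku": "demo"}, token=token)
    assert status == 201, "order creation failed"
    order_id = body["order_id"]

    # Step 4: read it back and assert the expected state.
    status, body = api.get(f"/api/orders/{order_id}", token=token)
    assert status == 200 and body["status"] == "pending", "order state wrong"
    return order_id
```

The point is the data flow: each step consumes something captured from the previous one, which is exactly what independent per-endpoint checks can't exercise.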

4. Specialty Checks

  • DNS - confirm records resolve correctly and to the expected IP
  • SSL - validate certificate, check expiration date
  • TCP/UDP - confirm ports are listening
  • gRPC - check gRPC service health
  • Ping - basic network reachability

Each fills a specific gap. Most teams use them sparingly.
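As one concrete example, the SSL expiration check reduces to date arithmetic once you have the certificate's `notAfter` field (the string `ssl.SSLSocket.getpeercert()` returns). A sketch using only the standard library; the 30-day threshold is the conventional default, not a standard:

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days remaining on a cert, given its notAfter string, e.g. "Jun  1 12:00:00 2027 GMT"."""
    expires = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return (expires - now) / 86400

def ssl_check_ok(not_after, warn_days=30, now=None):
    """True while the cert has more than `warn_days` of validity left."""
    return days_until_expiry(not_after, now=now) > warn_days
```

The actual handshake to fetch the certificate is omitted here; the check's value is in running this comparison on a schedule so the warning fires weeks before the outage.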


Synthetic vs Real User Monitoring (RUM)

Two complementary approaches to understanding service health:

|                        | Synthetic Monitoring                        | Real User Monitoring (RUM)                            |
|------------------------|---------------------------------------------|-------------------------------------------------------|
| Source of data         | Scripted probes                             | Actual user sessions                                  |
| When it runs           | Fixed schedule                              | On every real user interaction                        |
| Works without traffic? | Yes                                         | No                                                    |
| Deterministic?         | Yes - same script each time                 | No - depends on user behavior                         |
| Catches what?          | Regressions, outages, third-party failures  | Real-world conditions: devices, networks, geographies |
| Cost model             | Per-check                                   | Per-session or per-event                              |

Synthetic answers "does this scenario work?" RUM answers "what are users actually experiencing?"

They complement each other. Use synthetic to catch issues before users do; use RUM to understand the long tail of problems specific to particular devices, browsers, or regions.


When to Use Synthetic Monitoring

Use it when:

  • You need to detect regressions before users do
  • Critical user paths must keep working (login, checkout, payment)
  • You have low or unpredictable traffic
  • You depend on third-party services and want to know when they break
  • You need pre-launch validation before a real-user rollout
  • Off-hours coverage matters (when no real users are around)

Skip it when:

  • The endpoint is purely internal and only used by other monitored services
  • The check would be a duplicate of an existing one
  • The cost (time + money) exceeds the value of the signal

The thing to optimize: signal per monitor. Fewer well-chosen synthetic checks beat a dashboard full of redundant ones.


A Practical Setup

A reasonable starting point for a mid-stage SaaS:

  • HTTP checks every 1 minute on: homepage, public API health endpoint, login endpoint, main authenticated API endpoints
  • Browser check every 10 minutes on: full login flow ending on the dashboard
  • Transaction check every 5 minutes on: API authentication sequence (token issue + authenticated call)
  • Specialty checks: SSL expiration warnings 30 days before, DNS validation hourly

That's roughly 10-15 monitors. Enough to catch real problems. Not so many that alert fatigue sets in.

For multi-region coverage, run each from at least 3 geographically distributed locations and require majority failure before alerting.
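The majority rule is worth writing down precisely, because it is what keeps one region's network trouble from paging you. A minimal sketch, assuming results arrive as a per-region pass/fail map:

```python
def should_alert(region_results):
    """Alert only on a strict majority of failing regions.

    region_results: mapping of region name -> bool (True = check passed)."""
    failures = sum(1 for ok in region_results.values() if not ok)
    return failures > len(region_results) / 2
```

With three regions, one failure stays quiet and two fire the alert; with more regions you can tune the quorum, but a strict majority is a sane default.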


Common Mistakes

Monitoring too many things. Every monitor is an alert source. Every alert is an interruption. Be ruthless about pruning monitors that don't represent real customer impact.

No browser coverage on critical paths. If signup or checkout depends on JavaScript, an HTTP check on the page returning 200 means nothing. Use a browser check on the actual flow.

Single-region monitoring. A check that runs from one location lies to you when that location has network problems. Always require multi-region majority.

Treating synthetic results as ground truth. Synthetic tells you whether your scripted scenario works. It doesn't tell you whether the user in Indonesia on a flaky 4G connection is having a good time. Pair with RUM for that.

Not updating scripts when the app changes. Synthetic checks rot. A login check still pointed at the old form can keep passing for weeks while real users, on the redesigned flow, are broken. Treat synthetic scripts like code - review them, version them, update them when the underlying app changes.


The Bottom Line

Synthetic monitoring is how you find out something is broken before your customers tell you. The investment pays off the first time a synthetic browser check catches a failed deploy that an HTTP check would have shown as healthy.

Start with HTTP checks on critical endpoints. Add browser checks for user flows that matter. Run from multiple regions. Push results to your status page so users have an authoritative source when things break.


OpenStatus runs synthetic monitors - HTTP, TCP, DNS, and full browser checks - from multiple regions worldwide. Open-source, with built-in status page integration.
