Zuplo
API Observability

Know exactly what your APIs are doing

Real-time logs and analytics for every request through your gateway — searchable, filterable, and exportable, with consumer-level attribution out of the box.

Why this matters

Most API problems aren't found in dashboards — they're reported by customers

API teams ship faster every quarter, but visibility into the gateway itself usually lags behind. The result: longer outages, blame ping-pong between teams, and a backlog of "we'll add metrics later."

Slow MTTR

Without span-level timing across policies, handlers, and origins, every incident is a manual tour through stack traces and grepped log lines.

Black-box gateway

Your handler logs show what your code did — but not which inbound policy rate-limited, blocked, or quietly added 200ms before the request ever got there.

No per-customer visibility

When a customer reports a problem — or a single consumer is hammering your API — can you tell which one, on which endpoint, in a single click?

Blind spots in AI traffic

Token spend, prompt-injection blocks, and semantic-cache hits all happen at the gateway — completely invisible to your application APM.

What you get

Observability that pays for itself the first time it saves an incident

Resolve issues 10× faster

Click from a P99 spike straight to the slow span. Logs, metrics, and (soon) traces in one place — with the API key, route, and policy chain already attached.

Understand every customer

Every log, metric, and trace is automatically tagged with the consumer that made the call. Find power users, abusers, and revenue drivers in a single query.

Keep your existing stack

12 prebuilt destinations plus any OTLP-compatible collector. Forward to multiple at once, ship custom payload shapes in TypeScript — nothing to deploy alongside the gateway, no rewrites, no vendor lock-in.
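
To make "custom payload shapes in TypeScript" concrete, here is a minimal sketch of reshaping a gateway log entry into the attribute names a destination expects. `GatewayLogEntry` and `shapeForDatadog` are illustrative names invented for this example, not Zuplo's actual API:

```typescript
// Illustrative sketch — not Zuplo's actual API. Assumes a generic gateway
// log entry and reshapes it into Datadog-style attribute names.
interface GatewayLogEntry {
  timestamp: string;
  method: string;
  path: string;
  status: number;
  latencyMs: number;
  consumer?: string;
}

function shapeForDatadog(entry: GatewayLogEntry) {
  return {
    ddsource: "api-gateway",
    message: `${entry.method} ${entry.path} -> ${entry.status}`,
    // Map HTTP status onto a log severity.
    status: entry.status >= 500 ? "error" : entry.status >= 400 ? "warn" : "info",
    "http.method": entry.method,
    "http.status_code": entry.status,
    duration: entry.latencyMs,
    // Consumer attribution rides along on every entry.
    "usr.id": entry.consumer ?? "anonymous",
  };
}
```

Because the shaping is plain code rather than a config DSL, the same function can fan the entry out to several destinations with different shapes.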

Logs

Stop hunting through stack traces

Every request through your gateway lands here in real time — searchable, filterable, and exportable. Find the failing call in seconds, with severity, latency, status, and the API key that made it already attached.

Logs
Search logs ( / ) · 8 requests · 1m buckets · timeline 11:00–12:00
Filters: Environment (main) · Time Range · Severity

Time      Status  Method  Path               Latency  Level
12:00:40  200     GET     /api/v2/users      12ms     info
12:00:42  429     POST    /api/v1/data       0ms      warning
12:00:44  200     GET     /api/v2/products   8ms      info
12:00:46  500     POST    /api/v1/payments   234ms    error
12:00:48  403     DELETE  /api/v1/users/42   3ms      error
12:00:50  200     GET     /health            1ms      info
12:00:52  200     POST    /api/v1/auth       89ms     info
12:00:54  429     GET     /api/v2/analytics  0ms      warning
Live log stream
Powerful search & filters
Per-request latency
Timeline histogram
Consumer attribution
Export & forward
Analytics

Skip the dashboard build-out

Track P50/P95/P99 latency, error rates, request volume, and per-customer traffic from day one — without standing up a separate metrics pipeline or a custom Grafana board. Hit your SLOs with statistical confidence and spot regressions before customers do.

API Analytics
Last 24h · LIVE

Total Requests: 2.41M (+12.3%)
Avg Latency: 42ms (-4.2%)
Error Rate: 0.3% (+0.1%)
Active Consumers: 1.2k (+3.1%)

[Request Volume chart, reqs/min]

Endpoint                 Requests  P99    Errors
GET /api/v2/users        847k      12ms   0.1%
POST /api/v1/auth        234k      89ms   0.8%
GET /api/v2/products     156k      8ms    0.0%
DELETE /api/v1/sessions  23k       234ms  2.1%
Real-time request volume
Consumer breakdowns
P50 / P95 / P99 latency
Geographic distribution
Error rate trends
Custom time ranges
Integrations

Keep the observability stack you already use

Forward logs, metrics, and traces to 12 prebuilt destinations or any OTLP-compatible collector — fan out to multiple at once, nothing to deploy alongside the gateway, and TypeScript hooks for any custom payload your team needs.

Observability Integrations
12 destinations · LIVE

Logs: 10 destinations · 1,284 events/s
Metrics: 4 destinations · 847 events/s
Tracing: 4 OTLP collectors · 312 events/s

Destination           Sends                   Status
Datadog               Logs, Metrics           Connected
New Relic             Logs, Metrics           Connected
Dynatrace             Logs, Metrics, Tracing  Connected
Splunk                Logs                    Connected
AWS CloudWatch        Logs                    Connected
Google Cloud Logging  Logs                    Connected
Loki                  Logs                    Connected
Sumo Logic            Logs                    Connected

Live activity · streaming
12:00:08  Metrics  Datadog    +184
12:00:06  Tracing  Honeycomb  +312
12:00:04  Logs     Splunk     +89
12:00:02  Metrics  New Relic  +156
9 logging plugins
4 metrics plugins
OpenTelemetry tracing
Any OTLP HTTP collector
Multi-destination fan-out
Custom log shapes (TS)
AI Gateway

The same observability for your AI traffic

Zuplo's AI Gateway runs on the same policy pipeline as the API Gateway, so every LLM call shows up in your logs, metrics, and traces — alongside your REST traffic. Plus AI-specific signals you can't get from a generic APM:

  • Token spend per project, model, and consumer — with budget guardrails to stop runaway costs.
  • Prompt-injection blocks and PII catches — every guardrail event logged with the offending prompt.
  • Time-to-first-byte and semantic-cache hit rates — the latency signals that actually matter for LLM UX.
  • Stream every call to Galileo, Comet Opik, or your own collector — full request/response traces with token counts and latency.
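
The budget-guardrail idea can be sketched in a few lines of TypeScript. `TokenBudget` and `shouldBlock` are hypothetical names for this illustration, not the AI Gateway's real interface:

```typescript
// Illustrative sketch — not the AI Gateway's real interface. A per-project
// daily budget with a pre-flight check before each LLM call.
interface TokenBudget {
  dailyLimitUsd: number;
  spentTodayUsd: number;
}

// Block the call if projected spend would exceed the daily limit.
function shouldBlock(budget: TokenBudget, estimatedCallUsd: number): boolean {
  return budget.spentTodayUsd + estimatedCallUsd > budget.dailyLimitUsd;
}

const budget: TokenBudget = { dailyLimitUsd: 400, spentTodayUsd: 312.46 };
shouldBlock(budget, 5);   // a small call still fits
shouldBlock(budget, 120); // a large batch job gets cut off
```

Running this check at the gateway, before the model is ever called, is what turns token accounting into an enforceable guardrail rather than an after-the-fact report.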

Tokens (24h): 4.82M (+18.4%)
Spend (24h): $312.46 (72% of daily budget)
TTFT P95: 348ms (-12ms)
Cache hit rate: 31.4% (+2.1%)

Prompt injection blocked · /v1/chat · 12:42:08
What makes Zuplo different

Built for the way modern API teams actually work

Most observability stories stop at "we have integrations." Zuplo's goes a step further: your data arrives already written the way you need it.

Programmable, not configured

Custom log shapes, custom metric calculations, custom forwarding — all in plain TypeScript. Attach order IDs, tenant IDs, plan tier, or any business event to gateway telemetry without standing up a separate pipeline.
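
For illustration, attaching business context to gateway telemetry might look like the sketch below. `RequestMeta`, `PlanLookup`, and `enrich` are invented names for this example, not Zuplo's hook surface:

```typescript
// Illustrative sketch — invented names, not Zuplo's hook surface. Joins a
// request record with business context keyed by the consumer on the API key.
interface RequestMeta {
  route: string;
  latencyMs: number;
  apiKeyConsumer: string;
}

type PlanTier = "free" | "pro" | "enterprise";

interface PlanLookup {
  [consumer: string]: { tenantId: string; planTier: PlanTier };
}

// Attach tenant ID and plan tier so per-customer dashboards need no app-side tagging.
function enrich(meta: RequestMeta, plans: PlanLookup) {
  const plan = plans[meta.apiKeyConsumer] ?? { tenantId: "unknown", planTier: "free" as PlanTier };
  return { ...meta, ...plan };
}
```

Because the lookup runs in the gateway itself, every forwarded log and metric carries the tenant and plan fields without touching application code.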

Built into the gateway, not bolted on

Observability runs inside the gateway runtime, not as a sidecar or daemon you maintain alongside it. Logs, metrics, and traces ship straight from the request to your destination across 300+ edge data centers — nothing to deploy, version, or keep in sync with the gateway release.

Per-API-key by default

Because Zuplo issues your API keys, every request is automatically attributed to its consumer — no instrumentation code, no manual tagging. Per-customer dashboards work on day one.

One pipeline for every protocol

REST, GraphQL, WebSockets, SOAP-over-HTTP, MCP servers, and AI Gateway calls all flow through the same observability stack. One story across your entire API surface.

Real questions, real answers

What teams use this for

“My P99 just spiked. Where's the bottleneck?”

Click the spike → in-portal trace → slowest span. Roughly 30 seconds from alert to root cause, without leaving Zuplo.

“Which customer is hammering us right now?”

Filter analytics by consumer. Top 10 by request count, error rate, or P95 latency — straight from the dashboard.

“Are we approaching the LLM budget?”

AI Gateway dashboards roll up token spend per project, model, and consumer — with budget guardrails to cut off runaway costs.

“Did my last deploy regress anything?”

Compare the same endpoint before and after a release. Latency distribution, error rate, and request volume side by side in seconds.

Coming soon

More observability is on the way

In-portal tracing, external API monitoring, threshold alerts, and breaking-change detection are all on the roadmap. Sign up to get early access the moment they ship.

See the public roadmap

Frequently Asked Questions

Common questions about API observability with Zuplo.

Ready to see your APIs in action?

Spin up a free Zuplo project and watch the logs and analytics start flowing in real time.