Running LLMs in production without proper observability is like flying blind. You need to understand what's happening with every request: how your models are performing, where tokens are being consumed, and whether your AI outputs meet quality standards.
That's why we're excited to introduce our new Galileo Tracing policy for the Zuplo AI Gateway. This simple drop-in policy gives you comprehensive observability for your LLM applications without writing a single line of instrumentation code.
## What is Galileo?
Galileo is an evaluation and observability platform designed to help developers improve their AI applications. It provides tools for logging traces, evaluating LLM outputs, running systematic experiments, and monitoring production AI systems. Galileo works with all major LLM providers, and you can now use it directly from the Zuplo AI Gateway.
## What the policy does
The Galileo Tracing policy automatically captures detailed traces of every AI Gateway request and response. Each trace includes:
- Complete request and response data (prompts, completions, parameters)
- Token usage breakdowns (input, output, and total)
- Performance metrics (latency, throughput, error rates)
- Hierarchical trace structure for debugging complex workflows
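The token usage breakdowns in each trace make per-request cost accounting straightforward. As a minimal sketch, here is how you might turn a trace's input/output token counts into an estimated cost; the model name and per-token prices below are illustrative assumptions, not actual rates:

```python
# Hypothetical prices in USD per 1M tokens -- real prices vary by provider
# and model, so substitute your own rate card.
PRICES = {"example-model": {"input": 2.50, "output": 10.00}}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from the token usage recorded in a trace."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A request that consumed 1,200 input and 400 output tokens:
cost = estimate_cost("example-model", input_tokens=1_200, output_tokens=400)
```

Summing this over the traces in a log stream gives a simple spend estimate per model or per route.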
## How to set it up
To get started using the policy:
- Sign up for a Galileo account and create a new project
- Generate an API key in your Galileo dashboard specifically for use with your AI Gateway app
- In your Zuplo AI Gateway app, click Policies, then Add Policy, and select Galileo Tracing
- Configure the policy with your Galileo API key, project ID, and log stream ID
Your AI Gateway requests will now automatically flow into Galileo for analysis.
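For reference, a Zuplo policy configuration typically lives in your project's `policies.json`. The sketch below shows roughly what the entry might look like, with secrets pulled from environment variables via Zuplo's `$env()` syntax; the exact `policyType`, `export`, and option names are assumptions here, so check the policy documentation for the canonical schema:

```json
{
  "policies": [
    {
      "name": "galileo-tracing",
      "policyType": "galileo-tracing-outbound",
      "handler": {
        "export": "GalileoTracingPolicy",
        "module": "$import(@zuplo/runtime)",
        "options": {
          "apiKey": "$env(GALILEO_API_KEY)",
          "projectId": "$env(GALILEO_PROJECT_ID)",
          "logStreamId": "$env(GALILEO_LOG_STREAM_ID)"
        }
      }
    }
  ]
}
```

Keeping the API key in an environment variable rather than inline means it never lands in source control.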

## What you can do with Galileo
Once your traces are flowing into Galileo, you can:
Evaluate output quality: Use Galileo's built-in evaluation metrics to assess hallucinations, prompt adherence, toxicity, and other quality dimensions. Track these metrics over time to catch quality regressions before they reach users.
Run experiments: Test different models, prompts, or parameters systematically. Compare performance across variants with statistical confidence rather than relying on intuition.
Debug production issues: When something goes wrong, drill into individual traces to see exactly what prompts were sent, how the model responded, and where the failure occurred.
Monitor trends: Track performance, cost, and quality metrics across your entire AI application. Set up alerts when metrics drift outside acceptable ranges.
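To make the "alert when metrics drift" idea concrete, here is a minimal stand-in for the kind of threshold rule you might configure, checking whether a metric's recent average has left an acceptable band. This is a generic illustration, not Galileo's alerting API; the metric values, baseline, and tolerance are invented:

```python
def drifted(values: list[float], baseline: float, tolerance: float) -> bool:
    """Return True if the recent average of a metric strays more than
    `tolerance` from its expected `baseline` value."""
    avg = sum(values) / len(values)
    return abs(avg - baseline) > tolerance

# e.g. daily hallucination-rate samples against a 2% baseline, +/-1% band:
drifted([0.05, 0.06, 0.04], baseline=0.02, tolerance=0.01)   # drift detected
drifted([0.020, 0.021, 0.019], baseline=0.02, tolerance=0.01)  # within band
```

In practice the platform evaluates rules like this for you; the point is that an alert is just a predicate over aggregated trace metrics.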
That's just the start. We recommend letting the policy collect data for a few days before diving into analysis, so you have enough traces to draw meaningful conclusions.
## Why this matters
With the Galileo Tracing policy combined with Zuplo's AI Gateway, you get production-grade AI observability out of the box: you can ship AI features with confidence, knowing you have the visibility to maintain quality and control costs.
## Try it today for free
You can get started with the Zuplo AI Gateway by signing up for a free account. Galileo also offers a free account that allows up to 5000 traces per month.
For more information on implementation, see the Galileo Tracing policy documentation.