---
title: "How to Use Block's goose AI Agent with Zuplo AI Gateway"
description: "Configure Block's open-source goose AI agent to route through Zuplo's AI Gateway for token-cost visibility, rate limits, and per-team usage analytics."
canonicalUrl: "https://zuplo.com/blog/2025/10/03/ai-gateway-with-goose"
pageType: "blog"
date: "2025-10-03"
authors: "martyn"
tags: "AI"
image: "https://zuplo.com/og?text=How%20to%20Use%20Block's%20goose%20AI%20Agent%20with%20Zuplo%20AI%20Gateway"
---
In this guide, I'll show you how to configure
[goose](https://block.github.io/goose/) to work with OpenAI through Zuplo's
[AI Gateway](https://zuplo.com/ai-gateway) in just a few minutes.

<YouTubeVideo videoId="bL7Uny0ksNA" />

## What is goose?

[goose](https://block.github.io/goose/) (yes, that's deliberately lower case) is
an open-source AI agent built by
[Block](https://block.xyz/inside/block-open-source-introduces-codename-goose)
(the company behind Square and Cash App). It's a free alternative to commercial
AI coding assistants, released under the Apache 2.0 license. goose works as both
a CLI tool and a desktop app for automating engineering tasks — from writing and
debugging code to running shell commands and managing infrastructure. It
supports 15+ LLM providers including OpenAI, Anthropic, and Google, has
extensive [MCP (Model Context Protocol)](/features/mcp-servers) support, and
offers powerful extensibility through recipes. Whether you're working solo or in
a team, goose provides a flexible foundation for AI-powered automation without
vendor lock-in.

## Why Use an AI Gateway?

While goose works great out of the box, routing it through an AI gateway like
Zuplo gives you several advantages:

- **Cost visibility**: Track token usage and spending in real-time
- **Usage controls**: Set budget limits and thresholds
- **Team management**: Control access across your organization
- **Security policies**: Add rate limiting and other protections
- **Observability**: Monitor all LLM requests in one place

## Setting Up goose with Zuplo AI Gateway

### Step 1: Configure Your AI Gateway

First, set up your Zuplo AI Gateway with your
[OpenAI provider](https://zuplo.com/docs/ai-gateway/providers). Create an
[app](https://zuplo.com/docs/ai-gateway/apps) for goose and select your desired
model (such as GPT-4o). This is also where you can optionally configure usage
limits, budget warnings, or other policies.

Once created, grab your app's API key; you'll need it in the next step.
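
Before pointing goose at the gateway, you can optionally sanity-check the endpoint with a quick `curl` request. The URL, model name, and key below are placeholders for your own gateway hostname and app API key; this assumes your gateway route exposes the standard OpenAI-compatible chat completions shape (the same one goose will use):

```bash
# Placeholder values — replace with your own gateway URL and app API key
export ZUPLO_GATEWAY_URL="https://your-gateway.example.com/v1/chat/completions"
export ZUPLO_APP_KEY="your-app-api-key"

# Send a minimal OpenAI-style chat completion request through the gateway
curl -s "$ZUPLO_GATEWAY_URL" \
  -H "Authorization: Bearer $ZUPLO_APP_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

If the request succeeds, you should see a normal chat completion response, and the call will already show up in your gateway's analytics.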

### Step 2: Configure goose

Run the configuration command:

```bash
goose configure
```

Then follow these steps:

1. Select **Configure providers**
2. Choose **OpenAI** as your provider
3. Replace the API key with the one from your Zuplo AI Gateway app
4. Update the host URL to point to your AI Gateway endpoint instead of OpenAI
   directly
5. Keep the base path as default (`v1/chat/completions`)
6. Confirm your default model selection

That's it! goose will validate the configuration and save it.
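
For reference, the resulting provider settings end up in goose's config file (typically `~/.config/goose/config.yaml`). The sketch below shows roughly what to expect; the hostname is a placeholder, and the exact key names can vary between goose versions. Note that the API key itself is usually stored in your system keyring rather than in this file:

```yaml
# Rough sketch of the relevant goose settings after `goose configure`.
# Hostname is a placeholder; key names may differ by goose version.
GOOSE_PROVIDER: openai
GOOSE_MODEL: gpt-4o
OPENAI_HOST: https://your-gateway.example.com
OPENAI_BASE_PATH: v1/chat/completions
```

If something looks off later, editing this file (or re-running `goose configure`) is the quickest way to fix it.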

### Step 3: Test It Out

Run goose and try a query:

```bash
goose
```

Now, you're ready to start working with goose as your trusty AI agent. Ask
questions, make plans, or simply start building.
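
If you prefer a one-shot check that requests are flowing through the gateway, goose can also run non-interactively. The exact flags depend on your goose version, but recent releases support a `run` subcommand with a text instruction:

```bash
# One-shot, non-interactive run (flag names may differ by goose version)
goose run -t "List the files in the current directory and summarize them"
```

Each request goose makes, interactive or not, will appear in your gateway analytics.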

## See Your Usage in Real-Time

Head over to your Zuplo AI Gateway
[dashboard](https://zuplo.com/docs/ai-gateway/getting-started) while goose is
working. You'll immediately see:

- Number of requests made
- Token consumption broken down by input and output tokens
- Current spending per app and per team
- Time to first byte and other performance metrics

This visibility is invaluable for understanding your AI usage patterns and
controlling costs, especially in team environments. If you've set daily or
monthly budget limits, you'll see when an app approaches its threshold — the
gateway can enforce limits or notify you automatically so there are no surprise
bills.

For even deeper analysis, you can integrate with observability platforms like
[Comet Opik](https://zuplo.com/docs/ai-gateway/policies/comet-opik-tracing) to
get full trace-level visibility: latency breakdowns, prompt and response
payloads, and LLM-as-a-judge evaluation scoring on your agent's output quality.

## The Benefits

By routing goose through Zuplo AI Gateway, you get:

- **Immediate observability** into all your LLM interactions
- **Cost control** with the ability to set budgets and alerts
- **Team governance** when multiple people use goose, each with their own Zuplo
  AI Gateway API key
- **Security policies** like rate limiting and prompt injection detection
- **No changes to your workflow**: goose works exactly as before

The setup takes less than five minutes, and you gain enterprise-grade management
capabilities for your AI agent usage. If you're also using Anthropic's Claude
Code, the same gateway handles that too — see our
[guide to using Claude Code with Zuplo AI Gateway](/blog/ai-gateway-with-claude-code).

## Frequently Asked Questions

### Is goose free?

Yes. goose is completely free and open-source under the Apache 2.0 license. You
can run it with a local LLM through Ollama at zero cost. If you use a cloud
provider like OpenAI or Anthropic, you pay their standard API fees — which is
exactly where an AI Gateway helps you track and control that spend.

### Can I run goose against Anthropic and OpenAI through one gateway?

Absolutely. Zuplo AI Gateway supports
[multiple providers](https://zuplo.com/docs/ai-gateway/providers) including
OpenAI, Anthropic, Google, and Mistral. You can create separate apps for each
provider under the same team, giving you unified cost tracking and rate limits
across all of them.

### What happens when a budget limit is reached?

When an app hits its daily or monthly spend limit, the gateway blocks further
requests. goose will stop making calls until the limit resets or an admin raises
the threshold — so you're never caught off guard by runaway costs.

## Get Started

Ready to add visibility and control to your goose setup?
[Try Zuplo AI Gateway](https://portal.zuplo.com/signup?utm_source=goose-demo&utm_medium=web&utm_campaign=ai-week)
and see how easy it is to manage your LLM usage across all your tools and
applications.

## More from AI Week

This article is part of Zuplo's AI Week: a week dedicated to AI, LLMs and, of
course, APIs, centered around the release of our
[AI Gateway](https://zuplo.com/ai-gateway).

You can find the other articles and videos from this week below:

- Day 1: [AI Gateway Overview](/blog/zuplo-ai-gateway) with Zuplo CEO, Josh
  Twist
- Day 2:
  [Is Spec-Driven AI Development the Future?](/blog/spec-driven-ai-development)
  with Guy Podjarny, CEO & Founder of Tessl
- Day 2:
  [Using AI Gateway with LangChain & OpenAI](/blog/ai-gateway-with-langchain)
  with John McBride, Staff Software Engineer at Zuplo
- Day 3:
  [Your AI Models Aren't Learning From Production Data](/blog/comet-ml-opik)
  with Gideon Mendels, CEO & Co-Founder of Comet ML
- Day 3:
  [Using Claude Code with Zuplo's AI Gateway](/blog/ai-gateway-with-claude-code)
  with Martyn Davies, Developer Advocate at Zuplo
- Day 4:
  [What Autonomous Agents Actually Need from Your APIs](/blog/what-autonomous-agents-actually-need-from-your-apis)
  with Emmanuel Paraskakis, CEO of Level250
- Day 4: [Using AI Gateway with goose AI agent](/blog/ai-gateway-with-goose)
  with Martyn Davies, Developer Advocate at Zuplo