
Token-Based Rate Limiting: How to Manage AI Agent API Traffic in 2026

Nate Totten
March 8, 2026
10 min read

Learn why traditional request-based rate limiting fails for AI agents and how to implement token-based rate limiting strategies with practical examples.

If you run an API that serves AI agents or wraps an LLM provider, you’ve probably already noticed: a single AI agent request can cost 100x more than a typical human request, yet traditional rate limiters treat them all the same. One chat completion that burns through 8,000 tokens gets the same “1 request” tick as a lightweight metadata lookup. That gap between what you’re counting and what you’re paying for is exactly where token-based rate limiting comes in.

As AI agents become a dominant source of API traffic — with Gartner predicting that more than 30% of the increase in demand for APIs will come from AI and LLM tools by 2026 — the old approach of “100 requests per minute” is no longer enough. You need rate limits that reflect actual resource consumption: tokens processed, compute time used, and cost incurred.

This guide covers why traditional rate limiting breaks down for AI workloads, how token-based rate limiting works, and how to implement it in practice.

Why Traditional Rate Limiting Fails for AI Traffic

Standard API rate limiting works by counting requests within a time window. If a consumer exceeds their allotted count — say, 100 requests per minute — they get a 429 Too Many Requests response. This model works well when requests have roughly uniform cost, such as CRUD operations on a REST API.
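The fixed-window model described above can be sketched in a few lines. This is an in-memory illustration only (the function name `allowRequest` and the `Map`-based store are assumptions for the sketch); production limiters keep counters in a shared store such as Redis so they work across gateway instances:

```typescript
// Minimal fixed-window request counter (in-memory sketch).
type WindowState = { windowStart: number; count: number };

const windows = new Map<string, WindowState>();

function allowRequest(
  consumerId: string,
  limit: number,
  windowMs: number,
  now: number = Date.now(),
): boolean {
  const state = windows.get(consumerId);
  // Start a fresh window if none exists or the current one has expired
  if (!state || now - state.windowStart >= windowMs) {
    windows.set(consumerId, { windowStart: now, count: 1 });
    return true;
  }
  if (state.count >= limit) {
    return false; // caller should respond with 429 Too Many Requests
  }
  state.count += 1;
  return true;
}
```

Note that `allowRequest` charges every call exactly one unit, regardless of how expensive the request actually is. That uniform-cost assumption is precisely what breaks for AI traffic.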

AI agent traffic breaks this model in several ways.

Wildly Variable Request Cost

Two requests to the same LLM endpoint can differ by orders of magnitude in resource consumption. A prompt with 50 tokens and a prompt with 10,000 tokens both count as “1 request,” but the compute cost, latency, and provider charges are drastically different. If you rate limit purely by request count, a consumer sending a handful of massive prompts can exhaust your LLM budget while staying well under your request-per-minute limit.

Bursty, Non-Deterministic Traffic Patterns

AI agents don’t behave like human users clicking through a UI at a steady pace. An autonomous agent might chain 10-20 sequential API calls to complete a single task — tool lookups, retrieval-augmented generation queries, multi-step reasoning, and final completions — all in a rapid burst. If any call in that chain hits a rate limit, the entire agentic workflow fails. Traditional fixed windows and static thresholds aren’t built for this kind of traffic.

Difficulty Distinguishing Agents from Attacks

AI agent traffic patterns — high volume, bursty, automated — look remarkably similar to DDoS attacks or bot scraping. Without the ability to identify legitimate AI consumers by their API keys and usage patterns, a blunt request-count rate limiter might block your most valuable customers while letting low-volume abusers through unchecked.

Multi-Model, Multi-Provider Complexity

Modern AI applications often route requests across different models (GPT-4, Claude, Gemini, open-source models) based on task complexity. Each model has different token costs and rate limits. A single “requests per minute” policy can’t account for the 5-10x cost difference between a lightweight embedding call and a large-context reasoning request.

Request-Based vs. Token-Based Rate Limiting

The core difference is simple: request-based rate limiting counts API calls, while token-based rate limiting counts resource consumption.

Request-Based Rate Limiting

  • What it counts: Number of HTTP requests in a time window
  • Best for: Traditional REST APIs with uniform request costs
  • Limitation: Treats a 50-token request and a 10,000-token request identically

Token-Based Rate Limiting

  • What it counts: Total tokens (or other resource units) consumed in a time window
  • Best for: LLM APIs, AI gateways, and any API where request cost varies significantly
  • Advantage: Reflects actual resource consumption and cost

With token-based limiting, you might allow a consumer 100,000 tokens per hour instead of 100 requests per minute. A consumer making many small requests can make hundreds of calls, while a consumer sending massive prompts gets appropriately throttled after fewer requests. The limit tracks what actually matters: how much of your compute budget each consumer is using.
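The deduction logic looks almost identical to a request counter, except the charge per request varies. A minimal in-memory sketch (the names `consumeTokens` and `TokenBudget` are illustrative; a real gateway would key a shared store by consumer ID):

```typescript
// Token-budget limiter sketch: deducts the tokens a request actually
// consumed from a per-consumer allowance, instead of counting requests.
type TokenBudget = { windowStart: number; tokensUsed: number };

const budgets = new Map<string, TokenBudget>();

function consumeTokens(
  consumerId: string,
  tokens: number,     // tokens this request consumed
  allowance: number,  // e.g. 100_000 tokens per window
  windowMs: number,
  now: number = Date.now(),
): boolean {
  let budget = budgets.get(consumerId);
  if (!budget || now - budget.windowStart >= windowMs) {
    budget = { windowStart: now, tokensUsed: 0 };
    budgets.set(consumerId, budget);
  }
  if (budget.tokensUsed + tokens > allowance) {
    return false; // over budget: reject or queue the request
  }
  budget.tokensUsed += tokens;
  return true;
}
```

A consumer sending 500-token requests gets many calls per window; one sending 8,000-token prompts exhausts the same allowance far sooner.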

What Counts as a “Token”?

In LLM contexts, you typically track three categories:

  • Prompt tokens (input): The tokens in the user’s request, including system prompts and context
  • Completion tokens (output): The tokens generated by the model in its response
  • Total tokens: The sum of prompt and completion tokens

Most LLM providers return token counts in their response headers or body (e.g., OpenAI’s usage.total_tokens field). Your rate limiter can read these values after each response and deduct them from the consumer’s allowance.
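Extracting those values is straightforward. The sketch below assumes an OpenAI-style `usage` object (`prompt_tokens`, `completion_tokens`, `total_tokens`); other providers report the same data under different field names:

```typescript
// Pull token counts out of an OpenAI-style completion response body.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

function extractUsage(body: { usage?: Partial<Usage> }): Usage {
  const prompt = body.usage?.prompt_tokens ?? 0;
  const completion = body.usage?.completion_tokens ?? 0;
  return {
    prompt_tokens: prompt,
    completion_tokens: completion,
    // Fall back to summing the parts if total_tokens is absent
    total_tokens: body.usage?.total_tokens ?? prompt + completion,
  };
}
```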

For non-LLM APIs, the same concept applies to any variable-cost resource: compute units, file sizes, GPU seconds, or data transfer bytes.

Adaptive Rate Limiting Techniques for AI Traffic

Beyond simply switching from request counting to token counting, AI workloads benefit from more sophisticated approaches.

Dynamic Quotas

Instead of a fixed token allowance, adjust limits based on real-time conditions. During off-peak hours when your LLM provider has available capacity, you might allow higher token limits. During peak demand, limits tighten automatically. This is especially valuable for AI agents that can tolerate some scheduling flexibility.
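A simple version of this scales the base allowance by time of day. The peak window (9:00–18:00 UTC) and the 1.5x off-peak multiplier below are illustrative assumptions; a real implementation might key off provider capacity signals instead:

```typescript
// Dynamic quota sketch: loosen token allowances outside peak hours.
function dynamicTokenAllowance(baseAllowance: number, hourUtc: number): number {
  const isPeak = hourUtc >= 9 && hourUtc < 18; // assumed peak window
  return isPeak ? baseAllowance : Math.floor(baseAllowance * 1.5);
}
```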

Tiered Token Budgets

Different consumers need different token allowances. A free-tier developer experimenting with your API might get 10,000 tokens per day, while an enterprise customer running production AI agents gets 10 million. By tying token budgets to API key metadata (such as subscription tier), you can enforce differentiated limits automatically.

Sliding Windows over Fixed Windows

Fixed-window rate limits create a well-known problem: a consumer can use their entire budget at the boundary between two windows, effectively doubling their allowed rate. Sliding windows smooth out this burst by continuously calculating usage over a rolling time period, which better handles the unpredictable timing of AI agent requests.
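One way to implement this is a sliding-window log: keep timestamped usage entries and sum only those inside the rolling window. This in-memory sketch (the function names are illustrative) trades memory for precision; sliding-window counters approximate the same behavior more cheaply:

```typescript
// Sliding-window log sketch: token usage over a rolling time window.
type UsageEntry = { at: number; tokens: number };

const usageLog = new Map<string, UsageEntry[]>();

function recordUsage(consumerId: string, tokens: number, now: number): void {
  const entries = usageLog.get(consumerId) ?? [];
  entries.push({ at: now, tokens });
  usageLog.set(consumerId, entries);
}

function slidingWindowUsage(
  consumerId: string,
  windowMs: number,
  now: number,
): number {
  const entries = usageLog.get(consumerId) ?? [];
  // Drop entries that have aged out of the rolling window
  const fresh = entries.filter((e) => now - e.at < windowMs);
  usageLog.set(consumerId, fresh);
  return fresh.reduce((sum, e) => sum + e.tokens, 0);
}
```

Because the window slides continuously, a burst at one minute boundary still counts against usage in the next minute, closing the double-budget loophole.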

Cost-Based Limiting

Take token-based limiting a step further by weighting tokens by their actual cost. A completion token from GPT-4 costs significantly more than one from a smaller model. By assigning cost multipliers to different models or operation types, you can implement a single dollar-denominated budget that accurately reflects your provider spend, regardless of which model a consumer uses.
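A cost-weighted budget can be sketched with a per-model price table. The model names and per-1K-token prices below are placeholders, not real provider rates:

```typescript
// Cost-based limiting sketch: convert token usage into dollar-equivalent
// spend so one budget covers every model. Prices are illustrative.
const COST_PER_1K_TOKENS_USD: Record<string, number> = {
  "large-reasoning-model": 0.03,
  "small-chat-model": 0.002,
  "embedding-model": 0.0001,
};

function usageCostUsd(model: string, totalTokens: number): number {
  const rate = COST_PER_1K_TOKENS_USD[model] ?? 0.01; // assumed default rate
  return (totalTokens / 1000) * rate;
}

function withinBudget(
  spentUsd: number,
  requestCostUsd: number,
  budgetUsd: number,
): boolean {
  return spentUsd + requestCostUsd <= budgetUsd;
}
```

Under this scheme, 2,000 tokens on the expensive model cost as much as 30,000 tokens on the cheap one, which is exactly the fairness property a flat token limit lacks.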

Implementing Token-Based Rate Limiting with Zuplo

Zuplo provides multiple built-in mechanisms for implementing token-based rate limiting, from configuration-only policies to fully programmable custom logic. Here’s how to put the concepts above into practice.

Approach 1: Complex Rate Limiting with Token Meters

Zuplo’s Complex Rate Limiting policy is purpose-built for scenarios where request count doesn’t reflect actual cost. Instead of a single requestsAllowed counter, it supports multiple named limits — and you can programmatically control how much each request increments those counters.

Here’s a policy configuration that sets a per-user limit of 50,000 tokens per hour (note: the Complex Rate Limiting policy is available on enterprise plans and is free for development testing):

```json
{
  "name": "token-rate-limit",
  "policyType": "complex-rate-limit-inbound",
  "handler": {
    "export": "ComplexRateLimitInboundPolicy",
    "module": "$import(@zuplo/runtime)",
    "options": {
      "rateLimitBy": "user",
      "timeWindowMinutes": 60,
      "limits": {
        "tokens": 50000
      }
    }
  }
}
```

By itself, this increments the tokens counter by 1 for each request — which isn’t useful yet. The key is pairing it with a custom outbound policy that reads the actual token count from the LLM provider’s response and sets the correct increment:

```typescript
import { ComplexRateLimitInboundPolicy, ZuploContext } from "@zuplo/runtime";

export default async function trackTokenUsage(
  response: Response,
  request: Request,
  context: ZuploContext,
) {
  // Read the token usage from the LLM provider's response
  const body = await response.json();
  const totalTokens = body?.usage?.total_tokens ?? 1;

  // Set the actual token increment for this request
  ComplexRateLimitInboundPolicy.setIncrements(context, {
    tokens: totalTokens,
  });

  // Return the original response to the client
  return new Response(JSON.stringify(body), {
    status: response.status,
    headers: response.headers,
  });
}
```

With this setup, a request that consumes 500 tokens deducts 500 from the consumer’s hourly budget. A request that consumes 8,000 tokens deducts 8,000. The rate limiter now tracks what actually matters.

Approach 2: Quota Policy for Monthly Token Budgets

For longer-term token budgets (daily, weekly, or monthly), Zuplo’s Quota policy with custom meters is the right tool. Unlike rate limiting, which resets on short time windows, quotas track cumulative usage over billing periods.

Configure a monthly token quota:

```json
{
  "name": "monthly-token-quota",
  "policyType": "quota-inbound",
  "handler": {
    "export": "QuotaInboundPolicy",
    "module": "$import(@zuplo/runtime)",
    "options": {
      "period": "monthly",
      "quotaBy": "user",
      "allowances": {
        "prompt_tokens": 500000,
        "completion_tokens": 200000
      }
    }
  }
}
```

Then, in your outbound policy or request handler, set the meter increments based on actual usage:

```typescript
import { QuotaInboundPolicy, ZuploContext } from "@zuplo/runtime";

export default async function trackTokenQuota(
  response: Response,
  request: Request,
  context: ZuploContext,
) {
  const body = await response.json();
  const promptTokens = body?.usage?.prompt_tokens ?? 0;
  const completionTokens = body?.usage?.completion_tokens ?? 0;

  QuotaInboundPolicy.setMeters(context, {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
  });

  return new Response(JSON.stringify(body), {
    status: response.status,
    headers: response.headers,
  });
}
```

This gives you separate tracking for prompt and completion tokens — useful since many LLM providers charge different rates for input and output tokens.

Approach 3: Tiered Rate Limits by Consumer Tier

In most real-world scenarios, you want different consumers to have different limits based on their subscription tier. Zuplo’s dynamic rate limiting makes this straightforward by reading consumer metadata from API keys. This approach works with the standard Rate Limiting policy to set per-tier request allowances, and you can combine it with Approach 1’s setIncrements for true token-based counting.

First, store tier information in your API key consumer metadata. For example, a consumer’s metadata might look like:

```json
{
  "tier": "enterprise",
  "monthlyTokenBudget": 10000000
}
```

Then, write a custom function that reads the tier and returns different rate limit settings:

```typescript
import {
  CustomRateLimitDetails,
  ZuploContext,
  ZuploRequest,
} from "@zuplo/runtime";

export function tierRateLimitKey(
  request: ZuploRequest,
  context: ZuploContext,
  policyName: string,
): CustomRateLimitDetails {
  const tier = request.user?.data?.tier ?? "free";

  // Set different request limits based on consumer tier
  const limits: Record<
    string,
    { requestsAllowed: number; windowMinutes: number }
  > = {
    free: { requestsAllowed: 20, windowMinutes: 60 },
    pro: { requestsAllowed: 200, windowMinutes: 60 },
    enterprise: { requestsAllowed: 2000, windowMinutes: 60 },
  };

  const config = limits[tier] ?? limits.free;

  return {
    key: request.user?.sub ?? "anonymous",
    requestsAllowed: config.requestsAllowed,
    timeWindowMinutes: config.windowMinutes,
  };
}
```

Wire it up in your policies.json by setting rateLimitBy to "function" and pointing the identifier to your module:

```json
{
  "name": "tiered-rate-limit",
  "policyType": "rate-limit-inbound",
  "handler": {
    "export": "RateLimitInboundPolicy",
    "module": "$import(@zuplo/runtime)",
    "options": {
      "rateLimitBy": "function",
      "requestsAllowed": 20,
      "timeWindowMinutes": 60,
      "identifier": {
        "module": "$import(./modules/rate-limiter)",
        "export": "tierRateLimitKey"
      }
    }
  }
}
```

This sets per-tier request allowances based on API key metadata. For full token-based dynamic limits, layer this alongside the Complex Rate Limiting approach from Approach 1 — use this policy for request-count guardrails and the complex policy for actual token consumption tracking.

Best Practices for Managing AI Agent Quotas

Successfully implementing token-based rate limiting requires more than just swapping your counter from requests to tokens. Here are practical guidelines for getting it right.

Separate AI and Human Traffic

Use API key authentication to identify which consumers are AI agents versus human users. Tag API keys with metadata indicating the consumer type, then apply different rate limiting policies to each. Human consumers might still use request-based limits, while AI agent keys get token-based limits.
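The routing decision itself can be as simple as a lookup on key metadata. The metadata shape (`consumerType`) and return values below are assumptions for illustration:

```typescript
// Sketch: choose a limiting strategy from API-key metadata.
type ConsumerType = "human" | "ai-agent";

function policyFor(
  metadata: { consumerType?: ConsumerType },
): "request-based" | "token-based" {
  // Default unknown consumers to the conservative request-based limit
  return metadata.consumerType === "ai-agent" ? "token-based" : "request-based";
}
```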

Layer Multiple Limits

Don’t rely on a single rate limit. Combine short-term rate limits (tokens per minute) with long-term quotas (tokens per month) to handle both burst protection and budget enforcement. Zuplo supports multiple rate limiting policies on the same route — apply the longest duration window first, followed by shorter windows.

Return Token Usage in Response Headers

Help your AI agent consumers manage their own usage by returning token consumption data in response headers. This follows the emerging RateLimit header standard and lets well-behaved agents throttle themselves before hitting hard limits.
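A minimal sketch of attaching these headers in an outbound handler, using the `RateLimit-Limit` / `RateLimit-Remaining` / `RateLimit-Reset` names from the draft IETF RateLimit header fields (the helper name and parameters are assumptions; adapt the header names to whatever convention your consumers expect):

```typescript
// Attach token-usage headers so well-behaved agents can self-throttle.
function withRateLimitHeaders(
  response: Response,
  limit: number,        // total tokens allowed in the window
  remaining: number,    // tokens left in the current window
  resetSeconds: number, // seconds until the window resets
): Response {
  const headers = new Headers(response.headers);
  headers.set("RateLimit-Limit", String(limit));
  headers.set("RateLimit-Remaining", String(Math.max(0, remaining)));
  headers.set("RateLimit-Reset", String(resetSeconds));
  return new Response(response.body, { status: response.status, headers });
}
```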

Monitor and Alert on Token Consumption

Token-based limits make cost anomalies more visible. Set up alerts for consumers whose token usage spikes unexpectedly — it might indicate a runaway agent loop, a prompt injection attack, or simply a customer that needs a higher tier. You can export usage data to your monitoring and analytics platform to track token consumption patterns and identify optimization opportunities.

Plan for Graceful Degradation

When an AI agent hits its token limit, provide a clear, structured error response that the agent can parse and handle programmatically. Include the limit, current usage, and reset time so the agent can queue or retry intelligently rather than failing silently. Zuplo’s custom 429 response example shows how to return detailed rate limit information using the RFC 7807 problem details format.
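A sketch of such a structured 429 response in the RFC 7807 problem-details format. The problem `type` URI and the extension field names (`limit`, `used`, `resetSeconds`) are illustrative placeholders:

```typescript
// Structured 429 body an agent can parse and act on programmatically.
interface TokenLimitProblem {
  type: string;
  title: string;
  status: number;
  detail: string;
  limit: number;
  used: number;
  resetSeconds: number;
}

function tokenLimitExceededResponse(
  limit: number,
  used: number,
  resetSeconds: number,
): Response {
  const problem: TokenLimitProblem = {
    type: "https://example.com/problems/token-limit-exceeded", // placeholder URI
    title: "Token limit exceeded",
    status: 429,
    detail: `Consumed ${used} of ${limit} tokens; retry after ${resetSeconds}s.`,
    limit,
    used,
    resetSeconds,
  };
  return new Response(JSON.stringify(problem), {
    status: 429,
    headers: {
      "Content-Type": "application/problem+json",
      "Retry-After": String(resetSeconds),
    },
  });
}
```

The `Retry-After` header duplicates `resetSeconds` so that both header-only clients and body-parsing agents can back off correctly.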

Consider Cost-Based Budgets for Multi-Model Routing

If your API routes requests to different models based on complexity, a flat token-per-minute limit may still be unfair. A consumer using a cheaper model shouldn’t be penalized at the same rate as one using a premium model. Assign cost weights per model and track spending in dollar-equivalent units rather than raw token counts.

The Bigger Picture: API Gateways as AI Gateways

The shift from request-based to token-based rate limiting is part of a larger transformation. Traditional API gateways focused on routing, authentication, and request-count limits. In 2026, the same gateways need to understand non-human consumers, enforce token-based limits, monitor agent behavior, and apply intelligent policies to AI-driven traffic.

Zuplo’s AI Gateway takes this further with built-in support for multi-provider LLM routing, hierarchical cost budgets, semantic caching, and prompt injection detection — all running at the edge across 300+ data centers. Whether you’re wrapping an LLM provider for external consumers or managing internal AI agent access, the gateway layer is where token-based rate limiting, cost control, and AI-specific security converge.

The APIs of 2026 aren’t just serving applications built by humans. They’re serving autonomous agents that consume resources in fundamentally different ways. Token-based rate limiting is how you keep those agents productive without letting them run up your bill.

Ready to implement token-based rate limiting for your AI traffic? Sign up for a free Zuplo account and start configuring token-aware policies in minutes — no infrastructure required.