If you think of your developer portal as a website that humans visit in a browser, you’re only seeing half the picture — literally. AI coding agents now account for nearly half of all traffic to API documentation sites, and that share is growing fast.
Mintlify’s March 2026 analysis of 30 days of traffic across documentation sites they power revealed a striking split: AI agents generate 45.3% of all requests, while traditional browser traffic accounts for 45.8%. The remaining traffic comes from other automated systems. Claude Code alone produced 199.4 million requests during the measurement period — more than Chrome on Windows.
This isn’t a future trend to prepare for. It’s the current reality. And it has practical implications for how you build, structure, and serve your API documentation.
The New Traffic Mix: What the Data Shows
The Mintlify data paints a clear picture of which AI agents are driving this traffic shift and how concentrated the market is.
Claude Code leads with 199.4 million requests, representing 25.2% of total documentation traffic and 55.8% of all AI agent traffic. Cursor follows with 142.3 million requests (18.0% of total, 39.8% of AI traffic). Together, these two tools account for 95.6% of identified AI agent traffic.
The remaining 4.4% of AI agent traffic is distributed across a growing roster of agents (the shares below are of total documentation traffic):
- OpenCode — 7.7 million requests (1.0%)
- Trae (ByteDance) — 4.6 million requests (0.6%)
- ChatGPT — 1.8 million requests (0.2%)
- NotebookLM (Google) — 1.4 million requests (0.2%)
- Manus — 0.5 million requests (0.1%)
The concentration at the top matters for planning purposes, but the long tail matters too. New AI coding tools are launching constantly, and each one is another machine reader hitting your documentation.
How AI Agents Read Your Docs Differently
Understanding why agents consume documentation differently from humans is the key to optimizing for both audiences.
Structure Over Prose
A human developer might skim a page, scan for code examples, and piece together the information they need from context clues and visual hierarchy. An AI agent doesn’t skim. It processes text sequentially, and it depends on explicit structure — headings, code blocks, parameter lists, and consistent formatting — to extract the right information.
When your documentation buries a critical parameter description inside a paragraph of explanatory text, a human can still find it. An agent is more likely to miss it or misinterpret the context. Structured formats like OpenAPI specifications, JSON schemas, and well-organized Markdown give agents the signal-to-noise ratio they need.
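As a quick illustration, compare burying a constraint in prose ("requests accept an optional status filter, which must be pending or shipped") with stating it in a parameter table. The endpoint and fields below are invented:

```markdown
## GET /orders

| Parameter | Type    | Required | Description                                     |
| --------- | ------- | -------- | ----------------------------------------------- |
| status    | string  | No       | Filter by order status: `pending` or `shipped`. |
| limit     | integer | No       | Maximum results per page (default 20, max 100). |
```

A human scans the table just as quickly, and an agent can extract each constraint without inference.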
Machine-Readable Metadata
AI agents benefit enormously from explicit metadata that tells them what a page is about before they read the full content. This is why conventions like `llms.txt` — a machine-readable directory of your documentation placed at your site’s root — are gaining traction. Think of it as a sitemap for AI systems: it lists your documentation pages with descriptions so agents can efficiently locate relevant content without crawling every page.
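A minimal `llms.txt` follows a simple Markdown convention: a title, a one-line summary, and sections of links with descriptions. The site name and URLs below are invented for illustration:

```markdown
# Acme API

> Reference documentation for the Acme payments API.

## Docs

- [Quickstart](https://docs.acme.example/quickstart.md): Create an API key and make a first request
- [Orders](https://docs.acme.example/orders.md): Create, list, and cancel orders

## Optional

- [Changelog](https://docs.acme.example/changelog.md): Release history and deprecations
```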
Context Windows and Token Efficiency
AI agents operate within context windows — fixed limits on how much text they can process at once. When an agent pulls your documentation into its context, every word counts. Dense, well-structured content that communicates information efficiently is far more useful than verbose explanations padded with marketing language.
This has a practical consequence: the same documentation qualities that make your API easy for AI agents to consume — concise descriptions, structured data, clear examples — also make it better for human developers who want answers fast.
Silent Failure
Here’s the most important difference: when a human developer can’t find what they need in your docs, they file a support ticket, ask on a forum, or send an email. When an AI agent fails, it doesn’t tell you. As Mintlify put it, agents just “move on, or worse, make something up.” There’s no bounce rate alert and no angry support ticket — just a developer somewhere who got a wrong answer and blames your product for it.
Optimizing for AI agent consumption isn’t just about traffic statistics. It’s about preventing invisible failures that erode trust in your API.
The Machine-Readable Stack: OpenAPI and MCP
Two standards form the foundation of making your API consumable by AI agents: OpenAPI specifications and the Model Context Protocol (MCP).
OpenAPI as the Source of Truth
Your OpenAPI specification is already the most important artifact in your API ecosystem for machine readers. It provides the structured, schema-defined description of every endpoint, parameter, request body, and response that AI agents need to understand your API without reading prose documentation.
But most OpenAPI specs are incomplete. They define the structure of requests and responses without explaining the semantics — when to use an endpoint, what preconditions apply, what side effects it produces, and how operations relate to each other. An AI agent looking at a bare-bones OpenAPI spec knows that `POST /orders` accepts a JSON body with certain fields, but it doesn’t know when to call it, what business rules apply, or what happens downstream.

The fix is treating your OpenAPI spec as a first-class product. Add detailed `description` fields to every operation that explain intent, not just mechanics. Include request and response examples that show realistic usage. Use `summary` fields with action-oriented language like “Place a new order” rather than “POST order endpoint.”
MCP as the Agent Interface
While OpenAPI describes your API statically, the Model Context Protocol (MCP) gives AI agents a live, structured interface to discover and invoke your API endpoints as tools.
Instead of an agent scraping your documentation page to figure out how to call your API, it can connect to an MCP server that exposes your API operations as callable tools with typed parameters and descriptions. The agent gets exactly the information it needs in a standardized format — no parsing HTML, no guessing at parameter types, no hoping the documentation is up to date.
MCP is particularly powerful because it works at the protocol level. Any AI client that supports MCP — whether it’s Claude Code, Cursor, or the next agent that hasn’t launched yet — can discover and use your API through the same standardized interface.
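In practice, what an agent receives from an MCP server’s `tools/list` call is a set of tool definitions like the one sketched below. The tool itself is hypothetical; the `name`/`description`/`inputSchema` shape follows the MCP convention:

```json
{
  "tools": [
    {
      "name": "create_order",
      "description": "Place a new order for the authenticated customer.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "items": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "sku": { "type": "string" },
                "quantity": { "type": "integer", "minimum": 1 }
              },
              "required": ["sku", "quantity"]
            }
          }
        },
        "required": ["items"]
      }
    }
  ]
}
```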
How They Work Together
OpenAPI and MCP aren’t competing standards. They’re complementary layers:
- OpenAPI is the specification layer — the comprehensive, versioned description of your API’s capabilities
- MCP is the interaction layer — the runtime protocol that lets agents discover and invoke those capabilities
The richest setup is one where your MCP server draws its tool definitions directly from your OpenAPI spec, so the two are always in sync. When you update an endpoint in your OpenAPI specification, the corresponding MCP tool updates automatically.
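The generation step can be sketched in a few lines: walk the spec’s operations and emit an MCP-style tool definition for each one. This is a toy translation (real generators also handle request bodies, auth, and `$ref` resolution), and the sample spec is invented:

```python
def openapi_to_mcp_tools(spec: dict) -> list[dict]:
    """Derive MCP-style tool definitions from an OpenAPI spec dict."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            # Build a JSON Schema for the tool's input from the operation's
            # parameters; request bodies are omitted to keep the sketch short.
            properties = {}
            required = []
            for param in op.get("parameters", []):
                properties[param["name"]] = {
                    "type": param.get("schema", {}).get("type", "string"),
                    "description": param.get("description", ""),
                }
                if param.get("required"):
                    required.append(param["name"])
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("description", op.get("summary", "")),
                "inputSchema": {
                    "type": "object",
                    "properties": properties,
                    "required": required,
                },
            })
    return tools

# Toy spec with a single operation, for illustration only.
spec = {
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "operationId": "getOrder",
                "summary": "Fetch a single order",
                "description": "Retrieve an order by its ID.",
                "parameters": [
                    {"name": "orderId", "in": "path", "required": True,
                     "schema": {"type": "string"},
                     "description": "Unique order identifier."}
                ],
            }
        }
    }
}

tools = openapi_to_mcp_tools(spec)
print(tools[0]["name"])  # getOrder
```

Because the tool definitions are derived rather than hand-written, a change to the spec is automatically a change to the tools.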
Practical Steps to Optimize Your Developer Portal
Here’s what you can do today to make your developer portal work for both human and machine readers.
1. Audit Your OpenAPI Specification
Start by assessing the quality of your OpenAPI spec from an agent’s perspective:
- Does every operation have a meaningful `description` that explains when and why to use it?
- Are all request parameters and response schemas fully defined with types, constraints, and examples?
- Do operation summaries use intent-revealing language?
- Is the spec complete — covering every public endpoint, not just the “important” ones?
If your spec is thin or incomplete, enriching it is the single highest-impact change you can make. Every improvement to your OpenAPI spec benefits your documentation, your developer portal, your API validation, and your MCP tooling simultaneously.
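A first-pass audit along these lines is easy to automate. The sketch below checks only the fields discussed above, run against a deliberately thin sample spec:

```python
def audit_operations(spec: dict) -> list[str]:
    """Flag operations missing the fields machine readers depend on."""
    findings = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            label = f"{method.upper()} {path}"
            if not op.get("description"):
                findings.append(f"{label}: missing description")
            if not op.get("summary"):
                findings.append(f"{label}: missing summary")
            # Parameters without a schema give agents no type information.
            for param in op.get("parameters", []):
                if "schema" not in param:
                    findings.append(
                        f"{label}: parameter '{param['name']}' has no schema"
                    )
    return findings

# A deliberately incomplete spec, for illustration.
spec = {
    "paths": {
        "/orders": {
            "post": {
                "summary": "Place a new order",
                "parameters": [{"name": "idempotencyKey", "in": "header"}],
            }
        }
    }
}

for finding in audit_operations(spec):
    print(finding)
```

Running a check like this in CI keeps the spec from quietly regressing as endpoints are added.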
2. Add Machine-Readable Discovery
Make it easy for AI agents to find and navigate your documentation:
- Add an `llms.txt` file at your site root that maps out your documentation structure with descriptions
- Ensure your OpenAPI spec is available at a predictable URL
- Consider providing a bundled Markdown version of your documentation (`llms-full.txt`) for agents that want to ingest everything at once
3. Expose Your API Through MCP
If your API is used by developers who work with AI coding tools — and increasingly, that’s most developers — exposing it through MCP makes it directly discoverable and invocable from those tools.
This doesn’t require building an MCP server from scratch. Zuplo’s MCP Server Handler automatically transforms your API routes into MCP tools based on your OpenAPI specification. You configure which routes to expose, and the handler takes care of the protocol details — tool discovery, parameter schemas, authentication, and execution.
The configuration has two pieces: per-route settings that mark which operations to expose, and a dedicated MCP server route (typically at `/mcp`) that uses the `mcpServerHandler` and references those operations. See the MCP Server Handler documentation for the complete setup.
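As a rough sketch, the dedicated route in an OpenAPI-based route file might look something like this. The handler shape follows Zuplo’s general route-handler convention, but the option contents here are placeholders, not the handler’s actual schema; the MCP Server Handler documentation has the real configuration:

```json
{
  "/mcp": {
    "post": {
      "x-zuplo-route": {
        "handler": {
          "export": "mcpServerHandler",
          "module": "$import(@zuplo/runtime)",
          "options": {
            "note": "option names are illustrative; see the MCP Server Handler docs"
          }
        }
      }
    }
  }
}
```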
Every route you expose as an MCP tool becomes instantly discoverable by any MCP-compatible AI client, without the agent needing to parse your documentation at all.
4. Structure Content for Dual Audiences
When writing documentation that serves both humans and machines:
- Use consistent heading hierarchies so agents can navigate the document structure
- Put critical information in structured formats — parameter tables, code blocks, and schema definitions rather than inline prose
- Keep explanatory text concise and avoid burying technical details inside narrative paragraphs
- Include complete code examples with language tags, since agents use these as primary reference material
- Use descriptive link text instead of “click here” so agents understand the destination
5. Track and Differentiate Agent Traffic
You can’t optimize what you can’t measure. Start identifying AI agent traffic in your analytics by examining User-Agent strings, and set up separate tracking for machine versus human visitors. This data will tell you which documentation pages agents visit most, where they might be struggling, and how agent consumption patterns differ from human ones.
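A starting point is a coarse User-Agent classifier like the sketch below. The token list is illustrative rather than authoritative, since agents change their User-Agent strings over time; check your own access logs for the exact values:

```python
# Substring tokens that suggest an AI agent; extend from your own logs.
AGENT_TOKENS = ("claude", "cursor", "opencode", "gptbot", "chatgpt")

def classify_user_agent(ua: str) -> str:
    """Label a request 'agent' or 'browser' by User-Agent substring."""
    ua_lower = ua.lower()
    if any(token in ua_lower for token in AGENT_TOKENS):
        return "agent"
    return "browser"

print(classify_user_agent("Claude-Code/1.2"))                          # agent
print(classify_user_agent("Mozilla/5.0 (Windows NT 10.0) Chrome/120")) # browser
```

Tagging requests this way at the edge lets you segment page-level analytics by audience instead of lumping everything together.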
For API traffic itself, API key management becomes critical as agents increasingly consume APIs on behalf of developers. Per-consumer keys with metadata let you distinguish agent-driven requests from human-driven ones, apply appropriate rate limits, and track usage patterns across different consumer types.
How Zuplo’s Developer Portal Serves Both Audiences
Zuplo’s approach to developer portals addresses the dual-audience challenge by design, not as an afterthought.
The Zuplo Developer Portal is auto-generated from your OpenAPI specification and stays in sync as your API evolves. For human developers, this means interactive documentation with an API explorer, request/response schemas, and self-serve API key management. For AI agents, it means the documentation is always backed by a structured, complete OpenAPI spec that machines can parse directly.
The MCP Server Handler takes this a step further by exposing your API routes as MCP tools. AI agents don’t need to scrape your developer portal at all — they can discover and invoke your API through a standardized protocol, with the same authentication and rate limiting policies that apply to regular API traffic.
And because Zuplo’s gateway is the source of truth for both your API routing and your documentation, there’s no drift between what your docs say and what your API actually does. The OpenAPI spec that generates your developer portal is the same spec that defines your gateway routes and powers your MCP server. One spec, three audiences: human developers reading docs, AI agents parsing specifications, and AI clients calling tools through MCP.
The Documentation Imperative
The shift to AI-agent traffic isn’t slowing down. The March 2026 data shows near parity between human and machine readers, and the trajectory suggests agents will become the majority consumer of API documentation within the next year.
This isn’t a reason to panic — it’s a reason to invest in the fundamentals that serve both audiences. A complete OpenAPI specification, well-structured documentation, and machine-readable discovery mechanisms make your API better for everyone. The developer portals that thrive in this new landscape won’t be the ones that bolt on “AI support” as a feature. They’ll be the ones built on structured, spec-driven foundations that naturally serve both humans reading in a browser and agents parsing in a context window.
Start with your OpenAPI spec. Make it complete, descriptive, and accurate. Then layer on machine-readable discovery with `llms.txt` and MCP. Your developer portal is already your API’s front door — it’s time to make sure it opens for every kind of visitor.