Gartner projects that by the end of 2026, 75% of API gateway vendors will integrate MCP features — up from a near-zero baseline in 2024. This isn’t a speculative forecast. It’s an acknowledgment of what’s already underway across the industry.
Every major API management platform — Kong, Gravitee, Tyk, Apigee, Azure API Management — has already shipped or announced MCP support in early 2026. The convergence of API gateways and AI agent infrastructure isn’t a future trend. It’s the present.
The shift is already happening
When Anthropic introduced the Model Context Protocol in late 2024, it gave AI agents a standardized way to discover and invoke external tools. In the months since, adoption has been remarkably fast. OpenAI, Google, and Microsoft have all embraced MCP-style architectures for tool integration. API gateway vendors are following suit because the demand is clear: enterprises need a managed layer between AI agents and the APIs they consume.
Gartner’s related projections reinforce the scale of this shift. They predict that 40% of enterprise applications will integrate task-specific AI agents by 2026, and that by 2028, 70% of software engineering teams building multi-model applications will use AI gateways, up from 25% in 2025. These numbers tell the same story: AI agents are becoming first-class consumers of your APIs, and the infrastructure to manage them needs to keep pace.
Bolting on MCP support isn’t enough
Here’s the part that gets lost in the excitement over vendor announcements: adding MCP compatibility to an existing API gateway doesn’t automatically make it ready for production AI agent traffic. MCP readiness requires more than a protocol adapter.
It requires:
- Authentication propagation — AI agents need to authenticate on behalf of users. Your gateway needs to handle OAuth flows, API key translation, and credential forwarding across MCP server boundaries without exposing secrets to the agent itself.
- Tool-level governance — Not every team should see every tool. Enterprises need the ability to create virtual MCP servers that expose curated subsets of tools per team, role, or application. Finance sees Stripe tools. Engineering sees GitHub tools. Everyone stays productive, and nobody gets access they shouldn’t have.
- Security policies for agent traffic — AI agents interact with APIs differently than humans do. Prompt injection detection, PII redaction, and toxic content filtering need to be applied to MCP interactions just as they would to any API request. A single misconfigured agent can trigger unauthorized actions or leak sensitive data through unmonitored tool calls.
- Observability and audit trails — When an AI agent calls a tool on behalf of a user, you need to know what happened, who authorized it, and what data was involved. Full audit logging across MCP interactions isn’t a nice-to-have — the EU AI Act’s high-risk system requirements take full effect in August 2026, making traceability a compliance necessity.
- Rate limiting and cost control — Unmanaged agent loops can burn through API budgets fast. Rate limiting applied at the MCP gateway layer prevents runaway costs before they start.
- Monetization — As AI agents become primary consumers of your APIs, the ability to meter tool usage and attach billing plans directly to MCP server access becomes a real revenue opportunity. API teams need a gateway that can enforce usage quotas, track consumption per subscriber, and integrate with billing systems without requiring separate infrastructure.
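To make the tool-governance and cost-control points above concrete, here is an illustrative TypeScript sketch of a gateway-side guard for MCP tool calls. This is not Zuplo's API: `TeamPolicy` and `guardToolCall` are hypothetical names, and a production gateway would use distributed rate-limit counters rather than an in-process map.

```typescript
// Hypothetical gateway-side guard for MCP tool calls (illustrative only).

interface TeamPolicy {
  allowedTools: Set<string>; // curated tool subset for this team
  maxCallsPerMinute: number; // cost control for runaway agent loops
}

interface CallState {
  windowStart: number; // start of the current fixed window (ms)
  count: number;       // calls seen in the current window
}

const state = new Map<string, CallState>();

function guardToolCall(
  team: string,
  tool: string,
  policy: TeamPolicy,
  now: number = Date.now(),
): { allowed: boolean; reason?: string } {
  // Tool-level governance: reject tools outside the team's allowlist.
  if (!policy.allowedTools.has(tool)) {
    return { allowed: false, reason: `tool ${tool} not exposed to ${team}` };
  }
  // Fixed-window rate limit: reset the counter once the window expires.
  const s = state.get(team) ?? { windowStart: now, count: 0 };
  if (now - s.windowStart >= 60_000) {
    s.windowStart = now;
    s.count = 0;
  }
  s.count += 1;
  state.set(team, s);
  if (s.count > policy.maxCallsPerMinute) {
    return { allowed: false, reason: "rate limit exceeded" };
  }
  return { allowed: true };
}
```

The same check point is a natural place to emit the audit record (team, tool, decision, timestamp) that the observability requirement calls for.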
Where Zuplo fits
Zuplo has been building at this intersection since before the Gartner prediction made it official.
Our MCP Server Handler lets you transform any API managed through Zuplo into a remote MCP server through straightforward configuration. It reuses your existing OpenAPI definitions, runs through your full policy pipeline (authentication, rate limiting, validation), and deploys globally on the edge — no separate infrastructure needed.
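The "reuse your existing OpenAPI definitions" idea can be illustrated generically. The sketch below is not Zuplo's implementation; the types and the `toolsFromOpenApi` helper are hypothetical, and a real handler would also map paths, request bodies, and responses.

```typescript
// Illustrative only: derive MCP-style tool definitions from OpenAPI operations.

interface OpenApiOperation {
  operationId: string;
  summary?: string;
  parameters?: { name: string; schema: unknown }[];
}

interface McpTool {
  name: string;
  description: string;
  inputSchema: { type: "object"; properties: Record<string, unknown> };
}

function toolsFromOpenApi(ops: OpenApiOperation[]): McpTool[] {
  return ops.map((op) => ({
    // Each OpenAPI operation becomes one callable tool.
    name: op.operationId,
    description: op.summary ?? op.operationId,
    // Operation parameters become the tool's input schema.
    inputSchema: {
      type: "object",
      properties: Object.fromEntries(
        (op.parameters ?? []).map((p) => [p.name, p.schema]),
      ),
    },
  }));
}
```

The point of this shape is that the API spec remains the single source of truth: when the OpenAPI document changes, the exposed tool set changes with it.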
For enterprises managing MCP at scale, our MCP Gateway provides a centralized control plane across all your MCP servers. It handles the hard parts: auth translation between different authentication modes, virtual MCP servers with team-specific tool access, security policies across all MCP traffic, and full observability into every interaction.
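One way to picture the auth-translation piece: the agent authenticates to the gateway with its own credential, and the gateway swaps it for the upstream secret, which the agent never sees. A toy sketch follows; the in-memory `vault` stands in for a real secret store, and all names here are hypothetical rather than Zuplo's API.

```typescript
// Illustrative credential-forwarding shape (not a real gateway API).

interface UpstreamCredential {
  header: string; // header to inject on the upstream request
  value: string;  // secret the agent must never see
}

// Toy stand-in for a secret store, keyed by the validated caller identity.
const vault = new Map<string, UpstreamCredential>([
  ["team-finance", { header: "Authorization", value: "Bearer sk_upstream_finance" }],
]);

function forwardHeaders(
  validatedSubject: string,
  agentHeaders: Record<string, string>,
): Record<string, string> {
  const cred = vault.get(validatedSubject);
  if (!cred) throw new Error(`no upstream credential for ${validatedSubject}`);
  // Strip the agent's own Authorization header, keep everything else,
  // and inject the upstream secret in its place.
  const { Authorization: _agentAuth, ...rest } = agentHeaders;
  return { ...rest, [cred.header]: cred.value };
}
```

The essential property is the boundary: the agent's credential is validated and discarded at the gateway, and only the gateway-held secret travels upstream.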
Zuplo also supports monetized MCP servers — letting you attach billing plans directly to MCP server access. Because Zuplo’s developer portal and monetization layer sit inside the same gateway, you can meter tool calls, enforce subscription tiers, and expose a self-service portal where developers subscribe to your MCP server with a credit card. This makes Zuplo the only API gateway purpose-built to turn an MCP server into a commercial product, not just an internal tool.
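A minimal sketch of what metering tool calls against a billing plan involves (hypothetical names, with in-memory state standing in for a real billing integration):

```typescript
// Illustrative usage meter for monetized MCP tool calls (not Zuplo's API).

interface Plan {
  monthlyQuota: number; // included tool calls per billing period
  pricePerCall: number; // unit price for usage-based billing
}

// Toy stand-in for a metering backend: calls counted per subscriber.
const usage = new Map<string, number>();

function recordToolCall(subscriber: string, plan: Plan): { ok: boolean; used: number } {
  const used = (usage.get(subscriber) ?? 0) + 1;
  usage.set(subscriber, used);
  // Enforce the subscription tier: calls beyond the quota are flagged.
  return { ok: used <= plan.monthlyQuota, used };
}

function invoiceTotal(subscriber: string, plan: Plan): number {
  // Pure usage-based billing: calls × unit price.
  return (usage.get(subscriber) ?? 0) * plan.pricePerCall;
}
```

Because metering happens at the same layer that authenticates and routes the call, every billable event is already attributed to a known subscriber.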
This isn’t MCP support bolted onto an existing product. Zuplo’s programmable, edge-native architecture was designed for exactly this kind of protocol evolution — where new patterns emerge and you need to apply governance, security, access control, and monetization at the infrastructure layer without rebuilding from scratch.
What this means for API teams
If you’re managing APIs today, the Gartner projection confirms what many teams are already experiencing: AI agents are becoming a primary consumer of your APIs, and they need a different kind of management than human developers do.
The organizations that move early on MCP governance — not just MCP compatibility — will have a meaningful advantage. They’ll have the tooling to say yes to agent adoption across their teams while maintaining the security and visibility that enterprise environments demand.
If you want to see what MCP gateway readiness looks like in practice, explore our guide to managing MCP server access at scale or get started with Zuplo’s MCP Gateway.