Every API gateway vendor now claims to be an AI gateway. In the past two weeks alone, Kong launched Agent Gateway 3.14 with A2A support, Gravitee shipped 4.11 with MCP analytics dashboards, Tyk open-sourced AI Studio, and Apigee expanded its managed MCP server offering. If you are evaluating AI gateway solutions in 2026, you are navigating a crowded field of marketing claims.
This guide cuts through the noise. We compare five platforms — Zuplo, Kong, Gravitee, Tyk, and Apigee — across the capabilities that actually matter for AI workloads: MCP support, A2A protocol readiness, LLM proxy features, agent governance, developer experience, and pricing. Every claim is based on publicly documented features, not roadmap promises.
Why Every API Gateway Claims to Be an AI Gateway
The convergence is real. AI traffic now flows through the same infrastructure as traditional API traffic, and teams need authentication, rate limiting, and observability on both. The difference is in the approach: some platforms have purpose-built architectures for AI workloads, while others have bolted AI features onto existing gateway infrastructure as plugins or add-on modules.
That architectural distinction matters. Purpose-built AI gateway features tend to be deeper, better integrated, and easier to operate. Bolt-on approaches can create configuration complexity, feature gaps, and upgrade friction when the underlying gateway and the AI layer evolve at different speeds.
MCP Support Compared
The Model Context Protocol is the standard interface for AI agents to discover and call tools. MCP support at the gateway level determines how your organization manages, secures, and observes agent-to-tool communication.
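Under the hood, MCP frames this communication as JSON-RPC 2.0: an agent first asks a server which tools it exposes, then invokes one by name with structured arguments. A minimal sketch of the two core client messages (the tool name and arguments below are hypothetical examples, not part of any real server):

```typescript
// Minimal MCP client messages, using the JSON-RPC 2.0 framing MCP specifies.
// The tool name and arguments below are hypothetical.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

let nextId = 0;

// Ask the server which tools it exposes.
function listTools(): JsonRpcRequest {
  return { jsonrpc: "2.0", id: ++nextId, method: "tools/list" };
}

// Invoke one tool by name with structured arguments.
function callTool(name: string, args: Record<string, unknown>): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id: ++nextId,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = callTool("get_invoice", { invoiceId: "inv_123" });
```

A gateway sitting in this path sees every `tools/list` and `tools/call` message, which is what makes centralized authentication, filtering, and audit logging possible.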
Zuplo
Zuplo offers the most comprehensive MCP support of the five platforms, with a dedicated MCP Gateway project type. This is more than an MCP proxy: it is a full governance layer for MCP servers across your organization.
- Centralized MCP server management — Register both internal and third-party MCP servers in one place. Create virtual MCP servers that expose only selected tools per team.
- RBAC per team — Segment which teams can access which MCP tools. Finance sees financial tools, engineering sees infrastructure tools.
- Audit logging — Centralized logs for every MCP interaction, giving compliance teams full visibility.
- MCP Server Handler — The API Gateway project type includes a built-in MCP Server Handler that automatically exposes your API endpoints as MCP tools without building a separate server.
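The idea behind auto-exposing API endpoints as MCP tools reduces to a mapping from API operation metadata to MCP tool definitions. The sketch below is an illustrative reduction of that mapping, not Zuplo's implementation; the operation shape is a simplified stand-in for an OpenAPI operation:

```typescript
// Illustrative only: how a gateway might derive an MCP tool definition
// from OpenAPI-style operation metadata. The field names on the output
// mirror the MCP tool schema (name, description, inputSchema).
interface ApiOperation {
  operationId: string;
  summary: string;
  parameters: { name: string; type: string; required: boolean }[];
}

interface McpTool {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string }>;
    required: string[];
  };
}

function toMcpTool(op: ApiOperation): McpTool {
  const properties: Record<string, { type: string }> = {};
  const required: string[] = [];
  for (const p of op.parameters) {
    properties[p.name] = { type: p.type };
    if (p.required) required.push(p.name);
  }
  return {
    name: op.operationId,
    description: op.summary,
    inputSchema: { type: "object", properties, required },
  };
}

// Hypothetical operation, for illustration.
const tool = toMcpTool({
  operationId: "getOrder",
  summary: "Fetch an order by id",
  parameters: [{ name: "orderId", type: "string", required: true }],
});
```

Because the tool definition is derived from the same spec that drives the API, the two cannot drift apart, which is the main operational benefit of this pattern.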
Kong
Kong added MCP proxy capabilities in Gateway 3.12 (October 2025) and expanded them in 3.14. The approach is plugin-based: you add the MCP proxy plugin to your Kong Gateway routes.
- MCP proxy plugin — Routes MCP traffic through Kong with authentication and rate limiting applied via the standard plugin chain.
- Token exchange — 3.14 added support for swapping JWT tokens before accessing MCP servers, with scope-based tool filtering.
- Enterprise MCP gateway — Kong offers a separate MCP gateway product beyond the proxy plugin, but it requires Konnect enterprise licensing.
Gravitee
Gravitee 4.11 introduced dedicated MCP analytics and an upgraded MCP Resource Server.
- MCP analytics dashboard — Real-time metrics for MCP request counts, gateway latency (p90 and p99), method distribution, and top tools by usage.
- MCP Resource Server v2 — Upgraded from proof-of-concept to enterprise-grade with full Client Credentials flows, Bearer token introspection, and client secret management.
- AI-powered PII filtering — A new policy that automatically detects and redacts personally identifiable information flowing through MCP tools.
- Requires Enterprise Edition with the AI Agent Management pack.
Tyk
Tyk AI Studio supports MCP toolchains as part of its broader AI governance architecture.
- MCP toolchain integration — AI Studio’s plugin ecosystem supports connecting MCP servers and domain-specific tooling.
- Extensible gateway — Custom pre- and post-processing plugins can be built for MCP traffic.
- AI Studio went open source in March 2026, so the MCP capabilities are available in the Community Edition.
Apigee
Apigee takes a unique approach: it auto-generates MCP servers from your existing API specifications.
- Zero-code MCP generation — Point Apigee at your API spec and it creates a managed MCP server automatically. No code changes, no separate server to deploy.
- OAuth 2.1 and OIDC — MCP endpoints support modern authentication standards out of the box.
- Cloud Data Loss Prevention integration — Classify and protect sensitive data flowing through MCP tools using Google Cloud’s DLP service.
- API Hub discovery — MCP tools are discoverable through Apigee API Hub alongside your traditional APIs.
- Locked to Google Cloud infrastructure.
A2A Protocol Support
The Agent-to-Agent (A2A) protocol enables AI agents to discover, negotiate with, and delegate tasks to each other over standard HTTP and JSON-RPC. A2A is newer than MCP and support is still emerging across the industry.
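Concretely, an A2A agent publishes an "agent card" describing its skills, and peers delegate work to it via JSON-RPC over HTTP. The sketch below shows both shapes; the well-known card path and the `message/send` method name follow the public A2A spec at the time of writing, but the protocol is young, so treat both as assumptions and check the current spec:

```typescript
// Sketch of the two A2A shapes described above: the agent card an agent
// publishes, and a JSON-RPC message delegating work to it. The
// /.well-known path and "message/send" method name are taken from the
// public A2A spec and should be verified against the current version.
interface AgentCard {
  name: string;
  url: string; // where the agent accepts JSON-RPC requests
  skills: { id: string; description: string }[];
}

// Agent cards are conventionally served from a well-known path.
function agentCardUrl(agentBase: string): string {
  return new URL("/.well-known/agent.json", agentBase).toString();
}

// Delegate a task to another agent as a JSON-RPC call.
function sendMessage(id: number, text: string) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "message/send",
    params: { message: { role: "user", parts: [{ kind: "text", text }] } },
  };
}
```

Because this is all plain HTTP and JSON-RPC, any gateway can proxy it; protocol-aware gateways add value by parsing these shapes rather than treating them as opaque payloads.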
Kong — Most Mature
Kong 3.14 is the furthest ahead on A2A. The release includes a dedicated Agent Gateway with:
- AI A2A Proxy plugin — Detects and processes A2A requests using both JSON-RPC and REST protocol bindings. Rewrites agent card URLs to the gateway address.
- A2A observability — Prometheus and OpenTelemetry metrics with A2A-specific tracing spans. Konnect analytics support for A2A method calls, latencies, and task states.
- Structured A2A logging — Captures payloads and statistics on every A2A interaction, surfaced through Kong’s standard log plugins.
- Centralized A2A governance — Authentication, authorization, and rate limiting on all A2A traffic with tamper-evident audit trails.
Gravitee — Dedicated API Type
Gravitee 4.11 introduced a dedicated V4 API type for A2A communication.
- A2A API type — A standalone reactor architecture with HTTP selectors for flow routing and support for request and response flow phases.
- Token Exchange (RFC 8693) — Enables secure delegation between users and AI agents without impersonation, preserving a traceable delegation chain.
- Requires Enterprise Edition with the AI Agent Management pack.
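RFC 8693 Token Exchange is a plain OAuth token-endpoint call. The sketch below builds the request body for exchanging a user's token for one an agent can present downstream; the `grant_type` and token-type URNs come from the RFC itself, while the token values are placeholders:

```typescript
// Build an RFC 8693 token-exchange request body. The grant_type and
// token-type URNs are defined by the RFC; token values are placeholders.
function tokenExchangeBody(opts: {
  subjectToken: string;  // the end user's token
  actorToken?: string;   // the agent acting on the user's behalf
  audience?: string;     // the downstream service the new token is for
}): URLSearchParams {
  const body = new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token: opts.subjectToken,
    subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
  });
  if (opts.actorToken) {
    // The actor token is what makes this delegation rather than
    // impersonation: the issued token records both parties.
    body.set("actor_token", opts.actorToken);
    body.set("actor_token_type", "urn:ietf:params:oauth:token-type:access_token");
  }
  if (opts.audience) body.set("audience", opts.audience);
  return body;
}
```

The resulting token carries the delegation chain, so downstream services and audit logs can distinguish "agent acting for user" from "user" alone.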
Tyk, Apigee, and Zuplo
Tyk, Apigee, and Zuplo do not have native A2A protocol support as of April 2026. Since A2A is built on standard HTTP and JSON-RPC, A2A traffic can be proxied through any of these gateways with standard routing, authentication, and rate limiting applied — but without protocol-aware features like A2A-specific observability or agent card management.
LLM Proxy and Cost Controls
Every platform in this comparison can sit between your application and LLM providers. The differences are in routing intelligence, cost controls, and caching.
Zuplo
Zuplo’s AI Gateway is a dedicated project type for LLM traffic management.
- Multi-provider routing — Route to OpenAI, Anthropic, Google, Mistral, and other OpenAI-compatible providers from a single endpoint with automatic failover.
- Token-based rate limiting — Cap tokens per user, per application, or per time window. This goes beyond request counting to give you granular cost control.
- Semantic caching — Detects semantically similar prompts and returns cached responses, reducing both cost and latency.
- Budget enforcement — Set monthly spend limits with automatic cutoffs. Track usage by team for cost attribution.
- Runs at the edge across 300+ data centers, so caching and rate limiting happen close to your users.
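The core of any semantic cache is an embedding-similarity check: embed the incoming prompt, compare it against cached prompts, and serve the stored response when similarity clears a threshold. A provider-agnostic sketch of that loop (the embedding function is a stub; real gateways call a hosted embedding model, and the threshold is illustrative):

```typescript
// Provider-agnostic semantic cache sketch. `embed` is a stand-in for a
// real embedding model; cosine similarity decides cache hits.
type Entry = { vector: number[]; response: string };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class SemanticCache {
  private entries: Entry[] = [];

  constructor(
    private embed: (text: string) => number[],
    private threshold = 0.95, // illustrative cutoff
  ) {}

  // Return a cached response if any stored prompt is similar enough.
  get(prompt: string): string | undefined {
    const v = this.embed(prompt);
    let best: Entry | undefined;
    let bestScore = -1;
    for (const e of this.entries) {
      const s = cosine(v, e.vector);
      if (s > bestScore) { bestScore = s; best = e; }
    }
    return best !== undefined && bestScore >= this.threshold
      ? best.response
      : undefined;
  }

  set(prompt: string, response: string): void {
    this.entries.push({ vector: this.embed(prompt), response });
  }
}
```

Each cache hit avoids an upstream LLM call entirely, which is why running this check at the edge cuts both cost and latency.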
Kong
Kong AI Gateway includes a comprehensive LLM proxy; 3.14 added dynamic model routing and precision token rate limiting.
- Dynamic model routing — Route to different providers based on cost, latency, or capability. Supports Databricks, DeepSeek, and vLLM alongside major providers.
- Token rate limiting — Precision token-level rate limits added in 3.14.
- Semantic caching — Available through the AI proxy plugin chain.
- Custom guardrails plugin — 3.14 added integration with third-party guardrail services.
- Requires Konnect platform for full AI Gateway features.
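Token-level rate limiting differs from request counting in what it meters: the LLM tokens a caller consumes, not how many calls they make. A minimal fixed-window sketch of the idea (the window length and limits are illustrative, and production limiters are typically distributed):

```typescript
// Fixed-window rate limiter metered in LLM tokens rather than requests.
// Window length and limits are illustrative; real gateways coordinate
// this state across nodes.
class TokenRateLimiter {
  private windows = new Map<string, { windowStart: number; tokens: number }>();

  constructor(private maxTokens: number, private windowMs: number) {}

  // Returns true if `tokens` more tokens fit in the caller's current window.
  allow(caller: string, tokens: number, now = Date.now()): boolean {
    const w = this.windows.get(caller);
    if (!w || now - w.windowStart >= this.windowMs) {
      this.windows.set(caller, { windowStart: now, tokens: 0 });
    }
    const state = this.windows.get(caller)!;
    if (state.tokens + tokens > this.maxTokens) return false;
    state.tokens += tokens;
    return true;
  }
}
```

Because one request can consume a few tokens or tens of thousands, metering in tokens tracks actual spend far more closely than request counts do.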
Gravitee
Gravitee’s LLM capabilities are part of the broader APIM platform with dedicated analytics.
- LLM analytics dashboard — Real-time token usage and cost tracking in the APIM Console.
- AI-powered PII filtering — Automatically redacts sensitive data in both prompts and responses.
- Token Exchange for agent delegation — RFC 8693 support for secure delegation chains.
- Enterprise Edition required for AI capabilities.
Tyk
Tyk AI Studio provides a full-featured LLM governance layer.
- Multi-vendor routing — Policy-based model selection across OpenAI, Anthropic, Mistral, Vertex, Gemini, Ollama, and private models with automatic failover.
- Token-level metering — Attribution to teams, projects, and applications with hard spend caps.
- PII redaction and content filtering — Enforced at the gateway level.
- Cost-to-quality optimization — Routing strategies that automatically balance cost against quality.
- Community Edition (open source) available since March 2026; enterprise features require paid tier.
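Cost-to-quality routing, stripped to its essence, means picking the cheapest model that clears a quality bar. A toy policy sketch of that decision (all model names, prices, and quality scores below are made up for illustration):

```typescript
// Toy routing policy: cheapest model whose quality score clears the bar.
// Every model, price, and score here is hypothetical.
interface Model {
  name: string;
  costPer1kTokens: number; // USD, illustrative
  quality: number;         // normalized 0..1 benchmark score, illustrative
}

function route(models: Model[], minQuality: number): Model | undefined {
  return models
    .filter((m) => m.quality >= minQuality)
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens)[0];
}

const fleet: Model[] = [
  { name: "small-fast", costPer1kTokens: 0.1, quality: 0.7 },
  { name: "mid",        costPer1kTokens: 0.5, quality: 0.85 },
  { name: "frontier",   costPer1kTokens: 3.0, quality: 0.95 },
];
```

Real implementations replace the static quality score with per-task signals, but the shape of the decision (filter by requirement, then minimize cost) stays the same.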
Apigee
Apigee’s AI capabilities are deeply integrated with Google Cloud services.
- Model Armor — Guards against prompt injection and jailbreaking attempts.
- Cloud DLP integration — Classify and protect sensitive data in AI traffic.
- Google Cloud-native observability — Metrics and logging through Google Cloud Monitoring and Cloud Logging.
- Limited to Google Cloud infrastructure and models available through Google’s ecosystem.
Agent Governance and Security
AI workloads introduce new security challenges: prompt injection, secret leakage, unauthorized tool access, and runaway costs. Here is how each platform addresses agent governance.
Zuplo
Zuplo treats AI security as a first-class concern across all three project types.
- Prompt Injection Detection — Uses a tool-calling LLM with a small, fast agentic workflow to detect poisoned or injected prompts in API responses. Available as a policy on any route.
- Secret Masking — Automatically detects and redacts secrets (API keys, tokens, credentials) in outbound responses before they reach downstream consumers or AI agents.
- MCP tool governance — The MCP Gateway enforces RBAC per team, controlling which agents can access which tools.
- Budget enforcement — Spend limits and token caps prevent runaway costs from agent loops.
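Secret masking of the kind described above is, at its core, pattern-based redaction over the response body. A deliberately simplified sketch (the two patterns below cover only common key formats and are far from exhaustive; production scanners use many more patterns plus entropy heuristics):

```typescript
// Simplified secret-masking pass over an outbound response body.
// Only two illustrative patterns; real scanners use far more, plus
// entropy-based detection for unrecognized formats.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/g,                                // OpenAI-style API keys
  /eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/g,  // JWTs
];

function maskSecrets(body: string): string {
  let out = body;
  for (const pattern of SECRET_PATTERNS) {
    out = out.replace(pattern, "[REDACTED]");
  }
  return out;
}
```

Running this on every outbound response matters more in agent architectures than in traditional APIs, because a leaked credential does not just sit in a log: an agent may echo it into prompts, tool calls, or other systems.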
Kong
Kong’s governance is distributed across the plugin chain and the new Agent Gateway.
- A2A audit trails — Tamper-evident logging of every A2A RPC call including caller identity, capabilities invoked, and outcomes.
- Custom guardrails — 3.14 added a plugin for integrating third-party guardrail services.
- Token exchange — Scope-based tool filtering ensures agents only access authorized MCP tools.
- Full governance features require Konnect enterprise licensing.
Gravitee
Gravitee’s governance features are spread across its Agent Mesh architecture.
- AI-powered PII filtering — Bidirectional PII detection and redaction.
- MCP Resource Server v2 — Enterprise-grade authentication with client credentials flows and certificate management.
- Secure agent delegation — RFC 8693 Token Exchange preserves delegation chains without impersonation.
- Centralized policy engine — Shared across APIs and agent communications.
- Requires Enterprise Edition with AI Agent Management pack.
Tyk
Tyk AI Studio focuses on organizational governance.
- Complete audit trails — Logging of prompts, responses, and tool calls.
- PII redaction — Content filtering enforced at the gateway.
- Scoped tool access — RBAC with enterprise SSO integration.
- Hard spend caps — Per-team and per-application budget enforcement.
- Enterprise SSO and advanced RBAC require the paid tier.
Apigee
Apigee leverages Google Cloud’s security stack.
- Model Armor — Prompt injection and jailbreak detection.
- Cloud DLP — PII classification and protection integrated at the gateway.
- OAuth 2.1 for MCP — Modern authentication standards on all MCP endpoints.
- IAM integration — Fine-grained permissions through Google Cloud IAM.
- Locked to Google Cloud security and compliance certifications.
Developer Experience
The day-to-day experience of configuring, deploying, and debugging your AI gateway matters as much as the feature list. Here is where the platforms diverge significantly.
Zuplo
- TypeScript programmability — Write custom policies, handlers, and middleware in TypeScript with full IDE support and access to the npm ecosystem.
- GitOps-native — Your Git repository is the source of truth. Every push deploys automatically. Every pull request gets a live preview environment.
- Sub-minute deployments — Deploy globally to 300+ data centers in seconds.
- Three project types — Create separate projects for API Gateway, AI Gateway, and MCP Gateway workloads, each with purpose-built configuration and policies.
Kong
- Lua plugin ecosystem — Kong’s primary extension language is Lua, with support for Go, Python, and JavaScript plugins.
- decK CLI — Declarative configuration management through a CLI tool. In database mode, the database is the operational source of truth. Kong also supports a DB-less mode where declarative config files can be Git-managed.
- Konnect control plane — Centralized management UI for Kong Gateway instances.
- Infrastructure overhead — Requires managing NGINX, PostgreSQL, and data plane nodes even with the managed Konnect offering.
Gravitee
- Java-based platform — Built on Java, so self-hosted deployments mean managing JVM infrastructure.
- Policy Studio — Visual policy editor in the APIM Console.
- Template-based dashboards — Quick deployment of monitoring views for different API types including MCP and LLM.
- Enterprise Edition required — AI and agent management features are gated behind the enterprise tier.
Tyk
- Go-based gateway — Written in Go for performance, extended through Go plugins or JavaScript middleware.
- Open-source AI Studio — The Community Edition is open source as of March 2026, lowering the barrier to entry.
- Plugin ecosystem — Extensible architecture for custom model selection, guardrails, and processing logic.
- Self-hosted focus — Tyk Cloud exists but the platform is primarily designed for self-hosted deployment.
Apigee
- Google Cloud-native — Deeply integrated with GCP services, IAM, and monitoring.
- Zero-code MCP generation — The standout developer experience feature: point Apigee at an API spec and it generates a managed MCP server automatically.
- Steep learning curve — Powerful but verbose, with significant complexity for teams not already fluent in Google Cloud.
- Google Cloud lock-in — Available only on Google Cloud infrastructure.
Deployment Model and Edge Performance
Where your gateway runs affects latency, compliance, and operational complexity.
Zuplo
- Managed Edge — Serverless deployment across 300+ global data centers with near-zero cold starts. AI gateway features including semantic caching run at the edge.
- Managed Dedicated — Dedicated environments in your choice of cloud and region for data residency requirements.
- Self-Hosted — Run Zuplo on your own infrastructure when required.
Kong
- Self-hosted — Deploy anywhere: on-premises, any cloud, or Kubernetes. You manage the infrastructure.
- Konnect Cloud — Hybrid model where Kong manages the control plane but you run data plane nodes in your infrastructure.
- No edge-native deployment. Global distribution requires multi-region cluster management.
Gravitee
- Self-hosted — Open-source Community Edition runs on your infrastructure.
- Managed — Fully managed plans starting at $2,500 per month.
- Hybrid — Mix of managed control plane and self-hosted gateways.
- Java infrastructure requirements for self-hosted deployments.
Tyk
- Self-hosted — Primary deployment model. Deploy the Go-based gateway on your own infrastructure.
- Tyk Cloud — Managed offering available but the platform is designed primarily for self-hosted use.
- No edge-native deployment.
Apigee
- Google Cloud managed — Fully managed by Google. Available for Subscription, Pay-as-you-go, and Evaluation organizations.
- Data Residency — Supports Data Residency-enabled organizations within Google Cloud.
- Locked to Google Cloud regions.
Pricing Compared
Transparent pricing lets you model costs before committing. Here is how each platform approaches pricing for AI gateway capabilities.
Zuplo — Free tier with 100K requests per month, no credit card required. Builder plan at $25 per month with scalable request limits. Enterprise plans start at $1,000 per month annually. AI gateway and MCP gateway project types are available on every plan. Core security policies are included; some AI-specific policies like Prompt Injection Detection require an enterprise plan for production use but are free to try on any plan during development.
Kong — AI Gateway features require the Konnect platform. Konnect uses consumption-based pricing (per-service plus per-request) that can escalate quickly at volume. Agent Gateway and advanced AI features require enterprise licensing. No public pricing for AI-specific capabilities.
Gravitee — Open-source Community Edition is free but does not include AI or agent management features. Managed Starter Edition at $2,500 per month. Enterprise Edition with AI Agent Management pack requires custom pricing through sales.
Tyk — AI Studio Community Edition is open source and free to self-host. Enterprise features (SSO, advanced RBAC, dedicated support) require paid licensing with custom pricing. Self-hosted means you cover your own infrastructure costs.
Apigee — Available on Subscription, Pay-as-you-go, and Evaluation organizations within Google Cloud. Pricing is consumption-based and intertwined with broader Google Cloud billing. MCP features are included but Model Armor and Cloud DLP may incur additional Google Cloud service fees.
Decision Framework: When to Choose Each Platform
Choose Zuplo when you want a unified platform with the fastest path to production
Zuplo is the right choice if you need API management, AI gateway, and MCP governance in a single platform. The three purpose-built project types mean you are not fighting a monolithic gateway to do three different jobs. The free tier, sub-minute deployments, and TypeScript programmability make it the fastest path from zero to production AI gateway — deployed globally with built-in security and transparent pricing.
Best for: Startups shipping AI features, teams consolidating from multiple gateway tools, and organizations that want edge-native performance without infrastructure overhead.
Choose Kong when A2A protocol support is your top priority
Kong 3.14 has the most mature A2A implementation with a dedicated Agent Gateway, A2A-specific observability, and structured logging for agent-to-agent interactions. If your architecture relies heavily on autonomous agents communicating with each other, Kong’s A2A capabilities are ahead of the field.
Best for: Enterprises building multi-agent architectures where A2A governance and observability are critical requirements.
Choose Gravitee when you need unified API and event stream management
Gravitee’s Agent Mesh architecture spans APIs, events, and agents in a single platform. The 4.11 release adds strong MCP analytics and enterprise-grade MCP security. If you manage Kafka event streams alongside APIs and agent traffic, Gravitee’s unified approach reduces tooling sprawl.
Best for: Organizations managing APIs, event streams, and agent traffic that want a single governance layer across all three.
Choose Tyk when you need an open-source AI governance layer
Tyk AI Studio going open source in March 2026 makes it the most accessible self-hosted AI governance option. The extensible plugin architecture and community edition give you a foundation to build on without vendor lock-in. If your team has strong Go expertise and prefers self-hosted infrastructure, Tyk is a solid foundation.
Best for: Teams with DevOps capabilities that want self-hosted AI governance with open-source transparency.
Choose Apigee when you are committed to Google Cloud
Apigee’s zero-code MCP generation and deep Google Cloud integration make it the natural fit for GCP-centric organizations. The ability to turn existing API specs into managed MCP servers without writing code is a genuine differentiator. If your compliance requirements mandate Google Cloud and your team is fluent in GCP services, Apigee minimizes integration friction.
Best for: Google Cloud-native organizations that want managed MCP servers with minimal development effort.
Purpose-Built vs Bolt-On: Architectural Approaches Compared
Most competitors follow the same pattern: start with an API gateway, then bolt on AI features when the market demands it. The result is a single monolithic gateway trying to serve three different workloads — traditional API traffic, LLM proxy traffic, and MCP tool governance — through one configuration model and one deployment pipeline.
Zuplo takes a different approach. Each of the three project types — API Gateway, AI Gateway, and MCP Gateway — is independently deployed, independently configured, and optimized for its specific workload. Your API Gateway handles routing, authentication, and developer portal concerns. Your AI Gateway handles model routing, semantic caching, and budget enforcement. Your MCP Gateway handles tool governance, team segmentation, and audit logging.
This separation means you can evolve each layer independently. Upgrading your MCP governance policies does not risk breaking your API authentication configuration. Scaling your AI Gateway to handle a traffic spike does not affect your API Gateway’s resource allocation. Each project type has purpose-built policies and handlers designed for its workload, not generic plugins adapted from another context.
For teams evaluating AI gateways in 2026, the question is not just which platform has the longest feature list. It is which platform’s architecture will let you ship AI features into production quickly today and evolve your AI infrastructure confidently over the next two years.
Start building with Zuplo’s AI Gateway for free — no credit card required, deployed globally in seconds.