Before you evaluate specific API gateway products, you need to answer a more fundamental question: how do you want to deploy and operate your gateway? The deployment model you choose determines your infrastructure costs, operational burden, scalability ceiling, and long-term vendor flexibility. Choose wrong and you’re stuck managing Kubernetes clusters at 2 AM or locked into a single cloud provider’s pricing whims.
This guide breaks down the three primary API gateway deployment models — self-hosted, cloud-vendor managed, and edge-native managed — with a practical decision framework to help you pick the right one for your team.
In this guide:
- Self-Hosted API Gateways
- Cloud-Vendor Managed API Gateways
- Edge-Native Managed API Gateways
- Side-by-Side Comparison
- Decision Framework
- Total Cost of Ownership
- FAQ
Why the Deployment Model Matters More Than the Product
Most “how to choose an API gateway” articles jump straight to feature comparisons. That’s backwards. A gateway with every feature you want is useless if your team can’t operate it reliably, or if its deployment model creates hidden costs that dwarf the license fee.
The deployment model affects:
- Who’s on call when the gateway goes down
- How much infrastructure your team needs to provision and maintain
- Where your requests are processed and how that impacts latency
- What your total cost actually looks like (not just the sticker price)
- How locked in you are to a specific vendor or cloud provider
Let’s examine each model.
Self-Hosted API Gateways
Self-hosted gateways are software packages you download and run on your own infrastructure — whether that’s on-premises servers, virtual machines, or Kubernetes clusters in your cloud account.
How the Architecture Works
In a self-hosted deployment, you’re responsible for the entire stack:
- Data plane: The gateway instances that handle API traffic, typically deployed as containers or pods across multiple nodes for high availability
- Control plane: The management layer that distributes configuration to gateway instances, often requiring its own database (PostgreSQL, Cassandra, or etcd)
- Supporting infrastructure: Load balancers, DNS, TLS certificate management, monitoring, log aggregation, and backup systems
You deploy the gateway software into your VPC or data center, configure routing rules through a management API or config files, and manage the lifecycle of every component yourself.
Common Self-Hosted Options
- Kong Gateway (OSS): Built on NGINX and OpenResty, extended with Lua plugins. Requires PostgreSQL for its datastore in traditional mode (Cassandra support was removed in Kong 3.4), or can run DB-less with declarative config.
- Tyk Gateway (OSS): Go-based gateway with Redis as a required dependency. Dashboard and advanced analytics require the commercial license.
- KrakenD: Stateless Go-based gateway that uses declarative JSON configuration with no external database dependency.
- Envoy Proxy: High-performance C++ proxy commonly used in service mesh architectures. Requires a separate control plane (like Istio) for API gateway use cases.
- NGINX: Lightweight reverse proxy that can function as a basic API gateway with manual configuration. Advanced API management features require NGINX Plus.
What You Own Operationally
Running a self-hosted gateway means your team is responsible for:
- Provisioning and capacity planning: Sizing compute instances, configuring auto-scaling groups, and ensuring enough headroom for traffic spikes
- High availability: Deploying across multiple availability zones, managing failover, and testing disaster recovery procedures
- Upgrades and patching: Planning zero-downtime upgrades, testing new versions in staging, and applying security patches — often on an urgent timeline
- Monitoring and alerting: Setting up health checks, dashboards, and on-call rotations for the gateway infrastructure itself
- Security hardening: Managing TLS certificates, network policies, access controls, and vulnerability scanning
- Database operations: Backing up and maintaining the control plane database (for gateways that require one)
Cost Profile
The software license for open-source gateways is $0, but the total cost of ownership tells a different story:
- Infrastructure: Compute, storage, load balancers, and database instances typically run $200–$500/month for a minimal production setup
- DevOps labor: Ongoing maintenance, patching, scaling, and incident response consume 10–20 hours per month of senior engineering time
- Monitoring tooling: Observability platforms for the gateway infrastructure add $200–$800/month
- Incident response: When the gateway goes down, your on-call engineer handles it — not a vendor’s support team
Industry estimates put realistic self-hosted TCO at $2,000–$4,000+ per month for small-to-mid-size teams, and $50,000+ per year for moderately scaled enterprise deployments — before you even factor in the opportunity cost of engineering time diverted from your core product.
When Self-Hosted Makes Sense
- Strict compliance requirements: Air-gapped environments, data sovereignty regulations, or industries (financial services, defense) where external dependencies are restricted
- Existing platform engineering teams: Organizations with dedicated DevOps teams that already manage Kubernetes clusters and have operational runbooks in place
- Extreme customization needs: Use cases requiring deep modifications to the gateway’s core behavior beyond what plugins or policies can provide
Cloud-Vendor Managed API Gateways
Cloud-vendor managed gateways are fully hosted services offered by major cloud providers as part of their platform ecosystems.
How the Architecture Works
With a cloud-vendor managed gateway, the provider handles all infrastructure:
- Compute and scaling: The provider provisions and auto-scales the gateway instances — you never see or manage the underlying servers
- Region-scoped by default: Deployed within a single cloud region unless you configure otherwise, with multi-region distribution requiring additional setup and cost
- Tight cloud integration: Deep native integration with the provider’s other services (Lambda, Cloud Functions, IAM, VPC, monitoring)
You configure routes, policies, and integrations through the provider’s console, CLI, or Infrastructure as Code tools like Terraform or CloudFormation.
Common Cloud-Vendor Options
- AWS API Gateway: Supports REST, HTTP, and WebSocket APIs. Deeply integrated with Lambda, IAM, CloudWatch, and other AWS services. Pricing is per-request ($1.00–$3.50 per million depending on API type).
- Azure API Management: Full lifecycle API management with a developer portal, policy engine, and analytics. Multiple pricing tiers from consumption (pay-per-call) to premium (dedicated infrastructure).
- Google Apigee: Enterprise API management platform with analytics, developer portal, and monetization features. Available as a managed service on Google Cloud with custom enterprise pricing.
What You Own Operationally
Cloud-vendor gateways significantly reduce operational work compared to self-hosted:
- Infrastructure: The provider manages all underlying compute, networking, and scaling
- Upgrades: The provider handles patches and version updates
- Availability: The provider’s SLA covers uptime (typically 99.95% for production tiers)
You’re still responsible for:
- Configuration management: Defining routes, policies, and integrations (often through a web console or IaC templates)
- Cost monitoring: Per-request pricing can lead to bill shock at scale, especially with REST APIs at $3.50 per million requests plus data transfer fees
- Multi-region complexity: If you need global distribution, you’re managing separate gateway deployments per region with your own routing logic
- Feature limitations: Working within the provider’s feature set, which may not cover advanced use cases like custom authentication logic or complex request transformations
Cost Profile
Cloud-vendor gateways trade infrastructure costs for usage-based pricing:
- Per-request fees: AWS API Gateway charges $1.00 per million for HTTP APIs or $3.50 per million for REST APIs. These seem small until traffic scales.
- Data transfer: Outbound data transfer fees add $0.09/GB, which compounds with payload sizes.
- Supporting services: Caching, custom domains, WAF integration, and private API endpoints all carry additional per-hour or per-request charges.
- Bill unpredictability: A traffic spike or a misconfigured integration that generates retry loops can produce surprise bills that are difficult to forecast.
At 100 million REST API requests per month on AWS, the gateway alone costs roughly $350/month before data transfer — manageable but growing. At a billion requests, you’re looking at $3,500/month just for the gateway, plus data transfer and supporting service costs.
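The per-request arithmetic above can be sketched as a quick estimator. The rates below are the figures quoted in this article ($1.00/million HTTP, $3.50/million REST, $0.09/GB egress); actual AWS pricing varies by region, tier, and volume discounts.

```typescript
// Rough AWS API Gateway monthly cost estimator using this article's
// quoted rates. Real pricing varies by region and usage tier.
type ApiType = "HTTP" | "REST";

const RATE_PER_MILLION: Record<ApiType, number> = {
  HTTP: 1.0,
  REST: 3.5,
};

function monthlyGatewayCost(
  requestsPerMonth: number,
  apiType: ApiType,
  egressGb = 0,
  egressRatePerGb = 0.09 // the article's data-transfer figure
): number {
  const requestCost =
    (requestsPerMonth / 1_000_000) * RATE_PER_MILLION[apiType];
  const transferCost = egressGb * egressRatePerGb;
  return requestCost + transferCost;
}

// 100M REST requests/month, no data transfer: $350
console.log(monthlyGatewayCost(100_000_000, "REST")); // 350
// 1B REST requests/month: $3,500
console.log(monthlyGatewayCost(1_000_000_000, "REST")); // 3500
```

Running the same numbers through the HTTP API rate ($100 at 100M requests) shows why teams often prefer HTTP APIs when they don't need REST-API-only features.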
The Vendor Lock-In Problem
The biggest hidden cost of cloud-vendor gateways is lock-in:
- Service coupling: Your gateway configuration references provider-specific resources (Lambda ARNs, IAM roles, VPC endpoints) that don’t translate to other platforms
- Migration difficulty: Moving off a cloud-vendor gateway means rewriting integrations, authentication flows, and operational tooling
- Pricing leverage: Once you’re locked in, the provider has limited incentive to offer competitive pricing
- Multi-cloud barriers: If your organization adopts a multi-cloud strategy, a vendor-specific gateway becomes a bottleneck that needs to be duplicated per provider
When Cloud-Vendor Managed Makes Sense
- All-in on one cloud: If your entire stack runs on a single cloud provider and you have no multi-cloud aspirations
- Tight serverless integration: When you need seamless triggers between the gateway and serverless functions on the same platform
- Low-to-moderate traffic: At lower request volumes, the per-request pricing can be cost-effective and the zero-ops appeal is strong
- Rapid prototyping: Getting an API endpoint live with minimal configuration for early-stage projects
Edge-Native Managed API Gateways
Edge-native managed gateways represent a newer deployment model that combines the operational simplicity of a managed service with globally distributed edge processing — without tying you to a single cloud vendor.
How the Architecture Works
Unlike cloud-vendor gateways that run in one or two cloud regions, an edge-native gateway deploys your API configuration to hundreds of edge locations worldwide simultaneously. Every request is processed — not just cached — at the nearest point of presence (PoP) to your user:
- Full compute at the edge: Authentication, rate limiting, request transformation, and custom business logic all execute at edge locations rather than routing back to a central region
- Globally distributed by default: A single deployment propagates to all edge locations automatically — no per-region configuration or multi-region orchestration needed
- Vendor-neutral: Your backend services can run on any cloud provider (or multiple), since the gateway sits at the edge in front of them all
This architecture eliminates the fundamental latency penalty of routing every API request through a single data center. A user in Singapore hitting an API with a traditional gateway in us-east-1 adds hundreds of milliseconds of round-trip latency before the gateway even processes the request. An edge-native gateway handles it locally, typically within 50ms of the user.
Zuplo: An Edge-Native Approach
Zuplo is built from the ground up as an edge-native API gateway, deploying to 300+ data centers across 120+ countries. Key architectural characteristics:
- GitOps-native configuration: Your entire gateway configuration — routes, policies, and custom TypeScript handlers — lives in a Git repository. Every push to a branch triggers an automatic deployment. There’s no separate infrastructure management tool, no Terraform provider to maintain, and rollback is a git revert.
- Branch-based environments: Each Git branch creates an isolated environment with its own URL. Feature branches become preview environments automatically, giving your team the same workflow they use for application code.
- Programmable with TypeScript: Custom policies and request handlers are written in TypeScript using standard Web APIs (Request/Response), not proprietary DSLs or niche languages like Lua.
- Zero infrastructure management: No servers to provision, no clusters to scale, no databases to back up, no control planes to maintain. Deployments propagate globally in under 20 seconds.
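To make the "standard Web APIs, not proprietary DSLs" point concrete, here is a minimal sketch of an edge request handler written against the standard Request/Response interfaces. The key-validation logic, header name, and key store below are hypothetical illustrations, not Zuplo's actual API surface.

```typescript
// Illustrative edge handler using only standard Web Request/Response APIs
// (available globally in Node 18+, Deno, and edge runtimes).
// VALID_KEYS and the x-api-key header are stand-ins for a real key store.
const VALID_KEYS = new Set(["demo-key-123"]);

export async function handleRequest(request: Request): Promise<Response> {
  // Reject requests without a recognized API key before they reach a backend
  const apiKey = request.headers.get("x-api-key");
  if (!apiKey || !VALID_KEYS.has(apiKey)) {
    return new Response(JSON.stringify({ error: "unauthorized" }), {
      status: 401,
      headers: { "content-type": "application/json" },
    });
  }
  // Example transformation: tag the response as having been processed at the edge
  return new Response(JSON.stringify({ ok: true }), {
    status: 200,
    headers: { "content-type": "application/json", "x-processed-at": "edge" },
  });
}
```

Because the handler is plain Web-standard code, it can be unit-tested locally with `new Request(...)` and no gateway-specific test harness.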
What You Own Operationally
With an edge-native managed gateway, your operational responsibility shrinks to:
- API configuration: Defining routes, selecting policies, and writing any custom logic — all as code in Git
- Git workflow: Managing branches, pull requests, and code reviews using your existing development process
- Environment variables: Setting secrets and configuration values through the platform’s environment management
Everything else — infrastructure, scaling, global distribution, high availability, patching, monitoring, TLS certificates — is handled by the platform.
Cost Profile
Edge-native managed gateways use subscription-based pricing with included request volumes:
- Predictable monthly costs: You know what you’ll pay before the month starts, rather than getting a bill determined by exact request counts
- No infrastructure line items: No separate charges for compute, load balancers, databases, or data transfer
- No per-region multipliers: Global distribution is included — you don’t pay extra for each region or edge location
- Scales with your business: Pricing tiers align with usage levels and feature needs, not the complexity of your infrastructure
This model eliminates two common cost surprises: the hidden DevOps labor of self-hosted gateways and the usage-based bill shock of cloud-vendor gateways.
When Edge-Native Makes Sense
- Globally distributed users: Your API consumers span multiple regions or continents and latency matters
- Small-to-mid-size teams: You want managed simplicity but don’t want to sacrifice flexibility or lock into a single cloud vendor
- Multi-cloud architectures: Your backends run on multiple cloud providers and you need a gateway that sits in front of all of them
- Developer-first teams: You want gateway configuration to follow the same Git-based workflow as your application code
- Fast time-to-value: You need to go from zero to a production-grade, globally distributed API gateway without weeks of infrastructure setup
Side-by-Side Comparison
Here’s how the three models compare across the dimensions that matter most:
Infrastructure Requirements
- Self-hosted: You provision and manage all compute, networking, databases, and supporting infrastructure. Minimum viable production setup includes multi-node clusters across availability zones.
- Cloud-vendor managed: Zero infrastructure — the provider handles everything. However, multi-region requires separate deployments per region.
- Edge-native managed: Zero infrastructure with automatic global distribution. A single deployment covers 300+ edge locations.
Operational Overhead
- Self-hosted: High. Requires dedicated DevOps time for patching, scaling, monitoring, and incident response. Expect 10–20+ hours per month of ongoing maintenance.
- Cloud-vendor managed: Low-to-moderate. No infrastructure ops, but you manage configuration through provider-specific tooling and handle multi-region orchestration yourself.
- Edge-native managed: Minimal. Configuration is code in Git. Deployments are automatic. No infrastructure or scaling to manage.
Scalability Model
- Self-hosted: Manual or semi-automated. You configure auto-scaling rules, monitor capacity, and provision additional infrastructure as traffic grows.
- Cloud-vendor managed: Automatic within the provider’s limits. AWS API Gateway defaults to 10,000 requests per second (steady-state) with a burst limit of 5,000 requests.
- Edge-native managed: Automatic and globally distributed. Traffic is handled at the nearest edge location with serverless scaling across all PoPs.
Latency Profile
- Self-hosted: Latency depends on where you deploy. Users far from your data center pay a round-trip penalty on every request.
- Cloud-vendor managed: Latency determined by your selected cloud region(s). Requests from other regions incur cross-region latency.
- Edge-native managed: Requests processed at the nearest edge PoP, typically within 50ms of the user regardless of their location.
Cost Structure
- Self-hosted: $0 license + infrastructure + DevOps labor + tooling. Realistic TCO: $2,000–$4,000+/month for production.
- Cloud-vendor managed: Per-request pricing ($1–$3.50 per million) plus data transfer, caching, and supporting service fees. Costs scale linearly with traffic.
- Edge-native managed: Subscription-based pricing that bundles global distribution, developer portal, API key management, and analytics into a single line item. Zuplo’s Builder plan starts at $25/month; Enterprise (starting at $1,000/month on annual contract) covers high-volume production workloads with custom pricing.
Lock-In Risk
- Self-hosted: Low vendor lock-in (you own the infrastructure), but high operational lock-in: migrating away means untangling bespoke infrastructure, tooling, and runbooks built around your deployment.
- Cloud-vendor managed: High. Configuration references provider-specific services and doesn’t port to other platforms.
- Edge-native managed: Low-to-moderate. Vendor-neutral positioning means backends can run anywhere. Configuration portability depends on the specific platform.
Time to Deploy
- Self-hosted: Weeks to months for a production-grade setup, including infrastructure provisioning, security hardening, and operational tooling.
- Cloud-vendor managed: Hours to days for basic setups. More complex configurations with custom domains, VPC integration, and multi-region require more time.
- Edge-native managed: Minutes to hours. With Git-based configuration and automatic global deployment, production-ready setups are achievable in a single session.
Decision Framework
Use these questions to identify which deployment model fits your situation:
Start with Your Constraints
Do you have regulatory requirements that mandate on-premises or air-gapped deployment?
If yes, self-hosted is likely your only option. Some edge-native providers (like Zuplo) also offer self-hosted and dedicated deployment options that may satisfy compliance requirements while reducing operational burden.
Are you fully committed to a single cloud provider with no multi-cloud plans?
If yes and your traffic is moderate, a cloud-vendor managed gateway offers the tightest integration with your existing stack. If traffic is high or you’re concerned about cost predictability, evaluate edge-native options.
Evaluate Your Team
Do you have a dedicated platform engineering or DevOps team?
If yes, self-hosted is viable — your team has the skills and bandwidth to manage gateway infrastructure. If no, self-hosted will pull application developers into infrastructure work, slowing feature delivery.
How many engineers can you dedicate to gateway operations?
- 0 engineers → managed (cloud-vendor or edge-native)
- 1–2 engineers part-time → cloud-vendor managed or edge-native
- 2+ engineers dedicated → self-hosted is operationally feasible
Consider Your Traffic Patterns
Where are your API consumers located?
If they’re concentrated in a single region, a cloud-vendor managed gateway in that region may be sufficient. If they’re globally distributed, edge-native provides meaningfully better latency without multi-region orchestration complexity.
How predictable is your traffic volume?
If traffic is highly variable or spiky, per-request cloud-vendor pricing creates budget uncertainty. Subscription-based edge-native pricing offers more cost predictability.
Assess Your Growth Trajectory
Will you need to support multiple cloud providers in the next 1–2 years?
If multi-cloud is on your roadmap, a cloud-vendor managed gateway becomes a liability. Edge-native and self-hosted options give you vendor neutrality.
How fast do you need to ship?
If time-to-value is critical, edge-native gateways get you from zero to production-grade in the shortest time. Self-hosted has the longest runway.
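The questions above can be condensed into a rough first-pass heuristic. The thresholds and orderings below mirror this guide's rules of thumb; they are a starting point for discussion, not hard limits.

```typescript
// First-pass heuristic condensing the decision framework above.
// Thresholds follow the guide's rules of thumb, not hard limits.
type Model = "self-hosted" | "cloud-vendor" | "edge-native";

interface TeamContext {
  airGappedOrOnPremRequired: boolean; // hard compliance constraint
  dedicatedOpsEngineers: number;      // engineers available for gateway ops
  singleCloudCommitted: boolean;      // all-in on one provider, no multi-cloud plans
  globalUsers: boolean;               // consumers span regions or continents
}

function suggestModel(ctx: TeamContext): Model {
  // Hard constraints first: air-gapped rules out fully hosted options
  if (ctx.airGappedOrOnPremRequired) return "self-hosted";
  // Global users or multi-cloud plans favor edge-native distribution
  if (ctx.globalUsers || !ctx.singleCloudCommitted) return "edge-native";
  // Single-cloud, regional traffic: provider-native integration wins,
  // unless the team has dedicated ops capacity and wants full control
  return ctx.dedicatedOpsEngineers >= 2 ? "self-hosted" : "cloud-vendor";
}
```

For example, a single-cloud team with no dedicated ops engineers and regional users maps to a cloud-vendor gateway, while the same team with globally distributed users maps to edge-native.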
Total Cost of Ownership: A Realistic Comparison
Request-volume pricing comparisons miss the point. The real question is what it costs to run a complete API platform — including global distribution, developer portal, API key management, and analytics. Here’s an honest breakdown for a team handling 50 million API requests per month across all three models, comparing equivalent feature sets.
Self-Hosted (e.g., Kong OSS on Kubernetes)
- Gateway license: $0
- Compute (3-node cluster, multi-AZ): ~$400/month
- Database (PostgreSQL RDS or equivalent): ~$150/month
- Load balancer: ~$50/month
- Monitoring/logging (Datadog or equivalent): ~$300/month
- DevOps engineer time (15 hrs/month at $85/hr): ~$1,275/month
- Incident response and on-call: ~$500/month
- Estimated monthly TCO: ~$2,675/month
This estimate is conservative — it excludes the initial setup investment (typically 2–4 weeks of senior DevOps time), annual upgrade cycles, and opportunity cost of diverted engineering time. It also doesn’t include a developer portal, API key management platform, or global distribution, each of which requires additional tooling and engineering effort. For a deeper look at the build-vs-buy calculus, see our build vs buy analysis.
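For readers who want to adapt the self-hosted estimate to their own numbers, the line items above sum as follows. All figures are this article's rough monthly estimates, not vendor quotes; swap in your own rates to rerun the math.

```typescript
// Summing this article's estimated self-hosted monthly line items.
// All figures are the rough estimates listed above, not vendor quotes.
const selfHostedMonthly = {
  compute: 400,        // 3-node cluster, multi-AZ
  database: 150,       // PostgreSQL RDS or equivalent
  loadBalancer: 50,
  monitoring: 300,     // Datadog or equivalent
  devOpsLabor: 15 * 85, // 15 hrs/month at $85/hr = 1275
  onCall: 500,         // incident response and on-call
};

const totalTco = Object.values(selfHostedMonthly).reduce((a, b) => a + b, 0);
console.log(totalTco); // 2675
```

Adjusting the labor rate or hours is usually the largest lever: at 20 hrs/month and $120/hr, labor alone reaches $2,400/month, nearly doubling the total.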
Cloud-Vendor Managed (e.g., AWS API Gateway REST) — Full Feature Stack
At 50M requests, the AWS API Gateway REST cost is $175/month — but that’s a single-region deployment with request routing only. To match the capabilities a production API team typically needs:
- AWS API Gateway REST (50M requests × $3.50/million): ~$175/month
- CloudWatch logging and enhanced metrics: ~$50/month
- Developer portal (ReadMe, Stoplight, or self-hosted): ~$250–$600/month
- API key management (Lambda authorizer + DynamoDB or Cognito): ~$100–$200/month
- CloudFront for basic edge distribution: ~$100–$200/month
- Engineering time managing multi-region config: ~$300–$500/month
- Estimated full-platform TCO: ~$975–$1,725/month
At 500 million requests, the gateway cost alone rises to $1,750/month — before any of the above. And you remain locked into AWS throughout. Read more about AWS API Gateway cost optimization.
Edge-Native Managed (Zuplo)
At 50M requests/month, Zuplo’s Enterprise plan applies — starting at $1,000/month on an annual contract, with custom pricing negotiated for your specific volume. That single subscription includes:
- Global edge distribution: 300+ data centers across 120+ countries — no CloudFront or multi-region configuration required
- Developer portal: Full-featured production developer portal — no ReadMe or Stoplight subscription
- API key management: Unlimited API keys with consumer metadata and analytics — no custom Lambda authorizers or DynamoDB tables
- Built-in analytics: Request volume, error rates, and latency distribution — no separate analytics tooling
- GitOps deployment: Every push deploys globally in under 20 seconds — no Terraform or CloudFormation
- SLA up to 99.999%: Enterprise-grade availability without operating your own infrastructure
- Estimated monthly TCO: from $1,000/month (contact sales for volume-specific pricing)
Zuplo consolidates four to six separate tools — gateway, portal, key management, analytics, CDN, and CI/CD deployment — into a single predictable line item. For teams evaluating the full-stack cost rather than just the gateway line item, the comparison shifts significantly in Zuplo’s favor.
Migration Paths: Moving Between Models
Teams aren’t locked into their initial choice forever. Here are common migration paths:
Self-Hosted → Managed
The most common migration direction. Teams that started with a self-hosted gateway to save on licensing costs often find the operational burden unsustainable as the team grows or reprioritizes. The migration involves mapping existing routes and policies to the target platform’s configuration model and executing a traffic cutover. For a detailed walkthrough, see our self-hosted to managed migration guide.
Cloud-Vendor Managed → Edge-Native
Organizations adopting multi-cloud strategies or expanding internationally often migrate from a cloud-vendor gateway to an edge-native platform. The primary challenge is decoupling provider-specific integrations (IAM roles, Lambda triggers) from gateway configuration.
Self-Hosted → Edge-Native
This path delivers the most dramatic reduction in operational overhead. Teams moving from self-hosted Kong, NGINX, or Envoy to an edge-native platform eliminate infrastructure management entirely while gaining global distribution.
FAQ
How many engineers do I need to run a self-hosted API gateway?
For a production-grade self-hosted deployment, expect to dedicate at least one senior DevOps or platform engineer part-time (10–20 hours per month) for ongoing maintenance, patching, and incident response. Initial setup typically requires 2–4 weeks of full-time senior engineering effort. Managed options (both cloud-vendor and edge-native) reduce this to near-zero infrastructure engineering.
Can I start with a cloud-vendor gateway and migrate later?
Yes, but expect friction. Cloud-vendor gateways create coupling with provider-specific services (IAM, serverless functions, monitoring). The longer you run on a cloud-vendor gateway, the deeper the integration and the harder the migration. If multi-cloud is on your roadmap, starting with a vendor-neutral option saves future migration effort.
What about compliance — do managed gateways meet enterprise requirements?
Most managed API gateway providers support standard compliance certifications (SOC 2, ISO 27001, HIPAA). Edge-native platforms like Zuplo also offer dedicated and self-hosted deployment options for organizations with strict data residency or air-gapped requirements. Review the API gateway security and compliance checklist for a complete evaluation framework.
Is an edge-native gateway just a CDN with extra features?
No. A CDN caches static content at edge locations. An edge-native gateway executes full compute at the edge — authentication, rate limiting, request transformation, and custom business logic all run at the nearest PoP, not at a central origin server. Learn more about how edge-native architecture differs from traditional deployment models.
How do I evaluate which model fits my team?
Start with constraints (compliance, cloud commitment), then evaluate your team’s operational capacity and your users’ geographic distribution. Our guide to choosing an API gateway covers product-level evaluation criteria once you’ve narrowed down the deployment model.
Conclusion
The managed-vs-self-hosted decision isn’t binary anymore. Edge-native gateways introduce a third option that combines the operational simplicity of a managed service with the vendor neutrality of self-hosted — plus global performance that neither traditional model can match without significant engineering investment.
Self-hosted gives you maximum control at the cost of significant operational overhead. Cloud-vendor managed eliminates infrastructure work but locks you into a single provider’s ecosystem and pricing. Edge-native managed offers zero-infrastructure operations with global distribution and vendor flexibility.
For most teams that don’t have strict air-gapped requirements, the question isn’t whether to use a managed gateway — it’s which kind of managed gateway gives you the right balance of simplicity, performance, and flexibility.
If global latency, multi-cloud support, and developer-friendly operations matter to your team, start with Zuplo for free — deploy a production-grade, globally distributed API gateway in minutes, not months.