If you think rate limiting is about saying “no” a lot, think again. Zuplo’s Complex Rate Limiting policy is here to show you how to flex your API muscles with a highly customizable, dynamic system that caters to the unique demands of modern applications. Here’s an in-depth look at why this policy is the best one out there for developers.
Standard Rate Limiting
Let’s go back to basics, shall we? Standard rate limiting is fine. At its core, it involves restricting the number of requests an API can handle within a given time frame. This is important for protecting APIs from abuse and ensuring fair usage, typically by limiting requests per IP address, user, or function. You can even throw in some JWT (JSON Web Token) policies for a little bit of spice. Zuplo’s standard rate limiting policy is enough to get the job done and keep API bullies in check, providing a robust layer of protection right out of the box.
Complex Rate Limiting
Now, let’s go beyond the basics. While standard rate limiting serves well in many scenarios, some applications require more sophisticated control mechanisms. Enter Zuplo’s Complex Rate Limiting. Unlike standard rate limiting, which focuses solely on the number of requests, complex rate limiting lets you limit based on properties of the caller’s organization, metadata stored in a database, or metrics within the request or response.
For instance, Zuplo’s policy lets you limit based on payload size, function, IP address, or even the number of curly braces in a response. This level of granularity is critical for applications that want to manage more than just traffic volume. By configuring complex rate limits, you can make sure your API performs well in any circumstance.
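To make that concrete, here is a minimal sketch in TypeScript of a per-request limit decision that tightens the budget for large payloads. The types and the `decideLimit` function are hypothetical illustrations, not Zuplo’s actual policy API — see the policy docs for the real shape:

```typescript
// Hypothetical shape of a per-request limit decision; Zuplo's real
// policy types differ -- this only illustrates the idea.
interface RateLimitDecision {
  key: string; // which counter this request draws from
  requestsAllowed: number; // budget for the window
  timeWindowMinutes: number;
}

interface IncomingRequest {
  ip: string;
  path: string;
  payloadBytes: number;
}

// Heavyweight requests get a tighter budget: large payloads earn
// fewer requests per window than small ones on the same endpoint.
function decideLimit(req: IncomingRequest): RateLimitDecision {
  const heavy = req.payloadBytes > 1_000_000; // > 1 MB counts as heavy
  return {
    key: `${req.ip}:${req.path}`,
    requestsAllowed: heavy ? 10 : 100,
    timeWindowMinutes: 1,
  };
}
```

Because the decision is just code, any request property — headers, body size, even parsed response content — can feed into the limit.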
Dynamic Rate Limiting
Oh, you thought that was it? Get ready for dynamic rate limiting. Instead of applying a one-size-fits-all rate limit, dynamic rate limiting adjusts based on the properties of each request or user. This is done using identifiers found within JWT payloads or API keys, allowing rate limits to be customized on the fly.
For example, you can set different rate limits based on a user’s subscription level, request type, or any other attribute. Zuplo’s system automatically evaluates and applies the appropriate rate limit every time a request is made. Picture it: paying, premium users get the VIP treatment with higher rate limits, while users on a free tier get the standard deal.
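A sketch of what that tier lookup might look like, assuming a hypothetical `tier` claim in the JWT payload — the claim name and the limit values are illustrative, not Zuplo defaults:

```typescript
// Hypothetical decoded JWT payload; the "tier" claim name is an
// assumption for illustration, not a Zuplo-prescribed field.
interface JwtPayload {
  sub: string;
  tier: "free" | "pro" | "enterprise";
}

// Illustrative per-minute budgets for each subscription level.
const TIER_LIMITS: Record<JwtPayload["tier"], number> = {
  free: 60,
  pro: 600,
  enterprise: 6000,
};

// Resolve the caller's per-minute budget from the decoded token,
// falling back to the free tier if the claim is unrecognized.
function limitForUser(claims: JwtPayload): number {
  return TIER_LIMITS[claims.tier] ?? TIER_LIMITS.free;
}
```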
How Zuplo Compares to Other Rate Limiters
Talk is cheap, so let’s put Zuplo side by side with the other big players in the API gateway space. When teams evaluate rate limiting solutions, they typically care about a handful of core capabilities: the algorithm used, whether limits can be set per individual consumer, how programmable the system is, what information gets returned in response headers, and how much effort configuration actually takes.
Here is how Zuplo stacks up against AWS API Gateway, Kong, Apigee, and Tyk across these dimensions:
Sliding Window Support
| Platform | Support |
|---|---|
| Zuplo | Built-in with multiple algorithm options |
| AWS API Gateway | No — uses token bucket algorithm only |
| Kong | Enterprise only (Rate Limiting Advanced); open-source fixed window |
| Apigee | SpikeArrest only (with UseEffectiveCount); Quota uses fixed windows |
| Tyk | Yes — Redis Rate Limiter uses sliding window log; DRL uses token bucket |
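For readers unfamiliar with the term, a sliding window log is the most precise of these algorithms: it records a timestamp per accepted request and evicts entries older than the window on each check. A minimal, illustrative TypeScript sketch — not any vendor’s actual implementation:

```typescript
// Minimal sliding window log. Exact, but memory grows with the limit,
// so production limiters usually approximate it (e.g. sliding window
// counters) or shard the log.
class SlidingWindowLog {
  private hits: number[] = [];

  constructor(
    private limit: number, // max requests per window
    private windowMs: number, // window length in milliseconds
  ) {}

  // Returns true if a request arriving at time `now` (ms) is allowed.
  allow(now: number): boolean {
    const cutoff = now - this.windowMs;
    this.hits = this.hits.filter((t) => t > cutoff); // drop stale entries
    if (this.hits.length >= this.limit) return false;
    this.hits.push(now);
    return true;
  }
}
```

Unlike a fixed window, there is no boundary at which the counter resets all at once, which is why sliding windows resist the burst-at-the-edge problem.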
Per-Key Limits
| Platform | Support |
|---|---|
| Zuplo | Per user, API key, JWT claim, IP, or custom key |
| AWS API Gateway | Per API key via usage plans only |
| Kong | Per consumer or credential |
| Apigee | Per app or developer |
| Tyk | Per key or policy |
Dynamic / Programmable Limits
| Platform | Support |
|---|---|
| Zuplo | Fully programmable in TypeScript; limits set at runtime |
| AWS API Gateway | Static configuration only; no runtime logic |
| Kong | Requires custom Lua plugins for dynamic behavior |
| Apigee | Programmable via JavaScript policies, but complex setup |
| Tyk | Programmable via Go plugins or JS middleware |
Custom Response Headers
| Platform | Support |
|---|---|
| Zuplo | Full control over headers on both 429 and success responses |
| AWS API Gateway | No rate limit headers by default; requires custom gateway response config |
| Kong | Configurable via plugin settings |
| Apigee | Requires custom policy scripting for full control |
| Tyk | Configurable, but requires middleware for custom headers |
Configuration Complexity
| Platform | Approach |
|---|---|
| Zuplo | Declarative JSON config or TypeScript handler; deploys in seconds |
| AWS API Gateway | AWS Console, CloudFormation, or CDK |
| Kong | YAML/Admin API with plugin chain management |
| Apigee | XML-based policy bundles deployed through management API |
| Tyk | Dashboard UI or API definition files |
A few things jump out when you look at this comparison:
Programmability is the differentiator. Most gateways let you set a number and a time window. Zuplo lets you write actual TypeScript logic that runs on every request, meaning your rate limits can factor in the user’s subscription tier, the cost of the specific endpoint, time of day, or any other attribute you care about. No Lua scripts, no XML policy bundles, no redeployment cycles.
Response headers matter more than people think. Developers rely on `X-RateLimit-Remaining` and `Retry-After` to write well-behaved API clients. Zuplo injects these headers on every response by default, not just on 429 errors. That is a small detail that has an outsized impact on developer experience.
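For example, a well-behaved client can use those headers to decide whether and how long to back off. A small illustrative sketch, assuming `Retry-After` carries a delay in seconds (per the HTTP spec it can also be an HTTP date, which this sketch ignores):

```typescript
// Given response headers (lowercased names), decide how long a
// well-behaved client should wait before retrying.
function retryDelayMs(headers: Map<string, string>): number {
  const remaining = Number(headers.get("x-ratelimit-remaining") ?? "1");
  if (remaining > 0) return 0; // budget left: no need to wait

  // Assumption: Retry-After is the delay-seconds form, not an HTTP date.
  const retryAfterSec = Number(headers.get("retry-after") ?? "1");
  return Math.max(retryAfterSec, 1) * 1000;
}
```

A client that honors these values avoids hammering a limited endpoint, which is exactly why having them on every response (not just 429s) helps.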
Edge deployment changes the game. Zuplo deploys rate limiting logic to over 300 data centers worldwide, which means limits are enforced at the edge rather than at a centralized origin. This keeps latency low and ensures that rate limiting does not become a bottleneck itself. Most traditional gateways enforce limits in a single region or require complex multi-region setups to achieve something similar.
If you want a deeper dive into how different platforms handle rate limiting across a wider set of criteria, check out our API rate limiting platform comparison in the learning center.
Global Distribution: The Hidden Rate Limiting Challenge
There is one dimension of rate limiting that rarely shows up in feature comparison tables but matters enormously in production: global distribution. If your API serves traffic from multiple regions, where does the rate limit counter live? And can a determined user bypass your limits simply by routing requests through different geographic locations?
Most traditional API gateways enforce rate limits per region or per instance. This means a user with a 100 requests/minute limit could potentially consume 100 requests/minute in each region where your API is deployed, effectively multiplying their actual throughput by the number of regions.
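The arithmetic is simple but easy to overlook. A toy sketch (the function and region names are illustrative, not from any gateway’s API):

```typescript
// With N isolated per-region counters, a user with limit L can push
// through N * L requests per window by spreading traffic geographically.
// A shared (globally synchronized) counter holds the line at L.
function allowedPerWindow(
  perUserLimit: number,
  regions: string[],
  sharedCounter: boolean,
): number {
  return sharedCounter ? perUserLimit : perUserLimit * regions.length;
}
```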
Here is how the major platforms handle this:
AWS API Gateway maintains completely independent rate limit counters per region. There is no cross-region synchronization. Each region’s token bucket operates in isolation.
Kong shares counters across nodes within a single cluster when using Redis as the storage backend, but cross-region sharing depends entirely on your Redis topology. If you run separate Redis instances per region (common for latency reasons), each region enforces limits independently.
Apigee offers two different behaviors depending on the policy. SpikeArrest synchronizes within a region but explicitly does not replicate counters across regions. The Quota policy can synchronize globally when configured with `Distributed` and `Synchronous` set to true, but this introduces latency overhead and uses fixed windows rather than a sliding window.
Tyk distributes rate limit budgets across gateway nodes within a cluster using its DRL (Distributed Rate Limiter), but each cluster (typically one per region) maintains its own counters. There is no built-in mechanism for cross-region synchronization.
Zuplo takes a fundamentally different approach. Because Zuplo runs at the edge across 300+ data centers worldwide, rate limiting state is globally synchronized by default. Requests hitting your API from Tokyo, London, and New York all draw from the same rate limit counter. There is no configuration needed, no Redis topology to manage, and no trade-off between accuracy and latency. This is what it means to have rate limiting that actually works at global scale.
Conclusion
Zuplo isn’t just about managing API traffic, it’s about dominating it. Whether you’re slinging standard, complex, or dynamic rate limits, Zuplo deploys your policies across 300+ data centers worldwide in less than 5 seconds, with globally synchronized rate limit counters across every location. No per-region gaps, no multi-cluster workarounds, just accurate enforcement everywhere your users are. Now that’s just impressive. Embrace standout features and highly flexible tooling that adapts to your needs, and experience what makes Zuplo the best damn rate limiter on the planet.
To configure Zuplo’s Complex Rate Limiting policy, take a look at our detailed policy guide. You can also watch a video demonstration of this functionality in action.
Want to learn more about API rate limiting? Check out our best practices guide, which covers the technical implementation of rate limiting. If you’re a rate limiting pro, consider reading our more advanced guide, The Subtle Art of Rate Limiting, which covers higher-level decision making and considerations around rate limiting (e.g., keeping limits secret, observability, latency/accuracy tradeoffs).