
· 2 min read
Nate Totten

In light of recent news about Docker deleting organizations and the container images registered under them, I figured I would share how we manage our Docker containers. Zuplo uses a simple GitHub Action, run on a cron schedule, that mirrors the containers we depend on. We initially built this because we experienced some Docker Hub downtime that interrupted our deployments. The other reason we mirror images is to keep them close to where we use them, which in our case means GCP Artifact Registry.

The GitHub Action is fairly simple (see below). It has worked well for us and has removed our dependency on Docker Hub for day-to-day deployments.

```yaml
# Note: the action versions and the Artifact Registry host below were lost
# in formatting; checkout@v3 / auth@v1 / setup-gcloud@v1 and
# are typical values - adjust them to your setup.
name: Mirror Docker Images

  schedule:
    - cron: "0 1 * * *"

  mirror:
    name: Mirror Images
    runs-on: ubuntu-latest

    permissions:
      contents: "read"
      id-token: "write"

      PROJECT_ID: my-project
      REPO_NAME: docker-registry

    strategy:
      fail-fast: false
      matrix:
          - "ubuntu:latest"
          - "node:18-alpine3.16"

      - uses: actions/checkout@v3

      # Uses workload federation:
      - id: "auth-gcp"
        name: "Authenticate to Google Cloud"
        uses: "google-github-actions/auth@v1"
          token_format: "access_token"
          workload_identity_provider: "your-provider"
          service_account: "your-service-account"
          access_token_lifetime: "300s"

      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v1
          project_id: ${{ env.PROJECT_ID }}

      - name: Authenticate Docker
        run: gcloud auth configure-docker

      - name: Pull Image from Docker Hub
        run: docker pull ${{ matrix.image }}

      - name: Tag Image
        run: docker tag ${{ matrix.image }}${{ env.PROJECT_ID }}/${{ env.REPO_NAME }}/${{ matrix.image }}

      - name: Push Image
        run: docker push${{ env.PROJECT_ID }}/${{ env.REPO_NAME }}/${{ matrix.image }}
```

· 2 min read
Josh Twist

Welcome to OpenAPI week at Zuplo!

Zuplo <3 OpenAPI

Today we're announcing our official support for OpenAPI. Unlike other gateways, we're not simply adding an import feature for OpenAPI but are now OpenAPI native, with the format being the core of how route configuration is specified in the gateway.

Any valid OpenAPI document is a valid API gateway configuration for Zuplo; additional properties and configuration are added via x-zuplo vendor extensions.

As a result, Zuplo now offers the most seamless workflow for API design-first teams and users of OpenAPI. For those who don't use it, not to worry: you'll be using OpenAPI in Zuplo but won't even notice.

Check out the demo video

Read more about this in our OpenAPI docs.

To celebrate, we're hosting an OpenAPI week with interviews with special guests from the OpenAPI community. Sign up for the premiere of each chat below:

Darrel Miller
Editor of the OpenAPI specification and API architect at Microsoft.
On Tuesday, 3/7, we discuss the Future of OpenAPI (and some history).
Watch Darrel Miller's session here.

Phil Sturgeon
Staff Author and co-host of APIs You Won't Hate.
On Wednesday, 3/8, we discuss living with OpenAPI in the real world.
Watch Phil Sturgeon's session here.

Kevin Swiber
Marketing Chair, OpenAPI.
On Thursday, 3/9, we talk about the spec wars and how OpenAPI plays a role in the API lifecycle.
Watch Kevin Swiber's session here.

Erik Wilde
Author, RFC 7807.
On Friday, 3/10, we'll look at the new(ish) Problem Details for HTTP APIs specification with one of its authors.
Watch Erik Wilde's session here.

These conversations will be premiering on our YouTube channel. Subscribe for notifications, follow us on Twitter or our Discord to get notified when these great conversations drop.

· 5 min read
Josh Twist

OpenAI is all the rage now and developers are rushing to leverage this technology in their apps and SaaS products. But such is the demand for this stuff that you might need to think about how you protect yourself from abuse. Here's a post from a colleague I saw on LinkedIn recently:

LinkedIn comment

You can use a Zuplo gateway to store your API keys and enforce a number of layers of protection to discourage abuse of your OpenAI API keys.

How it works

Zuplo Gateway for OpenAI

Zuplo allows you to easily perform authentication translation, that is, change the authentication method your clients use. For example, you might require your clients to use:

  • JWT tokens
  • API Keys issued to each of your customers (different to your OpenAI key, so you can identify each individual customer)
  • Anonymously in a web browser — but ensure the call is coming from the right origin, enforce CORS and rate-limit by IP etc.

Setting up Zuplo to send the API Key

This is a curl command to call the OpenAI API directly; note the need for an API key:

```shell
curl \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY_HERE' \
  -d '{
    "model": "babbage",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
  }'
```

To get started we'll create a simple Zuplo gateway that removes the need to specify the API key.

Create a new project and add a route:

  • Summary: My OpenAI Completions
  • Path: /v1/my-completions
  • Method: POST
  • Handler: URL Rewrite to

Next, we need to add a policy that will set the authorization header when calling OpenAI. Open the Policies section and click Add Policy.

Add or Set Request Headers

Choose the Add or Set Request Headers policy. Set the policy configuration as follows:

```json
{
  "export": "SetHeadersInboundPolicy",
  "module": "$import(@zuplo/runtime)",
  "options": {
    "headers": [
      {
        "name": "authorization",
        "value": "Bearer $env(OPEN_AI_API_KEY)",
        "overwrite": true
      }
    ]
  }
}
```

Note that we will read the API key from a secure secret stored as an environment variable, so go set up your OPEN_AI_API_KEY env var.

Save your changes, and we're ready.

Take the above curl command, remove the authorization header, and change the URL to your project URL:

```shell
curl \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "babbage",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
  }'
```

Look, no API key 👏 - your request should still work fine, as Zuplo adds the key on the way out.

Securing Zuplo

You now have several choices to secure Zuplo.

  1. Require your users to login (with a service like Auth0) and then use an Auth0 JWT with Zuplo.
  2. Issue API Keys to all your users using Zuplo's API Key Service.
  3. Host anonymously but add additional safeguards, including requiring a specific Origin and strict CORS using custom CORS policies.

Make sure to add rate limiting - based on the user, or on IP for the anonymous use case.

Event Streaming (data-only server-sent events)

OpenAI supports event streaming; this is easy to get working with Zuplo and works out of the box. You can try it by adding a "stream": true property to your POST to OpenAI:

```shell
curl \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "babbage",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0,
    "stream": true
  }'
```

However, what if you want to support EventSource in the browser? That is easy to accomplish with Zuplo also by taking the incoming GET request created by EventSource and translating it into a POST request, with the appropriate headers and body inside Zuplo.

Create a new route:

  • Summary: My OpenAI Completions for Browser Event Source
  • Path: /v1/my-browser-completions
  • Method: GET
  • CORS: Anything Goes
  • Handler: URL Rewrite to

Add the following policies:

  • Reuse your Set Header policy that sets the authorization key above.
  • Add a Change Method policy to update the request to be a POST
  • Add another Set Header policy to set the content-type header to application/json
  • Finally, add a Set Body policy with the following configuration.


"export": "SetBodyInboundPolicy",
"module": "$import(@zuplo/runtime)",
"options": {
"body": "{ \"model\": \"babbage\", \"prompt\": \"Say this is a test\", \"max_tokens\": 7, \"temperature\": 0, \"stream\": true }"

You can now use an EventSource in a browser and call Zuplo as follows (substitute your own gateway URL):

```javascript
const evtSource = new EventSource(

evtSource.onmessage = (evt) => {
  if ( === "[DONE]") {
    console.log("end of event stream...");
    evtSource.close();
  }
};
```

You could also make the POST body dynamic, based on a querystring in the EventSource - you would then read the querystring values in a custom policy and set the body based on values in the querystring (you would no longer need the Set Body policy in this case).

The custom code (inbound policy) might look like this:

```typescript
import { ZuploContext, ZuploRequest } from "@zuplo/runtime";

export default async function (
  request: ZuploRequest,
  context: ZuploContext,
  options: never,
  policyName: string
) {
  const prompt = request.query.prompt;

  // perform any appropriate validation on `prompt` here

  const data = {
    model: "babbage",
    prompt, // pass the query value in here
    max_tokens: 7,
    temperature: 0,
    stream: true,
  };

  return new ZuploRequest(request, {
    body: JSON.stringify(data),
  });
}
```

· 2 min read
Josh Twist

We just published a new video showing how you can add smart routing (a single common API in front of multiple backends) in one page of TypeScript. Metadata is loaded from an external service (in this case Xata, but you could use Supabase, Mongo, etc.).

Here's the code used in the demonstration:

```typescript
import {
  environment,
  ZoneCache,
  ZuploContext,
  ZuploRequest,
} from "@zuplo/runtime";

interface RouteInfo {
  customerId: string;
  primaryUrl: string;
  secondaryUrl?: string;
}

// Cache identifiers (values are illustrative)
const CACHE_NAME = "route-info-cache";
const CACHE_KEY = "route-info";

async function loadRouteInfoFromApi(context: ZuploContext) {
  const cache = new ZoneCache(CACHE_NAME, context);

  const records = await cache.get(CACHE_KEY);

  if (!records) {
    const options = {
      method: "POST",
      headers: {
        Authorization: `Bearer ${environment.XATA_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: '{"page":{"size":15}}',
    };

    // Your Xata query endpoint (placeholder - use your own workspace URL)
    const response = await fetch(environment.XATA_QUERY_URL, options);

    const data = await response.json();
    cache.put(CACHE_KEY, data.records, 300); // 5 minutes"RouteInfo loaded from API");
    return data.records;
  }"RouteInfo loaded from Cache");
  return records;
}

export default async function (request: ZuploRequest, context: ZuploContext) {
  // e.g. from a route configured with a /:customerId parameter
  const customerId = request.params.customerId;

  const routing: RouteInfo[] = await loadRouteInfoFromApi(context);

  const routeInfo = routing.find((r) => r.customerId === customerId);

  if (!routeInfo) {
    return new Response(`No route found for customer '${customerId}'`, {
      status: 404,
    });
  }

  const response = await fetch(routeInfo.primaryUrl);
  if (response.status !== 200 && routeInfo.secondaryUrl) {
      `First request failed, trying secondary (${response.status})`
    );
    const response2 = await fetch(routeInfo.secondaryUrl);
    return response2;
  }

  return response;
}
```

Got questions or feedback? Join us on Discord.

· 2 min read
Josh Twist

Your Supabase backend is often exposed to the public: anybody can sign in, create an account, and work with data. That can be a problem if a malicious or clumsy user hits your service too hard. That's where you need rate limits - a way of making sure a single user doesn't starve others of resources (or cost you too much $).

With Zuplo, you can add user-based rate limiting to a Supabase backend in a couple of minutes. There is a video tutorial version of this guide: YouTube: Per-user rate limit your Supabase backend.

Best of all, the only code you'll need to change in your client is the URL of the Supabase service (because traffic will now go via Zuplo).

Here are the steps:

1/ Create a new project in Zuplo (get a free account at

2/ Add a route to your new project. Set the following properties

  • path: /(.*) - this is a wildcard route that will match all paths
  • methods: all - select all methods in the dropdown
  • CORS: anything goes - this is easiest, but you can set stricter policies
  • URL Rewrite: https://your-supabase-domain${pathname} - make sure to use your own Supabase URL, e.g.${pathname}

3/ Add a policy to the request pipeline - choose the supabase-jwt-auth policy. Remove the required claims from the JSON template.

"export": "SupabaseJwtInboundPolicy",
"module": "$import(@zuplo/runtime)",
"options": {
"secret": "$env(SUPABASE_JWT_SECRET)",
"allowUnauthenticatedRequests": false

4/ Create an environment variable called SUPABASE_JWT_SECRET (under Settings > Environment Variables). Paste in the JWT secret from Supabase (available in Settings > API).

Environment Variable

5/ Add a rate-limiting policy at the end of the request pipeline. Configure it to rate limit by user; we suggest 2 requests per minute for demo purposes.

Rate Limit Policy

"export": "RateLimitInboundPolicy",
"module": "$import(@zuplo/runtime)",
"options": {
"rateLimitBy": "user",
"requestsAllowed": 2,
"timeWindowMinutes": 1

6/ Get the URL for your gateway by going to the Getting Started tab and copying the gateway URL. Replace the Supabase URL in your client and boom 💥!

Getting Started

You now have a rate-limit-protected Supabase backend. Stay tuned for a subsequent tutorial where we'll show how to ensure folks have to go via Zuplo to call your Supabase backend.

Got questions or feedback? Join us on Discord.

· 3 min read
Josh Twist

One of the most powerful aspects of Zuplo is the programmable extensibility. Recently somebody on our Discord channel asked if we supported query parameter validation as we do JSON Body validation.

We plan to add this soon as a built-in policy (which will use your OpenAPI specification). In the meantime, I spent 20 minutes building a custom policy to demonstrate how easy it is to support this yourself while you wait.

Here's how you would configure the policy:

"export": "default",
"module": "$import(./modules/query-param-validator)",
"options": {
"allowAdditionalParameters": false,
"params": [
"name": "foo",
"required": true,
"type": "int"
"name": "bar",
"required": true,
"type": "number"
"name": "wib",
"required": false,
"type": "string"
"name": "ble",
"required": true,
"type": "boolean"

This defines a policy for a route (which can be reused on other routes) that states there are four supported query parameters: foo, bar, wib and ble. No additional query parameters are allowed.

Note that foo, bar and ble are required, whereas wib is optional.

Each parameter has a type specified (one of int, number, string, or boolean), and the request is rejected if a value cannot be parsed as that type.

Here are some hits on that URL and associated error responses (status code 400):

Path: /query

Bad Request

Required query parameter 'foo' missing
Required query parameter 'bar' missing
Required query parameter 'ble' missing

Path: /query?foo=&bar=hey&wib=nope&ble=23

Bad Request

Required query parameter 'foo' missing
Invalid value for query parameter 'bar': 'hey' is not a valid number
Invalid value for query parameter 'ble': '23' is not a valid boolean value (expected 'true' or 'false')

Easy peasy! Here's the code for that custom policy:

```typescript
import { ZuploContext, ZuploRequest } from "@zuplo/runtime";

type SupportedType = "int" | "number" | "string" | "boolean";

type ParameterValidationRule = {
  name: string;
  required?: boolean;
  type?: SupportedType;
};

type QueryParamValidatorOptions = {
  params: ParameterValidationRule[];
  allowAdditionalParameters?: boolean;
};

const typeValidators: Record<
  SupportedType,
  (value: string) => string | undefined
> = {
  int: (value: string) => {
    const int = parseFloat(value);
    if (!Number.isInteger(int)) {
      return `'${value}' is not a valid integer`;
    }
  },
  number: (value: string) => {
    const float = parseFloat(value);
    if (Number.isNaN(float)) {
      return `'${value}' is not a valid number`;
    }
  },
  string: (value: string) => {
    if (value.length === 0) {
      return `empty string provided`;
    }
  },
  boolean: (value: string) => {
    if (!["true", "false"].includes(value)) {
      return `'${value}' is not a valid boolean value (expected 'true' or 'false')`;
    }
  },
};

export default async function (
  request: ZuploRequest,
  context: ZuploContext,
  options: QueryParamValidatorOptions,
  policyName: string
) {
  const allowAdditionalParameters = options.allowAdditionalParameters ?? false;
  const q = request.query;
  const errors: string[] = [];

  // 1. check no additional parameters
  if (!allowAdditionalParameters) {
    const allowedNames = =>;

    for (const queryName of Object.keys(q)) {
      if (!allowedNames.includes(queryName)) {
        errors.push(`Additional query parameter '${queryName}' not allowed`);
      }
    }
  }

  // 2. check required and value types
  for (const param of options.params) {
    const value = q[];
    const required = param.required ?? true;
    if (!value) {
      if (required) {
        // required parameter not provided.
        errors.push(`Required query parameter '${}' missing`);
      }
    }

    if (param.type && value) {
      const validatorResult = typeValidators[param.type](value);
      if (validatorResult) {
        errors.push(
          `Invalid value for query parameter '${}': ${validatorResult}`
        );
      }
    }
  }

  if (errors.length > 0) {
    return new Response(`Bad Request\n\n${errors.join("\n")}`, { status: 400 });
  }

  return request;
}
```

Have fun!

· One min read
Josh Twist

We've recently been playing with Supabase a lot and showing how Zuplo can help you take your Supabase backend and go "API-first".

Oftentimes, this requires you to have a JWT token from Supabase for testing. Since we're very focused on the API and backend of your infrastructure, I got tired of creating test websites to log in to Supabase and get myself a valid JWT. For that reason, we created a free online tool to help you get a JWT token from Supabase for testing.

It's easy to use and the instructions are on the homepage. Also, check out this short video for a quick guide:

It's open source too - contribute on GitHub.

· 2 min read
Josh Twist

Today we’re excited to introduce the Zuplo Free plan (beta), offering the fastest way to ship your public API. Zuplo is fully-managed API management that adds authentication, rate limiting, and developer documentation to any API — on any stack or cloud — in minutes.

We take care of all the boring stuff that makes a great API so that you can focus on whatever differentiates your business.

Our Free plan is perfect for developers who are building an API to share with other developers. Maybe you are building the next big API-first startup? Or you have a weekend side-gig, hackathon or hobby project that needs an API? Maybe you’re just learning how to build APIs or exploring API management and API gateways? Zuplo’s Free plan is the perfect solution, and you can relax knowing that Zuplo can grow with you - we’re already handling billions of requests per month for tiny startups and large enterprises alike.

Why free? Why now?

Zuplo was founded by Josh Twist and Nathan Totten with the goal of making API Management accessible to all. Josh founded Azure API Management at Microsoft, and Nate built much of the developer experience at Auth0.

API Management is traditionally only used by large organizations, but we believe that every business can benefit from the power of API management that is optimized for developers.

We want to democratize API management, and we do that by making it much easier to use and more affordable.

Today we’re announcing a Free (forever) plan for folks looking to get started with API management, whether you’re a total beginner or a veteran of other legacy solutions. Weekend project, hackathon, side-gig? Give Zuplo a try and let us know what you think.

Great! What do I do next?

Watch this 2-minute demo video to see what makes Zuplo different.

Ready to start? Sign up for free at and explore the todo-list sample.

Come talk APIs with us and other developers: join our community on Discord.

· 2 min read
Josh Twist

One of the best things about Zuplo is its programmable nature. That, combined with our approach to making policies composable, means you can do some amazing things - like our rate limiter. In this video we show how the rate limiter can interact with external services and data; here we use Supabase as the data source for the limits.

Here's the key code from the sample:

```typescript
import {
  CustomRateLimitDetails,
  environment,
  ZoneCache,
  ZuploContext,
  ZuploRequest,
} from "@zuplo/runtime";
import { createClient } from "@supabase/supabase-js";

const fallBackLimits = {
  requestsAllowed: 10000,
  timeWindowMinutes: 1,
};

const CACHE_NAME = "rate-limit-cache";
const CACHE_KEY = "rate-limit-data";

export async function getRateLimit(
  request: ZuploRequest,
  context: ZuploContext,
  policyName: string
) {
  // Start from the fallback limits; override below if we find a row
  const limitResponse: CustomRateLimitDetails = {
    key: request.user.sub,
    ...fallBackLimits,
  };

  const userGroup =;
  const cache = new ZoneCache(CACHE_NAME, context);

  const cached: any = await cache.get(CACHE_KEY);

  if (cached) {
    context.log.debug("cache hit");
    const item = cached.find((row) => row.userGroup === userGroup);
    limitResponse.requestsAllowed = item.reqPerMinute;
    return limitResponse;
  }

  context.log.debug("cache miss");
  // Supabase connection details come from environment variables
  const supabase = createClient(
    environment.SUPABASE_URL,
    environment.SUPABASE_SERVICE_KEY
  );
  const { data, error } = await supabase.from("rate-limits").select();

  if (error) {
    context.log.error(`Error reading data from supabase`, error);
    // return fallback rate-limit - don't want API downtime
    // if this dependency is down.
    return limitResponse;
  }

  const item = data.find((row) => row.userGroup === userGroup);

  if (!item) {
    context.log.warn(`No row rateLimitId '${userGroup}' found, using fallback`);
    // return fallback
    return limitResponse;
  }

  void cache.put(CACHE_KEY, data, 10);

  limitResponse.requestsAllowed = item.reqPerMinute;
  return limitResponse;
}
```

You could make this even more performant by giving the cache a longer expiry and periodically reloading the data from Supabase asynchronously, pushing the results back into the cache - something like an SWR (stale-while-revalidate) approach.

Get started with Zuplo for free today: Sign Up Free

See also:

Shipping a public API backed by Supabase

API Authentication using Supabase JWT tokens

· 10 min read
Josh Twist

Many public APIs choose to use API keys as their authentication mechanism, and with good reason. In this article, we’ll discuss how to approach API key security for your API, including:

  • why you should consider API key security
  • design options and tradeoffs
  • best practices of API key authentication
  • technical details of a sound implementation

API Key Best Practices

This article is language agnostic and doesn't provide a particular solution for PHP, Python, TypeScript, C#, etc., but every language should afford the capabilities to build an appropriate solution.

There is an accompanying video presentation of this content: API Key Authentication Best Practices

Why API Keys? Why not?

I talked about this in more detail in Wait, you’re not using API Keys? but in summary: API keys are a great choice because they are plenty secure, easier for developers to use than JWT tokens, and opaque strings that don’t give away any clues to your claims structure. They're used by some of the best API-first companies in the world, like Stripe, Twilio, and SendGrid.

Perhaps the most legitimate complaint against API keys is that they are not standardized, which is true, but — thanks to programs like GitHub’s secret scanning program — some patterns are starting to emerge.

If you’re building a public API, API-key authentication is much easier for a developer to configure and learn. They work great in curl and, provided you follow some of the best practices outlined here, are plenty secure.

The main case where I would not advocate for using API keys is for operations that are on behalf of an individual user. For this, OAuth and JWT is a much better fit. Examples of APIs that do and should use OAuth are Twitter and Facebook. However, if you’re Stripe and the caller of your API is an ‘organization’ and not a user, API keys are a great choice. Perhaps the best example of this is the GitHub API, which uses both: API keys for organization-level interactions and JWT for acting on behalf of users.

Decisions to make

The best practices for API key authentication are becoming somewhat recognizable now, but there is a dimension where we still see some variability in the implementation of API keys: to make the key retrievable or irretrievable.

The world of API-key implementations is divided into two groups. The first will show you your API key only once; you'll need to copy it and save it somewhere safe before leaving the console. This is the irretrievable model. Typically the keys are unrecoverable because they are not actually stored in the key database - only a hash of the key is stored - meaning that, if lost, the keys can genuinely never be recovered. Of course, in the case of a loss you can usually generate a new key and view it once.

The other group allows you to go back to your developer portal and retrieve your key at any time. These keys are typically stored encrypted in the database. Meaning if the database is stolen, the thief would also need the encryption codes to access the API keys.

The tradeoffs here are tricky, and there are two schools of thought:

  1. Irretrievable is better because it’s more secure. The keys are stored via a one-way hashing process so they can never be retrieved, or stolen from the database in a worst-case scenario.
  2. Retrievable offers good-enough security with some advantages, and it’s easier to use. The keys are stored encrypted via reversible encryption. One potential security advantage is that users are less likely to feel pressured to quickly store the key somewhere to avoid losing it. A person who follows best practices will use a vault or a service like 1Password. However, some users will take the convenient path and paste it into a .txt file for a few minutes thinking, “I’ll delete that later.”

So what are some examples of APIs that support retrievable vs. irretrievable keys today?

Irretrievable: Stripe, Amazon AWS

Retrievable: Twilio, AirTable, RapidAPI

There is some correlation here: services that protect more sensitive information seem more likely to use irretrievable keys, while less sensitive services choose retrievable keys for ease of use and good-enough security.

The choice is yours. Personally, I lean a little toward retrievable because I know that I personally have made the mistake of quickly pasting a newly generated irretrievable key into notepad and forgetting about it. You may come to a different conclusion for your own API key authentication.

Best Practices of API Key Authentication

The bit of API key authentication advice you’ve been waiting for… the best practices of API key auth based on the patterns observed in the API world and our experience building our own API key authentication service for Zuplo.

1/ Secure storage for the keys

Depending on your choice of retrievable vs. irretrievable, you’ll need to take a different path. For irretrievable keys, it’s best to store them as a hash, probably using sha256, and to use the hash as the primary key (this helps avoid hash collisions - they're unlikely but possible, so you should build in a retry on create and insert).

For retrievable, you’ll need to use encryption so that the values can be read from the database to show to the user at a later date. You have a few choices here, like storing the keys in a secure vault, or using encryption programmatically to store in a standard database and manage the keys yourself.
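The irretrievable model can be sketched like this (the table shape and names are illustrative, not a real Zuplo schema): only the sha256 hash of a key is ever persisted, and the hash doubles as the primary key, so a collision or duplicate surfaces as a failed insert that the caller retries with a freshly generated key.

```typescript
import { createHash } from "node:crypto";

// Illustrative in-memory "table": hash of the key -> key metadata.
const keyStore = new Map<string, { owner: string; createdAt: string }>();

export function sha256Hex(apiKey: string): string {
  return createHash("sha256").update(apiKey).digest("hex");
}

export function storeKey(apiKey: string, owner: string): void {
  const hash = sha256Hex(apiKey);
  if (keyStore.has(hash)) {
    // Collision (or duplicate): caller should regenerate the key and retry
    // rather than silently overwriting an existing row.
    throw new Error("key already exists - regenerate and retry");
  }
  keyStore.set(hash, { owner, createdAt: new Date().toISOString() });
}

export function lookupKey(apiKey: string) {
  // Incoming requests are checked by hashing the presented key;
  // the plaintext key is never stored anywhere.
  return keyStore.get(sha256Hex(apiKey));
}
```

Because only the hash is stored, a stolen database reveals no usable keys; the cost is that a lost key can only be replaced, never recovered.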

2/ Support a rolling transition period

It’s critical that you allow your users to roll their API keys in case they accidentally expose them, or simply as a practice of periodically rotating them. It’s important that this ‘roll’ function either allows multiple keys to exist at the same time or allows setting an expiry period on the previous key; otherwise rolling the key will cause downtime for any consumers that didn’t get the chance to plug in the new key before the old one expired. Here’s Stripe’s roll dialog:


In Zuplo, we allow folks to have multiple keys so they can temporarily add another key and delete the old one as soon as they’re done with the transition.

3/ Show the key creation date

It’s important to show developers when the key was created so they can compare the date to any potential incidents. This is especially important if you support multiple keys so that users can differentiate between old and new.


4/ Checksum validation

Since checking the API key will be on the critical path of every API call, you want to minimize latency. This is one of the reasons that you’ll want to add a checksum to your API key. Here’s an example API key from Zuplo:

zpka_a5c5e56x54c4437fbd6ce7dee9185e_631238
The last section _631238 is a checksum that we can use to verify in the request pipeline whether this even looks like a valid key. If not, we can simply reject the request and avoid putting load on the API key store.
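The checksum idea can be sketched like this (the actual algorithm Zuplo uses isn't described here; in this sketch the suffix is hypothetically the first 6 hex characters of a sha256 over the rest of the key):

```typescript
import { createHash } from "node:crypto";

const PREFIX = "zpka_";

// Hypothetical checksum: first 6 hex chars of sha256(prefix + body).
function checksum(body: string): string {
  return createHash("sha256").update(PREFIX + body).digest("hex").slice(0, 6);
}

export function makeKey(body: string): string {
  return `${PREFIX}${body}_${checksum(body)}`;
}

// Cheap structural check on the hot path: reject anything that doesn't
// even look like one of our keys before touching the key store.
export function looksLikeValidKey(key: string): boolean {
  if (!key.startsWith(PREFIX)) return false;
  const rest = key.slice(PREFIX.length);
  const sep = rest.lastIndexOf("_");
  if (sep === -1) return false;
  return rest.slice(sep + 1) === checksum(rest.slice(0, sep));
}
```

Garbage input fails this check in microseconds, so malformed or guessed keys never generate load on the key store.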

5/ Support secret scanning

One of the reasons we have the unusual beginning of the key ("zpka_") is so that we could participate in programs like GitHub’s secret scanning. This allows us to create a regular expression that allows GitHub to inform us if an API key is accidentally checked into a repo. Then we can automatically revoke the key and inform its owner of the event. We also use the checksum to double-check that it’s one of our keys before locating the service and owner.

At Zuplo, we participate in the GitHub secret scanning program so that we can offer these services to any customer using our API-key policy.

(Aside: Yes, the example key above triggered our secret scanning when I checked this article into GitHub and we got notified about a token leak 👏😆)

6/ Minimize latency and load on your API Key storage

To reduce the latency on every API request, you might consider using an in-memory cache to store keys (and any metadata read from the API Key Store). If you have a globally distributed system, you’ll want multiple caches, one in each location. Since Zuplo runs at the edge, we use a high-performance cache in every data center. To increase security it's important to consider only caching the one-way hashed version of the API-key (taking care to avoid hash-collisions by doing a pre-hash collision check at the point of key-creation, using the same hash algorithm).

You’ll need to choose an appropriate TTL (time-to-live) for your cache entries which has some tradeoffs. The longer the cache, the faster your average response time will be and less load will be placed on your API Key store - however, it will also take longer for any revocations or changes to key metadata to work.

We recommend just a couple of minutes maximum; that’s usually plenty to keep your latency low, a manageable load on your API Key Store, and be able to revoke keys quickly.

If this is important to you, you might want to design a way to actively flush a key from your cache.

7/ Hide keys until needed

Today, everybody has a high-quality camera in their pocket. Don’t show an API key on-screen unless explicitly requested. Avoid the need to show keys at all by providing a copy button. Here’s the Supabase console, which almost gets full marks, but would be even better if it provided a copy option without needing me to reveal the key visually.


8/ Attention to detail — keys need to be copied and pasted

Sometimes it’s the little things in life; for example, try double-clicking on key-1 below to select it. Then try key-2.

key-1: zpka-83fff45-1639-4e8d-be-122621fcd4d1

key-2: zpka_a5c5e56x54c4437fbd6ce7dee9185e_631238

Notice how much easier it is to select the API key in snake_case?

9/ Consider labeling your keys

A number of services are increasingly doing this to help their customers, but mostly to help their support teams. For example, in Stripe they have a convention as follows:

  • sk_live_ - Secret Key, Live version
  • pk_test_ - Publishable Key, Test version

This not only supports GitHub secret key scanning above, but it can also be invaluable to your support team when they can easily check if the customer is using the right key in the right place. Your SDK can even enforce this to prevent confusion.

The downside to key labeling is that if the key is found without context by a malicious user, they can discover which service to attack. This is one advantage of using a managed API key service that dissociates the key from any specific API. GitHub has a great article on this: Behind GitHub’s new authentication token formats.

A canonical flow through an API key check

Stacking all of that together, here’s a flow chart showing the canonical implementation of the API key check using all these practices above.



API keys are a great approach if you want to maximize the developer experience of those using your API, but there are quite a few things to think about when it comes to API key security. An alternative to building this yourself is to use an API Management product with a gateway that does all the work for you and includes a self-serve developer portal. Examples include Apigee, Kong, and — of course — Zuplo.

The Author

Before founding Zuplo, Josh led Product for Stripe’s Payment Methods team (responsible for the majority of Stripe’s payment APIs) and worked at Facebook and Microsoft, where he founded a number of services, including Azure API Management.

Updated December 4 2022 to update recommended hashing algo to sha256 based on community feedback.