
· 2 min read
Josh Twist

Today we’re announcing that Zuplo offers API Key Scanning on GitHub for API keys generated in Zuplo.

According to the most recent GitGuardian report, over 6 million secrets were leaked in 2021 (twice 2020’s total), and 3 in every 1,000 commits exposed at least one secret. The massive Heroku security incident in April 2022 was caused by API keys checked into source control. It’s no surprise, then, that since we opened Zuplo up publicly we’ve seen a lot of excitement about our API Key Management capabilities. We’ve written about why we think API keys are the best way to secure your API here, and now we make it effortless to secure both you and your customers with API Key Scanning.

"Heroku determined that the unidentified threat actor gained access to the machine account from an archived private GitHub repository containing Heroku source code."

Respecting the developer workflow is one of our central tenets at Zuplo, which is why we designed it with GitOps in mind. Starting today, if an API key for one of your Zuplo APIs shows up in a public repo on GitHub, you’ll receive an alert from Zuplo notifying you of the token and the URL where the match was found. You can also choose to have Zuplo notify your customer on your behalf.

Zuplo API Key management includes:

  • secure storage and management of keys and metadata - with an admin UI and API to manage consumers.
  • integrated developer portal with self-serve key management for your customers.

If you've already built your own API Key solution, we can easily integrate Zuplo authentication with custom policies, or even help you migrate your API keys to Zuplo for even greater protection. It's never too late to make hosting your API much easier.

API Key Leak Prevention is part of our Business and Enterprise subscriptions.

· 2 min read
Josh Twist

When providing recommendations, we like to use examples of great companies, the decisions they made (decisions that often go against the grain), and why they made them. One of my favorite examples of this is the fact that the best API companies tend to use API keys.

But what about great companies getting it wrong?

We recently wrote recommendations for versioning your API and had one primary piece of advice - insist that the client include the desired version on every request.

It’s simple: No Version? No Service.

When giving examples of great APIs, I have a few ‘go-tos’:

API      | API Keys? | Require Version? | URL-based version
---------|-----------|------------------|-------------------
Stripe   | ✅        | ✅               | ✅
AirTable | ✅        | ✅               | ✅
Twilio   | ✅        | ✅               | ✅
SendGrid | ✅        | ✅               | ✅
GitHub   | ✅        | ❌               | ❌

You’ll notice one anomaly here though - the GitHub API doesn’t use URL-based versioning and doesn’t require a version. Let’s quote their docs:

When using the REST API, we encourage you to request v3 via the Accept header

Note the use of the word ‘encourage’. There’s more:

*Important: If you're building an application and care about the stability of the API, be sure to request a specific version in the Accept header as shown in the examples below, because the default version of the API may change in the future.*

Source: https://docs.github.com/en/rest/overview/media-types#request-specific-version

This approach is asking for trouble and limits your options. It’s hard to run multiple versions of your API simultaneously (without defaulting to the old one 🤮) and it’s hard to know which customers are trying to use which version of your API. Maybe you can tell because of all the support calls and errors they’re getting?

That’s why we tend to recommend URL-based versioning - the version the client is coded to consume is explicit in the URL’s address, and it can’t be skipped: omit it and you’ll get a 404. It’s good enough for Stripe, AirTable, and the Twilio family, so it’s probably good enough for you.

If you do decide to go the headers route, be sure to send back a 400 if you don’t see an explicit version requested. Your error message could be “No version, no service” - that one’s on us.
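As a rough sketch of that rule, here’s what the check might look like as a custom gateway policy in TypeScript. The header name api-version and the simplified policy signature are our own assumptions for illustration, not a prescribed API:

import { ZuploRequest } from "@zuplo/runtime";

// Sketch: reject any request that doesn't state the API version it targets.
// The header name and policy signature here are illustrative assumptions.
export default async function (
  request: ZuploRequest
): Promise<ZuploRequest | Response> {
  const version = request.headers.get("api-version");
  if (!version) {
    // No explicit version requested - fail fast instead of silently defaulting.
    return new Response("No version, no service", { status: 400 });
  }
  return request; // version present, let the request through
}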

· 4 min read
Josh Twist

We recently discussed some best practices for versioning your API with (spoiler) a strong recommendation that you should require that clients indicate the version of the API they were designed for. Go read the article for more details.

I’ve been giving a talk at a few conferences recently about how the world’s fastest-growing companies develop their public API, which included a piece on versioning. One question I commonly get from attendees after the talk is, “How do I move clients off the old version of my API?”, where clients might be internal departments, long-term customers, or even teammates who don’t want to change that UI code that depends on v1.

In most cases, you can’t just turn off the old version. In the cases above you’ll cause business harm by breaking the marketing department or taking down one of your loyal customers. These folks have priorities and updating to /v2/ of your API is probably not at the top of the list. So how might you create more urgency?

note

One of the reasons we strongly recommend requiring the client to indicate the version they were designed for is so that you can continue to maintain multiple versions of your API for as long as you need to. The decision to pressure a customer or department to upgrade is a business decision for you to make. Once you’ve made the call, these techniques can help and are better than just shutting them down one day.

Creating Urgency

We do not recommend just turning off the API permanently. Instead, you can take the approach of scheduling a ‘brownout’: timed, temporary downtime. This is where you take the API down during a low-impact period for a short amount of time, maybe at 3 am for 2 minutes. This is probably enough to trigger a bunch of alarms and service alerts that make the impact of the upcoming breaking change clear to the consumer.

We’d recommend sharing the schedule of planned outages so nobody is surprised and everyone knows what the glideslope to actual deprecation looks like. Some businesses announce that the version of the API in use will be fully deprecated on a specific date but, knowing that some clients will not upgrade in time, activate the first brownout on that date and notify clients that they have one more week to complete their upgrade. This can make you appear generous (you’re giving them more time than they were told they would have), but they still get the shock of alerts firing.

Another technique to encourage folks to move off an old API, and one that combines well with brownouts, is to add deliberate latency to the old version of the API. You can use an API gateway to do this and, as time progresses, increase the latency - even to multiple seconds, depending on the use case.

Again, this is a business decision, but once you’ve decided you need to create urgency amongst consumers of the soon-to-be-deprecated version of your API, these techniques are a better approach than just going dark on active customers on the scheduled date.

We introduced a sleep and brownout policy to Zuplo to make this even easier. If you want to try it out for yourself and schedule a brownout, go sign up for Zuplo.
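If you’re curious what a brownout amounts to in code, here’s a minimal sketch as a custom TypeScript policy. The window timestamps, message, and policy shape are illustrative assumptions, not the actual Zuplo brownout policy:

import { ZuploRequest } from "@zuplo/runtime";

// Illustrative brownout window for the deprecated version (times in UTC).
const brownoutStart = Date.parse("2022-07-01T07:00:00Z");
const brownoutEnd = Date.parse("2022-07-01T07:02:00Z"); // 2 minutes later

export default async function (
  request: ZuploRequest
): Promise<ZuploRequest | Response> {
  const now = Date.now();
  if (now >= brownoutStart && now < brownoutEnd) {
    // Temporarily refuse old-version traffic so the consumer's alerts fire.
    return new Response("v1 is being deprecated - please migrate to /v2/", {
      status: 503,
      headers: { "retry-after": "120" },
    });
  }
  return request; // outside the brownout window, pass through
}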

Weekly Zoom Chat

Are you starting work on, or in the middle of building, a public customer or partner API? Register for our "Building a Customer API" weekly chat on Zoom every Thursday at 3pm ET/12pm PT and learn from featured guests and other developers building public APIs.

· 2 min read
Josh Twist

Planning to build a new public customer or partner API but not sure where to start your research? Connect with fellow developers and nail development of your new API by joining our meetup on Zoom, "Building a Customer API".

On June 9, 2022 at 3pm ET/12pm PT we'll be joined by Utsav Shah of fast-growing startup Vanta (and host of the Software at Scale podcast). He'll share his experience building a customer API and the challenges of implementing rate limiting. Then we'll have an open discussion on implementing rate limiting and exposing APIs to customers and partners.

Register Here


About Utsav Shah

Utsav Shah is a software engineer at Vanta and host of the Software at Scale podcast. Before joining Vanta, Utsav was responsible for enabling product velocity and ensuring the reliability of Dropbox’s monolith Python web application and large async systems like Cape.

About Vanta

Our mission at Vanta is to be a layer of trust on top of cloud services, to secure the internet, increase trust in software companies, and keep consumer data safe. Think of us as your automated security and compliance expert.

Building a Customer API

At 3pm ET/12pm PT every Thursday, Zuplo hosts a 1-hour round-table discussion virtually (Zoom) to help developers plan for and build their public customer or partner APIs. We feature one attendee each week who has recently built a public customer or partner API and who will share how they approached the process and what they learned, then stick around to answer questions and discuss. This chat is limited to 25 spaces per week to keep the conversation flowing and fruitful. We'll start with a Q&A with the featured guest and then have an open discussion immediately following.

Have more questions? Check out our event FAQ.

· 3 min read
Josh Twist

At some point, you’re going to make an update to your API that would break existing clients if they don’t change their code. That’s OK, change happens.

However, it is critical that you give yourself the option to do two things when this situation arises:

  1. Support multiple versions of your API simultaneously (so that you can give older clients the opportunity to migrate to your latest version).
  2. Inform a client that the version they have coded for is no longer supported.

Life is much easier if you think about versioning from the beginning of the lifecycle of your API. A key decision to make is how you want to design versioning into your API; that is, how should the client communicate the version of the API they are coded to work with?

There are two primary options:

  1. URL-based versioning - where the version is encoded directly in the URL, e.g. /v1/charges. This is the most common approach and is used by most large API-first companies like Stripe, Twilio, SendGrid, and Airtable.
  2. Header-based versioning - where the version is in a header; either a custom header like api-version: v3 or part of the accept header, e.g. accept: application/vnd.example+json;version=1.0. (Both styles are illustrated below.)
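For illustration, here’s what the two styles look like from the client’s side (the host api.example.com and the media type are placeholders):

// URL-based versioning: the version is part of the resource path.
await fetch("https://api.example.com/v1/charges");

// Header-based versioning: the version travels in a header instead.
await fetch("https://api.example.com/charges", {
  headers: { accept: "application/vnd.example+json;version=1.0" },
});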

Our recommendations

1/ Keep those options open

First and foremost, we strongly recommend that you make the version a mandatory part of all requests received by your API. So any request that doesn’t include the version should receive a 4xx error code (400 if it’s a required header, 404 if it is missing from the URL).

This is the most important decision because it means you always have both options outlined at the opening of this post.

2/ Keep it simple

After this, we recommend URL-based versioning. There is plenty of precedent in the market and it’s the easiest for developers to use - easier to test with curl, call with fetch, and try in a browser if you support GET requests. It’s just easier.

3/ Use headers if you’re passionate about building a pure REST implementation

The primary reason to use headers over URL-based versioning is to avoid violating a REST principle that states a URI should refer to a unique resource (v1/foo and v2/foo could theoretically point to the same resource). However, such pure implementations of REST APIs have not proven popular and are trickier for developers to use.

4/ Don’t break rule 1

There are examples of APIs in the public domain that have a default version if the client doesn’t specify a version. Here’s GitHub’s documentation on their API:

GitHub version documentation

Even though it encourages developers to use the version header, the API still works without it and just assumes v3. We think this is a mistake; if GitHub upgrades to v4 and that becomes the new default, all of those old clients that didn’t follow the best practice will experience unpredictable behavior and strange errors.

· 3 min read
Josh Twist

The day comes for most startups, even those that aren’t API-first SaaS businesses, when a large partner or customer - who can’t use your UI at the scale they need - requests an API. This is a high-quality problem: a customer that integrates with your API has higher switching costs and is more likely to be retained.

Sharing an API is a non-trivial exercise that can eat a surprising amount of your eng team’s time, and there are three pillars you need as a minimum bar:

authentication, documentation, protection

1/ Authentication

How will the partner authenticate securely with your API?

Most startups go with API-key authentication because it’s secure and the easiest to use for developers (more on this here) - this is the right choice in my experience. There’s a lot to consider when building a secure API-key solution:

  • Where do I store the keys, and how do I store them securely?
  • How do I let partners self-serve?
  • Can partners easily roll keys to ensure best practice security?
  • How do I implement read-once key infrastructure for best-practice security?

This can take even the best engineering teams multiple weeks to ship, and it becomes an ongoing maintenance and scaling burden that reduces your team's agility.

2/ Documentation

How will the developer learn how to use your API?

Your partner’s developers will need documentation to learn how to use your API. Maybe a shared Google Doc is enough? But your engineering team will spend much less time helping partner eng folks if they have real API docs - generated using open standards like OpenAPI - with integrated test clients and API keys. This will save time for both your team and your partner’s eng team - they’ll thank you for it!

3/ Protection

How do you stop a rogue for-loop in your partners’ code from taking down your whole business?

A partner hitting your API with a Denial of Service attack is rarely a deliberate, malicious act. Rather, it’s probably a simple coding error that results in an infinite loop that - without protection - can take down your API, or your whole business. That’s why rate-limiting is an essential part of any shared API.

Wait... there are more than three pillars?

Those three pillars are just the basics - ideally, you also have a strategy around versioning, analytics, composition, routing, and caching, and the right abstraction to deal with unexpected needs from new partners.

Any API program eventually runs into a customer that needs JWT or mTLS security - will your solution easily allow the layering of different security options? Can you easily maintain both versions of your API? Can you quickly implement a brown-out to push partners onto v2?

· One min read
Josh Twist

We recently shared some reasoning on why we think API keys are the best authentication approach for your public API.

We think this is so important that we built it as a feature of the Zuplo gateway. Our API Key management includes:

  • secure storage and management of keys and metadata - with an admin UI and API to manage consumers.
  • integrated developer portal with self-serve key management for your customers.

Note: if you've already built your own API Key solution and have a database store with your keys and users, we can easily integrate Zuplo authentication with custom policies. It's never too late to make hosting your API much easier.

See it all in action in this 2-minute video:

Try it out now, for free at portal.zuplo.com

· 2 min read
Josh Twist

Have you noticed something the best API companies have in common?

Stripe, SendGrid, Twilio and Airtable logos

Folks like Stripe, Twilio, Airtable, SendGrid, and many more?

Yep, they all use API Keys as the primary authentication method for their APIs. Why is this?

There are two primary reasons:

1/ Security

While there is no formal protocol for API keys, and most implementations have some level of variability - at least compared to standards like OAuth and OpenID Connect - they offer a good level of security, arguably greater than JWT tokens, for a few reasons:

  • Revokability - API Keys can be quickly revoked for any reason, whereas JWT tokens are hard to revoke and often require the reset of an entire certificate or tenant.
  • Opaqueness - unlike JWT tokens, which can be easily decoded using services like jwt.io, API keys are completely opaque and don’t reveal any hint of your internal authorization mechanism.
  • Self-management - a good API program with API keys allows consumers to manage API keys themselves and, in the event of a leak (or accidental push to a GitHub repo), the consumer can quickly revoke and roll their keys. If the same mishap occurs with a JWT token, it is typically harder for the consumer to self-serve and revoke the validity of the JWT token.

2/ Optimizing Time to First Call (TTFC)

Great API companies focus on developer experience and obsess about metrics like time-to-first-call. That is, how long does it take for a developer to find your documentation, get everything set up, and successfully invoke your API?

If you choose an authentication option that has some complexity (token acquisition, token refresh, etc) you are automatically increasing that Time to First Call and thus reducing the conversion rate of developers that get to know and adopt your platform.

Summary

Of course, there are valid reasons to use a more complex authentication method for your API. In particular, if the consumer is doing work on behalf of another identity (e.g. accessing the GitHub API on behalf of a GitHub user), then OAuth makes the most sense.

However, if you’re primarily identifying B2B partners, API keys are a secure choice that is easy to use and will optimize the conversion funnel for winning new developers.

· 2 min read
Nate Totten

Unfortunately, the Cloudflare Pages integration for GitHub doesn't support GitHub deployment statuses, which means you can't trigger actions on deployment_status. deployment_status is the ideal event to use when running tests after a deployment because the event details include the deployment URL. This is how we run tests on Vercel deployments after they finish.

Fortunately, there is a pretty hacky way to do this with Cloudflare Pages. The check_run event is fired by the Pages integration, and the check run provides an output object in the event. Unfortunately, its contents are meant to be human-readable output. It looks like this:

"output": {
"annotations_count": 0,
"annotations_url": "https://api.github.com/repos/org/repo/check-runs/12356/annotations",
"summary": "<table><tr><td><strong>Latest commit:</strong> </td><td>\n<code>be76cc6</code>\n</td></tr>\n<tr><td><strong>Status:</strong></td><td>&nbsp;✅&nbsp; Deploy successful!</td></tr>\n<tr><td><strong>Preview URL:</strong></td><td>\n<a href='https://4fcdd3b4.site-name.pages.dev'>https://4fcdd3b4.site-name.pages.dev</a>\n</td></tr>\n</table>\n\n[View logs](https://dash.cloudflare.com/?to=/:account/pages/view/portal/4fcdd3b4-e0a7-42df-b2d9-4a89a1981d9d)\n",
"text": null,
"title": "Deployed successfully"
},

With a little bit of scripting, though, it is possible to extract the URL from that summary string. I used grep and cut to do that with the following:

echo "${{ github.event.check_run.output.summary }}" | grep -o "href\=.*>https" | cut -c 7-43

Note: the 7-43 at the end is the start and end index of the substring to pull from the grep result. Those numbers need to be adjusted depending on the length of your deployment URL.

To use the value, just extract it into an environment variable and you can use it in later steps of your action.

on:
  check_run:
    types: [completed]

jobs:
  build:
    name: Test
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Get Deployment URL
        run: echo "DEPLOYMENT_URL=$(echo "${{ github.event.check_run.output.summary }}" | grep -o "href\=.*>https" | cut -c 7-43)" >> $GITHUB_ENV
      - run: echo $DEPLOYMENT_URL

Not a super elegant solution, but it works. At least until Cloudflare changes that status message.

· 2 min read
Josh Twist

AKA Why you need rate-limiting, and ideally Dynamic Rate Limiting.

Length: 2 minutes

Before launching any API program you need to think about protection. Many API developers don't think they need to worry about rate-limiting because they aren't a target for hackers. That's probably true in the early days, but it isn't the hackers that are going to DoS you; it's your prized customers.

The most common type of API abuse isn't malicious, it's accidental. It's a misplaced for-loop in your partner's code that takes you down. This happens often.

So ideally your API has some per-user rate limits. This is super easy with Zuplo.

However, in reality this isn't enough. You should probably have different rate limits for different customers. For example, your free customers should have a more restrictive policy, while your paid or premium customers get a more lenient one.

Guess what? That's also easy with Zuplo, because you can write custom code that tells the rate-limit policy how to work. In the video above we show how you can modify the policy limits based on customer type (in this example, stored in the API key metadata, but it could be based on a JWT claim, or even a cacheable DB lookup if required).

Here's the code from the request-limit-lookup.ts file in the video:

import { CustomRateLimitPolicyOptions, ZuploRequest } from "@zuplo/runtime";

// Per-minute limits by customer type
const requestsPerMinute = {
  premium: 3,
  free: 1,
};

export default function (request: ZuploRequest): CustomRateLimitPolicyOptions {
  const customerType = request.user.data.customerType;
  const reqsPerMinute = requestsPerMinute[customerType];

  const rateLimitConfig = {
    // The key tells the rate limiter how to correlate different requests
    key: request.user.sub,
    requestsAllowed: reqsPerMinute,
    timeWindowMinutes: 1,
  };

  return rateLimitConfig;
}

And here's the config for the rate-limit policy:

{
  "export": "BasicRateLimitInboundPolicy",
  "module": "$import(@zuplo/runtime)",
  "options": {
    "rateLimitBy": "function",
    "identifier": {
      "module": "$import(./modules/request-limit-lookup)",
      "export": "default"
    }
  }
}

Stay safe out there folks!