Flowise, one of the most popular open-source AI agent builders, has been hit by a critical vulnerability with confirmed in-the-wild exploitation. CVE-2025-59528 carries a perfect CVSS 10.0 score, the maximum severity rating, and it allows pre-authentication remote code execution. Attackers can fully compromise a vulnerable Flowise instance without any credentials.
VulnCheck identified between 12,000 and 15,000 Flowise instances exposed on the public internet at the time of research, many with no authentication configured.
This is a textbook example of what happens when AI agent infrastructure is deployed without gateway-level security. And it’s a pattern that extends far beyond Flowise. This post is for you if any of the following apply:
- You run Flowise, Langflow, or a custom MCP server exposed to the public internet
- You're deploying AI agent endpoints without gateway-level authentication
- You stood up an LLM orchestration API for prototyping that's now taking production traffic
- You rely on an AI tool's optional built-in auth instead of enforcing it upstream
What Makes CVE-2025-59528 So Dangerous
The vulnerability lives in Flowise’s CustomMCP node, which handles configuration settings for connecting to external Model Context Protocol servers. The root cause is a single-line issue: Flowise’s convertToValidJSONString function passes user-provided configuration strings directly into a JavaScript Function() constructor without any input validation or sanitization.
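To make the root cause concrete, here is a simplified sketch of the vulnerable pattern. This is not Flowise’s actual source; the function names are reused purely for illustration:

```typescript
// Simplified sketch of the vulnerable pattern, NOT Flowise's actual code.
// Evaluating a user-controlled string with the Function constructor lets
// the caller run arbitrary JavaScript inside the Node.js process:
function convertToValidJSONStringUnsafe(input: string): string {
  const value = Function(`return ${input}`)(); // attacker-controlled code executes here
  return JSON.stringify(value);
}

// A payload like the string below reaches child_process instead of being
// parsed as data:
//   '(() => { require("child_process").execSync("whoami"); return {}; })()'

// The safe variant treats the input strictly as data and rejects anything
// that is not valid JSON:
function convertToValidJSONStringSafe(input: string): string {
  return JSON.stringify(JSON.parse(input)); // throws on executable payloads
}
```

JSON.parse never executes its input, so a code-bearing payload fails with a SyntaxError instead of running.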
Successful exploitation gives attackers access to Node.js runtime modules like child_process (arbitrary command execution) and fs (full filesystem access). From there, they can execute commands on the host machine, extract API keys and credentials, access sensitive data, and deploy persistent backdoors.
The CustomMCP endpoint is reachable pre-authentication on vulnerable builds, so no API key on the Flowise instance itself is required to trigger it. Check the NVD entry for the affected version range and the patched release. CVE-2025-8943 and CVE-2025-26319 previously saw in-the-wild exploitation on Flowise as well, which is worth noting for teams deciding how much perimeter security to put in front of it.
The Bigger Pattern: Unprotected AI Agent Endpoints
Flowise isn’t the only AI tool being deployed without adequate security. The same pattern plays out across the ecosystem:
- Flowise and Langflow provide drag-and-drop AI agent builders that developers deploy directly to the internet for quick prototyping, often without adding authentication.
- Custom MCP servers get stood up with default configurations that expose powerful tool-calling capabilities to anyone who can reach the endpoint.
- LLM orchestration APIs built with frameworks like LangChain or LlamaIndex get deployed with permissive access controls to speed up development, then never get locked down for production.
The numbers tell the story. Akamai’s 2026 State of the Internet report found that average daily API attacks per organization surged 113% year-over-year, jumping from 121 to 258 attacks per day. 87% of organizations experienced an API-related security incident in 2025. And Salt Security’s research shows 80% of organizations lack continuous, real-time API monitoring, leaving them blind to active threats targeting their AI agent infrastructure.
The convergence is clear: AI tooling is multiplying the API attack surface, and most teams aren’t keeping pace with security.
Why This Keeps Happening
Three forces are pushing AI endpoints into production without proper security:
Security gets deferred during prototyping. When you’re building an AI agent workflow, the goal is to get the agent working: connecting to LLMs, calling tools, processing responses. Authentication and access control feel like friction to add later. “Later” often never comes, and the prototype becomes the production deployment.
AI tooling often lacks built-in authentication. Many AI agent builders and MCP server implementations ship without authentication enabled by default. Flowise does support API key authentication, but it’s opt-in, and most deployments never flip it on. When the default is “open to anyone,” that’s usually how things stay.
Rapid prototyping leads to production exposure. The move from “works on my machine” to “running in the cloud” happens fast with modern deployment tools. A developer can spin up a Flowise instance on a VPS in minutes. Without a deliberate security step in the deployment process, that instance goes live with whatever defaults the tool ships with.
The Gateway-First Solution
The fix isn’t to hope that every AI tool will eventually ship with robust built-in security. The fix is to never expose AI agent infrastructure directly to the internet in the first place.
An API gateway sits between the public internet and your AI endpoints, enforcing security policies on every request before traffic ever reaches the underlying tool. It’s the same pattern that has protected traditional APIs for years, and it applies directly to AI agent endpoints, MCP servers, and LLM orchestration layers.
With a gateway in front of your AI infrastructure:
- Every request is authenticated. No anonymous access to your AI tools, period. Whether you use API keys, OAuth, or JWT, the gateway ensures every caller has valid credentials before any request reaches your backend.
- Traffic is rate-limited. Even if credentials are compromised, rate limiting contains the blast radius. A compromised API key can’t make unlimited calls to your AI endpoints.
- Malicious requests are blocked at the edge. Bot detection, request validation, and DDoS protection stop attacks before they reach your infrastructure, not after they’ve already exploited a vulnerability.
- You get a complete audit trail. Every request to your AI endpoints is logged, giving you visibility into who is calling what, when, and how often.
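The four guarantees above can be sketched as a single enforcement pipeline. This is a framework-agnostic illustration, not any specific gateway product; the key store, limits, and response bodies are assumptions:

```typescript
// Minimal, framework-agnostic sketch of gateway-side enforcement.
// The key store, limit, and responses are illustrative only.
type GatewayResult = { status: number; body: string };

const validApiKeys = new Set(["example-consumer-key"]); // issued per consumer
const requestLog: { key: string; path: string; at: number }[] = [];

function handleRequest(apiKey: string | undefined, path: string): GatewayResult {
  // 1. Authenticate: anonymous requests never reach the backend.
  if (!apiKey || !validApiKeys.has(apiKey)) {
    return { status: 401, body: "missing or invalid API key" };
  }
  // 2. Rate limit: cap each credential's blast radius (10 req/min here).
  const windowStart = Date.now() - 60_000;
  const recent = requestLog.filter((r) => r.key === apiKey && r.at > windowStart);
  if (recent.length >= 10) {
    return { status: 429, body: "rate limit exceeded" };
  }
  // 3. Audit: every forwarded request is attributable to a specific consumer.
  requestLog.push({ key: apiKey, path, at: Date.now() });
  // 4. Only now is the request proxied to the AI backend.
  return { status: 200, body: `forwarded ${path} upstream` };
}
```

An unauthenticated scan-and-exploit attempt stops at step 1; a leaked key is contained at step 2 and fully visible in step 3.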
The exploitation pattern observed in the wild is unauthenticated: attackers scan the public internet, find a Flowise instance, and send a crafted request to the CustomMCP node. A gateway that requires authentication on every route cuts off that entire pattern. An authorized caller could still send the same payload, so the gateway isn’t a substitute for patching, but it reduces the attack surface from “anyone on the internet” to “anyone holding a valid credential you issued.”
How Zuplo Secures AI Agent Endpoints
Zuplo’s API gateway is purpose-built for this exact scenario. Here’s how it protects AI agent infrastructure:
MCP Server Handler
Zuplo’s MCP Server Handler transforms your API routes into MCP tools while inheriting every security policy configured on your gateway. Each route you mark as an MCP tool becomes callable via the handler, which re-invokes that same route inside the gateway without going back out over HTTP. Your existing authentication, rate limiting, and validation policies apply automatically to every MCP tool call.
The policy execution order is explicit: inbound policies on the MCP route run first, then inbound policies on the target tool’s route, then the tool’s outbound policies, and finally the MCP route’s outbound policies. Security is enforced at every layer.
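The ordering described above can be sketched as nested function composition. This is an illustration of the concept, not Zuplo’s actual runtime:

```typescript
// Illustrative sketch of the policy execution order described above
// (not Zuplo's runtime). A policy pair wraps a handler: the inbound half
// runs before the inner handler, the outbound half after it.
type Handler = () => string;
const order: string[] = [];

function withPolicies(routeName: string, inner: Handler): Handler {
  return () => {
    order.push(`${routeName} inbound`);
    const result = inner();
    order.push(`${routeName} outbound`);
    return result;
  };
}

// The MCP route wraps the target tool's route, which wraps the tool itself:
const toolHandler: Handler = () => "tool result";
const toolCall = withPolicies("mcp-route", withPolicies("tool-route", toolHandler));
```

Invoking `toolCall()` runs the policies in exactly the order listed above: MCP-route inbound, tool-route inbound, tool-route outbound, MCP-route outbound.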
API Key Authentication
Zuplo’s API Key Authentication policy gives every consumer of your AI endpoints unique credentials. No shared keys, no anonymous access. Each API key is tied to a specific consumer with metadata you control, so you know exactly who is making every request. Setting up API key authentication at the gateway takes minutes, not days, and a gateway-issued key requirement would have stopped every unauthenticated CVE-2025-59528 exploitation attempt seen in the wild.
Rate Limiting
The Rate Limiting policy counts requests per fixed window, with the counter resetting at the end of each window. You can rate limit by IP address, user identity, API key, or custom logic. For AI agent endpoints, this is critical: it prevents runaway agents from overwhelming your infrastructure and caps the blast radius if a credential is compromised.
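A fixed-window counter of this kind fits in a few lines. The sketch below is illustrative (the limit and window size are assumptions), keyed by whatever identifier you choose — IP, user identity, or API key:

```typescript
// Minimal fixed-window rate limiter sketch. Per-key counters all reset
// when the window rolls over; limit and window size are illustrative.
class FixedWindowLimiter {
  private counts = new Map<string, number>();
  private windowStart = 0;

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    // When the current window ends, every counter resets to zero.
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now;
      this.counts.clear();
    }
    const used = this.counts.get(key) ?? 0;
    if (used >= this.limit) return false; // over the cap for this window
    this.counts.set(key, used + 1);
    return true;
  }
}
```

Because counters are per key, one runaway agent or leaked credential hits its cap without affecting other consumers.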
Bot Detection and DDoS Protection
On enterprise plans, Zuplo’s bot detection policy scores every request and can automatically block traffic that looks automated or abusive. On Zuplo’s managed edge deployment, DDoS protection is always on at the edge, stopping malicious traffic before it ever reaches your AI infrastructure.
Edge-Native Enforcement
All of these policies run at the edge, across 300+ global points of presence. Malicious requests are blocked at the nearest edge location, not at your origin. An unauthenticated attempt at CVE-2025-59528 is rejected at the perimeter. The crafted payload never reaches your Flowise instance, your MCP server, or whatever AI tool you’re running behind the gateway.
MCP Server Handler and security policies
Reference docs for the MCP Server Handler, API Key Authentication, and Rate Limiting policies covered above.
Secure Your AI Endpoints Now
The Flowise vulnerability is a wake-up call, but the underlying problem is architectural. If your AI agent endpoints are reachable from the public internet without authentication and access control, you’re one CVE away from a full compromise.
The fix is straightforward:
- Never expose AI tooling directly to the internet. If it’s a prototype that doesn’t need public access, keep it off the public internet entirely. If it’s productionized, put an API gateway in front of every AI endpoint: Flowise, custom MCP servers, LLM APIs, agent orchestrators, all of it. Bind the tool itself to localhost or a private network so the only public route into it is through the gateway.
- Require authentication on every request. API keys are the simplest starting point. No anonymous access, ever.
- Add rate limiting and bot protection. Contain blast radius and block automated abuse.
- Monitor everything. You can’t protect what you can’t see.
If you’re running Flowise, apply the vendor patch and rotate any credentials the instance had access to. Patching alone isn’t enough. The next vulnerability is a matter of when, not if, and a gateway-first architecture means that when the next CVE drops, your AI infrastructure is already protected.
Zuplo makes this trivial to set up. You can have authentication and rate limiting running in front of your AI endpoints in minutes, with zero infrastructure to manage. Get started for free, or check out the AI Gateway and MCP Gateway product pages.