MCP tools let AI agents interact with your APIs. But tools alone only give agents capabilities; they don't tell the agent how to use those capabilities effectively. That's where MCP prompts come in.
What Are MCP Prompts?
MCP prompts are reusable, parameterized templates that guide an AI chat session to use available tools correctly, or to achieve a standardized response regardless of the client they are used in. They're part of the Model Context Protocol specification alongside tools and resources.
When an agent invokes a prompt, your server returns structured messages that get injected into the AI's context. These messages can set the agent's role, provide instructions, establish constraints, or define multi-step workflows.
Unlike tools (which perform actions and return data), prompts return instructions that shape how the agent thinks and behaves. They're also designed to be invoked by users rather than by the AI itself.
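As a rough sketch (simplified from the MCP specification, not Zuplo-specific), the structure a prompt returns looks something like this in TypeScript. The handler shown later in this post returns exactly this messages shape.

// Simplified sketch of a prompt result, based on the MCP specification.
// Real results can also carry image or embedded-resource content blocks.
interface PromptMessage {
  role: "user" | "assistant";
  content: {
    type: "text";
    text: string;
  };
}

interface GetPromptResult {
  description?: string; // optional human-readable summary
  messages: PromptMessage[]; // injected into the AI's context on invocation
}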
Watch the Demo
Tools vs Prompts in Practice
Say you've built an MCP server exposing a bookmark manager API with three tools:
- list-bookmarks - fetch saved bookmarks
- save-bookmark - add a new bookmark
- delete-bookmark - remove a bookmark
An agent can use these tools, but it has no context for when or why to use them together. You'd need to explain your workflow every time.
A prompt can change that.
Defining a Prompt
In this example, we define a research_roundup prompt that instructs the agent
to fetch bookmarks, group them by topic, identify research patterns, and suggest
next steps the user might take.
Now anyone using the Bookmark Manager MCP server gets that workflow for free.
Implementing a Prompt
In Zuplo, prompts are routes that return structured messages. Here's the handler
for our research_roundup prompt:
// modules/research-roundup-prompt.ts
import { ZuploRequest, ZuploContext } from "@zuplo/runtime";

export default async function (request: ZuploRequest, context: ZuploContext) {
  // The prompt's optional "days" argument arrives as the JSON request body
  const body = await request.json();
  const days = parseInt(body.days, 10) || 7;

  return {
    messages: [
      {
        role: "assistant",
        content: {
          type: "text",
          text: `You are a research assistant helping the user understand what they've been exploring lately.
1. Use list-bookmarks to fetch the user's saved bookmarks
2. Filter to only those saved in the last ${days} days based on created_at
3. Group them by tags to identify research themes
4. For each theme, summarize what the bookmarks suggest they're researching
5. Identify patterns or connections across themes
6. Suggest what they might want to explore next
Keep it conversational and insightful. Don't just list the bookmarks - synthesize what they mean.`,
        },
      },
    ],
  };
}
The handler accepts an optional parameter (in this case, how many days to look
back) and returns a messages array. These messages populate the AI's context
and guide its behavior and use of the available MCP tools.
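To make the parameter flow concrete, here's a rough sketch of the prompts/get request a client might send to invoke this prompt over MCP's Streamable HTTP transport. The gateway URL is a placeholder, and a real client would run the MCP initialize handshake first. Prompt arguments are passed as strings on the wire, which is why the handler parses days with parseInt and why the route schema below accepts either a number or a string.

// Rough sketch of a raw prompts/get call; the URL is a placeholder and a
// real client would perform the MCP initialize handshake before this.
const response = await fetch("https://your-gateway.example.com/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP clients are expected to accept both response formats
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "prompts/get",
    params: {
      name: "research_roundup",
      // Arguments are strings in the protocol; here they reach the handler
      // as its JSON request body, where "days" is parsed
      arguments: { days: "14" },
    },
  }),
});
// Depending on the server, the JSON-RPC response arrives as plain JSON
// or as a server-sent event stream.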
Route Configuration
In Zuplo, this prompt is configured as a route in your OpenAPI spec with
x-zuplo-route.mcp.type set to "prompt":
{
  "/prompts/research-roundup": {
    "post": {
      "operationId": "research-roundup",
      "summary": "Summarize recent research based on saved bookmarks",
      "requestBody": {
        "required": true,
        "content": {
          "application/json": {
            "schema": {
              "type": "object",
              "properties": {
                "days": {
                  "oneOf": [{ "type": "number" }, { "type": "string" }],
                  "description": "How many days back to look (default: 7)"
                }
              }
            }
          }
        }
      },
      "x-zuplo-route": {
        "corsPolicy": "none",
        "handler": {
          "export": "default",
          "module": "$import(./modules/research-roundup-prompt)"
        },
        "mcp": {
          "type": "prompt",
          "name": "research_roundup",
          "description": "Get a summary of what you've been researching based on recent bookmarks"
        }
      }
    }
  }
}
The mcp block controls how this route appears to MCP clients:
- type: "prompt" registers it as a prompt rather than a tool
- name is the identifier clients use to invoke it
- description helps agents understand when to use it
Registering with the MCP Server
Add the prompt to your MCP server handler's operations array alongside your
tools:
{
  "/mcp": {
    "post": {
      "x-zuplo-route": {
        "handler": {
          "export": "mcpServerHandler",
          "module": "$import(@zuplo/runtime)",
          "options": {
            "name": "bookmark-manager",
            "version": "1.0.0",
            "operations": [
              { "file": "./config/routes.oas.json", "id": "list-bookmarks" },
              { "file": "./config/routes.oas.json", "id": "save-bookmark" },
              { "file": "./config/routes.oas.json", "id": "delete-bookmark" },
              { "file": "./config/routes.oas.json", "id": "research-roundup" }
            ]
          }
        }
      }
    }
  }
}
Now when clients call prompts/list, they'll see the research_roundup prompt
in the list. When they invoke it, the agent receives the specific instructions
the prompt defines and uses the available MCP tools to execute the workflow.
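For illustration, here's roughly what that flow looks like from a client built with the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The endpoint URL and client name are placeholders, and this is one way to wire it up, not the only one.

// Sketch of a client discovering and invoking the prompt via the
// official MCP TypeScript SDK. URL and client name are placeholders.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "bookmark-demo-client", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://your-gateway.example.com/mcp")),
);

// Discover available prompts (research_roundup should appear here)
const { prompts } = await client.listPrompts();
console.log(prompts.map((p) => p.name));

// Invoke the prompt; the returned messages are what get injected
// into the chat session's context
const { messages } = await client.getPrompt({
  name: "research_roundup",
  arguments: { days: "14" },
});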

Once the prompt is selected and any additional user inputs are captured, the full prompt is added to the chat session context, and the agent can carry out the workflow it describes.

Why This Matters
Agents don't inherently understand how multiple MCP tools are meant to be used together.
Prompts let you encode domain expertise into your MCP server. Instead of hoping users know how to combine your tools effectively (let's face it, they won't!), you ship pre-built workflows that guide the AI toward useful outcomes.
Wherever you know a common usage pattern, shipping a prompt that steers users toward the best outcome is a wise move.
In this example, users don't need to figure out how to analyze their research
patterns. They invoke research_roundup and the agent knows exactly what to do.
You can build prompts for any workflow: daily planning, data cleanup, report generation, onboarding sequences. The tools provide the capabilities; the prompts provide the intelligence.
Check out the full MCP Server Prompts example and read more about MCP prompts in the Zuplo docs.