MCP tools let AI agents interact with your APIs. But tools alone only give agents capabilities; they don't tell the agent how to use those capabilities effectively. That's where MCP prompts come in.
What Are MCP Prompts?
MCP prompts are reusable, parameterized templates that guide an AI chat session to use available tools correctly, or to achieve a standardized response regardless of the client they are used in. They're part of the Model Context Protocol specification alongside tools and resources.
When an agent invokes a prompt, your server returns structured messages that get injected into the AI's context. These messages can set the agent's role, provide instructions, establish constraints, or define multi-step workflows.
Unlike tools (which perform actions and return data), prompts return instructions that shape how the agent thinks and behaves, and they are designed to be invoked by users rather than by the AI itself.
Demo & Example of MCP Server Prompts
A complete Bookmark Manager MCP server with tools and prompts that demonstrates multi-step AI workflows.
Tools vs Prompts in Practice
Say you've built an MCP server exposing a bookmark manager API with three tools:
- list-bookmarks: fetch saved bookmarks
- save-bookmark: add a new bookmark
- delete-bookmark: remove a bookmark
An agent can use these tools, but it has no context for when or why to use them together. You'd need to explain your workflow every time.
A prompt can change that.
Defining a Prompt
In this example, we define a research_roundup prompt that instructs the agent
to fetch bookmarks, group them by topic, identify research patterns, and suggest
next steps the user might take.
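Per the Model Context Protocol specification, invoking a prompt returns a `messages` payload that gets added to the chat context. A result for research_roundup might look roughly like this (the exact instruction wording is illustrative):

```json
{
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Fetch my bookmarks from the last 7 days, group them by topic, identify research patterns, and suggest next steps I might take."
      }
    }
  ]
}
```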
Now anyone using the Bookmark Manager MCP server gets that workflow for free.
Implementing a Prompt
In Zuplo, prompts are routes that return structured messages. Here's the handler
for our research_roundup prompt:
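A minimal sketch of what such a handler could look like. The message shape follows the MCP spec; the function and parameter names here are assumptions, and the real handler would use Zuplo's `ZuploRequest`/`ZuploContext` types:

```typescript
// Build the MCP prompt messages for a given look-back window.
// The { messages: [...] } shape follows the MCP prompts/get result.
export function buildResearchRoundupMessages(days: number) {
  return {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text:
            `Fetch my bookmarks from the last ${days} days using the ` +
            `list-bookmarks tool. Group them by topic, identify any ` +
            `research patterns, and suggest next steps I might take.`,
        },
      },
    ],
  };
}

// Route handler sketch: reads the optional "days" argument from the
// query string (defaulting to 7) and returns the messages array that
// will populate the AI's context.
export default async function researchRoundup(request: {
  query: Record<string, string>;
}) {
  const days = Number(request.query.days ?? "7");
  return buildResearchRoundupMessages(days);
}
```

The handler itself stays thin: all the domain knowledge lives in the instruction text, which is what the agent actually receives.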
The handler accepts an optional parameter (in this case, how many days to look
back) and returns a messages array. These messages populate the AI's context
and guide its behavior and use of the available MCP tools.
Route Configuration
In Zuplo, this prompt is configured as a route in your OpenAPI spec with
x-zuplo-route.mcp.type set to "prompt":
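A sketch of what that route entry might look like in the OpenAPI document. The path, module name, and description text are assumptions; the `mcp` block with `type`, `name`, and `description` is the part this article describes:

```json
{
  "paths": {
    "/prompts/research-roundup": {
      "post": {
        "operationId": "research-roundup",
        "x-zuplo-route": {
          "handler": {
            "export": "default",
            "module": "$import(./modules/research-roundup)"
          },
          "mcp": {
            "type": "prompt",
            "name": "research_roundup",
            "description": "Summarize recent bookmarks, group them by topic, and suggest next research steps."
          }
        }
      }
    }
  }
}
```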
The mcp block controls how this route appears to MCP clients:
- type: "prompt" registers it as a prompt rather than a tool
- name is the identifier clients use to invoke it
- description helps agents understand when to use it
Registering with the MCP Server
Add the prompt to your MCP server handler's operations array alongside your
tools:
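A sketch of what that registration could look like; the handler export, module reference, and exact option names are assumptions, but the idea is that the prompt route sits in the same operations array as the tool routes:

```json
{
  "handler": {
    "export": "mcpServerHandler",
    "module": "$import(@zuplo/runtime)",
    "options": {
      "name": "bookmark-manager",
      "operations": [
        { "path": "/bookmarks", "method": "GET" },
        { "path": "/bookmarks", "method": "POST" },
        { "path": "/bookmarks/{id}", "method": "DELETE" },
        { "path": "/prompts/research-roundup", "method": "POST" }
      ]
    }
  }
}
```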
Now when clients call prompts/list, they'll see the research_roundup prompt
in the list. When they invoke it, the agent receives the specific instructions
the prompt defines and uses the available MCP tools to execute the workflow.
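Per the MCP specification, the prompts/list result enumerates each prompt with its name, description, and arguments. A response with our prompt registered would look roughly like this (the description text is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "prompts": [
      {
        "name": "research_roundup",
        "description": "Summarize recent bookmarks and suggest next research steps",
        "arguments": [
          {
            "name": "days",
            "description": "How many days to look back",
            "required": false
          }
        ]
      }
    ]
  }
}
```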

Once the prompt is selected and any additional user inputs are captured, the full prompt is added to the chat session context, where the agent can act on it.

Why This Matters
Working out how multiple MCP tools should be used together isn't easy for an AI agent to do on its own.
Prompts let you encode domain expertise into your MCP server. Instead of hoping users know how to combine your tools effectively (let's face it, they won't!), you ship pre-built workflows that guide the AI toward useful outcomes.
Wherever you know of common usage patterns, creating and shipping a prompt that achieves the best outcome for your users is a wise move.
In this example, users don't need to figure out how to analyze their research
patterns. They invoke research_roundup and the agent knows exactly what to do.
You can build prompts for any workflow: daily planning, data cleanup, report generation, onboarding sequences. The tools provide the capabilities; the prompts provide the intelligence.
