Understanding MCP Prompts and Resources: The Hidden Gems of Model Context Protocol
When people talk about Model Context Protocol (MCP), they usually focus on tools: those executable functions that let AI models take actions in the world. But MCP has two other powerful features that are less understood but equally important: prompts and resources.
What is MCP Again?
Model Context Protocol is like USB for AI integrations. Instead of building custom connections between every AI application and every data source or tool, MCP provides a standardized way for them to communicate.
MCP defines three core primitives:
- Tools: Actions the AI can execute (like running code or sending emails)
- Resources: Data the AI can read (like files or database content)
- Prompts: Reusable templates for AI interactions
```mermaid
graph TB
    subgraph "MCP Architecture"
        Client["AI Client<br/>(Claude Desktop)"]
        Server["MCP Server<br/>(Your Custom Server)"]
    end
    subgraph "Three Primitives"
        Tools["Tools<br/>Actions with side effects<br/>Model-controlled"]
        Resources["Resources<br/>Read-only data<br/>Application-controlled"]
        Prompts["Prompts<br/>Reusable templates<br/>User-controlled"]
    end
    Client <--> Server
    Server --> Tools
    Server --> Resources
    Server --> Prompts
    Tools --> Actions["Send email<br/>Query database<br/>Run computation"]
    Resources --> Data["Files<br/>Logs<br/>Metrics"]
    Prompts --> Templates["Code review<br/>Git commits<br/>Bug analysis"]
```
While tools get most of the attention, prompts and resources are just as valuable for everyday workflows.
MCP Prompts: Your Personal AI Assistant’s Playbook
What Are Prompts?
MCP prompts are pre-built instruction templates that servers can offer to clients. Think of them as saved recipes for common AI tasks that you can pull up and use whenever needed. Unlike tools (which the AI decides when to use) or resources (which provide background context), prompts are user-controlled: you explicitly choose when to invoke them.
Imagine having a collection of perfectly crafted prompts for:
- Generating commit messages from git diffs
- Explaining code in different programming languages
- Creating documentation from existing code
- Analyzing data with specific methodologies
Instead of remembering these complex prompts or rebuilding them each time, they’re standardized and available across any MCP-compatible client.
How Prompts Work
When you connect to an MCP server that provides prompts, your AI client can discover what’s available:
```json
{
  "prompts": [
    {
      "name": "git-commit",
      "description": "Generate a Git commit message",
      "arguments": [
        {
          "name": "changes",
          "description": "Git diff or description of changes",
          "required": true
        }
      ]
    }
  ]
}
```
When you want to use a prompt, you select it and provide the required arguments. The server then returns a fully formed prompt that gets sent to your AI model:
```mermaid
sequenceDiagram
    participant U as User
    participant C as AI Client
    participant S as MCP Server
    participant M as AI Model
    U->>C: "Use git-commit prompt"
    C->>S: prompts/list
    S->>C: Available prompts
    C->>U: Show prompt options
    U->>C: Select "git-commit" + provide args
    C->>S: prompts/get (name + arguments)
    S->>C: Formatted prompt template
    C->>M: Send prompt to AI
    M->>C: AI response
    C->>U: Show result
```
```json
{
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Based on these changes:\n\n```diff\n+ def calculate_total(items):\n+     return sum(item.price for item in items)\n```\n\nGenerate a concise, descriptive commit message following conventional commit format."
      }
    }
  ]
}
```
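On the server side, producing that payload is essentially string templating. A hypothetical handler for a git-commit prompt might look like this (the function name and template wording are illustrative, not taken from the official Git server):

```python
def render_git_commit_prompt(changes: str) -> dict:
    """Fill the commit-message template and wrap it in MCP's messages format."""
    text = (
        f"Based on these changes:\n\n{changes}\n\n"
        "Generate a concise, descriptive commit message "
        "following conventional commit format."
    )
    # MCP prompt results are a list of chat messages the client forwards to the model
    return {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": text}}
        ]
    }
```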
The beauty is that these prompts can be shared across teams, refined over time, and work with any MCP-compatible AI client.
Real Example: Git MCP Server Prompts
The official Git MCP server provides several pre-built prompts:
- `commit-staged` - Generates commit messages from staged changes
- `commit-all-staged` - Creates commits for all staged files with descriptions
- `suggest-commit-message` - Analyzes git diff and suggests appropriate commit messages
- `create-pr-description` - Generates pull request descriptions from branch changes

These prompts automatically pull from git resources like `git://status`, `git://diff`, and `git://log` to provide context-aware suggestions.
MCP Resources: Your AI’s Reading Material
What Are Resources?
Resources are data sources that AI models can read to gain context about your environment. They’re like GET endpoints in a REST API. The AI requests them to load information, but they don’t perform actions or have side effects.
Resources use URI schemes to identify different types of content:
- `file://` for filesystem content
- `database://` for database queries
- `api://` for API responses
- Custom schemes for specialized data sources
The File System Connection
One of the most powerful resource types is filesystem access through `file://` URIs. An MCP filesystem server can expose files and directories that your AI can read:
```json
{
  "resources": [
    {
      "uri": "file:///project/README.md",
      "name": "Project Documentation",
      "mimeType": "text/markdown"
    },
    {
      "uri": "file:///logs/app.log",
      "name": "Application Logs",
      "mimeType": "text/plain"
    }
  ]
}
```
When your AI needs context about your project, it can read these resources to understand the codebase structure, recent changes, or current issues.
Real Example: Filesystem MCP Server Resources
The official Filesystem MCP server exposes directory contents as resources:
- `file:///project/README.md` - Project documentation
- `file:///project/package.json` - Dependencies and scripts
- `file:///project/src/` - Source code directory (with secure access controls)
- `file:///logs/app.log` - Application logs
- `file:///config/settings.json` - Configuration files
The server includes security features like configurable access controls and path traversal protection, ensuring AI can only access approved directories.
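To make path traversal protection concrete, here is a minimal sketch of the idea. The root directory, URI handling, and function name are made up for illustration; real servers also handle Windows paths, URL encoding, and symlink policy:

```python
from pathlib import Path

# only files under this directory may be served (assumed root for the example)
ALLOWED_ROOT = Path("/project").resolve()

def resolve_uri(uri: str) -> Path:
    """Map a file:// URI to a filesystem path, rejecting escapes from the root."""
    # strip the scheme, then resolve ".." segments and symlinks
    path = Path(uri.removeprefix("file://")).resolve()
    # a resolved path outside the allowed root means a traversal attempt
    if not path.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"access denied outside {ALLOWED_ROOT}: {uri}")
    return path
```

The key point is checking the path *after* resolving it, so `file:///project/../etc/passwd` is caught even though it textually starts with the allowed root.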
Dynamic Resource Discovery
Resources aren’t just static files. MCP servers can dynamically generate resource lists based on your current context. For example:
Database Resource Server
```json
{
  "resources": [
    {
      "uri": "database://users/recent",
      "name": "Recent User Signups",
      "description": "Users who signed up in the last 7 days"
    },
    {
      "uri": "database://errors/critical",
      "name": "Critical Errors",
      "description": "Critical errors from the last 24 hours"
    }
  ]
}
```
Git Resource Server
```json
{
  "resources": [
    {
      "uri": "git://commits/recent",
      "name": "Recent Commits",
      "description": "Last 10 commits on current branch"
    },
    {
      "uri": "git://diff/staging",
      "name": "Staged Changes",
      "description": "Currently staged git changes"
    }
  ]
}
```
Real-Time Updates
Resources can update in real-time. When the underlying data changes, the MCP server can notify clients that resources have been updated. This means your AI always has access to the latest information without manual refreshing.
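The update mechanism is a JSON-RPC *notification*, a message without an `id`, so no response is expected. Assuming the `notifications/resources/updated` method name from the MCP specification, building one looks like this:

```python
import json

def resource_updated_notification(uri: str) -> str:
    """Build the notification a server sends when a subscribed resource changes."""
    # notifications carry no "id" field: the client re-reads the resource, it does not reply
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "notifications/resources/updated",
        "params": {"uri": uri},
    })
```

Clients that previously subscribed to that URI respond by issuing a fresh read, so the model's context stays current.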
Real Example: Everything MCP Server
The official “Everything” server demonstrates all MCP features in one place:
Resources:
- `hello://world` - Simple greeting resource
- `counter://global` - Dynamic counter that updates in real-time
- `slow://resource` - Demonstrates long-running resource operations
Prompts:
- `simple-prompt` - Basic template with no arguments
- `complex-prompt` - Template with multiple required and optional arguments
- `resource-aware-prompt` - Prompt that references multiple resources
This server is perfect for testing and learning how all MCP primitives work together.
Resources vs Tools: When to Use What
A common question when building MCP servers is whether to expose something as a resource or a tool. While both can technically make API calls or query databases, they serve fundamentally different purposes and have different implications for how AI models use them.
Use Resources When:
- Reading data without side effects - Getting current user count, latest logs, or configuration values
- Providing context automatically - The AI should always have access to this information when reasoning
- Data that changes frequently - Real-time metrics, live status updates, or dynamic content
- Background information - Documentation, schemas, or reference data that informs decision-making
Use Tools When:
- Taking actions with side effects - Creating users, sending notifications, or modifying data
- Operations requiring explicit approval - Since tools require user permission in most clients
- Complex computations - Processing that’s expensive or shouldn’t happen automatically
- Workflows that need human oversight - Deployments, financial transactions, or sensitive operations
Consider an API that returns user analytics data. As a resource (`analytics://daily-stats`), the AI can automatically reference current metrics when discussing performance. As a tool (`get_analytics`), the AI would need permission each time and would only fetch data when explicitly needed.
The key difference is intent and control: resources provide ambient context that enhances AI reasoning, while tools perform specific actions at specific moments. A well-designed MCP server often uses both resources for reading current state and tools for making changes.
Combining Prompts and Resources
The real power comes when these primitives work together. A prompt can reference multiple resources to create rich, context-aware AI interactions:
```json
{
  "name": "project-status",
  "description": "Generate comprehensive project status report",
  "arguments": [
    {
      "name": "timeframe",
      "description": "Analysis timeframe (e.g., 'last week')",
      "required": true
    }
  ]
}
```
This prompt might automatically pull from:
- `git://commits/recent` for development activity
- `file:///logs/app.log` for system health
- `database://metrics/performance` for performance data
- `file:///docs/CHANGELOG.md` for documented changes
The result is an AI that can generate comprehensive reports with current, accurate data without you having to manually gather and paste information.
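Server-side, wiring a prompt to its resources might look like the following sketch, where `read_resource` stands in for however your server fetches a resource body by URI (the function and the two URIs it reads are illustrative):

```python
from typing import Callable

def build_status_prompt(timeframe: str, read_resource: Callable[[str], str]) -> dict:
    """Assemble a status-report prompt from live resource contents."""
    # pull current data from the resources this prompt depends on
    commits = read_resource("git://commits/recent")
    logs = read_resource("file:///logs/app.log")
    text = (
        f"Generate a project status report for {timeframe}.\n\n"
        f"Recent commits:\n{commits}\n\n"
        f"Latest application logs:\n{logs}"
    )
    return {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": text}}
        ]
    }
```

The user supplies only `timeframe`; the server gathers everything else at invocation time, so the prompt is always backed by current data.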
Why This Matters
Prompts and resources represent a shift from AI as a general assistant to AI as a specialized team member that understands your specific context and workflows. Instead of explaining your codebase every time or copying and pasting logs, your AI automatically has access to current information and knows how to use it effectively.
This standardization means:
- Consistency: Same high-quality prompts across different AI clients
- Collaboration: Teams can share and improve prompts together
- Context: AI always has access to relevant, up-to-date information
- Efficiency: No more repetitive setup or context-switching
Getting Started
To start using MCP prompts and resources:
- Explore existing servers: Check out the official MCP server repository for filesystem, git, and other common servers
- Try Claude Desktop: It has built-in MCP support for connecting to local servers
- Build your own: Use the official SDKs (Python, TypeScript, etc.) to create custom servers for your specific needs
- Share with your team: MCP servers can be shared and deployed for team-wide access
Prompts and resources are the foundation of context-aware AI, turning your assistant from a generalist into a specialized team member that knows your codebase, your data, and your workflows.