From Saved Prompts to Slash Commands: Building a Personal Prompt Library with MCP
I keep finding myself typing the same prompts over and over. “Run make lint and fix all issues.” “Do an audit and create contextswitch tickets for tech debt.” “Review this code and suggest improvements.”
These aren’t one-off requests. They’re workflows I repeat constantly across different projects. And every time, I’m either retyping them from memory or hunting through chat history to copy-paste the exact phrasing that worked well last time.
That’s what led me to build Lexicon. It’s a simple idea: save your best prompts, organize them with categories and tags, and serve them out via MCP so they work everywhere. But the implementation turned out to be more interesting than I expected, especially when I discovered how Claude Code handles MCP prompts.
The Lexicon Interface
Lexicon is a prompt manager built into a macOS app with a packaged MCP server (very similar in that way to ContextSwitch). You can create prompts with:
- Name and description - What the prompt does
- Categories - Organize by workflow type (Code Review, QA, Documentation, etc.)
- Tags - Multi-dimensional filtering (like “SWE” for software engineering prompts)
- Messages - The actual prompt text with support for arguments and templating
Here’s what a basic prompt looks like. The “Linting Review” prompt tells an agent to run the makefile’s linter, fix what it can, and report back:
Use the makefile to run our linter and formatter and fix any
issues that it cannot fix automatically. Rerun both at the
end to verify that you have fixed all issues.
Simple, but now I don’t have to remember the exact phrasing. I just pull up the prompt.
These can get much more complicated, with support for arguments and templating, multiple messages (system, assistant, and user roles), and more.
MCP Prompts as Resources
This ties back to my earlier post on MCP prompts and resources. MCP has three primitives:
- Tools - Actions the AI can execute (model-controlled)
- Resources - Data the AI can read (application-controlled)
- Prompts - Reusable templates (user-controlled)
Prompts are the third pillar that doesn’t get enough attention. They’re user-controlled templates that clients can discover and invoke. Unlike tools (which the AI decides when to use) or resources (which provide background context), prompts are explicitly requested by users.
Lexicon exposes all your saved prompts as MCP prompts. Any MCP-compatible client can:
- List available prompts
- See their descriptions and arguments
- Request a formatted prompt
- Send it to the AI model
The protocol is straightforward. When a client asks for prompts, Lexicon returns the list:
{
"prompts": [
{
"name": "linting_review",
"description": "Run make lint and fix all issues",
"arguments": []
},
{
"name": "tech_debt_audit",
"description": "Do an audit and create contextswitch tickets",
"arguments": []
}
]
}
When you invoke a prompt, you get back the full message ready to send to the AI.
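Per the MCP spec, the response to a prompts/get request carries the fully rendered messages. For the linting prompt above, the result would look something like this (the exact description text is illustrative):

```json
{
  "description": "Run make lint and fix all issues",
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Use the makefile to run our linter and formatter and fix any issues that it cannot fix automatically. Rerun both at the end to verify that you have fixed all issues."
      }
    }
  ]
}
```

The client takes those messages and sends them to the model as if the user had typed them.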
The Claude Code Integration
Here’s where it gets interesting. Claude Code doesn’t just expose MCP prompts through some UI menu. It automatically registers them as custom slash commands.
Look at what happens when you connect Lexicon to Claude Code:
Type /lexicon and you get autocomplete for all your saved prompts:
/lexicon:linting_review (MCP) Run make lint and fix all issues
/lexicon:tech_debt_audit (MCP) Do an audit and create contextswitch tickets
These aren’t built into Claude Code. They’re dynamically discovered from the MCP server. Every prompt in your Lexicon automatically becomes a slash command prefixed with the server name.
This is brilliant. You’re not switching contexts or opening menus. You just type /lexicon: and your entire prompt library appears inline.
The Self-Referential Use Case
The “Tech Debt Audit” prompt is my favorite example of how this composes. Here’s what it does:
Do a thorough audit of the current codebase looking for tech debt,
areas that need improvement, missing tests, or other issues. For
each issue you find, create a new task in ContextSwitch with:
- Clear description of the problem
- Priority level (high/medium/low)
- Relevant tags
- Links to affected files
Think about what’s happening here:
1. I run /lexicon:tech_debt_audit in Claude Code
2. Claude Code fetches the prompt from Lexicon’s MCP server
3. The prompt tells the agent to audit for tech debt
4. The agent uses ContextSwitch’s MCP tools to create tasks for what it finds
5. Those tasks show up in the ContextSwitch kanban board
6. Later sessions can pick up those tasks and work on them
Prompts with Arguments
Basic prompts are useful, but templated prompts with arguments are where this really shines. You can define prompts that accept inputs:
{
"name": "code_review",
"description": "Review code with specific focus area",
"arguments": [
{
"name": "focus",
"description": "What aspect to focus on (security, performance, style)",
"required": true
},
{
"name": "file_path",
"description": "Specific file to review",
"required": false
}
]
}
When you invoke this in Claude Code, you can pass arguments:
/lexicon:code_review focus="security" file_path="auth.swift"
The MCP server templates these into the final prompt before sending it to the AI.
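Lexicon’s actual templating is internal, but the core idea is simple substitution. A minimal sketch in Python, assuming {name}-style placeholders in the saved message text (the template string and `render_prompt` helper are illustrative, not Lexicon’s real schema):

```python
def render_prompt(template: str, arguments: dict) -> str:
    """Substitute {name}-style placeholders with the supplied argument values."""
    return template.format(**arguments)

# Hypothetical saved message for the code_review prompt above.
template = "Review the code with a focus on {focus}. Limit the review to {file_path}."

message = render_prompt(template, {"focus": "security", "file_path": "auth.swift"})
print(message)
```

A real implementation would also validate that all `required` arguments were supplied before rendering, and skip or default the optional ones.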
Export and Import for Sharing
One thing I built in early: Lexicon prompts can export to JSON. This means you can share prompt libraries with your team or across projects.
Hit “Export JSON” and you get a file containing all your prompts.
Import this into another Lexicon instance and you’ve got the same prompt library. I’m using this to maintain different prompt sets for different types of projects.
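The round trip is just JSON serialization. A sketch of what an export/import cycle might look like, with a hypothetical schema (Lexicon’s real export format may differ):

```python
import json

# Hypothetical export schema: one entry per saved prompt.
library = {
    "prompts": [
        {
            "name": "linting_review",
            "description": "Run make lint and fix all issues",
            "categories": ["QA"],
            "tags": ["SWE"],
            "messages": [
                {"role": "user", "text": "Use the makefile to run our linter and fix any issues."}
            ],
        }
    ]
}

# Export: write the library to disk.
with open("lexicon-export.json", "w") as f:
    json.dump(library, f, indent=2)

# Import: read it back into another instance.
with open("lexicon-export.json") as f:
    imported = json.load(f)
```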
What Makes This Different
There are plenty of prompt libraries and snippet managers out there. What makes Lexicon different:
It’s local-first. Your prompts live in Lexicon’s local database on your machine, not in some cloud service. They’re private by default.
It’s protocol-native. Lexicon doesn’t have its own UI for “running” prompts. It serves them through MCP, so any compatible client can use them.
It composes with tools. Prompts can reference MCP tools from the same server or others. The tech debt audit prompt references ContextSwitch tools, but could just as easily use filesystem tools, git tools, or custom project tools.
It works across contexts. Same prompts in Claude Desktop, Claude Code, Cursor, or any other MCP client. Switch tools, keep your prompts.
The Bigger Pattern
What I’m seeing is that MCP enables a new kind of personal software. Not apps that do everything, but small, focused tools that compose through a standard protocol.
ContextSwitch doesn’t try to be an IDE or a project manager or a prompt library. It’s a kanban board that happens to serve its data via MCP tools; Lexicon is a prompt manager that happens to serve prompts via MCP prompts. Other tools can read and write the same data.
This is the “personal software” idea from the Evergreen post, but applied to developer tools. Build exactly what you need, expose it through MCP, and let it compose with everything else.
Try It Yourself
ContextSwitch, Lexicon, and Evergreen are all still personal software I built for myself. They’re not published on the App Store, but I’ve got them on TestFlight to share with some friends, so if you think they’d be useful, let me know.
In any case, the ideas are portable:
- MCP prompts are easy to implement in any MCP server
- The prompt-as-slash-command pattern in Claude Code works today
- Local-first prompt libraries can coexist with cloud ones
- Prompts that reference tools create powerful compositions
If you’re building MCP servers or tools, consider adding prompt support. It’s one of the less-discussed parts of the protocol, but it’s incredibly useful for codifying workflows.
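To make the point concrete, here’s a minimal sketch of the two prompt endpoints an MCP server implements, prompts/list and prompts/get, using plain dicts rather than any particular SDK (the `PROMPTS` registry and helper names are illustrative):

```python
# In-memory prompt registry: name -> metadata plus message text.
PROMPTS = {
    "linting_review": {
        "description": "Run make lint and fix all issues",
        "arguments": [],
        "text": "Use the makefile to run our linter and formatter and fix any remaining issues.",
    },
}

def list_prompts() -> dict:
    """Handle prompts/list: return name, description, and arguments for each prompt."""
    return {
        "prompts": [
            {"name": name, "description": p["description"], "arguments": p["arguments"]}
            for name, p in PROMPTS.items()
        ]
    }

def get_prompt(name: str, arguments=None) -> dict:
    """Handle prompts/get: render the prompt and return MCP-shaped messages."""
    p = PROMPTS[name]
    text = p["text"].format(**(arguments or {}))
    return {
        "description": p["description"],
        "messages": [
            {"role": "user", "content": {"type": "text", "text": text}}
        ],
    }
```

Wire these two handlers into any MCP server framework and every client that supports prompts, including Claude Code’s slash-command integration, picks them up automatically.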
And if you find yourself typing the same prompts over and over, maybe it’s time to build your own Lexicon.