
Mirror
Enables recursive self-questioning and introspection through a reflect tool that uses MCP sampling to generate thoughtful responses to self-directed questions. Configurable system prompts support specialized reflection perspectives such as coaching or strategic analysis.
What it does
- Generate self-directed questions and responses
- Configure custom system prompts for specialized analysis
- Perform recursive questioning loops
- Validate reasoning through self-reflection
- Create coaching or strategic analysis perspectives
About Mirror
Mirror is a community-built MCP server published by toby that provides AI assistants with tools and capabilities via the Model Context Protocol. Mirror empowers introspection and self-questioning using MCP sampling and configurable prompts for personal growth. It is categorized under AI/ML. This server exposes one tool that AI clients can invoke during conversations and coding sessions.
How to install
You can install Mirror in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Mirror is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (1)
reflect: Enable targeted LLM self-reflection with customizable prompts for specialized analysis. Use custom system_prompt and user_prompt parameters to get focused, expert-level reflections instead of generic responses. Particularly effective for domain-specific analysis, critical thinking, coaching perspectives, and structured output formats.
mirror-mcp
A Model Context Protocol (MCP) server that provides a reflect tool, enabling LLMs to engage in self-reflection and introspection through recursive questioning and MCP sampling.
Overview
mirror-mcp allows AI models to "look at themselves" by providing a reflection mechanism. When an LLM uses the reflect tool, it can pose questions to itself and receive answers through the Model Context Protocol's sampling capabilities. This creates a powerful feedback loop for self-analysis, reasoning validation, and iterative problem-solving.
Features
- Self-Reflection Tool: Enables LLMs to ask themselves questions and receive computed responses
- MCP Sampling Integration: Uses the Model Context Protocol's sampling mechanism for responses
- npm Installable: Easy installation and deployment
- Lightweight: Minimal dependencies and fast startup
- Configurable: Customizable reflection parameters and sampling options
Installation
Quick Install for VS Code
MCP Host Configuration
For other MCP-compatible clients, add the following configuration:
{
"type": "stdio",
"command": "npx",
"args": ["mirror-mcp@latest"]
}
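The fragment above is the server entry itself; some clients expect it nested under a named key. A sketch assuming Claude Desktop's claude_desktop_config.json layout, where "mirror" is an arbitrary label you choose:

```json
{
  "mcpServers": {
    "mirror": {
      "type": "stdio",
      "command": "npx",
      "args": ["mirror-mcp@latest"]
    }
  }
}
```

Consult your client's documentation for the exact file location and top-level key it expects.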
Via npm
npm install -g mirror-mcp
Via npx (no installation required)
npx mirror-mcp
From Source
git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run build
npm start
API Reference
Tools
reflect
Enables the LLM to ask itself a question and receive a response through MCP sampling. The tool supports custom system and user prompts so the LLM can direct the perspective and format of the response it receives.
Self-Direction with Custom Prompts:
- System Prompt: Define the role or perspective for the reflection (e.g., "expert coach", "critical thinker", "creative problem solver")
- User Prompt: Specify the format, structure, or focus of the reflection response
- Default Behavior: When no custom prompts are provided, uses built-in reflection guidance focused on strengths, weaknesses, assumptions, and alternative perspectives
Parameters:
- question (string, required): The question the LLM wants to ask itself
- context (string, optional): Additional context for the reflection
- system_prompt (string, optional): Custom system prompt to direct the reflection approach
- user_prompt (string, optional): Custom user prompt to replace the default reflection instructions
- max_tokens (number, optional): Maximum tokens for the response (default: 500)
- temperature (number, optional): Sampling temperature (default: 0.8)
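The parameter list above can be sketched as a TypeScript interface with the documented defaults applied before dispatch. This is an illustrative shape, not the server's actual type definitions:

```typescript
// Hypothetical sketch of the reflect tool's argument schema,
// mirroring the parameter list above.
interface ReflectArgs {
  question: string;        // required: the self-directed question
  context?: string;        // optional: extra context for the reflection
  system_prompt?: string;  // optional: role/perspective for the reflection
  user_prompt?: string;    // optional: replaces default reflection instructions
  max_tokens?: number;     // optional: defaults to 500
  temperature?: number;    // optional: defaults to 0.8
}

// Apply the documented defaults; explicit arguments win over defaults.
function withDefaults(args: ReflectArgs): ReflectArgs {
  return { max_tokens: 500, temperature: 0.8, ...args };
}
```

For example, `withDefaults({ question: "Was my estimate sound?" })` yields max_tokens 500 and temperature 0.8 unless the caller overrides them.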
Example:
{
"name": "reflect",
"arguments": {
"question": "How confident am I in my previous analysis of the data?",
"context": "Previous analysis showed a 23% increase in user engagement",
"max_tokens": 300,
"temperature": 0.6
}
}
Example with custom prompts:
{
"name": "reflect",
"arguments": {
"question": "What are the potential weaknesses in my reasoning?",
"system_prompt": "You are an expert critical thinking coach helping to identify logical fallacies and reasoning gaps.",
"user_prompt": "Analyze my reasoning step-by-step and provide specific examples of potential weaknesses or blind spots.",
"context": "Working on a complex machine learning model evaluation",
"max_tokens": 400,
"temperature": 0.7
}
}
Response:
{
"reflection": "Upon reflection, my confidence in the 23% engagement increase analysis is moderate to high. The data sources appear reliable, and the methodology follows standard practices. However, I should consider potential confounding variables such as seasonal effects or concurrent marketing campaigns that might influence the results.",
"metadata": {
"tokens_used": 67,
"reflection_time_ms": 1240
}
}
Architecture & Rationale
Design Philosophy
mirror-mcp is built on the principle that self-reflection is crucial for robust AI reasoning. By enabling models to question their own outputs and reasoning processes, we create opportunities for:
- Error Detection: Models can identify potential flaws in their logic
- Confidence Calibration: Self-assessment helps gauge certainty levels
- Iterative Improvement: Reflective questioning can lead to better solutions
- Metacognitive Awareness: Understanding of the model's own reasoning process
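The feedback loop described above can be sketched from the client's side. The reflect function here is a stand-in for an actual MCP tools/call invocation (stubbed so the sketch is self-contained); the loop and stopping condition are illustrative:

```typescript
// Stand-in signature for invoking the reflect tool over MCP.
type ReflectFn = (question: string, context?: string) => string;

// Iteratively refine an answer: reflect on the current draft, revise
// while the reflection raises concerns, and stop once it reports none
// (or after maxRounds to guarantee termination).
function refineWithReflection(
  draft: string,
  reflect: ReflectFn,
  maxRounds = 3
): { answer: string; rounds: number } {
  let answer = draft;
  for (let round = 1; round <= maxRounds; round++) {
    const reflection = reflect("What is weak in this answer?", answer);
    if (reflection.includes("no concerns")) {
      return { answer, rounds: round };
    }
    answer = `${answer} [revised after: ${reflection}]`;
  }
  return { answer, rounds: maxRounds };
}
```

A real client would parse the structured reflection response rather than match on a string, but the shape of the loop is the same.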
Technical Architecture
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│   LLM Client    │─────▶│   mirror-mcp    │─────▶│  MCP Sampling   │
│                 │      │                 │      │ Infrastructure  │
│ Calls reflect() │      │    Processes    │      │                 │
│                 │◀─────│   reflection    │◀─────│ Returns response│
└─────────────────┘      └─────────────────┘      └─────────────────┘
Key Components
- Reflection Engine: Processes incoming self-directed questions
- Sampling Interface: Interfaces with MCP's sampling capabilities
- Context Manager: Maintains conversation context for coherent reflections
- Response Formatter: Structures reflection responses for optimal consumption
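The four components can be wired together as a small pipeline. The sampler below is a stub standing in for the MCP sampling round-trip, and all names are illustrative rather than the server's actual internals:

```typescript
// Stub for the Sampling Interface: in the real server this delegates
// to MCP's sampling capability on the connected client.
type Sampler = (prompt: string, maxTokens: number, temperature: number) => string;

interface ReflectionResult {
  reflection: string;
  metadata: { tokens_used: number };
}

function handleReflect(
  question: string,
  sample: Sampler,
  context?: string,
  maxTokens = 500,
  temperature = 0.8
): ReflectionResult {
  // Reflection Engine + Context Manager: assemble the prompt,
  // folding in any conversation context the caller supplied.
  const prompt = context ? `${context}\n\nQuestion: ${question}` : question;
  // Sampling Interface: generate the reflection text.
  const text = sample(prompt, maxTokens, temperature);
  // Response Formatter: structure the result for the client,
  // using a crude whitespace token count as a placeholder metric.
  return { reflection: text, metadata: { tokens_used: text.split(/\s+/).length } };
}
```

The real server also threads system_prompt and user_prompt into the assembled request; they are omitted here to keep the pipeline shape visible.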
Why MCP?
The Model Context Protocol provides a standardized way for AI models to connect with external resources and tools. By implementing mirror-mcp as an MCP server, we ensure:
- Interoperability: Works with any MCP-compatible client
- Standardization: Follows established protocols for tool integration
- Scalability: Can be deployed alongside other MCP servers
- Future-Proofing: Benefits from ongoing MCP ecosystem development
Sampling Strategy
The reflection mechanism leverages MCP's sampling capabilities to generate thoughtful responses. The sampling process:
- Takes the self-directed question as a prompt
- Applies configurable sampling parameters (temperature, max tokens)
- Generates a response using the underlying model
- Returns the reflection with appropriate metadata
This approach ensures that reflections are generated using the same model capabilities as the original reasoning, creating authentic self-assessment.
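The four sampling steps above correspond to issuing a sampling/createMessage request back to the client. The field names follow the MCP sampling schema; the mapping from reflect arguments shown here is an illustrative sketch, not the server's verbatim code:

```typescript
// Build the JSON-RPC sampling request a reflect call would plausibly
// translate to. systemPrompt is included only when the caller set one.
function buildSamplingRequest(
  question: string,
  systemPrompt: string | undefined,
  maxTokens: number,
  temperature: number
) {
  const params: {
    messages: { role: string; content: { type: string; text: string } }[];
    systemPrompt?: string;
    maxTokens: number;
    temperature: number;
  } = {
    messages: [{ role: "user", content: { type: "text", text: question } }],
    maxTokens,
    temperature,
  };
  if (systemPrompt) params.systemPrompt = systemPrompt;
  return { jsonrpc: "2.0", id: 1, method: "sampling/createMessage", params };
}
```

Because the request goes to the client that initiated the session, the reflection is generated by the same underlying model, which is what makes the self-assessment authentic.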
Development
Prerequisites
- Node.js 18 or higher
- npm or yarn
- TypeScript (for development)
Development Setup
git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run dev
Testing
npm test
Building
npm run build
Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
Areas for Contribution
- Enhanced reflection strategies
- Additional sampling parameters
- Performance optimizations
- Documentation improvements
- Test coverage expansion
Related Projects
- Model Context Protocol: The foundational protocol specification
- MCP Ecosystem: Various other MCP servers and tools
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- The Model Context Protocol team for creating the foundational specification
- The broader AI research community working on metacognition and self-reflection
- Contributors and early adopters who help shape this tool
"The unexamined life is not worth living" - Socrates
Enable your AI models to examine their own reasoning with mirror-mcp.