
Sentry
Official
Connects to Sentry.io to retrieve error reports, stack traces, and debugging information from your projects. Helps developers analyze and track issues directly through the MCP interface.
Streamline Sentry API integration with this remote MCP server. sentry-mcp acts as middleware between clients and the Sentry API, supporting flexible transports and offering tools like the MCP Inspector for easy testing. Inspired by Cloudflare's remote MCP work, it helps developers adapt and debug workflows, making Sentry interaction smoother in both cloud and self-hosted environments.
What it does
- Retrieve Sentry issues by ID or URL
- List issues from specific projects
- View detailed stacktraces and error data
- Access issue metadata like timestamps and event counts
- Analyze error reports across organizations
About Sentry
Sentry is an official MCP server published by getsentry that provides AI assistants with tools and capabilities via the Model Context Protocol. It acts as flexible MCP middleware for both cloud and self-hosted Sentry setups, and is categorized under developer tools.
How to install
You can install Sentry in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server supports remote connections over HTTP, so no local installation is required.
License
Sentry is released under the NOASSERTION license.
sentry-mcp
Sentry's MCP service is primarily designed for human-in-the-loop coding agents. Our tool selection and priorities are focused on developer workflows and debugging use cases, rather than providing a general-purpose MCP server for all Sentry functionality.
This remote MCP server acts as middleware to the upstream Sentry API, optimized for coding assistants like Cursor, Claude Code, and similar development tools. It's based on Cloudflare's work towards remote MCPs.
Getting Started
You'll find everything you need to know by visiting the deployed service in production.
If you're looking to contribute, learn how it works, or to run this for self-hosted Sentry, continue below.
Claude Code Plugin
Install as a Claude Code plugin for automatic subagent delegation:
claude plugin marketplace add getsentry/sentry-mcp
claude plugin install sentry-mcp@sentry-mcp
This provides a sentry-mcp subagent that Claude automatically delegates to when you ask about Sentry errors, issues, traces, or performance.
For experimental features:
claude plugin install sentry-mcp@sentry-mcp-experimental
Stdio vs Remote
While this repository is focused on acting as an MCP service, we also support a stdio transport. This is still a work in progress, but it is the easiest way to run the MCP against a self-hosted Sentry install.
Note: The AI-powered search tools (search_events, search_issues, etc.) require an LLM provider (OpenAI or Anthropic). These tools use natural language processing to translate queries into Sentry's query syntax. Without a configured provider, these specific tools will be unavailable, but all other tools will function normally.
To utilize the stdio transport, you'll need to create a User Auth Token in Sentry with the necessary scopes. As of writing, these are:
org:read
project:read
project:write
team:read
team:write
event:write
Launch the transport:
npx @sentry/mcp-server@latest --access-token=sentry-user-token
Need to connect to a self-hosted deployment? Add --host (hostname only, e.g. --host=sentry.example.com) when you run the command.
Some features (like Seer) may not be available on self-hosted instances. You can disable specific skills to prevent unsupported tools from being exposed:
npx @sentry/mcp-server@latest --access-token=TOKEN --host=sentry.example.com --disable-skills=seer
Environment Variables
SENTRY_ACCESS_TOKEN= # Required: Your Sentry auth token
# LLM Provider Configuration (required for AI-powered search tools)
EMBEDDED_AGENT_PROVIDER= # Required: 'openai' or 'anthropic'
OPENAI_API_KEY= # Required if using OpenAI
ANTHROPIC_API_KEY= # Required if using Anthropic
# Optional overrides
SENTRY_HOST= # For self-hosted deployments
MCP_DISABLE_SKILLS= # Disable specific skills (comma-separated, e.g. 'seer')
Important: Always set EMBEDDED_AGENT_PROVIDER to explicitly specify your LLM provider. Auto-detection based on API keys alone is deprecated and will be removed in a future release. See docs/embedded-agents.md for detailed configuration options.
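If you prefer environment variables over CLI flags, a minimal shell sketch for launching the stdio transport might look like this (the token and key values are placeholders you'd replace with your own):

```shell
# Sketch: configure the stdio server via environment variables
# rather than CLI flags (values below are placeholders).
export SENTRY_ACCESS_TOKEN="sentry-user-token"
export EMBEDDED_AGENT_PROVIDER="openai"
export OPENAI_API_KEY="sk-..."
# Optional, for self-hosted deployments:
# export SENTRY_HOST="sentry.example.com"
npx @sentry/mcp-server@latest
```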
Example MCP Configuration
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": ["@sentry/mcp-server"],
"env": {
"SENTRY_ACCESS_TOKEN": "your-token",
"EMBEDDED_AGENT_PROVIDER": "openai",
"OPENAI_API_KEY": "sk-..."
}
}
}
}
If you leave the host variable unset, the CLI automatically targets the Sentry SaaS service. Only set the override when you operate self-hosted Sentry.
For self-hosted instances that don't support Seer:
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": ["@sentry/mcp-server"],
"env": {
"SENTRY_ACCESS_TOKEN": "your-token",
"SENTRY_HOST": "sentry.example.com",
"MCP_DISABLE_SKILLS": "seer"
}
}
}
}
MCP Inspector
MCP includes an Inspector that makes it easy to test the service:
pnpm inspector
Enter the MCP server URL (http://localhost:5173) and hit connect. This should trigger the authentication flow for you.
Note: If you have issues with your OAuth flow when accessing the inspector on 127.0.0.1, try using localhost instead by visiting http://localhost:6274.
Local Development
To contribute changes, you'll need to set up your local environment:
1. Set up environment and agent skills:
make setup-env # Creates .env files and installs shared agent skills
This also runs npx @sentry/dotagents install to install shared skills from getsentry/skills into .agents/skills/ (symlinked into .claude/skills and .cursor/skills). If you need to update skills later, run it directly: npx @sentry/dotagents install
2. Create an OAuth App in Sentry (Settings => API => Applications):
- Homepage URL: http://localhost:5173
- Authorized Redirect URIs: http://localhost:5173/oauth/callback
- Note your Client ID and generate a Client secret
3. Configure your credentials:
- Edit .env in the root directory and add your OPENAI_API_KEY
- Edit packages/mcp-cloudflare/.env and add:
SENTRY_CLIENT_ID=your_development_sentry_client_id
SENTRY_CLIENT_SECRET=your_development_sentry_client_secret
COOKIE_SECRET=my-super-secret-cookie
4. Start the development server:
pnpm dev
Verify
Run the server locally to make it available at http://localhost:5173
pnpm dev
To test the local server, enter http://localhost:5173/mcp into Inspector and hit connect. Once you follow the prompts, you'll be able to "List Tools".
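As a quick sanity check outside the Inspector, you can also probe the local endpoint directly. This is a sketch that sends a JSON-RPC initialize request per the MCP Streamable HTTP transport; since this server uses OAuth, you may get an auth error rather than a result, which still confirms the endpoint is up:

```shell
# Probe the local MCP endpoint with a JSON-RPC initialize request.
# An OAuth-protected server may answer 401 instead of a result,
# which still confirms the endpoint is reachable.
curl -s -X POST http://localhost:5173/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.0"}}}'
```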
Tests
There are three test suites included: unit tests, evaluations, and manual testing.
Unit tests can be run using:
pnpm test
Evaluations require a .env file in the project root with some config:
# .env (in project root)
OPENAI_API_KEY= # Also required for AI-powered search tools in production
Note: The root .env file provides defaults for all packages. Individual packages can have their own .env files to override these defaults during development.
Once that's done you can run them using:
pnpm eval
Manual testing (preferred for testing MCP changes):
# Test with local dev server (default: http://localhost:5173)
pnpm -w run cli "who am I?"
# Test agent mode (use_sentry tool only)
pnpm -w run cli --agent "who am I?"
# Test against production
pnpm -w run cli --mcp-host=https://mcp.sentry.dev "query"
# Test with local stdio mode (requires SENTRY_ACCESS_TOKEN)
pnpm -w run cli --access-token=TOKEN "query"
Note: The CLI defaults to http://localhost:5173. Override with --mcp-host or set MCP_URL environment variable.
Comprehensive testing playbooks:
- Stdio testing: See docs/testing-stdio.md for a complete guide to building, running, and testing the stdio implementation (IDEs, MCP Inspector)
- Remote testing: See docs/testing-remote.md for a complete guide to testing the remote server (OAuth, web UI, CLI client)
Development Notes
Automated Code Review
This repository uses automated code review tools (like Cursor BugBot) to help identify potential issues in pull requests. These tools provide helpful feedback and suggestions, but we do not recommend making these checks required as the accuracy is still evolving and can produce false positives.
The automated reviews should be treated as:
- ✅ Helpful suggestions to consider during code review
- ✅ Starting points for discussion and improvement
- ❌ Not blocking requirements for merging PRs
- ❌ Not replacements for human code review
When addressing automated feedback, focus on the underlying concerns rather than strictly following every suggestion.
Contributor Documentation
Looking to contribute or explore the full documentation map? See CLAUDE.md (also available as AGENTS.md) for contributor workflows and the complete docs index. The docs/ folder contains the per-topic guides and tool-integrated .md files.