gpt-researcher
GPT Researcher is an autonomous deep research agent that conducts web and local research, producing detailed reports with citations. Use this skill when helping developers understand, extend, debug, or integrate with GPT Researcher: adding features, understanding the architecture, working with the API, customizing research workflows, adding new retrievers, integrating MCP data sources, or troubleshooting research pipelines.
Install
```shell
mkdir -p .claude/skills/gpt-researcher && curl -L -o skill.zip "https://mcp.directory/api/skills/download/588" && unzip -o skill.zip -d .claude/skills/gpt-researcher && rm skill.zip
```
Installs to .claude/skills/gpt-researcher
GPT Researcher Development Skill
GPT Researcher is an LLM-based autonomous agent using a planner-executor-publisher pattern with parallelized agent work for speed and reliability.
Quick Start
Basic Python Usage
```python
import asyncio

from gpt_researcher import GPTResearcher

async def main():
    researcher = GPTResearcher(
        query="What are the latest AI developments?",
        report_type="research_report",  # or detailed_report, deep, outline_report
        report_source="web",  # or local, hybrid
    )
    await researcher.conduct_research()
    report = await researcher.write_report()
    print(report)

asyncio.run(main())
```
Run Servers
```shell
# Backend
python -m uvicorn backend.server.server:app --reload --port 8000

# Frontend
cd frontend/nextjs && npm install && npm run dev
```
Key File Locations
| Need | Primary File | Key Classes |
|---|---|---|
| Main orchestrator | gpt_researcher/agent.py | GPTResearcher |
| Research logic | gpt_researcher/skills/researcher.py | ResearchConductor |
| Report writing | gpt_researcher/skills/writer.py | ReportGenerator |
| All prompts | gpt_researcher/prompts.py | PromptFamily |
| Configuration | gpt_researcher/config/config.py | Config |
| Config defaults | gpt_researcher/config/variables/default.py | DEFAULT_CONFIG |
| API server | backend/server/app.py | FastAPI app |
| Search engines | gpt_researcher/retrievers/ | Various retrievers |
Architecture Overview
```
User Query → GPTResearcher.__init__()
        │
        ▼
choose_agent() → (agent_type, role_prompt)
        │
        ▼
ResearchConductor.conduct_research()
  ├── plan_research() → sub_queries
  ├── For each sub_query:
  │     └── _process_sub_query() → context
  └── Aggregate contexts
        │
        ▼
[Optional] ImageGenerator.plan_and_generate_images()
        │
        ▼
ReportGenerator.write_report() → Markdown report
```
For detailed architecture diagrams: See references/architecture.md
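The plan → execute-in-parallel → aggregate flow above can be sketched as a toy simulation. Every function body here is a hypothetical stand-in for ResearchConductor internals (the real planner and executor call an LLM and web retrievers); only the shape of the flow is taken from the diagram:

```python
import asyncio

async def plan_research(query: str) -> list[str]:
    # Stand-in for the real planner, which asks an LLM for sub-queries.
    return [f"{query}: background", f"{query}: recent work"]

async def process_sub_query(sub_query: str) -> str:
    # Stand-in for the real executor, which searches, scrapes, and curates.
    return f"context for: {sub_query}"

async def conduct_research(query: str) -> str:
    sub_queries = await plan_research(query)
    # Sub-queries are processed in parallel, mirroring the diagram.
    contexts = await asyncio.gather(*(process_sub_query(q) for q in sub_queries))
    return "\n".join(contexts)

print(asyncio.run(conduct_research("quantum computing")))
```

The `asyncio.gather` call is what makes the executor stage parallel rather than sequential.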
Core Patterns
Adding a New Feature (8-Step Pattern)
1. Config → Add to gpt_researcher/config/variables/default.py
2. Provider → Create in gpt_researcher/llm_provider/my_feature/
3. Skill → Create in gpt_researcher/skills/my_feature.py
4. Agent → Integrate in gpt_researcher/agent.py
5. Prompts → Update gpt_researcher/prompts.py
6. WebSocket → Stream events via stream_output()
7. Frontend → Handle events in useWebSocket.ts
8. Docs → Create docs/docs/gpt-researcher/gptr/my_feature.md
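Step 1 above typically means extending the DEFAULT_CONFIG dictionary. A minimal sketch, assuming a hypothetical "my feature" (the MY_FEATURE_* key names are invented for illustration; the surrounding keys mimic the real file's style):

```python
# gpt_researcher/config/variables/default.py (sketch)
DEFAULT_CONFIG = {
    # ... existing keys ...
    "SMART_LLM": "gpt-4o",
    # Hypothetical keys for the new feature:
    "MY_FEATURE_ENABLED": False,        # off by default so existing users are unaffected
    "MY_FEATURE_MODEL": "gpt-4o-mini",  # model the new skill would call
}
```

Defaulting new features to disabled keeps the change backward compatible.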
For complete feature addition guide with Image Generation case study: See references/adding-features.md
Adding a New Retriever
```python
# 1. Create: gpt_researcher/retrievers/my_retriever/my_retriever.py
class MyRetriever:
    def __init__(self, query: str, headers: dict | None = None):
        self.query = query
        self.headers = headers or {}

    async def search(self, max_results: int = 10) -> list[dict]:
        # Return: [{"title": str, "href": str, "body": str}]
        ...

# 2. Register in gpt_researcher/actions/retriever.py (match statement)
case "my_retriever":
    from gpt_researcher.retrievers.my_retriever import MyRetriever
    return MyRetriever

# 3. Export in gpt_researcher/retrievers/__init__.py
```
For complete retriever documentation: See references/retrievers.md
Configuration
Config keys are lowercased when accessed:
```python
# In default.py: "SMART_LLM": "gpt-4o"
# Access as:
self.cfg.smart_llm  # lowercase!
```
Priority: Environment Variables → JSON Config File → Default Values
For complete configuration reference: See references/config-reference.md
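The lowercasing and env-var override behavior can be sketched with a simplified stand-in Config class (not the real implementation, and the JSON-config-file layer of the priority chain is omitted here for brevity):

```python
import os

DEFAULTS = {"SMART_LLM": "gpt-4o", "FAST_LLM": "gpt-4o-mini"}

class Config:
    def __init__(self, defaults: dict):
        for key, default in defaults.items():
            # Environment variables take priority over defaults.
            value = os.environ.get(key, default)
            setattr(self, key.lower(), value)  # attribute name is lowercased

cfg = Config(DEFAULTS)
print(cfg.smart_llm)  # "gpt-4o" unless SMART_LLM is set in the environment
```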
Common Integration Points
WebSocket Streaming
```python
class WebSocketHandler:
    async def send_json(self, data):
        print(f"[{data['type']}] {data.get('output', '')}")

researcher = GPTResearcher(query="...", websocket=WebSocketHandler())
```
MCP Data Sources
```python
import os

researcher = GPTResearcher(
    query="Open source AI projects",
    mcp_configs=[{
        "name": "github",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": {"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")},
    }],
    mcp_strategy="deep",  # or "fast", "disabled"
)
```
For MCP integration details: See references/mcp.md
Deep Research Mode
```python
researcher = GPTResearcher(
    query="Comprehensive analysis of quantum computing",
    report_type="deep",  # Triggers recursive tree-like exploration
)
```
For deep research configuration: See references/deep-research.md
Error Handling
Always use graceful degradation in skills:
```python
async def execute(self, ...):
    if not self.is_enabled():
        return []  # Don't crash when the feature is disabled

    try:
        result = await self.provider.execute(...)
        return result
    except Exception as e:
        await stream_output("logs", "error", f"⚠️ {e}", self.websocket)
        return []  # Graceful degradation
```
Critical Gotchas
| ❌ Mistake | ✅ Correct |
|---|---|
| `config.MY_VAR` | `config.my_var` (keys are lowercased) |
| Editing a pip-installed package | `pip install -e .` for an editable install |
| Forgetting async/await | All research methods are async |
| Calling `websocket.send_json()` on None | Check `if websocket:` first |
| Not registering a retriever | Add it to the retriever.py match statement |
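The websocket gotcha matters because GPT Researcher can run as a plain library with no websocket attached. A minimal sketch of the guard (maybe_stream and FakeSocket are hypothetical names, not part of the library):

```python
import asyncio

async def maybe_stream(websocket, payload: dict) -> None:
    if websocket:  # websocket is None in library (non-server) usage
        await websocket.send_json(payload)

class FakeSocket:
    def __init__(self):
        self.sent = []

    async def send_json(self, data):
        self.sent.append(data)

asyncio.run(maybe_stream(None, {"type": "logs"}))  # no AttributeError
sock = FakeSocket()
asyncio.run(maybe_stream(sock, {"type": "logs"}))
print(sock.sent)
```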
Reference Documentation
| Topic | File |
|---|---|
| System architecture & diagrams | references/architecture.md |
| Core components & signatures | references/components.md |
| Research flow & data flow | references/flows.md |
| Prompt system | references/prompts.md |
| Retriever system | references/retrievers.md |
| MCP integration | references/mcp.md |
| Deep research mode | references/deep-research.md |
| Multi-agent system | references/multi-agents.md |
| Adding features guide | references/adding-features.md |
| Advanced patterns | references/advanced-patterns.md |
| REST & WebSocket API | references/api-reference.md |
| Configuration variables | references/config-reference.md |