literature-review
Conduct comprehensive, systematic literature reviews using multiple academic databases (PubMed, arXiv, bioRxiv, Semantic Scholar, etc.). This skill should be used when conducting systematic literature reviews, meta-analyses, research synthesis, or comprehensive literature searches across biomedical, scientific, and technical domains. Creates professionally formatted markdown documents and PDFs with verified citations in multiple citation styles (APA, Nature, Vancouver, etc.).
Install
```shell
mkdir -p .claude/skills/literature-review && \
  curl -L -o skill.zip "https://mcp.directory/api/skills/download/89" && \
  unzip -o skill.zip -d .claude/skills/literature-review && \
  rm skill.zip
```
Installs to `.claude/skills/literature-review`.
Deep dive
10 systematic-review pipelines with the literature-review skill
Cookbook of 10 PRISMA-grade pipelines — Semantic Scholar query, triple-database Boolean merge, BibTeX export, methods extraction, gap analysis, citation graph, snowball, summary table, thesis chapter — each in one Claude prompt.
About this skill
Literature Review
Overview
Conduct systematic, comprehensive literature reviews following rigorous academic methodology. Search multiple literature databases, synthesize findings thematically, verify all citations for accuracy, and generate professional output documents in markdown and PDF formats.
This skill integrates with multiple scientific skills for database access (gget, bioservices, datacommons-client) and provides specialized tools for citation verification, result aggregation, and document generation.
When to Use This Skill
Use this skill when:
- Conducting a systematic literature review for research or publication
- Synthesizing current knowledge on a specific topic across multiple sources
- Performing meta-analysis or scoping reviews
- Writing the literature review section of a research paper or thesis
- Investigating the state of the art in a research domain
- Identifying research gaps and future directions
- Requiring verified citations and professional formatting
Visual Enhancement with Scientific Schematics
⚠️ MANDATORY: Every literature review MUST include at least 1-2 AI-generated figures using the scientific-schematics skill.
This is not optional. Literature reviews without visual elements are incomplete. Before finalizing any document:
- Generate at minimum ONE schematic or diagram (e.g., PRISMA flow diagram for systematic reviews)
- Prefer 2-3 figures for comprehensive reviews (search strategy flowchart, thematic synthesis diagram, conceptual framework)
How to generate figures:
- Use the scientific-schematics skill to generate AI-powered publication-quality diagrams
- Simply describe your desired diagram in natural language
- Nano Banana Pro will automatically generate, review, and refine the schematic
How to generate schematics:
```shell
python scripts/generate_schematic.py "your diagram description" -o figures/output.png
```
The AI will automatically:
- Create publication-quality images with proper formatting
- Review and refine through multiple iterations
- Ensure accessibility (colorblind-friendly, high contrast)
- Save outputs in the figures/ directory
When to add schematics:
- PRISMA flow diagrams for systematic reviews
- Literature search strategy flowcharts
- Thematic synthesis diagrams
- Research gap visualization maps
- Citation network diagrams
- Conceptual framework illustrations
- Any complex concept that benefits from visualization
For detailed guidance on creating schematics, refer to the scientific-schematics skill documentation.
Core Workflow
Literature reviews follow a structured, multi-phase workflow:
Phase 1: Planning and Scoping
1. **Define Research Question**: Use the PICO framework (Population, Intervention, Comparison, Outcome) for clinical/biomedical reviews.
   - Example: "What is the efficacy of CRISPR-Cas9 (I) for treating sickle cell disease (P) compared to standard care (C)?"
2. **Establish Scope and Objectives**:
   - Define clear, specific research questions
   - Determine the review type (narrative, systematic, scoping, meta-analysis)
   - Set boundaries (time period, geographic scope, study types)
3. **Develop Search Strategy**:
   - Identify 2-4 main concepts from the research question
   - List synonyms, abbreviations, and related terms for each concept
   - Plan Boolean operators (AND, OR, NOT) to combine terms
   - Select a minimum of 3 complementary databases
4. **Set Inclusion/Exclusion Criteria**:
   - Date range (e.g., last 10 years: 2015-2024)
   - Language (typically English, or specify multilingual)
   - Publication types (peer-reviewed, preprints, reviews)
   - Study designs (RCTs, observational, in vitro, etc.)
   - Document all criteria clearly
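The search-strategy steps above can be sketched in code: OR together the synonyms within each concept, then AND the concept groups. A minimal helper; the concept terms below are illustrative, not a prescribed vocabulary:

```python
def build_query(concepts):
    """OR together synonyms within a concept, AND across concepts."""
    groups = []
    for terms in concepts:
        # Quote multi-word terms so databases treat them as exact phrases
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

concepts = [
    ["CRISPR", "Cas9", "gene editing"],   # intervention
    ["sickle cell disease", "SCD"],       # population
]
print(build_query(concepts))
# (CRISPR OR Cas9 OR "gene editing") AND ("sickle cell disease" OR SCD)
```

Database-specific field tags (e.g., `[Title]`, `[MeSH]` for PubMed) can then be layered onto each term.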
Phase 2: Systematic Literature Search
1. **Multi-Database Search**: Select databases appropriate for the domain.
   Biomedical & Life Sciences:
   - Use the gget skill: `gget search pubmed "search terms"` for PubMed/PMC
   - Use the gget skill: `gget search biorxiv "search terms"` for preprints
   - Use the bioservices skill for ChEMBL, KEGG, UniProt, etc.
   General Scientific Literature:
   - Search arXiv via its direct API (preprints in physics, math, CS, q-bio)
   - Search Semantic Scholar via its API (200M+ papers, cross-disciplinary)
   - Use Google Scholar for comprehensive coverage (manual, or careful scraping)
   Specialized Databases:
   - Use `gget alphafold` for protein structures
   - Use `gget cosmic` for cancer genomics
   - Use `datacommons-client` for demographic/statistical data
   - Use other specialized databases as appropriate for the domain
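For databases without a dedicated skill, a direct API call works. A sketch against the public Semantic Scholar Graph API search endpoint (the endpoint is real; the chosen fields and limit are only examples):

```python
import json
import urllib.parse
import urllib.request

SEARCH_ENDPOINT = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query, limit=20):
    """URL-encode the query plus the metadata fields we want back."""
    params = urllib.parse.urlencode({
        "query": query,
        "limit": limit,
        "fields": "title,year,abstract,externalIds",
    })
    return f"{SEARCH_ENDPOINT}?{params}"

def search_semantic_scholar(query, limit=20):
    """Fetch one page of results (requires network access)."""
    with urllib.request.urlopen(build_search_url(query, limit)) as resp:
        return json.load(resp)["data"]

print(build_search_url('CRISPR "sickle cell disease"', limit=50))
```

Paginating with an `offset` parameter and respecting the API's rate limits are left as refinements.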
2. **Document Search Parameters**:
   ```markdown
   ## Search Strategy
   ### Database: PubMed
   - **Date searched**: 2024-10-25
   - **Date range**: 2015-01-01 to 2024-10-25
   - **Search string**:
     ("CRISPR"[Title] OR "Cas9"[Title]) AND ("sickle cell"[MeSH] OR "SCD"[Title/Abstract]) AND 2015:2024[Publication Date]
   - **Results**: 247 articles
   ```
   Repeat for each database searched.
3. **Export and Aggregate Results**:
   - Export results in JSON format from each database
   - Combine all results into a single file
   - Use `scripts/search_databases.py` for post-processing:
   ```shell
   python search_databases.py combined_results.json \
     --deduplicate \
     --format markdown \
     --output aggregated_results.md
   ```
Phase 3: Screening and Selection
1. **Deduplication**:
   ```shell
   python search_databases.py results.json --deduplicate --output unique_results.json
   ```
   - Removes duplicates by DOI (primary) or title (fallback)
   - Document the number of duplicates removed
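The DOI-first, title-fallback rule described above can be sketched as follows; this is an independent illustration of the idea, not the actual source of `search_databases.py`:

```python
def deduplicate(records):
    """Drop repeats by DOI (case-insensitive) or, failing that, by title."""
    seen, unique = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        key = ("doi", doi) if doi else ("title", rec["title"].lower().strip())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": "10.1000/XYZ", "title": "Study A"},
    {"doi": "10.1000/xyz", "title": "Study A (reprint)"},   # same DOI, other case
    {"doi": "",            "title": "Untitled Preprint"},
    {"doi": "",            "title": "untitled preprint "},  # same title, normalized
]
print(len(deduplicate(records)))  # 2
```

Keeping the first occurrence preserves whichever database's metadata arrived first; a fancier merge could combine fields instead.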
2. **Title Screening**:
   - Review all titles against the inclusion/exclusion criteria
   - Exclude obviously irrelevant studies
   - Document the number excluded at this stage
3. **Abstract Screening**:
   - Read the abstracts of the remaining studies
   - Apply inclusion/exclusion criteria rigorously
   - Document reasons for exclusion
4. **Full-Text Screening**:
   - Obtain full texts of the remaining studies
   - Conduct a detailed review against all criteria
   - Document specific reasons for exclusion
   - Record the final number of included studies
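Screening decisions are easier to audit when the criteria live in data rather than in prose. A sketch, with field names and thresholds chosen purely for illustration:

```python
CRITERIA = {
    "year_min": 2015,
    "year_max": 2024,
    "languages": {"en"},
    "pub_types": {"journal-article", "preprint"},
}

def screen(record, criteria=CRITERIA):
    """Return (included, reason) so every exclusion is documented."""
    if not criteria["year_min"] <= record["year"] <= criteria["year_max"]:
        return False, "outside date range"
    if record["language"] not in criteria["languages"]:
        return False, "ineligible language"
    if record["type"] not in criteria["pub_types"]:
        return False, "publication type excluded"
    return True, "included"

print(screen({"year": 2021, "language": "en", "type": "journal-article"}))
# (True, 'included')
```

Tallying the returned reasons gives the per-stage exclusion counts the PRISMA diagram needs.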
5. **Create PRISMA Flow Diagram**:
   ```
   Initial search: n = X
   ├─ After deduplication: n = Y
   ├─ After title screening: n = Z
   ├─ After abstract screening: n = A
   └─ Included in review: n = B
   ```
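If the count at each stage was recorded during screening, the flow text can be generated rather than typed by hand. The numbers below are placeholders:

```python
def prisma_flow(initial, stages, included):
    """Render the screening funnel; stages is an ordered list of (label, n)."""
    lines = [f"Initial search: n = {initial}"]
    lines += [f"├─ After {label}: n = {n}" for label, n in stages]
    lines.append(f"└─ Included in review: n = {included}")
    return "\n".join(lines)

stages = [("deduplication", 198), ("title screening", 90), ("abstract screening", 41)]
print(prisma_flow(247, stages, 28))
```

The same counts can also feed the scientific-schematics prompt for a publication-grade PRISMA figure.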
Phase 4: Data Extraction and Quality Assessment
1. **Extract Key Data** from each included study:
   - Study metadata (authors, year, journal, DOI)
   - Study design and methods
   - Sample size and population characteristics
   - Key findings and results
   - Limitations noted by the authors
   - Funding sources and conflicts of interest
2. **Assess Study Quality**:
   - For RCTs: use the Cochrane Risk of Bias tool
   - For observational studies: use the Newcastle-Ottawa Scale
   - For systematic reviews: use AMSTAR 2
   - Rate each study: High, Moderate, Low, or Very Low quality
   - Consider excluding very low-quality studies
3. **Organize by Themes**:
   - Identify 3-5 major themes across studies
   - Group studies by theme (studies may appear in multiple themes)
   - Note patterns, consensus, and controversies
Phase 5: Synthesis and Analysis
1. **Create Review Document** from the template:
   ```shell
   cp assets/review_template.md my_literature_review.md
   ```
2. **Write Thematic Synthesis** (NOT study-by-study summaries):
   - Organize the Results section by themes or research questions
   - Synthesize findings across multiple studies within each theme
   - Compare and contrast different approaches and results
   - Identify areas of consensus and points of controversy
   - Highlight the strongest evidence
   Example structure:
   ```markdown
   #### 3.3.1 Theme: CRISPR Delivery Methods

   Multiple delivery approaches have been investigated for therapeutic gene editing. Viral vectors (AAV) were used in 15 studies^1-15^ and showed high transduction efficiency (65-85%) but raised immunogenicity concerns^3,7,12^. In contrast, lipid nanoparticles demonstrated lower efficiency (40-60%) but improved safety profiles^16-23^.
   ```
3. **Critical Analysis**:
   - Evaluate methodological strengths and limitations across studies
   - Assess the quality and consistency of the evidence
   - Identify knowledge gaps and methodological gaps
   - Note areas requiring future research
4. **Write Discussion**:
   - Interpret findings in the broader context
   - Discuss clinical, practical, or research implications
   - Acknowledge the limitations of the review itself
   - Compare with previous reviews if applicable
   - Propose specific future research directions
Phase 6: Citation Verification
CRITICAL: All citations must be verified for accuracy before final submission.
1. **Verify All DOIs**:
   ```shell
   python scripts/verify_citations.py my_literature_review.md
   ```
   This script:
   - Extracts all DOIs from the document
   - Verifies that each DOI resolves correctly
   - Retrieves metadata from CrossRef
   - Generates a verification report
   - Outputs properly formatted citations
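The same checks can be sketched by hand: pull DOIs out with a regex and look each one up in the public CrossRef REST API. The regex pattern and endpoint are standard, but this is an illustration of the idea, not the implementation of `verify_citations.py`:

```python
import json
import re
import urllib.error
import urllib.parse
import urllib.request

# Matches the common modern DOI shape: 10.PREFIX/SUFFIX
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:\w]+")

def extract_dois(text):
    """Unique DOIs in the text, with stray trailing punctuation removed."""
    return sorted({m.rstrip(".,;") for m in DOI_RE.findall(text)})

def crossref_metadata(doi):
    """CrossRef record for a DOI, or None if it fails to resolve (needs network)."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["message"]
    except urllib.error.HTTPError:
        return None

print(extract_dois("See doi:10.1038/s41586-021-03819-2 and 10.1000/demo."))
```

Comparing the returned title and author list against what the review text claims catches miscited papers, not just dead links.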
2. **Review Verification Report**:
   - Check for any failed DOIs
   - Verify that author names, titles, and publication details match
   - Correct any errors in the original document
   - Re-run verification until all citations pass
3. **Format Citations Consistently**:
   - Choose one citation style and use it throughout (see `references/citation_styles.md`)
   - Common styles: APA, Nature, Vancouver, Chicago, IEEE
   - Use the verification script's output to format citations correctly
   - Ensure in-text citations match the reference list format
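Once metadata is verified, the reference list can be rendered from it rather than typed. A toy renderer for two of the styles above, using a deliberately fake record; real style guides have many more rules (ampersands, et al. cutoffs, italics) than this sketch handles:

```python
def format_apa(rec):
    """Rough APA-style reference line from a metadata dict."""
    authors = ", ".join(f"{a['family']}, {a['given'][0]}." for a in rec["authors"])
    return (f"{authors} ({rec['year']}). {rec['title']}. "
            f"{rec['journal']}. https://doi.org/{rec['doi']}")

def format_vancouver(rec, number):
    """Rough numbered Vancouver-style reference line."""
    authors = ", ".join(f"{a['family']} {a['given'][0]}" for a in rec["authors"])
    return f"{number}. {authors}. {rec['title']}. {rec['journal']}. {rec['year']}."

rec = {  # placeholder metadata, not a real publication
    "authors": [{"family": "Doe", "given": "Jane"}, {"family": "Roe", "given": "Rex"}],
    "year": 2023,
    "title": "An example study title",
    "journal": "Journal of Examples",
    "doi": "10.1000/example",
}
print(format_apa(rec))
```

Generating every entry from the same verified records is what keeps in-text citations and the reference list in sync.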
Phase 7: Document Generation
1. **Generate PDF**:
   ```shell
   python scripts/generate_pdf.py my_literature_review.md \
     --citation-style apa \
     --output my_review.pdf
   ```
   Options:
   - `--citation-style`: apa, nature, chicago, vancouver, ieee
   - `--no-toc`: Disable table of contents
Content truncated.
More by K-Dense-AI