peer-review
Systematic peer review toolkit. Evaluate methodology, statistics, design, reproducibility, ethics, figure integrity, and reporting standards for manuscript and grant review across disciplines.
Install
mkdir -p .claude/skills/peer-review && curl -L -o skill.zip "https://mcp.directory/api/skills/download/748" && unzip -o skill.zip -d .claude/skills/peer-review && rm skill.zip
Installs to .claude/skills/peer-review
About this skill
Scientific Critical Evaluation and Peer Review
Overview
Peer review is a systematic process for evaluating scientific manuscripts. Assess methodology, statistics, design, reproducibility, ethics, and reporting standards. Apply this skill for manuscript and grant review across disciplines with constructive, rigorous evaluation.
When to Use This Skill
This skill should be used when:
- Conducting peer review of scientific manuscripts for journals
- Evaluating grant proposals and research applications
- Assessing methodology and experimental design rigor
- Reviewing statistical analyses and reporting standards
- Evaluating reproducibility and data availability
- Checking compliance with reporting guidelines (CONSORT, STROBE, PRISMA)
- Providing constructive feedback on scientific writing
Visual Enhancement with Scientific Schematics
When creating documents with this skill, always consider adding scientific diagrams and schematics to enhance visual communication.
If your document does not already contain schematics or diagrams:
- Use the scientific-schematics skill to generate AI-powered publication-quality diagrams
- Simply describe your desired diagram in natural language
- Nano Banana Pro will automatically generate, review, and refine the schematic
For new documents: Scientific schematics should be generated by default to visually represent key concepts, workflows, architectures, or relationships described in the text.
How to generate schematics:
python scripts/generate_schematic.py "your diagram description" -o figures/output.png
The AI will automatically:
- Create publication-quality images with proper formatting
- Review and refine through multiple iterations
- Ensure accessibility (colorblind-friendly, high contrast)
- Save outputs in the figures/ directory
When to add schematics:
- Peer review workflow diagrams
- Evaluation criteria decision trees
- Review process flowcharts
- Methodology assessment frameworks
- Quality assessment visualizations
- Reporting guidelines compliance diagrams
- Any complex concept that benefits from visualization
For detailed guidance on creating schematics, refer to the scientific-schematics skill documentation.
Peer Review Workflow
Conduct peer review systematically through the following stages, adapting depth and focus based on the manuscript type and discipline.
Stage 1: Initial Assessment
Begin with a high-level evaluation to determine the manuscript's scope, novelty, and overall quality.
Key Questions:
- What is the central research question or hypothesis?
- What are the main findings and conclusions?
- Is the work scientifically sound and significant?
- Is the work appropriate for the intended venue?
- Are there any immediate major flaws that would preclude publication?
Output: Brief summary (2-3 sentences) capturing the manuscript's essence and initial impression.
Stage 2: Detailed Section-by-Section Review
Conduct a thorough evaluation of each manuscript section, documenting specific concerns and strengths.
Abstract and Title
- Accuracy: Does the abstract accurately reflect the study's content and conclusions?
- Clarity: Is the title specific, accurate, and informative?
- Completeness: Are key findings and methods summarized appropriately?
- Accessibility: Is the abstract comprehensible to a broad scientific audience?
Introduction
- Context: Is the background information adequate and current?
- Rationale: Is the research question clearly motivated and justified?
- Novelty: Is the work's originality and significance clearly articulated?
- Literature: Are relevant prior studies appropriately cited?
- Objectives: Are research aims/hypotheses clearly stated?
Methods
- Reproducibility: Can another researcher replicate the study from the description provided?
- Rigor: Are the methods appropriate for addressing the research questions?
- Detail: Are protocols, reagents, equipment, and parameters sufficiently described?
- Ethics: Are ethical approvals, consent, and data handling properly documented?
- Statistics: Are statistical methods appropriate, clearly described, and justified?
- Validation: Are controls, replicates, and validation approaches adequate?
Critical elements to verify:
- Sample sizes and power calculations
- Randomization and blinding procedures
- Inclusion/exclusion criteria
- Data collection protocols
- Computational methods and software versions
- Statistical tests and correction for multiple comparisons
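When checking whether a reported sample size is plausible, a quick back-of-the-envelope power calculation helps flag underpowered designs. A minimal sketch using the normal approximation for a two-sided two-sample t-test (the effect size, alpha, and power values are illustrative assumptions, not a substitute for the authors' own power analysis):

```python
import math
from statistics import NormalDist

def samples_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t-test
    (normal approximation; slightly underestimates n for small samples)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# A claimed n of 20 per group for a medium effect (d = 0.5) would be underpowered:
print(samples_per_group(0.5))  # 63 per group at 80% power
```

If a manuscript reports far fewer subjects than this kind of estimate suggests, ask the authors for their power calculation or a justification of the sample size.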
Results
- Presentation: Are results presented logically and clearly?
- Figures/Tables: Are visualizations appropriate, clear, and properly labeled?
- Statistics: Are statistical results properly reported (effect sizes, confidence intervals, p-values)?
- Objectivity: Are results presented without over-interpretation?
- Completeness: Are all relevant results included, including negative results?
- Reproducibility: Are raw data or summary statistics provided?
Common issues to identify:
- Selective reporting of results
- Inappropriate statistical tests
- Missing error bars or measures of variability
- Over-fitting or circular analysis
- Batch effects or confounding variables
- Missing controls or validation experiments
Discussion
- Interpretation: Are conclusions supported by the data?
- Limitations: Are study limitations acknowledged and discussed?
- Context: Are findings placed appropriately within existing literature?
- Speculation: Is speculation clearly distinguished from data-supported conclusions?
- Significance: Are implications and importance clearly articulated?
- Future directions: Are next steps or unanswered questions discussed?
Red flags:
- Overstated conclusions
- Ignoring contradictory evidence
- Causal claims from correlational data
- Inadequate discussion of limitations
- Mechanistic claims without mechanistic evidence
References
- Completeness: Are key relevant papers cited?
- Currency: Are recent important studies included?
- Balance: Are contrary viewpoints appropriately cited?
- Accuracy: Are citations accurate and appropriate?
- Self-citation: Is there excessive or inappropriate self-citation?
Stage 3: Methodological and Statistical Rigor
Evaluate the technical quality and rigor of the research with particular attention to common pitfalls.
Statistical Assessment:
- Are statistical assumptions met (normality, independence, homoscedasticity)?
- Are effect sizes reported alongside p-values?
- Is multiple testing correction applied appropriately?
- Are confidence intervals provided?
- Is sample size justified with power analysis?
- Are parametric vs. non-parametric tests chosen appropriately?
- Are missing data handled properly?
- Are exploratory vs. confirmatory analyses distinguished?
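Several of these checks can be verified directly from the reported numbers. A minimal pure-Python sketch (illustrative data, not from any manuscript) computing Cohen's d from group data and applying Benjamini-Hochberg correction to a set of p-values:

```python
import math

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation (sample variances)."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

def benjamini_hochberg(pvals, q=0.05):
    """Return a reject/keep decision for each p-value at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank  # largest rank passing the step-up threshold
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(round(cohens_d([1, 2, 3, 4, 5], [3, 4, 5, 6, 7]), 2))  # 1.26 (large effect)
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.50]))          # [True, True, True, False]
```

Recomputing an effect size or correction this way can quickly reveal whether a "significant" result survives proper multiple-testing adjustment.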
Experimental Design:
- Are controls appropriate and adequate?
- Is replication sufficient (biological and technical)?
- Are potential confounders identified and controlled?
- Is randomization properly implemented?
- Are blinding procedures adequate?
- Is the experimental design optimal for the research question?
Computational/Bioinformatics:
- Are computational methods clearly described and justified?
- Are software versions and parameters documented?
- Is code made available for reproducibility?
- Are algorithms and models validated appropriately?
- Are assumptions of computational methods met?
- Is batch correction applied appropriately?
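Missing version documentation is easy to flag and easy for authors to fix. A minimal sketch of the kind of environment report a reviewer should expect to see alongside computational methods (the package list below is a hypothetical example; substitute the manuscript's actual dependencies):

```python
import platform
import importlib.metadata

def environment_report(packages):
    """Collect Python and package versions for a methods section."""
    report = {"python": platform.python_version()}
    for pkg in packages:
        try:
            report[pkg] = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            report[pkg] = "not installed"
    return report

# Hypothetical analysis stack; replace with the packages named in the manuscript.
print(environment_report(["numpy", "scipy", "pandas"]))
```

If a manuscript's computational methods cannot be pinned down to this level of detail, request it in the review.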
Stage 4: Reproducibility and Transparency
Assess whether the research meets modern standards for reproducibility and open science.
Data Availability:
- Are raw data deposited in appropriate repositories?
- Are accession numbers provided for public databases?
- Are data sharing restrictions justified (e.g., patient privacy)?
- Are data formats standard and accessible?
Code and Materials:
- Is analysis code made available (GitHub, Zenodo, etc.)?
- Are unique materials available or described sufficiently for recreation?
- Are protocols detailed in sufficient depth?
Reporting Standards:
- Does the manuscript follow discipline-specific reporting guidelines (CONSORT, PRISMA, ARRIVE, MIAME, MINSEQE, etc.)?
- See references/reporting_standards.md for common guidelines
- Are all elements of the appropriate checklist addressed?
Stage 5: Figure and Data Presentation
Evaluate the quality, clarity, and integrity of data visualization.
Quality Checks:
- Are figures high resolution and clearly labeled?
- Are axes properly labeled with units?
- Are error bars defined (SD, SEM, CI)?
- Are statistical significance indicators explained?
- Are color schemes appropriate and accessible (colorblind-friendly)?
- Are scale bars included for images?
- Is data visualization appropriate for the data type?
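Accessibility of color choices can be checked quantitatively rather than by eye. A minimal sketch computing the WCAG contrast ratio between two hex colors, useful when judging whether figure text or overlaid annotations remain legible (the example colors are illustrative):

```python
def relative_luminance(hex_color):
    """WCAG 2.x relative luminance of an sRGB hex color like '#rrggbb'."""
    rgb = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in rgb]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 (identical) to 21:1."""
    l1, l2 = sorted([relative_luminance(fg), relative_luminance(bg)], reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0 (maximum contrast)
# WCAG AA requires >= 4.5:1 for normal text; pale grays on white often fail.
```

Low-contrast annotations and non-colorblind-safe palettes are worth flagging as minor revisions.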
Integrity Checks:
- Are there signs of image manipulation (duplications, splicing)?
- Are Western blots and gels appropriately presented?
- Are representative images truly representative?
- Are all conditions shown (no selective presentation)?
Clarity:
- Can figures stand alone with their legends?
- Is the message of each figure immediately clear?
- Are there redundant figures or panels?
- Would data be better presented as tables or figures?
Stage 6: Ethical Considerations
Verify that the research meets ethical standards and guidelines.
Human Subjects:
- Is IRB/ethics approval documented?
- Is informed consent described?
- Are vulnerable populations appropriately protected?
- Is patient privacy adequately protected?
- Are potential conflicts of interest disclosed?
Animal Research:
- Is IACUC or equivalent approval documented?
- Are procedures humane and justified?
- Are the 3Rs (replacement, reduction, refinement) considered?
- Are euthanasia methods appropriate?
Research Integrity:
- Are there concerns about plagiarism, duplicate publication, or data fabrication?