scholar-evaluation


Systematically evaluate scholarly work using the ScholarEval framework, providing structured assessment across research quality dimensions — problem formulation, methodology, analysis, and writing — with quantitative scoring and actionable feedback.

Install

mkdir -p .claude/skills/scholar-evaluation && curl -L -o skill.zip "https://mcp.directory/api/skills/download/1533" && unzip -o skill.zip -d .claude/skills/scholar-evaluation && rm skill.zip

Installs to .claude/skills/scholar-evaluation

About this skill

Scholar Evaluation

Overview

Apply the ScholarEval framework to systematically evaluate scholarly and research work. This skill provides structured evaluation methodology based on peer-reviewed research assessment criteria, enabling comprehensive analysis of academic papers, research proposals, literature reviews, and scholarly writing across multiple quality dimensions.

When to Use This Skill

Use this skill when:

  • Evaluating research papers for quality and rigor
  • Assessing literature review comprehensiveness and quality
  • Reviewing research methodology design
  • Scoring data analysis approaches
  • Evaluating scholarly writing and presentation
  • Providing structured feedback on academic work
  • Benchmarking research quality against established criteria
  • Assessing publication readiness for target venues
  • Providing quantitative evaluation to complement qualitative peer review

Visual Enhancement with Scientific Schematics

When creating documents with this skill, always consider adding scientific diagrams and schematics to enhance visual communication.

If your document does not already contain schematics or diagrams:

  • Use the scientific-schematics skill to generate AI-powered publication-quality diagrams
  • Simply describe your desired diagram in natural language
  • Nano Banana Pro will automatically generate, review, and refine the schematic

For new documents: Scientific schematics should be generated by default to visually represent key concepts, workflows, architectures, or relationships described in the text.

How to generate schematics:

python scripts/generate_schematic.py "your diagram description" -o figures/output.png

The AI will automatically:

  • Create publication-quality images with proper formatting
  • Review and refine through multiple iterations
  • Ensure accessibility (colorblind-friendly, high contrast)
  • Save outputs in the figures/ directory

When to add schematics:

  • Evaluation framework diagrams
  • Quality assessment criteria decision trees
  • Scholarly workflow visualizations
  • Assessment methodology flowcharts
  • Scoring rubric visualizations
  • Evaluation process diagrams
  • Any complex concept that benefits from visualization

For detailed guidance on creating schematics, refer to the scientific-schematics skill documentation.


Evaluation Workflow

Step 1: Initial Assessment and Scope Definition

Begin by identifying the type of scholarly work being evaluated and the evaluation scope:

Work Types:

  • Full research paper (empirical, theoretical, or review)
  • Research proposal or protocol
  • Literature review (systematic, narrative, or scoping)
  • Thesis or dissertation chapter
  • Conference abstract or short paper

Evaluation Scope:

  • Comprehensive (all dimensions)
  • Targeted (specific aspects like methodology or writing)
  • Comparative (benchmarking against other work)

Ask the user to clarify if the scope is ambiguous.

Step 2: Dimension-Based Evaluation

Systematically evaluate the work across the ScholarEval dimensions. For each applicable dimension, assess quality, identify strengths and weaknesses, and provide scores where appropriate.

Refer to references/evaluation_framework.md for detailed criteria and rubrics for each dimension.

Core Evaluation Dimensions:

  1. Problem Formulation & Research Questions

    • Clarity and specificity of research questions
    • Theoretical or practical significance
    • Feasibility and scope appropriateness
    • Novelty and contribution potential
  2. Literature Review

    • Comprehensiveness of coverage
    • Critical synthesis vs. mere summarization
    • Identification of research gaps
    • Currency and relevance of sources
    • Proper contextualization
  3. Methodology & Research Design

    • Appropriateness for research questions
    • Rigor and validity
    • Reproducibility and transparency
    • Ethical considerations
    • Limitations acknowledgment
  4. Data Collection & Sources

    • Quality and appropriateness of data
    • Sample size and representativeness
    • Data collection procedures
    • Source credibility and reliability
  5. Analysis & Interpretation

    • Appropriateness of analytical methods
    • Rigor of analysis
    • Logical coherence
    • Alternative explanations considered
    • Results-claims alignment
  6. Results & Findings

    • Clarity of presentation
    • Statistical or qualitative rigor
    • Visualization quality
    • Interpretation accuracy
    • Implications discussion
  7. Scholarly Writing & Presentation

    • Clarity and organization
    • Academic tone and style
    • Grammar and mechanics
    • Logical flow
    • Accessibility to target audience
  8. Citations & References

    • Citation completeness
    • Source quality and appropriateness
    • Citation accuracy
    • Balance of perspectives
    • Adherence to citation standards
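The eight dimensions above can be tracked in a simple per-dimension score sheet. A minimal sketch in Python — the field names and structure here are illustrative assumptions, not the skill's actual schema:

```python
# Illustrative score sheet for the eight ScholarEval dimensions.
# Dimension keys and record fields are assumptions, not the skill's real schema.
DIMENSIONS = [
    "problem_formulation", "literature_review", "methodology",
    "data_collection", "analysis", "results", "writing", "citations",
]

def new_score_sheet():
    """Return an empty per-dimension record: score (1-5), strengths, weaknesses."""
    return {d: {"score": None, "strengths": [], "weaknesses": []} for d in DIMENSIONS}

sheet = new_score_sheet()
sheet["methodology"]["score"] = 4
sheet["methodology"]["strengths"].append("Validation procedure is well specified")
```

Keeping strengths and weaknesses alongside each numeric score makes Step 3's qualitative and quantitative outputs easy to produce from one structure.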

Step 3: Scoring and Rating

For each evaluated dimension, provide:

Qualitative Assessment:

  • Key strengths (2-3 specific points)
  • Areas for improvement (2-3 specific points)
  • Critical issues (if any)

Quantitative Scoring (Optional): Use a 5-point scale where applicable:

  • 5: Excellent - Exemplary quality, publishable in top venues
  • 4: Good - Strong quality with minor improvements needed
  • 3: Adequate - Acceptable quality with notable areas for improvement
  • 2: Needs Improvement - Significant revisions required
  • 1: Poor - Fundamental issues requiring major revision

To calculate aggregate scores programmatically, use scripts/calculate_scores.py.
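The aggregation the script performs can be sketched as a weighted average of 1-5 dimension scores. This is a hedged reimplementation for illustration only — the weighting and rounding behavior of the actual scripts/calculate_scores.py may differ:

```python
# Weighted average of 1-5 dimension scores.
# Equal weights by default; weighting scheme is an assumption.
def aggregate_score(scores, weights=None):
    """scores: {dimension: 1-5}; weights: optional {dimension: float}."""
    if weights is None:
        weights = {d: 1.0 for d in scores}
    total_weight = sum(weights[d] for d in scores)
    weighted = sum(scores[d] * weights[d] for d in scores)
    return round(weighted / total_weight, 2)

aggregate_score({"methodology": 4, "analysis": 3, "writing": 5})  # → 4.0
```

Weighting lets an evaluator emphasize dimensions that matter most for the venue (e.g., methodology for an empirical journal submission).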

Step 4: Synthesize Overall Assessment

Provide an integrated evaluation summary:

  1. Overall Quality Assessment - Holistic judgment of the work's scholarly merit
  2. Major Strengths - 3-5 key strengths across dimensions
  3. Critical Weaknesses - 3-5 primary areas requiring attention
  4. Priority Recommendations - Ranked list of improvements by impact
  5. Publication Readiness (if applicable) - Assessment of suitability for target venues

Step 5: Provide Actionable Feedback

Transform evaluation findings into constructive, actionable feedback:

Feedback Structure:

  • Specific - Reference exact sections, paragraphs, or page numbers
  • Actionable - Provide concrete suggestions for improvement
  • Prioritized - Rank recommendations by importance and feasibility
  • Balanced - Acknowledge strengths while addressing weaknesses
  • Evidence-based - Ground feedback in evaluation criteria

Feedback Format Options:

  • Structured report with dimension-by-dimension analysis
  • Annotated comments mapped to specific document sections
  • Executive summary with key findings and recommendations
  • Comparative analysis against benchmark standards

Step 6: Contextual Considerations

Adjust evaluation approach based on:

Stage of Development:

  • Early draft: Focus on conceptual and structural issues
  • Advanced draft: Focus on refinement and polish
  • Final submission: Comprehensive quality check

Purpose and Venue:

  • Journal article: High standards for rigor and contribution
  • Conference paper: Balance novelty with presentation clarity
  • Student work: Educational feedback with developmental focus
  • Grant proposal: Emphasis on feasibility and impact

Discipline-Specific Norms:

  • STEM fields: Emphasis on reproducibility and statistical rigor
  • Social sciences: Balance quantitative and qualitative standards
  • Humanities: Focus on argumentation and scholarly interpretation

Resources

references/evaluation_framework.md

Detailed evaluation criteria, rubrics, and quality indicators for each ScholarEval dimension. Load this reference when conducting evaluations to access specific assessment guidelines and scoring rubrics.

Search patterns for quick access:

  • "Problem Formulation criteria"
  • "Literature Review rubric"
  • "Methodology assessment"
  • "Data quality indicators"
  • "Analysis rigor standards"
  • "Writing quality checklist"

scripts/calculate_scores.py

Python script for calculating aggregate evaluation scores from dimension-level ratings. Supports weighted averaging, threshold analysis, and score visualization.

Usage:

python scripts/calculate_scores.py --scores <dimension_scores.json> --output <report.txt>
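The expected shape of dimension_scores.json is not documented in this excerpt. A plausible input file consistent with the 5-point scale above — key names and the optional weights field are assumptions — could be produced like this:

```python
import json

# Hypothetical dimension_scores.json payload; keys and structure are
# assumptions based on the 5-point scale described in Step 3.
payload = {
    "scores": {
        "problem_formulation": 4,
        "literature_review": 3,
        "methodology": 4,
        "analysis": 3,
        "writing": 5,
    },
    "weights": {"methodology": 2.0},  # optional per-dimension weighting
}
with open("dimension_scores.json", "w") as f:
    json.dump(payload, f, indent=2)
```

Check the script's own help output (python scripts/calculate_scores.py --help) for the authoritative input format.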

Best Practices

  1. Maintain Objectivity - Base evaluations on established criteria, not personal preferences
  2. Be Comprehensive - Evaluate all applicable dimensions systematically
  3. Provide Evidence - Support assessments with specific examples from the work
  4. Stay Constructive - Frame weaknesses as opportunities for improvement
  5. Consider Context - Adjust expectations based on work stage and purpose
  6. Document Rationale - Explain the reasoning behind assessments and scores
  7. Encourage Strengths - Explicitly acknowledge what the work does well
  8. Prioritize Feedback - Focus on high-impact improvements first

Example Evaluation Workflow

User Request: "Evaluate this research paper on machine learning for drug discovery"

Response Process:

  1. Identify work type (empirical research paper) and scope (comprehensive evaluation)
  2. Load references/evaluation_framework.md for detailed criteria
  3. Systematically assess each dimension:
    • Problem formulation: Clear research question about ML model performance
    • Literature review: Comprehensive coverage of recent ML and drug discovery work
    • Methodology: Appropriate deep learning architecture with validation procedures
    • [Continue through all dimensions...]
  4. Calculate dimension scores and overall assessment
  5. Synthesize findings into structured report highlighting:
    • Strong methodology and reproducible code
    • Needs more diverse dataset evaluation
    • Writing could improve clarity in results section
  6. Provide prioritized recommendations with specific suggestions

Integration with Scientific Writer

This skill integrates seamlessly with the scientific writer workflow:

After Paper


Content truncated.
