exploratory-data-analysis


Perform comprehensive exploratory data analysis on scientific data files across 200+ file formats. This skill should be used when analyzing any scientific data file to understand its structure, content, quality, and characteristics. Automatically detects file type and generates detailed markdown reports with format-specific analysis, quality metrics, and downstream analysis recommendations. Covers chemistry, bioinformatics, microscopy, spectroscopy, proteomics, metabolomics, and general scientific data formats.

Install

mkdir -p .claude/skills/exploratory-data-analysis && curl -L -o skill.zip "https://mcp.directory/api/skills/download/165" && unzip -o skill.zip -d .claude/skills/exploratory-data-analysis && rm skill.zip

Installs to .claude/skills/exploratory-data-analysis

About this skill

Exploratory Data Analysis

Overview

Perform comprehensive exploratory data analysis (EDA) on scientific data files across multiple domains. This skill provides automated file type detection, format-specific analysis, data quality assessment, and generates detailed markdown reports suitable for documentation and downstream analysis planning.

Key Capabilities:

  • Automatic detection and analysis of 200+ scientific file formats
  • Comprehensive format-specific metadata extraction
  • Data quality and integrity assessment
  • Statistical summaries and distributions
  • Visualization recommendations
  • Downstream analysis suggestions
  • Markdown report generation

When to Use This Skill

Use this skill when:

  • User provides a path to a scientific data file for analysis
  • User asks to "explore", "analyze", or "summarize" a data file
  • User wants to understand the structure and content of scientific data
  • User needs a comprehensive report of a dataset before analysis
  • User wants to assess data quality or completeness
  • User asks what type of analysis is appropriate for a file

Supported File Categories

The skill has comprehensive coverage of scientific file formats organized into six major categories:

1. Chemistry and Molecular Formats (60+ extensions)

Structure files, computational chemistry outputs, molecular dynamics trajectories, and chemical databases.

File types include: .pdb, .cif, .mol, .mol2, .sdf, .xyz, .smi, .gro, .log, .fchk, .cube, .dcd, .xtc, .trr, .prmtop, .psf, and more.

Reference file: references/chemistry_molecular_formats.md

2. Bioinformatics and Genomics Formats (50+ extensions)

Sequence data, alignments, annotations, variants, and expression data.

File types include: .fasta, .fastq, .sam, .bam, .vcf, .bed, .gff, .gtf, .bigwig, .h5ad, .loom, .counts, .mtx, and more.

Reference file: references/bioinformatics_genomics_formats.md

3. Microscopy and Imaging Formats (45+ extensions)

Microscopy images, medical imaging, whole slide imaging, and electron microscopy.

File types include: .tif, .nd2, .lif, .czi, .ims, .dcm, .nii, .mrc, .dm3, .vsi, .svs, .ome.tiff, and more.

Reference file: references/microscopy_imaging_formats.md

4. Spectroscopy and Analytical Chemistry Formats (35+ extensions)

NMR, mass spectrometry, IR/Raman, UV-Vis, X-ray, chromatography, and other analytical techniques.

File types include: .fid, .mzML, .mzXML, .raw, .mgf, .spc, .jdx, .xy, .cif (crystallography), .wdf, and more.

Reference file: references/spectroscopy_analytical_formats.md

5. Proteomics and Metabolomics Formats (30+ extensions)

Mass spec proteomics, metabolomics, lipidomics, and multi-omics data.

File types include: .mzML, .pepXML, .protXML, .mzid, .mzTab, .sky, .mgf, .msp, .h5ad, and more.

Reference file: references/proteomics_metabolomics_formats.md

6. General Scientific Data Formats (30+ extensions)

Arrays, tables, hierarchical data, compressed archives, and common scientific formats.

File types include: .npy, .npz, .csv, .xlsx, .json, .hdf5, .zarr, .parquet, .mat, .fits, .nc, .xml, and more.

Reference file: references/general_scientific_formats.md

Workflow

Step 1: File Type Detection

When a user provides a file path, first identify the file type:

  1. Extract the file extension
  2. Look up the extension in the appropriate reference file
  3. Identify the file category and format description
  4. Load format-specific information

Example:

User: "Analyze data.fastq"
→ Extension: .fastq
→ Category: bioinformatics_genomics
→ Format: FASTQ Format (sequence data with quality scores)
→ Reference: references/bioinformatics_genomics_formats.md
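The detection step amounts to an extension-to-category lookup. A minimal sketch, using an illustrative subset of the mapping (the skill's actual table covers 200+ extensions across the six reference files):

```python
from pathlib import Path

# Illustrative subset of the extension → category mapping;
# the real skill's table is far larger.
EXTENSION_CATEGORIES = {
    ".pdb": "chemistry_molecular",
    ".fastq": "bioinformatics_genomics",
    ".nd2": "microscopy_imaging",
    ".mzml": "spectroscopy_analytical",
    ".npy": "general_scientific",
}

def detect_category(filepath: str) -> str:
    """Return the reference category for a file, or 'unknown'."""
    ext = Path(filepath).suffix.lower()
    return EXTENSION_CATEGORIES.get(ext, "unknown")

print(detect_category("data.fastq"))  # bioinformatics_genomics
```

Note that `Path.suffix` only returns the last extension, so compound extensions such as `.ome.tiff` need special-casing in a real implementation.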

Step 2: Load Format-Specific Information

Based on the file type, read the corresponding reference file to understand:

  • Typical Data: What kind of data this format contains
  • Use Cases: Common applications for this format
  • Python Libraries: How to read the file in Python
  • EDA Approach: What analyses are appropriate for this data type

Search the reference file for the specific extension (e.g., search for "### .fastq" in bioinformatics_genomics_formats.md).

Step 3: Perform Data Analysis

Use the scripts/eda_analyzer.py script OR implement custom analysis:

Option A: Use the analyzer script

# The script automatically:
# 1. Detects file type
# 2. Loads reference information
# 3. Performs format-specific analysis
# 4. Generates markdown report

python scripts/eda_analyzer.py <filepath> [output.md]

Option B: Custom analysis in the conversation

Based on the format information from the reference file, perform the appropriate analysis:

For tabular data (CSV, TSV, Excel):

  • Load with pandas
  • Check dimensions, data types
  • Analyze missing values
  • Calculate summary statistics
  • Identify outliers
  • Check for duplicates
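The tabular checks above can be sketched with pandas. The CSV contents here are a hypothetical stand-in for a real file:

```python
import io
import pandas as pd

# Hypothetical data standing in for a real experiment_results.csv
csv_text = "sample,value\nA,1.0\nB,2.0\nB,2.0\nC,3.0\nD,100.0\nE,\n"
df = pd.read_csv(io.StringIO(csv_text))

print(df.shape)                # dimensions
print(df.dtypes)               # data types
print(df.isna().sum())         # missing values per column
print(df.describe())           # summary statistics
print(df.duplicated().sum())   # duplicate rows

# Simple IQR-based outlier check on a numeric column
q1, q3 = df["value"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["value"] < q1 - 1.5 * iqr) | (df["value"] > q3 + 1.5 * iqr)]
print(len(outliers), "outlier row(s)")
```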

For sequence data (FASTA, FASTQ):

  • Count sequences
  • Analyze length distributions
  • Calculate GC content
  • Assess quality scores (FASTQ)
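These sequence checks can be sketched without external dependencies. This toy parser assumes simple 4-line FASTQ records with Phred+33 quality encoding; for real files, Biopython's `SeqIO.parse` is the more robust choice:

```python
# Minimal FASTQ statistics; assumes simple 4-line records (no wrapping)
def fastq_stats(text: str) -> dict:
    lines = [l for l in text.strip().splitlines() if l]
    records = [lines[i:i + 4] for i in range(0, len(lines), 4)]
    lengths = [len(r[1]) for r in records]
    total = sum(lengths)
    gc = sum(r[1].count("G") + r[1].count("C") for r in records)
    # Phred+33 encoding: quality score = ASCII code - 33
    quals = [ord(c) - 33 for r in records for c in r[3]]
    return {
        "reads": len(records),
        "mean_length": total / len(records),
        "gc_percent": 100.0 * gc / total,
        "mean_quality": sum(quals) / len(quals),
    }

example = "@read1\nGGCCAATT\n+\nIIIIIIII\n@read2\nATAT\n+\n!!!!\n"
stats = fastq_stats(example)
print(stats)
```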

For images (TIFF, ND2, CZI):

  • Check dimensions (X, Y, Z, C, T)
  • Analyze bit depth and value range
  • Extract metadata (channels, timestamps, spatial calibration)
  • Calculate intensity statistics
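Once an image is loaded (e.g. via `tifffile.imread` or a vendor reader), these checks reduce to array operations. A sketch on a synthetic (T, C, Y, X) stack standing in for real microscopy data:

```python
import numpy as np

# Synthetic stand-in for image data loaded from a real file;
# 12-bit values stored in a uint16 container, as is common for cameras
rng = np.random.default_rng(0)
stack = rng.integers(0, 4096, size=(3, 2, 64, 64), dtype=np.uint16)

t, c, y, x = stack.shape
print(f"dimensions: T={t} C={c} Y={y} X={x}")
print(f"dtype: {stack.dtype}, value range: {stack.min()}-{stack.max()}")

# Per-channel intensity statistics
for ch in range(c):
    plane = stack[:, ch]
    print(f"channel {ch}: mean={plane.mean():.1f}, std={plane.std():.1f}")

# Pixels at the dtype maximum suggest clipping/saturation
saturated = int((stack == np.iinfo(stack.dtype).max).sum())
print(f"saturated pixels: {saturated}")
```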

For arrays (NPY, HDF5):

  • Check shape and dimensions
  • Analyze data type
  • Calculate statistical summaries
  • Check for missing/invalid values
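For arrays, the same checklist is a few NumPy calls. The array here is a hypothetical stand-in for `np.load("data.npy")`:

```python
import numpy as np

# Hypothetical array standing in for np.load("data.npy")
arr = np.array([[1.0, 2.0, np.nan], [4.0, np.inf, 6.0]])

print(arr.shape, arr.ndim, arr.dtype)   # shape and data type

# Missing/invalid values
n_nan = int(np.isnan(arr).sum())
n_inf = int(np.isinf(arr).sum())
print(f"NaN: {n_nan}, Inf: {n_inf}")

# Summaries over the finite values only
finite = arr[np.isfinite(arr)]
print(f"min={finite.min()}, max={finite.max()}, mean={finite.mean():.2f}")
```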

Step 4: Generate Comprehensive Report

Create a markdown report with the following sections:

Required Sections:

  1. Title and Metadata

    • Filename and timestamp
    • File size and location
  2. Basic Information

    • File properties
    • Format identification
  3. File Type Details

    • Format description from reference
    • Typical data content
    • Common use cases
    • Python libraries for reading
  4. Data Analysis

    • Structure and dimensions
    • Statistical summaries
    • Quality assessment
    • Data characteristics
  5. Key Findings

    • Notable patterns
    • Potential issues
    • Quality metrics
  6. Recommendations

    • Preprocessing steps
    • Appropriate analyses
    • Tools and methods
    • Visualization approaches

Template Location

Use assets/report_template.md as a guide for report structure.

Step 5: Save Report

Save the markdown report with a descriptive filename:

  • Pattern: {original_filename}_eda_report.md
  • Example: experiment_data.fastq → experiment_data_eda_report.md
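The naming pattern can be sketched with `pathlib`:

```python
from pathlib import Path

def report_name(filepath: str) -> str:
    """Derive the report filename: {original_filename}_eda_report.md."""
    return f"{Path(filepath).stem}_eda_report.md"

print(report_name("experiment_data.fastq"))  # experiment_data_eda_report.md
```

Note that `Path.stem` strips only the final extension, so a file like `image.ome.tiff` would yield `image.ome_eda_report.md`.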

Detailed Format References

Each reference file contains comprehensive information for dozens of file types. To find information about a specific format:

  1. Identify the category from the extension
  2. Read the appropriate reference file
  3. Search for the section heading matching the extension (e.g., "### .pdb")
  4. Extract the format information

Reference File Structure

Each format entry includes:

  • Description: What the format is
  • Typical Data: What it contains
  • Use Cases: Common applications
  • Python Libraries: How to read it (with code examples)
  • EDA Approach: Specific analyses to perform

Example lookup:

### .pdb - Protein Data Bank
**Description:** Standard format for 3D structures of biological macromolecules
**Typical Data:** Atomic coordinates, residue information, secondary structure
**Use Cases:** Protein structure analysis, molecular visualization, docking
**Python Libraries:**
- `Biopython`: `Bio.PDB`
- `MDAnalysis`: `MDAnalysis.Universe('file.pdb')`
**EDA Approach:**
- Structure validation (bond lengths, angles)
- B-factor distribution
- Missing residues detection
- Ramachandran plots

Best Practices

Reading Reference Files

Reference files are large (10,000+ words each). To efficiently use them:

  1. Search by extension: Use grep or a small regex to extract only the entry for the target format

    import re
    with open('references/chemistry_molecular_formats.md', 'r') as f:
        content = f.read()
    # Capture the "### .pdb" entry up to the next heading (or end of file)
    pattern = r'### \.pdb\b.*?(?=\n### |\Z)'
    match = re.search(pattern, content, re.IGNORECASE | re.DOTALL)
    
  2. Extract relevant sections: Don't load entire reference files into context unnecessarily

  3. Cache format info: If analyzing multiple files of the same type, reuse the format information

Data Analysis

  1. Sample large files: For files with millions of records, analyze a representative sample
  2. Handle errors gracefully: Many scientific formats require specific libraries; provide clear installation instructions
  3. Validate metadata: Cross-check metadata consistency (e.g., stated dimensions vs actual data)
  4. Consider data provenance: Note instrument, software versions, processing steps

Report Generation

  1. Be comprehensive: Include all relevant information for downstream analysis
  2. Be specific: Provide concrete recommendations based on the file type
  3. Be actionable: Suggest specific next steps and tools
  4. Include code examples: Show how to load and work with the data

Examples

Example 1: Analyzing a FASTQ file

# User provides: "Analyze reads.fastq"

# 1. Detect file type
extension = '.fastq'
category = 'bioinformatics_genomics'

# 2. Read reference info
# Search references/bioinformatics_genomics_formats.md for "### .fastq"

# 3. Perform analysis (requires Biopython: pip install biopython)
from Bio import SeqIO
# For large files, iterate over SeqIO.parse instead of materializing a list
sequences = list(SeqIO.parse('reads.fastq', 'fastq'))
# Calculate: read count, length distribution, quality scores, GC content

# 4. Generate report
# Include: format description, analysis results, QC recommendations

# 5. Save as: reads_eda_report.md

Example 2: Analyzing a CSV dataset

# User provides: "Explore experiment_results.csv"

# 1. Detect: .csv → general_scientific

# 2. Load reference for CSV format

# 3. Analyze
import pandas as pd
df = pd.read_csv('experiment_results.csv')
# Dimensions, dtypes, missing values, statistics, …

---

*Content truncated.*
