lamindb
This skill should be used when working with LaminDB, an open-source data framework for biology that makes data queryable, traceable, reproducible, and FAIR. Use when managing biological datasets (scRNA-seq, spatial, flow cytometry, etc.), tracking computational workflows, curating and validating data with biological ontologies, building data lakehouses, or ensuring data lineage and reproducibility in biological research. Covers data management, annotation, ontologies (genes, cell types, diseases, tissues), schema validation, integrations with workflow managers (Nextflow, Snakemake) and MLOps platforms (W&B, MLflow), and deployment strategies.
Install
mkdir -p .claude/skills/lamindb && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5174" && unzip -o skill.zip -d .claude/skills/lamindb && rm skill.zip
Installs to .claude/skills/lamindb
About this skill
LaminDB
Overview
LaminDB is an open-source data framework for biology designed to make data queryable, traceable, reproducible, and FAIR (Findable, Accessible, Interoperable, Reusable). It provides a unified platform that combines lakehouse architecture, lineage tracking, feature stores, biological ontologies, LIMS (Laboratory Information Management System), and ELN (Electronic Lab Notebook) capabilities through a single Python API.
Core Value Proposition:
- Queryability: Search and filter datasets by metadata, features, and ontology terms
- Traceability: Automatic lineage tracking from raw data through analysis to results
- Reproducibility: Version control for data, code, and environment
- FAIR Compliance: Standardized annotations using biological ontologies
When to Use This Skill
Use this skill when:
- Managing biological datasets: scRNA-seq, bulk RNA-seq, spatial transcriptomics, flow cytometry, multi-modal data, EHR data
- Tracking computational workflows: Notebooks, scripts, pipeline execution (Nextflow, Snakemake, Redun)
- Curating and validating data: Schema validation, standardization, ontology-based annotation
- Working with biological ontologies: Genes, proteins, cell types, tissues, diseases, pathways (via Bionty)
- Building data lakehouses: Unified query interface across multiple datasets
- Ensuring reproducibility: Automatic versioning, lineage tracking, environment capture
- Integrating ML pipelines: Connecting with Weights & Biases, MLflow, HuggingFace, scVI-tools
- Deploying data infrastructure: Setting up local or cloud-based data management systems
- Collaborating on datasets: Sharing curated, annotated data with standardized metadata
Core Capabilities
LaminDB provides six interconnected capability areas, each documented in detail in the references folder.
1. Core Concepts and Data Lineage
Core entities:
- Artifacts: Versioned datasets (DataFrame, AnnData, Parquet, Zarr, etc.)
- Records: Experimental entities (samples, perturbations, instruments)
- Runs & Transforms: Computational lineage tracking (what code produced what data)
- Features: Typed metadata fields for annotation and querying
Key workflows:
- Create and version artifacts from files or Python objects
- Track notebook/script execution with ln.track() and ln.finish()
- Annotate artifacts with typed features
- Visualize data lineage graphs with artifact.view_lineage()
- Query by provenance (find all outputs from specific code/inputs)
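A minimal sketch of this loop on a toy DataFrame; the constructor name from_df appears as from_dataframe in some lamindb versions:

import lamindb as ln
import pandas as pd

ln.track()  # register this script/notebook as a Transform and open a Run

df = pd.DataFrame({"gene": ["CD8A", "CD4"], "count": [10, 3]})

# Create a versioned artifact from an in-memory object
artifact = ln.Artifact.from_df(
    df, key="examples/counts.parquet", description="toy counts"
).save()

artifact.view_lineage()  # plot the graph of transforms/runs that produced it

ln.finish()  # close the run and capture outputs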
Reference: references/core-concepts.md - Read this for detailed information on artifacts, records, runs, transforms, features, versioning, and lineage tracking.
2. Data Management and Querying
Query capabilities:
- Registry exploration and lookup with auto-complete
- Single record retrieval with get(), one(), one_or_none()
- Filtering with comparison operators (__gt, __lte, __contains, __startswith)
- Feature-based queries (query by annotated metadata)
- Cross-registry traversal with double-underscore syntax
- Full-text search across registries
- Advanced logical queries with Q objects (AND, OR, NOT)
- Streaming large datasets without loading into memory
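A hedged sketch of a few of these patterns; the fields (suffix, created_by__handle) follow the lamindb docs and the values are placeholders:

import lamindb as ln

# Comparison operators and cross-registry traversal use double underscores
big_recent = ln.Artifact.filter(size__gt=1e8, created_by__handle="testuser1")

# Logical composition with Q objects
either = ln.Artifact.filter(ln.Q(suffix=".h5ad") | ln.Q(suffix=".zarr"))

# Full-text search, then ordered browsing as a DataFrame
ln.Artifact.search("pbmc")
ln.Artifact.filter(key__startswith="scrna/").order_by("-created_at").to_dataframe()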
Key workflows:
- Browse artifacts with filters and ordering
- Query by features, creation date, creator, size, etc.
- Stream large files in chunks or with array slicing
- Organize data with hierarchical keys
- Group artifacts into collections
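A short sketch of grouping and streaming, assuming artifacts registered under scrna/ keys as in Use Case 2 below; open() returns a backed accessor whose exact type depends on the file format:

import lamindb as ln

# Group related artifacts into a versioned, queryable collection
batches = list(ln.Artifact.filter(key__startswith="scrna/batch_"))
collection = ln.Collection(batches, key="scrna/all-batches").save()

# Stream instead of loading into memory
artifact = ln.Artifact.get(key="scrna/batch_0.h5ad")
backed = artifact.open()  # backed accessor, e.g. AnnData-like for .h5ad/.zarr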
Reference: references/data-management.md - Read this for comprehensive query patterns, filtering examples, streaming strategies, and data organization best practices.
3. Annotation and Validation
Curation process:
- Validation: Confirm datasets match desired schemas
- Standardization: Fix typos, map synonyms to canonical terms
- Annotation: Link datasets to metadata entities for queryability
Schema types:
- Flexible schemas: Validate only known columns, allow additional metadata
- Minimal required schemas: Specify essential columns, permit extras
- Strict schemas: Complete control over structure and values
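A sketch of defining typed features and a minimal required schema; constructor details vary across versions (ln.Schema superseded ln.FeatureSet around lamindb 1.0), so treat the keyword names as assumptions:

import lamindb as ln
import bionty as bt

# Register typed features once; schemas then reference them
ln.Feature(name="perturbation", dtype=str).save()
ln.Feature(name="cell_type", dtype=bt.CellType).save()

# Minimal required schema: these columns must validate, extras are allowed
schema = ln.Schema(
    name="scrna-minimal",
    features=[
        ln.Feature.get(name="perturbation"),
        ln.Feature.get(name="cell_type"),
    ],
).save()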
Supported data types:
- DataFrames (Parquet, CSV)
- AnnData (single-cell genomics)
- MuData (multi-modal)
- SpatialData (spatial transcriptomics)
- TileDB-SOMA (scalable arrays)
Key workflows:
- Define features and schemas for data validation
- Use DataFrameCurator or AnnDataCurator for validation
- Standardize values with .cat.standardize()
- Map to ontologies with .cat.add_ontology()
- Save curated artifacts with schema linkage
- Query validated datasets by features
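Putting these steps together, a hedged sketch that assumes df is a pandas DataFrame and schema a previously saved ln.Schema:

import lamindb as ln

curator = ln.curators.DataFrameCurator(df, schema)
try:
    curator.validate()
except ln.errors.ValidationError:
    curator.cat.standardize("cell_type")  # map typos/synonyms to validated terms
    curator.validate()

# Save with schema linkage so the dataset is queryable by its features
artifact = curator.save_artifact(key="curated/experiment1.parquet")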
Reference: references/annotation-validation.md - Read this for detailed curation workflows, schema design patterns, handling validation errors, and best practices.
4. Biological Ontologies
Available ontologies (via Bionty):
- Genes (Ensembl), Proteins (UniProt)
- Cell types (CL), Cell lines (CLO)
- Tissues (Uberon), Diseases (Mondo, DOID)
- Phenotypes (HPO), Pathways (GO)
- Experimental factors (EFO), Developmental stages
- Organisms (NCBItaxon), Drugs (DrugBank)
Key workflows:
- Import public ontologies with bt.CellType.import_source()
- Search ontologies with keyword or exact matching
- Standardize terms using synonym mapping
- Explore hierarchical relationships (parents, children, ancestors)
- Validate data against ontology terms
- Annotate datasets with ontology records
- Create custom terms and hierarchies
- Handle multi-organism contexts (human, mouse, etc.)
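A compact sketch of these operations, with "T cell" as an illustrative term:

import bionty as bt

bt.CellType.import_source()                    # load the public Cell Ontology
bt.CellType.search("T cell")                   # keyword search across names/synonyms
tcell = bt.CellType.get(name="T cell")         # exact retrieval
tcell.parents.to_dataframe()                   # navigate the hierarchy upward
bt.CellType.standardize(["T-cell", "b cell"])  # map synonyms to canonical names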
Reference: references/ontologies.md - Read this for comprehensive ontology operations, standardization strategies, hierarchy navigation, and annotation workflows.
5. Integrations
Workflow managers:
- Nextflow: Track pipeline processes and outputs
- Snakemake: Integrate into Snakemake rules
- Redun: Combine with Redun task tracking
MLOps platforms:
- Weights & Biases: Link experiments with data artifacts
- MLflow: Track models and experiments
- HuggingFace: Track model fine-tuning
- scVI-tools: Single-cell analysis workflows
Storage systems:
- Local filesystem, AWS S3, Google Cloud Storage
- S3-compatible (MinIO, Cloudflare R2)
- HTTP/HTTPS endpoints (read-only)
- HuggingFace datasets
Array stores:
- TileDB-SOMA (with cellxgene support)
- DuckDB for SQL queries on Parquet files
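For example, a hedged sketch of running SQL over a Parquet artifact via DuckDB, assuming an artifact registered under datasets/train.parquet as in Use Case 3:

import duckdb
import lamindb as ln

artifact = ln.Artifact.get(key="datasets/train.parquet")
path = artifact.cache()  # local cached copy of the (possibly cloud) file
duckdb.sql(
    f"SELECT condition, count(*) AS n FROM read_parquet('{path}') GROUP BY condition"
).show()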
Visualization:
- Vitessce for interactive spatial/single-cell visualization
Version control:
- Git integration for source code tracking
Reference: references/integrations.md - Read this for integration patterns, code examples, and troubleshooting for third-party systems.
6. Setup and Deployment
Installation:
- Basic: uv pip install lamindb
- With extras: uv pip install 'lamindb[gcp,zarr,fcs]'
- Modules: bionty, wetlab, clinical
Instance types:
- Local SQLite (development)
- Cloud storage + SQLite (small teams)
- Cloud storage + PostgreSQL (production)
Storage options:
- Local filesystem
- AWS S3 with configurable regions and permissions
- Google Cloud Storage
- S3-compatible endpoints (MinIO, Cloudflare R2)
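A sketch of initializing instances from Python; the lamin init CLI mirrors these options, and the modules keyword was named schema in pre-1.0 releases, so treat argument names as assumptions:

import lamindb as ln

# Local SQLite instance for development
ln.setup.init(storage="./lamin-dev", modules="bionty")

# Production-style instance: cloud storage + managed Postgres
ln.setup.init(
    storage="s3://my-bucket",
    db="postgresql://user:pwd@host:5432/db",
    modules="bionty",
)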
Configuration:
- Cache management for cloud files
- Multi-user system configurations
- Git repository sync
- Environment variables
Deployment patterns:
- Local dev → Cloud production migration
- Multi-region deployments
- Shared storage with personal instances
Reference: references/setup-deployment.md - Read this for detailed installation, configuration, storage setup, database management, security best practices, and troubleshooting.
Common Use Case Workflows
Use Case 1: Single-Cell RNA-seq Analysis with Ontology Validation
import lamindb as ln
import bionty as bt
import anndata as ad
# Start tracking
ln.track(params={"analysis": "scRNA-seq QC and annotation"})
# Import cell type ontology
bt.CellType.import_source()
# Load data
adata = ad.read_h5ad("raw_counts.h5ad")
# Validate and standardize cell types
adata.obs["cell_type"] = bt.CellType.standardize(adata.obs["cell_type"])
# Curate with a schema (assumes `schema` is a previously saved ln.Schema)
curator = ln.curators.AnnDataCurator(adata, schema)
curator.validate()
artifact = curator.save_artifact(key="scrna/validated.h5ad")
# Link validated ontology records for queryability
cell_types = bt.CellType.from_values(adata.obs.cell_type)
artifact.cell_types.add(*cell_types)  # accessor name may vary across lamindb versions
ln.finish()
Use Case 2: Building a Queryable Data Lakehouse
import lamindb as ln
import anndata as ad
# Register multiple experiments
# Assumes data_files, tissues, conditions are defined upstream
for i, file in enumerate(data_files):
    artifact = ln.Artifact.from_anndata(
        ad.read_h5ad(file),
        key=f"scrna/batch_{i}.h5ad",
        description=f"scRNA-seq batch {i}",
    ).save()
    # Annotate with features
    artifact.features.add_values({
        "batch": i,
        "tissue": tissues[i],
        "condition": conditions[i],
    })
# Query across all experiments
immune_datasets = ln.Artifact.filter(
    key__startswith="scrna/",
    tissue="PBMC",
    condition="treated",
)  # returns a QuerySet of artifacts; call .to_dataframe() only to inspect, not to iterate
# Load specific datasets
for artifact in immune_datasets:
    adata = artifact.load()
    # Analyze
Use Case 3: ML Pipeline with W&B Integration
import lamindb as ln
import wandb
# Initialize both systems
wandb.init(project="drug-response", name="exp-42")
ln.track(params={"model": "random_forest", "n_estimators": 100})
# Load training data from LaminDB
train_artifact = ln.Artifact.get(key="datasets/train.parquet")
train_data = train_artifact.load()
# Train model
model = train_model(train_data)
# Log to W&B
wandb.log({"accuracy": 0.95})
# Save model in LaminDB with W&B linkage
import joblib
joblib.dump(model, "model.pkl")
model_artifact = ln.Artifact("model.pkl", key="models/exp-42.pkl").save()
model_artifact.features.add_values({"wandb_run_id": wandb.run.id})  # assumes a "wandb_run_id" feature is registered
ln.finish()
wandb.finish()
Use Case 4: Nextflow Pipeline Integration
# In Nextflow process script
import lamindb as ln
ln.track()
# Load input artifact
input_artifact =
---
*Content truncated.*