bio-workflows-cytometry-pipeline
End-to-end flow cytometry workflow from FCS files to differential analysis. Orchestrates compensation, transformation, gating/clustering, and statistical testing with CATALYST/diffcyt. Use when processing flow or mass cytometry data end-to-end.
Install
mkdir -p .claude/skills/bio-workflows-cytometry-pipeline && curl -L -o skill.zip "https://mcp.directory/api/skills/download/9492" && unzip -o skill.zip -d .claude/skills/bio-workflows-cytometry-pipeline && rm skill.zip
Installs to .claude/skills/bio-workflows-cytometry-pipeline
About this skill
Version Compatibility
Reference examples tested with: FlowSOM 2.10+, edgeR 4.0+, flowCore 2.14+, ggplot2 3.5+, limma 3.58+, numpy 1.26+, pandas 2.2+, scanpy 1.10+, scikit-learn 1.4+
Before using code patterns, verify installed versions match. If versions differ:
- Python: pip show <package>, then help(module.function) to check signatures
- R: packageVersion('<pkg>'), then ?function_name to verify parameters
If code throws ImportError, AttributeError, or TypeError, introspect the installed package and adapt the example to match the actual API rather than retrying.
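The version check above can be scripted instead of done by hand. A minimal sketch using Python's standard `importlib.metadata`; the package names and baseline versions below are illustrative, taken from the compatibility list:

```python
from importlib import metadata

def version_mismatches(expected):
    """Return {package: installed_version} for packages whose installed
    major.minor differs from the tested baseline; missing packages are skipped."""
    def major_minor(v):
        return tuple(v.split('.')[:2])
    out = {}
    for pkg, baseline in expected.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue  # not installed: an ImportError will surface later anyway
        if major_minor(installed) != major_minor(baseline):
            out[pkg] = installed
    return out

# Baselines from the compatibility list above
print(version_mismatches({'numpy': '1.26', 'pandas': '2.2'}))
```

An empty dict means the installed versions match the tested baselines at the major.minor level.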
Flow Cytometry Pipeline
"Process my flow cytometry data from FCS to differential analysis" → Orchestrate compensation, transformation, doublet removal, FlowSOM clustering, phenotype annotation, and diffcyt differential testing across conditions.
Pipeline Overview
FCS Files ──> Compensation ──> Transformation ──> Gated/Clustered Data
│
▼
┌─────────────────────────────────────────────────┐
│ cytometry-pipeline │
├─────────────────────────────────────────────────┤
│ 1. Load FCS Files │
│ 2. Compensation & Transformation │
│ 3. QC & Filtering │
│ 4. Clustering (FlowSOM) or Gating │
│ 5. Dimensionality Reduction (UMAP) │
│ 6. Differential Abundance/State Analysis │
│ 7. Visualization │
└─────────────────────────────────────────────────┘
│
▼
Differential Cell Populations + Markers
Complete R Workflow (CATALYST)
library(CATALYST)
library(diffcyt)
library(SingleCellExperiment)
library(flowCore)
library(ggplot2)
# === 1. SETUP PANEL AND METADATA ===
# Panel definition
panel <- data.frame(
fcs_colname = c('FSC-A', 'SSC-A', 'CD45', 'CD3', 'CD4', 'CD8', 'CD19',
'CD14', 'CD56', 'HLA-DR', 'Ki67', 'IFNg'),
antigen = c('FSC', 'SSC', 'CD45', 'CD3', 'CD4', 'CD8', 'CD19',
'CD14', 'CD56', 'HLA-DR', 'Ki67', 'IFNg'),
marker_class = c('none', 'none', 'type', 'type', 'type', 'type', 'type',
'type', 'type', 'type', 'state', 'state')
)
# Sample metadata
md <- data.frame(
file_name = list.files('data/', pattern = '\\.fcs$'),
sample_id = paste0('Sample', 1:8),
condition = rep(c('Control', 'Treatment'), each = 4),
patient_id = rep(paste0('Patient', 1:4), 2)
)
cat('Loading', nrow(md), 'FCS files...\n')
# === 2. LOAD AND PREPARE DATA ===
fcs_files <- file.path('data', md$file_name)
fs <- read.flowSet(fcs_files)
# Apply compensation if a spillover matrix is stored in the FCS keywords
# (spillover() returns a list of candidate keywords; take the first entry)
fs_comp <- compensate(fs, spillover(fs[[1]])[[1]])
# Prepare SingleCellExperiment with CATALYST
sce <- prepData(fs_comp, panel, md,
                transform = TRUE,
                cofactor = 150, # arcsinh cofactor: ~150 for fluorescence flow, ~5 for CyTOF
                FACS = TRUE)
cat('Loaded', ncol(sce), 'cells\n')
# === 3. QC ===
# Per-sample cell counts
table(sce$sample_id)
# Expression distributions
plotExprs(sce, color_by = 'condition')
ggsave('qc_expression_distributions.png', width = 12, height = 8)
# Pseudobulk MDS plot for sample similarity (named plotMDS() in older CATALYST releases)
pbMDS(sce, color_by = 'condition')
ggsave('qc_mds.png', width = 8, height = 6)
# === 4. CLUSTERING ===
cat('Clustering...\n')
sce <- cluster(sce,
features = 'type', # Use lineage markers
xdim = 10, ydim = 10,
maxK = 20,
seed = 42)
# Metaclustering at different resolutions
table(cluster_ids(sce, 'meta20'))
# === 5. DIMENSIONALITY REDUCTION ===
cat('Running UMAP...\n')
sce <- runDR(sce, dr = 'UMAP', features = 'type')
# Plot UMAP
plotDR(sce, dr = 'UMAP', color_by = 'meta20')
ggsave('umap_clusters.png', width = 8, height = 6)
plotDR(sce, dr = 'UMAP', color_by = 'condition')
ggsave('umap_condition.png', width = 8, height = 6)
# === 6. CLUSTER ANNOTATION ===
# Heatmap of median marker expression by cluster
# (plotExprHeatmap returns a ComplexHeatmap object, so save via a graphics device, not ggsave)
png('heatmap_clusters.png', width = 1200, height = 800)
print(plotExprHeatmap(sce, features = 'type', k = 'meta20',
                      by = 'cluster_id', scale = 'last', bars = TRUE))
dev.off()
# Manual annotation based on markers
cluster_annotations <- c(
'1' = 'CD4 T cells',
'2' = 'CD8 T cells',
'3' = 'B cells',
'4' = 'Monocytes',
'5' = 'NK cells'
# ... continue for all clusters
)
# Index by character, not by factor code, so names match cluster IDs
sce$cell_type <- cluster_annotations[as.character(cluster_ids(sce, 'meta20'))]
# === 7. DIFFERENTIAL ANALYSIS ===
cat('Running differential analysis...\n')
# Create design matrix
design <- createDesignMatrix(ei(sce), cols_design = 'condition')
# Contrast
contrast <- createContrast(c(0, 1)) # Treatment vs Control
# Differential Abundance (DA) via the diffcyt() wrapper with the edgeR method
res_DA <- diffcyt(sce, design = design, contrast = contrast,
                  analysis_type = 'DA', method_DA = 'diffcyt-DA-edgeR',
                  clustering_to_use = 'meta20', verbose = FALSE)
da_results <- as.data.frame(rowData(res_DA$res))
da_results <- da_results[order(da_results$p_adj), ]
cat('\nDifferential Abundance Results:\n')
print(da_results[, c('cluster_id', 'logFC', 'p_val', 'p_adj')])
# Differential State (DS) - state-marker expression within clusters
# (markers flagged 'state' in the panel are tested by default)
res_DS <- diffcyt(sce, design = design, contrast = contrast,
                  analysis_type = 'DS', method_DS = 'diffcyt-DS-limma',
                  clustering_to_use = 'meta20', verbose = FALSE)
ds_results <- as.data.frame(rowData(res_DS$res))
cat('\nDifferential State Results:\n')
sig_ds <- ds_results[ds_results$p_adj < 0.05, ]
print(sig_ds[, c('cluster_id', 'marker_id', 'logFC', 'p_adj')])
# === 8. VISUALIZATION ===
# DA heatmap (ComplexHeatmap object: save via a graphics device, not ggsave)
png('da_heatmap.png', width = 1000, height = 800)
print(plotDiffHeatmap(sce, res_DA, all = TRUE, fdr = 0.05))
dev.off()
# Abundance boxplots
plotAbundances(sce, k = 'meta20', by = 'cluster_id', group_by = 'condition')
ggsave('abundance_boxplots.png', width = 12, height = 8)
# Volcano plot
da_results$significant <- da_results$p_adj < 0.05
ggplot(da_results, aes(x = logFC, y = -log10(p_adj), color = significant)) +
geom_point(size = 3) +
geom_hline(yintercept = -log10(0.05), linetype = 'dashed') +
scale_color_manual(values = c('gray', 'red')) +
theme_bw() +
labs(title = 'Differential Abundance')
ggsave('da_volcano.png', width = 8, height = 6)
# === 9. EXPORT ===
write.csv(da_results, 'da_results.csv', row.names = FALSE)
write.csv(ds_results, 'ds_results.csv', row.names = FALSE)
saveRDS(sce, 'cytometry_analysis.rds')
cat('\nAnalysis complete!\n')
cat('Significant DA clusters:', sum(da_results$p_adj < 0.05), '\n')
flowCore + Manual Gating Workflow
library(flowCore)
library(flowWorkspace)
library(openCyto)   # provides gs_add_gating_method()
library(ggcyto)
# Load data
fs <- read.flowSet(list.files('data/', pattern = '\\.fcs$', full.names = TRUE))
# Compensation
comp_matrix <- spillover(fs[[1]])[[1]]
fs_comp <- compensate(fs, comp_matrix)
# Transformation
trans <- estimateLogicle(fs_comp[[1]], colnames(comp_matrix))
fs_trans <- transform(fs_comp, trans)
# Create GatingSet
gs <- GatingSet(fs_trans)
# Apply gates
gs_add_gating_method(gs, alias = 'live',
                     pop = '+', parent = 'root',
                     dims = 'FSC-A,SSC-A',
                     gating_method = 'gate_flowclust_2d',
                     gating_args = 'K = 2, target = c(5e4, 2.5e4)')  # gating_args is a string, not a list
gs_add_gating_method(gs, alias = 'singlets',
pop = '+', parent = 'live',
dims = 'FSC-A,FSC-H',
gating_method = 'singletGate')
# Visualize gates
autoplot(gs[[1]], 'singlets')
# Extract gated data
gated_data <- gs_pop_get_data(gs, 'singlets')
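For quick checks outside a GatingSet, the idea of a rectangular gate can be approximated directly on an event table. This is a hedged sketch, not a flowWorkspace API; `rectangle_gate` and the toy values are hypothetical:

```python
import pandas as pd

def rectangle_gate(df, x, y, x_range, y_range):
    """Boolean mask for events inside a rectangular gate on two channels."""
    return df[x].between(*x_range) & df[y].between(*y_range)

# Toy events: only rows 2 and 3 fall inside the gate
events = pd.DataFrame({'FSC-A': [10_000, 60_000, 45_000],
                       'SSC-A': [5_000, 30_000, 20_000]})
mask = rectangle_gate(events, 'FSC-A', 'SSC-A',
                      x_range=(30_000, 80_000), y_range=(10_000, 40_000))
gated = events[mask]
```

Real gates (ellipse, flowClust mixtures, singlet gates on FSC-A vs FSC-H) are more involved; this only illustrates the masking pattern.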
Python Alternative (FlowKit)
import flowkit as fk
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
# Load an FCS file
sample = fk.Sample('sample.fcs')
# Compensation: apply the spillover matrix embedded in the FCS metadata, if present
# (apply_compensation() accepts the CSV string stored under the $SPILL keyword)
if 'spill' in sample.metadata:
    sample.apply_compensation(sample.metadata['spill'])
    events = sample.as_dataframe(source='comp')
else:
    events = sample.as_dataframe(source='raw')
# Arcsinh transformation (apply to fluorescence channels only;
# leave scatter parameters such as FSC/SSC untransformed)
cofactor = 150  # ~150 for fluorescence flow cytometry, ~5 for CyTOF
data_trans = np.arcsinh(events / cofactor)
# Clustering
scaler = StandardScaler()
data_scaled = scaler.fit_transform(data_trans)
kmeans = KMeans(n_clusters=10, random_state=42, n_init=10)
clusters = kmeans.fit_predict(data_scaled)
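To carry the Python route through to differential abundance, the per-cell cluster labels must first be collapsed into a sample-by-cluster count table (the input DA tests expect). A minimal pandas sketch with toy labels; in practice `sample_id` comes from each event's file of origin and `cluster` from the KMeans fit above:

```python
import pandas as pd

# Toy per-cell table (hypothetical values)
cells = pd.DataFrame({
    'sample_id': ['s1', 's1', 's1', 's2', 's2'],
    'cluster':   [0, 1, 1, 0, 0],
})
# Rows: samples; columns: clusters; values: cell counts
counts = cells.groupby(['sample_id', 'cluster']).size().unstack(fill_value=0)
print(counts)
```

Count tables like this are what edgeR-style abundance models (or a simple proportion test) consume downstream.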
QC Checkpoints
| Stage | Check | Action if Failed |
|---|---|---|
| Loading | All FCS files read | Check file integrity |
| Compensation | Spillover values reasonable | Recalculate |
| Transformation | Distributions normalized | Adjust cofactor |
| Events | >10K cells per sample | Check acquisition |
| Clustering | 10-30 populations | Adjust K/resolution |
| DA | >3 replicates per group | Need more samples |
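Checkpoints like the events row above are easy to automate. A sketch with the 10K threshold from the table; the helper name is illustrative:

```python
def flag_low_event_samples(event_counts, min_events=10_000):
    """Return sample IDs failing the minimum-events QC checkpoint."""
    return sorted(s for s, n in event_counts.items() if n < min_events)

# Hypothetical per-sample event counts
print(flag_low_event_samples({'Sample1': 52_000, 'Sample2': 8_400, 'Sample3': 31_000}))
```

Flagged samples should be checked for acquisition problems (clogs, short runs) before being dropped or down-weighted.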
Workflow Variants
CyTOF Data
# CyTOF-specific settings
sce <- prepData(fs, panel, md,
transform = TRUE,
cofactor = 5, # CyTOF uses cofactor 5
FACS = FALSE) # Not flow cytometry
# Bead normalization should be done upstream (Fluidigm software)
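The cofactor choice can be sanity-checked numerically: arcsinh is near-linear below the cofactor and logarithmic above it, which is why CyTOF (low, discrete ion counts) uses 5 while fluorescence flow (high, spread-out intensities) uses around 150. A small numpy sketch with illustrative intensities:

```python
import numpy as np

x = np.array([0.0, 10.0, 10_000.0])
cytof = np.arcsinh(x / 5)    # cofactor 5: compression starts around intensity 5
flow  = np.arcsinh(x / 150)  # cofactor 150: low signal stays near-linear
print(cytof.round(2), flow.round(2))
```

With cofactor 150, an intensity of 10 maps to roughly 10/150 (the linear regime), while the same value under cofactor 5 is already well into the compressed regime.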
Paired Design
# For paired samples (e.g., pre/post treatment)
# Option 1: patient as a fixed blocking factor (edgeR design matrix)
design <- createDesignMatrix(ei(sce), cols_design = c('condition', 'patient_id'))
# Option 2: patient as a random effect via a model formula (voom)
formula <- createFormula(ei(sce), cols_fixed = 'condition', cols_random = 'patient_id')
res_DA <- diffcyt(sce, formula = formula, contrast = contrast,
                  analysis_type = 'DA', method_DA = 'diffcyt-DA-voom',
                  clustering_to_use = 'meta20')
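The paired structure matters because baseline abundances differ between patients. The blocking idea can be illustrated with a hand-rolled paired t-statistic on one cluster's proportions (toy numbers, numpy only; diffcyt's moderated models are the real method here):

```python
import numpy as np

# Toy proportions of one cluster, pre vs post treatment, per patient
pre  = np.array([0.12, 0.08, 0.15, 0.10])
post = np.array([0.18, 0.11, 0.21, 0.16])

d = post - pre                                       # within-patient differences
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))     # paired t-statistic
print(round(t, 2))  # prints 7.0
```

Differencing within patients removes the between-patient baseline variation, which is exactly what the random-effect term in the formula above achieves in the model-based setting.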
Related Skills
- flow-cytometry/fcs-handling - FCS file operations
- flow-cytometry/compensation-transformation - Data preprocessing
- flow-cytometry/gating-analysis - Manual gating
- flow-cytometry/clustering-phenotyping - Unsupervised clustering
- flow-cytometry/differential-analysis - Statistical testing
- flow-cytometry/doublet-detection - Remove doublet events