shap
Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.
Install
mkdir -p .claude/skills/shap && curl -L -o skill.zip "https://mcp.directory/api/skills/download/2284" && unzip -o skill.zip -d .claude/skills/shap && rm skill.zip
Installs to .claude/skills/shap
About this skill
SHAP (SHapley Additive exPlanations)
Overview
SHAP is a unified approach to explain machine learning model outputs using Shapley values from cooperative game theory. This skill provides comprehensive guidance for:
- Computing SHAP values for any model type
- Creating visualizations to understand feature importance
- Debugging and validating model behavior
- Analyzing fairness and bias
- Implementing explainable AI in production
SHAP works with all model types: tree-based models (XGBoost, LightGBM, CatBoost, Random Forest), deep learning models (TensorFlow, PyTorch, Keras), linear models, and black-box models.
When to Use This Skill
Trigger this skill when users ask about:
- "Explain which features are most important in my model"
- "Generate SHAP plots" (waterfall, beeswarm, bar, scatter, force, heatmap, etc.)
- "Why did my model make this prediction?"
- "Calculate SHAP values for my model"
- "Visualize feature importance using SHAP"
- "Debug my model's behavior" or "validate my model"
- "Check my model for bias" or "analyze fairness"
- "Compare feature importance across models"
- "Implement explainable AI" or "add explanations to my model"
- "Understand feature interactions"
- "Create model interpretation dashboard"
Quick Start Guide
Step 1: Select the Right Explainer
Decision Tree:
- Tree-based model? (XGBoost, LightGBM, CatBoost, Random Forest, Gradient Boosting)
  - Use shap.TreeExplainer (fast, exact)
- Deep neural network? (TensorFlow, PyTorch, Keras, CNNs, RNNs, Transformers)
  - Use shap.DeepExplainer or shap.GradientExplainer
- Linear model? (Linear/Logistic Regression, GLMs)
  - Use shap.LinearExplainer (extremely fast)
- Any other model? (SVMs, custom functions, black-box models)
  - Use shap.KernelExplainer (model-agnostic but slower)
- Unsure?
  - Use shap.Explainer (automatically selects the best algorithm)
See references/explainers.md for detailed information on all explainer types.
Step 2: Compute SHAP Values
import shap
# Example with tree-based model (XGBoost)
import xgboost as xgb
# Train model
model = xgb.XGBClassifier().fit(X_train, y_train)
# Create explainer
explainer = shap.TreeExplainer(model)
# Compute SHAP values
shap_values = explainer(X_test)
# The shap_values object contains:
# - values: SHAP values (feature attributions)
# - base_values: Expected model output (baseline)
# - data: Original feature values
Step 3: Visualize Results
For Global Understanding (entire dataset):
# Beeswarm plot - shows feature importance with value distributions
shap.plots.beeswarm(shap_values, max_display=15)
# Bar plot - clean summary of feature importance
shap.plots.bar(shap_values)
For Individual Predictions:
# Waterfall plot - detailed breakdown of single prediction
shap.plots.waterfall(shap_values[0])
# Force plot - additive force visualization
shap.plots.force(shap_values[0])
For Feature Relationships:
# Scatter plot - feature-prediction relationship
shap.plots.scatter(shap_values[:, "Feature_Name"])
# Colored by another feature to show interactions
shap.plots.scatter(shap_values[:, "Age"], color=shap_values[:, "Education"])
See references/plots.md for comprehensive guide on all plot types.
Core Workflows
This skill supports several common workflows. Choose the workflow that matches the current task.
Workflow 1: Basic Model Explanation
Goal: Understand what drives model predictions
Steps:
- Train model and create appropriate explainer
- Compute SHAP values for test set
- Generate global importance plots (beeswarm or bar)
- Examine top feature relationships (scatter plots)
- Explain specific predictions (waterfall plots)
Example:
# Step 1-2: Setup
explainer = shap.TreeExplainer(model)
shap_values = explainer(X_test)
# Step 3: Global importance
shap.plots.beeswarm(shap_values)
# Step 4: Feature relationships
shap.plots.scatter(shap_values[:, "Most_Important_Feature"])
# Step 5: Individual explanation
shap.plots.waterfall(shap_values[0])
Workflow 2: Model Debugging
Goal: Identify and fix model issues
Steps:
- Compute SHAP values
- Identify prediction errors
- Explain misclassified samples
- Check for unexpected feature importance (data leakage)
- Validate feature relationships make sense
- Check feature interactions
See references/workflows.md for detailed debugging workflow.
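Step 4 above (spotting suspicious importance that often signals data leakage) can be sketched with plain NumPy on the computed SHAP value array. The synthetic array and the 50% threshold here are illustrative assumptions, not library defaults:

```python
import numpy as np

# Synthetic stand-in for shap_values.values, shaped (n_samples, n_features)
rng = np.random.default_rng(0)
sv = rng.normal(size=(200, 5))
sv[:, 2] *= 20  # feature 2 dominates, as a leaked feature would
feature_names = ["f0", "f1", "f2", "f3", "f4"]

# Share of total attribution held by each feature
mean_abs = np.abs(sv).mean(axis=0)
share = mean_abs / mean_abs.sum()

# Flag features holding a suspicious share of all attribution (threshold is arbitrary)
suspects = [n for n, s in zip(feature_names, share) if s > 0.5]
print(suspects)  # a single dominating feature often signals leakage
```

In a real debugging session, `sv` would be `shap_values.values` from the explainer, and any flagged feature warrants checking how it was constructed.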
Workflow 3: Feature Engineering
Goal: Use SHAP insights to improve features
Steps:
- Compute SHAP values for baseline model
- Identify nonlinear relationships (candidates for transformation)
- Identify feature interactions (candidates for interaction terms)
- Engineer new features
- Retrain and compare SHAP values
- Validate improvements
See references/workflows.md for detailed feature engineering workflow.
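Step 5 (retrain and compare) can be sketched as a per-feature comparison of mean |SHAP| before and after engineering. The arrays and feature names below are illustrative stand-ins for the `.values` attribute of each model's SHAP output:

```python
import numpy as np

# Stand-ins for shap_values.values from the baseline and retrained models
baseline_sv = np.array([[0.2, -0.4, 0.1],
                        [0.3, -0.5, 0.0]])
engineered_sv = np.array([[0.1, -0.6, 0.2],
                          [0.2, -0.7, 0.1]])
features = ["age", "income", "age_x_income"]

before = np.abs(baseline_sv).mean(axis=0)
after = np.abs(engineered_sv).mean(axis=0)

# Positive delta: the feature carries more of the model's signal after retraining
for name, b, a in zip(features, before, after):
    print(f"{name}: {b:.2f} -> {a:.2f} (delta {a - b:+.2f})")
```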
Workflow 4: Model Comparison
Goal: Compare multiple models to select best interpretable option
Steps:
- Train multiple models
- Compute SHAP values for each
- Compare global feature importance
- Check consistency of feature rankings
- Analyze specific predictions across models
- Select based on accuracy, interpretability, and consistency
See references/workflows.md for detailed model comparison workflow.
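Step 4 (checking consistency of feature rankings) can be sketched as a Spearman rank correlation between each model's mean |SHAP| importance vector. The importance numbers below are illustrative, not real model output:

```python
import numpy as np

# Mean |SHAP| per feature for two models (illustrative numbers)
importance = {
    "xgboost":       np.array([0.40, 0.25, 0.10, 0.05]),
    "random_forest": np.array([0.30, 0.35, 0.08, 0.07]),
}

# Rank features per model (0 = most important)
ranks = {m: (-v).argsort().argsort() for m, v in importance.items()}

# Spearman rank correlation between the two rankings (no ties here)
r1, r2 = ranks["xgboost"], ranks["random_forest"]
n = len(r1)
spearman = 1 - 6 * ((r1 - r2) ** 2).sum() / (n * (n**2 - 1))
print(spearman)  # 1.0 would mean identical rankings
```

A low correlation suggests the models rely on different signals, which is itself worth investigating before selecting one.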
Workflow 5: Fairness and Bias Analysis
Goal: Detect and analyze model bias across demographic groups
Steps:
- Identify protected attributes (gender, race, age, etc.)
- Compute SHAP values
- Compare feature importance across groups
- Check protected attribute SHAP importance
- Identify proxy features
- Implement mitigation strategies if bias found
See references/workflows.md for detailed fairness analysis workflow.
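Step 3 (comparing feature importance across groups) can be sketched by splitting the SHAP value array on the protected attribute and comparing per-feature group means. The tiny arrays here are illustrative stand-ins:

```python
import numpy as np

# Stand-in for shap_values.values plus a protected attribute per row
sv = np.array([[ 0.5, -0.2],
               [ 0.4, -0.1],
               [-0.6, -0.2],
               [-0.5, -0.3]])
group = np.array(["A", "A", "B", "B"])

# Mean SHAP per feature within each group; large gaps warrant investigation
mean_a = sv[group == "A"].mean(axis=0)
mean_b = sv[group == "B"].mean(axis=0)
gap = np.abs(mean_a - mean_b)
print(gap)  # feature 0 shows the largest group gap here
```

A large gap on a feature that correlates with the protected attribute is a candidate proxy feature (step 5).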
Workflow 6: Production Deployment
Goal: Integrate SHAP explanations into production systems
Steps:
- Train and save model
- Create and save explainer
- Build explanation service
- Create API endpoints for predictions with explanations
- Implement caching and optimization
- Monitor explanation quality
See references/workflows.md for detailed production deployment workflow.
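Steps 3-4 (an explanation service returning predictions with attributions) can be sketched as below. `ExplainerStub` stands in for a real fitted SHAP explainer, and every name and number here is an illustrative assumption:

```python
# ExplainerStub mimics an explainer returning per-feature attributions per row
class ExplainerStub:
    base_value = 0.30
    def __call__(self, rows):
        # a real explainer would compute attributions from the model
        return [[0.15, 0.10, -0.05] for _ in rows]

# In production, build the explainer once and persist it (e.g. with joblib),
# then load it at service start-up rather than rebuilding per request.
explainer = ExplainerStub()

def explain(rows, feature_names):
    """Return baseline, per-feature attributions, and prediction per row."""
    out = []
    for attributions in explainer(rows):
        out.append({
            "base_value": explainer.base_value,
            "attributions": dict(zip(feature_names, attributions)),
            "prediction": explainer.base_value + sum(attributions),
        })
    return out

resp = explain([[35, 60000, 16]], ["age", "income", "education"])
print(resp[0]["attributions"])
```

An API endpoint would serialize each dict in `resp` as its JSON response body.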
Key Concepts
SHAP Values
Definition: SHAP values quantify each feature's contribution to a prediction, measured as the deviation from the expected model output (baseline).
Properties:
- Additivity: SHAP values sum to difference between prediction and baseline
- Fairness: Based on Shapley values from game theory
- Consistency: If a model changes so that a feature contributes more, that feature's SHAP value does not decrease
Interpretation:
- Positive SHAP value → Feature pushes prediction higher
- Negative SHAP value → Feature pushes prediction lower
- Magnitude → Strength of feature's impact
- Sum of SHAP values → Total prediction change from baseline
Example:
Baseline (expected value): 0.30
Feature contributions (SHAP values):
Age: +0.15
Income: +0.10
Education: -0.05
Final prediction: 0.30 + 0.15 + 0.10 - 0.05 = 0.50
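The worked example above can be checked numerically; by the additivity property, the baseline plus the SHAP values must reproduce the prediction exactly:

```python
import numpy as np

# Baseline and per-feature SHAP values from the example (Age, Income, Education)
base_value = 0.30
shap_vals = np.array([0.15, 0.10, -0.05])

prediction = base_value + shap_vals.sum()
print(prediction)

# Additivity: SHAP values account exactly for prediction - baseline
assert abs((prediction - base_value) - shap_vals.sum()) < 1e-12
```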
Background Data / Baseline
Purpose: Represents "typical" input to establish baseline expectations
Selection:
- Random sample from training data (50-1000 samples)
- Or use kmeans to select representative samples
- For DeepExplainer/KernelExplainer: 100-1000 samples balances accuracy and speed
Impact: The background data sets the baseline, so it shifts SHAP value magnitudes and can shift individual attributions; relative feature importance is usually stable, but record which background was used
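Selecting a background sample can be sketched as below. `X_train` here is a synthetic stand-in for your real training matrix; `shap.kmeans(X_train, k)` is the library's alternative for picking representative rows:

```python
import numpy as np

# Draw a random background sample from the training data
rng = np.random.default_rng(42)
X_train = rng.normal(size=(5000, 8))  # synthetic stand-in

n_background = 100  # within the 50-1000 range suggested above
idx = rng.choice(len(X_train), size=n_background, replace=False)
background = X_train[idx]
print(background.shape)  # this array is then passed to the explainer
```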
Model Output Types
Critical Consideration: Understand what your model outputs
- Raw output: For regression or tree margins
- Probability: For classification probability
- Log-odds: For logistic regression (before sigmoid)
Example: XGBoost classifiers explain margin output (log-odds) by default. To explain probabilities, use model_output="probability" in TreeExplainer.
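The distinction matters because additivity holds in the space being explained: margin-space SHAP values sum to a log-odds prediction, which only becomes a probability after the sigmoid. The numbers below are illustrative, not real model output:

```python
import numpy as np

# Margin-space (log-odds) baseline and attributions
base_log_odds = -1.0
shap_log_odds = np.array([0.8, 0.5, -0.3])

margin = base_log_odds + shap_log_odds.sum()     # SHAP values sum in this space
probability = 1.0 / (1.0 + np.exp(-margin))      # sigmoid maps to a probability
print(margin, probability)
```

Summing margin-space SHAP values and expecting them to match a probability difference is a common mistake this conversion avoids.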
Common Patterns
Pattern 1: Complete Model Analysis
# 1. Setup
explainer = shap.TreeExplainer(model)
shap_values = explainer(X_test)
# 2. Global importance
shap.plots.beeswarm(shap_values)
shap.plots.bar(shap_values)
# 3. Top feature relationships
import numpy as np
top_features = X_test.columns[np.abs(shap_values.values).mean(0).argsort()[-5:]]
for feature in top_features:
shap.plots.scatter(shap_values[:, feature])
# 4. Example predictions
for i in range(5):
shap.plots.waterfall(shap_values[i])
Pattern 2: Cohort Comparison
# Define cohorts
cohort1_mask = X_test['Group'] == 'A'
cohort2_mask = X_test['Group'] == 'B'
# Compare feature importance
shap.plots.bar({
"Group A": shap_values[cohort1_mask],
"Group B": shap_values[cohort2_mask]
})
Pattern 3: Debugging Errors
# Find errors
import numpy as np
errors = model.predict(X_test) != y_test
error_indices = np.where(errors)[0]
# Explain errors
for idx in error_indices[:5]:
print(f"Sample {idx}:")
shap.plots.waterfall(shap_values[idx])
# Investigate key features
shap.plots.scatter(shap_values[:, "Suspicious_Feature"])
Performance Optimization
Speed Considerations
Explainer Speed (fastest to slowest):
- LinearExplainer: nearly instantaneous
- TreeExplainer: very fast
- DeepExplainer: fast for neural networks
- GradientExplainer: fast for neural networks
- KernelExplainer: slow (use only when necessary)
- PermutationExplainer: very slow but accurate
Optimization Strategies
For Large Datasets:
# Compute SHAP for subset
shap_values = explainer(X_test[:1000])
# Or use batching
batch_size = 100
all_shap_values = []
for i in range(0, len(X_test), batch_size):
batch_shap = explainer(X_test[i:i+batch_size])
all_shap_values.append(batch_shap)
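The per-batch results above can be recombined into one array; the synthetic arrays here stand in for each batch's `.values` attribute:

```python
import numpy as np

# Stand-ins for each batch's shap_values.values, shaped (batch_size, n_features)
batches = [np.full((100, 4), float(i)) for i in range(3)]
combined = np.concatenate(batches, axis=0)
print(combined.shape)
# With real results: np.concatenate([b.values for b in all_shap_values])
```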
For Visualizations:
# Sample subset for plots
shap.plots.beeswarm(shap_values[:1000])
# Adjust transparency for dense plots
shap.plots.scatter(shap_values[:, "Feature"], alpha=0.3)
For Production:
# Cache the explainer so it is built once, not per request
import joblib
joblib.dump(explainer, "explainer.pkl")  # path is illustrative
explainer = joblib.load("explainer.pkl")
---
*Content truncated.*