
artefact-mcp-server
Revenue intelligence MCP server: RFM (Recency, Frequency, Monetary) analysis, 14.5-point ICP scoring, and pipeline health scoring. Embeds the Artefact Formula methodology and integrates with HubSpot to provide pipeline health insights.
What it does
- Perform RFM analysis on customer data
- Score prospects using 14.5-point ICP methodology
- Calculate pipeline health scores
- Connect to HubSpot CRM data
- Generate revenue intelligence reports
About artefact-mcp-server
artefact-mcp-server is a community-built MCP server published by alexboissAV that provides AI assistants with tools and capabilities via the Model Context Protocol. It is categorized under AI/ML and analytics/data.
How to install
You can install artefact-mcp-server in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
artefact-mcp-server's license is marked NOASSERTION, meaning no license has been declared in the repository.
Artefact Revenue Intelligence MCP Server
The AI-native interface to your Revenue Operating System. Version-controlled GTM intelligence — signals, commits, and closed-loop measurement — accessible to any AI agent.
A Model Context Protocol (MCP) server that treats your Go-to-Market strategy like code: versioned, diffable, and deployable. Detect pipeline signals, identify scaling constraints, analyze value engines, and draft structured GTM changes — all through AI-native tool calls. Built on the Artefact Formula methodology from real B2B consulting engagements.
Why Artefact MCP?
Traditional ICP models stop at firmographics. We triangulate across three dimensions to identify prospects with the right profile, the right behaviors, AND the right trajectory.
| Feature | HubSpot Official MCP | Generic Wrappers | Artefact MCP |
|---|---|---|---|
| CRUD operations | Yes | Yes | Via HubSpot API |
| RFM Analysis | No | No | 11-segment classification |
| ICP Triangulation | No | No | Firmographic + Behavioral + Growth Signals |
| Pipeline Health | No | No | 0-100 health score + exit criteria testing |
| Signal Detection | No | No | 6-type signal taxonomy |
| Constraint Analysis | No | No | Dominant bottleneck + Revenue Formula |
| Value Engine Analysis | No | No | Growth / Fulfillment / Innovation |
| GTM Commit Drafting | No | No | Structured change proposals with evidence |
| Methodology built-in | No | No | Artefact Formula (10 resources) |
| Works without API key | No | No | Yes (demo data) |
Who Is This For?
- B2B revenue teams using HubSpot who want AI-powered signal detection and pipeline intelligence
- RevOps managers who need constraint analysis and value engine health accessible from Claude or Cursor
- Consultants who deliver RFM analysis, ICP scoring, and evidence-backed GTM recommendations to clients
- Developers building revenue intelligence integrations with MCP
- AI agents that need a structured interface to reason about and propose changes to GTM strategy
Tools
Signal Intelligence
detect_signals — Pipeline Signal Detection
Scans pipeline data for all 6 signal types from the Artefact signal taxonomy: velocity anomalies, conversion drop-offs, win/loss patterns, pipeline concentration, data quality issues, and SPICED frequency signals. Returns structured signal objects with strength scores (0-1), evidence, and recommended actions.
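A returned signal object might look like the following sketch; the field names are assumptions inferred from the description above, not the server's exact schema.

```python
# Hypothetical signal object; field names assumed from the description
# above, not taken from the server's actual output format.
signal = {
    "type": "conversion_dropoff",          # one of the 6 taxonomy types
    "strength": 0.82,                      # 0-1 strength score
    "evidence": {
        "stage": "Demo -> Proposal",
        "observed_rate": 0.18,
        "baseline_rate": 0.35,
    },
    "recommended_action": "Review exit criteria for the Demo stage",
}
```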
identify_constraint — Dominant Constraint Analysis
Identifies which of the 4 scaling constraints (Lead Generation, Conversion, Delivery, Profitability) is bottlenecking revenue. Includes Revenue Formula breakdown (Traffic x CR1 x CR2 x CR3 x ACV) with gap-to-benchmark analysis and recommended focus.
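To make the formula concrete, here is the arithmetic with illustrative numbers (the traffic, conversion rates, ACV, and benchmark below are made up for the example):

```python
# Illustrative Revenue Formula breakdown: Revenue = Traffic x CR1 x CR2 x CR3 x ACV.
# All figures are made-up example inputs, not benchmarks from the server.
traffic = 10_000                      # leads at the top of the funnel
cr1, cr2, cr3 = 0.05, 0.30, 0.25     # stage-to-stage conversion rates
acv = 20_000                          # average contract value

revenue = traffic * cr1 * cr2 * cr3 * acv
print(f"Projected revenue: ${revenue:,.0f}")   # $750,000

# Gap-to-benchmark: if the CR2 benchmark were 0.40, the constraint analysis
# would flag Conversion as the dominant bottleneck.
benchmark_cr2 = 0.40
upside = traffic * cr1 * benchmark_cr2 * cr3 * acv - revenue
print(f"Upside from closing the CR2 gap: ${upside:,.0f}")   # $250,000
```

Because the formula is multiplicative, the dominant constraint is the factor with the largest relative gap to its benchmark.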
analyze_engine — Value Engine Health
Analyzes health of the 3 value engines: Growth (create/capture/convert demand), Fulfillment (onboard/deliver/renew/expand), and Innovation (gather/prioritize/build/launch). Returns engine-specific metrics, health scores, and integrated signal detection.
propose_gtm_change — GTM Commit Drafting
Enables AI agents to propose structured GTM changes following the commit anatomy: Intent, Diff, Impact Surface, Risk Level, Evidence, and Measurement Plan. Supports 8 entity types (ICP, persona, positioning, pipeline stage, exit criteria, GTM motion, scoring model, playbook).
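A structured commit following that anatomy might be modeled as below; the field names mirror the documented components, but this is a sketch, not the server's actual schema.

```python
# Hypothetical sketch of the GTM commit anatomy described above.
from dataclasses import dataclass

@dataclass
class GTMCommit:
    intent: str                  # why the change is proposed
    diff: dict                   # before/after of the changed entity
    impact_surface: list         # entities/teams affected
    risk_level: str              # e.g. "low" / "medium" / "high"
    evidence: dict               # supporting signal data
    measurement_plan: str        # how success will be verified

commit = GTMCommit(
    intent="Narrow ICP to SaaS companies with 50-200 employees",
    diff={"icp.employee_range": {"before": "10-500", "after": "50-200"}},
    impact_surface=["icp", "scoring model"],
    risk_level="medium",
    evidence={"win_rate_saas": 0.45, "win_rate_other": 0.22},
    measurement_plan="Compare SQL-to-win rate over the next two quarters",
)
```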
Analysis Tools
run_rfm — RFM Analysis
Scores clients on Recency, Frequency, and Monetary value. Segments them into 11 categories (Champions through Lost) and extracts ICP patterns from top performers. Now includes signal framing — detects win/loss patterns, revenue concentration, and at-risk client signals. Supports B2B service, SaaS, and manufacturing presets.
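As a rough illustration of rank-based RFM scoring (a sketch only; the server's 11-segment model, quintile boundaries, and presets may differ, and the client data is made up):

```python
# Rank each client 1-5 per dimension (5 = best), then combine into R/F/M.
def score_dimension(values, higher_is_better=True):
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=higher_is_better)
    scores = [0] * len(values)
    for rank, i in enumerate(order):
        scores[i] = 5 - (rank * 5) // len(values)
    return scores

clients = [
    {"name": "Acme",    "days": 12,  "orders": 9, "revenue": 120_000},
    {"name": "Globex",  "days": 400, "orders": 1, "revenue": 8_000},
    {"name": "Initech", "days": 45,  "orders": 4, "revenue": 40_000},
]
r = score_dimension([c["days"] for c in clients], higher_is_better=False)
f = score_dimension([c["orders"] for c in clients])
m = score_dimension([c["revenue"] for c in clients])
for c, ri, fi, mi in zip(clients, r, f, m):
    segment = "Champion" if min(ri, fi, mi) >= 4 else "Other"
    print(c["name"], f"R{ri}F{fi}M{mi}", segment)  # Acme -> R5F5M5 Champion
```

Note that recency is scored in reverse: fewer days since the last purchase is better.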
qualify — ICP Triangulation Framework
Scores prospects across three dimensions: Firmographic Fit (industry, revenue, employees, geography), Behavioral Fit (tech stack, engagement, purchase history), and Growth Signals (hiring, funding, expansion). Now includes constraint context — maps prospect fit to your dominant scaling constraint. Returns tier classification (Ideal / Strong / Moderate / Poor) with engagement strategy.
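A weighted-sum sketch of the triangulation follows; the weights and tier cut-offs are illustrative assumptions, not the server's calibrated 14.5-point model.

```python
# Combine three 0-100 sub-scores into one fit score and a tier label.
# Weights and cut-offs below are assumptions for illustration only.
WEIGHTS = {"firmographic": 0.4, "behavioral": 0.3, "growth": 0.3}
TIERS = [(80, "Ideal"), (60, "Strong"), (40, "Moderate"), (0, "Poor")]

def triangulate(scores):
    """scores: dict mapping dimension -> 0-100 sub-score."""
    total = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    tier = next(name for cutoff, name in TIERS if total >= cutoff)
    return round(total, 1), tier

print(triangulate({"firmographic": 90, "behavioral": 70, "growth": 85}))
# -> (82.5, 'Ideal')
```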
score_pipeline_health — Pipeline Health Score
Analyzes open deals for velocity metrics, stage-to-stage conversion rates, bottleneck identification, and at-risk deal detection. Now supports optional exit criteria testing (pass/fail per criterion per deal) and includes signal framing for velocity anomalies and conversion drop-offs. Returns a 0-100 health score.
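One way such a 0-100 composite could be built is sketched below; the sub-metrics and weighting are assumptions for illustration, not the server's scoring model.

```python
# Hypothetical 0-100 pipeline health composite from three normalized factors.
def pipeline_health(avg_velocity_days, target_days,
                    conversion, target_conversion,
                    at_risk_deals, total_deals):
    velocity = min(1.0, target_days / avg_velocity_days)   # faster is better
    conv = min(1.0, conversion / target_conversion)
    risk = 1.0 - at_risk_deals / total_deals
    return round(100 * (0.4 * velocity + 0.4 * conv + 0.2 * risk))

print(pipeline_health(45, 30, 0.22, 0.30, 3, 20))  # -> 73
```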
Resources
| URI | Description |
|---|---|
| methodology://scoring-model | ICP Triangulation Framework technical reference |
| methodology://tier-definitions | 4-tier classification system |
| methodology://rfm-segments | 11 RFM segment definitions with scoring scales |
| methodology://spiced-framework | SPICED discovery framework |
| methodology://data-requirements | HubSpot data setup and enrichment requirements |
| methodology://value-engines | 3 value engine definitions (Growth, Fulfillment, Innovation) with stages and metrics |
| methodology://exit-criteria | Standard pipeline exit criteria per stage with proof requirements |
| methodology://constraints | 4 scaling constraints with diagnostic criteria and remediation levers |
| methodology://signal-taxonomy | 6 signal types with detection methods and action mappings |
| methodology://revenue-formula | Revenue Formula breakdown: Traffic x CR1 x CR2 x CR3 x ACV x (1/Churn) |
| methodology://gtm-commit-anatomy | 5 components of a structured GTM commit (intent, diff, impact, risk, evidence) |
Data Requirements for ICP Triangulation
⚠️ Important: The qualify tool requires specific data across all three dimensions:
✅ Native HubSpot data (Firmographic + Partial Behavioral):
- Firmographic Fit: Industry, revenue, employees, geography — standard properties
- Behavioral Fit (Partial): Tech stack, content engagement, purchase history — custom properties or workflows
⚠️ Requires external enrichment (Clay, Clearbit, or manual research):
- Growth Signals (the critical third dimension): Hiring trends, funding rounds, product launches, expansion signals, press mentions
- HubSpot does NOT track growth signals natively
- Without growth signals: You lose the third dimension of triangulation — prospect momentum and buying power indicators
See full guide: Ask your AI assistant to read methodology://data-requirements for complete setup instructions and Clay integration workflow.
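For illustration, enriched growth signals might land in HubSpot as custom properties like these (the property names and values below are hypothetical, not a schema the server requires):

```json
{
  "growth_hiring_trend": "expanding (12 open roles)",
  "growth_funding_round": "Series B, 2024-06",
  "growth_expansion_signal": "opened EU office",
  "growth_press_mentions": 4
}
```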
Quick Start
Install via PyPI
```bash
pip install artefact-mcp
```
Install via Smithery
```bash
npx @smithery/cli install artefact-revenue-intelligence
```
Claude Code
```bash
claude mcp add artefact-revenue -- uvx artefact-mcp
```
Then ask:
- "What signals are you detecting in my pipeline?"
- "What's our dominant scaling constraint?"
- "Analyze the health of our Growth Engine"
- "Propose a GTM change: narrow ICP to SaaS companies with 50-200 employees"
- "Run an RFM analysis on our HubSpot data"
- "Qualify this prospect: SaaS company, $5M revenue, 80 employees in Ontario"
- "Score our pipeline health with exit criteria testing"
Claude Desktop
Add to claude_desktop_config.json:
Recommended (Python method):
```json
{
  "mcpServers": {
    "artefact-revenue": {
      "command": "python3",
      "args": ["-m", "artefact_mcp"],
      "env": {
        "HUBSPOT_API_KEY": "pat-na1-xxxxxxxx"
      }
    }
  }
}
```
Alternative (uvx method):
```json
{
  "mcpServers": {
    "artefact-revenue": {
      "command": "uvx",
      "args": ["artefact-mcp"],
      "env": {
        "HUBSPOT_API_KEY": "pat-na1-xxxxxxxx"
      }
    }
  }
}
```
Note: If using uvx and seeing "Server disconnected" errors, see the Troubleshooting section below.
Cursor
Add to .cursor/mcp.json:
Recommended (Python method):
```json
{
  "mcpServers": {
    "artefact-revenue": {
      "command": "python3",
      "args": ["-m", "artefact_mcp"],
      "env": {
        "HUBSPOT_API_KEY": "pat-na1-xxxxxxxx"
      }
    }
  }
}
```
Alternative (uvx method):
```json
{
  "mcpServers": {
    "artefact-revenue": {
      "command": "uvx",
      "args": ["artefact-mcp"],
      "env": {
        "HUBSPOT_API_KEY": "pat-na1-xxxxxxxx"
      }
    }
  }
}
```
Programmatic (Python)
```python
from artefact_mcp.tools.signals import detect_signals
from artefact_mcp.tools.constraints import identify_dominant_constraint
from artefact_mcp.tools.engines import analyze_engine
from artefact_mcp.tools.gtm_commits import propose_gtm_change
from artefact_mcp.tools.rfm import run_rfm_analysis
from artefact_mcp.tools.icp import qualify_prospect
from artefact_mcp.tools.pipeline import score_pipeline

# Signal detection (no HubSpot key needed)
signals = detect_signals(source="sample")

# Dominant constraint analysis
constraint = identify_dominant_constraint(source="sample", quota=500000)

# Value engine health
engine = analyze_engine(engine_type="growth", source="sample")

# GTM commit drafting
commit = propose_gtm_change(
    entity_type="icp",
    change_description="Narrow ICP to SaaS companies with 50-200 employees",
    signal_type="win_loss_pattern",
    signal_data={"win_rate_saas": 0.45, "win_rate_other": 0.22},
)

# RFM with sample data
results = run_rfm_analysis(source="sample", industry_preset="b2b_service")

# ICP qualification (arguments truncated in the source README)
score = qualify_prospect(...)
```
---
*README truncated. [View full README on GitHub](https://github.com/alexboissAV/artefact-mcp-server).*