metrics-tracking


Define, track, and analyze product metrics with frameworks for goal setting and dashboard design. Use when setting up OKRs, building metrics dashboards, running weekly metrics reviews, identifying trends, or choosing the right metrics for a product area.

Install

mkdir -p .claude/skills/metrics-tracking && curl -L -o skill.zip "https://mcp.directory/api/skills/download/934" && unzip -o skill.zip -d .claude/skills/metrics-tracking && rm skill.zip

Installs to .claude/skills/metrics-tracking

About this skill

Metrics Tracking Skill

You are an expert at product metrics — defining, tracking, analyzing, and acting on them. You help product managers build metrics frameworks, set goals, run reviews, and design dashboards that drive decisions.

Product Metrics Hierarchy

North Star Metric

The single metric that best captures the core value your product delivers to users. It should be:

  • Value-aligned: Moves when users get more value from the product
  • Leading: Predicts long-term business success (revenue, retention)
  • Actionable: The product team can influence it through their work
  • Understandable: Everyone in the company can understand what it means and why it matters

Examples by product type:

  • Collaboration tool: Weekly active teams with 3+ members contributing
  • Marketplace: Weekly transactions completed
  • SaaS platform: Weekly active users completing core workflow
  • Content platform: Weekly engaged reading/viewing time
  • Developer tool: Weekly deployments using the tool

L1 Metrics (Health Indicators)

The 5-7 metrics that together paint a complete picture of product health. These map to the key stages of the user lifecycle:

Acquisition: Are new users finding the product?

  • New signups or trial starts (volume and trend)
  • Signup conversion rate (visitors to signups)
  • Channel mix (where are new users coming from)
  • Cost per acquisition (for paid channels)

Activation: Are new users reaching the value moment?

  • Activation rate: % of new users who complete the key action that predicts retention
  • Time to activate: how long from signup to activation
  • Setup completion rate: % who complete onboarding steps
  • First value moment: when users first experience the core product value

Engagement: Are active users getting value?

  • DAU / WAU / MAU: active users at different timeframes
  • DAU/MAU ratio (stickiness): what fraction of monthly users come back daily
  • Core action frequency: how often users do the thing that matters most
  • Session depth: how much users do per session
  • Feature adoption: % of users using key features

Retention: Are users coming back?

  • D1, D7, D30 retention: % of users who return after 1 day, 7 days, 30 days
  • Cohort retention curves: how retention evolves for each signup cohort
  • Churn rate: % of users or revenue lost per period
  • Resurrection rate: % of churned users who come back

Monetization: Is value translating to revenue?

  • Conversion rate: free to paid (for freemium)
  • MRR / ARR: monthly or annual recurring revenue
  • ARPU / ARPA: average revenue per user or account
  • Expansion revenue: revenue growth from existing customers
  • Net revenue retention: revenue retention including expansion and contraction
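
As an illustration of the last bullet, here is a minimal sketch of the standard net revenue retention calculation in Python. All dollar figures are made up for the example:

```python
# Hypothetical monthly figures, in dollars, for illustration only.
starting_mrr = 100_000   # MRR from the existing customer base at start of month
expansion = 8_000        # upgrades and seat additions from those customers
contraction = 3_000      # downgrades
churned = 4_000          # MRR lost to cancellations

# Net revenue retention: revenue kept from the existing base,
# including expansion and contraction, excluding new business.
nrr = (starting_mrr + expansion - contraction - churned) / starting_mrr

print(f"NRR: {nrr:.0%}")  # above 100% means the existing base grew on its own
```

A value above 100% means expansion outweighs churn and contraction, which is why NRR is often watched more closely than gross churn alone.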

Satisfaction: How do users feel about the product?

  • NPS: Net Promoter Score
  • CSAT: Customer Satisfaction Score
  • Support ticket volume and resolution time
  • App store ratings and review sentiment

L2 Metrics (Diagnostic)

Detailed metrics used to investigate changes in L1 metrics:

  • Funnel conversion at each step
  • Feature-level usage and adoption
  • Segment-specific breakdowns (by plan, company size, geography, user role)
  • Performance metrics (page load time, error rate, API latency)
  • Content-specific engagement (which features, pages, or content types drive engagement)

Common Product Metrics

DAU / WAU / MAU

What they measure: Unique users who perform a qualifying action in a day, week, or month.

Key decisions:

  • What counts as "active"? A login? A page view? A core action? Define this carefully — different definitions tell different stories.
  • Which timeframe matters most? DAU for daily-use products (messaging, email). WAU for weekly-use products (project management). MAU for less frequent products (tax software, travel booking).

How to use them:

  • DAU/MAU ratio (stickiness): values above 0.5 indicate a daily habit. Below 0.2 suggests infrequent usage.
  • Trend matters more than absolute number. Is active usage growing, flat, or declining?
  • Segment by user type. Power users and casual users behave very differently.
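
To make the stickiness calculation concrete, here is a toy sketch in Python. The event log, user IDs, and the three-day window are invented for illustration; in practice you would average DAU over the full month:

```python
from datetime import date

# Hypothetical event log: (user_id, date of a qualifying "active" action).
events = [
    ("u1", date(2024, 5, 1)), ("u1", date(2024, 5, 2)), ("u1", date(2024, 5, 3)),
    ("u2", date(2024, 5, 1)),
    ("u3", date(2024, 5, 2)), ("u3", date(2024, 5, 3)),
]

def dau(events, day):
    """Unique users with a qualifying action on the given day."""
    return len({u for u, d in events if d == day})

def mau(events, year, month):
    """Unique users with a qualifying action in the given month."""
    return len({u for u, d in events if (d.year, d.month) == (year, month)})

# Average DAU over the days we have data for, divided by MAU.
avg_dau = sum(dau(events, date(2024, 5, day)) for day in (1, 2, 3)) / 3
stickiness = avg_dau / mau(events, 2024, 5)
print(f"stickiness (DAU/MAU): {stickiness:.2f}")
```

Note that the definition of a qualifying action is the decisive choice here: counting logins versus counting core actions can produce very different stickiness numbers from the same log.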

Retention

What it measures: Of users who started in period X, what % are still active in period Y?

Common retention timeframes:

  • D1 (next day): Was the first experience good enough to come back?
  • D7 (one week): Did the user establish a habit?
  • D30 (one month): Is the user retained long-term?
  • D90 (three months): Is this a durable user?

How to use retention:

  • Plot retention curves by cohort. Look for: initial drop-off (activation problem), steady decline (engagement problem), or flattening (good — you have a stable retained base).
  • Compare cohorts over time. Are newer cohorts retaining better than older ones? That means product improvements are working.
  • Segment retention by activation behavior. Users who completed onboarding vs those who did not. Users who used feature X vs those who did not.
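
The cohort math above can be sketched in a few lines of Python. This uses classic "exact day N" retention with invented users and dates; many products instead use bounded windows (for example, active any time in days 25-31 for D30):

```python
from datetime import date

# Hypothetical cohort: signup date per user, plus the dates each user was active.
signups = {u: date(2024, 5, 1) for u in ("u1", "u2", "u3", "u4")}
activity = {
    "u1": {date(2024, 5, 2), date(2024, 5, 8), date(2024, 5, 31)},
    "u2": {date(2024, 5, 2)},
    "u3": {date(2024, 5, 8)},
    "u4": set(),
}

def retention(signups, activity, day_n):
    """Fraction of the cohort active on exactly day N after signup."""
    retained = sum(
        1 for u in signups
        if any((d - signups[u]).days == day_n for d in activity[u])
    )
    return retained / len(signups)

for n in (1, 7, 30):
    print(f"D{n} retention: {retention(signups, activity, n):.0%}")
```

Running this per signup cohort and plotting the results gives the retention curves described above; a curve that flattens rather than decaying to zero indicates a stable retained base.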

Conversion

What it measures: % of users who move from one stage to the next.

Common conversion funnels:

  • Visitor to signup
  • Signup to activation (key value moment)
  • Free to paid (trial conversion)
  • Trial to paid subscription
  • Monthly to annual plan

How to use conversion:

  • Map the full funnel and measure conversion at each step
  • Identify the biggest drop-off points — these are your highest-leverage improvement opportunities
  • Segment conversion by source, plan, user type. Different segments convert very differently.
  • Track conversion over time. Is it improving as you iterate on the experience?
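
The step-by-step funnel measurement above can be sketched as follows; the stage names and counts are made up for illustration:

```python
# Hypothetical stage counts for a visitor-to-paid funnel.
funnel = [
    ("visitors", 10_000),
    ("signups", 1_200),
    ("activated", 600),
    ("paid", 90),
]

# Step conversion: each stage relative to the previous one.
# The biggest drop-off is the highest-leverage place to improve.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.1%}")

overall = funnel[-1][1] / funnel[0][1]
print(f"overall: {overall:.2%}")
```

In this made-up funnel the signup-to-activation step loses half of users, so it would be the first place to investigate, even though the visitor-to-signup percentage looks worse in absolute terms.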

Activation

What it measures: % of new users who reach the moment where they first experience the product's core value.

Defining activation:

  • Look at retained users vs churned users. What actions did retained users take that churned users did not?
  • The activation event should be strongly predictive of long-term retention
  • It should be achievable within the first session or first few days
  • Examples: created first project, invited a teammate, completed first workflow, connected an integration

How to use activation:

  • Track activation rate for every signup cohort
  • Measure time to activate — faster is almost always better
  • Build onboarding flows that guide users to the activation moment
  • A/B test activation flows and measure impact on retention, not just activation rate
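
As a sketch of tracking activation rate and time to activate, here is a toy Python example. The activation event ("created first project"), users, and timestamps are all invented:

```python
from datetime import datetime
from statistics import median

# Hypothetical cohort: (signup time, activation time). None = never activated.
cohort = {
    "u1": (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 10)),
    "u2": (datetime(2024, 5, 1, 9), datetime(2024, 5, 3, 9)),
    "u3": (datetime(2024, 5, 1, 9), None),
    "u4": (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 12)),
}

activated = {u: (s, a) for u, (s, a) in cohort.items() if a is not None}
activation_rate = len(activated) / len(cohort)

# Median is preferred over mean here: a few slow activators skew the mean badly.
hours_to_activate = [(a - s).total_seconds() / 3600 for s, a in activated.values()]
print(f"activation rate: {activation_rate:.0%}")
print(f"median time to activate: {median(hours_to_activate):.0f}h")
```

Computed per signup cohort, these two numbers together show whether onboarding changes are getting more users to the value moment, and getting them there faster.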

Goal Setting Frameworks

OKRs (Objectives and Key Results)

Objectives: Qualitative, aspirational goals that describe what you want to achieve.

  • Inspiring and memorable
  • Time-bound (quarterly or annually)
  • Directional, not metric-specific

Key Results: Quantitative measures that tell you if you achieved the objective.

  • Specific and measurable
  • Time-bound with a clear target
  • Outcome-based, not output-based
  • 2-4 Key Results per Objective

Example:

Objective: Make our product indispensable for daily workflows

Key Results:
- Increase DAU/MAU ratio from 0.35 to 0.50
- Increase D30 retention for new users from 40% to 55%
- 3 core workflows with >80% task completion rate

OKR Best Practices

  • Set OKRs that are ambitious but achievable. 70% completion is the target for stretch OKRs.
  • Key Results should measure outcomes (user behavior, business results), not outputs (features shipped, tasks completed).
  • Do not have too many OKRs. 2-3 objectives with 2-4 KRs each is plenty.
  • OKRs should be uncomfortable. If you are confident you will hit all of them, they are not ambitious enough.
  • Review OKRs at mid-period. Adjust effort allocation if some KRs are clearly off track.
  • Grade OKRs honestly at end of period. 0.0-0.3 = missed, 0.4-0.6 = progress, 0.7-1.0 = achieved.
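
One common (though not universal) way to grade a KR is linear progress from baseline to target, clamped to [0, 1]. Here is a sketch using the example objective above, with invented quarter-end actuals:

```python
def grade_kr(baseline, target, actual):
    """Linear KR score: 0.0 at baseline, 1.0 at target, clamped to [0, 1]."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    score = (actual - baseline) / (target - baseline)
    return max(0.0, min(1.0, score))

# Hypothetical quarter-end results for two of the example Key Results.
krs = [
    ("DAU/MAU 0.35 -> 0.50", grade_kr(0.35, 0.50, 0.44)),
    ("D30 retention 40% -> 55%", grade_kr(0.40, 0.55, 0.52)),
]
for name, score in krs:
    print(f"{name}: {score:.1f}")
print(f"objective score: {sum(s for _, s in krs) / len(krs):.2f}")
```

An objective score around 0.7 on stretch OKRs is, per the guidance above, roughly where a healthy ambitious quarter lands.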

Setting Metric Targets

  • Baseline: What is the current value? You need a reliable baseline before setting a target.
  • Benchmark: What do comparable products achieve? Industry benchmarks provide context.
  • Trajectory: What is the current trend? If the metric is already improving at 5% per month, a 6% target is not ambitious.
  • Effort: How much investment are you putting behind this? Bigger bets warrant more ambitious targets.
  • Confidence: How confident are you in hitting the target? Set a "commit" (high confidence) and a "stretch" (ambitious).
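
The trajectory point above can be made concrete with a small sketch. The numbers and the five-point stretch uplift are assumptions for illustration, not a formula from the skill:

```python
# Hypothetical: a metric at 40%, already improving ~2 points per month organically.
baseline = 0.40
monthly_trend = 0.02   # organic improvement per month, from recent history
months = 3             # quarter-long target horizon

# The do-nothing trajectory is the floor: a target below it is not ambitious.
trajectory = baseline + monthly_trend * months

commit = trajectory            # high-confidence target
stretch = trajectory + 0.05    # assumes extra investment adds ~5 more points

print(f"commit: {commit:.0%}, stretch: {stretch:.0%}")
```

The useful habit is the comparison itself: always project the current trend forward first, then ask how much the planned investment should move the metric beyond that line.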

Metric Review Cadences

Weekly Metrics Check

Purpose: Catch issues quickly, monitor experiments, stay in touch with product health. Duration: 15-30 minutes. Attendees: Product manager, optionally the engineering lead.

What to review:

  • North Star metric: current value, week-over-week change
  • Key L1 metrics: any notable movements
  • Active experiments: results and statistical significance
  • Anomalies: any unexpected spikes or drops
  • Alerts: anything that triggered a monitoring alert

Action: If something looks off, investigate. Otherwise, note it and move on.

Monthly Metrics Review

Purpose: Deeper analysis of trends, progress against goals, strategic implications. Duration: 30-60 minutes. Attendees: Product team, key stakeholders.

What to review:

  • Full L1 metric scorecard with month-over-month trends
  • Progress against quarterly OKR targets
  • Cohort analysis: are newer cohorts performing better?
  • Feature adoption: how are recent launches performing?
  • Segment analysis: any divergence between user segments?

Action: Identify 1-3 areas to investigate or invest in. Update priorities if metrics reveal new information.

Quarterly Business Review

Purpose: Strategic assessment of product performance, goal-setting for next quarter. Duration: 60-90 minutes. Attendees: Product, engineering, design, leadership.

What to review:

  • OKR scoring for the quarter
  • Trend analysis for all L1 metrics over the quarter
  • Year-over-year comparisons
  • Competitive context: market changes and competitor movements
  • What worked and what did not

Action: Set OKRs for next quarter. Adjust product strategy based on what the data shows.

Dashboard Design Principles


Content truncated.
