user-research-synthesis


Synthesize qualitative and quantitative user research into structured insights and opportunity areas. Use when analyzing interview notes, survey responses, support tickets, or behavioral data to identify themes, build personas, or prioritize opportunities.

Install

mkdir -p .claude/skills/user-research-synthesis && curl -L -o skill.zip "https://mcp.directory/api/skills/download/901" && unzip -o skill.zip -d .claude/skills/user-research-synthesis && rm skill.zip

Installs to .claude/skills/user-research-synthesis

About this skill

User Research Synthesis Skill

You are an expert at synthesizing user research — turning raw qualitative and quantitative data into structured insights that drive product decisions. You help product managers make sense of interviews, surveys, usability tests, support data, and behavioral analytics.

Research Synthesis Methodology

Thematic Analysis

The core method for synthesizing qualitative research:

  1. Familiarization: Read through all the data. Get a feel for the overall landscape before coding anything.
  2. Initial coding: Go through the data systematically. Tag each observation, quote, or data point with descriptive codes. Be generous with codes — it is easier to merge than to split later.
  3. Theme development: Group related codes into candidate themes. A theme captures something important about the data in relation to the research question.
  4. Theme review: Check themes against the data. Does each theme have sufficient evidence? Are themes distinct from each other? Do they tell a coherent story?
  5. Theme refinement: Define and name each theme clearly. Write a 1-2 sentence description of what each theme captures.
  6. Report: Write up the themes as findings with supporting evidence.
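The coding-to-theme pipeline above can be sketched in a few lines of Python. The observations, codes, and theme names here are invented for illustration; the point is the shape of the bookkeeping, not the specific labels:

```python
from collections import defaultdict

# Step 2: each observation is tagged with one or more descriptive codes.
coded_observations = [
    ("Exports report to Excel to share with manager", ["manual-export", "sharing"]),
    ("Frustrated that dashboards can't be emailed", ["sharing", "frustration"]),
    ("Copies charts into slides every Monday", ["manual-export", "recurring-task"]),
]

# Step 3: group related codes into candidate themes (hypothetical mapping).
theme_map = {
    "manual-export": "Sharing requires manual workarounds",
    "sharing": "Sharing requires manual workarounds",
    "frustration": "Sharing requires manual workarounds",
    "recurring-task": "Reporting is a recurring burden",
}

# Step 4: review each theme's evidence by collecting supporting observations.
evidence = defaultdict(set)
for observation, codes in coded_observations:
    for code in codes:
        evidence[theme_map[code]].add(observation)

for theme, support in evidence.items():
    print(f"{theme}: {len(support)} supporting observations")
```

A theme backed by a single observation (like the recurring-task theme here) is a candidate for merging or dropping during review.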

Affinity Mapping

A collaborative method for grouping observations:

  1. Capture observations: Write each distinct observation, quote, or data point as a separate note.
  2. Cluster: Group related notes together based on similarity. Do not pre-define categories — let them emerge from the data.
  3. Label clusters: Give each cluster a descriptive name that captures the common thread.
  4. Organize clusters: Arrange clusters into higher-level groups if patterns emerge.
  5. Identify themes: The clusters and their relationships reveal the key themes.

Tips for affinity mapping:

  • One observation per note. Do not combine multiple insights.
  • Move notes between clusters freely. The first grouping is rarely the best.
  • If a cluster gets too large, it probably contains multiple themes. Split it.
  • Outliers are interesting. Do not force every observation into a cluster.
  • The process of grouping is as valuable as the output. It builds shared understanding.
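The same bookkeeping applies to affinity mapping, including the tip about oversized clusters. A minimal sketch, with invented notes and cluster labels:

```python
# Each note carries its current cluster label; labels emerge during grouping.
notes = {
    "Onboarding email went to spam": "Onboarding",
    "Didn't know where to find settings": "Navigation",
    "Setup wizard skipped a required step": "Onboarding",
    "Search returns irrelevant results": "Navigation",
    "Couldn't invite teammates during trial": "Onboarding",
}

clusters = {}
for note, label in notes.items():
    clusters.setdefault(label, []).append(note)

# Tip: a cluster that grows too large probably hides multiple themes.
# The threshold here (2) is arbitrary for illustration.
oversized = [label for label, ns in clusters.items() if len(ns) > 2]
print(oversized)  # clusters worth splitting
```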

Triangulation

Strengthen findings by combining multiple data sources:

  • Methodological triangulation: Same question, different methods (interviews + survey + analytics)
  • Source triangulation: Same method, different participants or segments
  • Temporal triangulation: Same observation at different points in time

A finding supported by multiple sources and methods is much stronger than one supported by a single source. When sources disagree, that is interesting — it may reveal different user segments or contexts.

Interview Note Analysis

Extracting Insights from Interview Notes

For each interview, identify:

Observations: What did the participant describe doing, experiencing, or feeling?

  • Distinguish between behaviors (what they do) and attitudes (what they think/feel)
  • Note context: when, where, with whom, how often
  • Flag workarounds — these are unmet needs in disguise

Direct quotes: Verbatim statements that powerfully illustrate a point

  • Good quotes are specific and vivid, not generic
  • Attribute to participant type, not name: "Enterprise admin, 200-person team" not "Sarah"
  • A quote is evidence, not a finding. The finding is your interpretation of what the quote means.

Behaviors vs stated preferences: What people DO often differs from what they SAY they want

  • Behavioral observations are stronger evidence than stated preferences
  • If a participant says "I want feature X" but their workflow shows they never use similar features, note the contradiction
  • Look for revealed preferences through actual behavior
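One way to operationalize the say/do contradiction is to cross-check a stated want against logged behavior. Participant records, feature names, and the similarity mapping below are all hypothetical:

```python
# Hypothetical features considered "similar" to the stated want.
SIMILAR_FEATURES = {"saved-filters"}

participants = [
    {"id": "P1", "says_wants": "advanced-filters", "features_used": {"basic-search"}},
    {"id": "P2", "says_wants": "advanced-filters", "features_used": {"saved-filters", "basic-search"}},
]

def contradicts(p):
    """Stated want is not backed by use of any similar existing feature."""
    return p["says_wants"] == "advanced-filters" and not (p["features_used"] & SIMILAR_FEATURES)

flagged = [p["id"] for p in participants if contradicts(p)]
print(flagged)  # participants whose stated preference lacks behavioral support
```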

Signals of intensity: How much does this matter to the participant?

  • Emotional language: frustration, excitement, resignation
  • Frequency: how often do they encounter this issue
  • Workarounds: how much effort do they expend working around the problem
  • Impact: what is the consequence when things go wrong

Cross-Interview Analysis

After processing individual interviews:

  • Look for patterns: which observations appear across multiple participants?
  • Note frequency: how many participants mentioned each theme?
  • Identify segments: do different types of users have different patterns?
  • Surface contradictions: where do participants disagree? This often reveals meaningful segments.
  • Find surprises: what challenged your prior assumptions?
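The frequency and segmentation steps above reduce to simple tallies. A sketch with invented participants, segments, and themes:

```python
from collections import Counter

interviews = [
    {"segment": "Enterprise admin", "themes": ["permissions", "audit-logs"]},
    {"segment": "Startup founder", "themes": ["pricing", "onboarding"]},
    {"segment": "Enterprise admin", "themes": ["permissions", "sso"]},
    {"segment": "Startup founder", "themes": ["onboarding"]},
]

# Frequency: how many interviews mention each theme?
frequency = Counter(t for i in interviews for t in i["themes"])

# Segmentation: do different user types show different patterns?
by_segment = {}
for i in interviews:
    by_segment.setdefault(i["segment"], Counter()).update(i["themes"])

print(frequency.most_common(3))
```

A theme mentioned by every enterprise participant but no founder (like permissions here) is a segmentation signal, not just a frequency count.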

Survey Data Interpretation

Quantitative Survey Analysis

  • Response rate: How representative is the sample? Low response rates may introduce bias.
  • Distribution: Look at the shape of responses, not just averages. A bimodal distribution (lots of 1s and 5s) tells a different story than a normal distribution (lots of 3s).
  • Segmentation: Break down responses by user segment. Aggregates can mask important differences.
  • Statistical significance: For small samples, be cautious about drawing conclusions from small differences.
  • Benchmark comparison: How do scores compare to industry benchmarks or previous surveys?
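The distribution point is easy to demonstrate: two fabricated response sets with identical means but opposite stories.

```python
from statistics import mean
from collections import Counter

lukewarm = [3, 3, 3, 3, 3, 3]   # everyone is indifferent
polarized = [1, 1, 1, 5, 5, 5]  # half hate it, half love it

# The averages are indistinguishable...
assert mean(lukewarm) == mean(polarized) == 3

# ...but the distributions are not.
print(Counter(lukewarm))    # Counter({3: 6})
print(Counter(polarized))   # Counter({1: 3, 5: 3})
```

Reporting only "average rating: 3.0" would hide the fact that the second product has a passionate audience and an alienated one.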

Open-Ended Survey Response Analysis

  • Treat open-ended responses like mini interview notes
  • Code each response with themes
  • Count frequency of themes across responses
  • Pull representative quotes for each theme
  • Look for themes that appear in open-ended responses but not in structured questions — these are things you did not think to ask about

Common Survey Analysis Mistakes

  • Reporting averages without distributions. A 3.5 average could mean everyone is lukewarm or half love it and half hate it.
  • Ignoring non-response bias. The people who did not respond may be systematically different.
  • Over-interpreting small differences. A 0.1 point change in NPS is noise, not signal.
  • Treating Likert scales as interval data. The difference between "Strongly Agree" and "Agree" is not necessarily the same as between "Agree" and "Neutral."
  • Confusing correlation with causation in cross-tabulations.

Combining Qualitative and Quantitative Insights

The Qual-Quant Feedback Loop

  • Qualitative first: Interviews and observation reveal WHAT is happening and WHY. They generate hypotheses.
  • Quantitative validation: Surveys and analytics reveal HOW MUCH and HOW MANY. They test hypotheses at scale.
  • Qualitative deep-dive: Return to qualitative methods to understand unexpected quantitative findings.

Integration Strategies

  • Use quantitative data to prioritize qualitative findings. A theme from interviews is more important if usage data shows it affects many users.
  • Use qualitative data to explain quantitative anomalies. A drop in retention is a number; interviews reveal it is because of a confusing onboarding change.
  • Present combined evidence: "47% of surveyed users report difficulty with X (survey), and interviews reveal this is because Y (qualitative finding)."

When Sources Disagree

  • Quantitative and qualitative sources may tell different stories. This is signal, not error.
  • Check if the disagreement is due to different populations being measured
  • Check if stated preferences (survey) differ from actual behavior (analytics)
  • Check if the quantitative question captured what you think it captured
  • Report the disagreement honestly and investigate further rather than choosing one source

Persona Development from Research

Building Evidence-Based Personas

Personas should emerge from research data, not imagination:

  1. Identify behavioral patterns: Look for clusters of similar behaviors, goals, and contexts across participants
  2. Define distinguishing variables: What dimensions differentiate one cluster from another? (e.g., company size, technical skill, usage frequency, primary use case)
  3. Create persona profiles: For each behavioral cluster:
    • Name and brief description
    • Key behaviors and goals
    • Pain points and needs
    • Context (role, company, tools used)
    • Representative quotes
  4. Validate with data: Can you size each persona segment using quantitative data?
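Steps 1, 2, and 4 above can be sketched as clustering on distinguishing variables and then sizing each cluster against product data. All participants, variables, and counts below are hypothetical:

```python
participants = [
    {"id": "P1", "company_size": "enterprise", "usage": "daily"},
    {"id": "P2", "company_size": "smb", "usage": "weekly"},
    {"id": "P3", "company_size": "enterprise", "usage": "daily"},
    {"id": "P4", "company_size": "smb", "usage": "daily"},
]

# Step 2: cluster on the distinguishing variables.
clusters = {}
for p in participants:
    key = (p["company_size"], p["usage"])
    clusters.setdefault(key, []).append(p["id"])

# Step 4: validate by sizing each cluster from analytics (invented counts).
analytics_counts = {
    ("enterprise", "daily"): 4200,
    ("smb", "weekly"): 1800,
    ("smb", "daily"): 950,
}

for key, members in clusters.items():
    print(key, len(members), "interviewed;", analytics_counts.get(key, 0), "in product data")
```

A cluster that is vivid in interviews but tiny in the analytics may not deserve its own persona.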

Persona Template

[Persona Name] — [One-line description]

Who they are:
- Role, company type/size, experience level
- How they found/started using the product

What they are trying to accomplish:
- Primary goals and jobs to be done
- How they measure success

How they use the product:
- Frequency and depth of usage
- Key workflows and features used
- Tools they use alongside this product

Key pain points:
- Top 3 frustrations or unmet needs
- Workarounds they have developed

What they value:
- What matters most in a solution
- What would make them switch or churn

Representative quotes:
- 2-3 verbatim quotes that capture this persona's perspective

Common Persona Mistakes

  • Demographic personas: defining by age/gender/location instead of behavior. Behavior predicts product needs better than demographics.
  • Too many personas: 3-5 is the sweet spot. More than that and they are not actionable.
  • Fictional personas: made up based on assumptions rather than research data.
  • Static personas: never updated as the product and market evolve.
  • Personas without implications: a persona that does not change any product decisions is not useful.

Opportunity Sizing

Estimating Opportunity Size

For each research finding or opportunity area, estimate:

  • Addressable users: How many users could benefit from addressing this? Use product analytics, survey data, or market data to estimate.
  • Frequency: How often do affected users encounter this issue? (Daily, weekly, monthly, one-time)
  • Severity: How much does this issue impact users when it occurs? (Blocker, significant friction, minor annoyance)
  • Willingness to pay: Would addressing this drive upgrades, retention, or new customer acquisition?
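The dimensions above can feed a simple multiplicative score (in the spirit of RICE-style prioritization). The weights and opportunity data below are assumptions for illustration, not a formula prescribed by this skill:

```python
# Assumed ordinal weights for frequency and severity.
FREQUENCY = {"daily": 3, "weekly": 2, "monthly": 1}
SEVERITY = {"blocker": 3, "significant": 2, "minor": 1}

opportunities = [
    {"name": "Confusing export flow", "reach": 5000, "freq": "weekly", "sev": "significant"},
    {"name": "Broken SSO on mobile", "reach": 800, "freq": "daily", "sev": "blocker"},
]

for o in opportunities:
    # score = addressable users x frequency weight x severity weight
    o["score"] = o["reach"] * FREQUENCY[o["freq"]] * SEVERITY[o["sev"]]

ranked = sorted(opportunities, key=lambda o: o["score"], reverse=True)
print([(o["name"], o["score"]) for o in ranked])
```

Note how reach dominates here: a significant weekly friction affecting 5,000 users outscores a daily blocker affecting 800. Whether that ranking is right depends on the weights, which is exactly why they should be explicit.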

Opportunity Scoring

Score opportunities on a simple matrix:

  • Impact: (User

Content truncated.
