pi-share
Load and parse session transcripts from pi-share URLs (shittycodingagent.ai, buildwithpi.ai, buildwithpi.com). Fetches gists, decodes embedded session data, and extracts conversation history.
Install
mkdir -p .claude/skills/pi-share && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5525" && unzip -o skill.zip -d .claude/skills/pi-share && rm skill.zip
Installs to .claude/skills/pi-share.
About this skill
pi-share / buildwithpi Session Loader
Load and parse session transcripts from pi-share URLs (shittycodingagent.ai, buildwithpi.ai, buildwithpi.com, pi.dev).
When to Use
Loading sessions: Use this skill when the user provides a URL like one of the following (all forms reduce to a bare gist ID, as sketched below):
- https://shittycodingagent.ai/session/?<gist_id>
- https://buildwithpi.ai/session/?<gist_id>
- https://buildwithpi.com/session/?<gist_id>
- https://pi.dev/session/?<gist_id>
- https://pi.dev/session/#<gist_id>
- Or just a gist ID like 46aee35206aefe99257bc5d5e60c6121
- Or hash-prefixed shorthand like #46aee35206aefe99257bc5d5e60c6121
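A minimal normalization sketch for these input forms (the hex-length check is an assumption for illustration; the script's actual validation may differ):
// Sketch: normalize a pi-share URL, "#<id>" shorthand, or bare ID to a gist ID.
function extractGistId(input) {
  const s = input.trim();
  // Full URLs carry the ID after "?" (or "#" on pi.dev); otherwise expect a bare hex ID.
  const m = s.match(/[?#]([0-9a-f]{20,})$/i) || s.match(/^([0-9a-f]{20,})$/i);
  if (!m) throw new Error(`Unrecognized pi-share input: ${input}`);
  return m[1].toLowerCase();
}
extractGistId("https://pi.dev/session/#46aee35206aefe99257bc5d5e60c6121");
// => "46aee35206aefe99257bc5d5e60c6121"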
Human summaries: Use --human-summary when the user asks you to:
- Summarize what a human did in a pi/coding agent session
- Understand how a user interacted with an agent
- Analyze user behavior, steering patterns, or prompting style
- Get a human-centric view of a session (not what the agent did, but what the human did)
The human summary focuses on: initial goals, re-prompts, steering/corrections, interventions, and overall prompting style.
How It Works
- Session exports are stored as GitHub Gists
- The URL contains a gist ID after the ?
- The gist contains a session.html file with base64-encoded session data
- The helper script fetches and decodes this to extract the full conversation
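A rough sketch of that flow against the public GitHub Gists API. Locating the payload as the first long base64 run is an assumption; the real extraction logic lives in fetch-session.mjs:
// Sketch: fetch the gist and decode the embedded session data.
const gistId = "46aee35206aefe99257bc5d5e60c6121";
const res = await fetch(`https://api.github.com/gists/${gistId}`);
if (!res.ok) throw new Error(`gist fetch failed: ${res.status}`);
const file = (await res.json()).files["session.html"];
if (!file) throw new Error("gist has no session.html");
// GitHub truncates large gist files; re-fetch from raw_url if so.
const html = file.truncated ? await (await fetch(file.raw_url)).text() : file.content;
// Assumption: the session payload is the first long base64 run in the HTML.
const b64 = html.match(/[A-Za-z0-9+\/=]{200,}/)?.[0];
if (!b64) throw new Error("no base64 payload found in session.html");
const session = JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
console.log(session.header.id, session.entries.length);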
Usage
# Get full session data (default)
node ~/.pi/agent/skills/pi-share/fetch-session.mjs "<url-or-gist-id>"
# Get just the header
node ~/.pi/agent/skills/pi-share/fetch-session.mjs <gist-id> --header
# Get entries as JSON lines (one entry per line)
node ~/.pi/agent/skills/pi-share/fetch-session.mjs <gist-id> --entries
# Get the system prompt
node ~/.pi/agent/skills/pi-share/fetch-session.mjs <gist-id> --system
# Get tool definitions
node ~/.pi/agent/skills/pi-share/fetch-session.mjs <gist-id> --tools
# Get human-centric summary (what did the human do in this session?)
node ~/.pi/agent/skills/pi-share/fetch-session.mjs <gist-id> --human-summary
Human Summary
The --human-summary flag generates a ~300-word summary focused on the human's experience:
- What was their initial goal?
- How often did they re-prompt or steer the agent?
- What kind of interventions did they make? (corrections, clarifications, frustration)
- How specific or vague were their instructions?
This uses claude-haiku-4-5 via pi -p to analyze the condensed session transcript.
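A hypothetical sketch of that step, assuming pi -p accepts a one-shot prompt and prints the reply (the actual flags, model selection, prompt wording, and transcript condensation all live in fetch-session.mjs):
// Hypothetical: shell out to `pi -p` with a condensed transcript.
// Very long transcripts may exceed argv limits; the real script may differ.
import { execFileSync } from "node:child_process";

function humanSummary(condensedTranscript) {
  const prompt =
    "In ~300 words, summarize what the HUMAN did in this session: " +
    "initial goal, re-prompts, steering/corrections, interventions, " +
    "and prompting style.\n\n" + condensedTranscript;
  return execFileSync("pi", ["-p", prompt], { encoding: "utf8" });
}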
Session Data Structure
The decoded session contains:
interface SessionData {
header: {
type: "session";
version: number;
id: string; // Session UUID
timestamp: string; // ISO timestamp
cwd: string; // Working directory
};
entries: SessionEntry[]; // Conversation entries (JSON lines format)
leafId: string | null; // Current branch leaf
systemPrompt?: string; // System prompt text
tools?: { name: string; description: string }[];
}
Entry types include:
- message - User/assistant/toolResult messages with content blocks
- model_change - Model switches
- thinking_level_change - Thinking mode changes
- compaction - Context compaction events
Message content block types:
- text - Text content
- toolCall - Tool invocation with toolName and args
- thinking - Model thinking content
- image - Embedded images
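For example, a small helper that tallies tool calls per tool name from parsed entries (field names follow the structure above):
// Tally toolCall blocks per tool name across all message entries.
function countToolCalls(entries) {
  const counts = {};
  for (const entry of entries) {
    if (entry.type !== "message") continue;
    for (const block of entry.message.content ?? []) {
      if (block.type === "toolCall") {
        counts[block.toolName] = (counts[block.toolName] ?? 0) + 1;
      }
    }
  }
  return counts;
}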
Example: Analyze a Session
# Pipe entries through jq to filter
node ~/.pi/agent/skills/pi-share/fetch-session.mjs "<url>" --entries | jq 'select(.type == "message" and .message.role == "user")'
# Count tool calls
node ~/.pi/agent/skills/pi-share/fetch-session.mjs "<url>" --entries | jq -s '[.[] | select(.type == "message") | .message.content[]? | select(.type == "toolCall")] | length'
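The same pipelines work in Node. A sketch that consumes --entries output from stdin one JSON line at a time (count-user-turns.mjs is a hypothetical script name):
// count-user-turns.mjs — count user messages from `--entries` on stdin.
import readline from "node:readline";

const rl = readline.createInterface({ input: process.stdin });
let userTurns = 0;
for await (const line of rl) {
  if (!line.trim()) continue;
  const entry = JSON.parse(line);
  if (entry.type === "message" && entry.message.role === "user") userTurns++;
}
console.log(`user turns: ${userTurns}`);
# Usage
node ~/.pi/agent/skills/pi-share/fetch-session.mjs "<url>" --entries | node count-user-turns.mjs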