# daggr

Build DAG-based AI pipelines connecting Gradio Spaces, Hugging Face models, and Python functions into visual workflows. Use when asked to create a workflow, build a pipeline, connect AI models, chain Gradio Spaces, create a daggr app, build multi-step AI applications, or orchestrate ML models. Triggers on: "build a workflow", "create a pipeline", "connect models", "daggr", "chain Spaces", "AI pipeline".
## Install

```shell
mkdir -p .claude/skills/daggr && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7219" && unzip -o skill.zip -d .claude/skills/daggr && rm skill.zip
```

Installs to `.claude/skills/daggr`.
## About this skill

Build visual DAG pipelines connecting Gradio Spaces, HF Inference Providers, and Python functions.

Full docs: https://raw.githubusercontent.com/gradio-app/daggr/refs/heads/main/README.md
## Quick Start

```python
from daggr import GradioNode, FnNode, InferenceNode, Graph, ItemList
import gradio as gr

graph = Graph(name="My Workflow", nodes=[node1, node2, ...])
graph.launch()  # Starts a web server with the visual DAG UI
```
## Node Types

### GradioNode - Gradio Spaces

```python
node = GradioNode(
    space_or_url="owner/space-name",
    api_name="/endpoint",
    inputs={
        "param": gr.Textbox(label="Input"),   # UI input
        "other": other_node.output_port,      # Port connection
        "fixed": "constant_value",            # Fixed value
    },
    postprocess=lambda *returns: returns[0],  # Transform response
    outputs={"result": gr.Image(label="Output")},
)

# Example: image generation
img = GradioNode("Tongyi-MAI/Z-Image-Turbo", api_name="/generate",
                 inputs={"prompt": gr.Textbox(), "resolution": "1024x1024 ( 1:1 )"},
                 postprocess=lambda imgs, *_: imgs[0]["image"],
                 outputs={"image": gr.Image()})
```

Find Spaces with semantic queries (describe what you need):
https://huggingface.co/api/spaces/semantic-search?q=generate+music+for+a+video&sdk=gradio&includeNonRunning=false

Or by category:
https://huggingface.co/api/spaces/semantic-search?category=image-generation&sdk=gradio&includeNonRunning=false

(Categories: image-generation | video-generation | text-generation | speech-synthesis | music-generation | voice-cloning | image-editing | background-removal | image-upscaling | ocr | style-transfer | image-captioning)
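The `postprocess` hook is a plain callable applied to the Space's raw return values before they reach the output ports. A minimal sketch of what the lambda above receives and keeps, using a hypothetical gallery-style response (the sample paths are illustrative, not real API output):

```python
# Hypothetical raw return tuple from a /generate endpoint:
# (gallery of {"image": path} dicts, seed, image count).
raw_returns = ([{"image": "/tmp/out_0.png"}, {"image": "/tmp/out_1.png"}], 42, 2)

# Same shape as the postprocess above: keep only the first image's path,
# discarding the trailing metadata returns via *_.
postprocess = lambda imgs, *_: imgs[0]["image"]

print(postprocess(*raw_returns))  # /tmp/out_0.png
```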
### FnNode - Python Functions

```python
def process(input1: str, input2: int) -> str:
    return f"{input1}: {input2}"

node = FnNode(
    fn=process,
    inputs={"input1": gr.Textbox(), "input2": other_node.port},
    outputs={"result": gr.Textbox()},
)
```
### InferenceNode - HF Inference Providers

Find models: https://huggingface.co/api/models?inference_provider=all&pipeline_tag=text-to-image
(Swap pipeline_tag: text-to-image | image-to-image | image-to-text | image-to-video | text-to-video | text-to-speech | automatic-speech-recognition)

VLM/LLM models: https://router.huggingface.co/v1/models

```python
node = InferenceNode(
    model="org/model:provider",  # model:provider (fal-ai, replicate, together, etc.)
    inputs={"image": other_node.image, "prompt": gr.Textbox()},
    outputs={"image": gr.Image()},
)
```

Auth: InferenceNode and ZeroGPU Spaces require an HF token. If one is not in the environment, ask the user to create one:
https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained

Out of quota? Pro gives 8x ZeroGPU and 10x inference quota: https://huggingface.co/subscribe/pro
## Port Connections

Pass ports via `inputs={...}`:

```python
inputs={"param": previous_node.output_port}   # Basic connection
inputs={"item": items_node.items.field_name}  # Scattered (per-item)
inputs={"all": scattered_node.output.all()}   # Gathered (collect list)
```
## ItemList - Dynamic Lists

```python
def gen_items(n: int) -> list:
    return [{"text": f"Item {i}"} for i in range(n)]

items = FnNode(fn=gen_items,
               outputs={"items": ItemList(text=gr.Textbox())})

# Runs once per item
process = FnNode(fn=process_item,
                 inputs={"text": items.items.text},
                 outputs={"result": gr.Textbox()})

# Collect all results
final = FnNode(fn=combine,
               inputs={"all": process.result.all()},
               outputs={"out": gr.Textbox()})
```
## Checklist

- Check the API before using a Space:
  `curl -s "https://<space-subdomain>.hf.space/gradio_api/openapi.json"`
  Replace `<space-subdomain>` with the Space's subdomain (e.g., `Tongyi-MAI/Z-Image-Turbo` → `tongyi-mai-z-image-turbo`). Spaces also have a "Use via API" link in the footer with endpoints and code snippets.
- Handle files (Gradio returns dicts):
  `path = file.get("path") if isinstance(file, dict) else file`
- Use `postprocess` for multi-return APIs:
  `postprocess=lambda imgs, seed, num: imgs[0]["image"]`
- Debug with `.test()` to validate a node in isolation: `node.test(param="value")`
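The file-handling one-liner above can be wrapped in a small reusable helper; a minimal sketch (the name `as_path` is illustrative, not part of daggr):

```python
def as_path(value):
    """Normalize a Gradio file return: dicts carry the file path under "path",
    plain strings are already paths."""
    return value.get("path") if isinstance(value, dict) else value

print(as_path({"path": "/tmp/clip.mp4", "orig_name": "clip.mp4"}))  # /tmp/clip.mp4
print(as_path("/tmp/clip.mp4"))  # /tmp/clip.mp4
```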
## Common Patterns

```python
# Image Generation
GradioNode("Tongyi-MAI/Z-Image-Turbo", api_name="/generate",
           inputs={"prompt": gr.Textbox(), "resolution": "1024x1024 ( 1:1 )"},
           postprocess=lambda imgs, *_: imgs[0]["image"],
           outputs={"image": gr.Image()})

# Text-to-Speech
GradioNode("Qwen/Qwen3-TTS", api_name="/generate_voice_design",
           inputs={"text": gr.Textbox(), "language": "English", "voice_description": "..."},
           postprocess=lambda audio, status: audio,
           outputs={"audio": gr.Audio()})

# Image-to-Video
GradioNode("alexnasa/ltx-2-TURBO", api_name="/generate_video",
           inputs={"input_image": img.image, "prompt": gr.Textbox(), "duration": 5},
           postprocess=lambda video, seed: video,
           outputs={"video": gr.Video()})
```
```python
# ffmpeg composition
import subprocess
import tempfile

def combine(video: str | dict, audio: str | dict) -> str:
    # Gradio may return files as dicts with a "path" key
    v = video.get("path") if isinstance(video, dict) else video
    a = audio.get("path") if isinstance(audio, dict) else audio
    out = tempfile.mktemp(suffix=".mp4")
    subprocess.run(["ffmpeg", "-y", "-i", v, "-i", a, "-shortest", out], check=True)
    return out
```
## Run

```shell
uvx --python 3.12 daggr workflow.py &  # Launch in the background; hot-reloads on file changes
```
## Authentication

- Local development: use `hf auth login` or set the `HF_TOKEN` env var. This enables ZeroGPU quota tracking, private Space access, and gated models.
- Deployed Spaces: users can click "Login" in the UI and paste their HF token. This enables persistence (sheets) so they can save outputs and resume work later. The token is stored in browser localStorage.
- When deploying: pass secrets via `--secret HF_TOKEN=xxx` if your workflow needs server-side auth (e.g., for gated models in FnNode). Warning: this uses the deployer's token for all users.
## Deploy to Hugging Face Spaces

Only deploy if the user has explicitly asked to publish/deploy their workflow.

```shell
daggr deploy workflow.py
```

This extracts the Graph, creates a Space named after it, and uploads everything.

Options:

```shell
daggr deploy workflow.py --name my-space       # Custom Space name
daggr deploy workflow.py --org huggingface     # Deploy to an organization
daggr deploy workflow.py --private             # Private Space
daggr deploy workflow.py --hardware t4-small   # GPU (t4-small, t4-medium, a10g-small, etc.)
daggr deploy workflow.py --secret KEY=value    # Add secrets (repeatable)
daggr deploy workflow.py --dry-run             # Preview without deploying
```