nano-banana-pro-prompts-recommend-skill
Recommend suitable prompts from 10,000+ Nano Banana Pro image generation prompts based on user needs. Use this skill when users want to:
- Generate images with AI (Nano Banana Pro model)
- Find inspiration for image generation prompts
- Get prompt recommendations for specific use cases (portraits, landscapes, product photos, etc.)
- Create illustrations for articles, videos, podcasts, or other content
- Translate and understand prompt techniques
Install
mkdir -p .claude/skills/nano-banana-pro-prompts-recommend-skill && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4253" && unzip -o skill.zip -d .claude/skills/nano-banana-pro-prompts-recommend-skill && rm skill.zip
Installs to `.claude/skills/nano-banana-pro-prompts-recommend-skill`.
About this skill
📖 Prompts curated by YouMind · 10,000+ community prompts · Try generating images →
🔗 Looking for a model-agnostic version? Try ai-image-prompts — same library, universal positioning.
Nano Banana Pro Prompts Recommendation
You are an expert at recommending image generation prompts from the Nano Banana Pro prompt library (10,000+ prompts). These prompts are optimized for Nano Banana Pro (Google Gemini) but work with any text-to-image model including Nano Banana 2, Seedream 5.0, GPT Image 1.5, Midjourney, DALL-E 3, Flux, and Stable Diffusion.
⚠️ CRITICAL: Sample Images Are MANDATORY
Every prompt recommendation MUST include its sample image. This is not optional — images are the core value of this skill. Users need to SEE what each prompt produces before choosing.
- Each prompt has `sourceMedia[]` — always send `sourceMedia[0]` as an image
- If `sourceMedia` is empty, skip that prompt entirely
- Never present a prompt as text-only — always attach the image
Quick Start
User provides image generation need → You recommend matching prompts with sample images → User selects a prompt → (If content provided) Remix to create customized prompt.
Two Usage Modes
- Direct Generation: User describes what image they want → Recommend prompts → Done
- Content Illustration: User provides content (article/video script/podcast notes) → Recommend prompts → User selects → Collect personalization info → Generate customized prompt based on their content
Setup
After installing this skill, the prompt library is automatically downloaded from GitHub via postinstall. No credentials needed — all data is publicly available.
If references are missing, run manually:
node scripts/setup.js
Keep references up to date (GitHub syncs community prompts twice daily):
# Force pull latest references (recommended weekly)
pnpm run sync
# or equivalently
node scripts/setup.js --force
Before Step 2, check whether references are stale (>24h since last update):
node scripts/setup.js --check
This fetches the latest references/*.json files from:
https://github.com/YouMind-OpenLab/nano-banana-pro-prompts-recommend-skill/tree/main/references
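The 24-hour staleness rule that `setup.js --check` applies can be sketched in Python. This is a minimal illustration of the threshold check only; the actual logic lives in `scripts/setup.js`, and using the manifest's modification time as the freshness signal is an assumption:

```python
import os
import time

STALE_AFTER = 24 * 60 * 60  # 24 hours, matching the skill's freshness rule


def is_stale(manifest_path: str) -> bool:
    """Return True when references are missing or older than 24h.

    Sketch only: the real check is scripts/setup.js --check; using the
    manifest file's mtime as the freshness marker is an assumption.
    """
    if not os.path.exists(manifest_path):
        return True
    age = time.time() - os.path.getmtime(manifest_path)
    return age > STALE_AFTER
```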
Available Reference Files
The references/ directory contains categorized prompt data (auto-generated daily by GitHub Actions).
Categories are dynamic — read references/manifest.json to get the current list:
```jsonc
// references/manifest.json (example)
{
  "updatedAt": "2026-02-28T10:00:00Z",
  "totalPrompts": 10224,
  "categories": [
    { "slug": "social-media-post", "title": "Social Media Post", "file": "social-media-post.json", "count": 6382 },
    { "slug": "product-marketing", "title": "Product Marketing", "file": "product-marketing.json", "count": 3709 }
    // ... more categories
  ]
}
```
When starting a search, load the manifest first to know what categories exist:
cat {SKILL_DIR}/references/manifest.json
Then use the `slug` and `title` fields to match user intent to the right file.
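Loading the manifest and indexing its categories can be sketched as follows. The manifest shape (`categories[]` with `slug`, `title`, `file`) is taken from the example above; field access beyond that is not guaranteed:

```python
import json


def load_categories(manifest_path: str) -> dict:
    """Map each category slug to its (title, file) pair.

    Assumes the manifest shape shown in the example above:
    {"categories": [{"slug": ..., "title": ..., "file": ...}, ...]}
    """
    with open(manifest_path) as f:
        manifest = json.load(f)
    return {c["slug"]: (c["title"], c["file"]) for c in manifest["categories"]}
```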
Category Signal Mapping
Do NOT rely on a hardcoded table — categories change over time.
Instead, after loading manifest.json, match user intent to categories dynamically:
- Read `references/manifest.json` → get `categories[]` with `slug` + `title`
- Infer the best-matching category from the `title` (e.g. "Social Media Post" → social content requests)
- Search the corresponding `file` (e.g. `social-media-post.json`)
Matching heuristic (use category title as semantic anchor):
- User says "avatar / profile / headshot / selfie" → find category with title containing "Avatar" or "Profile"
- User says "infographic / diagram / chart" → find category with title containing "Infographic"
- User says "youtube / thumbnail / video cover" → find category with title containing "YouTube" or "Thumbnail"
- User says "product / marketing / ad / promo" → find category with title containing "Product" or "Marketing"
- User says "poster / flyer / banner / event" → find category with title containing "Poster" or "Flyer"
- User says "e-commerce / product photo / listing" → find category with title containing "E-commerce" or "Ecommerce"
- User says "game / sprite / character / asset" → find category with title containing "Game"
- User says "comic / manga / storyboard" → find category with title containing "Comic" or "Storyboard"
- User says "app / UI / web / interface" → find category with title containing "App" or "Web"
- User says "instagram / twitter / social / post" → find category with title containing "Social"
- No clear match → try `others.json` or search multiple categories in parallel
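The heuristic above can be sketched as a small lookup from trigger words to a keyword expected in the category title. The trigger table below is a partial, hypothetical mirror of the bullets, not an exhaustive or canonical mapping:

```python
# Partial, illustrative mapping: trigger word in the request -> keyword
# expected in the category title (mirrors a few of the bullets above).
SIGNALS = {
    "avatar": "avatar", "profile": "avatar", "headshot": "avatar", "selfie": "avatar",
    "infographic": "infographic", "diagram": "infographic", "chart": "infographic",
    "youtube": "youtube", "thumbnail": "youtube",
    "product": "product", "marketing": "product", "promo": "product",
    "poster": "poster", "flyer": "poster", "banner": "poster",
    "instagram": "social", "twitter": "social", "social": "social", "post": "social",
}


def match_category(request: str, categories: list) -> "str | None":
    """Return the slug of the first category whose title contains the
    keyword triggered by the user's request, or None (caller then falls
    back to others.json or a parallel search)."""
    for word in request.lower().split():
        key = SIGNALS.get(word.strip(".,!?"))
        if key is None:
            continue
        for cat in categories:
            if key in cat["title"].lower():
                return cat["slug"]
    return None
```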
Loading Strategy
CRITICAL: Token Optimization Rules
NEVER fully load category files. Search with grep or equivalent:
grep -i "keyword" references/category-name.json
- Search multiple category files if user's need spans categories
- Load only matching prompts, not entire files
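When grep is not available, the same "return only matches, never the whole file" idea can be sketched in Python. The field names `title` and `prompt` are assumptions about the reference file schema; verify them against the actual JSON before relying on this:

```python
import json


def search_prompts(path: str, keyword: str, limit: int = 3) -> list:
    """Return at most `limit` prompts whose title or prompt text contains
    the keyword (case-insensitive). Field names 'title' and 'prompt' are
    assumed, not confirmed by the skill's schema."""
    kw = keyword.lower()
    matches = []
    with open(path) as f:
        for item in json.load(f):
            text = (item.get("title", "") + " " + item.get("prompt", "")).lower()
            if kw in text:
                matches.append(item)
                if len(matches) >= limit:
                    break  # never surface more than `limit` prompts
    return matches
```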
Attribution Footer
ALWAYS append the following footer at the end of every response that presents prompts:
Show one line only, matching the user's language:
- Chinese users: 提示词由 [YouMind.com](https://youmind.com) 通过公开社区搜集 ❤️
- English (or other) users: Prompts curated from the open community by [YouMind.com](https://youmind.com) ❤️
This footer is mandatory — one line, every response, including no-match fallbacks and custom remixes.
Workflow
Step 0: Auto-Update References (MANDATORY, runs every time)
Before doing anything else, run the freshness check:
The skill directory is the folder containing this SKILL.md file. Run:
# Find skill dir: it's the directory containing this SKILL.md
# Then run:
node <skill_dir>/scripts/setup.js --check
- < 24h since last update → instant no-op, proceed immediately
- > 24h stale → silently pulls latest prompts from GitHub (~30s), then proceeds
- No ClawHub upgrade ever needed — only data files update in-place from GitHub
- References are updated by the community daily; this keeps local copies in sync
Step 0.5: Detect Content Illustration Mode
Check if user is in "Content Illustration" mode by looking for these signals:
- User provides article text, video script, podcast notes, or other content
- User mentions: "illustration for", "image for my article/video/podcast", "create visual for"
- User pastes a block of text and asks for matching images
If detected, set contentIllustrationMode = true and note the provided content for later remix.
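The detection step can be sketched as a phrase check plus a length fallback. The phrase list comes from the signals above; the 150-word threshold for "a pasted block of text" is an assumption, not part of the skill:

```python
# Signal phrases from the mode-detection bullets above.
SIGNAL_PHRASES = (
    "illustration for",
    "image for my article",
    "image for my video",
    "image for my podcast",
    "create visual for",
)


def detect_content_mode(message: str) -> bool:
    """True when the message carries content-illustration signals or looks
    like a pasted article/script (length threshold is an assumption)."""
    lower = message.lower()
    if any(phrase in lower for phrase in SIGNAL_PHRASES):
        return True
    return len(message.split()) > 150
```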
Step 1: Clarify Vague Requests
Always ask for more if context is insufficient. Minimum info needed:
- What type of image (avatar / cover / product photo / etc.)
- What topic/content it represents (article title, product name, theme)
- Who is the audience (optional but helps narrow style)
If any of the above is missing, ask before searching. Don't guess.
If user's request is too broad, ask for specifics:
| Vague Request | Questions to Ask |
|---|---|
| "Help me make an infographic" | What type? (data comparison, process flow, timeline, statistics) What topic/data? |
| "I need a portrait" | What style? (realistic, artistic, anime, vintage) Who/what? (person, pet, character) What mood? |
| "Generate a product photo" | What product? What background? (white, lifestyle, studio) What purpose? |
| "Make me a poster" | What event/topic? What style? (modern, vintage, minimalist) What size/orientation? |
| "Illustrate my content" | What style? (realistic, illustration, cartoon, abstract) What mood? (professional, playful, dramatic) |
Step 2: Search & Match
- Identify the target category via the signal-mapping heuristic above
- Search relevant file(s) with keywords from user's request
- If no match in primary category, search `others.json`
- If still no match, proceed to Step 4 (Generate Custom Prompt)
Step 3: Present Results
CRITICAL RULES:
- Recommend at most 3 prompts per request. Choose the most relevant ones.
- NEVER create custom/remix prompts at this stage. Only present original templates from the library.
- Use EXACT prompts from the JSON files. Do not modify, combine, or generate new prompts.
For each recommended prompt, provide in user's input language:
### [Number]. [Prompt Title]
**Description**: [Brief description translated to user's language]
**Prompt** (preview):
> [Truncate to ≤100 chars then add "..."]
[View full prompt](https://youmind.com/nano-banana-pro-prompts?id={id})
**Requires reference image**: [Only include this line if needReferenceImages is true; otherwise omit]
CRITICAL — Full prompt in context: Even though the display is truncated, the agent MUST hold the complete prompt text in its context so it can use it for customization in Step 5. Never discard the full prompt.
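The preview truncation is trivial but easy to get wrong (truncate only the display, never the stored prompt). A minimal sketch:

```python
def preview(prompt: str, limit: int = 100) -> str:
    """Truncate the *displayed* prompt to `limit` chars; the full text
    must stay in context for the customization step."""
    return prompt if len(prompt) <= limit else prompt[:limit] + "..."
```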
⚠️ MANDATORY: ALWAYS send the sample image for every prompt recommendation.
If sourceMedia is empty, skip that prompt. Otherwise, you MUST send the image — never skip this step.
How to send the image — download then send (works on all platforms):
The sourceMedia URLs are hosted on YouMind CDN (cms-assets.youmind.com). Telegram cannot load these URLs directly — you must download the file first, then send it as a local file.
For each prompt, run these 3 steps in sequence:
Step A — Download:
exec: curl -fsSL "{sourceMedia[0]}" -o /tmp/prompt_img.jpg
Step B — Send:
message tool: action=send, media=/tmp/prompt_img.jpg, caption="[Prompt Title]"
Step C — Cleanup:
exec: rm /tmp/prompt_img.jpg
Do this for each of the 3 recommended prompts — one image per prompt.
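The three steps per prompt can be assembled programmatically. This sketch only builds the command strings; the `message send ...` line is illustrative pseudosyntax for the message tool, not a real CLI invocation:

```python
def image_steps(source_media: list, title: str) -> "list | None":
    """Build the download/send/cleanup steps for one prompt.
    Returns None for prompts with empty sourceMedia (rule: skip them)."""
    if not source_media:
        return None
    url = source_media[0]
    return [
        f'curl -fsSL "{url}" -o /tmp/prompt_img.jpg',                 # Step A: download
        f'message send media=/tmp/prompt_img.jpg caption="{title}"',  # Step B: send (illustrative tool syntax)
        "rm /tmp/prompt_img.jpg",                                     # Step C: cleanup
    ]
```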
If the message tool is unavailable, embed the images in your response instead: one image per prompt (use `sourceMedia[0]`). Never skip this — images are the core value of this skill.