Claude Nano Banana Pro skill: 10 image use cases
Ten real images — blog hero, app icon set, storybook panels, product mockup, palette swap, e-commerce product shot, sketch-to-render interior, infographic, banner batch, logo refinement — each as a single Claude prompt against Gemini 3 Pro Image (gemini-3-pro-image-preview).
Already know what skills are? Skip to the cookbook. First time? Read the explainer then come back. Need the install? It’s on the /skills/nano-banana-pro page.

On this page
- What this skill does
- The cookbook
- Install + README
- Watch it built
- 01 · Blog hero with rendered headline text
- 02 · App icon set, four sizes, one consistent style
- 03 · Storybook illustrations with one consistent character
- 04 · Product mockup — hero shot in real context
- 05 · Palette swap on an existing image
- 06 · Photoreal product shot for an e-commerce listing
- 07 · Architectural / interior rendering from a sketch
- 08 · Infographic with embedded text and data labels
- 09 · Brand-coloured marketing banner batch
- 10 · Logo concept sheet with iterative refinement
- Community signal
- The contrarian take
- Real images shipped
- Gotchas
- Pairs well with
- FAQ
- Sources
What this skill actually does
Sixty seconds of context before the cookbook — what the Nano Banana Pro skill is, what Claude returns when you invoke it, and the one thing it does NOT do for you.
“Generate and edit images using Google's Nano Banana Pro (Gemini 3 Pro Image) API.”
— garg-aayush · nano-banana-pro SKILL.md · /skills/nano-banana-pro
What Claude returns
Calls Google's gemini-3-pro-image-preview model through the Gemini API, decodes the returned inline_data.data bytes, and writes a timestamped PNG into your current working directory. Supports text-to-image generation and image-to-image editing via --input-image (up to 14 references for character consistency), aspect-ratio strings (1:1, 16:9, 9:16, 4:3, 3:4, 21:9, etc.), and --resolution 1K|2K|4K. Returns the saved file path plus the model's text response describing what it rendered.
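Under the hood this is a plain Gemini API round-trip: the response carries base64 image bytes in `inline_data.data`, and the skill decodes them into a timestamped PNG. A minimal sketch of that decode-and-save step, with the network call stubbed out so only the byte handling is shown (the `save_inline_png` helper and fake payload are illustrative, not part of the skill):

```python
import base64
import datetime
import pathlib

def save_inline_png(data: bytes, prefix: str = "image") -> str:
    """Write decoded inline_data image bytes to a timestamped PNG, as the skill does."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
    path = pathlib.Path(f"{stamp}-{prefix}.png")
    path.write_bytes(data)
    return str(path)

# The real response carries base64 in candidates[0].content.parts[n].inline_data.data;
# a stand-in payload keeps this sketch runnable without an API key or billing.
fake_inline_data = base64.b64encode(b"\x89PNG\r\n\x1a\n...not a real image...").decode()
print(save_inline_png(base64.b64decode(fake_inline_data), "demo"))
```

The timestamp prefix matches the filenames you'll see in the cookbook output snippets below.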
What it does NOT do
It does not provision your Google AI Studio key or enable billing — you still need GEMINI_API_KEY in the shell and a billing-enabled Google Cloud project before any prompt will run; the free tier silently falls back to the older Nano Banana model.
How you trigger it
- "generate an image of …"
- "edit this image — change the background to …"
- "create a 4K hero illustration showing …"
Cost when idle
About 120 tokens of skill metadata stay loaded per turn. The full SKILL.md and the generate_image.py script are only read when Claude actually decides to render an image, so day-to-day chat cost is unchanged.
One naming note. Nano Banana Pro is the marketing codename Google uses for Gemini 3 Pro Image (model ID gemini-3-pro-image-preview). The skill speaks to that model directly, and it is the only Google image model that does multi-step reasoning, 4K output, character consistency across up to 14 reference images, and reliable in-image text. Gemini 2.5 Flash Image (the original Nano Banana) and Gemini 3.1 Flash Image (Nano Banana 2) are faster and cheaper, but lose those four properties.
The cookbook
Each entry below is one image you could ship today. They run in the order I’d teach them — the early ones (hero, icon set, storybook) lean on the basic generation surface, the later ones (palette swap, sketch-to-illustration, iterative refinement) use the edit-image API and the up-to-14 reference-image conditioning. Every entry pairs with one or two skills or MCP servers you already have on mcp.directory.
Install + README
If the skill isn’t on your machine yet, here’s the one-liner. The full install panel (Codex, Copilot, Antigravity variants) is on the skill page. You also need a GEMINI_API_KEY from Google AI Studio with billing enabled before any of the cookbook prompts will run.
One-line install · by garg-aayush
mkdir -p .claude/skills/nano-banana-pro && curl -L -o skill.zip "https://mcp.directory/api/skills/download/50" && unzip -o skill.zip -d .claude/skills/nano-banana-pro && rm skill.zip
Installs to .claude/skills/nano-banana-pro
Watch it built
A clean walkthrough of what Gemini 3 Pro Image can do — the studio controls, the 4K output, grounding via Google Search, and the reasoning step that makes character consistency work. Useful before the cookbook because it anchors what the rendered output actually looks like.
01 · Blog hero with rendered headline text
One 16:9 hero illustration for a blog post where the post title needs to be legible inside the image — the case Nano Banana Pro is uniquely good at.
For: Marketing engineers and indie writers who keep losing an hour every post to Figma + stock photography.
The prompt
Generate a 2K, 16:9 editorial hero for the post titled "Shipping AI Agents in Production". Cinematic depth-of-field, midnight-navy background, a glowing teal flow-arc behind floating UI cards. Render the title text crisply along the lower third in a clean geometric sans (Inter Display or similar). Use the nano-banana-pro skill, save as hero.png in the current folder.
What the run looks like
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "editorial hero, 16:9, midnight-navy bg, teal flow-arc, floating UI cards, render title 'Shipping AI Agents in Production' in Inter Display along lower third" \
--filename 2026-04-28-14-22-09-hero-shipping-ai-agents.png \
--resolution 2K
→ Saved: ./2026-04-28-14-22-09-hero-shipping-ai-agents.png (1920×1080, 1.4 MB)
Model: gemini-3-pro-image-preview · aspect_ratio=16:9 · image_size=2K
One-line tweak
If the headline glyphs come out smudged, re-prompt with the exact font name and an explicit "sharp letterforms, no kerning artifacts" clause — Pro respects font-family hints surprisingly well.
02 · App icon set, four sizes, one consistent style
Generate a square app icon and three exact-pixel resizes (1024, 512, 192, 64) that stay visually coherent.
For: Solo iOS / Android devs about to ship an MVP and not paying for a designer round-trip.
The prompt
Generate a 1:1 square app icon for a focus-timer app. Soft gradient (deep indigo → magenta), centered glyph: a stylised hourglass made of two thin curves. Flat, no drop-shadow. Render at 1K. Then generate three more 1:1 versions tightened for 512, 192, and 64 px display — strip detail at the smaller sizes so the glyph stays readable.
What the run looks like
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "1:1 app icon, focus timer, indigo-magenta gradient, hourglass glyph, flat, no shadow" \
--filename icon-1024.png --resolution 1K
→ Saved: ./icon-1024.png (1024×1024)
# Then 3 follow-up calls with --input-image icon-1024.png and 'simplify for 512px / 192px / 64px display'
→ Saved: icon-512.png, icon-192.png, icon-64.png — all four share the same glyph silhouette
One-line tweak
For the 64 px variant always pass the 1024 as --input-image rather than re-prompting from text — the model preserves the silhouette much more reliably than re-rolling.
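If you'd rather script the three follow-up calls than issue them one at a time, a sketch that just assembles the commands (the prompt wording is illustrative; the script path is the skill's default install location):

```python
# Build the three follow-up edit commands for the smaller icon sizes,
# each conditioning on the 1024 px master via --input-image.
SCRIPT = "~/.claude/skills/nano-banana-pro/scripts/generate_image.py"

def icon_resize_cmd(size: int) -> str:
    return (
        f"uv run {SCRIPT} "
        f"--prompt 'same hourglass icon, strip detail so the glyph reads at {size}px' "
        f"--input-image icon-1024.png "
        f"--filename icon-{size}.png --resolution 1K"
    )

for size in (512, 192, 64):
    print(icon_resize_cmd(size))
```

Pipe the printed lines into your shell, or run them via subprocess once you've eyeballed them.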
03 · Storybook illustrations with one consistent character
Eight illustrations across a children's-book story where the same character (a small fox in a yellow coat) stays recognisable page-to-page.
For: Indie authors and parents who want a printable picture book without hiring an illustrator.
The prompt
Pass 1: generate the canonical character — "a small fox in a yellow coat, watercolor style, gentle expression", 1:1, 2K, save as fox-ref.png. Then for each scene call the skill with --input-image fox-ref.png and a per-page prompt: "the fox stands at a forest crossroads at dusk", "the fox crosses a wooden bridge in the rain", etc. Keep watercolor style, 4:3 aspect, 1K.
What the run looks like
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "the fox crosses a wooden bridge in the rain, watercolor, 4:3" \
--input-image fox-ref.png \
--filename page-04-bridge.png --resolution 1K
→ Saved: ./page-04-bridge.png — same fox silhouette and coat colour as fox-ref.png
Model: gemini-3-pro-image-preview · 1 reference image used (max 14 per call)
One-line tweak
If face drift creeps in by page 6, stack two references on the next call — the canonical fox-ref.png plus the most recent in-style page. Pro will average the two and snap back.
04 · Product mockup — hero shot in real context
Drop a product (mug, t-shirt, packaged box) into a believable lifestyle scene without a photo studio.
For: E-commerce founders launching a Shopify store and skipping the $1,200 product-shoot quote.
The prompt
Use --input-image product-flat.png (the bare product on white). Generate a lifestyle shot: "the mug sits on a sun-lit oak desk with a half-open notebook and a houseplant blurred in the background, golden-hour light, photorealistic, 3:2", 2K. Keep the product geometry exactly — no logo distortion.
What the run looks like
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "the mug sits on a sun-lit oak desk, half-open notebook, blurred houseplant, golden-hour, photoreal, 3:2" \
--input-image product-flat.png \
--filename mug-lifestyle-01.png --resolution 2K
→ Saved: ./mug-lifestyle-01.png (3072×2048) — logo intact, depth-of-field correct
Tip: pass --input-image to lock product geometry; text-only prompts re-imagine the product
One-line tweak
If the model re-paints your logo, lower the prompt's scene complexity and add "do not modify the printed logo on the mug" — the constraint is respected better than you'd expect.
05 · Palette swap on an existing image
Take one finished illustration and produce three brand-coloured variants (e.g. for A/B-testing a landing-page hero).
For: Growth engineers running palette tests without re-commissioning the artwork.
The prompt
--input-image hero-original.png. Generate three variants: (1) "recolour to brand palette: #0E1B3F base, #1AC4B7 accent, keep all geometry", (2) "warm sunset palette: #1B0E2A base, #FF8A3D accent", (3) "high-contrast monochrome, near-black + single neon-lime accent". 16:9, 2K each.
What the run looks like
$ for VARIANT in cool-teal warm-sunset mono-lime; do
uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "recolour to ${VARIANT} palette, keep all geometry and composition exactly" \
--input-image hero-original.png \
--filename hero-${VARIANT}.png --resolution 2K
done
→ 3 files written, geometry preserved, only palette shifted
One-line tweak
Hex codes in the prompt outperform colour-name descriptors. Pro reads "#0E1B3F" more reliably than "deep navy".
06 · Photoreal product shot for an e-commerce listing
A clean white-background product photo (the kind Amazon and Shopify require) from a single rough phone snap.
For: Etsy and Shopify sellers without a lightbox.
The prompt
--input-image phone-snap.jpg. "Photorealistic e-commerce product shot, pure white seamless background, soft top-down studio light, no shadow under product, 1:1 square, 2K, sharp focus on product, remove any background clutter from the source."
What the run looks like
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "photoreal e-commerce shot, pure white seamless bg, soft top-down studio light, 1:1, sharp focus, no clutter" \
--input-image phone-snap.jpg \
--filename listing-front.png --resolution 2K
→ Saved: ./listing-front.png (2048×2048) — Amazon-spec compliant
Watermark: SynthID is baked in (invisible) — that's a hard requirement to know about for stock-photo licensing
One-line tweak
Generate four shots (front, three-quarter, top-down, detail) using the same --input-image. Pro keeps the product geometry locked across all four when you reuse the reference.
07 · Architectural / interior rendering from a sketch
Turn a hand-drawn floor plan or rough perspective into a photoreal interior visualisation.
For: Architects, interior designers, and Airbnb hosts pre-renovation.
The prompt
--input-image sketch-livingroom.jpg. "Photorealistic interior, mid-century modern living room derived from this sketch, walnut floor, off-white walls, warm afternoon light through tall windows, single large bouclé sofa, terrazzo coffee table. 16:9, 4K."
What the run looks like
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "photoreal mid-century modern living room from this sketch, walnut floor, bouclé sofa, terrazzo coffee table, warm afternoon light, 16:9" \
--input-image sketch-livingroom.jpg \
--filename livingroom-render-4k.png --resolution 4K
→ Saved: ./livingroom-render-4k.png (5632×3072, ~24 MB)
Cost: ~$0.24 per 4K render — set a Google Cloud budget alert before iterating
One-line tweak
4K is where Pro genuinely beats every other model right now, but it doubles per-image cost. Iterate at 2K, then re-render the chosen composition once at 4K.
08 · Infographic with embedded text and data labels
A single 4:5 social-ready infographic with five labelled segments and a headline — text rendered crisply inside the image.
For: Newsletter authors, B2B marketers, and policy researchers who keep paying Canva for the same 5-segment template.
The prompt
Generate a 4:5 infographic, headline "5 Causes of MCP Server Bloat", five horizontal rows each with a 3-word label and a 12-word body, light card backgrounds on a navy gradient, accent colour #1AC4B7, render all text crisply in Inter, 2K.
What the run looks like
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "4:5 infographic, headline '5 Causes of MCP Server Bloat', 5 rows, navy gradient, accent #1AC4B7, all text crisp in Inter" \
--filename infographic-mcp-bloat.png --resolution 2K
→ Saved: ./infographic-mcp-bloat.png (1638×2048) — 5 labels + body copy all legibly rendered
Note: text rendering is the killer feature here — no other API model nails 60+ characters of in-image type
One-line tweak
Quote your label and body copy verbatim inside the prompt with quotes — Pro respects literal strings far more reliably than paraphrased descriptions.
09 · Brand-coloured marketing banner batch
Five 1200×628 (1.91:1) social banners — one per platform — using the same brand palette and a rotating tagline.
For: Solo marketers running a launch week across LinkedIn, X, Facebook, Instagram, Threads.
The prompt
Generate five banners at 16:9 (then crop to 1.91:1 with Pillow). Brand palette: #0E1B3F base, #1AC4B7 accent, white text. Each banner shares the product wordmark on the left and rotates the tagline: "Ship faster" / "Ship safer" / "Ship together" / "Ship smarter" / "Ship calmly". 2K each.
What the run looks like
$ for TAG in "Ship faster" "Ship safer" "Ship together" "Ship smarter" "Ship calmly"; do
uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "16:9 marketing banner, palette #0E1B3F + #1AC4B7, wordmark left, tagline right: '${TAG}'" \
--filename banner-${TAG// /-}.png --resolution 2K
done
# Then centre-crop each banner to 1.91:1 and resize to 1200×628 — the only common social ratio Pro doesn't natively output
$ python -c "from PIL import Image; import glob; [im.crop((0, (im.height - round(im.width/1.91))//2, im.width, (im.height + round(im.width/1.91))//2)).resize((1200, 628)).save(f) for f in glob.glob('banner-*.png') for im in [Image.open(f)]]"
One-line tweak
1.91:1 (LinkedIn / Open Graph) isn't in Pro's native aspect-ratio list — generate at 16:9, centre-crop to 1.91:1 with Pillow, then downscale to 1200×628. A 2K source survives the crop-and-resize with no visible quality loss.
10 · Logo concept sheet with iterative refinement
Six logo concepts on one sheet, then iterate the chosen concept through three refinement passes (simpler, stronger contrast, monogram-only).
For: Founders pre-brand-identity work, agencies pitching concept sheets, and side-project naming sessions.
The prompt
Pass 1: "Generate a 6-cell concept sheet for a logo for 'Loomwave', a real-time data-streaming startup. Six distinct directions: geometric, wordmark, monogram, abstract symbol, line-illustration, badge. 2:3, 2K." Pass 2: pick concept #4 by passing it as --input-image and prompting "isolate concept 4 only, simplify, increase stroke weight, 1:1". Pass 3: "reduce to monogram letterform only, single accent colour".
What the run looks like
# Pass 1 — concept sheet
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "6-cell logo concept sheet for 'Loomwave', directions: geometric, wordmark, monogram, abstract, line-illustration, badge" \
--filename loomwave-sheet.png --resolution 2K
→ Saved: ./loomwave-sheet.png
# Pass 3 — final monogram, refined twice
$ uv run ~/.claude/skills/nano-banana-pro/scripts/generate_image.py \
--prompt "reduce to monogram letterform only, single #1AC4B7 accent, 1:1" \
--input-image loomwave-iter-2.png \
--filename loomwave-final.png --resolution 2K
One-line tweak
If the iteration drifts an attribute you wanted to keep (e.g. you asked for thicker strokes and the colour also shifted), split the prompt into two single-edit passes — Pro respects single-axis edits much more cleanly than compound ones.
Community signal
Three voices from people putting Gemini 3 Pro Image through real workloads. The first is the clearest comparative endorsement; the second is the daily-use story; the third is the launch-day pricing pushback that every team feels on day two.
“Nano Banana Pro aka gemini-3-pro-image-preview is the best available image generation model. … this is an astonishingly capable image generation model.”
Simon Willison · Blog
Launch-day review on simonwillison.net. Willison tested 4K output (a 24.1 MB 5632×3072 image) and infographic text rendering before declaring it the new default.
“Nano Banana Pro drops today! I've had about a week with it now and it's really impressive. Combine up to 6 images into a single image. … The best text rendering in an image generator we've seen so far.”
Matt Wolfe (@mreflow) · X / Twitter
Pre-launch impressions thread on X from a creator with early access. The text-rendering line became the most-quoted endorsement on launch day.
“One annoying aspect of the thinking step is that it makes generation time inconsistent: I've had 2K generations take anywhere from 20 seconds to one minute, sometimes even longer during peak hours.”
Max Woolf (minimaxir) · Blog
December 2025 deep-dive review. Woolf is otherwise very positive on quality but lands hard on the latency/cost trade — the part every team feels on day two.
The contrarian take
Not everyone is keeping Pro as their default. The most precise criticism on the launch threads is from Max Woolf:
“The increased cost and generation time is a severe constraint on many fun use cases outside of one-off generations.”
Max Woolf · Blog
From Max Woolf's December 2025 deep-dive review.
Fair. Gemini 3 Pro Image runs a thinking trace before every render, so a 2K generation is closer to 8–15 seconds than the sub-second feel of the Flash variants. The cookbook leans into that trade explicitly: use Pro for hero illustrations, character consistency, and text-heavy infographics where reasoning beats speed; reach for Gemini 2.5 Flash Image or the gemini-imagegen skill when you want quick variations and bulk edits. The skill is roughly the same shape; only the model string and the per-image price change.
One more alternative worth naming: there is no single canonical Nano Banana Pro MCP server, but several Gemini-image MCP servers in the catalog will route a tool call to gemini-3-pro-image-preview — Gemini Image Generation, Gemini 2.5 Flash Image, and the broader Replicate server for the model-router pattern. The trade-off is the usual skill-vs-MCP one: the skill is ~120 idle tokens, an MCP’s tool schemas load every turn. Pick the MCP only when multiple AI clients share one billing project — otherwise stick with the nano-banana-pro skill in this cookbook.
Real images shipped with Nano Banana Pro
Concrete examples from public reviews and the launch wave. None of these used the Claude skill specifically — they’re here to show what production Pro outputs look like, so you have a target shape in mind when you write the prompt.
- Simon Willison — 4K test render, 5632×3072 px, 24.1 MB output via the Gemini API
- Max Woolf — typography and Google Fonts surfacing test, posted on X
- Matt Wolfe — 6-image composite + text rendering walkthrough on X
- Rohan Paul — character-consistency selfie batch via reference images
- Digital Trends (Sara Cuddy) — 20 social-media posts created with Nano Banana Pro, LinkedIn impressions up 57% during the test
- Jakob Nielsen — visual-communication essays illustrated with Nano Banana Pro infographics
Gotchas (the four that bite)
Sourced from the Nano Banana Pro launch HN thread and Max Woolf’s December 2025 review.
The thinking trace adds latency to every call
Gemini 3 Pro Image always runs a reasoning step before rendering — 8–15 seconds for 2K, longer for 4K. There is no flag to skip it. Plan your loop accordingly: do not generate a 100-image bulk batch synchronously.
Pricing is per-image, not per-token
1K and 2K cost ~13.4 cents each, 4K is ~24 cents, image inputs are ~0.11 cents each. A 10-image cookbook run is ~$1.50 in API spend before any retries — set a budget alert in Google Cloud before you start iterating.
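Those numbers compose into a quick pre-flight estimate. A back-of-envelope sketch using the approximate per-image prices quoted above (the prices and the 10-image mix are illustrative, not an official rate card):

```python
# Approximate per-image prices from the paragraph above (USD).
OUTPUT_PRICE = {"1K": 0.134, "2K": 0.134, "4K": 0.24}
INPUT_IMAGE_PRICE = 0.0011

def estimate_usd(run):
    """run: iterable of (resolution, n_input_images), one entry per generation."""
    return sum(OUTPUT_PRICE[res] + n * INPUT_IMAGE_PRICE for res, n in run)

# An illustrative 10-image cookbook pass: seven 2K text-to-image calls,
# two 2K edits with one reference image each, one 4K render with one reference.
cookbook = [("2K", 0)] * 7 + [("2K", 1)] * 2 + [("4K", 1)]
print(f"~${estimate_usd(cookbook):.2f} before retries")  # → ~$1.45 before retries
```

Retries and reference-heavy calls push that toward the ~$1.50 figure above — hence the budget alert.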
The model bakes a SynthID watermark into every output
Even after manual editing, Google's SynthID detector still recognises the image as AI-generated. If your downstream is provenance-sensitive (stock photo licensing, legal evidence), this is a hard no — use a different pipeline.
Multi-edit prompts drift on attributes you did not name
Asking for two simultaneous edits in one prompt sometimes shifts a third attribute you wanted to keep. If you see drift, split into two single-edit passes — use case 10 has the exact fallback.
Pairs well with
Curated to match the cookbook’s actual integrations: the visual-output skills the cookbook reaches for (svg-precision, drawio-diagrams-enhanced, excalidraw-architect, comfyui-workflow-builder, mobile-ios-design) plus the Gemini-image and vision-routing MCP servers the longer use cases lean on.
Two posts that compose well with this cookbook: What are Claude Code skills? covers the underlying mechanism, and Claude Code best practices covers the orchestration patterns the longer use cases (3, 9, 10) lean on.
Frequently asked questions
Is there a Nano Banana Pro MCP server I can use instead of the skill?
Yes, several. The mcp.directory catalog lists Gemini Image Generation, Gemini 2.5 Flash Image, and Replicate as Gemini-image-capable MCP servers. The trade-off is the usual one: the nano-banana-pro skill costs about 120 idle tokens, while an MCP server's tool schemas load every turn of the conversation. Reach for an MCP only when multiple AI clients (Claude Code, Cursor, an internal agent) need to share a billing project — otherwise the skill is the cheaper composition for solo image work.
What is the difference between Nano Banana, Nano Banana 2, and Nano Banana Pro?
Three Google models, three names, one API surface. 'Nano Banana' is gemini-2.5-flash-image — fast, cheap, original. 'Nano Banana 2' is gemini-3.1-flash-image-preview — faster, sharper text, the everyday default. 'Nano Banana Pro' is gemini-3-pro-image-preview — slowest, most expensive (~13 cents per 1K/2K, 24 cents per 4K), but the only one that does multi-step reasoning, character consistency across 14 reference images, and reliable text rendering. Use Pro for hero illustrations and asset variations; use Flash for bulk variations.
Do I need a paid Google AI Studio account to run the Nano Banana Pro skill?
Yes. Generate an API key at aistudio.google.com and enable billing on the linked Google Cloud project. The free tier reverts to the original Nano Banana model. Once billing is on, set GEMINI_API_KEY in your shell and the bundled generate_image.py script (invoked via `uv run`) picks it up — no per-prompt key handling.
What aspect ratios and resolutions does Gemini 3 Pro Image support?
Aspect ratios: 1:1, 2:3, 3:2, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9, 1:4, 4:1, 1:8, 8:1. Image sizes: 1K (~1024 px on the long edge), 2K (~2048 px), and 4K (~4096 px). 1K and 2K cost the same; 4K roughly doubles the price. The skill exposes resolution as `--resolution 1K|2K|4K`. If you need an unsupported ratio (1.91:1 for LinkedIn), generate at 16:9 and crop with Pillow — use case 9 has the exact code.
Can the skill edit an existing image, or only generate new ones?
Both. Pass `--input-image path/to/source.png` and the prompt becomes an edit instruction — change the background, swap the palette, composite onto a device frame. The model can take up to 14 reference images in one call (use case 3 covers character consistency). Every output is watermarked with SynthID, which is detectable even after manual editing — plan for that if your downstream is provenance-sensitive.
Why is the skill called Nano Banana Pro when the API model string is gemini-3-pro-image-preview?
Google ships the marketing name (Nano Banana Pro) and the engineering name (Gemini 3 Pro Image, model ID gemini-3-pro-image-preview) in parallel. The skill SKILL.md references both so search queries like 'nano banana pro skill', 'nano banana skill', and 'gemini 3 pro image claude skill' all route to the same install. If you see the model string in API errors, that's the same product.
How long does a single Pro image generation take, and is it slower than Flash?
Yes, noticeably. Gemini 3 Pro Image runs a thinking trace before every render — 8 to 15 seconds for 2K, longer for 4K. Gemini 2.5 Flash Image returns in under two seconds. Plan accordingly: use Pro for the hero shots in the cookbook above, switch to Flash for bulk variations in use cases 4 and 9 if latency starts to bite.
Why are 'banana skill claude' and 'banana claude skill' showing impressions but no clicks?
Those queries are people searching for 'banana' in some other context (food, kids' apps, design metaphors) and Google is serving this page as a partial match. The actual product here is the Gemini 3 Pro Image skill that Google internally codenamed Nano Banana Pro — if you typed those queries hoping for an unrelated banana tool, this page is not it. Stay if you wanted Google's image-generation skill for Claude.
Sources
Primary
- garg-aayush nano-banana-pro SKILL.md (the skill manifest)
- Gemini API: Image generation reference
- Google blog: Nano Banana Pro launch
- Google for Developers: Build with Gemini 3 Pro Image
- DeepMind model card: Gemini 3 Pro Image (Nano Banana Pro)
Community
- Simon Willison — Blog
- Matt Wolfe (@mreflow) — X / Twitter
- Max Woolf (minimaxir) — Blog
- ceroxylon — Hacker News
- Rohan Paul (@rohanpaul_ai) — X / Twitter
- Max Woolf (@minimaxir) — X / Twitter