claudish-usage
CRITICAL - Guide for using Claudish CLI ONLY through sub-agents to run Claude Code with any AI model (OpenRouter, Gemini, OpenAI, local models). NEVER run Claudish directly in main context unless user explicitly requests it. Use when user mentions external AI models, Claudish, OpenRouter, Gemini, OpenAI, Ollama, or alternative models. Includes mandatory sub-agent delegation patterns, agent selection guide, file-based instructions, and strict rules to prevent context window pollution.
Install
mkdir -p .claude/skills/claudish-usage && curl -L -o skill.zip "https://mcp.directory/api/skills/download/1278" && unzip -o skill.zip -d .claude/skills/claudish-usage && rm skill.zip
Installs to .claude/skills/claudish-usage
About this skill
Claudish Usage Skill
Version: 2.0.0 Purpose: Guide AI agents on how to use Claudish CLI to run Claude Code with any AI model Status: Production Ready
⚠️ CRITICAL RULES - READ FIRST
🚫 NEVER Run Claudish from Main Context
Claudish MUST ONLY be run through sub-agents unless the user explicitly requests direct execution.
Why:
- Running Claudish directly pollutes main context with 10K+ tokens (full conversation + reasoning)
- Destroys context window efficiency
- Makes main conversation unmanageable
When you can run Claudish directly:
- ✅ User explicitly says "run claudish directly" or "don't use a sub-agent"
- ✅ User is debugging and wants to see full output
- ✅ User specifically requests main context execution
When you MUST use sub-agent:
- ✅ User says "use Grok to implement X" (delegate to sub-agent)
- ✅ User says "ask GPT-5.3 to review X" (delegate to sub-agent)
- ✅ User mentions any model name without "directly" (delegate to sub-agent)
- ✅ Any production task (always delegate)
📋 Workflow Decision Tree
User Request
↓
Does it mention Claudish/OpenRouter/model name? → NO → Don't use this skill
↓ YES
↓
Does user say "directly" or "in main context"? → YES → Run in main context (rare)
↓ NO
↓
Find appropriate agent or create one → Delegate to sub-agent (default)
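The default path above relies on the file-based instruction pattern: the main agent writes the task brief to a file and hands the sub-agent only the file path, so the full brief never enters the main context. A minimal sketch, with a placeholder brief and the claudish invocation commented out (it would be run by the sub-agent, not here):

```shell
# Sketch of the file-based instruction pattern. The brief content is a
# placeholder; adapt it to the actual task.
INSTRUCTIONS=$(mktemp /tmp/claudish-task.XXXXXX)
cat > "$INSTRUCTIONS" <<'EOF'
Task: implement user authentication
Constraints: follow existing project patterns; write tests first
Output: a short summary of changes, not full diffs
EOF
# The sub-agent would then run, for example:
#   claudish --model x-ai/grok-code-fast-1 "Follow the instructions in $INSTRUCTIONS"
echo "Brief written to $INSTRUCTIONS"
```

The main conversation only ever sees the summary the sub-agent returns, not the model's full reasoning.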
🤖 Agent Selection Guide
Step 1: Find the Right Agent
When user requests Claudish task, follow this process:
- Check for existing agents that support proxy mode or external model delegation
- If no suitable agent exists:
- Suggest creating a new proxy-mode agent for this task type
- Offer to proceed with the generic general-purpose agent if user declines
- If user declines agent creation:
- Warn about context pollution
- Ask if they want to proceed anyway
Step 2: Agent Type Selection Matrix
| Task Type | Recommended Agent | Fallback | Notes |
|---|---|---|---|
| Code implementation | Create coding agent with proxy mode | general-purpose | Best: custom agent for project-specific patterns |
| Code review | Use existing code review agent + proxy | general-purpose | Check if plugin has review agent first |
| Architecture planning | Use existing architect agent + proxy | general-purpose | Look for architect or planner agents |
| Testing | Use existing test agent + proxy | general-purpose | Look for test-architect or tester agents |
| Refactoring | Create refactoring agent with proxy | general-purpose | Complex refactors benefit from specialized agent |
| Documentation | general-purpose | - | Simple task, generic agent OK |
| Analysis | Use existing analysis agent + proxy | general-purpose | Check for analyzer or detective agents |
| Other | general-purpose | - | Default for unknown task types |
Step 3: Agent Creation Offer (When No Agent Exists)
Template response:
I notice you want to use [Model Name] for [task type].
RECOMMENDATION: Create a specialized [task type] agent with proxy mode support.
This would:
✅ Provide better task-specific guidance
✅ Be reusable for future [task type] tasks
✅ Use optimized prompting for [Model Name]
Options:
1. Create specialized agent (recommended) - takes 2-3 minutes
2. Use generic general-purpose agent - works but less optimized
3. Run directly in main context (NOT recommended - pollutes context)
Which would you prefer?
Step 4: Common Agents by Plugin
Frontend Plugin:
- typescript-frontend-dev - Use for UI implementation with external models
- frontend-architect - Use for architecture planning with external models
- senior-code-reviewer - Use for code review (can delegate to external models)
- test-architect - Use for test planning/implementation
Bun Backend Plugin:
- backend-developer - Use for API implementation with external models
- api-architect - Use for API design with external models
Code Analysis Plugin:
- codebase-detective - Use for investigation tasks with external models
No Plugin:
- general-purpose - Default fallback for any task
Step 5: Example Agent Selection
Example 1: User says "use Grok to implement authentication"
Task: Code implementation (authentication)
Plugin: Bun Backend (if backend) or Frontend (if UI)
Decision:
1. Check for backend-developer or typescript-frontend-dev agent
2. Found backend-developer? → Use it with Grok proxy
3. Not found? → Offer to create custom auth agent
4. User declines? → Use general-purpose with file-based pattern
Example 2: User says "ask GPT-5.3 to review my API design"
Task: Code review (API design)
Plugin: Bun Backend
Decision:
1. Check for api-architect or senior-code-reviewer agent
2. Found? → Use it with GPT-5.3 proxy
3. Not found? → Use general-purpose with review instructions
4. Never run directly in main context
Example 3: User says "use Gemini to refactor this component"
Task: Refactoring (component)
Plugin: Frontend
Decision:
1. No specialized refactoring agent exists
2. Offer to create component-refactoring agent
3. User declines? → Use typescript-frontend-dev with proxy
4. Still no agent? → Use general-purpose with file-based pattern
Overview
Claudish is a CLI tool that allows running Claude Code with any AI model via prefix-based routing. Supports OpenRouter (100+ models), direct Google Gemini API, direct OpenAI API, and local models (Ollama, LM Studio, vLLM, MLX).
Key Principle: ALWAYS use Claudish through sub-agents with file-based instructions to avoid context window pollution.
What is Claudish?
Claudish (Claude-ish) is a proxy tool that:
- ✅ Runs Claude Code with any AI model via prefix-based routing
- ✅ Supports OpenRouter, Gemini, OpenAI, and local models
- ✅ Uses local API-compatible proxy server
- ✅ Supports 100% of Claude Code features
- ✅ Provides cost tracking and model selection
- ✅ Enables multi-model workflows
Model Routing
| Prefix | Backend | Example |
|---|---|---|
| (none) | OpenRouter | openai/gpt-5.3 |
| g/, gemini/ | Google Gemini | g/gemini-2.0-flash |
| oai/, openai/ | OpenAI | oai/gpt-4o |
| ollama/ | Ollama | ollama/llama3.2 |
| lmstudio/ | LM Studio | lmstudio/model |
| http://... | Custom | http://localhost:8000/model |
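The routing table reads as a simple prefix match. The following is an illustration only, not Claudish's actual implementation; note that the table lists openai/gpt-5.3 as an OpenRouter model ID even though openai/ also appears as a direct-API prefix, and this sketch follows the prefix rows:

```shell
# Illustrative prefix match mirroring the routing table above.
backend_for() {
  case "$1" in
    g/*|gemini/*)       echo "Google Gemini" ;;
    oai/*)              echo "OpenAI" ;;
    ollama/*)           echo "Ollama" ;;
    lmstudio/*)         echo "LM Studio" ;;
    http://*|https://*) echo "Custom" ;;
    *)                  echo "OpenRouter" ;;
  esac
}
backend_for g/gemini-2.0-flash     # prints: Google Gemini
backend_for x-ai/grok-code-fast-1  # prints: OpenRouter
```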
Use Cases:
- Run tasks with different AI models (Grok for speed, GPT-5.3 for reasoning, Gemini for large context)
- Use direct APIs for lower latency (Gemini, OpenAI)
- Use local models for free, private inference (Ollama, LM Studio)
- Compare model performance on same task
- Reduce costs with cheaper models for simple tasks
Requirements
System Requirements
- Claudish CLI - Install with: npm install -g claudish or bun install -g claudish
- Claude Code - Must be installed
- At least one API key (see below)
Environment Variables
# API Keys (at least one required)
export OPENROUTER_API_KEY='sk-or-v1-...' # OpenRouter (100+ models)
export GEMINI_API_KEY='...' # Direct Gemini API (g/ prefix)
export OPENAI_API_KEY='sk-...' # Direct OpenAI API (oai/ prefix)
# Placeholder (required to prevent Claude Code dialog)
export ANTHROPIC_API_KEY='sk-ant-api03-placeholder'
# Custom endpoints (optional)
export GEMINI_BASE_URL='https://...' # Custom Gemini endpoint
export OPENAI_BASE_URL='https://...' # Custom OpenAI/Azure endpoint
export OLLAMA_BASE_URL='http://...' # Custom Ollama server
export LMSTUDIO_BASE_URL='http://...' # Custom LM Studio server
# Default model (optional)
export CLAUDISH_MODEL='openai/gpt-5.3' # Default model
Get API Keys:
- OpenRouter: https://openrouter.ai/keys (free tier available)
- Gemini: https://aistudio.google.com/apikey
- OpenAI: https://platform.openai.com/api-keys
- Local models: No API key needed
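The environment can be sanity-checked before launching. A minimal preflight sketch, using the key names listed above and the same placeholder Anthropic key:

```shell
# Preflight sketch: require at least one backend key, and default the
# placeholder Anthropic key so Claude Code does not prompt for login.
preflight() {
  if [ -z "${OPENROUTER_API_KEY:-}" ] && [ -z "${GEMINI_API_KEY:-}" ] \
     && [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "error: set OPENROUTER_API_KEY, GEMINI_API_KEY, or OPENAI_API_KEY" >&2
    return 1
  fi
  : "${ANTHROPIC_API_KEY:=sk-ant-api03-placeholder}"
  export ANTHROPIC_API_KEY
  echo "environment OK"
}
```

Usage might look like `preflight && claudish --model openai/gpt-5.3 "task"`.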
Quick Start Guide
Step 1: Install Claudish
# With npm (works everywhere)
npm install -g claudish
# With Bun (faster)
bun install -g claudish
# Verify installation
claudish --version
Step 2: Get Available Models
# List ALL OpenRouter models grouped by provider
claudish --models
# Fuzzy search models by name, ID, or description
claudish --models gemini
claudish --models "grok code"
# Show top recommended programming models (curated list)
claudish --top-models
# JSON output for parsing
claudish --models --json
claudish --top-models --json
# Force update from OpenRouter API
claudish --models --force-update
Step 3: Run Claudish
Interactive Mode (default):
# Shows model selector, persistent session
claudish
Single-shot Mode:
# One task and exit (requires --model)
claudish --model x-ai/grok-code-fast-1 "implement user authentication"
With stdin for large prompts:
# Read prompt from stdin (useful for git diffs, code review)
git diff | claudish --stdin --model openai/gpt-5-codex "Review these changes"
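For larger prompts, the input can also be assembled in a file first and then piped in. A sketch, with a placeholder instruction line and the claudish invocation commented out:

```shell
# Assemble a review prompt from a header plus the current diff, then
# pipe it to claudish via --stdin (invocation shown commented out).
PROMPT=$(mktemp /tmp/review-prompt.XXXXXX)
{
  echo "Review these changes for correctness and security issues:"
  git diff 2>/dev/null || echo "(no git diff available)"
} > "$PROMPT"
# cat "$PROMPT" | claudish --stdin --model openai/gpt-5-codex "Review these changes"
echo "Prompt assembled at $PROMPT"
```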
Recommended Models
Top Models for Development (v3.1.1):
| Model | Provider | Best For |
|---|---|---|
| openai/gpt-5.3 | OpenAI | Default - Most advanced reasoning |
| minimax/minimax-m2.1 | MiniMax | Budget-friendly, fast |
| z-ai/glm-4.7 | Z.AI | Balanced performance |
| google/gemini-3-pro-preview | Google | 1M context window |
| moonshotai/kimi-k2-thinking | MoonShot | Extended thinking |
| deepseek/deepseek-v3.2 | DeepSeek | Code specialist |
| qwen/qwen3-vl-235b-a22b-thinking | Alibaba | Vision + reasoning |
Direct API Options (lower latency):
| Model | Backend | Best For |
|---|---|---|
| g/gemini-2.0-flash | Gemini | Fast tasks, large context |
| oai/gpt-4o | OpenAI | General purpose |
| ollama/llama3.2 | Local | Free, private |
Get Latest Models:
# List all models (auto-updates every 2 days)
claudish --models
# Search for specific m
---
*Content truncated.*