echokit-config-generator
Generate config.toml for EchoKit servers with interactive setup for ASR, TTS, LLM services, MCP servers, API key entry, and server launch
Install
mkdir -p .claude/skills/echokit-config-generator && curl -L -o skill.zip "https://mcp.directory/api/skills/download/3137" && unzip -o skill.zip -d .claude/skills/echokit-config-generator && rm skill.zip
Installs to .claude/skills/echokit-config-generator
About this skill
EchoKit Config Generator
Overview
This SKILL generates config.toml files for EchoKit servers through an interactive five-phase process that includes configuration generation, API key entry, and server launch.
Announce at start: "I'm using the EchoKit Config Generator to create your config.toml."
Handling User Input
Throughout this SKILL, when asking questions with default values:
Always phrase it as: "Question? (default: {VALUE})"
Handle responses:
- Empty response or Enter → Use the default value
- User provides value → Use user's value
- User enters "default" → Use the default value explicitly
Example:
AI: How many messages should it remember? (default: 5)
User: [Enter]
AI: [Uses 5]
AI: How many messages should it remember? (default: 5)
User: 10
AI: [Uses 10]
This applies to ALL questions with defaults throughout the SKILL.
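The default-handling rules above can be sketched as a small helper. This is a minimal illustration, not part of the skill itself; the function name `resolve_answer` is hypothetical.

```python
def resolve_answer(raw: str, default: str) -> str:
    """Apply the skill's default-handling rules to a user reply."""
    answer = raw.strip()
    # Empty reply (just Enter) or the literal word "default" -> use the default.
    if answer == "" or answer.lower() == "default":
        return default
    # Otherwise keep whatever the user typed.
    return answer

print(resolve_answer("", "5"))    # 5
print(resolve_answer("10", "5"))  # 10
```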
Phase 1: Assistant Definition
Ask these questions one at a time:
- "What is your AI assistant's primary purpose? (Describe in 1-2 sentences)"
- "What tone should it have? (professional, casual, friendly, expert, or describe your own)"
- "What specific capabilities should it have?"
- Prompt with examples if needed: "code generation", "data analysis", "creative writing", "problem-solving", "teaching", etc.
- "Any response format requirements?"
- Examples: "short answers", "detailed explanations", "step-by-step", "conversational", "formal reports"
- "Any domain-specific knowledge areas?" (Optional)
- Examples: "programming", "medicine", "law", "finance", etc.
- "Any constraints or guidelines?" (Optional)
- Examples: "no bullet points", "always cite sources", "avoid jargon", "max 3 sentences"
- "Any additional instructions or preferences?" (Optional)
Generate sophisticated system prompt using this structure:
[[llm.sys_prompts]]
role = "system"
content = """
You are a {TONE} AI assistant specialized in {PURPOSE}.
## Core Purpose
{PURPOSE_EXPANDED - elaborate based on user input}
## Your Capabilities
- List the capabilities provided by user
- Each capability on its own line
- Be specific about what you can do
## Response Style
{FORMAT_REQUIREMENTS}
## Domain Knowledge
{DOMAIN_KNOWLEDGE if provided, otherwise omit this section}
## Behavioral Guidelines
{BEHAVIORS_FROM_USER}
- Add any relevant default behaviors based on tone
## Constraints
{CONSTRAINTS_FROM_USER}
- Always maintain {TONE} tone
- {RESPONSE_FORMAT_RULES}
## Additional Instructions
{ADDITIONAL_NOTES if provided}
---
Remember: Stay in character as a {TONE} {DOMAIN} assistant. Always prioritize helpfulness and accuracy.
"""
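The template above can be assembled programmatically from the Phase 1 answers. Below is a minimal sketch of that assembly; the function name `build_sys_prompt` and its parameters are hypothetical, and optional sections are simply omitted when the user gave no answer, as the template requires.

```python
def build_sys_prompt(tone, purpose, capabilities, style,
                     domain=None, constraints=None):
    """Assemble the [[llm.sys_prompts]] content block from Phase 1 answers."""
    parts = [
        f"You are a {tone} AI assistant specialized in {purpose}.",
        "## Your Capabilities",
        # Each capability on its own line, as the template specifies.
        *[f"- {c}" for c in capabilities],
        "## Response Style",
        style,
    ]
    if domain:  # optional section: omit entirely when not provided
        parts += ["## Domain Knowledge", domain]
    if constraints:
        parts += ["## Constraints", *[f"- {c}" for c in constraints]]
    parts.append(f"Remember: Stay in character as a {tone} assistant.")
    return "\n".join(parts)
```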
Enhanced defaults based on tone:
If user doesn't provide specific behaviors, use these expanded defaults:
professional:
- Provide accurate, well-researched information
- Maintain formal, business-appropriate language
- Acknowledge limitations and uncertainty
- Structure responses logically
casual:
- Be conversational and engaging
- Use natural, relaxed language
- Show personality when appropriate
- Keep interactions friendly and approachable
friendly:
- Be warm and welcoming
- Use simple, clear language
- Show empathy and understanding
- Make users feel comfortable
expert:
- Provide comprehensive technical details
- Cite sources and references when relevant
- Explain trade-offs and alternatives
- Use appropriate terminology correctly
- Acknowledge edge cases and limitations
Phase 2: Platform Selection
For each service category (ASR, TTS, LLM):
- Read platform data from platforms/{category}.yml using the Read tool
- Display available options with this format:
Available {SERVICE} Services:
1. {PLATFORM_1.name}
URL: {PLATFORM_1.url}
Model: {PLATFORM_1.model}
Get API key: {PLATFORM_1.api_key_url}
Notes: {PLATFORM_1.notes}
2. {PLATFORM_2.name}
URL: {PLATFORM_2.url}
Model: {PLATFORM_2.model}
Get API key: {PLATFORM_2.api_key_url}
Notes: {PLATFORM_2.notes}
C. Custom - Specify your own platform/model
Your choice (1-{N} or C):
User selection:
If user selects a number (1-{N}):
- Store the selected platform data from the YAML file
- Continue to next service
If user selects 'C' (Custom):
Step 1: Get platform name
- Ask: "What's the platform name?" (e.g., "groq", "deepseek", "mistral", "together")
Step 2: Auto-fetch API information
- Use WebSearch to find API documentation
- Search query: "{PLATFORM_NAME} API endpoint {SERVICE_TYPE} 2025"
  - For ASR: "speech to text API", "transcription API"
  - For TTS: "text to speech API"
  - For LLM: "chat completions API", "LLM API"
- Extract from search results:
- API endpoint URL
- API documentation URL
- Authentication method
- Default model names
Step 3: Confirm with user
Display what was found:
I found the following for {PLATFORM_NAME} {SERVICE}:
API Endpoint: {FOUND_URL}
Documentation: {FOUND_DOCS_URL}
Authentication: {FOUND_AUTH_METHOD}
Default Models: {FOUND_MODELS}
Is this correct? (y/edit)
Step 4: Gather additional details
- Ask: "What model should I use? (suggested: {FOUND_MODELS} or enter custom)"
- Ask for additional settings:
- LLM: "How many messages should it remember? (default: 5)"
- TTS: "What voice should it use? (default: default)"
- ASR: "What language? (default: en)"
Step 5: Store custom platform
name: "{PLATFORM_NAME}"
platform: "{INFERRED_TYPE from API docs or user}"
url: "{CONFIRMED_URL}"
model: "{USER_MODEL_CHOICE}"
history/voice/lang: {USER_SETTINGS}
api_key_url: "{FOUND_DOCS_URL}"
notes: "Custom {PLATFORM_NAME} - auto-configured"
- Continue to next service
Load platforms in order:
- ASR from platforms/asr.yml
- TTS from platforms/tts.yml
- LLM from platforms/llm.yml
Note on WebSearch: Use WebSearch tool with year-specific queries (2025) to get current API information. For common platforms, you can also infer from patterns:
- OpenAI-compatible: https://api.{platform}.com/v1/chat/completions
- Anthropic-compatible: https://api.{platform}.com/v1/messages
- Together/Groq: OpenAI-compatible format
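The inference patterns above can be sketched as a fallback helper. This is an assumption-laden convenience, not a substitute for the WebSearch step: real endpoints must still be confirmed with the user, and `infer_endpoint` is a hypothetical name.

```python
def infer_endpoint(platform: str, style: str = "openai") -> str:
    """Guess an API endpoint from the common URL patterns; confirm before use."""
    if style == "openai":
        # OpenAI-compatible pattern (also used by Together, Groq, etc.)
        return f"https://api.{platform}.com/v1/chat/completions"
    if style == "anthropic":
        return f"https://api.{platform}.com/v1/messages"
    raise ValueError(f"unknown API style: {style}")

print(infer_endpoint("groq"))  # https://api.groq.com/v1/chat/completions
```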
Phase 3: MCP Server (Optional)
Ask: "Do you need an MCP server? (y/n)"
If yes, ask: "What's your MCP server URL?"
Default: http://localhost:8000/mcp
Add MCP configuration to LLM section:
The MCP server is configured within the LLM configuration as:
[[llm.mcp_server]]
server = "{USER_PROVIDED_URL or http://localhost:8000/mcp}"
type = "http_streamable"
call_mcp_message = "Please hold on a few seconds while I am searching for an answer!"
Explain to user:
- The MCP server will be added to the LLM section
- type can be: "http_streamable" or "http"
- call_mcp_message is shown to users when MCP is being called
Phase 4: Generate Files
Step 1: Preview config.toml
IMPORTANT: EchoKit server requires a specific TOML structure:
- Section order MUST be: [tts] → [asr] → [llm]
- No comments allowed at the beginning of the file
- Field names vary by platform (check platform-specific requirements)
Display complete configuration with this format:
addr = "0.0.0.0:8080"
hello_wav = "hello.wav"
[tts]
platform = "{SELECTED_TTS.platform}"
url = "{SELECTED_TTS.url}"
{TTS_API_KEY_FIELD} = "YOUR_API_KEY_HERE"
{TTS_MODEL_FIELD} = "{SELECTED_TTS.model}"
voice = "{SELECTED_TTS.voice}"
[asr]
platform = "{SELECTED_ASR.platform}"
url = "{SELECTED_ASR.url}"
{ASR_API_KEY_FIELD} = "YOUR_API_KEY_HERE"
model = "{SELECTED_ASR.model}"
lang = "{ASR_LANG}"
prompt = "Hello\\n你好\\n(noise)\\n(bgm)\\n(silence)\\n"
vad_url = "http://localhost:9093/v1/audio/vad"
[llm]
platform = "{SELECTED_LLM.platform}"
url = "{SELECTED_LLM.url}"
{LLM_API_KEY_FIELD} = "YOUR_API_KEY_HERE"
model = "{SELECTED_LLM.model}"
history = {SELECTED_LLM.history}
{GENERATED_SYSTEM_PROMPT}
{MCP_CONFIGURATION if enabled}
Platform-specific field mappings:
TTS platforms:
- openai: uses api_key and model
- elevenlabs: uses token and model_id
- groq: uses api_key and model
ASR platforms:
- openai/whisper: uses api_key and model
LLM platforms:
- openai_chat: uses api_key (optional, can be an empty string)
When generating the config, replace {TTS_API_KEY_FIELD}, {ASR_API_KEY_FIELD}, and {LLM_API_KEY_FIELD} with the appropriate field name for the selected platform:
- For ElevenLabs TTS: use token
- For OpenAI/Groq TTS: use api_key
- For Whisper ASR: use api_key
- For OpenAI Chat LLM: use api_key
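The field mappings above amount to a small lookup table. A minimal sketch, with `API_KEY_FIELDS` and `api_key_field` as hypothetical names; the fallback to "api_key" is an assumption based on it being the most common field in the mappings listed.

```python
# (service, platform) -> API-key field name expected in config.toml.
API_KEY_FIELDS = {
    ("tts", "openai"): "api_key",
    ("tts", "elevenlabs"): "token",
    ("tts", "groq"): "api_key",
    ("asr", "whisper"): "api_key",
    ("llm", "openai_chat"): "api_key",
}

def api_key_field(service: str, platform: str) -> str:
    # Assumed fallback: "api_key" for platforms not listed above.
    return API_KEY_FIELDS.get((service, platform), "api_key")

print(api_key_field("tts", "elevenlabs"))  # token
```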
Step 2: Ask for confirmation
"Does this configuration look correct? (y/edit/regenerate)"
- y → Proceed to write files
- edit → Ask which section to edit (asr/tts/llm/system_prompt)
- regenerate → Restart from Phase 1
Step 3: Determine output location
Ask: "Where should I save the config files? (press Enter for default: echokit_server/)"
Handle user input:
- Empty/Enter → Use default: echokit_server/
- Custom path → Use user-provided path
  - Relative path → Use as-is (e.g., my_configs/)
  - Absolute path → Use as-is (e.g., /Users/username/echokit/)
After path is determined:
- Check if directory exists
- If not, ask: "Directory '{OUTPUT_DIR}' doesn't exist. Create it? (y/n)"
- If y: Create with mkdir -p {OUTPUT_DIR}
- If n: Ask for different path
- Verify write permissions
- Test by attempting to create a temporary file
- If fails, ask for different location
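The directory and permission checks above can be combined into one sketch. This is an illustrative helper, not part of the skill; `ensure_writable` is a hypothetical name, and in the real flow the user is asked before the directory is created.

```python
import os
import tempfile

def ensure_writable(output_dir: str) -> bool:
    """Create output_dir if needed, then verify a temp file can be written there."""
    try:
        os.makedirs(output_dir, exist_ok=True)  # mirrors mkdir -p
        with tempfile.NamedTemporaryFile(dir=output_dir):
            pass  # temp file is deleted automatically on close
        return True
    except OSError:
        # Creation or write failed: caller should ask for a different location.
        return False
```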
Step 4: Write files
Use the Write tool to create:
- {OUTPUT_DIR}/config.toml - Main configuration (includes MCP server if enabled)
- {OUTPUT_DIR}/SETUP_GUIDE.md - Setup instructions (use template from templates/SETUP_GUIDE.md)
Step 5: Display success message
✓ Configuration generated successfully!
---
*Content truncated.*