
Fish Audio
Converts text to speech using Fish Audio's API with support for multiple voice models, streaming, and various audio formats.
Integrates with Fish Audio's API to generate high-quality speech from text with configurable voice models, audio formats, and real-time streaming for creating conversational applications and automated content narration.
What it does
- Generate speech from text using AI voice models
- Stream audio in real-time for low-latency applications
- Select voices by ID, name, or tags from voice library
- Export audio in multiple formats (MP3, WAV, PCM, Opus)
- Clone and create custom voice models
- Control speech prosody and emotions
About Fish Audio
Fish Audio is a community-built MCP server published by da-okazaki that provides AI assistants with tools and capabilities via the Model Context Protocol. It converts text to high-quality speech using Fish Audio's AI voice generator, with support for real-time streaming. It is categorized under AI/ML.
How to install
You can install Fish Audio in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Fish Audio is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Fish Audio MCP Server
An MCP (Model Context Protocol) server that provides seamless integration between Fish Audio's Text-to-Speech API and LLMs like Claude, enabling natural language-driven speech synthesis.
What is Fish Audio?
Fish Audio is a cutting-edge Text-to-Speech platform that offers:
- State-of-the-art voice synthesis with natural-sounding output
- Voice cloning capabilities to create custom voice models
- Multilingual support including English, Japanese, Chinese, and more
- Low-latency streaming for real-time applications
- Fine-grained control over speech prosody and emotions
This MCP server brings Fish Audio's powerful capabilities directly to your LLM workflows.
Features
- High-Quality TTS: Leverage Fish Audio's state-of-the-art TTS models
- Streaming Support: Real-time audio streaming for low-latency applications
- Multiple Voices: Support for custom voice models via reference IDs
- Smart Voice Selection: Select voices by ID, name, or tags
- Voice Library Management: Configure and manage multiple voice references
- Flexible Configuration: Environment variable-based configuration
- Multiple Audio Formats: Support for MP3, WAV, PCM, and Opus
- Easy Integration: Simple setup with any MCP-compatible client
Quick Start
Installation
You can run this MCP server directly using npx:

```bash
npx @alanse/fish-audio-mcp-server
```

Or install it globally:

```bash
npm install -g @alanse/fish-audio-mcp-server
```
Configuration
1. Get your Fish Audio API key from Fish Audio.

2. Set up environment variables:

   ```bash
   export FISH_API_KEY=your_fish_audio_api_key_here
   ```

3. Add the server to your MCP settings configuration:
Single Voice Mode (Simple)
```json
{
  "mcpServers": {
    "fish-audio": {
      "command": "npx",
      "args": ["-y", "@alanse/fish-audio-mcp-server"],
      "env": {
        "FISH_API_KEY": "your_fish_audio_api_key_here",
        "FISH_MODEL_ID": "speech-1.6",
        "FISH_REFERENCE_ID": "your_voice_reference_id_here",
        "FISH_OUTPUT_FORMAT": "mp3",
        "FISH_STREAMING": "false",
        "FISH_LATENCY": "balanced",
        "FISH_MP3_BITRATE": "128",
        "FISH_AUTO_PLAY": "false",
        "AUDIO_OUTPUT_DIR": "~/.fish-audio-mcp/audio_output"
      }
    }
  }
}
```
Multiple Voice Mode (Advanced)
```json
{
  "mcpServers": {
    "fish-audio": {
      "command": "npx",
      "args": ["-y", "@alanse/fish-audio-mcp-server"],
      "env": {
        "FISH_API_KEY": "your_fish_audio_api_key_here",
        "FISH_MODEL_ID": "speech-1.6",
        "FISH_REFERENCES": "[{'reference_id':'id1','name':'Alice','tags':['female','english']},{'reference_id':'id2','name':'Bob','tags':['male','japanese']},{'reference_id':'id3','name':'Carol','tags':['female','japanese','anime']}]",
        "FISH_DEFAULT_REFERENCE": "id1",
        "FISH_OUTPUT_FORMAT": "mp3",
        "FISH_STREAMING": "false",
        "FISH_LATENCY": "balanced",
        "FISH_MP3_BITRATE": "128",
        "FISH_AUTO_PLAY": "false",
        "AUDIO_OUTPUT_DIR": "~/.fish-audio-mcp/audio_output"
      }
    }
  }
}
```
Environment Variables
| Variable | Description | Default | Required |
|---|---|---|---|
| FISH_API_KEY | Your Fish Audio API key | - | Yes |
| FISH_MODEL_ID | TTS model to use (s1, speech-1.5, speech-1.6) | s1 | Optional |
| FISH_REFERENCE_ID | Default voice reference ID (single reference mode) | - | Optional |
| FISH_REFERENCES | Multiple voice references (see below) | - | Optional |
| FISH_DEFAULT_REFERENCE | Default reference ID when using multiple references | - | Optional |
| FISH_OUTPUT_FORMAT | Default audio format (mp3, wav, pcm, opus) | mp3 | Optional |
| FISH_STREAMING | Enable streaming mode (HTTP/WebSocket) | false | Optional |
| FISH_LATENCY | Latency mode (normal, balanced) | balanced | Optional |
| FISH_MP3_BITRATE | MP3 bitrate (64, 128, 192) | 128 | Optional |
| FISH_AUTO_PLAY | Auto-play audio and enable real-time playback | false | Optional |
| AUDIO_OUTPUT_DIR | Directory for audio file output | ~/.fish-audio-mcp/audio_output | Optional |
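As a minimal sketch (not the server's actual implementation), the variables above could resolve to a runtime config like this, applying the documented defaults; `loadFishConfig` and the `Env` alias are illustrative names:

```typescript
// Resolve Fish Audio settings from environment variables, applying the
// defaults from the table above. `Env` stands in for process.env.
type Env = Record<string, string | undefined>;

interface FishConfig {
  apiKey: string;
  modelId: string;
  outputFormat: string;
  streaming: boolean;
  latency: string;
  mp3Bitrate: number;
}

function loadFishConfig(env: Env): FishConfig {
  const apiKey = env.FISH_API_KEY;
  if (!apiKey) {
    throw new Error("FISH_API_KEY is required"); // the only mandatory variable
  }
  return {
    apiKey,
    modelId: env.FISH_MODEL_ID ?? "s1",
    outputFormat: env.FISH_OUTPUT_FORMAT ?? "mp3",
    streaming: env.FISH_STREAMING === "true",
    latency: env.FISH_LATENCY ?? "balanced",
    mp3Bitrate: Number(env.FISH_MP3_BITRATE ?? "128"),
  };
}
```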
Configuring Multiple Voice References
You can configure multiple voice references in two ways:
JSON Array Format (Recommended)
Use the FISH_REFERENCES environment variable with a JSON array:
```bash
FISH_REFERENCES='[
  {"reference_id":"id1","name":"Alice","tags":["female","english"]},
  {"reference_id":"id2","name":"Bob","tags":["male","japanese"]},
  {"reference_id":"id3","name":"Carol","tags":["female","japanese","anime"]}
]'
FISH_DEFAULT_REFERENCE="id1"
```
Individual Format (Backward Compatibility)
Use numbered environment variables:
```bash
FISH_REFERENCE_1_ID=id1
FISH_REFERENCE_1_NAME=Alice
FISH_REFERENCE_1_TAGS=female,english
FISH_REFERENCE_2_ID=id2
FISH_REFERENCE_2_NAME=Bob
FISH_REFERENCE_2_TAGS=male,japanese
```
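Both formats describe the same data. A hypothetical normalizer (a sketch, not the server's code; the single-quote handling is an assumption matching the MCP settings example) could read either into one list:

```typescript
// Normalize either reference format into a single list of voice references.
interface VoiceReference {
  reference_id: string;
  name: string;
  tags: string[];
}

function parseReferences(env: Record<string, string | undefined>): VoiceReference[] {
  // The JSON array format takes precedence when present.
  if (env.FISH_REFERENCES) {
    // Tolerate single quotes as in the MCP settings example; valid JSON
    // with double quotes is preferable in real config.
    return JSON.parse(env.FISH_REFERENCES.replace(/'/g, '"')) as VoiceReference[];
  }
  // Otherwise fall back to the numbered FISH_REFERENCE_<n>_* variables.
  const refs: VoiceReference[] = [];
  for (let i = 1; env[`FISH_REFERENCE_${i}_ID`] !== undefined; i++) {
    refs.push({
      reference_id: env[`FISH_REFERENCE_${i}_ID`]!,
      name: env[`FISH_REFERENCE_${i}_NAME`] ?? "",
      tags: (env[`FISH_REFERENCE_${i}_TAGS`] ?? "").split(",").filter(Boolean),
    });
  }
  return refs;
}
```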
Usage
Once configured, the Fish Audio MCP server provides two tools to LLMs.
Tool 1: fish_audio_tts
Generates speech from text using Fish Audio's TTS API.
Parameters
- text (required): Text to convert to speech (max 10,000 characters)
- reference_id (optional): Voice model reference ID
- reference_name (optional): Select voice by name
- reference_tag (optional): Select voice by tag
- streaming (optional): Enable streaming mode
- format (optional): Output format (mp3, wav, pcm, opus)
- mp3_bitrate (optional): MP3 bitrate (64, 128, 192)
- normalize (optional): Enable text normalization (default: true)
- latency (optional): Latency mode (normal, balanced)
- output_path (optional): Custom output file path
- auto_play (optional): Automatically play the generated audio
- websocket_streaming (optional): Use WebSocket streaming instead of HTTP
- realtime_play (optional): Play audio in real-time during WebSocket streaming
Voice Selection Priority: reference_id > reference_name > reference_tag > default
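The priority chain can be sketched as a small resolver; this is illustrative only, and `selectVoice` is a hypothetical name, not the server's API:

```typescript
// Resolve a voice reference ID following the documented priority:
// reference_id > reference_name > reference_tag > default.
interface VoiceRef {
  reference_id: string;
  name: string;
  tags: string[];
}

function selectVoice(
  refs: VoiceRef[],
  defaultId: string | undefined,
  opts: { reference_id?: string; reference_name?: string; reference_tag?: string }
): string | undefined {
  if (opts.reference_id) return opts.reference_id; // 1. explicit ID wins
  if (opts.reference_name) {
    // 2. then an exact name match
    const match = refs.find((r) => r.name === opts.reference_name);
    if (match) return match.reference_id;
  }
  if (opts.reference_tag) {
    // 3. then the first reference carrying the tag
    const match = refs.find((r) => r.tags.includes(opts.reference_tag!));
    if (match) return match.reference_id;
  }
  return defaultId; // 4. finally the configured default
}
```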
Tool 2: fish_audio_list_references
Lists all configured voice references.
Parameters
No parameters required.
Returns
- List of configured voice references with their IDs, names, and tags
- Default reference ID
Examples
Basic Text-to-Speech
User: "Generate speech saying 'Hello, world! Welcome to Fish Audio TTS.'"
Claude: I'll generate speech for that text using Fish Audio TTS.
[Uses fish_audio_tts tool with text parameter]
Result: Audio file saved to ./audio_output/tts_2025-01-03T10-30-00.mp3
Using Custom Voice by ID
User: "Generate speech with voice model xyz123 saying 'This is a custom voice test'"
Claude: I'll generate speech using the specified voice model.
[Uses fish_audio_tts tool with text and reference_id parameters]
Result: Audio generated with custom voice model xyz123
Using Voice by Name
User: "Use Alice's voice to say 'Hello from Alice'"
Claude: I'll generate speech using Alice's voice.
[Uses fish_audio_tts tool with reference_name: "Alice"]
Result: Audio generated with Alice's voice
Using Voice by Tag
User: "Generate Japanese speech saying 'こんにちは' with an anime voice"
Claude: I'll generate Japanese speech with an anime-style voice.
[Uses fish_audio_tts tool with reference_tag: "anime"]
Result: Audio generated with anime voice style
List Available Voices
User: "What voices are available?"
Claude: I'll list all configured voice references.
[Uses fish_audio_list_references tool]
Result:
- Alice (id: id1) - Tags: female, english [Default]
- Bob (id: id2) - Tags: male, japanese
- Carol (id: id3) - Tags: female, japanese, anime
HTTP Streaming Mode
User: "Generate a long speech in streaming mode about the benefits of AI"
Claude: I'll generate the speech in streaming mode for faster response.
[Uses fish_audio_tts tool with streaming: true]
Result: Streaming audio saved to ./audio_output/tts_2025-01-03T10-35-00.mp3
WebSocket Real-time Streaming
User: "Stream and play in real-time: 'Welcome to the future of AI'"
Claude: I'll stream the speech via WebSocket and play it in real-time.
[Uses fish_audio_tts tool with websocket_streaming: true, realtime_play: true]
Result: Audio streamed and played in real-time via WebSocket
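Under the hood, each dialog above becomes a single MCP tool call. As an illustration (the exact client API varies by MCP client), the WebSocket example corresponds to an argument payload like this, with field names taken from the parameter list documented earlier:

```typescript
// Illustrative fish_audio_tts tool-call payload for the WebSocket example.
const toolCall = {
  name: "fish_audio_tts",
  arguments: {
    text: "Welcome to the future of AI",
    websocket_streaming: true, // stream over WebSocket instead of HTTP
    realtime_play: true,       // play chunks as they arrive
    format: "mp3",
  },
};
```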
Development
Local Development
1. Clone the repository:

   ```bash
   git clone https://github.com/da-okazaki/mcp-fish-audio-server.git
   cd mcp-fish-audio-server
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Create a .env file:

   ```bash
   cp .env.example .env
   # Edit .env with your API key
   ```

4. Build the project:

   ```bash
   npm run build
   ```

5. Run in development mode:

   ```bash
   npm run dev
   ```
Testing
Run the test suite:
```bash
npm test
```
Project Structure
```
mcp-fish-audio-server/
├── src/
│   ├── index.ts          # MCP server entry point
│   ├── tools/
│   │   └── tts.ts        # TTS tool implementation
│   ├── services/
│   │   └── fishAudio.ts  # Fish Audio API client
│   ├── types/
│   │   └── index.ts      # TypeScript definitions
│   └── utils/
│       └── config.ts     # Configuration management
├── tests/                # Test files
├── audio_output/         # Default audio output directory
├── package.json
├── tsconfig.json
└── README.md
```
API Documentation
Fish Audio Service
The service provides two main methods:

- generateSpeech: Standard TTS generation
  - Returns audio buffer
  - Suitable for short texts
  - Lower memory usage
- generateSpeechStream: Streaming
README truncated. View full README on GitHub.