Fish Audio

By da-okazaki

Converts text to speech using Fish Audio's API with support for multiple voice models, streaming, and various audio formats.

Integrates with Fish Audio's API to generate high-quality speech from text with configurable voice models, audio formats, and real-time streaming for creating conversational applications and automated content narration.


What it does

  • Generate speech from text using AI voice models
  • Stream audio in real-time for low-latency applications
  • Select voices by ID, name, or tags from voice library
  • Export audio in multiple formats (MP3, WAV, PCM, Opus)
  • Clone and create custom voice models
  • Control speech prosody and emotions

Best for

  • Building conversational AI applications
  • Automated content narration and voiceovers
  • Creating multilingual speech synthesis
  • Real-time voice generation for chatbots

Key capabilities: multilingual support, real-time streaming, and voice cloning.

About Fish Audio

Fish Audio is a community-built MCP server published by da-okazaki that provides AI assistants with text-to-speech tools via the Model Context Protocol. It converts text to high-quality, natural-sounding speech in real time using Fish Audio's AI voice generator, and is categorized under AI/ML.

How to install

You can install Fish Audio in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Fish Audio is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Fish Audio MCP Server

An MCP (Model Context Protocol) server that provides seamless integration between Fish Audio's Text-to-Speech API and LLMs like Claude, enabling natural language-driven speech synthesis.

What is Fish Audio?

Fish Audio is a cutting-edge Text-to-Speech platform that offers:

  • 🌊 State-of-the-art voice synthesis with natural-sounding output
  • 🎯 Voice cloning capabilities to create custom voice models
  • 🌍 Multilingual support including English, Japanese, Chinese, and more
  • ⚑ Low-latency streaming for real-time applications
  • 🎨 Fine-grained control over speech prosody and emotions

This MCP server brings Fish Audio's powerful capabilities directly to your LLM workflows.

Features

  • πŸŽ™οΈ High-Quality TTS: Leverage Fish Audio's state-of-the-art TTS models
  • 🌊 Streaming Support: Real-time audio streaming for low-latency applications
  • 🎨 Multiple Voices: Support for custom voice models via reference IDs
  • 🎯 Smart Voice Selection: Select voices by ID, name, or tags
  • πŸ“š Voice Library Management: Configure and manage multiple voice references
  • πŸ”§ Flexible Configuration: Environment variable-based configuration
  • πŸ“¦ Multiple Audio Formats: Support for MP3, WAV, PCM, and Opus
  • πŸš€ Easy Integration: Simple setup with any MCP-compatible client

Quick Start

Installation

You can run this MCP server directly using npx:

npx @alanse/fish-audio-mcp-server

Or install it globally:

npm install -g @alanse/fish-audio-mcp-server

Configuration

  1. Get your API key from the Fish Audio website.

  2. Set up environment variables:

export FISH_API_KEY=your_fish_audio_api_key_here

  3. Add to your MCP settings configuration:

Single Voice Mode (Simple)

{
  "mcpServers": {
    "fish-audio": {
      "command": "npx",
      "args": ["-y", "@alanse/fish-audio-mcp-server"],
      "env": {
        "FISH_API_KEY": "your_fish_audio_api_key_here",
        "FISH_MODEL_ID": "speech-1.6",
        "FISH_REFERENCE_ID": "your_voice_reference_id_here",
        "FISH_OUTPUT_FORMAT": "mp3",
        "FISH_STREAMING": "false",
        "FISH_LATENCY": "balanced",
        "FISH_MP3_BITRATE": "128",
        "FISH_AUTO_PLAY": "false",
        "AUDIO_OUTPUT_DIR": "~/.fish-audio-mcp/audio_output"
      }
    }
  }
}

Multiple Voice Mode (Advanced)

{
  "mcpServers": {
    "fish-audio": {
      "command": "npx",
      "args": ["-y", "@alanse/fish-audio-mcp-server"],
      "env": {
        "FISH_API_KEY": "your_fish_audio_api_key_here",
        "FISH_MODEL_ID": "speech-1.6",
        "FISH_REFERENCES": "[{\"reference_id\":\"id1\",\"name\":\"Alice\",\"tags\":[\"female\",\"english\"]},{\"reference_id\":\"id2\",\"name\":\"Bob\",\"tags\":[\"male\",\"japanese\"]},{\"reference_id\":\"id3\",\"name\":\"Carol\",\"tags\":[\"female\",\"japanese\",\"anime\"]}]",
        "FISH_DEFAULT_REFERENCE": "id1",
        "FISH_OUTPUT_FORMAT": "mp3",
        "FISH_STREAMING": "false",
        "FISH_LATENCY": "balanced",
        "FISH_MP3_BITRATE": "128",
        "FISH_AUTO_PLAY": "false",
        "AUDIO_OUTPUT_DIR": "~/.fish-audio-mcp/audio_output"
      }
    }
  }
}

Environment Variables

Variable                  Description                                          Default                          Required
FISH_API_KEY              Your Fish Audio API key                              -                                Yes
FISH_MODEL_ID             TTS model to use (s1, speech-1.5, speech-1.6)        s1                               Optional
FISH_REFERENCE_ID         Default voice reference ID (single-reference mode)   -                                Optional
FISH_REFERENCES           Multiple voice references (see below)                -                                Optional
FISH_DEFAULT_REFERENCE    Default reference ID with multiple references        -                                Optional
FISH_OUTPUT_FORMAT        Default audio format (mp3, wav, pcm, opus)           mp3                              Optional
FISH_STREAMING            Enable streaming mode (HTTP/WebSocket)               false                            Optional
FISH_LATENCY              Latency mode (normal, balanced)                      balanced                         Optional
FISH_MP3_BITRATE          MP3 bitrate (64, 128, 192)                           128                              Optional
FISH_AUTO_PLAY            Auto-play audio and enable real-time playback       false                            Optional
AUDIO_OUTPUT_DIR          Directory for audio file output                      ~/.fish-audio-mcp/audio_output   Optional
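As a sketch of how these variables might be consumed, the table maps naturally onto a typed config object with the documented defaults. This is an illustrative reconstruction, not the server's actual src/utils/config.ts:

```typescript
// Illustrative config loader mirroring the documented variables and defaults.
interface ServerConfig {
  apiKey: string;
  modelId: string;
  outputFormat: "mp3" | "wav" | "pcm" | "opus";
  streaming: boolean;
  latency: "normal" | "balanced";
  mp3Bitrate: 64 | 128 | 192;
  autoPlay: boolean;
  audioOutputDir: string;
}

function loadConfig(env: Record<string, string | undefined>): ServerConfig {
  const apiKey = env.FISH_API_KEY;
  if (!apiKey) throw new Error("FISH_API_KEY is required"); // the one required variable
  return {
    apiKey,
    modelId: env.FISH_MODEL_ID ?? "s1",
    outputFormat: (env.FISH_OUTPUT_FORMAT ?? "mp3") as ServerConfig["outputFormat"],
    streaming: env.FISH_STREAMING === "true",
    latency: (env.FISH_LATENCY ?? "balanced") as ServerConfig["latency"],
    mp3Bitrate: Number(env.FISH_MP3_BITRATE ?? "128") as ServerConfig["mp3Bitrate"],
    autoPlay: env.FISH_AUTO_PLAY === "true",
    audioOutputDir: env.AUDIO_OUTPUT_DIR ?? "~/.fish-audio-mcp/audio_output",
  };
}
```

Calling loadConfig(process.env) would fail early when FISH_API_KEY is missing, while every optional variable falls back to the table's default.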

Configuring Multiple Voice References

You can configure multiple voice references in two ways:

JSON Array Format (Recommended)

Use the FISH_REFERENCES environment variable with a JSON array:

FISH_REFERENCES='[
  {"reference_id":"id1","name":"Alice","tags":["female","english"]},
  {"reference_id":"id2","name":"Bob","tags":["male","japanese"]},
  {"reference_id":"id3","name":"Carol","tags":["female","japanese","anime"]}
]'
FISH_DEFAULT_REFERENCE="id1"
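Because the value is plain JSON, parsing it is essentially a JSON.parse plus validation. A minimal sketch, where parseReferences is a hypothetical helper rather than the server's exported API:

```typescript
interface VoiceReference {
  reference_id: string;
  name: string;
  tags: string[];
}

// Hypothetical parser for the FISH_REFERENCES JSON array.
function parseReferences(raw: string): VoiceReference[] {
  const parsed = JSON.parse(raw);
  if (!Array.isArray(parsed)) {
    throw new Error("FISH_REFERENCES must be a JSON array");
  }
  return parsed.map((r: any) => ({
    reference_id: String(r.reference_id),
    name: String(r.name),
    tags: Array.isArray(r.tags) ? r.tags.map(String) : [], // tags are optional
  }));
}
```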

Individual Format (Backward Compatibility)

Use numbered environment variables:

FISH_REFERENCE_1_ID=id1
FISH_REFERENCE_1_NAME=Alice
FISH_REFERENCE_1_TAGS=female,english

FISH_REFERENCE_2_ID=id2
FISH_REFERENCE_2_NAME=Bob
FISH_REFERENCE_2_TAGS=male,japanese
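The numbered variables can be collected into the same list shape by probing indices until one is missing; again a hypothetical sketch, not the server's actual parser:

```typescript
// Hypothetical collector for FISH_REFERENCE_<n>_* variables.
function parseNumberedReferences(env: Record<string, string | undefined>) {
  const refs: { reference_id: string; name: string; tags: string[] }[] = [];
  // Probe indices 1, 2, 3, ... until an ID is absent.
  for (let i = 1; env[`FISH_REFERENCE_${i}_ID`]; i++) {
    refs.push({
      reference_id: env[`FISH_REFERENCE_${i}_ID`]!,
      name: env[`FISH_REFERENCE_${i}_NAME`] ?? "",
      // Tags are a comma-separated list in this format.
      tags: (env[`FISH_REFERENCE_${i}_TAGS`] ?? "").split(",").filter(Boolean),
    });
  }
  return refs;
}
```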

Usage

Once configured, the Fish Audio MCP server provides two tools to LLMs.

Tool 1: fish_audio_tts

Generates speech from text using Fish Audio's TTS API.

Parameters

  • text (required): Text to convert to speech (max 10,000 characters)
  • reference_id (optional): Voice model reference ID
  • reference_name (optional): Select voice by name
  • reference_tag (optional): Select voice by tag
  • streaming (optional): Enable streaming mode
  • format (optional): Output format (mp3, wav, pcm, opus)
  • mp3_bitrate (optional): MP3 bitrate (64, 128, 192)
  • normalize (optional): Enable text normalization (default: true)
  • latency (optional): Latency mode (normal, balanced)
  • output_path (optional): Custom output file path
  • auto_play (optional): Automatically play the generated audio
  • websocket_streaming (optional): Use WebSocket streaming instead of HTTP
  • realtime_play (optional): Play audio in real-time during WebSocket streaming

Voice Selection Priority: reference_id > reference_name > reference_tag > default
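That priority order can be sketched as a small resolver; the function name and shapes here are illustrative, not the server's internal API:

```typescript
interface VoiceRef {
  reference_id: string;
  name: string;
  tags: string[];
}

// Resolve a voice reference ID using the documented priority:
// reference_id > reference_name > reference_tag > default.
function selectVoice(
  params: { reference_id?: string; reference_name?: string; reference_tag?: string },
  refs: VoiceRef[],
  defaultId?: string,
): string | undefined {
  if (params.reference_id) return params.reference_id; // explicit ID always wins
  if (params.reference_name) {
    const byName = refs.find((r) => r.name === params.reference_name);
    if (byName) return byName.reference_id;
  }
  if (params.reference_tag) {
    const byTag = refs.find((r) => r.tags.includes(params.reference_tag!));
    if (byTag) return byTag.reference_id;
  }
  return defaultId; // fall back to FISH_DEFAULT_REFERENCE / FISH_REFERENCE_ID
}
```

With the Alice/Bob/Carol library from the configuration example, a request naming "Bob" resolves to id2 even if a tag is also supplied, since name outranks tag.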

Tool 2: fish_audio_list_references

Lists all configured voice references.

Parameters

No parameters required.

Returns

  • List of configured voice references with their IDs, names, and tags
  • Default reference ID

Examples

Basic Text-to-Speech

User: "Generate speech saying 'Hello, world! Welcome to Fish Audio TTS.'"

Claude: I'll generate speech for that text using Fish Audio TTS.

[Uses fish_audio_tts tool with text parameter]

Result: Audio file saved to ./audio_output/tts_2025-01-03T10-30-00.mp3

Using Custom Voice by ID

User: "Generate speech with voice model xyz123 saying 'This is a custom voice test'"

Claude: I'll generate speech using the specified voice model.

[Uses fish_audio_tts tool with text and reference_id parameters]

Result: Audio generated with custom voice model xyz123

Using Voice by Name

User: "Use Alice's voice to say 'Hello from Alice'"

Claude: I'll generate speech using Alice's voice.

[Uses fish_audio_tts tool with reference_name: "Alice"]

Result: Audio generated with Alice's voice

Using Voice by Tag

User: "Generate Japanese speech saying 'こんにけは' ('Hello') with an anime voice"

Claude: I'll generate Japanese speech with an anime-style voice.

[Uses fish_audio_tts tool with reference_tag: "anime"]

Result: Audio generated with anime voice style

List Available Voices

User: "What voices are available?"

Claude: I'll list all configured voice references.

[Uses fish_audio_list_references tool]

Result:
- Alice (id: id1) - Tags: female, english [Default]
- Bob (id: id2) - Tags: male, japanese
- Carol (id: id3) - Tags: female, japanese, anime

HTTP Streaming Mode

User: "Generate a long speech in streaming mode about the benefits of AI"

Claude: I'll generate the speech in streaming mode for faster response.

[Uses fish_audio_tts tool with streaming: true]

Result: Streaming audio saved to ./audio_output/tts_2025-01-03T10-35-00.mp3

WebSocket Real-time Streaming

User: "Stream and play in real-time: 'Welcome to the future of AI'"

Claude: I'll stream the speech via WebSocket and play it in real-time.

[Uses fish_audio_tts tool with websocket_streaming: true, realtime_play: true]

Result: Audio streamed and played in real-time via WebSocket

Development

Local Development

  1. Clone the repository:
git clone https://github.com/da-okazaki/mcp-fish-audio-server.git
cd mcp-fish-audio-server
  2. Install dependencies:
npm install
  3. Create a .env file:
cp .env.example .env
# Edit .env with your API key
  4. Build the project:
npm run build
  5. Run in development mode:
npm run dev

Testing

Run the test suite:

npm test

Project Structure

mcp-fish-audio-server/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ index.ts          # MCP server entry point
β”‚   β”œβ”€β”€ tools/
β”‚   β”‚   └── tts.ts        # TTS tool implementation
β”‚   β”œβ”€β”€ services/
β”‚   β”‚   └── fishAudio.ts  # Fish Audio API client
β”‚   β”œβ”€β”€ types/
β”‚   β”‚   └── index.ts      # TypeScript definitions
β”‚   └── utils/
β”‚       └── config.ts     # Configuration management
β”œβ”€β”€ tests/                # Test files
β”œβ”€β”€ audio_output/         # Default audio output directory
β”œβ”€β”€ package.json
β”œβ”€β”€ tsconfig.json
└── README.md

API Documentation

Fish Audio Service

The service provides two main methods:

  1. generateSpeech: Standard TTS generation

    • Returns audio buffer
    • Suitable for short texts
    • Lower memory usage
  2. generateSpeechStream: Streaming TTS generation
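For context, the HTTP request behind a generateSpeech-style call can be sketched as follows. The endpoint URL, header names, and body fields are assumptions based on Fish Audio's public HTTP API, not code taken from this server:

```typescript
// Hypothetical sketch of building a Fish Audio TTS request.
// The endpoint and field names are assumptions, not this server's code.
function buildTtsRequest(
  apiKey: string,
  text: string,
  opts: { reference_id?: string; format?: string } = {},
) {
  return {
    url: "https://api.fish.audio/v1/tts", // assumed public TTS endpoint
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${apiKey}`, // API key from FISH_API_KEY
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text,
      reference_id: opts.reference_id, // custom voice model, if any
      format: opts.format ?? "mp3",
    }),
  };
}
```

The returned descriptor could be passed to fetch; a streaming variant would read the response body incrementally instead of buffering the whole audio file in memory.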


README truncated. View full README on GitHub.
