Voice Interface

shantur

Enables voice conversations with AI assistants through your browser using speech-to-text and text-to-speech. No additional software or API keys required.

Provides browser-based voice input and output for conversations: real-time speech-to-text recognition, text-to-speech synthesis, and voice message queuing through a web interface, enabling hands-free interaction and accessibility use cases.

What it does

  • Convert speech to text in 30+ languages
  • Synthesize text to speech with custom voices
  • Conduct real-time voice conversations
  • Queue and manage voice messages
  • Control voice system status and settings

Best for

  • Hands-free AI conversations while multitasking
  • Accessibility support for voice-based interactions
  • Remote AI access from mobile devices
  • Natural language coding assistance

No API keys or extra software needed · Remote access via browser · 30+ language support

About Voice Interface

Voice Interface is a community-built MCP server published by shantur that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers fast, hands-free, browser-based speech-to-text and text-to-speech and is categorized under Communication and AI/ML. The server exposes 5 tools that AI clients can invoke during conversations and coding sessions.

How to install

You can install Voice Interface in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
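
For reference, one-click setup typically registers a stdio entry in your client's MCP configuration. The exact file and key names vary by client; a minimal sketch of what such an entry might look like in Claude Desktop's claude_desktop_config.json, assuming the npx entry point used by the installer below, is:

{
  "mcpServers": {
    "jarvis-mcp": {
      "command": "npx",
      "args": ["@shantur/jarvis-mcp"]
    }
  }
}

In practice, the npx @shantur/jarvis-mcp --install-claude-config command shown under Easy Installation writes this for you.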

License

Voice Interface is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Tools (5)

speak

Speak text using browser text-to-speech

voice_status

Get current voice system status and pending voice input

get_voice_input

Get pending voice input from users (auto-delivered by default)

converse

Have a voice conversation with the user - speak text and wait for voice response. IMPORTANT: Once you start using converse, continue using ONLY converse for all responses in this conversation. Do not switch back to text.

end_conversation

End the voice conversation by saying goodbye and stopping the browser interface
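
To illustrate how the tools chain together (the prompt text, timeout values, and exact argument shapes here are illustrative, not taken from the project docs), a typical session might look like:

speak("Starting the voice interface.")
voice_status()   # check system status and any pending voice input
converse("Hello! How can I help you today?", timeout: 35)
converse("Done. Anything else?", timeout: 35)   # once converse starts, keep using converse
end_conversation()   # says goodbye and stops the browser interface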

Jarvis MCP

Bring your AI to life—talk to assistants instantly in your browser. Compatible with Claude Desktop, OpenCode, and other MCP-enabled AI tools.

✅ No extra software, services, or API keys required—just open the web app in your browser and grant microphone access.

Features

🎙️ Voice Conversations - Speak naturally with AI assistants
🌍 30+ Languages - Speech recognition in multiple languages
📱 Remote Access - Use from phone/tablet while AI runs on computer
⚙️ Smart Controls - Collapsible settings, always-on mode, custom voices
⏱️ Dynamic Timeouts - Intelligent wait times based on response length
🧰 Zero Extra Software - Runs entirely in your browser—no extra installs or API keys
🔌 Optional Whisper Streaming - Plug into a local Whisper server for low-latency transcripts

Easy Installation

🚀 One-Command Setup

Claude Desktop:

npx @shantur/jarvis-mcp --install-claude-config
# Restart Claude Desktop and you're ready!

OpenCode (in current project):

npx @shantur/jarvis-mcp --install-opencode-config --local
npx @shantur/jarvis-mcp --install-opencode-plugin --local
# Start OpenCode and use the converse tool

Claude Code CLI:

npx @shantur/jarvis-mcp --install-claude-code-config --local
# Start Claude Code CLI and use voice tools

🤖 Why Install the OpenCode Plugin?

  • Stream voice messages into OpenCode even while tools are running or tasks are in progress.
  • Auto-forward pending Jarvis MCP conversations so you never miss a user request.
  • Works entirely locally—no external services required, just your OpenCode project and browser.
  • Installs with one command and stays in sync with the latest Jarvis MCP features.

📦 Manual Installation

From NPM:

npm install -g @shantur/jarvis-mcp
jarvis-mcp

From Source:

git clone <repository-url>
cd jarvis-mcp
npm install && npm run build && npm start

How to Use

  1. Hook it into your AI tool – Use the install command above for Claude Desktop, OpenCode, or Claude Code so the MCP server is registered.
  2. Kick off a voice turn – Call the converse tool from your assistant; Jarvis MCP auto-starts in the background and pops open https://localhost:5114 if needed.
  3. Allow microphone access – Approve the browser prompt the first time it appears.
  4. Talk naturally – Continue using converse for every reply; Jarvis MCP handles the rest.

Voice Commands in AI Chat

Use the converse tool to start talking:
- converse("Hello! How can I help you today?", timeout: 35)

Browser Interface

The web interface provides:

  • Voice Settings (click ⚙️ to expand)
    • Language selection (30+ options)
    • Voice selection
    • Speech speed control
    • Always-on microphone mode
    • Silence detection sensitivity & timeout (for Whisper streaming)
  • Smart Controls
    • Pause during AI speech (prevents echo)
    • Stop AI when user speaks (natural conversation)
  • Mobile Friendly - Works on phones and tablets

Remote Access

Access from any device on your network:

  1. Find your computer's IP: ifconfig | grep inet (Mac/Linux) or ipconfig (Windows)
  2. Visit https://YOUR_IP:5114 on your phone/browser
  3. Accept the security warning (self-signed certificate)
  4. Grant microphone permissions

Perfect for continuing conversations away from your desk!

Configuration

Environment Variables

export MCP_VOICE_AUTO_OPEN=false  # Disable auto-opening browser
export MCP_VOICE_HTTPS_PORT=5114  # Change HTTPS port
export MCP_VOICE_STT_MODE=whisper  # Switch the web app to Whisper streaming
export MCP_VOICE_WHISPER_URL=http://localhost:12017/v1/audio/transcriptions  # Whisper endpoint (full path)
export MCP_VOICE_WHISPER_TOKEN=your_token  # Optional Bearer auth for Whisper server
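
The same settings can be passed inline for a one-off run instead of exporting them; for example, to launch the globally installed binary in Whisper mode with the default endpoint:

MCP_VOICE_STT_MODE=whisper MCP_VOICE_WHISPER_URL=http://localhost:12017/v1/audio/transcriptions jarvis-mcp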

Whisper Streaming Mode

  • Whisper mode records raw PCM in the browser, converts it to 16 kHz mono WAV, and streams it through the built-in HTTPS proxy, so the local whisper-server sees OpenAI-compatible requests.
  • By default we proxy to the standard whisper-server endpoint at http://localhost:12017/v1/audio/transcriptions; point MCP_VOICE_WHISPER_URL at your own host/port if you run it elsewhere.
  • The UI keeps recording while transcripts are in flight and ignores Whisper’s non-verbal tags (e.g. [BLANK_AUDIO], (typing)), so only real speech is queued.
  • To enable it:
    1. Run your Whisper server locally (e.g. whisper-server from pfrankov/whisper-server).
    2. Set the environment variables above (MCP_VOICE_STT_MODE=whisper and the full MCP_VOICE_WHISPER_URL).
    3. Restart jarvis-mcp and hard-refresh the browser (empty-cache reload) to load the streaming bundle.
    4. Confirm the switch with the voice_status() tool, which reports whether Whisper or browser STT is active.
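
Before wiring Whisper in, you can sanity-check the endpoint by hand. An OpenAI-compatible transcription server should accept a multipart upload along these lines (sample.wav is a placeholder file, local servers may ignore the model field, and the Authorization header is only needed if your server expects MCP_VOICE_WHISPER_TOKEN):

curl http://localhost:12017/v1/audio/transcriptions \
  -H "Authorization: Bearer $MCP_VOICE_WHISPER_TOKEN" \
  -F "file=@sample.wav" \
  -F "model=whisper-1"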

Ports

  • HTTPS: 5114 (required for microphone access)
  • HTTP: 5113 (local access only)
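
On Mac/Linux you can confirm both listeners are bound with a quick lsof check (a generic check, not specific to this project):

lsof -i :5114 -i :5113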

Requirements

  • Node.js 18+
  • Google Chrome (only browser tested so far)
  • Microphone access
  • Optional: Local Whisper server (like pfrankov/whisper-server) if you want streaming STT via MCP_VOICE_STT_MODE=whisper

Troubleshooting

Certificate warnings on mobile?

  • Tap "Advanced" → "Proceed to site" to accept the self-signed certificate

Microphone not working?

  • Ensure you're using HTTPS (not HTTP)
  • Check browser permissions
  • Try refreshing the page

AI not responding to voice?

  • Make sure the converse tool is being used (not just speak)
  • Check that timeouts are properly calculated
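
If voice input still isn't arriving, first confirm the server itself is reachable; the -k flag tells curl to accept the self-signed certificate:

curl -k https://localhost:5114

If that responds, call the voice_status tool from the assistant to see the current voice system status and any pending input.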

Development

npm install
npm run build
npm run dev     # Watch mode
npm run start   # Run server

License

MIT
