
AI Translation
Translates JSON internationalization (i18n) files using multiple AI providers (Google Gemini, OpenAI, Ollama/DeepSeek), with intelligent caching, deduplication across files, and format preservation to minimize API costs while maintaining exact JSON structure and consistent results across target languages.
What it does
- Translate JSON internationalization files to multiple languages
- Process multiple files with automatic deduplication
- Cache translations incrementally to avoid re-translating
- Batch translations for optimal API performance
- Preserve JSON structure and formatting exactly
- Detect source language automatically
About AI Translation
AI Translation is a community-built MCP server published by datanoisetv that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers AI-powered machine translation of JSON files while preserving their exact structure. It is categorized under AI/ML and developer tools.
How to install
You can install AI Translation in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
AI Translation is released under the NOASSERTION license.
translator-ai
Fast and efficient JSON i18n translator supporting multiple AI providers (Google Gemini, OpenAI & Ollama/DeepSeek) with intelligent caching, multi-file deduplication, and MCP integration.
Features
- Multiple AI Providers: Choose Google Gemini or OpenAI (cloud), or Ollama/DeepSeek (local), for translations
- Multi-File Support: Process multiple files with automatic deduplication to save API calls
- Incremental Caching: Only translates new or modified strings, dramatically reducing API calls
- Batch Processing: Intelligently batches translations for optimal performance
- Path Preservation: Maintains exact JSON structure including nested objects and arrays
- Cross-Platform: Works on Windows, macOS, and Linux with automatic cache directory detection
- Developer Friendly: Built-in performance statistics and progress indicators
- Cost Effective: Minimizes API usage through smart caching and deduplication
- Language Detection: Automatically detect source language instead of assuming English
- Multiple Target Languages: Translate to multiple languages in a single command
- Translation Metadata: Optionally include translation details in output files for tracking
- Dry Run Mode: Preview what would be translated without making API calls
- Format Preservation: Maintains URLs, emails, dates, numbers, and template variables unchanged
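The incremental caching and deduplication features above can be sketched roughly as follows. This is a simplified illustration, not the tool's actual implementation; the function names, cache-key scheme, and fake provider are all hypothetical:

```python
import hashlib

def make_key(text: str, target_lang: str) -> str:
    """Cache key: hash of the source string plus the target language."""
    return hashlib.sha256(f"{target_lang}:{text}".encode()).hexdigest()

def translate_with_cache(strings, target_lang, cache, translate_batch):
    """Translate only strings missing from the cache; dedupe repeats first."""
    unique = list(dict.fromkeys(strings))            # dedupe, keep order
    misses = [s for s in unique
              if make_key(s, target_lang) not in cache]
    if misses:
        for src, dst in zip(misses, translate_batch(misses, target_lang)):
            cache[make_key(src, target_lang)] = dst  # incremental cache update
    return [cache[make_key(s, target_lang)] for s in strings]

# Fake provider for demonstration: "translates" by uppercasing.
calls = []
def fake_batch(batch, lang):
    calls.append(list(batch))
    return [s.upper() for s in batch]

cache = {}
out = translate_with_cache(["hello", "bye", "hello"], "es", cache, fake_batch)
# "hello" appears twice but is sent to the provider only once,
# and a second run with the same cache makes no provider calls at all.
```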
Installation
Global Installation (Recommended)
npm install -g translator-ai
Local Installation
npm install translator-ai
Configuration
Option 1: Google Gemini API (Cloud)
Create a .env file in your project root or set the environment variable:
GEMINI_API_KEY=your_gemini_api_key_here
Get your API key from Google AI Studio.
Option 2: OpenAI API (Cloud)
Create a .env file in your project root or set the environment variable:
OPENAI_API_KEY=your_openai_api_key_here
Get your API key from OpenAI Platform.
Option 3: Ollama with DeepSeek-R1 (Local)
For completely local translation without API costs:
- Install Ollama
- Pull the DeepSeek-R1 model:
ollama pull deepseek-r1:latest
- Use the --provider ollama flag:
translator-ai source.json -l es -o spanish.json --provider ollama
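Under the hood, an Ollama provider typically POSTs to the local Ollama HTTP API. A rough sketch of the request such a provider might build (the payload fields follow Ollama's /api/generate endpoint; the exact prompt format translator-ai uses is an assumption, and nothing is sent over the network here):

```python
import json

def build_ollama_request(texts, target_lang,
                         model="deepseek-r1:latest",
                         base_url="http://localhost:11434"):
    """Construct an Ollama /api/generate request (not actually sent here)."""
    prompt = (f"Translate the following strings to {target_lang}. "
              "Return a JSON array of translations only.\n"
              + json.dumps(texts, ensure_ascii=False))
    return {
        "url": f"{base_url}/api/generate",
        "payload": {"model": model, "prompt": prompt, "stream": False},
    }

req = build_ollama_request(["Hello", "Goodbye"], "es")
```

The --ollama-url and --ollama-model flags map directly onto the `base_url` and `model` parameters above.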
Usage
Basic Usage
# Translate a single file
translator-ai source.json -l es -o spanish.json
# Translate multiple files with deduplication
translator-ai src/locales/en/*.json -l es -o "{dir}/{name}.{lang}.json"
# Use glob patterns
translator-ai "src/**/*.en.json" -l fr -o "{dir}/{name}.fr.json"
Command Line Options
translator-ai <inputFiles...> [options]
Arguments:
inputFiles Path(s) to source JSON file(s) or glob patterns
Options:
-l, --lang <langCodes> Target language code(s), comma-separated for multiple
-o, --output <pattern> Output file path or pattern
--stdout Output to stdout instead of file
--stats Show detailed performance statistics
--no-cache Disable incremental translation cache
--cache-file <path> Custom cache file path
--provider <type> Translation provider: gemini, openai, or ollama (default: gemini)
--ollama-url <url> Ollama API URL (default: http://localhost:11434)
--ollama-model <model> Ollama model name (default: deepseek-r1:latest)
--gemini-model <model> Gemini model name (default: gemini-2.0-flash-lite)
--openai-model <model> OpenAI model name (default: gpt-4o-mini)
--list-providers List available translation providers
--verbose Enable verbose output for debugging
--detect-source Auto-detect source language instead of assuming English
--dry-run Preview what would be translated without making API calls
--preserve-formats Preserve URLs, emails, numbers, dates, and other formats
--metadata Add translation metadata to output files (may break some i18n parsers)
--sort-keys Sort output JSON keys alphabetically
--check-keys Verify all source keys exist in output (exit with error if keys are missing)
-h, --help Display help
-V, --version Display version
Output Pattern Variables (for multiple files):
{dir} - Original directory path
{name} - Original filename without extension
{lang} - Target language code
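These placeholders behave like simple string substitutions on each input path. A minimal sketch of the expansion logic (illustrative only; the tool's internal implementation may differ):

```python
import os

def expand_pattern(pattern: str, input_path: str, lang: str) -> str:
    """Expand {dir}, {name}, and {lang} in an output pattern."""
    directory = os.path.dirname(input_path)
    name = os.path.splitext(os.path.basename(input_path))[0]
    return (pattern.replace("{dir}", directory)
                   .replace("{name}", name)
                   .replace("{lang}", lang))

path = expand_pattern("{dir}/{name}.{lang}.json",
                      "src/locales/en/common.json", "es")
# → "src/locales/en/common.es.json"
```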
Examples
Translate a single file
translator-ai en.json -l es -o es.json
Translate multiple files with pattern
# All JSON files in a directory
translator-ai locales/en/*.json -l es -o "locales/es/{name}.json"
# Recursive glob pattern
translator-ai "src/**/en.json" -l fr -o "{dir}/fr.json"
# Multiple specific files
translator-ai file1.json file2.json file3.json -l de -o "{name}.de.json"
Translate with deduplication savings
# Shows statistics including how many API calls were saved
translator-ai src/i18n/*.json -l ja -o "{dir}/{name}.{lang}.json" --stats
Output to stdout (useful for piping)
translator-ai en.json -l de --stdout > de.json
Parse output with jq
translator-ai en.json -l de --stdout | jq
Disable caching for fresh translation
translator-ai en.json -l ja -o ja.json --no-cache
Use custom cache location
translator-ai en.json -l ko -o ko.json --cache-file /path/to/cache.json
Use Ollama for local translation
# Basic usage with Ollama
translator-ai en.json -l es -o es.json --provider ollama
# Use a different Ollama model
translator-ai en.json -l fr -o fr.json --provider ollama --ollama-model llama2:latest
# Connect to remote Ollama instance
translator-ai en.json -l de -o de.json --provider ollama --ollama-url http://192.168.1.100:11434
# Check available providers
translator-ai --list-providers
Advanced Features
# Detect source language automatically
translator-ai content.json -l es -o spanish.json --detect-source
# Translate to multiple languages at once
translator-ai en.json -l es,fr,de,ja -o translations/{lang}.json
# Dry run - see what would be translated without making API calls
translator-ai en.json -l es -o es.json --dry-run
# Preserve formats (URLs, emails, dates, numbers, template variables)
translator-ai app.json -l fr -o app-fr.json --preserve-formats
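Format preservation of this kind is commonly implemented by masking protected spans with placeholders before translation and restoring them afterwards. The sketch below illustrates the idea with a few regexes; it is an assumption about the approach, not translator-ai's actual patterns or placeholder syntax:

```python
import re

PATTERN = re.compile(
    r"https?://\S+"                # URLs
    r"|[\w.+-]+@[\w-]+\.[\w.]+"    # email addresses
    r"|\{\{?\w+\}?\}"              # template variables like {name} or {{name}}
)

def mask_formats(text: str):
    """Replace protected spans with placeholders; return masked text + spans."""
    spans = []
    def repl(m):
        spans.append(m.group(0))
        return f"__KEEP{len(spans) - 1}__"
    return PATTERN.sub(repl, text), spans

def unmask_formats(text: str, spans) -> str:
    """Put the original spans back after translation."""
    for i, span in enumerate(spans):
        text = text.replace(f"__KEEP{i}__", span)
    return text

masked, spans = mask_formats(
    "Visit https://example.com or email hi@example.com, {name}!")
# masked: "Visit __KEEP0__ or email __KEEP1__, __KEEP2__!"
```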
# Include translation metadata (disabled by default to ensure compatibility)
translator-ai en.json -l fr -o fr.json --metadata
# Sort keys alphabetically for consistent output
translator-ai en.json -l fr -o fr.json --sort-keys
# Verify all keys are present in the translation
translator-ai en.json -l fr -o fr.json --check-keys
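A key check like --check-keys amounts to flattening both JSON trees into dotted key paths and comparing the sets. A sketch of that idea (illustrative; not the tool's code):

```python
def flatten_keys(obj, prefix=""):
    """Collect dotted key paths from a nested JSON-like dict."""
    keys = set()
    for k, v in obj.items():
        path = f"{prefix}{k}"
        if isinstance(v, dict):
            keys |= flatten_keys(v, path + ".")
        else:
            keys.add(path)
    return keys

def missing_keys(source: dict, translated: dict):
    """Keys present in the source but absent from the translation."""
    return sorted(flatten_keys(source) - flatten_keys(translated))

source = {"nav": {"home": "Home", "about": "About"}, "title": "App"}
translated = {"nav": {"home": "Inicio"}, "title": "Aplicación"}
gaps = missing_keys(source, translated)
# → ["nav.about"], so a checker would exit with an error
```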
# Use a different Gemini model
translator-ai en.json -l es -o es.json --gemini-model gemini-2.5-flash
# Combine features
translator-ai src/**/*.json -l es,fr,de -o "{dir}/{name}.{lang}.json" \
--detect-source --preserve-formats --stats --check-keys
Available Gemini Models
The --gemini-model option allows you to choose from various Gemini models. Popular options include:
- gemini-2.0-flash-lite (default) - Fast and efficient for most translations
- gemini-2.5-flash - Enhanced performance with newer capabilities
- gemini-pro - More sophisticated understanding for complex translations
- gemini-1.5-pro - Previous generation pro model
- gemini-1.5-flash - Previous generation fast model
Example usage:
# Use the latest flash model
translator-ai en.json -l es -o es.json --gemini-model gemini-2.5-flash
# Use the default lightweight model
translator-ai en.json -l fr -o fr.json --gemini-model gemini-2.0-flash-lite
Available OpenAI Models
The --openai-model option allows you to choose from various OpenAI models. Popular options include:
- gpt-4o-mini (default) - Cost-effective and fast for most translations
- gpt-4o - Most capable model with advanced understanding
- gpt-4-turbo - Previous generation flagship model
- gpt-3.5-turbo - Fast and efficient for simpler translations
Example usage:
# Use OpenAI with the default model
translator-ai en.json -l es -o es.json --provider openai
# Use GPT-4o for complex translations
translator-ai en.json -l ja -o ja.json --provider openai --openai-model gpt-4o
# Use GPT-3.5-turbo for faster, simpler translations
translator-ai en.json -l fr -o fr.json --provider openai --openai-model gpt-3.5-turbo
Translation Metadata
When enabled with the --metadata flag, translator-ai adds metadata to help track translations:
{
"_translator_metadata": {
"tool": "translator-ai v1.1.0",
"repository": "https://github.com/DatanoiseTV/translator-ai",
"provider": "Google Gemini",
"source_language": "English",
"target_language": "fr",
"timestamp": "2025-06-20T12:34:56.789Z",
"total_strings": 42,
"source_file": "en.json"
},
"greeting": "Bonjour",
"farewell": "Au revoir"
}
Metadata is disabled by default to ensure compatibility with i18n parsers. Use --metadata to enable it.
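If a downstream i18n parser does choke on the metadata block, it is easy to strip before loading. For example (the `_translator_metadata` key name comes from the output shown above):

```python
import json

def strip_metadata(raw: str) -> dict:
    """Load a translated JSON file and drop the metadata block, if present."""
    data = json.loads(raw)
    data.pop("_translator_metadata", None)
    return data

raw = ('{"_translator_metadata": {"tool": "translator-ai v1.1.0"}, '
       '"greeting": "Bonjour"}')
clean = strip_metadata(raw)
# → {"greeting": "Bonjour"}
```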
Key Sorting
Use the --sort-keys flag to sort all JSON keys alphabetically in the output:
translator-ai en.json -l es -o es.json --sort-keys
This ensures consistent ordering across translations and makes diffs cleaner. Keys are sorted:
- Case-insensitively
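Recursive case-insensitive sorting might be implemented like this (a sketch; the tool's exact tie-breaking between upper- and lower-case keys is not documented here):

```python
def sort_keys_recursive(obj):
    """Sort dict keys case-insensitively at every nesting level."""
    if isinstance(obj, dict):
        return {k: sort_keys_recursive(obj[k])
                for k in sorted(obj, key=str.lower)}
    if isinstance(obj, list):
        return [sort_keys_recursive(v) for v in obj]
    return obj

data = {"Zebra": "z", "apple": "a", "Nested": {"beta": 1, "Alpha": 2}}
ordered = sort_keys_recursive(data)
# key order: apple, Nested, Zebra (and Alpha before beta inside Nested)
```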