Lara Translate

Official MCP server by Translated

Connects to the Lara Translate API to provide professional-grade text translations with automatic language detection and context preservation.

What it does

  • Translate text between multiple languages
  • Automatically detect source language
  • Access translation memories for consistency
  • Perform context-aware translations
  • Handle batch translation requests

Best for

  • Content creators working with multilingual content
  • Developers building international applications
  • Businesses needing professional translation services
  • AI applications requiring translation capabilities

Key features: professional-grade translation API, context-aware translations, and translation memory support.

About Lara Translate

Lara Translate is an official MCP server published by Translated that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers language translation with automatic source-language detection across a wide range of language pairs, and is categorized under AI/ML.

How to install

You can install Lara Translate in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
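For reference, a local stdio setup typically looks like the following client configuration. This is a sketch: the package name (`@translated/lara-mcp`) and the credential placeholders are assumptions here, so check the install panel or the GitHub README for the authoritative command:

```json
{
  "mcpServers": {
    "lara-translate": {
      "command": "npx",
      "args": ["-y", "@translated/lara-mcp@latest"],
      "env": {
        "LARA_ACCESS_KEY_ID": "<your-access-key-id>",
        "LARA_ACCESS_KEY_SECRET": "<your-access-key-secret>"
      }
    }
  }
}
```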

License

Lara Translate is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Lara Translate MCP Server

A Model Context Protocol (MCP) Server for Lara Translate API, enabling powerful translation capabilities with support for language detection, context-aware translations and translation memories.


📖 Introduction

What is MCP?

Model Context Protocol (MCP) is an open standardized communication protocol that enables AI applications to connect with external tools, data sources, and services. Think of MCP like a USB-C port for AI applications - just as USB-C provides a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources and tools.

Lara Translate MCP Server enables AI applications to access Lara Translate's powerful translation capabilities through this standardized protocol.

More information about the Model Context Protocol: https://modelcontextprotocol.io/

How Lara Translate MCP Works

Lara Translate MCP Server implements the Model Context Protocol to provide seamless translation capabilities to AI applications. The integration follows this flow:

  1. Connection Establishment: When an MCP-compatible AI application starts, it connects to configured MCP servers, including the Lara Translate MCP Server
  2. Tool & Resource Discovery: The AI application discovers available translation tools and resources provided by the Lara Translate MCP Server
  3. Request Processing: When translation needs are identified:
    • The AI application formats a structured request with text to translate, language pairs, and optional context
    • The MCP server validates the request and transforms it into Lara Translate API calls
    • The request is securely sent to Lara Translate's API using your credentials
  4. Translation & Response: Lara Translate processes the translation using advanced AI models
  5. Result Integration: The translation results are returned to the AI application, which can then incorporate them into its response

This integration architecture allows AI applications to access professional-grade translations without implementing the API directly, while maintaining the security of your API credentials and offering flexibility to adjust translation parameters through natural language instructions.
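Concretely, the request in step 3 is a standard MCP `tools/call` JSON-RPC message. A minimal sketch, with the argument shape following the `translate` tool schema listed later in this README:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "translate",
    "arguments": {
      "text": [{ "text": "Hello, world!", "translatable": true }],
      "target": "it-IT",
      "context": "Greeting on a product landing page"
    }
  }
}
```

The MCP server translates this into a Lara Translate API call using your configured credentials and returns the result in the JSON-RPC response.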

Why use Lara inside an LLM

Integrating Lara with LLMs creates a powerful synergy that significantly enhances translation quality for non-English languages.

Why General LLMs Fall Short in Translation

While large language models possess broad linguistic capabilities, they often lack the specialized expertise and up-to-date terminology required for accurate translations in specific domains and languages.

Lara's Domain-Specific Advantage

Lara overcomes this limitation by leveraging Translation Language Models (T-LMs) trained on billions of professionally translated segments. These models provide domain-specific machine translation that captures cultural nuances and industry terminology that generic LLMs may miss. The result: translations that are contextually accurate and sound natural to native speakers.

Designed for Non-English Strength

Lara has a strong focus on non-English languages, addressing the performance gap found in models such as GPT-4. The dominance of English in datasets such as Common Crawl and Wikipedia results in lower quality output in other languages. Lara helps close this gap by providing higher quality understanding, generation, and restructuring in a multilingual context.

Faster, Smarter Multilingual Performance

By offloading complex translation tasks to specialized T-LMs, Lara reduces computational overhead and minimizes latency, a common issue for LLMs handling non-English input. Its architecture processes translations in parallel with the LLM, enabling real-time, high-quality output without compromising speed or efficiency.

Cost-Efficient Translation at Scale

Lara also lowers the cost of using models like GPT-4 in non-English workflows. Since tokenization (and pricing) is optimized for English, using Lara allows translation to take place before hitting the LLM, meaning that only the translated English content is processed. This improves cost efficiency and supports competitive scalability for global enterprises.

🛠 Available Tools

Translation Tools

translate - Translate text between languages

Inputs:

  • text (array): An array of text blocks to translate, each with:
    • text (string): The text content
    • translatable (boolean): Whether this block should be translated
  • source (optional string): Source language code (e.g., 'en-EN')
  • target (string): Target language code (e.g., 'it-IT')
  • context (optional string): Additional context to improve translation quality
  • instructions (optional string[]): Instructions to adjust translation behavior
  • source_hint (optional string): Guidance for language detection
  • glossaries (optional string[]): Array of glossary IDs to enforce terminology (e.g., ['gls_xyz123'])
  • no_trace (optional boolean): Privacy flag - if true, request won't be traced/logged
  • priority (optional string): Translation priority - 'normal' or 'background'
  • timeout_in_millis (optional number): Custom timeout in milliseconds

Returns: Translated text blocks maintaining the original structure
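Putting the schema together, here is a sketch of a `translate` call's arguments combining several optional parameters. The values (glossary ID, instruction text, the non-translatable block) are illustrative, not taken from a real account:

```json
{
  "text": [
    { "text": "<header>", "translatable": false },
    { "text": "Our new feature ships next week.", "translatable": true }
  ],
  "source": "en-EN",
  "target": "it-IT",
  "instructions": ["Use a formal tone"],
  "glossaries": ["gls_xyz123"],
  "priority": "normal"
}
```

Marking markup or code blocks as `translatable: false` lets you pass mixed content through while translating only the prose.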

Glossaries Tools

list_glossaries - List all glossaries

Inputs: None

Returns: Array of glossaries with their details (id, name, createdAt, updatedAt, ownerId)

get_glossary - Get a specific glossary by ID

Inputs:

  • id (string): The glossary ID (e.g., 'gls_xyz123')

Returns: Glossary object or null if not found

Translation Memories Tools

list_memories - List saved translation memories

Returns: Array of memories and their details

create_memory - Create a new translation memory

Inputs:

  • name (string): Name of the new memory
  • external_id (optional string): ID of the memory to import from MyMemory (e.g., 'ext_my_[MyMemory ID]')

Returns: Created memory data

update_memory - Update translation memory name

Inputs:

  • id (string): ID of the memory to update
  • name (string): The new name for the memory

Returns: Updated memory data

delete_memory - Delete a translation memory

Inputs:

  • id (string): ID of the memory to delete

Returns: Deleted memory data

add_translation - Add a translation unit to memory

Inputs:

  • id (string | string[]): ID or IDs of memories where to add the translation unit
  • source (string): Source language code
  • target (string): Target language code
  • sentence (string): The source sentence
  • translation (string): The translated sentence
  • tuid (optional string): Translation Unit unique identifier
  • sentence_before (optional string): Context sentence before
  • sentence_after (optional string): Context sentence after

Returns: Added translation details
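As a sketch, adding a translation unit with surrounding context to a single memory (the memory ID `mem_abc123` is a hypothetical placeholder):

```json
{
  "id": "mem_abc123",
  "source": "en-EN",
  "target": "it-IT",
  "sentence": "Click Save to apply your changes.",
  "translation": "Fai clic su Salva per applicare le modifiche.",
  "sentence_before": "Edit the fields you want to change."
}
```

Passing an array of IDs instead of a single string adds the same unit to multiple memories at once.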

delete_translation - Delete a translation unit from memory

Inputs:

  • id (string): ID of the memory
  • source (string): Source language code
  • target (string): Target language code
  • sentence (string): The source sentence
  • translation (string): The translated sentence
  • tuid (optional string): Translation Unit unique identifier
  • sentence_before (optional string): Context sentence before
  • sentence_after (optional string): Context sentence after

Returns: Removed translation details

import_tmx - Import a TMX file into a memory

Inputs:

  • id (string): ID of the memory to update
  • tmx_content (string): The content of the tmx file to upload
  • gzip (boolean): Indicates if the file is compressed (.gz)

Returns: Import details
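For instance, importing a small uncompressed TMX payload might look like the following sketch (memory ID hypothetical, TMX body abbreviated):

```json
{
  "id": "mem_abc123",
  "tmx_content": "<?xml version=\"1.0\"?><tmx version=\"1.4\">...</tmx>",
  "gzip": false
}
```

For larger files, gzip the TMX content and set `gzip` to `true`, then poll `check_import_status` with the returned import job ID until the import completes.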

check_import_status - Checks the status of a TMX file import

Inputs:

  • id (string): The ID of the import job

Returns: Import details

🚀 Getting Started

Lara supports both the STDIO and streamable HTTP transports. For a hassle-free setup, we recommend the HTTP transport. If you prefer STDIO, the server must be installed and run locally on your machine.

You'll find setup instructions for both protocols in the sections below.

โš ๏ธ Security Note

Important: When running your own HTTP server instance (not using the remote https://mcp.laratranslate.com/v1), all connected clients share the same Lara API credentials configured via the LARA_ACCESS_KEY_ID and LARA_ACCESS_KEY_SECRET environment variables.
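For the hosted HTTP option, some MCP clients let you point directly at the remote endpoint instead of spawning a local process. A hedged sketch, noting that the exact configuration key for remote servers varies by client and that how credentials are supplied over HTTP is not covered here (consult the full README):

```json
{
  "mcpServers": {
    "lara-translate": {
      "url": "https://mcp.laratranslate.com/v1"
    }
  }
}
```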


README truncated. View full README on GitHub.
