Trino (SQL Query Engine)


By alaturqua

Connects AI systems to Trino databases for executing SQL queries, exploring data, and managing Iceberg table maintenance operations.



What it does

  • Execute SQL queries on Trino databases
  • Browse database catalogs and schemas
  • Explore table structures and metadata
  • Perform Iceberg table optimization
  • Navigate multi-catalog environments
  • Format and display query results

Best for

  • Data analysts exploring large datasets
  • AI applications requiring SQL database access
  • Automated Iceberg table maintenance workflows
  • Interactive data analysis with Trino clusters

Supports

  • Trino and Iceberg integration
  • Docker containerized deployment
  • Multi-catalog database navigation

About Trino (SQL Query Engine)

Trino (SQL Query Engine) is a community-built MCP server published by alaturqua that provides AI assistants with tools and capabilities via the Model Context Protocol. It connects AI systems to Trino for SQL execution, data exploration, and efficient Iceberg table maintenance. It is categorized under databases and analytics data.

How to install

You can install Trino (SQL Query Engine) in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Trino (SQL Query Engine) is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.


MCP Trino Server


The MCP Trino Server is a Model Context Protocol (MCP) server that provides seamless integration with Trino and Iceberg, enabling advanced data exploration, querying, and table maintenance capabilities through a standard interface.

Use Cases

  • Interactive data exploration and analysis in Trino
  • Automated Iceberg table maintenance and optimization
  • Building AI-powered tools that interact with Trino databases
  • Executing and managing SQL queries with proper result formatting

Prerequisites

  1. A running Trino server (or Docker Compose for local development)
  2. Python 3.12 or higher
  3. Docker (optional, for containerized deployment)

Quick Start

1. Clone the Repository

git clone https://github.com/alaturqua/mcp-trino-python.git
cd mcp-trino-python

2. Create Environment File

Create a .env file in the root directory:

TRINO_HOST=localhost
TRINO_PORT=8080
TRINO_USER=trino
TRINO_CATALOG=tpch
TRINO_SCHEMA=tiny
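The server reads its connection settings from these variables. As an illustrative sketch (the actual server may load the file differently, for example via python-dotenv), a `.env` file in this format can be parsed with nothing but the standard library:

```python
from pathlib import Path

def parse_env_file(path: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```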

3. Run Trino Locally (Optional)

docker-compose up -d trino

This starts a Trino server on localhost:8080 with sample TPC-H and TPC-DS data.

Installation

Installing via Smithery

To install MCP Trino Server for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @alaturqua/mcp-trino-python --client claude

Using uv (Recommended)

uv sync
uv run src/server.py

Using pip

pip install -e .
python src/server.py

Transport Modes

The server supports three transport modes:

| Transport | Description | Use Case |
| --- | --- | --- |
| `stdio` | Standard I/O (default) | VS Code, Claude Desktop, local MCP clients |
| `streamable-http` | HTTP with streaming | Remote access, web clients, Docker |
| `sse` | Server-Sent Events | Legacy HTTP transport |

Running with Different Transports

# stdio (default) - for VS Code and Claude Desktop
python src/server.py

# Streamable HTTP - for remote/web access
python src/server.py --transport streamable-http --host 0.0.0.0 --port 8000

# SSE - legacy HTTP transport
python src/server.py --transport sse --host 0.0.0.0 --port 8000
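A minimal sketch of how these flags might be parsed with argparse (the flag names are taken from the commands above; the defaults here are illustrative, not read from the server's source):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI mirroring the transport flags shown above."""
    parser = argparse.ArgumentParser(description="MCP Trino server")
    parser.add_argument(
        "--transport",
        choices=["stdio", "streamable-http", "sse"],
        default="stdio",
        help="MCP transport to use (stdio is the default)",
    )
    parser.add_argument("--host", default="127.0.0.1",
                        help="bind address for the HTTP transports")
    parser.add_argument("--port", type=int, default=8000,
                        help="listen port for the HTTP transports")
    return parser
```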

Usage with VS Code

Add to your VS Code settings (Ctrl+Shift+P, then Preferences: Open User Settings (JSON)):

{
  "mcp": {
    "servers": {
      "mcp-trino-python": {
        "command": "uv",
        "args": [
          "run",
          "--with",
          "mcp[cli]",
          "--with",
          "trino",
          "--with",
          "loguru",
          "mcp",
          "run",
          "/path/to/mcp-trino-python/src/server.py"
        ],
        "envFile": "/path/to/mcp-trino-python/.env"
      }
    }
  }
}

Or add to .vscode/mcp.json in your workspace (without the mcp wrapper key).

Usage with Claude Desktop

Add to your Claude Desktop configuration:

{
  "mcpServers": {
    "trino": {
      "command": "python",
      "args": ["./src/server.py"],
      "env": {
        "TRINO_HOST": "your-trino-host",
        "TRINO_PORT": "8080",
        "TRINO_USER": "trino"
      }
    }
  }
}

Docker Usage

Build the Image

docker build -t mcp-trino-python .

Run with stdio (for VS Code)

docker run -i --rm \
  -e TRINO_HOST=host.docker.internal \
  -e TRINO_PORT=8080 \
  -e TRINO_USER=trino \
  mcp-trino-python

Run with Streamable HTTP

docker run -p 8000:8000 \
  -e TRINO_HOST=host.docker.internal \
  -e TRINO_PORT=8080 \
  mcp-trino-python \
  --transport streamable-http --host 0.0.0.0 --port 8000

Docker Compose

# Start Trino + MCP server with Streamable HTTP
docker-compose up -d

# Start with SSE transport
docker-compose --profile sse up -d

# Run stdio for testing
docker-compose --profile stdio run --rm mcp-trino-stdio

VS Code with Docker

{
  "mcp": {
    "servers": {
      "mcp-trino-python": {
        "command": "docker",
        "args": [
          "run",
          "-i",
          "--rm",
          "--network",
          "mcp-trino-python_trino-network",
          "-e",
          "TRINO_HOST=trino",
          "-e",
          "TRINO_PORT=8080",
          "-e",
          "TRINO_USER=trino",
          "mcp-trino-python"
        ]
      }
    }
  }
}

Configuration

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| `TRINO_HOST` | Trino server hostname | `localhost` |
| `TRINO_PORT` | Trino server port | `8080` |
| `TRINO_USER` | Trino username | `trino` |
| `TRINO_CATALOG` | Default catalog | None |
| `TRINO_SCHEMA` | Default schema | None |
| `TRINO_HTTP_SCHEME` | HTTP scheme (http/https) | `http` |
| `TRINO_PASSWORD` | Trino password | None |
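As a sketch of how these variables and defaults combine into client connection settings (the parameter names mirror the `trino` Python client's `trino.dbapi.connect`, but this helper is illustrative, not the server's actual code):

```python
import os

def trino_connection_params(env=None) -> dict:
    """Assemble connection settings using the defaults listed above."""
    if env is None:
        env = os.environ
    params = {
        "host": env.get("TRINO_HOST", "localhost"),
        "port": int(env.get("TRINO_PORT", "8080")),
        "user": env.get("TRINO_USER", "trino"),
        "http_scheme": env.get("TRINO_HTTP_SCHEME", "http"),
    }
    # TRINO_CATALOG, TRINO_SCHEMA, and TRINO_PASSWORD default to unset.
    for key, var in (("catalog", "TRINO_CATALOG"),
                     ("schema", "TRINO_SCHEMA"),
                     ("password", "TRINO_PASSWORD")):
        if env.get(var):
            params[key] = env[var]
    return params
```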

Tools

Query and Exploration Tools

  • show_catalogs

    • List all available catalogs
    • No parameters required
  • show_schemas

    • List all schemas in a catalog
    • Parameters:
      • catalog: Catalog name (string, required)
  • show_tables

    • List all tables in a schema
    • Parameters:
      • catalog: Catalog name (string, required)
      • schema: Schema name (string, required)
  • describe_table

    • Show detailed table structure and column information
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • execute_query

    • Execute a SQL query and return formatted results
    • Parameters:
      • query: SQL query to execute (string, required)
  • show_catalog_tree

    • Show a hierarchical tree view of catalogs, schemas, and tables
    • Returns a formatted tree structure with visual indicators
    • No parameters required
  • show_create_table

    • Show the CREATE TABLE statement for a table
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • show_create_view

    • Show the CREATE VIEW statement for a view
    • Parameters:
      • view: View name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • show_stats

    • Show statistics for a table
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)

Iceberg Table Maintenance

  • optimize

    • Optimize an Iceberg table by compacting small files
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • optimize_manifests

    • Optimize manifest files for an Iceberg table
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • expire_snapshots

    • Remove old snapshots from an Iceberg table
    • Parameters:
      • table: Table name (string, required)
      • retention_threshold: Age threshold (e.g., "7d") (string, optional)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
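In Trino's Iceberg connector these maintenance operations correspond to `ALTER TABLE ... EXECUTE` statements, e.g. `ALTER TABLE t EXECUTE optimize` or `ALTER TABLE t EXECUTE expire_snapshots(retention_threshold => '7d')`. A hypothetical helper sketching that mapping (it does not quote identifiers and is not the server's actual code):

```python
def maintenance_sql(procedure, table, retention_threshold=None):
    """Build an ALTER TABLE ... EXECUTE statement for Iceberg maintenance."""
    sql = f"ALTER TABLE {table} EXECUTE {procedure}"
    if retention_threshold is not None:
        # expire_snapshots accepts a retention threshold such as '7d'
        sql += f"(retention_threshold => '{retention_threshold}')"
    return sql
```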

Iceberg Metadata Inspection

  • show_table_properties

    • Show Iceberg table properties
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • show_table_history

    • Show Iceberg table history/changelog
    • Contains snapshot timing, lineage, and ancestry information
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • show_metadata_log_entries

    • Show Iceberg table metadata log entries
    • Contains metadata file locations and sequence information
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
  • show_snapshots

    • Show Iceberg table snapshots
    • Contains snapshot details including operations and manifest files
    • Parameters:
      • table: Table name (string, required)
      • catalog: Catalog name (string, optional)
      • schema: Schema name (string, optional)
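Trino's Iceberg connector exposes this metadata through hidden tables named `"table$properties"`, `"table$history"`, `"table$metadata_log_entries"`, and `"table$snapshots"`. A hypothetical sketch of building such queries (illustrative, not the server's internals):

```python
def metadata_table_sql(table, metadata, catalog=None, schema=None):
    """SELECT from an Iceberg hidden metadata table, e.g. $snapshots."""
    prefix = ".".join(p for p in (catalog, schema) if p)
    name = f'"{table}${metadata}"'  # hidden tables must be quoted
    qualified = f"{prefix}.{name}" if prefix else name
    return f"SELECT * FROM {qualified}"
```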

(README truncated; see the full README on GitHub.)
