
Trino (SQL Query Engine)
Connects AI systems to Trino databases for executing SQL queries, exploring data, and managing Iceberg table maintenance operations.
What it does
- Execute SQL queries on Trino databases
- Browse database catalogs and schemas
- Explore table structures and metadata
- Perform Iceberg table optimization
- Navigate multi-catalog environments
- Format and display query results
About Trino (SQL Query Engine)
Trino (SQL Query Engine) is a community-built MCP server published by alaturqua that provides AI assistants with tools and capabilities via the Model Context Protocol. It connects AI systems to Trino for SQL execution, data exploration, and Iceberg table maintenance. It is categorized under databases and analytics data.
How to install
You can install Trino (SQL Query Engine) in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
Trino (SQL Query Engine) is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
MCP Trino Server
The MCP Trino Server is a Model Context Protocol (MCP) server that provides seamless integration with Trino and Iceberg, enabling advanced data exploration, querying, and table maintenance capabilities through a standard interface.
Use Cases
- Interactive data exploration and analysis in Trino
- Automated Iceberg table maintenance and optimization
- Building AI-powered tools that interact with Trino databases
- Executing and managing SQL queries with proper result formatting
Prerequisites
- A running Trino server (or Docker Compose for local development)
- Python 3.12 or higher
- Docker (optional, for containerized deployment)
Quick Start
1. Clone the Repository
git clone https://github.com/alaturqua/mcp-trino-python.git
cd mcp-trino-python
2. Create Environment File
Create a .env file in the root directory:
TRINO_HOST=localhost
TRINO_PORT=8080
TRINO_USER=trino
TRINO_CATALOG=tpch
TRINO_SCHEMA=tiny
3. Run Trino Locally (Optional)
docker-compose up -d trino
This starts a Trino server on localhost:8080 with sample TPC-H and TPC-DS data.
Installation
Installing via Smithery
To install MCP Trino Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @alaturqua/mcp-trino-python --client claude
Using uv (Recommended)
uv sync
uv run src/server.py
Using pip
pip install -e .
python src/server.py
Transport Modes
The server supports three transport modes:
| Transport | Description | Use Case |
|---|---|---|
| stdio | Standard I/O (default) | VS Code, Claude Desktop, local MCP clients |
| streamable-http | HTTP with streaming | Remote access, web clients, Docker |
| sse | Server-Sent Events | Legacy HTTP transport |
Running with Different Transports
# stdio (default) - for VS Code and Claude Desktop
python src/server.py
# Streamable HTTP - for remote/web access
python src/server.py --transport streamable-http --host 0.0.0.0 --port 8000
# SSE - legacy HTTP transport
python src/server.py --transport sse --host 0.0.0.0 --port 8000
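The `--transport`, `--host`, and `--port` flags above could be wired up with `argparse` roughly as follows. This is an illustrative sketch based only on the commands shown, not the server's actual argument-parsing code:

```python
import argparse

def parse_args(argv=None):
    """Parse the transport-selection flags shown above (illustrative only)."""
    parser = argparse.ArgumentParser(description="MCP Trino server (sketch)")
    parser.add_argument(
        "--transport",
        choices=["stdio", "streamable-http", "sse"],
        default="stdio",
        help="MCP transport to use (default: stdio)",
    )
    parser.add_argument("--host", default="127.0.0.1",
                        help="Bind address for HTTP transports")
    parser.add_argument("--port", type=int, default=8000,
                        help="Port for HTTP transports")
    return parser.parse_args(argv)

# Example: the streamable-http invocation from above
args = parse_args(["--transport", "streamable-http", "--host", "0.0.0.0", "--port", "8000"])
```

Note that `--host` and `--port` only matter for the two HTTP transports; stdio ignores them.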
Usage with VS Code
Add to your VS Code settings (Ctrl+Shift+P → Preferences: Open User Settings (JSON)):
{
  "mcp": {
    "servers": {
      "mcp-trino-python": {
        "command": "uv",
        "args": [
          "run",
          "--with",
          "mcp[cli]",
          "--with",
          "trino",
          "--with",
          "loguru",
          "mcp",
          "run",
          "/path/to/mcp-trino-python/src/server.py"
        ],
        "envFile": "/path/to/mcp-trino-python/.env"
      }
    }
  }
}
Or add to .vscode/mcp.json in your workspace (without the mcp wrapper key).
Usage with Claude Desktop
Add to your Claude Desktop configuration:
{
  "mcpServers": {
    "trino": {
      "command": "python",
      "args": ["./src/server.py"],
      "env": {
        "TRINO_HOST": "your-trino-host",
        "TRINO_PORT": "8080",
        "TRINO_USER": "trino"
      }
    }
  }
}
Docker Usage
Build the Image
docker build -t mcp-trino-python .
Run with stdio (for VS Code)
docker run -i --rm \
-e TRINO_HOST=host.docker.internal \
-e TRINO_PORT=8080 \
-e TRINO_USER=trino \
mcp-trino-python
Run with Streamable HTTP
docker run -p 8000:8000 \
-e TRINO_HOST=host.docker.internal \
-e TRINO_PORT=8080 \
mcp-trino-python \
--transport streamable-http --host 0.0.0.0 --port 8000
Docker Compose
# Start Trino + MCP server with Streamable HTTP
docker-compose up -d
# Start with SSE transport
docker-compose --profile sse up -d
# Run stdio for testing
docker-compose --profile stdio run --rm mcp-trino-stdio
VS Code with Docker
{
  "mcp": {
    "servers": {
      "mcp-trino-python": {
        "command": "docker",
        "args": [
          "run",
          "-i",
          "--rm",
          "--network",
          "mcp-trino-python_trino-network",
          "-e",
          "TRINO_HOST=trino",
          "-e",
          "TRINO_PORT=8080",
          "-e",
          "TRINO_USER=trino",
          "mcp-trino-python"
        ]
      }
    }
  }
}
Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| TRINO_HOST | Trino server hostname | localhost |
| TRINO_PORT | Trino server port | 8080 |
| TRINO_USER | Trino username | trino |
| TRINO_CATALOG | Default catalog | None |
| TRINO_SCHEMA | Default schema | None |
| TRINO_HTTP_SCHEME | HTTP scheme (http/https) | http |
| TRINO_PASSWORD | Trino password | None |
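One plausible way to resolve these variables in code, using only the stdlib and the defaults from the table above (a sketch; the server's real configuration loading may differ):

```python
import os

def load_trino_config(env=None):
    """Resolve connection settings from environment variables,
    falling back to the defaults listed in the table above."""
    if env is None:
        env = os.environ
    return {
        "host": env.get("TRINO_HOST", "localhost"),
        "port": int(env.get("TRINO_PORT", "8080")),
        "user": env.get("TRINO_USER", "trino"),
        "catalog": env.get("TRINO_CATALOG"),          # None if unset
        "schema": env.get("TRINO_SCHEMA"),            # None if unset
        "http_scheme": env.get("TRINO_HTTP_SCHEME", "http"),
        "password": env.get("TRINO_PASSWORD"),        # None if unset
    }

# Example: overriding host, port, and scheme as you would for a TLS endpoint
cfg = load_trino_config({"TRINO_HOST": "trino.example.com",
                         "TRINO_PORT": "8443",
                         "TRINO_HTTP_SCHEME": "https"})
```

When `TRINO_HTTP_SCHEME=https`, you would normally also set `TRINO_PASSWORD`, since Trino requires TLS for password authentication.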
Tools
Query and Exploration Tools
- `show_catalogs`
  - List all available catalogs
  - No parameters required
- `show_schemas`
  - List all schemas in a catalog
  - Parameters:
    - `catalog`: Catalog name (string, required)
- `show_tables`
  - List all tables in a schema
  - Parameters:
    - `catalog`: Catalog name (string, required)
    - `schema`: Schema name (string, required)
- `describe_table`
  - Show detailed table structure and column information
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `execute_query`
  - Execute a SQL query and return formatted results
  - Parameters:
    - `query`: SQL query to execute (string, required)
- `show_catalog_tree`
  - Show a hierarchical tree view of catalogs, schemas, and tables
  - Returns a formatted tree structure with visual indicators
  - No parameters required
- `show_create_table`
  - Show the CREATE TABLE statement for a table
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `show_create_view`
  - Show the CREATE VIEW statement for a view
  - Parameters:
    - `view`: View name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `show_stats`
  - Show statistics for a table
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
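Several of these tools accept an optional `catalog` and `schema` alongside the required `table`. One way such a server might assemble the fully qualified name Trino expects (a hypothetical helper, not the server's actual code):

```python
def qualify(table, catalog=None, schema=None):
    """Build a fully qualified Trino table name from optional parts.

    Hypothetical helper: falls back to the session's default catalog/schema
    by simply omitting the missing parts from the name.
    """
    if catalog and not schema:
        # catalog.table is not a valid Trino reference
        raise ValueError("schema is required when catalog is given")
    parts = [p for p in (catalog, schema, table) if p]
    return ".".join(parts)

# Example: the TPC-H sample table from the Quick Start .env
name = qualify("nation", catalog="tpch", schema="tiny")
```

With only `table` given, the name resolves against the `TRINO_CATALOG`/`TRINO_SCHEMA` defaults from the environment.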
Iceberg Table Maintenance
- `optimize`
  - Optimize an Iceberg table by compacting small files
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `optimize_manifests`
  - Optimize manifest files for an Iceberg table
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `expire_snapshots`
  - Remove old snapshots from an Iceberg table
  - Parameters:
    - `table`: Table name (string, required)
    - `retention_threshold`: Age threshold (e.g., "7d") (string, optional)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
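Trino exposes Iceberg maintenance through `ALTER TABLE … EXECUTE` procedures, so these tools presumably render statements along the following lines. The helper below is a sketch; the server may assemble its SQL differently:

```python
def maintenance_sql(procedure, table, retention_threshold=None):
    """Render an ALTER TABLE ... EXECUTE statement for Iceberg maintenance.

    Illustrative only. `procedure` is one of Trino's Iceberg procedures
    (optimize, optimize_manifests, expire_snapshots); `table` should
    already be fully qualified.
    """
    stmt = f"ALTER TABLE {table} EXECUTE {procedure}"
    if retention_threshold is not None:
        stmt += f"(retention_threshold => '{retention_threshold}')"
    return stmt

# Examples mirroring the three tools above
compact = maintenance_sql("optimize", "iceberg.analytics.events")
expire = maintenance_sql("expire_snapshots", "iceberg.analytics.events",
                         retention_threshold="7d")
```

Note that Trino refuses `expire_snapshots` retention values below the connector's configured minimum (7 days by default), which is a sensible default for the tool's `retention_threshold` as well.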
Iceberg Metadata Inspection
- `show_table_properties`
  - Show Iceberg table properties
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `show_table_history`
  - Show Iceberg table history/changelog
  - Contains snapshot timing, lineage, and ancestry information
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `show_metadata_log_entries`
  - Show Iceberg table metadata log entries
  - Contains metadata file locations and sequence information
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
- `show_snapshots`
  - Show Iceberg table snapshots
  - Contains snapshot details including operations and manifest files
  - Parameters:
    - `table`: Table name (string, required)
    - `catalog`: Catalog name (string, optional)
    - `schema`: Schema name (string, optional)
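Trino's Iceberg connector surfaces this metadata through hidden `$`-suffixed tables (`"t$properties"`, `"t$history"`, `"t$metadata_log_entries"`, `"t$snapshots"`), so these tools likely reduce to simple SELECTs against them. A sketch of how such queries might be built (the helper is hypothetical):

```python
def metadata_query(table, kind):
    """Build a SELECT against one of Trino's Iceberg '$' metadata tables.

    Hypothetical helper; the server's real SQL may differ. `kind` maps to
    the four inspection tools above.
    """
    kinds = {"properties", "history", "metadata_log_entries", "snapshots"}
    if kind not in kinds:
        raise ValueError(f"unknown metadata table: {kind}")
    # The $-suffixed name must be double-quoted in Trino SQL
    return f'SELECT * FROM "{table}${kind}"'

# Example: the query behind a show_snapshots-style tool
q = metadata_query("events", "snapshots")
```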
