Powerdrill

Official server by powerdrillai

Connects to Powerdrill datasets to perform AI-powered data analysis through natural language questions. Requires a Powerdrill User ID and Project API Key for authentication.

Local (stdio)

What it does

  • List available datasets in your Powerdrill account
  • Get detailed information about specific datasets
  • Create and run analysis jobs using natural language questions
  • Query data through AI-powered analysis

Best for

  • Data analysts needing conversational data exploration
  • Teams with existing Powerdrill datasets
  • AI-assisted business intelligence workflows
  • Natural language data queries
  • Integrates with existing Powerdrill accounts

About Powerdrill

Powerdrill is an official MCP server published by powerdrillai that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers efficient tools for interacting with Powerdrill datasets and streamlining data analysis. It is categorized under AI/ML and Analytics & Data.

How to install

You can install Powerdrill in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Powerdrill is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Powerdrill MCP Server

smithery badge

A Model Context Protocol (MCP) server that provides tools to interact with Powerdrill datasets, authenticated with your Powerdrill User ID and Project API Key.

Visit https://chat.powerdrill.ai/ for AI data analysis, either individually or with your Team.

If you have your Team's Powerdrill User ID and Project API Key, you can also work with your data via Powerdrill's open-source web clients.

Features

  • Authenticate with Powerdrill using User ID and Project API Key
  • List available datasets in your Powerdrill account
  • Get detailed information about specific datasets
  • Create and run jobs on datasets with natural language questions
  • Integration with Claude Desktop and other MCP-compatible clients

Installation

Installing via Smithery

To install powerdrill-mcp for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @powerdrillai/powerdrill-mcp --client claude

From npm

# Install globally
npm install -g @powerdrillai/powerdrill-mcp

# Or run directly with npx
npx @powerdrillai/powerdrill-mcp

From Source

Clone this repository and install dependencies:

git clone https://github.com/yourusername/powerdrill-mcp.git
cd powerdrill-mcp
npm install

CLI Usage

If installed globally:

# Start the MCP server
powerdrill-mcp

If using npx:

# Run the latest version
npx -y @powerdrillai/powerdrill-mcp@latest

You'll need to configure environment variables with your Powerdrill credentials before running:

# Set environment variables
export POWERDRILL_USER_ID="your_user_id"
export POWERDRILL_PROJECT_API_KEY="your_project_api_key"

Or create a .env file with these values.
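
Before starting the server, it can help to verify that both variables are actually set. The variable names below come from this README; the placeholder values and the guard loop itself are purely illustrative:

```shell
# Illustrative sanity check: fail fast if Powerdrill credentials are missing.
# Replace the placeholder values with your real credentials.
export POWERDRILL_USER_ID="your_user_id"
export POWERDRILL_PROJECT_API_KEY="your_project_api_key"

for var in POWERDRILL_USER_ID POWERDRILL_PROJECT_API_KEY; do
  if [ -z "$(printenv "$var")" ]; then
    echo "error: $var is not set" >&2
    exit 1
  fi
done
echo "credentials present"
```

The server reads these at startup, so the check only needs to pass in the shell (or MCP client configuration) that launches it.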

Prerequisites

To use this MCP server, you'll need a Powerdrill account with valid API credentials (User ID and Project API Key). Here's how to obtain them:

  1. Sign up for a Powerdrill Team account if you haven't already
  2. Navigate to your account settings
  3. Look for the API section where you'll find your:
    • User ID: A unique identifier for your account
    • Project API Key: Your authentication token for API access

First, watch this video tutorial on how to create your Powerdrill Team:

Create Powerdrill Team Tutorial

Then, follow this video tutorial for setting up your API credentials:

Powerdrill API Setup Tutorial

Quick Setup

The easiest way to set up the server is using the provided setup script:

# Make the script executable
chmod +x setup.sh

# Run the setup script
./setup.sh

This will:

  1. Install dependencies
  2. Build the TypeScript code
  3. Create a .env file if it doesn't exist
  4. Generate npx-based configuration files for Claude Desktop and Cursor (recommended)

Then edit your .env file with your actual credentials:

POWERDRILL_USER_ID=your_actual_user_id
POWERDRILL_PROJECT_API_KEY=your_actual_project_api_key

Also update the credentials in the generated configuration files before using them.

Manual Installation

If you prefer to set up manually:

# Install dependencies
npm install

# Build the TypeScript code
npm run build

# Copy the environment example file
cp .env.example .env

# Edit the .env file with your credentials

Usage

Running the server

npm start

Integrating with Claude Desktop

  1. Open Claude Desktop
  2. Go to Settings > Server Settings
  3. Add a new server with one of the following configurations:

Option 1: Using npx (Recommended)

{
  "powerdrill": {
    "command": "npx",
    "args": [
      "-y",
      "@powerdrillai/powerdrill-mcp@latest"
    ],
    "env": {
      "POWERDRILL_USER_ID": "your_actual_user_id",
      "POWERDRILL_PROJECT_API_KEY": "your_actual_project_api_key"
    }
  }
}

Option 2: Using node with local installation

{
  "powerdrill": {
    "command": "node",
    "args": ["/path/to/powerdrill-mcp/dist/index.js"],
    "env": {
      "POWERDRILL_USER_ID": "your_actual_user_id",
      "POWERDRILL_PROJECT_API_KEY": "your_actual_project_api_key"
    }
  }
}

  4. Save the configuration
  5. Restart Claude Desktop

Integrating with Cursor

  1. Open Cursor
  2. Go to Settings > MCP Tools
  3. Add a new MCP tool with one of the following configurations:

Option 1: Using npx (Recommended)

{
  "powerdrill": {
    "command": "npx",
    "args": [
      "-y",
      "@powerdrillai/powerdrill-mcp@latest"
    ],
    "env": {
      "POWERDRILL_USER_ID": "your_actual_user_id",
      "POWERDRILL_PROJECT_API_KEY": "your_actual_project_api_key"
    }
  }
}

Option 2: Using node with local installation

{
  "powerdrill": {
    "command": "node",
    "args": ["/path/to/powerdrill-mcp/dist/index.js"],
    "env": {
      "POWERDRILL_USER_ID": "your_actual_user_id",
      "POWERDRILL_PROJECT_API_KEY": "your_actual_project_api_key"
    }
  }
}

  4. Save the configuration
  5. Restart Cursor if needed

Using the tools

Once connected, you can use the Powerdrill tools in your conversations with Claude Desktop, Cursor, Cline, Windsurf, etc.:

  • List datasets: What datasets are available in my Powerdrill account? or Show me all my datasets
  • Create dataset: Create a new dataset called "Sales Analytics" or Make a new dataset named "Customer Data" with description "Customer information for 2024 analysis"
  • Create data source from local file: Upload the file /Users/your_name/Downloads/sales_data.csv to dataset {dataset_id} or Add my local file /path/to/customer_data.xlsx to my {dataset_id} dataset
  • Get dataset overview: Tell me more about this dataset: {dataset_id} or Describe the structure of dataset {dataset_id}
  • Create a job: Analyze dataset {dataset_id} with this question: "How has the trend changed over time?" or Run a query on {dataset_id} asking "What are the top 10 customers by revenue?"
  • Create a session: Create a new session named "Sales Analysis 2024" for my data analysis or Start a session called "Customer Segmentation" for analyzing market data
  • List data sources: What data sources are available in dataset {dataset_id}? or Show me all files in the {dataset_id} dataset
  • List sessions: Show me all my current analysis sessions or List my recent data analysis sessions
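
Under the hood, the client translates each of these natural-language requests into an MCP `tools/call` request over stdio. A sketch of the JSON-RPC payload for the job example — the method and envelope follow the MCP specification, while the dataset ID and question are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "mcp_powerdrill_create_job",
    "arguments": {
      "dataset_id": "dset-example123",
      "question": "What are the top 10 customers by revenue?"
    }
  }
}
```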

Available Tools

mcp_powerdrill_list_datasets

Lists available datasets from your Powerdrill account.

Parameters:

  • limit (optional): Maximum number of datasets to return

Example response:

{
  "datasets": [
    {
      "id": "dataset-dasfadsgadsgas",
      "name": "mydata",
      "description": "my dataset"
    }
  ]
}

mcp_powerdrill_get_dataset_overview

Gets detailed overview information about a specific dataset.

Parameters:

  • datasetId (required): The ID of the dataset to get overview information for

Example response:

{
  "id": "dset-cm5axptyyxxx298",
  "name": "sales_indicators_2024",
  "description": "A dataset comprising 373 travel bookings with 15 attributes...",
  "summary": "This dataset contains 373 travel bookings with 15 attributes...",
  "exploration_questions": [
    "How does the booking price trend over time based on the BookingTimestamp?",
    "How does the average booking price change with respect to the TravelDate?"
  ],
  "keywords": [
    "Travel Bookings",
    "Booking Trends",
    "Travel Agencies"
  ]
}

mcp_powerdrill_create_job

Creates a job to analyze data with natural language questions.

Parameters:

  • question (required): The natural language question or prompt to analyze the data
  • dataset_id (required): The ID of the dataset to analyze
  • datasource_ids (optional): Array of specific data source IDs within the dataset to analyze
  • session_id (optional): Session ID to group related jobs
  • stream (optional, default: false): Whether to stream the results
  • output_language (optional, default: "AUTO"): The language for the output
  • job_mode (optional, default: "AUTO"): The job mode

Example response:

{
  "job_id": "job-cm3ikdeuj02zk01l1yeuirt77",
  "blocks": [
    {
      "type": "CODE",
      "content": "```python\nimport pandas as pd\n\ndef invoke(input_0: pd.DataFrame) -> pd.DataFrame:\n...",
      "stage": "Analyze"
    },
    {
      "type": "TABLE",
      "url": "https://static.powerdrill.ai/tmp_datasource_cache/code_result/...",
      "name": "trend_data.csv",
      "expires_at": "2024-11-21T09:56:34.290544Z"
    },
    {
      "type": "IMAGE",
      "url": "https://static.powerdrill.ai/tmp_datasource_cache/code_result/...",
      "name": "Trend of Deaths from Natural Disasters Over the Century",
      "expires_at": "2024-11-21T09:56:34.290544Z"
    },
    {
      "type": "MESSAGE",
      "content": "Analysis of Trends in the Number of Deaths from Natural Disasters...",
      "stage": "Respond"
    }
  ]
}
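
A client consuming this response typically dispatches on the block `type`. A minimal TypeScript sketch — the `JobBlock` shape is an assumption inferred from the example response above, not an official SDK type:

```typescript
// Block shape inferred from the example response above (assumption,
// not the official Powerdrill SDK types).
type JobBlock =
  | { type: "CODE"; content: string; stage?: string }
  | { type: "MESSAGE"; content: string; stage?: string }
  | { type: "TABLE" | "IMAGE"; url: string; name: string; expires_at: string };

// Collect downloadable artifacts (tables and images) from a job result.
function extractArtifacts(blocks: JobBlock[]): { name: string; url: string }[] {
  return blocks.flatMap((b) =>
    b.type === "TABLE" || b.type === "IMAGE"
      ? [{ name: b.name, url: b.url }]
      : []
  );
}

const blocks: JobBlock[] = [
  { type: "MESSAGE", content: "Analysis summary...", stage: "Respond" },
  {
    type: "TABLE",
    url: "https://example.com/trend_data.csv",
    name: "trend_data.csv",
    expires_at: "2024-11-21T09:56:34Z",
  },
];

console.log(extractArtifacts(blocks)); // the single TABLE artifact
```

Note that `TABLE` and `IMAGE` URLs carry an `expires_at` timestamp, so artifacts should be downloaded promptly rather than stored as links.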

mcp_powerdrill_create_session

Creates a new session to group related jobs together.

Parameters:

  • name (required): The session name, which can be up to 128 characters in length
  • output_language (optional, default: "AUTO"): The language in which the output is generated. Options include: "AUTO", "EN", "ES", "AR", "PT", "ID", "JA", "RU", "HI", "FR", "DE", "VI", "TR", "PL", "IT", "KO", "ZH-CN", "ZH-TW"
  • job_mode (optional, default: "AUTO"): The job mode

README truncated. View full README on GitHub.
