
InfluxDB
Connects to InfluxDB v2 time-series databases to read data, write measurements, and manage database objects like buckets and organizations.
What it does
- Query time-series data with Flux
- Write measurements in line protocol format
- Create and manage buckets
- Create organizations
- Browse measurements by bucket
- Access bucket and organization metadata
About InfluxDB
InfluxDB is a community-built MCP server published by idoru that gives AI assistants tools and resources for querying, writing, and managing data in InfluxDB time-series databases via the Model Context Protocol. It is categorized under databases and analytics data.
How to install
You can install the InfluxDB MCP server in your AI client of choice. Use the install panel on this page for one-click setup in Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
InfluxDB is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
InfluxDB v2 MCP Server
A Model Context Protocol (MCP) server that exposes access to an InfluxDB v2 instance using the InfluxDB OSS API v2. Mostly built with Claude Code.
Features
This MCP server provides:
- Resources: Access to organization, bucket, and measurement data
- Tools: Write data, execute queries, and manage database objects
- Prompts: Templates for common Flux queries and Line Protocol format
Resources
The server exposes the following resources:
- Organizations List: influxdb://orgs - Displays all organizations in the InfluxDB instance
- Buckets List: influxdb://buckets - Shows all buckets with their metadata
- Bucket Measurements: influxdb://bucket/{bucketName}/measurements - Lists all measurements within a specified bucket
- Query Data: influxdb://query/{orgName}/{fluxQuery} - Executes a Flux query and returns results as a resource
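Because the Flux query is embedded in the resource URI, it generally needs to be URL-encoded before being requested. A minimal JavaScript sketch (the organization name and query here are placeholders, not part of the server):

```javascript
// Build a query resource URI of the form influxdb://query/{orgName}/{fluxQuery}.
// The Flux query must be URL-encoded so spaces, quotes, and the |> operator
// survive inside the URI path segment.
const org = "my-org"; // placeholder organization name
const flux = 'from(bucket: "my-bucket") |> range(start: -1h)'; // placeholder query

const uri = `influxdb://query/${encodeURIComponent(org)}/${encodeURIComponent(flux)}`;
console.log(uri);
```

Requesting this URI through an MCP client would then execute the query and return the results as a resource.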
Tools
The server provides these tools:
- write-data: Write time-series data in line protocol format. Parameters: org, bucket, data, precision (optional)
- query-data: Execute Flux queries. Parameters: org, query
- create-bucket: Create a new bucket. Parameters: name, orgID, retentionPeriodSeconds (optional)
- create-org: Create a new organization. Parameters: name, description (optional)
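For write-data, the data parameter is plain InfluxDB line protocol: a measurement name, optional comma-separated tags, a field set, and an optional timestamp. A minimal sketch of assembling a point in JavaScript (the helper and its names are illustrative, not part of this server):

```javascript
// Line protocol shape: measurement[,tag=value...] field=value[,field=value...] [timestamp]
// String field values are double-quoted; tag values are not. Floats are written
// bare; integer fields would need an "i" suffix, which this sketch omits.
function toLineProtocol(measurement, tags, fields, timestampNs) {
  const tagPart = Object.entries(tags)
    .map(([k, v]) => `,${k}=${v}`)
    .join("");
  const fieldPart = Object.entries(fields)
    .map(([k, v]) => (typeof v === "string" ? `${k}="${v}"` : `${k}=${v}`))
    .join(",");
  return `${measurement}${tagPart} ${fieldPart}${timestampNs ? ` ${timestampNs}` : ""}`;
}

const line = toLineProtocol("cpu", { host: "server01" }, { usage: 0.64 });
console.log(line); // cpu,host=server01 usage=0.64
```

A string like this is what would be passed as the data argument of write-data; multiple points are separated by newlines.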
Prompts
The server offers these prompt templates:
- flux-query-examples: Common Flux query examples
- line-protocol-guide: Guide to InfluxDB line protocol format
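As an illustration of the kind of query the flux-query-examples prompt covers, a basic Flux query reads a time range from a bucket and filters by measurement (the bucket and measurement names below are placeholders):

```flux
from(bucket: "my-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
```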
Configuration
The server requires these environment variables:
- INFLUXDB_TOKEN (required): Authentication token for the InfluxDB API
- INFLUXDB_URL (optional): URL of the InfluxDB instance (defaults to http://localhost:8086)
- INFLUXDB_ORG (optional): Default organization name for certain operations
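For a local shell session, the variables can be exported once instead of being prefixed to every command (the values below are placeholders):

```shell
export INFLUXDB_TOKEN=your_token
export INFLUXDB_URL=http://localhost:8086   # optional; this is the default
export INFLUXDB_ORG=your_org                # optional default organization
```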
Installation
Installing via Smithery
To install InfluxDB MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @idoru/influxdb-mcp-server --client claude
Option 1: Run with npx (recommended)
# Run directly with npx
INFLUXDB_TOKEN=your_token npx influxdb-mcp-server
Option 2: Install globally
# Install globally
npm install -g influxdb-mcp-server
# Run the server
INFLUXDB_TOKEN=your_token influxdb-mcp-server
Option 3: From source
# Clone the repository
git clone https://github.com/idoru/influxdb-mcp-server.git
cd influxdb-mcp-server
# Install dependencies
npm install
# Run the server
INFLUXDB_TOKEN=your_token npm start
influxdb-mcp-server uses the stdio transport by default. You can request it explicitly with --stdio, or start the server with the Streamable HTTP transport by passing the --http option with an optional port number (defaults to 3000). HTTP mode uses an internal Express.js server:
# Start with Streamable HTTP transport on default port 3000
INFLUXDB_TOKEN=your_token npm start -- --http
# Start with Streamable HTTP transport on a specific port
INFLUXDB_TOKEN=your_token npm start -- --http 8080
If you installed globally or are using npx, you can run:
INFLUXDB_TOKEN=your_token influxdb-mcp-server --http
# or explicitly force stdio
INFLUXDB_TOKEN=your_token influxdb-mcp-server --stdio
# or
INFLUXDB_TOKEN=your_token influxdb-mcp-server --http 8080
Integration with Claude for Desktop
Add the server to your claude_desktop_config.json:
Using npx (recommended)
{
"mcpServers": {
"influxdb": {
"command": "npx",
"args": ["influxdb-mcp-server"],
"env": {
"INFLUXDB_TOKEN": "your_token",
"INFLUXDB_URL": "http://localhost:8086",
"INFLUXDB_ORG": "your_org"
}
}
}
}
If installed locally
{
"mcpServers": {
"influxdb": {
"command": "node",
"args": ["/path/to/influxdb-mcp-server/src/index.js"],
"env": {
"INFLUXDB_TOKEN": "your_token",
"INFLUXDB_URL": "http://localhost:8086",
"INFLUXDB_ORG": "your_org"
}
}
}
}
Code Structure
The server code is organized into a modular structure:
- src/index.js - Main server entry point
- config/ - Configuration-related files
  - env.js - Environment variable handling
- utils/ - Utility functions
  - influxClient.js - InfluxDB API client
  - loggerConfig.js - Console logger configuration
- handlers/ - Resource and tool handlers
  - organizationsHandler.js - Organizations listing
  - bucketsHandler.js - Buckets listing
  - measurementsHandler.js - Measurements listing
  - queryHandler.js - Query execution
  - writeDataTool.js - Data write tool
  - queryDataTool.js - Query tool
  - createBucketTool.js - Bucket creation tool
  - createOrgTool.js - Organization creation tool
- prompts/ - Prompt templates
  - fluxQueryExamplesPrompt.js - Flux query examples
  - lineProtocolGuidePrompt.js - Line protocol guide
This structure allows for better maintainability, easier testing, and clearer separation of concerns.
Testing
The repository includes comprehensive integration tests that:
- Spin up a Docker container with InfluxDB
- Populate it with sample data
- Test all MCP server functionality
To run the tests:
npm test
License
MIT
