Connects to Axiom to execute APL queries and manage datasets. Enables AI agents to perform log analysis and anomaly detection on your Axiom data.



What it does

  • Execute APL queries on Axiom datasets
  • List available datasets
  • Perform log analysis and filtering
  • Detect anomalies in data
  • Query time-series data
  • Generate data-driven insights

Best for

  • DevOps teams analyzing application logs
  • Data analysts exploring observability data
  • Teams performing real-time monitoring
  • Incident response and troubleshooting

Key characteristics

  • Uses Axiom Processing Language (APL)
  • Built-in rate limiting
  • JavaScript port of the official Go server

About Axiom

Axiom is a community-built MCP server published by thetabird that provides AI assistants with tools and capabilities via the Model Context Protocol. Integrate it to run APL queries, analyze logs, detect anomalies, and make data-driven decisions. It is categorized under analytics and data.

How to install

You can install Axiom in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Axiom is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

MCP Server for Axiom

A JavaScript port of the official Axiom MCP server that enables AI agents to query data using Axiom Processing Language (APL).


This implementation provides the same functionality as the original Go version but packaged as an npm module for easier integration with Node.js environments.
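Because it ships as an npm module, the server can also be driven programmatically from a Node.js MCP client rather than only from an AI client's config file. A minimal sketch, assuming the @modelcontextprotocol/sdk client API; the client name and setup below are illustrative and not part of this project:

// Sketch: launching mcp-server-axiom as a stdio subprocess from Node.js,
// the same way an MCP client configured via JSON would. Run as an ES
// module (e.g. node example.mjs) since it uses top-level await.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "mcp-server-axiom"],
  env: {
    AXIOM_TOKEN: process.env.AXIOM_TOKEN,
    AXIOM_ORG_ID: process.env.AXIOM_ORG_ID,
  },
});

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} }
);
await client.connect(transport);

// The server should report its two tools: queryApl and listDatasets.
console.log(await client.listTools());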

Installation & Usage

MCP Configuration

You can run this MCP server directly using npx. Add the following configuration to your MCP configuration file:

{
  "axiom": {
    "command": "npx",
    "args": ["-y", "mcp-server-axiom"],
    "env": {
      "AXIOM_TOKEN": "<AXIOM_TOKEN_HERE>",
      "AXIOM_URL": "https://api.axiom.co",
      "AXIOM_ORG_ID": "<AXIOM_ORG_ID_HERE>"
    }
  }
}
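Note that some MCP clients (Claude Desktop, for example) expect server entries nested under a top-level mcpServers key; check your client's documentation. In that layout the same entry becomes:

{
  "mcpServers": {
    "axiom": {
      "command": "npx",
      "args": ["-y", "mcp-server-axiom"],
      "env": {
        "AXIOM_TOKEN": "<AXIOM_TOKEN_HERE>",
        "AXIOM_URL": "https://api.axiom.co",
        "AXIOM_ORG_ID": "<AXIOM_ORG_ID_HERE>"
      }
    }
  }
}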

Local Development & Testing

Installation

npm install -g mcp-server-axiom

Environment Variables

The server can be configured using environment variables:

  • AXIOM_TOKEN (required): Your Axiom API token
  • AXIOM_ORG_ID (required): Your Axiom organization ID
  • AXIOM_URL (optional): Custom Axiom API URL (defaults to https://api.axiom.co)
  • AXIOM_QUERY_RATE (optional): Queries per second limit (default: 1)
  • AXIOM_QUERY_BURST (optional): Query burst capacity (default: 1)
  • AXIOM_DATASETS_RATE (optional): Dataset list operations per second (default: 1)
  • AXIOM_DATASETS_BURST (optional): Dataset list burst capacity (default: 1)
  • PORT (optional): Server port (default: 3000)
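The paired rate/burst settings suggest token-bucket limiting, where the rate is how many tokens refill per second and the burst is the bucket capacity. That reading is an assumption from the variable names rather than documented behavior, so treat this sketch of the presumed semantics as illustrative only:

// Presumed token-bucket semantics for the *_RATE / *_BURST settings:
// `rate` tokens refill per second, up to a capacity of `burst`.
class TokenBucket {
  constructor(rate, burst) {
    this.rate = rate;       // tokens added per second
    this.burst = burst;     // maximum stored tokens
    this.tokens = burst;    // start with a full bucket
    this.last = Date.now();
  }

  // Returns true if a call may proceed now, false if it is rate limited.
  tryRemove() {
    const now = Date.now();
    this.tokens = Math.min(
      this.burst,
      this.tokens + ((now - this.last) / 1000) * this.rate
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// With the defaults (rate 1, burst 1) one query may run immediately,
// then roughly one more per second as tokens refill.
const queryLimiter = new TokenBucket(1, 1);
console.log(queryLimiter.tryRemove()); // true
console.log(queryLimiter.tryRemove()); // false until a token refills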

Running the Server Locally

  1. Using environment variables:
export AXIOM_TOKEN=your_token
export AXIOM_ORG_ID=your_org_id
mcp-server-axiom
  2. Using a config file:
mcp-server-axiom config.json

Example config.json:

{
  "token": "your_token",
  "url": "https://custom.axiom.co",
  "orgId": "your_org_id",
  "queryRate": 2,
  "queryBurst": 5,
  "datasetsRate": 1,
  "datasetsBurst": 2
}

API Endpoints

  • GET /: Get server implementation info
  • GET /tools: List available tools
  • POST /tools/:name/call: Call a specific tool
    • Available tools:
      • queryApl: Execute APL queries
      • listDatasets: List available datasets

Example Tool Calls

  1. Query APL (note the '\'' escapes, which keep APL's single-quoted bracket syntax from terminating the shell string):
curl -X POST http://localhost:3000/tools/queryApl/call \
  -H "Content-Type: application/json" \
  -d '{
    "arguments": {
      "query": "['\''logs'\''] | where ['\''severity'\''] == \"error\" | limit 10"
    }
  }'
  2. List Datasets:
curl -X POST http://localhost:3000/tools/listDatasets/call \
  -H "Content-Type: application/json" \
  -d '{
    "arguments": {}
  }'
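APL's ['...'] bracket syntax makes the shell quoting in the first curl example fiddly; from Node.js 18+ (which ships a global fetch) you can let JSON.stringify build the payload instead. A sketch, reusing the same illustrative dataset and field names:

// Sketch: the queryApl call above, issued from an ES module on Node.js
// 18+. JSON.stringify produces the JSON body, avoiding shell escaping.
const res = await fetch("http://localhost:3000/tools/queryApl/call", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    arguments: {
      query: `['logs'] | where ['severity'] == "error" | limit 10`,
    },
  }),
});
console.log(await res.json());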

License

MIT
