
SE Ranking
Official

Connects AI assistants to SE Ranking's SEO data for natural-language keyword research, competitive analysis, backlink monitoring, and website audits.

This project exposes SE Ranking data as an MCP server so AI assistants can run natural-language SEO analysis. It provides tools to find lost and declining keywords, compare domains against competitors, discover high-volume competitor keywords, and generate related and similar keyword suggestions. Outputs include synthesized reports that highlight low-hanging opportunities using CPC and keyword difficulty metrics. Useful for automated competitive research, keyword discovery, and batch queries. For documentation and support, contact [email protected].
What it does
- Find lost and declining keywords
- Compare domains against competitors
- Discover high-volume competitor keywords
- Generate related keyword suggestions
- Analyze backlinks and monitor changes
- Track website rankings and traffic
About SE Ranking
SE Ranking is an official MCP server published by seranking that provides AI assistants with tools and capabilities via the Model Context Protocol. It lets you run natural-language SEO analysis to find lost or high-opportunity keywords and compare competitors. It is categorized under analytics data.
How to install
You can install SE Ranking in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
SE Ranking is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
SE Ranking MCP Server
This Model Context Protocol (MCP) server connects AI assistants to SE Ranking's SEO data and project management APIs. It enables natural language queries for:
- Keyword research and competitive analysis
- Backlink analysis and monitoring
- Domain traffic and ranking insights
- Website audits and technical SEO
- AI search visibility tracking
- Project and rank tracking management
Prerequisites
Before you begin, please ensure you have the following software and accounts ready:
- SE Ranking Account: You will need an active SE Ranking account to generate an API token. If you don't have one, you can sign up on the SE Ranking website.
- Docker: A platform for developing, shipping, and running applications in containers. If you don’t have it, you can download it from the official Docker website.
- Git: A free and open-source distributed version control system. You can download it from the official Git website.
- AI Assistant: You will need an MCP-compatible client, such as Claude Desktop or the Gemini CLI.
API Tokens
This MCP server supports two types of API access:
| Token | Environment Variable | Format | Purpose |
|---|---|---|---|
| Data API | DATA_API_TOKEN | UUID (e.g., 80cfee7d-xxxx-xxxx-xxxx-fc8500816bb3) | Access to keyword research, domain analysis, backlinks data, SERP analysis, and website audits. Tools prefixed with DATA_. |
| Project API | PROJECT_API_TOKEN | 40-char hex (e.g., 253a73adxxxxxxxxxxxx340aa0a939) | Access to project management, rank tracking, backlink monitoring, and account management. Tools prefixed with PROJECT_. |
Get your tokens from: https://online.seranking.com/admin.api.dashboard.html
You can use one or both tokens depending on which tools you need. If you only use Data API tools, you can omit PROJECT_API_TOKEN, and vice versa.
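The two token formats in the table above are easy to tell apart by shape. The sketch below is an illustrative client-side sanity check, not part of the official server; the validation rules are an assumption inferred from the documented examples (UUID for the Data API, 40-character hex for the Project API).

```python
import re

# Assumed formats, based on the examples in the API Tokens table above.
DATA_TOKEN_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I
)
PROJECT_TOKEN_RE = re.compile(r"^[0-9a-f]{40}$", re.I)

def token_kind(token: str) -> str:
    """Guess which API a token belongs to based on its shape."""
    if DATA_TOKEN_RE.match(token):
        return "data"       # keyword research, backlinks, SERP, audits
    if PROJECT_TOKEN_RE.match(token):
        return "project"    # project management, rank tracking
    return "unknown"
```

Running such a check before editing your MCP client config can catch a token pasted into the wrong environment variable.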
Rate Limits
| API | Default Rate Limit |
|---|---|
| Data API | 10 requests per second |
| Project API | 5 requests per second |
Rate limits are customizable. Contact [email protected] to request adjustments.
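For batch workloads it can help to throttle on the client side so you stay under the limits above. This is a minimal sketch of the idea, not something the server provides; pass 10 for Data API calls or 5 for Project API calls.

```python
import time

def throttled(calls, max_per_second):
    """Yield items from `calls`, sleeping so the rate limit is never exceeded."""
    min_interval = 1.0 / max_per_second
    last = 0.0
    for call in calls:
        wait = min_interval - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        yield call

# e.g. for the Data API (10 requests per second):
# for request in throttled(batch_of_requests, 10):
#     send(request)  # `send` is a placeholder for your own request code
```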
Installation
Choose the installation method that best fits your needs:
- Option 1: Docker (Recommended) - Best for standard usage, stability, and ease of updates. Use this if you just want to run the tool without managing dependencies.
- Option 2: Local Node.js Server (For Developers) - Best for development, debugging, or environments where Docker isn't available (like Replit). Use this if you need to modify the code or run a custom setup.
Option 1: Docker (Recommended)
- Open your terminal (or Command Prompt/PowerShell on Windows).
- Clone the project repository from GitHub:
git clone https://github.com/seranking/seo-data-api-mcp-server.git
- Navigate into the new directory:
cd seo-data-api-mcp-server
- Build the Docker Image:
docker build -t se-ranking/seo-data-api-mcp-server .
# Check that the image is built and named `se-ranking/seo-data-api-mcp-server`:
docker image ls
How to Update SEO-MCP (Docker)
To ensure you have the latest features, pull the latest changes and rebuild:
git pull origin main
docker build -t se-ranking/seo-data-api-mcp-server .
Option 2: Local Node.js Server (For Developers)
To run the local Node server, you need Node.js 20 or later installed on your machine.
- Install dependencies:
npm install
- Build the project:
npm run build
- Start the server:
npm run start-http
Then your HTTP server should be running at: http://0.0.0.0:5000/mcp.
To change the HOST and PORT, create a .env file in the root directory of the project with the settings you want to override, for example:
HOST=127.0.0.1
PORT=5555
Additionally, when running in external environments like Replit, you can set the DATA_API_TOKEN and PROJECT_API_TOKEN environment variables in the configuration panel.
Note: If you change the API token values when the server is running, you need to restart the server.
Verifying the HTTP Server
To send a sample test request and verify your setup:
./test-http-server-curl-request.sh '<your-api-token-here>'
For batch MCP Requests testing:
./test-batch-http-server-curl-request.sh '<your-api-token-here>'
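Under the hood, the helper scripts POST JSON-RPC 2.0 envelopes to the /mcp endpoint. If you want to craft a request by hand, the sketch below only builds the payload; how the token is attached (header name, etc.) is not shown here, so check the helper scripts for the exact request the server expects.

```python
import json

def mcp_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 envelope like the ones the test scripts POST to /mcp."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params or {},
    })

# A tools/list call, which should enumerate the DATA_ and PROJECT_ tools:
body = mcp_request("tools/list")
```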
Connect to Claude Desktop
Claude Desktop reads its configuration from claude_desktop_config.json.
- Click on the Claude menu and select Settings....
- In the Settings window, navigate to the Developer tab in the left sidebar.
- Click the Edit Config button to open the configuration file. This action creates a new configuration file if one doesn’t exist or opens your existing configuration.
The file is located at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %AppData%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Example of Claude Desktop configuration for MCP server
JSON Configuration Template:
{
"mcpServers": {
"seo-data-api-mcp": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"DATA_API_TOKEN",
"-e",
"PROJECT_API_TOKEN",
"se-ranking/seo-data-api-mcp-server"
],
"env": {
"DATA_API_TOKEN": "<your-data-api-token-here>",
"PROJECT_API_TOKEN": "<your-project-api-token-here>"
}
}
}
}
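If you manage several MCP servers, you can also merge this entry into the config file programmatically. This is an optional convenience sketch, not part of the official setup; the config path varies by OS (see the list above), and the structure written matches the JSON template exactly.

```python
import json
import pathlib

def add_seo_server(config_path, data_token, project_token):
    """Merge the seo-data-api-mcp entry into claude_desktop_config.json."""
    path = pathlib.Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["seo-data-api-mcp"] = {
        "command": "docker",
        "args": ["run", "-i", "--rm",
                 "-e", "DATA_API_TOKEN", "-e", "PROJECT_API_TOKEN",
                 "se-ranking/seo-data-api-mcp-server"],
        "env": {"DATA_API_TOKEN": data_token,
                "PROJECT_API_TOKEN": project_token},
    }
    path.write_text(json.dumps(config, indent=2))
```

Remember to restart Claude Desktop after the file changes, just as with a manual edit.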
- Replace the DATA_API_TOKEN and PROJECT_API_TOKEN placeholder values with your tokens (see API Tokens section).
- After saving claude_desktop_config.json, restart Claude Desktop. You should see the server under MCP Servers/Tools.
- To verify the setup, ask Claude: "Do you have access to MCP?" It should respond by listing seo-data-api-mcp.
- Your setup is complete! You can now run complex SEO queries using natural language.

Connect to Gemini CLI
- Open the Gemini CLI settings file, which is typically located at: ~/.gemini/settings.json
- Add the following JSON configuration, making sure to replace the API token placeholder values.
{
"mcpServers": {
"seo-data-api-mcp": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"DATA_API_TOKEN",
"-e",
"PROJECT_API_TOKEN",
"se-ranking/seo-data-api-mcp-server"
],
"env": {
"DATA_API_TOKEN": "<your-data-api-token-here>",
"PROJECT_API_TOKEN": "<your-project-api-token-here>"
}
}
}
}
- Replace the DATA_API_TOKEN and PROJECT_API_TOKEN placeholder values with your tokens (see API Tokens section).
- Save the configuration file.
- To verify the setup, launch the Gemini CLI by running gemini in your terminal. Once the interface is active, press Ctrl+T to view the available MCP servers. Ensure seo-data-api-mcp is listed.
- Your setup is complete! You can now run complex SEO queries using natural language.

Available Tools
Data API Tools
| Module | Tool Name | Description |
|---|---|---|
| SERP | DATA_getSerpHtmlDump | Retrieves the raw HTML dump of a completed SERP task as a ZIP file. |
| SERP | DATA_getSerpLocations | Retrieves a list of available locations for SERP analysis. |
| SERP | DATA_getSerpResults | Runs a SERP query and returns results. Creates task, polls until complete, and returns organic/ads/featured snippets (standard) or all SERP types including AI Overview, Maps, Reviews (advanced). |
| SERP | DATA_getSerpTaskAdvancedResults | Retrieves the status or advanced results of a specific SERP task. |
| SERP | DATA_getSerpTaskResults | Retrieves the status or standard results of a specific SERP task. Returns organic, ads, and featured_snippet types only. |
| SERP | DATA_getSerpTasks | Retrieves a list of all SERP tasks added to the queue in the last 24 hours. |
| ai search | DATA_getAiDiscoverBrand | Identifies and returns the brand name associated with a given target domain, subdomain, or URL. |
| ai search | DATA_getAiOverview | Retrieves a high-level overview of a domain's performance in AI search engines. |
| ai search | DATA_getAiPromptsByBrand | Retrieves a list of prompts where the specified brand is mentioned in AI search results. |
| ai search | DATA_getAiPromptsByTarget | Retrieves a list of prompts (queries) that mention the specified target in AI search results. |
| backlinks | DATA_exportBacklinksData | Retrieves large-scale backlinks asynchronously, returning a task ID to check status later. |
| backlinks | DATA_getAllBacklinks | Retrieves a comprehensive list of backlinks for the specified target, with extensive filtering and sorting options. |
| backlinks | DATA_getBacklinksAnchors | Retrieves a list of anchor texts for backlinks pointing to the specified target. |
| backlinks | DATA_getBacklinksAuthority | Fetch authority metrics for a target (domain, host or URL). |
| backlinks | DATA_getBacklinksCount | Returns the total number of backlinks for the target. Supports batch requests. |
| backlinks | DATA_getBacklinksExportStatus | Checks the status of an asynchronous backlinks export task. Returns download URL when complete. |
| backlinks | DATA_getBacklinksIndexedPages | Fetch site pages that hav |
README truncated. View full README on GitHub.