Dumpling AI

Official MCP server by dumpling-ai

Connects to Dumpling AI's API to perform web scraping, document processing, and data extraction from various sources. Includes 20+ tools for content extraction, web searches, and document conversion.

What it does

  • Scrape web pages and extract structured data
  • Search YouTube transcripts and web content
  • Convert documents and extract text from PDFs
  • Take website screenshots
  • Extract data from images, audio, and video files
  • Run JavaScript and Python code securely

Best for

  • Data scientists needing web scraping and content extraction
  • Researchers collecting information from multiple sources
  • Developers building data processing pipelines
  • Content creators extracting media transcripts

Highlights: 20+ specialized tools, support for multiple document formats, and a secure code execution environment.

About Dumpling AI

Dumpling AI is an official MCP server published by dumpling-ai that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers advanced web scraping tools, acting as a web scraper to extract structured data from websites and documents. It is categorized under search, web, and AI/ML.

How to install

You can install Dumpling AI in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Dumpling AI is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Dumpling AI MCP Server

A Model Context Protocol (MCP) server implementation that integrates with Dumpling AI for data scraping, content processing, knowledge management, AI agents, and code execution capabilities.


Features

  • Complete integration with all Dumpling AI API endpoints
  • Data APIs for YouTube transcripts, search, autocomplete, maps, places, news, and reviews
  • Web scraping with support for scraping, crawling, screenshots, and structured data extraction
  • Document conversion tools for text extraction, PDF operations, video processing
  • Extract data from documents, images, audio, and video
  • AI capabilities including agent completions, knowledge base management, and image generation
  • Developer tools for running JavaScript and Python code in a secure environment
  • Automatic error handling and detailed response formatting

Installation

Installing via Smithery

To install mcp-server-dumplingai for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @Dumpling-AI/mcp-server-dumplingai --client claude

Running with npx

env DUMPLING_API_KEY=your_api_key npx -y mcp-server-dumplingai
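Once the server is running, an MCP client talks to it over stdio using newline-delimited JSON-RPC 2.0 messages. As a rough sketch, the `tools/call` request envelope (the envelope shape follows the MCP specification; real clients also perform an initialization handshake first, which is omitted here) can be built like this, wrapping the tool arguments documented below:

```python
import json

def tools_call_request(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call message as a single line,
    ready to write to the server's stdin over the stdio transport."""
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(message)

# Example: call the get-youtube-transcript tool documented below.
line = tools_call_request(1, "get-youtube-transcript", {
    "videoUrl": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "includeTimestamps": True,
})
print(line)
```

In practice, MCP-compatible clients such as Cursor or Claude Desktop construct these messages for you; the sketch only shows what travels over the wire.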

Manual Installation

npm install -g mcp-server-dumplingai

Running on Cursor

Note: Requires Cursor version 0.45.6 or later.

To configure Dumpling AI MCP in Cursor:

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add New MCP Server"
  4. Enter the following:
{
  "mcpServers": {
    "dumplingai": {
      "command": "npx",
      "args": ["-y", "mcp-server-dumplingai"],
      "env": {
        "DUMPLING_API_KEY": "<your-api-key>"
      }
    }
  }
}

If you are using Windows and are running into issues, try cmd /c "set DUMPLING_API_KEY=your-api-key && npx -y mcp-server-dumplingai"

Replace your-api-key with your Dumpling AI API key.

Configuration

Environment Variables

  • DUMPLING_API_KEY: Your Dumpling AI API key (required)

Available Tools

Data APIs

1. Get YouTube Transcript (get-youtube-transcript)

Extract transcripts from YouTube videos with optional timestamps.

{
  "name": "get-youtube-transcript",
  "arguments": {
    "videoUrl": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "includeTimestamps": true,
    "timestampsToCombine": 3,
    "preferredLanguage": "en"
  }
}
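The `timestampsToCombine` option presumably controls how many consecutive caption segments are merged into each timestamped block of the returned transcript. A rough Python illustration of that grouping (the segment format here is hypothetical, not Dumpling AI's actual response shape):

```python
def combine_segments(segments, n):
    """Group every n consecutive (timestamp, text) segments into one block,
    keeping the first timestamp of each group."""
    combined = []
    for i in range(0, len(segments), n):
        group = segments[i:i + n]
        combined.append((group[0][0], " ".join(text for _, text in group)))
    return combined

segments = [
    ("0:00", "Never gonna"),
    ("0:02", "give you up"),
    ("0:04", "never gonna"),
    ("0:06", "let you down"),
]
print(combine_segments(segments, 3))
```

With `n=3`, the first three segments collapse into one block stamped `0:00`, and the remainder starts a new block.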

2. Search (search)

Perform Google web searches and optionally scrape content from results.

{
  "name": "search",
  "arguments": {
    "query": "machine learning basics",
    "country": "us",
    "language": "en",
    "dateRange": "pastMonth",
    "scrapeResults": true,
    "numResultsToScrape": 3,
    "scrapeOptions": {
      "format": "markdown",
      "cleaned": true
    }
  }
}

3. Get Autocomplete (get-autocomplete)

Get Google search autocomplete suggestions for a query.

{
  "name": "get-autocomplete",
  "arguments": {
    "query": "how to learn",
    "country": "us",
    "language": "en",
    "location": "New York"
  }
}

4. Search Maps (search-maps)

Search Google Maps for locations and businesses.

{
  "name": "search-maps",
  "arguments": {
    "query": "coffee shops",
    "gpsPositionZoom": "37.7749,-122.4194,14z",
    "language": "en",
    "page": 1
  }
}

5. Search Places (search-places)

Search for places with more detailed information.

{
  "name": "search-places",
  "arguments": {
    "query": "hotels in paris",
    "country": "fr",
    "language": "en",
    "page": 1
  }
}

6. Search News (search-news)

Search for news articles with customizable parameters.

{
  "name": "search-news",
  "arguments": {
    "query": "climate change",
    "country": "us",
    "language": "en",
    "dateRange": "pastWeek"
  }
}

7. Get Google Reviews (get-google-reviews)

Retrieve Google reviews for businesses or places.

{
  "name": "get-google-reviews",
  "arguments": {
    "businessName": "Eiffel Tower",
    "location": "Paris, France",
    "limit": 10,
    "sortBy": "relevance"
  }
}

Web Scraping

8. Scrape (scrape)

Extract content from a web page with formatting options.

{
  "name": "scrape",
  "arguments": {
    "url": "https://example.com",
    "format": "markdown",
    "cleaned": true,
    "renderJs": true
  }
}

9. Crawl (crawl)

Recursively crawl websites and extract content with customizable parameters.

{
  "name": "crawl",
  "arguments": {
    "baseUrl": "https://example.com",
    "maxPages": 10,
    "crawlBeyondBaseUrl": false,
    "depth": 2,
    "scrapeOptions": {
      "format": "markdown",
      "cleaned": true,
      "renderJs": true
    }
  }
}

10. Screenshot (screenshot)

Capture screenshots of web pages with customizable viewport and format options.

{
  "name": "screenshot",
  "arguments": {
    "url": "https://example.com",
    "width": 1280,
    "height": 800,
    "fullPage": true,
    "format": "png",
    "waitFor": 1000
  }
}

11. Extract (extract)

Extract structured data from web pages using AI-powered instructions.

{
  "name": "extract",
  "arguments": {
    "url": "https://example.com/products",
    "instructions": "Extract all product names, prices, and descriptions from this page",
    "schema": {
      "products": [
        {
          "name": "string",
          "price": "number",
          "description": "string"
        }
      ]
    },
    "renderJs": true
  }
}
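The `schema` above is an informal shape hint (type names as strings, single-element arrays meaning "list of") rather than formal JSON Schema. As a sketch of how a caller might sanity-check an extraction result against such a shape (this checker is hypothetical, not part of the server):

```python
TYPE_NAMES = {"string": str, "number": (int, float), "boolean": bool}

def matches_shape(value, shape):
    """Recursively check a value against the informal shape:
    dicts map keys to sub-shapes, a one-element list means 'list of
    that shape', and a string names a primitive type."""
    if isinstance(shape, dict):
        return isinstance(value, dict) and all(
            k in value and matches_shape(value[k], s) for k, s in shape.items()
        )
    if isinstance(shape, list):
        return isinstance(value, list) and all(matches_shape(v, shape[0]) for v in value)
    return isinstance(value, TYPE_NAMES[shape])

result = {"products": [{"name": "Mug", "price": 9.5, "description": "Ceramic"}]}
shape = {"products": [{"name": "string", "price": "number", "description": "string"}]}
print(matches_shape(result, shape))  # True
```

A check like this catches silently malformed extractions before they flow into downstream pipelines.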

Document Conversion

12. Doc to Text (doc-to-text)

Convert documents to plaintext with optional OCR.

{
  "name": "doc-to-text",
  "arguments": {
    "url": "https://example.com/document.pdf",
    "options": {
      "ocr": true,
      "language": "en"
    }
  }
}

13. Convert to PDF (convert-to-pdf)

Convert various file formats to PDF.

{
  "name": "convert-to-pdf",
  "arguments": {
    "url": "https://example.com/document.docx",
    "format": "docx",
    "options": {
      "quality": 90,
      "pageSize": "A4",
      "margin": 10
    }
  }
}

14. Merge PDFs (merge-pdfs)

Combine multiple PDFs into a single document.

{
  "name": "merge-pdfs",
  "arguments": {
    "urls": ["https://example.com/doc1.pdf", "https://example.com/doc2.pdf"],
    "options": {
      "addPageNumbers": true,
      "addTableOfContents": true
    }
  }
}

15. Trim Video (trim-video)

Extract a specific clip from a video.

{
  "name": "trim-video",
  "arguments": {
    "url": "https://example.com/video.mp4",
    "startTime": 30,
    "endTime": 60,
    "output": "mp4",
    "options": {
      "quality": 720,
      "fps": 30
    }
  }
}

16. Extract Document (extract-document)

Extract specific content from documents in various formats.

{
  "name": "extract-document",
  "arguments": {
    "url": "https://example.com/document.pdf",
    "format": "structured",
    "options": {
      "ocr": true,
      "language": "en",
      "includeMetadata": true
    }
  }
}

17. Extract Image (extract-image)

Extract text and information from images.

{
  "name": "extract-image",
  "arguments": {
    "url": "https://example.com/image.jpg",
    "extractionType": "text",
    "options": {
      "language": "en",
      "detectOrientation": true
    }
  }
}

18. Extract Audio (extract-audio)

Transcribe and extract information from audio files.

{
  "name": "extract-audio",
  "arguments": {
    "url": "https://example.com/audio.mp3",
    "language": "en",
    "options": {
      "model": "enhanced",
      "speakerDiarization": true,
      "wordTimestamps": true
    }
  }
}

19. Extract Video (extract-video)

Extract content from videos including transcripts, scenes, and objects.

{
  "name": "extract-video",
  "arguments": {
    "url": "https://example.com/video.mp4",
    "extractionType": "transcript",
    "options": {
      "language": "en",
      "speakerDiarization": true
    }
  }
}

20. Read PDF Metadata (read-pdf-metadata)

Extract metadata from PDF files.

{
  "name": "read-pdf-metadata",
  "arguments": {
    "url": "https://example.com/document.pdf",
    "includeExtended": true
  }
}

21. Write PDF Metadata (write-pdf-metadata)

Update metadata in PDF files.

{
  "name": "write-pdf-metadata",
  "arguments": {
    "url": "https://example.com/document.pdf",
    "metadata": {
      "title": "New Title",
      "author": "John Doe",
      "keywords": ["keyword1", "keyword2"]
    }
  }
}

AI

22. Generate Agent Completion (generate-agent-completion)

Get AI agent completions with optional tool definitions.

{
  "name": "generate-agent-completion",
  "arguments": {
    "prompt": "How can I improve my website's SEO?",
    "model": "gpt-4",
    "temperature": 0.7,
    "maxTokens": 500,
    "context": ["The website is an e-commerce store selling handmade crafts."]
  }
}

23. Search Knowledge Base (search-knowledge-base)

Search a knowledge base for relevant information.

{
  "name": "search-knowledge-base",
  "arguments": {
    "kbId": "kb_12345",
    "query": "How to optimize database performance",
    "limit": 5,
    "similarityThreshold": 0.7
  }
}
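The `similarityThreshold` presumably filters results by embedding similarity, so only entries whose vectors are close enough to the query vector count as matches. A minimal cosine-similarity sketch of that idea (pure illustration; Dumpling AI's actual scoring function is not documented here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def passes_threshold(query_vec, entry_vec, threshold=0.7):
    """Does this knowledge-base entry clear the similarity cutoff?"""
    return cosine_similarity(query_vec, entry_vec) >= threshold

print(passes_threshold([1.0, 0.0], [1.0, 0.1]))  # near-identical direction: True
```

Raising the threshold toward 1.0 returns fewer, more literal matches; lowering it admits looser semantic neighbors.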

24. Add to Knowledge Base (add-to-knowledge-base)

Add entries to a knowledge base.

{
  "name": "add-to-knowledge-base",
  "arguments": {
    "kbId": "kb_12345",
    "entries": [
      {
        "text": "MongoDB is a document-based NoSQL database.",
        "metadata": {
          "source": "MongoDB documentation",
          "category": "databases"
        }
      }
    ],
    "upsert": true
  }
}

25. Generate AI Image (`generate-ai-ima


README truncated. View full README on GitHub.
