
RSS Feed Parser
Fetches and parses RSS/Atom feeds from any URL, with special support for RSSHub to create feeds from websites, social platforms, and news sources that don't natively offer them. When one RSSHub instance fails, it automatically retries against other instances, and it supports the custom rsshub:// protocol for addressing RSSHub routes.
What it does
- Parse standard RSS and Atom feeds
- Access RSSHub feeds with rsshub:// URLs
- Retrieve content from social platforms via RSSHub
- Extract clean text from feed content
- Specify custom item count limits
- Auto-retry failed requests across multiple instances
About RSS Feed Parser
RSS Feed Parser is a community-built MCP server published by veithly that provides AI assistants with tools and capabilities via the Model Context Protocol. It is an RSS feed generator and RSS link generator with RSSHub integration, and it is categorized under search & web.
How to install
You can install RSS Feed Parser in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
RSS Feed Parser is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
RSS MCP Server
This is a Model Context Protocol (MCP) server built with TypeScript. It provides a versatile tool to fetch and parse any standard RSS/Atom feed, and also includes special support for RSSHub feeds. With this server, language models or other MCP clients can easily retrieve structured content from various web sources.
The server comes with a built-in list of public RSSHub instances and supports a polling mechanism to automatically select an available instance, significantly improving the success rate and stability of data retrieval.
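The polling-and-fallback behavior described above can be sketched roughly as follows. This is a minimal illustration under assumed names (fetchWithFallback, Fetcher), not the server's actual code:

```typescript
// Illustrative sketch (names are assumptions, not the server's real API):
// try each RSSHub instance in order until one responds or all have failed.
type Fetcher = (url: string) => Promise<string>;

async function fetchWithFallback(
  instances: string[],
  route: string,
  fetchFeed: Fetcher,
): Promise<string> {
  let lastError: unknown;
  for (const base of instances) {
    try {
      // e.g. base = "https://rsshub.app", route = "/bilibili/user/dynamic/208259"
      return await fetchFeed(base + route);
    } catch (err) {
      lastError = err; // remember the failure, then try the next instance
    }
  }
  throw new Error(`All RSSHub instances failed: ${String(lastError)}`);
}
```

Because each instance is tried in turn, a single unreachable public instance does not break retrieval as long as any instance in the list is up.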
✨ Features
- Universal Feed Parsing: Fetch and parse any standard RSS/Atom feed from a given URL.
- Enhanced RSSHub Support: Provides a tool named get_feed to fetch any RSSHub-supported feed via MCP, with multi-instance support.
- Customizable Item Count: Specify the number of feed items to retrieve, with support for fetching all items.
- Multi-instance Support: Includes a list of public RSSHub instances and automatically polls to find an available service.
- Smart URL Parsing: Supports standard RSSHub URLs and a simplified rsshub:// protocol format.
- Priority Instance Configuration: Allows setting a preferred RSSHub instance via the PRIORITY_RSSHUB_INSTANCE environment variable.
- Robust Error Handling: If a request to one instance fails, it automatically tries the next one until it succeeds or all instances have failed.
- Content Cleaning: Uses Cheerio to clean the feed content and extract plain text descriptions.
- Standardized Output: Converts the fetched RSS feed into a structured JSON format.
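As an illustration of the Smart URL Parsing feature, resolving a simplified rsshub:// URL against a concrete instance amounts to a rewrite like the sketch below (the function name is an assumption; the server's actual implementation may differ):

```typescript
// Illustrative sketch: expand a simplified rsshub:// URL into a full feed
// URL on a concrete RSSHub instance. Standard URLs pass through unchanged.
function resolveRsshubUrl(url: string, instance: string): string {
  const prefix = "rsshub://";
  if (!url.startsWith(prefix)) {
    return url; // already a standard feed URL
  }
  const route = url.slice(prefix.length);
  // Trim any trailing slash on the instance URL to avoid a double slash.
  return `${instance.replace(/\/+$/, "")}/${route}`;
}
```

For example, with the instance https://rsshub.app, the URL rsshub://bilibili/user/dynamic/208259 resolves to https://rsshub.app/bilibili/user/dynamic/208259.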
📦 Installation
First, clone the project repository, then install the required dependencies.
git clone https://github.com/veithly/rss-mcp.git
cd rss-mcp
npm install
🚀 Usage
1. Build the Project
Before running, you need to compile the TypeScript code into JavaScript:
npm run build
2. Run the Server
After a successful build, start the MCP server:
npm start
The server will then communicate with the parent process (e.g., Cursor) via Stdio.
3. Configure a Priority Instance (Optional)
You can create a .env file to specify a priority RSSHub instance. This is very useful for users who have a private, stable instance.
Create a .env file in the project root directory and add the following content:
PRIORITY_RSSHUB_INSTANCE=https://my-rsshub.example.com
The server will automatically load this configuration on startup and place it at the top of the polling list.
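Conceptually, placing the priority instance at the top of the polling list amounts to something like the sketch below (illustrative names, not the server's real code; dotenv populates process.env from the .env file at startup):

```typescript
// Illustrative sketch: put the PRIORITY_RSSHUB_INSTANCE value first in the
// polling list, ahead of the built-in public instances.
function buildInstanceList(
  publicInstances: string[],
  priority: string | undefined = process.env.PRIORITY_RSSHUB_INSTANCE,
): string[] {
  if (!priority) return [...publicInstances];
  // Drop any duplicate of the priority instance, then put it first.
  return [priority, ...publicInstances.filter((i) => i !== priority)];
}
```

With the ordering above, the private instance is always tried first, and the public instances remain available as fallbacks.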
🔧 MCP Server Configuration
To use this server with an MCP client like Cursor, you need to add it to your configuration file.
Method 1: Using npx (Recommended)
This package is published on npm, so you can use npx to run the server without a local installation. This is the easiest method.
- Direct Invocation: You can run the server directly from your terminal using npx:

      npx rss-mcp

- MCP Client Configuration: To integrate with an MCP client like Cursor, add the following to your configuration file (e.g., ~/.cursor/mcp_settings.json):

      {
        "name": "rss",
        "command": ["npx", "rss-mcp"],
        "type": "stdio"
      }
Method 2: Local Installation
If you have cloned the repository locally, you can run it directly with node.
1. Clone and build the project as described in the "Installation" and "Usage" sections.
2. Locate your MCP configuration file.
3. Add the following server entry, making sure to use the absolute path to the compiled index.js file:

       {
         "name": "rss",
         "command": ["node", "/path/to/your/rss-mcp/dist/index.js"],
         "type": "stdio"
       }

   Important: Replace /path/to/your/rss-mcp/dist/index.js with the correct absolute path on your system.
After adding the configuration, restart your MCP client (e.g., Cursor) for the changes to take effect. The rss server will then be available, and you can call the get_feed tool.
🛠️ Tool Definition
get_feed
Fetches and parses an RSS feed from a given URL. It supports both standard RSS/Atom feeds and RSSHub feeds.
Input Parameters
- url (string, required): The URL of the RSS feed to fetch. Two formats are supported:
  - Standard URL: https://rsshub.app/bilibili/user/dynamic/208259
  - rsshub:// protocol: rsshub://bilibili/user/dynamic/208259 (the server will automatically match an available instance)
- count (number, optional): The number of RSS feed items to retrieve.
  - Default: 1
  - Retrieve all: 0
Output
Returns a JSON string containing the feed information, with the following structure:
{
"title": "bilibili User Dynamics",
"link": "https://space.bilibili.com/208259",
"description": "bilibili User Dynamics",
"items": [
{
"title": "[Dynamic Title]",
"description": "Plain text content of the dynamic...",
"link": "https://t.bilibili.com/1234567890",
"guid": "https://t.bilibili.com/1234567890",
"pubDate": "2024-05-20T12:30:00.000Z",
"author": "Author Name",
"category": ["Category1", "Category2"]
}
]
}
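The plain-text description fields shown above come from the content-cleaning step. The server uses Cheerio for this; the dependency-free sketch below only approximates the idea and is not the server's actual implementation:

```typescript
// Rough, dependency-free approximation of the cleaning step (the server
// itself uses Cheerio, which parses HTML far more robustly than a regex).
function htmlToPlainText(html: string): string {
  return html
    .replace(/<[^>]*>/g, " ") // drop tags
    .replace(/&amp;/g, "&")   // decode a few common entities
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&nbsp;/g, " ")
    .replace(/\s+/g, " ")     // collapse runs of whitespace
    .trim();
}
```

For instance, an HTML description like `<p>Hello <b>world</b></p>` becomes the plain string "Hello world".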
📜 Main Dependencies
- @modelcontextprotocol/sdk: For building the MCP server.
- axios: For making HTTP requests.
- rss-parser: For parsing RSS/Atom feeds.
- cheerio: For parsing and manipulating HTML content.
- date-fns-tz: For handling time-zone-related date formatting.
- dotenv: For loading environment variables from a .env file.
📄 License
This project is licensed under the Apache-2.0 License.