
Hugging Face

Official
huggingface

Connects your LLM to Hugging Face Hub to search models, datasets, and research papers, plus access thousands of Gradio AI applications hosted on Spaces.

Integrates with Hugging Face's ecosystem to search models, datasets, and papers while dynamically connecting to Gradio-based tools hosted on Spaces for extended ML capabilities.


What it does

  • Search Hugging Face models and datasets
  • Browse AI research papers
  • Connect to Gradio applications on Spaces
  • Access ML model information and metadata
  • Interact with hosted AI tools dynamically

Best for

  • ML researchers exploring models and papers
  • Developers integrating AI models into projects
  • Data scientists finding relevant datasets
  • Anyone wanting to test Gradio AI applications

Key features:

  • Official Hugging Face integration
  • Access to thousands of Gradio apps
  • Streamable HTTP transport

About Hugging Face

Hugging Face is an official MCP server published by huggingface that provides AI assistants with tools and capabilities via the Model Context Protocol. It lets you search Hugging Face models, datasets, and papers, and connect dynamically to Gradio apps on Hugging Face Spaces for extended ML capabilities. It is categorized under developer tools.

How to install

You can install Hugging Face in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server supports remote connections over HTTP, so no local installation is required.

License

Hugging Face is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Hugging Face Official MCP Server


Welcome to the official Hugging Face MCP Server 🤗. Connect your LLM to the Hugging Face Hub and thousands of Gradio AI Applications.

Installing the MCP Server

Follow the instructions below to get started:

Install in Claude Desktop or claude.ai

Click here to add the Hugging Face connector to your account.

Alternatively, navigate to https://claude.ai/settings/connectors, and add "Hugging Face" from the gallery.

Install in Claude Code

Enter the command below to install in Claude Code:

claude mcp add hf-mcp-server -t http https://huggingface.co/mcp?login

Then start claude and follow the instructions to complete authentication.

Alternatively, supply a Hugging Face access token directly instead of using the interactive login:

claude mcp add hf-mcp-server \
  -t http https://huggingface.co/mcp \
  -H "Authorization: Bearer <YOUR_HF_TOKEN>"
Install in Gemini CLI

Enter the command below to install in Gemini CLI:

gemini mcp add -t http huggingface https://huggingface.co/mcp?login

Then start gemini and follow the instructions to complete authentication.

There is also a Hugging Face Gemini CLI extension that bundles the MCP server with a context file and custom commands, teaching Gemini to make better use of the MCP tools.

gemini extensions install https://github.com/huggingface/hf-mcp-server

Start gemini and run /mcp auth huggingface to authenticate the extension.

Install in VSCode

Click here to add the Hugging Face connector directly to VSCode. Alternatively, install from the gallery at https://code.visualstudio.com/mcp:


If you prefer to configure manually or use an auth token, add the snippet below to your mcp.json configuration:

"huggingface": {
    "url": "https://huggingface.co/mcp",
    "headers": {
        "Authorization": "Bearer <YOUR_HF_TOKEN>"
    }
}
Install in Cursor

Click here to install the Hugging Face MCP Server directly in Cursor.

If you prefer to configure manually or specify an authorization token, use the snippet below:

"huggingface": {
    "url": "https://huggingface.co/mcp",
    "headers": {
        "Authorization": "Bearer <YOUR_HF_TOKEN>"
    }
}

Once installed, navigate to https://huggingface.co/settings/mcp to configure your Tools and Spaces.

[!TIP] Add ?no_image_content=true to the URL to remove ImageContent blocks from Gradio Servers.


Quick Guide (Repository Packages)

This repo contains:

  • (/mcp) MCP Implementations of Hub API and Search endpoints for integration with MCP Servers.
  • (/app) An MCP Server and Web Application for deploying endpoints.

MCP Server

The following transports are supported:

  • STDIO
  • StreamableHTTP
  • StreamableHTTP in Stateless JSON Mode (StreamableHTTPJson)

The Web Application and HTTP Transports start by default on Port 3000.

The StreamableHTTP service is available at /mcp. Although not strictly enforced by the specification, this is a common convention.

[!TIP] The Web Application allows you to switch tools on and off. For STDIO and StreamableHTTP this will send a ToolListChangedNotification to the MCP Client. In StreamableHTTPJSON mode the tool will not be listed when the client next requests the tool lists.

Running Locally

You can run the MCP Server locally with either npx or docker.

npx @llmindset/hf-mcp-server       # Start in STDIO mode
npx @llmindset/hf-mcp-server-http  # Start in Streamable HTTP mode
npx @llmindset/hf-mcp-server-json  # Start in Streamable HTTP (JSON RPC) mode

To run with docker:

docker pull ghcr.io/evalstate/hf-mcp-server:latest
docker run --rm -p 3000:3000 ghcr.io/evalstate/hf-mcp-server:latest


All commands above start the Management Web interface on http://localhost:3000/. The Streamable HTTP server is accessible on http://localhost:3000/mcp. See [Environment Variables](#environment-variables) for configuration options. Docker defaults to Streamable HTTP (JSON RPC) mode.

Developing OpenAI Apps SDK Components

To build and test the Apps SDK component, run

cd packages/app
npm run dev:widget

Then open http://localhost:5173/gradio-widget-dev.html. This will bring up a browser with HMR where you can send Structured Content to the components for testing.


Development

This project uses pnpm for build and development. Corepack is used to ensure everyone uses the same pnpm version (10.12.3).

# Install dependencies
pnpm install

# Build all packages
pnpm build

Build Commands

  • pnpm run clean -> clean build artifacts
  • pnpm run build -> build packages
  • pnpm run start -> start the MCP server application
  • pnpm run buildrun -> clean, build and start
  • pnpm run dev -> concurrently watch mcp and start dev server with HMR

Docker Build

Build the image:

docker build -t hf-mcp-server .

Run with default settings (Streamable HTTP JSON mode), Dashboard on Port 3000:

docker run --rm -p 3000:3000 -e DEFAULT_HF_TOKEN=hf_xxx hf-mcp-server

Run STDIO MCP Server:

docker run -i --rm -e TRANSPORT=stdio -p 3000:3000 -e DEFAULT_HF_TOKEN=hf_xxx hf-mcp-server

TRANSPORT can be stdio, streamableHttp or streamableHttpJson (default).
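A launch script might validate this setting before starting the container. The guard below is a sketch of such a check (it is not part of the server itself; streamableHttpJson is the documented default):

```shell
# Hypothetical launch-script guard: reject unknown TRANSPORT values
# before starting the container.
TRANSPORT="${TRANSPORT:-streamableHttpJson}"
case "$TRANSPORT" in
  stdio|streamableHttp|streamableHttpJson)
    echo "transport: $TRANSPORT" ;;
  *)
    echo "unknown transport: $TRANSPORT" >&2
    exit 1 ;;
esac
```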

Transport Endpoints

The different transport types use the following endpoints:

  • Streamable HTTP: /mcp (regular or JSON mode)
  • STDIO: Uses stdin/stdout directly, no HTTP endpoint

Stateful Connection Management

The streamableHttp transport is stateful - it maintains a connection with the MCP Client through an SSE connection. When using this transport, the following configuration options take effect:

  • MCP_CLIENT_HEARTBEAT_INTERVAL (default: 30000ms): How often to check connection health
  • MCP_CLIENT_CONNECTION_CHECK (default: 90000ms): How often to check for stale sessions
  • MCP_CLIENT_CONNECTION_TIMEOUT (default: 300000ms): Remove sessions inactive for this duration
  • MCP_PING_ENABLED (default: true): Enable ping keep-alive for sessions
  • MCP_PING_INTERVAL (default: 30000ms): Interval between ping cycles
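As a quick sanity check on these defaults, the inactivity timeout corresponds to ten heartbeat intervals:

```shell
# With the defaults above, an inactive session survives this many
# 30000 ms heartbeat checks before the 300000 ms timeout removes it.
timeout_ms=300000
heartbeat_ms=30000
checks=$((timeout_ms / heartbeat_ms))
echo "$checks"
```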

Environment Variables

The server respects the following environment variables:

  • TRANSPORT: The transport type to use (stdio, streamableHttp, or streamableHttpJson)
  • DEFAULT_HF_TOKEN: ⚠️ Requests are serviced with the HF_TOKEN received in the Authorization: Bearer header. The DEFAULT_HF_TOKEN is used if no header was sent. Only set this in Development / Test environments or for local STDIO Deployments. ⚠️
  • If running with stdio transport, HF_TOKEN is used if DEFAULT_HF_TOKEN is not set.
  • HF_API_TIMEOUT: Timeout for Hugging Face API requests in milliseconds (default: 12500ms / 12.5 seconds)
  • USER_CONFIG_API: URL to use for User settings (defaults to Local front-end)
  • ALLOW_INTERNAL_ADDRESS_HOSTS: Optional comma-separated host allowlist to permit internal/reserved DNS resolutions for trusted domains during outbound checks (supports exact hosts and *. wildcards, for example: huggingface.co,*.hf.space).
  • MCP_STRICT_COMPLIANCE: set to True for GET 405 rejects in JSON Mode (default serves a welcome page).
  • AUTHENTICATE_TOOL: whether to include an Authenticate tool to issue an OAuth challenge when called
  • SEARCH_ENABLES_FETCH: When set to true, automatically enables the hf_doc_fetch tool whenever hf_doc_search is enabled
  • PROXY_TOOLS_CSV: Optional CSV that defines Streamable HTTP proxy tool sources (see below).
  • GRADIO_SKIP_INITIALIZE: When set to true, Gradio MCP calls skip the initialize handshake and issue tools/call directly.
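The wildcard semantics of ALLOW_INTERNAL_ADDRESS_HOSTS can be illustrated with a small shell approximation. This mimics the documented matching rule (exact hosts plus "*." suffix wildcards); it is not the server's actual implementation:

```shell
set -f   # disable globbing so "*.hf.space" stays a literal pattern
allow="huggingface.co,*.hf.space"
host="evalstate-hf-papers.hf.space"
match=no
IFS=','
for pat in $allow; do
  case "$pat" in
    \*.*)  # wildcard entry: match any host ending in the suffix
      suffix="${pat#\*}"
      case "$host" in *"$suffix") match=yes ;; esac ;;
    *)     # exact entry: match the host verbatim
      [ "$host" = "$pat" ] && match=yes ;;
  esac
done
unset IFS
echo "$match"
```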

Proxy tools (Streamable HTTP via CSV)

You can load proxy tool definitions at startup by setting PROXY_TOOLS_CSV to an HTTPS URL or a local file path. The server fetches each MCP endpoint once on startup, runs initialize + tools/list (10s timeout), and registers any tools returned. If a source fails or returns no tools, it is skipped (no startup failure).

CSV format

proxy_id,url,response_type
papers,https://evalstate-hf-papers.hf.space/mcp,SSE
news,https://example.com/mcp,JSON

  • proxy_id: identifier used to disambiguate tools.
  • url: Streamable HTTP MCP endpoint.
  • response_type: SSE (streamed response) or JSON (direct JSON-RPC response).
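A quick way to sanity-check a file before pointing PROXY_TOOLS_CSV at it is to verify every row has exactly three fields (the file path here is illustrative):

```shell
# Recreate the example CSV from above and check its shape.
cat > /tmp/proxy-tools.csv <<'EOF'
proxy_id,url,response_type
papers,https://evalstate-hf-papers.hf.space/mcp,SSE
news,https://example.com/mcp,JSON
EOF
# Every row, header included, must have exactly three comma-separated fields.
awk -F',' 'NF != 3 { bad = 1 } END { exit bad }' /tmp/proxy-tools.csv \
  && echo "csv ok"
```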

Tool naming

  • Single source: tool names are unchanged (taken from the downstream server).
  • Multiple sources: tool names are prefixed with proxy_id_ (e.g. papers_hf-papers-search_send).
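The prefixing rule for multiple sources amounts to simple string concatenation:

```shell
# With more than one proxy source, the downstream tool name is prefixed
# with "<proxy_id>_" to keep names unambiguous across sources.
proxy_id="papers"
tool_name="hf-papers-search_send"
prefixed="${proxy_id}_${tool_name}"
echo "$prefixed"
```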

You can include these tool names in bouquets or mixes as needed. Use bouquet=proxy or mix=proxy to enable all proxy tools loaded from PROXY_TOOLS_CSV.


README truncated. View full README on GitHub.
