hugging-face-paper-publisher


Publish and manage research papers on Hugging Face Hub. Supports creating paper pages, linking papers to models/datasets, claiming authorship, and generating professional markdown-based research articles.

Install

mkdir -p .claude/skills/hugging-face-paper-publisher && curl -L -o skill.zip "https://mcp.directory/api/skills/download/646" && unzip -o skill.zip -d .claude/skills/hugging-face-paper-publisher && rm skill.zip

Installs to .claude/skills/hugging-face-paper-publisher

About this skill

Overview

This skill provides comprehensive tools for AI engineers and researchers to publish, manage, and link research papers on the Hugging Face Hub. It streamlines the workflow from paper creation to publication, including integration with arXiv, model/dataset linking, and authorship management.

Integration with HF Ecosystem

  • Paper Pages: Index and discover papers on Hugging Face Hub
  • arXiv Integration: Automatic paper indexing from arXiv IDs
  • Model/Dataset Linking: Connect papers to relevant artifacts through metadata
  • Authorship Verification: Claim and verify paper authorship
  • Research Article Template: Generate professional, modern scientific papers

Version

1.0.0

Dependencies

  • huggingface_hub>=0.26.0
  • pyyaml>=6.0.3
  • requests>=2.32.5
  • markdown>=3.5.0
  • python-dotenv>=1.2.1

Core Capabilities

1. Paper Page Management

  • Index Papers: Add papers to Hugging Face from arXiv
  • Claim Authorship: Verify and claim authorship on published papers
  • Manage Visibility: Control which papers appear on your profile
  • Paper Discovery: Find and explore papers in the HF ecosystem

2. Link Papers to Artifacts

  • Model Cards: Add paper citations to model metadata
  • Dataset Cards: Link papers to datasets via README
  • Automatic Tagging: Hub auto-generates arxiv:<PAPER_ID> tags
  • Citation Management: Maintain proper attribution and references

3. Research Article Creation

  • Markdown Templates: Generate professional paper formatting
  • Modern Design: Clean, readable research article layouts
  • Dynamic TOC: Automatic table of contents generation
  • Section Structure: Standard scientific paper organization
  • LaTeX Math: Support for equations and technical notation

4. Metadata Management

  • YAML Frontmatter: Proper model/dataset card metadata
  • Citation Tracking: Maintain paper references across repositories
  • Version Control: Track paper updates and revisions
  • Multi-Paper Support: Link multiple papers to single artifacts

Usage Instructions

The skill includes Python scripts in scripts/ for paper publishing operations.

Prerequisites

  • Install dependencies: uv add huggingface_hub pyyaml requests markdown python-dotenv
  • Set the HF_TOKEN environment variable to a token with Write access
  • Activate virtual environment: source .venv/bin/activate
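To catch a missing token before any script runs, a minimal stdlib-only sketch can check the environment up front (the listed python-dotenv dependency can populate the environment from a local .env file first via load_dotenv()); this is an illustration, not code from the skill itself:

```python
import os

def get_hf_token() -> str:
    """Read the Write-access token the scripts expect in HF_TOKEN."""
    token = os.environ.get("HF_TOKEN", "")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; create a token with Write access "
            "in your Hugging Face account settings"
        )
    return token
```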

Method 1: Index Paper from arXiv

Add a paper to Hugging Face Paper Pages from arXiv.

Basic Usage:

python scripts/paper_manager.py index \
  --arxiv-id "2301.12345"

Check If Paper Exists:

python scripts/paper_manager.py check \
  --arxiv-id "2301.12345"

Direct URL Access: You can also visit https://huggingface.co/papers/{arxiv-id} directly to index a paper.
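As a sketch of what the check subcommand could do, the paper page URL above can be probed with the listed requests dependency. Treating a 200 response as "already indexed" is an assumption about the endpoint's behavior, and the probe needs network access:

```python
import requests

def paper_page_url(arxiv_id: str) -> str:
    # URL scheme from the note above
    return f"https://huggingface.co/papers/{arxiv_id}"

def is_indexed(arxiv_id: str) -> bool:
    """Return True if the Paper Page responds with 200 (requires network)."""
    resp = requests.head(paper_page_url(arxiv_id), allow_redirects=True, timeout=10)
    return resp.status_code == 200
```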

Method 2: Link Paper to Model/Dataset

Add paper references to model or dataset README with proper YAML metadata.

Add to Model Card:

python scripts/paper_manager.py link \
  --repo-id "username/model-name" \
  --repo-type "model" \
  --arxiv-id "2301.12345"

Add to Dataset Card:

python scripts/paper_manager.py link \
  --repo-id "username/dataset-name" \
  --repo-type "dataset" \
  --arxiv-id "2301.12345"

Add Multiple Papers:

python scripts/paper_manager.py link \
  --repo-id "username/model-name" \
  --repo-type "model" \
  --arxiv-ids "2301.12345,2302.67890,2303.11111"

With Custom Citation:

python scripts/paper_manager.py link \
  --repo-id "username/model-name" \
  --repo-type "model" \
  --arxiv-id "2301.12345" \
  --citation "$(cat citation.txt)"

How Linking Works

When you add an arXiv paper link to a model or dataset README:

  1. The Hub extracts the arXiv ID from the link
  2. A tag arxiv:<PAPER_ID> is automatically added to the repository
  3. Users can click the tag to view the Paper Page
  4. The Paper Page shows all models/datasets citing this paper
  5. Papers are discoverable through filters and search
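A minimal sketch of this flow with the listed huggingface_hub dependency — not the actual implementation in scripts/paper_manager.py — might look like:

```python
def paper_link_markdown(arxiv_id: str) -> str:
    # Step 1 above: the Hub extracts the arXiv ID from a link like this
    return f"This model is described in [the paper](https://arxiv.org/abs/{arxiv_id})."

def link_paper(repo_id: str, arxiv_id: str, token: str) -> None:
    """Append a paper link to the model card; the Hub then auto-tags the repo."""
    from huggingface_hub import ModelCard  # deferred: needs huggingface_hub installed

    card = ModelCard.load(repo_id, token=token)
    snippet = paper_link_markdown(arxiv_id)
    if snippet not in card.text:  # keep re-runs idempotent
        card.text += f"\n\n{snippet}\n"
        card.push_to_hub(repo_id, token=token)
```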

Method 3: Claim Authorship

Verify your authorship on papers published on Hugging Face.

Start Claim Process:

python scripts/paper_manager.py claim \
  --arxiv-id "2301.12345" \
  --email "[email protected]"

Manual Process:

  1. Navigate to your paper's page: https://huggingface.co/papers/{arxiv-id}
  2. Find your name in the author list
  3. Click your name and select "Claim authorship"
  4. Wait for admin team verification

Check Authorship Status:

python scripts/paper_manager.py check-authorship \
  --arxiv-id "2301.12345"

Method 4: Manage Paper Visibility

Control which verified papers appear on your public profile.

List Your Papers:

python scripts/paper_manager.py list-my-papers

Toggle Visibility:

python scripts/paper_manager.py toggle-visibility \
  --arxiv-id "2301.12345" \
  --show true

Manage in Settings: Navigate to your account settings → Papers section to toggle "Show on profile" for each paper.

Method 5: Create Research Article

Generate a professional markdown-based research paper using modern templates.

Create from Template:

python scripts/paper_manager.py create \
  --template "standard" \
  --title "Your Paper Title" \
  --output "paper.md"

Available Templates:

  • standard - Traditional scientific paper structure
  • modern - Clean, web-friendly format inspired by Distill
  • arxiv - arXiv-style formatting
  • ml-report - Machine learning experiment report

Generate Complete Paper:

python scripts/paper_manager.py create \
  --template "modern" \
  --title "Fine-Tuning Large Language Models with LoRA" \
  --authors "Jane Doe, John Smith" \
  --abstract "$(cat abstract.txt)" \
  --output "paper.md"

Convert to HTML:

python scripts/paper_manager.py convert \
  --input "paper.md" \
  --output "paper.html" \
  --style "modern"

Paper Template Structure

Standard Research Paper Sections:

---
title: Your Paper Title
authors: Jane Doe, John Smith
affiliations: University X, Lab Y
date: 2025-01-15
arxiv: 2301.12345
tags: [machine-learning, nlp, fine-tuning]
---

# Abstract
Brief summary of the paper...

# 1. Introduction
Background and motivation...

# 2. Related Work
Previous research and context...

# 3. Methodology
Approach and implementation...

# 4. Experiments
Setup, datasets, and procedures...

# 5. Results
Findings and analysis...

# 6. Discussion
Interpretation and implications...

# 7. Conclusion
Summary and future work...

# References
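The frontmatter in this template is plain YAML between two --- delimiters, so the listed pyyaml dependency can split metadata from body. A minimal sketch (the sample document below reuses field names from the template above):

```python
import yaml

def parse_frontmatter(text: str):
    """Return (metadata_dict, body) for a '---'-delimited markdown file."""
    if not text.startswith("---"):
        return {}, text
    # split produces: "" (before first ---), raw metadata, remaining body
    _, raw_meta, body = text.split("---", 2)
    return yaml.safe_load(raw_meta), body.lstrip("\n")

doc = """---
title: Your Paper Title
authors: Jane Doe, John Smith
arxiv: "2301.12345"
---

# Abstract
Brief summary of the paper...
"""
meta, body = parse_frontmatter(doc)
```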

Modern Template Features:

  • Dynamic table of contents
  • Responsive design for web viewing
  • Code syntax highlighting
  • Interactive figures and charts
  • Math equation rendering (LaTeX)
  • Citation management
  • Author affiliation linking

Commands Reference

Index Paper:

python scripts/paper_manager.py index --arxiv-id "2301.12345"

Link to Repository:

python scripts/paper_manager.py link \
  --repo-id "username/repo-name" \
  --repo-type "model|dataset|space" \
  --arxiv-id "2301.12345" \
  [--citation "Full citation text"] \
  [--create-pr]

Claim Authorship:

python scripts/paper_manager.py claim \
  --arxiv-id "2301.12345" \
  --email "your.email@edu"

Manage Visibility:

python scripts/paper_manager.py toggle-visibility \
  --arxiv-id "2301.12345" \
  --show true|false

Create Research Article:

python scripts/paper_manager.py create \
  --template "standard|modern|arxiv|ml-report" \
  --title "Paper Title" \
  [--authors "Author1, Author2"] \
  [--abstract "Abstract text"] \
  [--output "filename.md"]

Convert Markdown to HTML:

python scripts/paper_manager.py convert \
  --input "paper.md" \
  --output "paper.html" \
  [--style "modern|classic"]

Check Paper Status:

python scripts/paper_manager.py check --arxiv-id "2301.12345"

List Your Papers:

python scripts/paper_manager.py list-my-papers

Search Papers:

python scripts/paper_manager.py search --query "transformer attention"

YAML Metadata Format

When linking papers to models or datasets, proper YAML frontmatter is required:

Model Card Example:

---
language:
  - en
license: apache-2.0
tags:
  - text-generation
  - transformers
  - llm
library_name: transformers
---

# Model Name

This model is based on the approach described in [Our Paper](https://arxiv.org/abs/2301.12345).

## Citation

```bibtex
@article{doe2023paper,
  title={Your Paper Title},
  author={Doe, Jane and Smith, John},
  journal={arXiv preprint arXiv:2301.12345},
  year={2023}
}

```

Dataset Card Example:
---
language:
  - en
license: cc-by-4.0
task_categories:
  - text-generation
  - question-answering
size_categories:
  - 10K<n<100K
---

# Dataset Name

Dataset introduced in [Our Paper](https://arxiv.org/abs/2301.12345).

For more details, see the [paper page](https://huggingface.co/papers/2301.12345).

The Hub automatically extracts arXiv IDs from these links and creates arxiv:2301.12345 tags.
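Whether the auto-generated tag has appeared can be checked from repository metadata with huggingface_hub. A sketch — the check needs network access and assumes the Hub has already re-indexed the updated card:

```python
def expected_tag(arxiv_id: str) -> str:
    # tag format the Hub generates, per the note above
    return f"arxiv:{arxiv_id}"

def has_paper_tag(repo_id: str, arxiv_id: str) -> bool:
    """Return True if the repo carries the auto-generated arXiv tag."""
    from huggingface_hub import HfApi  # deferred: needs huggingface_hub installed

    info = HfApi().model_info(repo_id)
    return expected_tag(arxiv_id) in (info.tags or [])
```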

Integration Examples

Workflow 1: Publish New Research

# 1. Create research article
python scripts/paper_manager.py create \
  --template "modern" \
  --title "Novel Fine-Tuning Approach" \
  --output "paper.md"

# 2. Edit paper.md with your content

# 3. Submit to arXiv (external process)
# Upload to arxiv.org, get arXiv ID

# 4. Index on Hugging Face
python scripts/paper_manager.py index --arxiv-id "2301.12345"

# 5. Link to your model
python scripts/paper_manager.py link \

---

*Content truncated.*
