---
name: add-dataset
description: Guide for adding a new dataset loader to AReaL. Use when user wants to add a new dataset.
---
## Install

```shell
mkdir -p .claude/skills/add-dataset && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5252" && unzip -o skill.zip -d .claude/skills/add-dataset && rm skill.zip
```

Installs to `.claude/skills/add-dataset`.
## About this skill

### Add Dataset

Add a new dataset loader to AReaL.
## When to Use

This skill is triggered when:
- User asks "how do I add a dataset?"
- User wants to integrate a new dataset
- User mentions creating a dataset loader
## Step-by-Step Guide

### Step 1: Create Dataset File

Create `areal/dataset/<name>.py`:
```python
from datasets import Dataset, load_dataset


def get_<name>_sft_dataset(
    path: str,
    split: str,
    tokenizer,
    max_length: int | None = None,
) -> Dataset:
    """Load dataset for SFT training.

    Args:
        path: Path to dataset (HuggingFace hub or local path)
        split: Dataset split (train/validation/test)
        tokenizer: Tokenizer for processing
        max_length: Maximum sequence length (optional)

    Returns:
        HuggingFace Dataset with processed samples
    """
    dataset = load_dataset(path=path, split=split)

    def process(sample):
        # Tokenize the full sequence (prompt + response)
        seq_token = tokenizer.encode(
            sample["question"] + sample["answer"] + tokenizer.eos_token
        )
        prompt_token = tokenizer.encode(sample["question"])
        # Loss mask: 0 for prompt, 1 for response
        loss_mask = [0] * len(prompt_token) + [1] * (len(seq_token) - len(prompt_token))
        return {"input_ids": seq_token, "loss_mask": loss_mask}

    dataset = dataset.map(process).remove_columns(["question", "answer"])

    if max_length is not None:
        dataset = dataset.filter(lambda x: len(x["input_ids"]) <= max_length)

    return dataset


def get_<name>_rl_dataset(
    path: str,
    split: str,
    tokenizer,
    max_length: int | None = None,
) -> Dataset:
    """Load dataset for RL training.

    Args:
        path: Path to dataset
        split: Dataset split
        tokenizer: Tokenizer for length filtering
        max_length: Maximum sequence length

    Returns:
        HuggingFace Dataset with prompts and answers for reward computation
    """
    dataset = load_dataset(path=path, split=split)

    def process(sample):
        messages = [
            {
                "role": "user",
                "content": sample["question"],
            }
        ]
        return {"messages": messages, "answer": sample["answer"]}

    dataset = dataset.map(process).remove_columns(["question"])

    if max_length is not None:

        def filter_length(sample):
            content = sample["messages"][0]["content"]
            tokens = tokenizer.encode(content)
            return len(tokens) <= max_length

        dataset = dataset.filter(filter_length)

    return dataset
```
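Before wiring the loader into AReaL, the loss-mask logic from the SFT template can be sanity-checked in isolation. The sketch below substitutes a toy whitespace tokenizer (not a real HuggingFace tokenizer; spaces are inserted between fields so the toy split works):

```python
class ToyTokenizer:
    """Stand-in for a HF tokenizer: one 'token' per whitespace-separated word."""

    eos_token = "<eos>"

    def encode(self, text):
        return text.split()


def process(sample, tokenizer):
    # Same masking logic as the SFT template: 0 over the prompt, 1 over the response.
    seq_token = tokenizer.encode(
        sample["question"] + " " + sample["answer"] + " " + tokenizer.eos_token
    )
    prompt_token = tokenizer.encode(sample["question"])
    loss_mask = [0] * len(prompt_token) + [1] * (len(seq_token) - len(prompt_token))
    return {"input_ids": seq_token, "loss_mask": loss_mask}


out = process({"question": "What is 2 + 2 ?", "answer": "4"}, ToyTokenizer())
# Six prompt "tokens" are masked out; the answer and EOS contribute the two 1s.
assert out["loss_mask"] == [0, 0, 0, 0, 0, 0, 1, 1]
assert len(out["input_ids"]) == len(out["loss_mask"])
```

If the assertions pass, the mask covers exactly the response tokens, which is what the SFT trainer expects.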
### Step 2: Register in `__init__.py`

Update `areal/dataset/__init__.py`:
```python
# Add to VALID_DATASETS
VALID_DATASETS = [
    # ... existing datasets
    "<name>",
]


# Add to _get_custom_dataset function
def _get_custom_dataset(name: str, ...):
    # ... existing code
    elif name == "<name>":
        from areal.dataset.<name> import get_<name>_sft_dataset, get_<name>_rl_dataset

        # Argument order must match the loader signatures above
        if dataset_type == "sft":
            return get_<name>_sft_dataset(path, split, tokenizer, max_length)
        else:
            return get_<name>_rl_dataset(path, split, tokenizer, max_length)
```
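The elif-chain dispatch can be exercised standalone. This sketch mirrors the registration pattern with illustrative names (`mydata`, the loader functions, and the `dataset_type` argument are stand-ins, not AReaL's actual internals):

```python
VALID_DATASETS = ["gsm8k", "mydata"]  # illustrative registry


def load_mydata_sft(path, split, tokenizer, max_length=None):
    return {"type": "sft", "path": path, "split": split}


def load_mydata_rl(path, split, tokenizer, max_length=None):
    return {"type": "rl", "path": path, "split": split}


def get_custom_dataset(name, dataset_type, path, split, tokenizer=None, max_length=None):
    # Mirror the pattern in areal/dataset/__init__.py: validate the name, then dispatch.
    if name not in VALID_DATASETS:
        raise ValueError(f"Unknown dataset {name!r}; valid options: {VALID_DATASETS}")
    if name == "mydata":
        loader = load_mydata_sft if dataset_type == "sft" else load_mydata_rl
        return loader(path, split, tokenizer, max_length)
    raise NotImplementedError(name)


out = get_custom_dataset("mydata", "rl", "path/to/data", "train")
assert out["type"] == "rl"
```

Validating against `VALID_DATASETS` first gives callers an immediate, descriptive error instead of a silent fall-through.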
### Step 3: Add Config (Optional)

If the dataset needs special configuration, add to `areal/api/cli_args.py`:
```python
@dataclass
class TrainDatasetConfig:
    # ... existing fields
    <name>_specific_field: Optional[str] = None
```
### Step 4: Add Tests

Create `tests/test_<name>_dataset.py`:
```python
import pytest

from areal.dataset.<name> import get_<name>_sft_dataset, get_<name>_rl_dataset


def test_sft_dataset_loads(tokenizer):
    dataset = get_<name>_sft_dataset("path/to/data", split="train", tokenizer=tokenizer)
    assert len(dataset) > 0
    assert "input_ids" in dataset.column_names
    assert "loss_mask" in dataset.column_names


def test_rl_dataset_loads(tokenizer):
    dataset = get_<name>_rl_dataset("path/to/data", split="train", tokenizer=tokenizer)
    assert len(dataset) > 0
    assert "messages" in dataset.column_names
    assert "answer" in dataset.column_names
```
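The tests above assume a `tokenizer` fixture. One way to provide it in `tests/conftest.py` is a lightweight stub (in real tests you would likely load an actual tokenizer, e.g. via `transformers.AutoTokenizer.from_pretrained`; this stub is an assumption that keeps the sketch offline):

```python
import pytest


class WhitespaceTokenizer:
    """Minimal stand-in exposing the two attributes the loaders use."""

    eos_token = "<eos>"

    def encode(self, text):
        return text.split()


@pytest.fixture
def tokenizer():
    return WhitespaceTokenizer()
```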
## Reference Implementations

| Dataset | File | Description |
|---|---|---|
| GSM8K | `areal/dataset/gsm8k.py` | Math word problems |
| Geometry3K | `areal/dataset/geometry3k.py` | Geometry problems |
| CLEVR | `areal/dataset/clevr_count_70k.py` | Visual counting |
| HH-RLHF | `areal/dataset/hhrlhf.py` | Helpfulness/Harmlessness |
| TORL | `areal/dataset/torl_data.py` | Tool-use RL |
## Required Fields

### SFT Dataset

```python
{
    "messages": [
        {"role": "user", "content": "..."},
        {"role": "assistant", "content": "..."},
    ]
}
```
### RL Dataset

```python
{
    "messages": [
        {"role": "user", "content": "..."},
    ],
    "answer": "ground_truth_for_reward",
    # Optional metadata for reward function
}
```
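The `answer` field is what the reward computation reads at training time. A minimal exact-match reward sketch (the function name and signature are illustrative; AReaL's actual reward interface may differ):

```python
def exact_match_reward(completion: str, sample: dict) -> float:
    # 1.0 if the rollout's final answer matches the stored ground truth, else 0.0.
    return 1.0 if completion.strip() == sample["answer"].strip() else 0.0


sample = {
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],
    "answer": "4",
}
assert exact_match_reward(" 4 ", sample) == 1.0
assert exact_match_reward("5", sample) == 0.0
```

Anything else the reward needs (e.g. a problem ID or test cases) can ride along in the optional metadata fields.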
## Common Mistakes

- ❌ Returning `List[Dict]` instead of a HuggingFace `Dataset`
- ❌ Using Python loops instead of `dataset.map()`/`filter()`
- ❌ Missing `"messages"` field for RL datasets
- ❌ Wrong message format (should be a list of dicts with `role` and `content`)
- ❌ Not registering in `__init__.py`
<!--
================================================================================
MAINTAINER GUIDE
================================================================================
Location: .claude/skills/add-dataset/SKILL.md
Invocation: /add-dataset <name>

## Purpose

Step-by-step guide for adding new dataset loaders.

## How to Update

### When Dataset API Changes

1. Update the code templates
2. Update required fields section
3. Update registration example

### When New Dataset Types Added

1. Add to "Reference Implementations" table
2. Add any new required fields
================================================================================
-->