firecrawl-data-handling
Implement FireCrawl PII handling, data retention, and GDPR/CCPA compliance patterns. Use when handling sensitive data, implementing data redaction, configuring retention policies, or ensuring compliance with privacy regulations for FireCrawl integrations. Trigger with phrases like "firecrawl data", "firecrawl PII", "firecrawl GDPR", "firecrawl data retention", "firecrawl privacy", "firecrawl CCPA".
Install
mkdir -p .claude/skills/firecrawl-data-handling && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6353" && unzip -o skill.zip -d .claude/skills/firecrawl-data-handling && rm skill.zip
Installs to .claude/skills/firecrawl-data-handling
About this skill
Firecrawl Data Handling
Overview
Process scraped web content from Firecrawl pipelines. Covers markdown cleaning, structured data extraction with Zod validation, content deduplication, chunking for LLM/RAG, and storage patterns for crawled content.
Instructions
Step 1: Content Cleaning
import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({
  apiKey: process.env.FIRECRAWL_API_KEY!,
});

// Scrape with clean output settings
async function scrapeClean(url: string) {
  const result = await firecrawl.scrapeUrl(url, {
    formats: ["markdown"],
    onlyMainContent: true, // strips nav, footer, sidebar
    excludeTags: ["script", "style", "nav", "footer", "iframe"],
    waitFor: 2000,
  });
  return {
    url: result.metadata?.sourceURL || url,
    title: result.metadata?.title || "",
    markdown: cleanMarkdown(result.markdown || ""),
    scrapedAt: new Date().toISOString(),
  };
}

function cleanMarkdown(md: string): string {
  return md
    .replace(/\n{3,}/g, "\n\n") // collapse multiple newlines
    .replace(/\[.*?\]\(javascript:.*?\)/g, "") // remove JS links
    .replace(/!\[.*?\]\(data:.*?\)/g, "") // remove inline data URIs
    .replace(/<!--[\s\S]*?-->/g, "") // remove HTML comments
    .replace(/<script[\s\S]*?<\/script>/gi, "") // remove script tags
    .trim();
}
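A minimal usage sketch (assumes an async context; the URL is a placeholder):

// Hypothetical target URL — replace with a real page
const page = await scrapeClean("https://example.com/blog/post");
console.log(page.title, page.scrapedAt);
console.log(page.markdown.slice(0, 200));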
Step 2: Structured Extraction with Validation
import { z } from "zod";

const ArticleSchema = z.object({
  title: z.string().min(1),
  author: z.string().optional(),
  publishedDate: z.string().optional(),
  content: z.string().min(50),
  wordCount: z.number(),
});

async function extractArticle(url: string) {
  const result = await firecrawl.scrapeUrl(url, {
    formats: ["extract"],
    extract: {
      schema: {
        type: "object",
        properties: {
          title: { type: "string" },
          author: { type: "string" },
          publishedDate: { type: "string" },
          content: { type: "string" },
        },
        required: ["title", "content"],
      },
    },
  });
  if (!result.extract) throw new Error(`Extraction failed for ${url}`);
  return ArticleSchema.parse({
    ...result.extract,
    wordCount: (result.extract.content || "").split(/\s+/).length,
  });
}
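One way to call the extractor and surface schema failures separately from other errors; the try/catch pattern and the URL below are illustrative, not part of the Firecrawl API:

import { ZodError } from "zod";

try {
  // Placeholder URL — swap in the article you want to extract
  const article = await extractArticle("https://example.com/news/story");
  console.log(`${article.title} — ${article.wordCount} words`);
} catch (err) {
  if (err instanceof ZodError) {
    // Extraction returned data that does not satisfy ArticleSchema
    console.error("Validation failed:", err.issues);
  } else {
    throw err;
  }
}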
Step 3: Content Deduplication
import { createHash } from "crypto";

function contentHash(text: string): string {
  return createHash("sha256")
    .update(text.trim().toLowerCase())
    .digest("hex");
}

function deduplicatePages(pages: Array<{ url: string; markdown: string }>) {
  const seen = new Map<string, string>(); // hash -> first URL
  const unique: typeof pages = [];
  const duplicates: Array<{ url: string; duplicateOf: string }> = [];
  for (const page of pages) {
    const hash = contentHash(page.markdown);
    if (seen.has(hash)) {
      duplicates.push({ url: page.url, duplicateOf: seen.get(hash)! });
    } else {
      seen.set(hash, page.url);
      unique.push(page);
    }
  }
  console.log(`Dedup: ${pages.length} input, ${unique.length} unique, ${duplicates.length} duplicates`);
  return { unique, duplicates };
}
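Wiring the cleaner and the deduplicator together could look like this sketch (the URL list is hypothetical; the trailing-slash pair just illustrates a typical alias duplicate):

// Hypothetical crawl targets — replace with your own URLs
const urls = [
  "https://example.com/pricing",
  "https://example.com/pricing/", // likely the same content under a different URL
];

const pages = await Promise.all(
  urls.map(async (url) => {
    const page = await scrapeClean(url);
    return { url: page.url, markdown: page.markdown };
  })
);

const { unique, duplicates } = deduplicatePages(pages);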
Step 4: Chunk for LLM / RAG
interface ContentChunk {
  url: string;
  title: string;
  chunkIndex: number;
  content: string;
  wordCount: number;
}

function chunkForRAG(
  url: string,
  title: string,
  markdown: string,
  maxWords = 800
): ContentChunk[] {
  // Split by headings to preserve semantic boundaries
  const sections = markdown.split(/\n(?=#{1,3}\s)/);
  const chunks: ContentChunk[] = [];
  let current = "";
  let index = 0;
  for (const section of sections) {
    const combined = current ? `${current}\n\n${section}` : section;
    if (combined.split(/\s+/).length > maxWords && current) {
      chunks.push({
        url, title, chunkIndex: index++,
        content: current.trim(),
        wordCount: current.split(/\s+/).length,
      });
      current = section;
    } else {
      current = combined;
    }
  }
  if (current.trim()) {
    chunks.push({
      url, title, chunkIndex: index,
      content: current.trim(),
      wordCount: current.split(/\s+/).length,
    });
  }
  return chunks;
}
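A quick check of the chunker on a scraped page; the URL is a placeholder and the word limit is lowered here only to make splitting visible:

const doc = await scrapeClean("https://example.com/docs/getting-started");
const chunks = chunkForRAG(doc.url, doc.title, doc.markdown, 300);

for (const chunk of chunks) {
  console.log(`#${chunk.chunkIndex}: ${chunk.wordCount} words`);
}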
Step 5: Crawl and Store Pipeline
import { writeFileSync, mkdirSync } from "fs";
import { join } from "path";

async function crawlAndStore(baseUrl: string, outputDir: string, opts?: {
  maxPages?: number;
  paths?: string[];
}) {
  mkdirSync(outputDir, { recursive: true });
  const crawlResult = await firecrawl.crawlUrl(baseUrl, {
    limit: opts?.maxPages || 50,
    includePaths: opts?.paths,
    scrapeOptions: { formats: ["markdown"], onlyMainContent: true },
  });
  const pages = (crawlResult.data || []).map(page => ({
    url: page.metadata?.sourceURL || baseUrl,
    markdown: cleanMarkdown(page.markdown || ""),
  }));

  // Deduplicate
  const { unique } = deduplicatePages(pages);

  // Write files + manifest
  const manifest = unique.map(page => {
    const slug = new URL(page.url).pathname
      .replace(/\//g, "_").replace(/^_|_$/g, "") || "index";
    const filename = `${slug}.md`;
    writeFileSync(join(outputDir, filename), page.markdown);
    return { url: page.url, file: filename, size: page.markdown.length };
  });
  writeFileSync(join(outputDir, "manifest.json"), JSON.stringify(manifest, null, 2));
  return manifest;
}
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Empty content | JS not rendered | Increase waitFor, use onlyMainContent |
| Garbage in markdown | Bad HTML cleanup | Add excludeTags for problematic elements |
| Duplicate pages | URL aliases or redirects | Content-hash deduplication |
| Oversized chunks | Long single sections | Hard-split long sections by word count (see sketch after this table) |
| Extract returns null | Page too complex for LLM | Simplify schema, use shorter prompt |
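For the "Oversized chunks" row, one possible fallback is to hard-split any single section that alone exceeds maxWords. This helper is a sketch, not part of the original chunkForRAG above — you would apply it to each section before the heading-based grouping:

function hardSplitSection(section: string, maxWords = 800): string[] {
  const words = section.split(/\s+/);
  if (words.length <= maxWords) return [section];
  const parts: string[] = [];
  for (let i = 0; i < words.length; i += maxWords) {
    // Rejoin each word-limited slice into its own sub-section
    parts.push(words.slice(i, i + maxWords).join(" "));
  }
  return parts;
}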
Examples
Documentation Scraper with RAG Output
import { readFileSync } from "fs";

const docs = await crawlAndStore("https://docs.example.com", "./scraped-docs", {
  maxPages: 50,
  paths: ["/docs/*", "/api/*"],
});

// Generate RAG-ready chunks
for (const doc of docs) {
  const content = readFileSync(`./scraped-docs/${doc.file}`, "utf-8");
  const chunks = chunkForRAG(doc.url, doc.file, content);
  console.log(`${doc.url}: ${chunks.length} chunks`);
  // Feed chunks to a vector store (Pinecone, Weaviate, pgvector, etc.)
}
Next Steps
For access control, see firecrawl-enterprise-rbac.