firecrawl-policy-guardrails
Implement FireCrawl lint rules, policy enforcement, and automated guardrails. Use when setting up code quality rules for FireCrawl integrations, implementing pre-commit hooks, or configuring CI policy checks for FireCrawl best practices. Trigger with phrases like "firecrawl policy", "firecrawl lint", "firecrawl guardrails", "firecrawl best practices check", "firecrawl eslint".
Install
```bash
mkdir -p .claude/skills/firecrawl-policy-guardrails && curl -L -o skill.zip "https://mcp.directory/api/skills/download/6343" && unzip -o skill.zip -d .claude/skills/firecrawl-policy-guardrails && rm skill.zip
```
Installs to .claude/skills/firecrawl-policy-guardrails
About this skill
Firecrawl Policy Guardrails
Overview
Automated guardrails for Firecrawl scraping pipelines. Web scraping carries legal (robots.txt, ToS), ethical (rate limiting, attribution), and cost (credit burn) risks. This skill implements domain blocklists, credit budgets, content quality gates, and per-domain rate limits as enforceable policies.
Instructions
Step 1: Domain Policy Enforcement
```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({
  apiKey: process.env.FIRECRAWL_API_KEY!,
});

class ScrapePolicy {
  // Domains that explicitly prohibit scraping in their ToS
  static BLOCKED_DOMAINS = [
    "facebook.com", "instagram.com", // Meta ToS
    "linkedin.com",                  // LinkedIn ToS
    "twitter.com", "x.com",          // X/Twitter ToS
  ];

  // Domains with sensitive/regulated content
  static SENSITIVE_DOMAINS = [
    "*.gov", "*.mil", // Government
    "*.edu",          // Educational (FERPA)
  ];

  static validateUrl(url: string): void {
    const hostname = new URL(url).hostname;
    for (const blocked of this.BLOCKED_DOMAINS) {
      if (hostname === blocked || hostname.endsWith(`.${blocked}`)) {
        throw new PolicyViolation(`Domain "${hostname}" is blocked: ToS prohibits scraping`);
      }
    }
    for (const pattern of this.SENSITIVE_DOMAINS) {
      const regex = new RegExp("^" + pattern.replace("*.", ".*\\.") + "$");
      if (regex.test(hostname)) {
        console.warn(`CAUTION: "${hostname}" matches sensitive domain pattern "${pattern}"`);
      }
    }
  }
}

class PolicyViolation extends Error {
  constructor(message: string) {
    super(message);
    this.name = "PolicyViolation";
  }
}
```
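A quick smoke test of the blocklist check, as a minimal sketch; both URLs are illustrative placeholders:

```typescript
// The first target passes; the second throws PolicyViolation because
// "www.linkedin.com" ends with ".linkedin.com".
try {
  ScrapePolicy.validateUrl("https://docs.example.com/guide");
  ScrapePolicy.validateUrl("https://www.linkedin.com/in/someone");
} catch (e) {
  if (e instanceof PolicyViolation) {
    console.warn(`Skipping target: ${e.message}`);
  } else {
    throw e;
  }
}
```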
Step 2: Credit Budget Enforcement
```typescript
class CrawlBudget {
  private usage = new Map<string, number>();
  private dailyLimit: number;

  constructor(dailyLimit = 5000) {
    this.dailyLimit = dailyLimit;
  }

  authorize(estimatedPages: number): void {
    const today = new Date().toISOString().split("T")[0];
    const used = this.usage.get(today) || 0;
    if (used + estimatedPages > this.dailyLimit) {
      throw new PolicyViolation(
        `Daily credit limit would be exceeded: ${used} used + ${estimatedPages} requested > ${this.dailyLimit} limit`
      );
    }
  }

  record(pagesScraped: number) {
    const today = new Date().toISOString().split("T")[0];
    this.usage.set(today, (this.usage.get(today) || 0) + pagesScraped);
  }
}

const budget = new CrawlBudget(5000);
```
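A minimal sketch of the authorize-then-record flow using the `budget` instance above; the page counts are illustrative:

```typescript
try {
  budget.authorize(100); // throws PolicyViolation if today's total would pass 5000
  // ...run the crawl...
  budget.record(87);     // record what was actually scraped (87 is illustrative)
} catch (e) {
  if (e instanceof PolicyViolation) console.warn(e.message);
}
```

Note that `authorize` reserves nothing on its own; callers must pair it with `record` after the crawl so the daily tally stays accurate.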
Step 3: Content Quality Gate
```typescript
function validateScrapedContent(result: any): {
  accepted: boolean;
  reason?: string;
} {
  const md = result.markdown || "";
  // Reject thin content
  if (md.length < 50) {
    return { accepted: false, reason: "Content too short (<50 chars)" };
  }
  // Reject error pages
  if (/403 forbidden|access denied|captcha/i.test(md)) {
    return { accepted: false, reason: "Error page detected" };
  }
  // Reject login walls
  if (/sign in to continue|create an account|login required/i.test(md)) {
    return { accepted: false, reason: "Login wall detected" };
  }
  // Reject cookie consent pages (only content is cookie notice)
  if (md.length < 500 && /cookie|consent|gdpr/i.test(md)) {
    return { accepted: false, reason: "Cookie consent page only" };
  }
  return { accepted: true };
}
```
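To apply the gate to a single page, something like the sketch below works inside an async function. It assumes a v1-style `scrapeUrl` response that exposes `markdown` at the top level; adjust the property access for your SDK version:

```typescript
const result = await firecrawl.scrapeUrl("https://docs.example.com/page", {
  formats: ["markdown"],
});
const verdict = validateScrapedContent(result);
if (!verdict.accepted) {
  console.log(`Dropped page: ${verdict.reason}`);
}
```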
Step 4: Crawl Limit Enforcement
```typescript
const MAX_CRAWL_LIMIT = 500;
const MAX_DEPTH = 5;

async function policedCrawl(url: string, requestedLimit: number) {
  // Validate URL
  ScrapePolicy.validateUrl(url);
  // Enforce hard limits
  const limit = Math.min(requestedLimit, MAX_CRAWL_LIMIT);
  if (requestedLimit > MAX_CRAWL_LIMIT) {
    console.warn(`Crawl limit capped: ${requestedLimit} -> ${MAX_CRAWL_LIMIT}`);
  }
  // Check budget
  budget.authorize(limit);
  // Execute with enforced limits
  const result = await firecrawl.crawlUrl(url, {
    limit,
    maxDepth: MAX_DEPTH,
    scrapeOptions: { formats: ["markdown"], onlyMainContent: true },
  });
  // Record actual usage
  const pagesScraped = result.data?.length || 0;
  budget.record(pagesScraped);
  // Filter by content quality
  const validPages = (result.data || []).filter(page => {
    const { accepted, reason } = validateScrapedContent(page);
    if (!accepted) console.log(`Rejected: ${page.metadata?.sourceURL} — ${reason}`);
    return accepted;
  });
  console.log(`Crawl: ${pagesScraped} scraped, ${validPages.length} accepted, ${pagesScraped - validPages.length} rejected`);
  return validPages;
}
```
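An illustrative call, assuming the pieces above are in scope; the URL and requested limit are placeholders:

```typescript
// Requests 1000 pages; the guardrail caps it at 500 and checks the budget first.
const pages = await policedCrawl("https://docs.example.com", 1000);
console.log(`Kept ${pages.length} quality-checked pages`);
```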
Step 5: Per-Domain Rate Limiting
```typescript
const DOMAIN_RATE_LIMITS: Record<string, number> = {
  "docs.example.com": 2, // 2 requests/second
  "blog.example.com": 1, // 1 request/second
  default: 5,            // 5 requests/second
};

const lastRequest = new Map<string, number>();

async function rateLimitedScrape(url: string) {
  const domain = new URL(url).hostname;
  const rate = DOMAIN_RATE_LIMITS[domain] || DOMAIN_RATE_LIMITS.default;
  const minInterval = 1000 / rate;
  const last = lastRequest.get(domain) || 0;
  const elapsed = Date.now() - last;
  if (elapsed < minInterval) {
    await new Promise(r => setTimeout(r, minInterval - elapsed));
  }
  lastRequest.set(domain, Date.now());
  return firecrawl.scrapeUrl(url, { formats: ["markdown"] });
}
```
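A sketch of the throttling in action; the URLs are placeholders on a domain configured at 2 requests/second, so consecutive calls are spaced at least ~500 ms apart:

```typescript
const targets = [
  "https://docs.example.com/a",
  "https://docs.example.com/b",
  "https://docs.example.com/c",
];
for (const target of targets) {
  await rateLimitedScrape(target); // delayed as needed by the per-domain interval
  console.log(`Scraped ${target}`);
}
```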
Policy Summary
| Policy | Enforcement | Consequence |
|---|---|---|
| Domain blocklist | Pre-request check | Request rejected with PolicyViolation |
| Credit budget | Pre-request check | Request rejected if over daily limit |
| Crawl limit | Hard cap at 500 | Silently capped, logged |
| Content quality | Post-scrape filter | Invalid pages excluded from results |
| Per-domain rate | Pre-request delay | Automatic throttling |
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| PolicyViolation thrown | Blocked domain | Remove from scrape targets |
| Budget exceeded | Heavy scraping day | Increase daily limit or wait |
| Many rejected pages | Error/login pages | Check target site, adjust URL patterns |
| Slow scraping | Per-domain rate limit | Expected behavior, protects target site |
Examples
Policy-Checked Pipeline
```typescript
async function scrapePipeline(urls: string[]) {
  const results = [];
  for (const url of urls) {
    try {
      ScrapePolicy.validateUrl(url);
      budget.authorize(1);
      const result = await rateLimitedScrape(url);
      const { accepted } = validateScrapedContent(result);
      if (accepted) results.push(result);
      budget.record(1);
    } catch (e) {
      if (e instanceof PolicyViolation) {
        console.warn(`Policy: ${e.message}`);
      } else {
        console.error(`Error: ${(e as Error).message}`);
      }
    }
  }
  return results;
}
```
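Invoking the pipeline with a mixed target list, as an illustrative sketch; blocked domains are skipped with a warning while the rest pass through the budget, rate-limit, and quality checks:

```typescript
const docs = await scrapePipeline([
  "https://docs.example.com/intro",
  "https://www.facebook.com/somepage", // rejected by the domain blocklist
  "https://blog.example.com/post",     // throttled to 1 request/second
]);
console.log(`${docs.length} documents passed all policy gates`);
```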
Resources
Next Steps
For architecture patterns, see firecrawl-architecture-variants.