# firecrawl-cost-tuning

Optimize Firecrawl costs through tier selection, sampling, and usage monitoring. Use when analyzing Firecrawl billing, reducing API costs, or implementing usage monitoring and budget alerts. Trigger with phrases like "firecrawl cost", "firecrawl billing", "reduce firecrawl costs", "firecrawl pricing", "firecrawl expensive", "firecrawl budget".
## Install

```shell
mkdir -p .claude/skills/firecrawl-cost-tuning && \
  curl -L -o skill.zip "https://mcp.directory/api/skills/download/4836" && \
  unzip -o skill.zip -d .claude/skills/firecrawl-cost-tuning && \
  rm skill.zip
```

Installs to `.claude/skills/firecrawl-cost-tuning`.
## Overview
Firecrawl charges credits per operation: 1 credit per scrape, 1 per crawled page, 1 per map call, and variable credits for extract (LLM usage). An unbounded crawl on a large site can consume thousands of credits in minutes. This skill covers concrete techniques to reduce credit consumption by 50-80%.
## Credit Cost Table
| Operation | Credits | Notes |
|---|---|---|
| `scrapeUrl` | 1 | Per page, any format |
| `crawlUrl` | 1 per page | Each discovered page costs 1 credit |
| `mapUrl` | 1 | Regardless of URLs returned |
| `batchScrapeUrls` | 1 per URL | Same as individual scrape |
| `extract` | 5+ | LLM processing adds cost |
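With per-operation costs this predictable, a job's worst-case cost can be estimated before any API call is made. The helper below is a minimal sketch of that idea — `estimateCredits` and its `Operation` type are illustrative, not part of the Firecrawl SDK, and the `extract` multiplier is a lower bound since LLM cost varies:

```typescript
// Hypothetical helper: worst-case credit estimate for a planned job,
// applying the rates from the table above.
type Operation =
  | { kind: "scrape"; urls: number }   // 1 credit per URL
  | { kind: "crawl"; limit: number }   // 1 per page; worst case = limit
  | { kind: "map" }                    // flat 1 credit
  | { kind: "extract"; urls: number }; // 5+ credits per URL (lower bound)

function estimateCredits(ops: Operation[]): number {
  return ops.reduce((total, op) => {
    switch (op.kind) {
      case "scrape": return total + op.urls;
      case "crawl": return total + op.limit;
      case "map": return total + 1;
      case "extract": return total + op.urls * 5;
    }
  }, 0);
}

// Example: map the site, then scrape 20 of the discovered URLs
console.log(estimateCredits([{ kind: "map" }, { kind: "scrape", urls: 20 }])); // 21
```

Running an estimate like this before each job pairs naturally with the budget tracker in Step 5.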
## Instructions

### Step 1: Always Set Crawl Limits
```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({
  apiKey: process.env.FIRECRAWL_API_KEY!,
});

// BAD: no limit — could crawl 100K pages
await firecrawl.crawlUrl("https://docs.large-project.org");
// Cost: potentially 100,000+ credits

// GOOD: bounded crawl
await firecrawl.crawlUrl("https://docs.large-project.org", {
  limit: 50, // max 50 pages
  maxDepth: 2, // only 2 levels deep
  includePaths: ["/api/*"], // only API docs
  excludePaths: ["/blog/*", "/changelog/*"],
  scrapeOptions: { formats: ["markdown"] },
});
// Cost: max 50 credits
```
### Step 2: Use Scrape for Known URLs Instead of Crawl
```typescript
// If you know which pages you need, don't crawl — scrape them directly
const targetUrls = [
  "https://docs.example.com/api/auth",
  "https://docs.example.com/api/users",
  "https://docs.example.com/api/billing",
];

// Cost: 3 credits (one per page)
const results = await firecrawl.batchScrapeUrls(targetUrls, {
  formats: ["markdown"],
});
// vs crawling the whole docs site: potentially 500+ credits
```
### Step 3: Map First, Then Selective Scrape
```typescript
// Map costs 1 credit and returns up to 30K URLs
const map = await firecrawl.mapUrl("https://docs.example.com");
// Cost: 1 credit

// Filter to only what you need
const apiDocs = (map.links || []).filter(url => url.includes("/api/"));
console.log(`${map.links?.length} total URLs, only ${apiDocs.length} are API docs`);

// Scrape only relevant pages
const results = await firecrawl.batchScrapeUrls(apiDocs.slice(0, 20), {
  formats: ["markdown"],
});
// Cost: 1 (map) + 20 (scrape) = 21 credits
// vs blind crawl: could be 500+ credits
```
### Step 4: Cache to Prevent Re-Scraping
```typescript
import { createHash } from "crypto";

const cache = new Map<string, { content: string; timestamp: number }>();
const CACHE_TTL = 24 * 3600 * 1000; // 24 hours

async function cachedScrape(url: string): Promise<string> {
  const key = createHash("md5").update(url).digest("hex");
  const cached = cache.get(key);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.content; // Free — no API call
  }
  const result = await firecrawl.scrapeUrl(url, { formats: ["markdown"] });
  if (result.markdown) {
    cache.set(key, { content: result.markdown, timestamp: Date.now() });
  }
  return result.markdown || "";
}
// Typical savings: 50-80% credit reduction for recurring scrapes
```
### Step 5: Monitor Credit Consumption
```shell
set -euo pipefail

# Check current credit balance
curl -s https://api.firecrawl.dev/v1/team/credits \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" | jq .
```
```typescript
// Daily credit tracker
class CreditBudget {
  private dailyLimit: number;
  private usage = new Map<string, number>();

  constructor(dailyLimit = 1000) {
    this.dailyLimit = dailyLimit;
  }

  canAfford(estimatedCredits: number): boolean {
    const today = new Date().toISOString().split("T")[0];
    const used = this.usage.get(today) || 0;
    return used + estimatedCredits <= this.dailyLimit;
  }

  record(credits: number) {
    const today = new Date().toISOString().split("T")[0];
    this.usage.set(today, (this.usage.get(today) || 0) + credits);
  }

  remaining(): number {
    const today = new Date().toISOString().split("T")[0];
    return this.dailyLimit - (this.usage.get(today) || 0);
  }
}

const budget = new CreditBudget(1000);

// Before each crawl
if (!budget.canAfford(50)) {
  throw new Error(`Daily credit budget exceeded. ${budget.remaining()} credits left`);
}
await firecrawl.crawlUrl(url, { limit: 50 });
budget.record(50);
```
### Step 6: Choose Minimal Formats
```shell
set -euo pipefail

# Cheapest: markdown only (1 credit, fastest)
curl -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","formats":["markdown"]}'

# Avoid requesting screenshots, rawHtml, or extract unless needed
# Extract uses LLM calls — significantly more credits
```
## Error Handling
| Issue | Cause | Solution |
|---|---|---|
| 402 Payment Required | Credits exhausted | Check balance, upgrade plan, or wait for reset |
| Credits drained by one crawl | No limit set | Always set `limit` and `maxDepth` |
| Duplicate scraping costs | Same URLs scraped daily | Implement URL-keyed caching |
| High per-page cost | Requesting all formats + extract | Use `formats: ["markdown"]` only |
| Budget overrun | No daily cap | Implement a credit budget tracker |
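Note that a 402 is not transient: retrying it burns nothing but also gains nothing until credits reset, so the caller should fail fast and alert rather than retry. Rate limits (429) and server errors, by contrast, are worth retrying with backoff. The sketch below captures that distinction; the status-code-based error shape is an assumption for illustration — inspect the actual SDK error before relying on it:

```typescript
// Hypothetical: decide how to react to a Firecrawl API failure.
// Callers would extract `status` from the thrown error (shape is an assumption).
type Action = { retry: false } | { retry: true; delayMs: number };

function handleApiError(status: number, attempt: number): Action {
  if (status === 402) return { retry: false }; // credits exhausted: stop and alert
  if (status === 429 || status >= 500) {
    // Transient: exponential backoff, capped at 60s
    return { retry: true, delayMs: Math.min(1000 * 2 ** attempt, 60_000) };
  }
  return { retry: false }; // other 4xx: fix the request, don't re-spend credits
}
```

Treating 402 as terminal keeps a misconfigured retry loop from hammering the API while the balance is already zero.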
## Cost Optimization Summary
| Technique | Credit Savings |
|---|---|
| Set crawl `limit` | Prevents 100x overages |
| Map + selective scrape | 50-90% vs blind crawl |
| Cache repeated scrapes | 50-80% reduction |
| Markdown-only format | Fastest, no extras |
| Batch scrape vs individual | Same cost, less overhead |
## Resources

### Next Steps

For reference architecture, see `firecrawl-reference-architecture`.