exa-data-handling
Implement Exa PII handling, data retention, and GDPR/CCPA compliance patterns. Use when handling sensitive data, implementing data redaction, configuring retention policies, or ensuring compliance with privacy regulations for Exa integrations. Trigger with phrases like "exa data", "exa PII", "exa GDPR", "exa data retention", "exa privacy", "exa CCPA".
Install
mkdir -p .claude/skills/exa-data-handling && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5388" && unzip -o skill.zip -d .claude/skills/exa-data-handling && rm skill.zip
Installs to .claude/skills/exa-data-handling
Exa Data Handling
Overview
Manage search result data from Exa's neural search API. Covers content extraction scope control (text vs highlights vs summary), result caching with TTL, citation deduplication, token budget management for LLM context windows, and structured summary extraction.
Prerequisites
- exa-js SDK installed and configured
- Optional: lru-cache for in-memory caching, ioredis for Redis
- Understanding of Exa content options (text, highlights, summary)
Instructions
Step 1: Control Content Extraction Scope
import Exa from "exa-js";
const exa = new Exa(process.env.EXA_API_KEY);
// Tier 1: Metadata only (cheapest, fastest)
async function searchMetadataOnly(query: string) {
return exa.search(query, {
type: "auto",
numResults: 10,
// No content options — returns URLs, titles, scores only
});
}
// Tier 2: Highlights only (balanced cost/value)
async function searchWithHighlights(query: string) {
return exa.searchAndContents(query, {
numResults: 10,
highlights: {
maxCharacters: 500,
query: query, // focus highlights on the original query
},
});
}
// Tier 3: Full text with character limit
async function searchWithText(query: string, maxChars = 2000) {
return exa.searchAndContents(query, {
numResults: 5,
text: { maxCharacters: maxChars },
highlights: { maxCharacters: 300 },
});
}
// Tier 4: Structured summary (LLM-generated per result)
async function searchWithSummary(query: string) {
return exa.searchAndContents(query, {
numResults: 5,
summary: { query: query },
// summary returns a concise LLM-generated summary per result
});
}
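The four tiers above differ only in which content options they pass. As a sketch, the choice can be centralized in one helper (the tier numbering and the helper name are our own for illustration, not part of the Exa SDK):

```typescript
// Hypothetical helper (not part of exa-js): map a cost tier to the
// content-options object passed to search / searchAndContents.
type ContentOptions = Record<string, unknown>;

function contentOptionsForTier(tier: 1 | 2 | 3 | 4, query: string): ContentOptions {
  switch (tier) {
    case 1: // metadata only — pass no content options
      return {};
    case 2: // highlights only
      return { highlights: { maxCharacters: 500, query } };
    case 3: // capped full text plus short highlights
      return { text: { maxCharacters: 2000 }, highlights: { maxCharacters: 300 } };
    case 4: // LLM-generated summary per result
      return { summary: { query } };
  }
}
```

Usage would then look like `exa.searchAndContents(query, { numResults: 10, ...contentOptionsForTier(2, query) })`, keeping the cost decision in one place.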
Step 2: Result Caching with TTL
import { LRUCache } from "lru-cache";
import { createHash } from "crypto";
const searchCache = new LRUCache<string, any>({
max: 500,
ttl: 1000 * 60 * 60, // 1 hour default
});
function cacheKey(query: string, options: any): string {
return createHash("sha256")
.update(JSON.stringify({ query, ...options }))
.digest("hex");
}
async function cachedSearch(query: string, options: any = {}, ttlMs?: number) {
const key = cacheKey(query, options);
const cached = searchCache.get(key);
if (cached) return cached;
const results = await exa.searchAndContents(query, options);
searchCache.set(key, results, { ttl: ttlMs });
return results;
}
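One caveat with the key above: JSON.stringify is property-order-sensitive, so the same options built in a different key order would hash differently and miss the cache. A minimal sketch of an order-insensitive variant (it sorts top-level keys only; deeply nested option objects would need recursive normalization):

```typescript
import { createHash } from "crypto";

// Sort top-level option keys before hashing so { a, b } and { b, a }
// produce the same cache key. Nested objects are not normalized here.
function stableCacheKey(query: string, options: Record<string, unknown>): string {
  const sortedEntries = Object.keys(options)
    .sort()
    .map((k) => [k, options[k]]);
  return createHash("sha256")
    .update(JSON.stringify([query, sortedEntries]))
    .digest("hex");
}
```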
Step 3: Token Budget Management for RAG
interface ProcessedResult {
url: string;
title: string;
score: number;
snippet: string;
tokenEstimate: number;
}
function processForRAG(results: any[], maxSnippetLength = 500): ProcessedResult[] {
return results.map(r => {
const snippet = (r.text || r.highlights?.join(" ") || r.summary || "")
.slice(0, maxSnippetLength);
return {
url: r.url,
title: r.title || "Untitled",
score: r.score,
snippet,
tokenEstimate: Math.ceil(snippet.length / 4),
};
});
}
function fitToTokenBudget(results: ProcessedResult[], maxTokens: number) {
const sorted = [...results].sort((a, b) => b.score - a.score);
const selected: ProcessedResult[] = [];
let tokenCount = 0;
for (const result of sorted) {
if (tokenCount + result.tokenEstimate > maxTokens) break;
selected.push(result);
tokenCount += result.tokenEstimate;
}
return { selected, tokenCount, dropped: sorted.length - selected.length };
}
// Usage: fit search results into a 4K token context window
const results = await exa.searchAndContents("query", {
numResults: 15,
text: { maxCharacters: 1500 },
});
const processed = processForRAG(results.results);
const { selected, tokenCount } = fitToTokenBudget(processed, 4000);
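Because fitToTokenBudget is pure, its behavior is easy to check with mock results and no API call; the interface and function are repeated here so the snippet stands alone:

```typescript
interface ProcessedResult {
  url: string;
  title: string;
  score: number;
  snippet: string;
  tokenEstimate: number;
}

function fitToTokenBudget(results: ProcessedResult[], maxTokens: number) {
  const sorted = [...results].sort((a, b) => b.score - a.score);
  const selected: ProcessedResult[] = [];
  let tokenCount = 0;
  for (const result of sorted) {
    if (tokenCount + result.tokenEstimate > maxTokens) break;
    selected.push(result);
    tokenCount += result.tokenEstimate;
  }
  return { selected, tokenCount, dropped: sorted.length - selected.length };
}

// Three mock results: a budget of 300 tokens fits the two highest-scoring.
const mock: ProcessedResult[] = [
  { url: "https://a.example", title: "A", score: 0.9, snippet: "", tokenEstimate: 150 },
  { url: "https://b.example", title: "B", score: 0.8, snippet: "", tokenEstimate: 140 },
  { url: "https://c.example", title: "C", score: 0.7, snippet: "", tokenEstimate: 50 },
];
const { selected, tokenCount, dropped } = fitToTokenBudget(mock, 300);
// selected: A and B (290 tokens); C would push the total to 340, so it is dropped
```

Note the design choice: the loop breaks at the first result that overflows the budget rather than continuing to smaller ones. Replacing break with continue would pack the budget tighter, at the cost of including lower-scored results ahead of skipped higher-scored ones.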
Step 4: Citation Deduplication
function deduplicateResults(results: any[]): any[] {
const seen = new Map<string, any>();
for (const result of results) {
const domain = new URL(result.url).hostname;
const key = `${domain}:${result.title}`;
if (!seen.has(key) || result.score > seen.get(key).score) {
seen.set(key, result);
}
}
return Array.from(seen.values());
}
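A quick check of the dedup logic against mock syndicated results (the function is repeated so the snippet runs standalone):

```typescript
function deduplicateResults(results: any[]): any[] {
  const seen = new Map<string, any>();
  for (const result of results) {
    const domain = new URL(result.url).hostname;
    const key = `${domain}:${result.title}`;
    if (!seen.has(key) || result.score > seen.get(key).score) {
      seen.set(key, result);
    }
  }
  return Array.from(seen.values());
}

// Two copies of the same title on one domain collapse to the
// higher-scoring one; the other domain's copy survives.
const deduped = deduplicateResults([
  { url: "https://news.example/a", title: "Launch", score: 0.7 },
  { url: "https://news.example/b", title: "Launch", score: 0.9 },
  { url: "https://blog.example/c", title: "Launch", score: 0.5 },
]);
// deduped has 2 entries; the news.example copy kept has score 0.9
```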
Step 5: Structured Summary Extraction
// Use summary.schema for structured data extraction
const results = await exa.searchAndContents(
"YC-backed AI startups Series A 2025",
{
numResults: 10,
category: "company",
summary: {
query: "company name, funding amount, what they do",
// schema can define JSON structure for the summary output
},
}
);
// Each result.summary contains a structured summary
for (const r of results.results) {
console.log(`${r.title}: ${r.summary}`);
}
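As the comment above notes, a schema can shape the summary output; in that case each result.summary arrives as a JSON string. Since summaries are model-generated, it pays to parse defensively. A sketch against a mock payload — the CompanySummary shape is our own example type, not an Exa SDK type:

```typescript
// Hypothetical shape matching the example query — not an Exa SDK type.
interface CompanySummary {
  name: string;
  funding: string;
  description: string;
}

// Treat malformed JSON or missing fields as "no structured summary"
// rather than throwing mid-pipeline.
function parseCompanySummary(raw: string): CompanySummary | null {
  try {
    const obj = JSON.parse(raw);
    if (typeof obj?.name !== "string") return null;
    return obj as CompanySummary;
  } catch {
    return null;
  }
}

// Mock payload standing in for result.summary
const mock = '{"name":"Acme AI","funding":"$12M Series A","description":"Sells AI agents"}';
```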
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Large response payload | Full text for many URLs | Use highlights or limit maxCharacters |
| Cache stale for news | Default TTL too long | Use 5-minute TTL for time-sensitive queries |
| Duplicate sources | Same article syndicated | Deduplicate by domain + title |
| Token budget exceeded | Too much context for LLM | Use fitToTokenBudget to trim by score |
| Missing .text field | Content not requested | Use searchAndContents, not search |
Examples
RAG-Optimized Search Pipeline
async function ragSearch(query: string, tokenBudget = 4000) {
const results = await cachedSearch(query, {
numResults: 15,
type: "neural",
text: { maxCharacters: 1500 },
highlights: { maxCharacters: 300, query },
});
const deduped = deduplicateResults(results.results);
const processed = processForRAG(deduped);
const { selected, tokenCount } = fitToTokenBudget(processed, tokenBudget);
return {
context: selected.map((r, i) =>
`[${i + 1}] ${r.title} (${r.url})\n${r.snippet}`
).join("\n\n---\n\n"),
sources: selected.map(r => ({ title: r.title, url: r.url })),
tokenCount,
};
}
Next Steps
For rate limit handling, see exa-rate-limits. For cost optimization, see exa-cost-tuning.