pgvector-semantic-search
Use this skill for setting up vector similarity search with pgvector for AI/ML embeddings, RAG applications, or semantic search.

**Trigger when user asks to:**

- Store or search vector embeddings in PostgreSQL
- Set up semantic search, similarity search, or nearest neighbor search
- Create HNSW or IVFFlat indexes for vectors
- Implement RAG (Retrieval Augmented Generation) with PostgreSQL
- Optimize pgvector performance, recall, or memory usage
- Use binary quantization for large vector datasets

**Keywords:** pgvector, embeddings, semantic search, vector similarity, HNSW, IVFFlat, halfvec, cosine distance, nearest neighbor, RAG, LLM, AI search

Covers: halfvec storage, HNSW index configuration (m, ef_construction, ef_search), quantization strategies, filtered search, bulk loading, and performance tuning.
Install
mkdir -p .claude/skills/pgvector-semantic-search && curl -L -o skill.zip "https://mcp.directory/api/skills/download/1578" && unzip -o skill.zip -d .claude/skills/pgvector-semantic-search && rm skill.zip

Installs to .claude/skills/pgvector-semantic-search
About this skill
pgvector for Semantic Search
Semantic search finds content by meaning rather than exact keywords. An embedding model converts text into high-dimensional vectors, where similar meanings map to nearby points. pgvector stores these vectors in PostgreSQL and uses approximate nearest neighbor (ANN) indexes to find the closest matches quickly—scaling to millions of rows without leaving the database. Store your text alongside its embedding, then query by converting your search text to a vector and returning the rows with the smallest distance.
This guide covers pgvector setup and tuning—not embedding model selection or text chunking, which significantly affect search quality. Requires pgvector 0.8.0+ for all features (halfvec, binary_quantize, iterative scan).
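To verify what is installed before relying on these features, a plain catalog query works:

-- The features above (halfvec, binary_quantize, iterative scan) need 0.8.0+
SELECT extversion FROM pg_extension WHERE extname = 'vector';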
Golden Path (Default Setup)
Use this configuration unless you have a specific reason not to.
- Embedding column data type: `halfvec(N)`, where `N` is your embedding dimension (must match everywhere). Examples use 1536; replace with your dimension `N`.
- Distance: cosine (`<=>`)
- Index: HNSW (`m = 16`, `ef_construction = 64`). Use `halfvec_cosine_ops` and query with `<=>`.
- Query-time recall: `SET hnsw.ef_search = 100` (a good starting point from published benchmarks; increase for higher recall at higher latency)
- Query pattern: `ORDER BY embedding <=> $1::halfvec(N) LIMIT k`
This setup provides a strong speed–recall tradeoff for most text-embedding workloads.
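Condensed into SQL, a sketch of this default setup (the items table definition appears under Standard Pattern below; 1536 is illustrative):

CREATE EXTENSION IF NOT EXISTS vector;

CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops)
WITH (m = 16, ef_construction = 64);

SET hnsw.ef_search = 100;
SELECT id, contents FROM items
ORDER BY embedding <=> $1::halfvec(1536)
LIMIT 10;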
Core Rules
- Enable the extension in each database: `CREATE EXTENSION IF NOT EXISTS vector;`
- Use HNSW indexes by default: superior speed-recall tradeoff, can be created on empty tables, no training step required. Only consider IVFFlat for write-heavy or memory-bound workloads.
- Use `halfvec` by default: store and index as `halfvec` for 50% smaller storage and indexes with minimal recall loss.
- Index after bulk loading initial data for best build performance (see the sketch after this list).
- Create indexes concurrently in production: `CREATE INDEX CONCURRENTLY ...`
- Use cosine distance by default (`<=>`): for non-normalized embeddings, use cosine. For unit-normalized embeddings, cosine and inner product yield identical rankings; default to cosine.
- Match query operator to index ops: an index built with `halfvec_cosine_ops` requires `<=>` in queries; `halfvec_l2_ops` requires `<->`. Mismatched operators won't use the index.
- Always cast query vectors explicitly (`$1::halfvec(N)`) to avoid implicit-cast failures in prepared statements.
- Always use the same embedding model for data and queries; similarity search only works when the same model generates both sets of vectors.
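A sketch of the bulk-load-then-index flow implied by these rules (the file path and 4GB value are illustrative; size memory to your instance):

-- 1. Bulk load first; COPY is the fastest path for initial data
--    (from psql, use \copy for a client-side file)
COPY items (contents, embedding) FROM '/path/to/embeddings.csv' WITH (FORMAT csv);

-- 2. Then build the index; more maintenance memory speeds up HNSW builds,
--    and CONCURRENTLY avoids blocking writes on a live table
SET maintenance_work_mem = '4GB';
CREATE INDEX CONCURRENTLY items_embedding_idx
    ON items USING hnsw (embedding halfvec_cosine_ops);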
Type Rules
- Store embeddings as `halfvec(N)`
- Cast query vectors to `halfvec(N)`
- Store binary quantized vectors as `bit(N)` in a generated column
- Do not mix `vector`/`halfvec`/`bit` without explicit casts
- Never call `binary_quantize()` on table columns inside `ORDER BY`; store the result in a generated column instead
- Dimensions must match: a `halfvec(1536)` column requires query vectors cast as `::halfvec(1536)` (see the illustration after this list).
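A minimal illustration of these rules; the commented-out query is the anti-pattern that defeats the index:

-- Correct: explicit cast, dimensions match the halfvec(1536) column
SELECT id FROM items
ORDER BY embedding <=> $1::halfvec(1536)
LIMIT 10;

-- Anti-pattern: quantizing the column per row at query time cannot use the
-- HNSW index; store binary_quantize(embedding) in a generated column instead
-- ORDER BY binary_quantize(embedding)::bit(1536) <~> binary_quantize($1::halfvec(1536))::bit(1536)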
Standard Pattern
-- Store and index as halfvec
CREATE TABLE items (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
contents TEXT NOT NULL,
embedding halfvec(1536) NOT NULL -- NOT NULL means the embedding must be generated before insert (no async backfill)
);
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);
-- Query: returns 10 closest items. $1 is the embedding of your search text.
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
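Inserts follow the same explicit-cast rule; a sketch with the embedding passed as a bound parameter:

-- $1 = document text, $2 = 1536-float array from the same embedding model
INSERT INTO items (contents, embedding)
VALUES ($1, $2::halfvec(1536));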
For other distance operators (L2, inner product, etc.), see the pgvector README.
HNSW Index
The recommended index type. Creates a multilayer navigable graph with superior speed-recall tradeoff. Can be created on empty tables (no training step required).
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);
-- With tuning parameters
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops) WITH (m = 16, ef_construction = 64);
HNSW Parameters
| Parameter | Default | Description |
|---|---|---|
| `m` | 16 | Max connections per layer. Higher = better recall, more memory |
| `ef_construction` | 64 | Build-time candidate list. Higher = better graph quality, slower build |
| `hnsw.ef_search` | 40 | Query-time candidate list. Higher = better recall, slower queries. Should be ≥ LIMIT. |
ef_search tuning (rough guidelines—actual results vary by dataset):
| ef_search | Approx Recall | Relative Speed |
|---|---|---|
| 40 | lower (~95% on some benchmarks) | 1x (baseline) |
| 100 | higher | ~2x slower |
| 200 | very high | ~4x slower |
| 400 | near-exact | ~8x slower |
-- Set search parameter for session
SET hnsw.ef_search = 100;
-- Set for single query
BEGIN;
SET LOCAL hnsw.ef_search = 100;
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
COMMIT;
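To confirm a query actually uses the HNSW index, check the plan with your query vector bound to $1 (expect an Index Scan node; a Seq Scan usually means an operator/ops mismatch or missing index):

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, contents FROM items
ORDER BY embedding <=> $1::halfvec(1536)
LIMIT 10;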
IVFFlat Index (Generally Not Recommended)
Default to HNSW. Use IVFFlat only when HNSW’s operational costs matter more than peak recall.
Choose IVFFlat if:
- Write-heavy or constantly changing data AND you're willing to rebuild the index frequently
- You rebuild indexes often and want predictable build time and memory usage
- Memory is tight and you cannot keep an HNSW graph mostly resident
- Data is partitioned or tiered, and this index lives on colder partitions
Avoid IVFFlat if you need:
- highest recall at low latency
- minimal tuning
- a “set and forget” index
Notes:
- IVFFlat requires data to exist before index creation.
- Recall depends on `lists` and `ivfflat.probes`; higher probes = better recall, slower queries.
Starter config:
CREATE INDEX ON items
USING ivfflat (embedding halfvec_cosine_ops)
WITH (lists = 1000);
SET ivfflat.probes = 10;
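The pgvector README suggests sizing `lists` at roughly rows/1000 up to ~1M rows and sqrt(rows) beyond, with `probes` starting near sqrt(lists); a sketch for a hypothetical 4M-row table:

-- lists ≈ sqrt(4,000,000) = 2000 for a ~4M-row table
CREATE INDEX ON items
USING ivfflat (embedding halfvec_cosine_ops)
WITH (lists = 2000);

-- probes ≈ sqrt(2000) ≈ 45; raise for recall, lower for speed
SET ivfflat.probes = 45;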
Quantization Strategies
- Quantization is a memory decision, not a recall decision.
- Use `halfvec` by default for storage and indexing.
- Estimate HNSW index footprint as ~4–6 KB per 1536-dim `halfvec` vector at m=16 (order-of-magnitude); 3072-dim is ~2×, and m=32 roughly doubles HNSW link/graph overhead.
- If p95/p99 latency rises while CPU is mostly idle, the HNSW index is likely no longer resident in memory.
- If `halfvec` doesn't fit, use binary quantization + re-ranking (see the size check after this list).
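To compare the footprint estimate against reality, check on-disk sizes directly (the index name is a placeholder for whatever `\d items` reports):

SELECT pg_size_pretty(pg_relation_size('items_embedding_idx')) AS index_size,
       pg_size_pretty(pg_table_size('items'))                  AS table_size;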
Guidelines for 1536-dim vectors
Approximate halfvec capacity at m=16, 1536-dim (assumes RAM mostly available for index caching):
| RAM | Approx max halfvec vectors |
|---|---|
| 16 GB | ~2–3M vectors |
| 32 GB | ~4–6M vectors |
| 64 GB | ~8–12M vectors |
| 128 GB | ~16–25M vectors |
For 3072-dim embeddings, divide these numbers by ~2.
For m=32, also divide capacity by ~2.
If the index cannot fit in memory at this scale, use binary quantization.
These are ranges, not guarantees. Validate by monitoring cache residency and p95/p99 latency under load.
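One way to spot-check residency uses the pg_buffercache extension; note this counts only PostgreSQL's shared_buffers, not the OS page cache, and the index name is again a placeholder:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Bytes of the HNSW index currently cached in shared_buffers
SELECT count(*) * current_setting('block_size')::int AS cached_bytes
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('items_embedding_idx'::regclass);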
Binary Quantization (For Very Large Datasets)
32× memory reduction. Use with re-ranking for acceptable recall.
-- Table with generated column for binary quantization
CREATE TABLE items (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
contents TEXT NOT NULL,
embedding halfvec(1536) NOT NULL,
embedding_bq bit(1536) GENERATED ALWAYS AS (binary_quantize(embedding)::bit(1536)) STORED
);
CREATE INDEX ON items USING hnsw (embedding_bq bit_hamming_ops);
-- Query with re-ranking for better recall
-- ef_search must be >= inner LIMIT to retrieve enough candidates
SET hnsw.ef_search = 800;
WITH q AS (
SELECT binary_quantize($1::halfvec(1536))::bit(1536) AS qb
)
SELECT *
FROM (
SELECT i.id, i.contents, i.embedding
FROM items i, q
ORDER BY i.embedding_bq <~> q.qb -- computes binary distance using index
LIMIT 800
) candidates
ORDER BY candidates.embedding <=> $1::halfvec(1536) -- computes halfvec distance (no index), more accurate than binary
LIMIT 10;
The 80× oversampling ratio (800 candidates for 10 results) is a reasonable starting point. Binary quantization loses precision, so more candidates are needed to find true nearest neighbors during re-ranking. Increase if recall is insufficient; decrease if re-ranking latency is too high.
Performance by Dataset Size
| Scale | Vectors | Config | Notes |
|---|---|---|---|
| Small | <100K | Defaults | Index optional but improves tail latency |
| Medium | 100K–5M | Defaults | Monitor p95 latency; most common production range |
| Large | 5M+ | ef_construction=100+ | Memory residency critical |
| Very Large | 10M+ | Binary quantization + re-ranking | Add RAM or partition first if possible |
Tune ef_search first for recall; only increase m if recall plateaus and memory allows. Under concurrency, tail latency spikes when the index doesn't fit in memory. Binary quantization is an escape hatch—prefer adding RAM or partitioning first.
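If a single index outgrows memory, a sketch of the partitioning route follows (list-partitioning by a tenant column is illustrative, not part of the original schema; each partition's HNSW index stays small enough to remain resident):

CREATE TABLE items (
    id BIGINT NOT NULL,          -- id generation omitted for brevity
    tenant_id INT NOT NULL,
    contents TEXT NOT NULL,
    embedding halfvec(1536) NOT NULL
) PARTITION BY LIST (tenant_id);

CREATE TABLE items_tenant_1 PARTITION OF items FOR VALUES IN (1);

-- Each partition gets its own, smaller HNSW index
CREATE INDEX ON items_tenant_1 USING hnsw (embedding halfvec_cosine_ops);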
Filtering Best Practices
Filtered vector search requires care. Depending on filter selectivity and query shape, filters can cause early termination (too few rows, missing results) or increase work (latency).
Iterative scan (recommended when filters are selective)
By default, HNSW may stop early when a WHERE clause is present, which can lead to fewer results than expected. Iterative scan allows HNSW to continue searching until enough filtered rows are found.
Enable iterative scan when filters materially reduce the result set.
-- Enable iterative scans for filtered queries
SET hnsw.iterative_scan = relaxed_order;
SELECT id, contents
FROM items
WHERE category_id = 123
ORDER BY embedding <=> $1::halfvec(1536)
LIMIT 10;
If results are still sparse, increase the scan budget:
SET hnsw.max_scan_tuples = 50000;
Trade-off: increasing hnsw.max_scan_tuples improves recall but can significantly increase latency.
When iterative scan is not needed:
- The filter matches a large portion of the table (low selectivity)
- You are prefiltering via a B-tree index (see the partial-index sketch below)
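For a small, known set of hot filter values, the pgvector docs also suggest partial indexes, so each filtered search hits an index built only over matching rows (`category_id = 123` is illustrative):

-- One HNSW index per hot filter value; the planner uses it when the
-- WHERE clause matches the index predicate
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops)
WHERE (category_id = 123);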