setup-timescaledb-hypertables


Use this skill when creating database schemas or tables for Timescale, TimescaleDB, TigerData, or Tiger Cloud, especially for time-series, IoT, metrics, events, or log data. Use it to improve the performance of any insert-heavy table.

**Trigger when user asks to:**

  • Create or design SQL schemas/tables AND Timescale/TimescaleDB/TigerData/Tiger Cloud is available
  • Set up hypertables, compression, retention policies, or continuous aggregates
  • Configure partition columns, segment_by, order_by, or chunk intervals
  • Optimize time-series database performance or storage
  • Create tables for sensors, metrics, telemetry, events, or transaction logs

**Keywords:** CREATE TABLE, hypertable, Timescale, TimescaleDB, time-series, IoT, metrics, sensor data, compression policy, continuous aggregates, columnstore, retention policy, chunk interval, segment_by, order_by

Step-by-step instructions for hypertable creation, column selection, compression policies, retention, continuous aggregates, and indexes.

Install

mkdir -p .claude/skills/setup-timescaledb-hypertables && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8137" && unzip -o skill.zip -d .claude/skills/setup-timescaledb-hypertables && rm skill.zip

Installs to .claude/skills/setup-timescaledb-hypertables

About this skill

TimescaleDB Complete Setup

Instructions for insert-heavy data patterns where data is inserted but rarely changed:

  • Time-series data (sensors, metrics, system monitoring)
  • Event logs (user events, audit trails, application logs)
  • Transaction records (orders, payments, financial transactions)
  • Sequential data (records with auto-incrementing IDs and timestamps)
  • Append-only datasets (immutable records, historical data)

Step 1: Create Hypertable

CREATE TABLE your_table_name (
    timestamp TIMESTAMPTZ NOT NULL,
    entity_id TEXT NOT NULL,          -- device_id, user_id, symbol, etc.
    category TEXT,                    -- sensor_type, event_type, asset_class, etc.
    value_1 DOUBLE PRECISION,         -- price, temperature, latency, etc.
    value_2 DOUBLE PRECISION,         -- volume, humidity, throughput, etc.
    value_3 INTEGER,                  -- count, status, level, etc.
    metadata JSONB                    -- flexible additional data
) WITH (
    tsdb.hypertable,
    tsdb.partition_column='timestamp',
    tsdb.enable_columnstore=true,     -- Disable if table has vector columns
    tsdb.segmentby='entity_id',       -- See selection guide below
    tsdb.orderby='timestamp DESC',    -- See selection guide below
    tsdb.sparse_index='minmax(value_1),minmax(value_2),minmax(value_3)'  -- See selection guide below
);

Compression Decision

  • Enable by default for insert-heavy patterns
  • Disable if table has vector type columns (pgvector) - indexes on vector columns incompatible with columnstore

Partition Column Selection

Must be time-based (TIMESTAMP/TIMESTAMPTZ/DATE) or integer (INT/BIGINT) with good temporal/sequential distribution.

Common patterns:

  • TIME-SERIES: timestamp, event_time, measured_at
  • EVENT LOGS: event_time, created_at, logged_at
  • TRANSACTIONS: created_at, transaction_time, processed_at
  • SEQUENTIAL: id (auto-increment when no timestamp), sequence_number
  • APPEND-ONLY: created_at, inserted_at, id

Less ideal: ingested_at (when data entered the system; use only if it's your primary query dimension)

Avoid: updated_at (breaks time ordering unless it's your primary query dimension)

Segment_By Column Selection

PREFER A SINGLE COLUMN - multi-column is rarely optimal. Multi-column works only for highly correlated columns (e.g., metric_name + metric_type) with sufficient row density.

Requirements:

  • Frequently used in WHERE clauses (most common filter)
  • Good row density (>100 rows per value per chunk)
  • Primary logical partition/grouping

Examples:

  • IoT: device_id
  • Finance: symbol
  • Metrics: service_name; or metric_name, metric_type (if sufficient row density)
  • Analytics: user_id if sufficient row density, otherwise session_id
  • E-commerce: product_id if sufficient row density, otherwise category_id

Row density guidelines:

  • Target: >100 rows per segment_by value within each chunk.
  • Poor: <10 rows per segment_by value per chunk → choose less granular column
  • What to do with low-density columns: prepend to order_by column list.

Query pattern drives choice:

SELECT * FROM table WHERE entity_id = 'X' AND timestamp > ...
-- ↳ segment_by: entity_id (if >100 rows per chunk)

Avoid: timestamps, unique IDs, low-density columns (<100 rows/value/chunk), columns rarely used in filtering
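
To check the >100 rows/value/chunk guideline empirically, a query along these lines can estimate density over a recent window (table name, column name, and interval are illustrative; the window should roughly match your chunk interval):

```sql
-- Estimate rows per candidate segment_by value over roughly one
-- chunk's worth of data. Names and interval are placeholders.
SELECT entity_id, count(*) AS rows_per_value
FROM your_table_name
WHERE timestamp > now() - INTERVAL '7 days'
GROUP BY entity_id
ORDER BY rows_per_value ASC
LIMIT 10;
-- Values well under 100 suggest choosing a less granular column.
```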

Order_By Column Selection

A good order_by, combined with segment_by, creates a natural time-series progression that compresses well.

Most common: timestamp DESC

Examples:

  • IoT/Finance/E-commerce: timestamp DESC
  • Metrics: metric_name, timestamp DESC (if metric_name has too low density for segment_by)
  • Analytics: user_id, timestamp DESC (if user_id has too low density for segment_by)

Alternative patterns:

  • sequence_id DESC for event streams with sequence numbers
  • timestamp DESC, event_order DESC for sub-ordering within same timestamp

Low-density column handling: If a column has <100 rows per chunk (too low for segment_by), prepend it to order_by:

  • Example: metric_name has 20 rows/chunk → use segment_by='service_name', order_by='metric_name, timestamp DESC'
  • Groups similar values together (all temperature readings, then pressure readings) for better compression
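
As a sketch of the metric_name example above applied to an existing hypertable (assuming the timescaledb.* reloption names used by recent TimescaleDB versions; verify against your version's documentation):

```sql
-- Hypothetical metrics table: service_name is dense enough to
-- segment by; metric_name is too sparse and is prepended to orderby.
ALTER TABLE metrics SET (
    timescaledb.segmentby = 'service_name',
    timescaledb.orderby   = 'metric_name, timestamp DESC'
);
```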

Good test: ordering created by (segment_by_column, order_by_column) should form a natural time-series progression. Values close to each other in the progression should be similar.

Avoid in order_by: random columns, columns with high variance between adjacent rows, columns unrelated to segment_by

Compression Sparse Index Selection

Sparse indexes enable query filtering on compressed data without decompression. They store metadata per batch (~1,000 rows) that lets queries skip batches that don't match their predicates.

Types:

  • minmax: Min/max values per batch - for range queries (>, <, BETWEEN) on numeric/temporal columns

Use minmax for: price, temperature, measurement, timestamp (range filtering)

Use for:

  • minmax for outlier detection (temperature > 90).
  • minmax for fields that are highly correlated with segmentby and orderby columns (e.g. if orderby includes created_at, minmax on updated_at is useful).

Avoid: rarely filtered columns.

IMPORTANT: NEVER index columns in segmentby or orderby. Orderby columns will always have minmax indexes without any configuration.

Configuration: The format is a comma-separated list of type_of_index(column_name).

ALTER TABLE table_name SET (
    timescaledb.sparse_index = 'minmax(value_1),minmax(value_2)'
);

Explicit configuration available since v2.22.0 (was auto-created since v2.16.0).

Chunk Time Interval (Optional)

Default: 7 days (use if volume unknown, or ask user). Adjust based on volume:

  • High frequency: 1 hour - 1 day
  • Medium: 1 day - 1 week
  • Low: 1 week - 1 month

SELECT set_chunk_time_interval('your_table_name', INTERVAL '1 day');

Good test: recent chunk indexes should fit in less than 25% of RAM.
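
One way to check that guideline is TimescaleDB's chunks_detailed_size function, which reports per-chunk index sizes (the table name is a placeholder):

```sql
-- Index size of the most recent chunks; compare the sum against
-- roughly 25% of available RAM.
SELECT chunk_name, pg_size_pretty(index_bytes) AS index_size
FROM chunks_detailed_size('your_table_name')
ORDER BY chunk_name DESC
LIMIT 7;
```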

Indexes & Primary Keys

Common index patterns - composite indexes on an id and timestamp:

CREATE INDEX idx_entity_timestamp ON your_table_name (entity_id, timestamp DESC);

Important: Only create indexes you'll actually use - each has maintenance overhead.

Primary key and unique constraints rules: Must include partition column.

Option 1: Composite PK with partition column

ALTER TABLE your_table_name ADD PRIMARY KEY (entity_id, timestamp);

Option 2: Single-column PK (only if it's the partition column)

CREATE TABLE ... (id BIGINT PRIMARY KEY, ...) WITH (tsdb.partition_column='id');

Option 3: No PK: strict uniqueness is often not required for insert-heavy patterns.

Step 2: Compression Policy (Optional)

IMPORTANT: If you used tsdb.enable_columnstore=true in Step 1, starting with TimescaleDB version 2.23 a columnstore policy is automatically created with after => INTERVAL '7 days'. You only need to call add_columnstore_policy() if you want to customize the after interval to something other than 7 days.

Set the after interval to the point when data becomes mostly immutable (some updates/backfill are OK) AND B-tree indexes are no longer needed for queries (the less common criterion).

-- In TimescaleDB 2.23 and later only needed if you want to override the default 7-day policy created by tsdb.enable_columnstore=true
-- Remove the existing auto-created policy first:
-- CALL remove_columnstore_policy('your_table_name');
-- Then add custom policy:
-- CALL add_columnstore_policy('your_table_name', after => INTERVAL '1 day');

Step 3: Retention Policy

IMPORTANT: Don't guess - ask user or comment out if unknown.

-- Example - replace with requirements or comment out
SELECT add_retention_policy('your_table_name', INTERVAL '365 days');

Step 4: Create Continuous Aggregates

Use different aggregation intervals for different uses.

Short-term (Minutes/Hours)

For up-to-the-minute dashboards on high-frequency data.

CREATE MATERIALIZED VIEW your_table_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket(INTERVAL '1 hour', timestamp) AS bucket,
    entity_id,
    category,
    COUNT(*) as record_count,
    AVG(value_1) as avg_value_1,
    MIN(value_1) as min_value_1,
    MAX(value_1) as max_value_1,
    SUM(value_2) as sum_value_2
FROM your_table_name
GROUP BY bucket, entity_id, category;

Long-term (Days/Weeks/Months)

For long-term reporting and analytics.

CREATE MATERIALIZED VIEW your_table_daily
WITH (timescaledb.continuous) AS
SELECT
    time_bucket(INTERVAL '1 day', timestamp) AS bucket,
    entity_id,
    category,
    COUNT(*) as record_count,
    AVG(value_1) as avg_value_1,
    MIN(value_1) as min_value_1,
    MAX(value_1) as max_value_1,
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY value_1) as median_value_1,
    PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY value_1) as p95_value_1,
    SUM(value_2) as sum_value_2
FROM your_table_name
GROUP BY bucket, entity_id, category;

Step 5: Aggregate Refresh Policies

Set up refresh policies based on your data freshness requirements.

start_offset: Usually NULL (refreshes all data). Exception: if you don't care about refreshing data older than X (see below). With a retention policy on the raw data: match the retention policy.

end_offset: Set beyond active update window (e.g., 15 min if data usually arrives within 10 min). Data newer than end_offset won't appear in queries without real-time aggregation. If you don't know your update window, use the size of the time_bucket in the query, but not less than 5 minutes.

schedule_interval: Set to the same value as the end_offset but not more than 1 hour.

Hourly - frequent refresh for dashboards:

SELECT add_continuous_aggregate_policy('your_table_hourly',
    start_offset => N
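
The example above is cut off; applying the guidelines in this step, a complete hourly policy might look like the following (intervals are illustrative and should be tuned to your ingest latency):

```sql
SELECT add_continuous_aggregate_policy('your_table_hourly',
    start_offset      => NULL,                  -- refresh the full history
    end_offset        => INTERVAL '15 minutes', -- just past the active update window
    schedule_interval => INTERVAL '15 minutes');
```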

---

*Content truncated.*
