groq-reference-architecture


Implement Groq reference architecture with best-practice project layout. Use when designing new Groq integrations, reviewing project structure, or establishing architecture standards for Groq applications. Trigger with phrases like "groq architecture", "groq best practices", "groq project structure", "how to organize groq", "groq layout".

Install

mkdir -p .claude/skills/groq-reference-architecture && curl -L -o skill.zip "https://mcp.directory/api/skills/download/8499" && unzip -o skill.zip -d .claude/skills/groq-reference-architecture && rm skill.zip

Installs to .claude/skills/groq-reference-architecture

About this skill

Groq Reference Architecture

Overview

Production architecture for applications built on Groq's LPU inference API. Covers model routing by latency requirements, streaming pipelines, multi-provider fallback, and the middleware layer that ties it together.

Architecture Diagram

┌──────────────────────────────────────────────────────────────┐
│                     Application Layer                         │
│  Chat UI  │  API Backend  │  Batch Processor  │  Agent       │
└─────┬─────┴──────┬────────┴────────┬──────────┴──────┬───────┘
      │            │                 │                 │
      ▼            ▼                 ▼                 ▼
┌──────────────────────────────────────────────────────────────┐
│                    Groq Service Layer                         │
│  ┌─────────────┐  ┌────────────┐  ┌─────────────────────┐   │
│  │ Model Router│  │ Middleware │  │ Fallback Chain      │   │
│  │             │  │            │  │                     │   │
│  │ speed →     │  │ Cache      │  │ Groq (primary)      │   │
│  │   8b-instant│  │ Rate Guard │  │   ↓ 429/5xx         │   │
│  │ quality →   │  │ Metrics    │  │ Groq (fallback)     │   │
│  │   70b-versa.│  │ Logging    │  │   ↓ still failing   │   │
│  │ vision →    │  │ Retry      │  │ OpenAI (backup)     │   │
│  │   llama-4   │  │            │  │   ↓ also failing    │   │
│  │ audio →     │  │            │  │ Graceful degrade    │   │
│  │   whisper   │  │            │  │                     │   │
│  └─────────────┘  └────────────┘  └─────────────────────┘   │
└──────────────────────────────────────────────────────────────┘

Project Structure

src/
├── groq/
│   ├── client.ts            # Singleton Groq client
│   ├── models.ts            # Model constants and capabilities
│   ├── router.ts            # Model selection logic
│   ├── middleware.ts        # Cache, rate limit, metrics
│   ├── fallback.ts          # Multi-provider fallback chain
│   └── types.ts             # Shared types
├── services/
│   ├── chat.ts              # Chat completion service
│   ├── transcription.ts     # Audio transcription (Whisper)
│   ├── extraction.ts        # Structured data extraction
│   └── batch.ts             # Batch processing service
└── api/
    ├── chat.ts              # HTTP endpoint
    ├── transcribe.ts        # Audio endpoint
    └── health.ts            # Health check
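
The tree references a singleton client in src/groq/client.ts that the steps below build on. A minimal sketch, assuming the key is read from the GROQ_API_KEY environment variable and that the accessor is named getGroqClient (both are illustrative choices, not part of the original layout):

// src/groq/client.ts (minimal sketch)
import Groq from "groq-sdk";

let instance: Groq | null = null;

// Lazily construct one shared client so connection reuse and configuration live in one place.
export function getGroqClient(): Groq {
  if (!instance) {
    instance = new Groq({ apiKey: process.env.GROQ_API_KEY });
  }
  return instance;
}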

Instructions

Step 1: Model Registry

// src/groq/models.ts
export interface ModelSpec {
  id: string;
  tier: "speed" | "quality" | "vision" | "audio";
  contextWindow: number;
  maxOutput: number;
  speedTokPerSec: number;
  inputCostPer1M: number;
  outputCostPer1M: number;
  capabilities: ("text" | "tools" | "json" | "vision" | "audio")[];
}

export const MODELS: Record<string, ModelSpec> = {
  "llama-3.1-8b-instant": {
    id: "llama-3.1-8b-instant",
    tier: "speed",
    contextWindow: 131_072,
    maxOutput: 8_192,
    speedTokPerSec: 560,
    inputCostPer1M: 0.05,
    outputCostPer1M: 0.08,
    capabilities: ["text", "tools", "json"],
  },
  "llama-3.3-70b-versatile": {
    id: "llama-3.3-70b-versatile",
    tier: "quality",
    contextWindow: 131_072,
    maxOutput: 32_768,
    speedTokPerSec: 280,
    inputCostPer1M: 0.59,
    outputCostPer1M: 0.79,
    capabilities: ["text", "tools", "json"],
  },
  "meta-llama/llama-4-scout-17b-16e-instruct": {
    id: "meta-llama/llama-4-scout-17b-16e-instruct",
    tier: "vision",
    contextWindow: 131_072,
    maxOutput: 8_192,
    speedTokPerSec: 460,
    inputCostPer1M: 0.11,
    outputCostPer1M: 0.34,
    capabilities: ["text", "tools", "json", "vision"],
  },
  "whisper-large-v3-turbo": {
    id: "whisper-large-v3-turbo",
    tier: "audio",
    contextWindow: 0,
    maxOutput: 0,
    speedTokPerSec: 0,
    inputCostPer1M: 0,
    outputCostPer1M: 0,
    capabilities: ["audio"],
  },
};
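
As a quick check on how the registry is meant to be consumed, the sketch below estimates per-request cost from a ModelSpec. The estimateCost helper and the token counts are illustrative, not part of the skill layout.

// Illustrative helper (not in the tree above): estimate request cost from a ModelSpec.
import { MODELS, ModelSpec } from "./models";

export function estimateCost(spec: ModelSpec, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * spec.inputCostPer1M +
         (outputTokens / 1_000_000) * spec.outputCostPer1M;
}

// Example: ~2K prompt tokens and ~500 completion tokens on the quality tier
// comes to roughly $0.0016 per request.
const quality = MODELS["llama-3.3-70b-versatile"];
console.log(estimateCost(quality, 2_000, 500));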

Step 2: Model Router

// src/groq/router.ts
import { MODELS, ModelSpec } from "./models";

interface RoutingRequest {
  maxLatencyMs?: number;
  needsVision?: boolean;
  needsTools?: boolean;
  needsJSON?: boolean;
  contextLength?: number;
  costSensitive?: boolean;
}

export function selectModel(req: RoutingRequest): ModelSpec {
  if (req.needsVision) return MODELS["meta-llama/llama-4-scout-17b-16e-instruct"];

  if (req.costSensitive || (req.maxLatencyMs && req.maxLatencyMs < 100)) {
    return MODELS["llama-3.1-8b-instant"];
  }

  if (req.needsTools || req.needsJSON) {
    return MODELS["llama-3.3-70b-versatile"];
  }

  // Default: speed tier
  return MODELS["llama-3.1-8b-instant"];
}
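
A quick usage sketch of the router; the request values are arbitrary examples:

// Example: a cost-sensitive, JSON-producing request routes to the speed tier,
// because the costSensitive/latency check runs before the tools/JSON check.
import { selectModel } from "./router";

const spec = selectModel({ needsJSON: true, costSensitive: true, maxLatencyMs: 80 });
console.log(spec.id); // "llama-3.1-8b-instant"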

Step 3: Middleware Layer

// src/groq/middleware.ts
import Groq from "groq-sdk";
import { LRUCache } from "lru-cache";
import { createHash } from "crypto";

const cache = new LRUCache<string, any>({ max: 500, ttl: 10 * 60_000 });

export async function completionWithMiddleware(
  groq: Groq,
  model: string,
  messages: any[],
  options?: { maxTokens?: number; temperature?: number; stream?: boolean }
) {
  const temp = options?.temperature ?? 0.7;

  // Only deterministic, non-streaming requests are cacheable.
  const cacheKey =
    temp === 0 && !options?.stream
      ? createHash("sha256").update(JSON.stringify({ model, messages })).digest("hex")
      : null;

  // Cache check
  if (cacheKey) {
    const cached = cache.get(cacheKey);
    if (cached) return cached;
  }

  // Metrics
  const start = performance.now();

  const response = await groq.chat.completions.create({
    model,
    messages,
    max_tokens: options?.maxTokens ?? 1024,
    temperature: temp,
    stream: options?.stream ?? false,
  });

  const latency = performance.now() - start;

  // Emit metrics
  emitMetrics({
    model,
    latencyMs: Math.round(latency),
    tokens: (response as any).usage?.total_tokens ?? 0,
    cached: false,
  });

  // Cache deterministic responses (key was computed above only when cacheable)
  if (cacheKey) {
    cache.set(cacheKey, response);
  }

  return response;
}

function emitMetrics(data: any) {
  // Plug in your metrics system: Prometheus, Datadog, etc.
  console.log(`[groq-metrics] ${JSON.stringify(data)}`);
}
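
A call-site sketch wiring the router into the middleware. The GROQ_API_KEY environment variable, the summarizeTicket helper, and the prompt are assumptions for the example:

// Example call site: route by capability, then send the request through the middleware.
import Groq from "groq-sdk";
import { selectModel } from "./router";
import { completionWithMiddleware } from "./middleware";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function summarizeTicket(ticket: string) {
  const spec = selectModel({ needsJSON: true });
  // temperature 0 makes the call deterministic, so the middleware can cache it
  return completionWithMiddleware(groq, spec.id, [
    { role: "user", content: `Summarize this ticket in one sentence: ${ticket}` },
  ], { temperature: 0 });
}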

Step 4: Fallback Chain

// src/groq/fallback.ts
import Groq from "groq-sdk";

export async function completionWithFallback(
  groq: Groq,
  messages: any[],
  options?: { primaryModel?: string; maxTokens?: number }
) {
  const primary = options?.primaryModel || "llama-3.3-70b-versatile";
  const fallbackModel = "llama-3.1-8b-instant";

  // Attempt 1: Primary model
  try {
    return await groq.chat.completions.create({
      model: primary,
      messages,
      max_tokens: options?.maxTokens ?? 1024,
    });
  } catch (err: any) {
    if (err.status !== 429 && err.status < 500) throw err;
    console.warn(`Primary model ${primary} failed (${err.status}), trying fallback`);
  }

  // Attempt 2: Fallback model (different rate limit pool)
  try {
    return await groq.chat.completions.create({
      model: fallbackModel,
      messages,
      max_tokens: options?.maxTokens ?? 1024,
    });
  } catch (err: any) {
    console.warn(`Groq fallback also failed (${err.status})`);
  }

  // Attempt 3: Graceful degradation
  return {
    choices: [{
      message: {
        role: "assistant" as const,
        content: "Service temporarily unavailable. Please try again in a moment.",
      },
      finish_reason: "stop" as const,
    }],
    model: "fallback",
    usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 },
  };
}
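
Callers can tell when the chain bottomed out at graceful degradation by the synthetic "fallback" model name. A small sketch; the ask helper is illustrative:

// Example: detect the graceful-degradation response so it is not cached or treated as a real answer.
import Groq from "groq-sdk";
import { completionWithFallback } from "./fallback";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function ask(question: string): Promise<string> {
  const result = await completionWithFallback(groq, [{ role: "user", content: question }]);
  if (result.model === "fallback") {
    console.warn("Degraded response: all attempts failed; do not cache this answer.");
  }
  return result.choices[0]?.message?.content ?? "";
}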

Step 5: Streaming Pipeline

// src/groq/streaming.ts
import Groq from "groq-sdk";

export async function* streamCompletion(
  groq: Groq,
  messages: any[],
  model = "llama-3.3-70b-versatile"
): AsyncGenerator<{ type: "token" | "done" | "error"; content?: string; error?: string }> {
  try {
    const stream = await groq.chat.completions.create({
      model,
      messages,
      stream: true,
      max_tokens: 2048,
    });

    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) yield { type: "token", content };
    }

    yield { type: "done" };
  } catch (err: any) {
    yield { type: "error", error: err.message };
  }
}
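
A sketch of consuming the generator; stdout stands in for whatever transport the application layer uses (SSE, WebSocket), and the prompt is illustrative:

// Example consumer: print tokens as they arrive.
import Groq from "groq-sdk";
import { streamCompletion } from "./streaming";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function run() {
  const messages = [{ role: "user", content: "Explain LPU inference in two sentences." }];
  for await (const event of streamCompletion(groq, messages)) {
    if (event.type === "token") process.stdout.write(event.content ?? "");
    if (event.type === "error") console.error(`\nstream failed: ${event.error}`);
  }
  process.stdout.write("\n");
}

run();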

Integration Patterns

Pattern               When to Use                       Groq Feature
Direct completion     Simple request/response           chat.completions.create
Streaming SSE         Real-time chat UI                 stream: true
Tool calling          Agent with function execution     tools parameter
JSON extraction       Structured data from text         response_format: json_object
Batch processing      High-volume document processing   Queue + rate limiting
Audio transcription   Voice input                       audio.transcriptions.create
Vision analysis       Image understanding               Llama 4 Scout/Maverick
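
For the JSON extraction row, a minimal sketch using response_format; the extractInvoice helper and the invoice fields in the prompt are illustrative:

// Example: structured extraction with response_format json_object and temperature 0.
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function extractInvoice(text: string) {
  const completion = await groq.chat.completions.create({
    model: "llama-3.3-70b-versatile",
    messages: [
      { role: "system", content: "Extract { vendor, total, due_date } from the invoice text. Respond with JSON only." },
      { role: "user", content: text },
    ],
    response_format: { type: "json_object" },
    temperature: 0,
  });
  return JSON.parse(completion.choices[0]?.message?.content ?? "{}");
}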

Error Handling

Issue                  Cause                     Solution
429 on primary model   RPM/TPM exceeded          Fall back to a different model
High latency           Wrong model tier          Route to 8b-instant for latency-critical paths
Context overflow       Input > 128K tokens       Truncate or chunk input
Vision errors          Wrong model for images    Use the full Llama 4 Scout model path
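
For the context-overflow row, a rough truncation guard; the 4-characters-per-token heuristic and the 120K budget are approximations, not Groq guarantees:

// Rough context guard: drop the oldest non-system messages until the estimate fits the budget.
interface ChatMsg { role: "system" | "user" | "assistant"; content: string }

export function truncateToBudget(messages: ChatMsg[], maxTokens = 120_000): ChatMsg[] {
  // ~4 characters per token is only an approximation.
  const approxTokens = (msgs: ChatMsg[]) =>
    msgs.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);

  const result = [...messages];
  while (approxTokens(result) > maxTokens) {
    const i = result.findIndex((m) => m.role !== "system");
    if (i === -1) break; // only system messages left; nothing safe to drop
    result.splice(i, 1);
  }
  return result;
}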

Resources

Next Steps

For multi-environment deployment, see groq-multi-env-setup.
