groq-deploy-integration
Deploy Groq integrations to Vercel, Fly.io, and Cloud Run platforms. Use when deploying Groq-powered applications to production, configuring platform-specific secrets, or setting up deployment pipelines. Trigger with phrases like "deploy groq", "groq Vercel", "groq production deploy", "groq Cloud Run", "groq Fly.io".
Install
mkdir -p .claude/skills/groq-deploy-integration && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5978" && unzip -o skill.zip -d .claude/skills/groq-deploy-integration && rm skill.zip
Installs to .claude/skills/groq-deploy-integration
About this skill
Groq Deploy Integration
Overview
Deploy applications using Groq's inference API to Vercel Edge, Cloud Run, Docker, and other platforms. Groq's sub-200ms latency makes it ideal for edge deployments and real-time applications.
Prerequisites
- Groq API key stored in GROQ_API_KEY
- Application using the groq-sdk package
- Platform CLI installed (vercel, docker, or gcloud)
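Before deploying, a quick preflight can confirm these prerequisites are in place. This is a sketch, not part of the skill itself: it only warns about missing CLIs, since each one is needed only for its own platform.

```shell
#!/usr/bin/env bash
# Preflight sketch for the prerequisites above; warns, never hard-fails.
ok=true
if [ -z "${GROQ_API_KEY:-}" ]; then
  echo "missing: GROQ_API_KEY"
  ok=false
fi
for cli in vercel docker gcloud; do
  if ! command -v "$cli" >/dev/null 2>&1; then
    echo "note: $cli CLI not found (only needed for its platform)"
  fi
done
if [ "$ok" = true ]; then echo "preflight OK"; else echo "preflight incomplete"; fi
```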
Instructions
Step 1: Vercel Edge Function
// app/api/chat/route.ts (Next.js App Router)
import Groq from "groq-sdk";

export const runtime = "edge";

export async function POST(req: Request) {
  const groq = new Groq({ apiKey: process.env.GROQ_API_KEY! });
  const { messages, stream: useStream } = await req.json();

  if (useStream) {
    const stream = await groq.chat.completions.create({
      model: "llama-3.3-70b-versatile",
      messages,
      stream: true,
      max_tokens: 2048,
    });

    const encoder = new TextEncoder();
    const readable = new ReadableStream({
      async start(controller) {
        for await (const chunk of stream) {
          const content = chunk.choices[0]?.delta?.content;
          if (content) {
            controller.enqueue(
              encoder.encode(`data: ${JSON.stringify({ content })}\n\n`)
            );
          }
        }
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      },
    });

    return new Response(readable, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      },
    });
  }

  const completion = await groq.chat.completions.create({
    model: "llama-3.3-70b-versatile",
    messages,
    max_tokens: 2048,
  });
  return Response.json(completion);
}
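On the client, the route's stream arrives as server-sent events. A minimal parser for that wire format might look like the sketch below: the `data: {json}` / `data: [DONE]` framing matches the handler above, but the function itself is illustrative, not part of any SDK.

```typescript
// Parses a buffered slice of the SSE stream produced by the route above.
// Returns extracted content tokens, whether [DONE] was seen, and any
// incomplete trailing event to prepend to the next network chunk.
function parseSSEBuffer(buffer: string): { contents: string[]; done: boolean; rest: string } {
  const contents: string[] = [];
  let done = false;
  const events = buffer.split("\n\n");
  const rest = events.pop() ?? ""; // last piece may be an incomplete event
  for (const event of events) {
    if (!event.startsWith("data: ")) continue;
    const payload = event.slice("data: ".length);
    if (payload === "[DONE]") {
      done = true;
      continue;
    }
    try {
      const { content } = JSON.parse(payload);
      if (content) contents.push(content);
    } catch {
      // skip malformed events rather than failing the whole stream
    }
  }
  return { contents, done, rest };
}
```

Feed each decoded network chunk, prefixed with the previous call's `rest`, into the parser until `done` is true.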
Step 2: Vercel Deployment
set -euo pipefail
# Set secret
vercel env add GROQ_API_KEY production
# Deploy
vercel --prod
Step 3: Docker Container
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 3000
# node:20-slim does not ship curl, so use Node's built-in fetch for the health check
HEALTHCHECK --interval=30s --timeout=5s CMD node -e "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"
CMD ["node", "dist/index.js"]
Step 4: Cloud Run Deployment
set -euo pipefail
# Store API key in Secret Manager
echo -n "$GROQ_API_KEY" | gcloud secrets create groq-api-key --data-file=-
# Deploy with streaming support
gcloud run deploy groq-api \
--source . \
--region us-central1 \
--set-secrets=GROQ_API_KEY=groq-api-key:latest \
--min-instances=1 \
--max-instances=10 \
--cpu=1 --memory=512Mi \
--allow-unauthenticated \
--timeout=60s
Step 5: Express Server with Health Check
import express from "express";
import Groq from "groq-sdk";

const app = express();
const groq = new Groq(); // reads GROQ_API_KEY from the environment
app.use(express.json());

// Health check -- uses the cheapest model with minimal tokens
app.get("/health", async (_req, res) => {
  try {
    const start = performance.now();
    await groq.chat.completions.create({
      model: "llama-3.1-8b-instant",
      messages: [{ role: "user", content: "OK" }],
      max_tokens: 1,
    });
    res.json({
      status: "healthy",
      groq: { connected: true, latencyMs: Math.round(performance.now() - start) },
    });
  } catch (err: any) {
    res.status(503).json({
      status: "unhealthy",
      groq: { connected: false, error: err.message },
    });
  }
});

// Chat endpoint with streaming
app.post("/api/chat", async (req, res) => {
  const { messages, model = "llama-3.3-70b-versatile" } = req.body;
  try {
    if (req.headers.accept === "text/event-stream") {
      res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      });
      const stream = await groq.chat.completions.create({
        model,
        messages,
        stream: true,
        max_tokens: 2048,
      });
      for await (const chunk of stream) {
        const content = chunk.choices[0]?.delta?.content;
        if (content) {
          res.write(`data: ${JSON.stringify({ content })}\n\n`);
        }
      }
      res.write("data: [DONE]\n\n");
      res.end();
    } else {
      const completion = await groq.chat.completions.create({
        model,
        messages,
        max_tokens: 2048,
      });
      res.json(completion);
    }
  } catch (err: any) {
    // Express 4 does not catch async errors; report instead of leaving the request hanging
    if (res.headersSent) {
      res.end();
    } else {
      res.status(502).json({ error: err.message });
    }
  }
});

app.listen(3000, () => console.log("Groq API server on :3000"));
Step 6: Vercel AI SDK Integration
// Using @ai-sdk/groq with the Vercel AI SDK
import { createGroq } from "@ai-sdk/groq";
import { streamText } from "ai";

const groq = createGroq({ apiKey: process.env.GROQ_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: groq("llama-3.3-70b-versatile"),
    messages,
  });
  return result.toDataStreamResponse();
}
Environment Variable Config
| Platform | Command |
|---|---|
| Vercel | vercel env add GROQ_API_KEY production |
| Cloud Run | echo -n "$GROQ_API_KEY" | gcloud secrets create groq-api-key --data-file=- |
| Fly.io | fly secrets set GROQ_API_KEY=gsk_... |
| Railway | railway variables set GROQ_API_KEY=gsk_... |
| Docker | -e GROQ_API_KEY=gsk_... or Docker secrets |
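Whichever platform injects the secret, a startup guard catches a missing key before the first request does. A small sketch (the helper name is ours, not from any SDK):

```typescript
// Fail fast at startup when a required secret was not injected by the platform.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set -- check the platform's secret configuration`);
  }
  return value;
}

// Usage at startup:
// const groq = new Groq({ apiKey: requireEnv("GROQ_API_KEY") });
```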
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Rate limited (429) | Too many requests | Implement request queuing with backoff |
| Edge timeout | Response > 25s | Use streaming for long completions |
| Model unavailable | Capacity or deprecation | Fall back to llama-3.1-8b-instant |
| Cold start latency | Serverless function init | Set min-instances=1 on Cloud Run |
| API key not found | Secret not configured | Check platform secret config |
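For the 429 row above, a minimal exponential-backoff wrapper can sit around any groq-sdk call. This is a sketch of the pattern, not Groq's own guidance: the retried status codes, attempt count, and delays are all assumptions to tune.

```typescript
// Retry sketch for transient errors (429 rate limits, 503 capacity).
// Wraps any async call; non-transient errors are rethrown immediately.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseMs = 500 } = {}
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      lastErr = err;
      const transient = err?.status === 429 || err?.status === 503;
      if (!transient || attempt === attempts - 1) throw err;
      const delayMs = baseMs * 2 ** attempt + Math.random() * baseMs; // backoff + jitter
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr; // unreachable; satisfies the return type
}

// Usage: await withRetry(() => groq.chat.completions.create({ model, messages }));
```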
Next Steps
For multi-environment setup, see groq-multi-env-setup.