# vastai-deploy-integration

Deploy Vast.ai integrations to Vercel, Fly.io, and Cloud Run platforms. Use when deploying Vast.ai-powered applications to production, configuring platform-specific secrets, or setting up deployment pipelines. Trigger with phrases like "deploy vastai", "vastai Vercel", "vastai production deploy", "vastai Cloud Run", "vastai Fly.io".
## Install

```shell
mkdir -p .claude/skills/vastai-deploy-integration \
  && curl -L -o skill.zip "https://mcp.directory/api/skills/download/9307" \
  && unzip -o skill.zip -d .claude/skills/vastai-deploy-integration \
  && rm skill.zip
```

Installs to `.claude/skills/vastai-deploy-integration`.
## Overview
Deploy ML training jobs and inference services on Vast.ai GPU cloud. Covers Docker image optimization, automated provisioning scripts, data transfer strategies, and deployment automation.
## Prerequisites
- Vast.ai CLI authenticated
- Docker image published to a registry
- Training/inference code tested locally
## Instructions

### Step 1: Optimized Docker Image
```dockerfile
# Dockerfile.vastai — optimized for fast pulls on Vast.ai
FROM pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime

# Install dependencies in a single layer
COPY requirements.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements.txt && rm /tmp/requirements.txt

# Copy application code
COPY src/ /workspace/src/
COPY scripts/ /workspace/scripts/

WORKDIR /workspace
CMD ["python", "src/train.py"]
```

Build and push the image:

```shell
docker build -t ghcr.io/yourorg/training:v1 -f Dockerfile.vastai .
docker push ghcr.io/yourorg/training:v1
```
### Step 2: Automated Deployment Script
```python
#!/usr/bin/env python3
"""deploy.py — Automated Vast.ai deployment with monitoring."""
import argparse
import json
import subprocess
import sys
import time


def deploy(args):
    # Search for matching offers, cheapest first
    query = (f"num_gpus={args.gpus} gpu_name={args.gpu} "
             f"reliability>{args.reliability} dph_total<={args.max_price} "
             f"disk_space>={args.disk} rentable=true")
    offers = json.loads(subprocess.run(
        ["vastai", "search", "offers", query, "--order", "dph_total",
         "--raw", "--limit", "5"],
        capture_output=True, text=True, check=True).stdout)
    if not offers:
        print(f"ERROR: No offers matching: {query}", file=sys.stderr)
        sys.exit(1)
    offer = offers[0]
    print(f"Selected: {offer['gpu_name']} ${offer['dph_total']:.3f}/hr "
          f"(ID: {offer['id']})")

    # Create the instance
    cmd = ["vastai", "create", "instance", str(offer["id"]),
           "--image", args.image, "--disk", str(args.disk)]
    if args.onstart:
        cmd.extend(["--onstart-cmd", args.onstart])
    result = json.loads(subprocess.run(
        cmd, capture_output=True, text=True, check=True).stdout)
    instance_id = result["new_contract"]
    print(f"Instance {instance_id} provisioning...")

    # Poll until the instance reports "running" (up to ~5 minutes)
    for _ in range(30):
        info = json.loads(subprocess.run(
            ["vastai", "show", "instance", str(instance_id), "--raw"],
            capture_output=True, text=True).stdout)
        if info.get("actual_status") == "running":
            print(f"READY: ssh -p {info['ssh_port']} root@{info['ssh_host']}")
            return instance_id, info
        time.sleep(10)
    raise TimeoutError("Instance did not start")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--gpu", default="RTX_4090")
    parser.add_argument("--gpus", type=int, default=1)
    parser.add_argument("--image", required=True)
    parser.add_argument("--disk", type=int, default=50)
    parser.add_argument("--max-price", type=float, default=0.50)
    parser.add_argument("--reliability", type=float, default=0.95)
    parser.add_argument("--onstart", default="")
    deploy(parser.parse_args())
```
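The offer-selection logic in `deploy.py` above can be tested offline without calling the Vast.ai API. A minimal sketch, using made-up offer dicts whose field names match the `--raw` output consumed by the script:

```python
def pick_cheapest(offers):
    """Return the lowest-priced offer, mirroring deploy.py's
    '--order dph_total' sort followed by taking the first element."""
    if not offers:
        raise ValueError("no offers matched the query")
    return min(offers, key=lambda o: o["dph_total"])

# Illustrative offers — values are invented, not real marketplace data
offers = [
    {"id": 101, "gpu_name": "RTX 4090", "dph_total": 0.42},
    {"id": 102, "gpu_name": "RTX 4090", "dph_total": 0.38},
]
best = pick_cheapest(offers)
print(best["id"])  # 102
```

Remember that a provisioned instance bills until destroyed; `vastai destroy instance <ID>` tears it down when the job is done.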
### Step 3: Data Transfer Strategies

```shell
# Small datasets (<5GB): SCP directly
scp -P $PORT ./data.tar.gz root@$HOST:/workspace/

# Large datasets (>5GB): use rsync with compression
rsync -avz --progress -e "ssh -p $PORT" ./data/ root@$HOST:/workspace/data/

# Very large datasets: pre-stage on cloud storage, then pull from the instance
ssh -p $PORT root@$HOST "wget -q https://storage.example.com/dataset.tar.gz -O /workspace/data.tar.gz"
```
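The size thresholds above can be encoded as a small helper for automation scripts. The 5 GB cutoff comes from the comments above; the 50 GB cutoff for switching to pre-staged cloud storage is an assumption, so adjust it to your bandwidth:

```python
def transfer_strategy(size_gb: float) -> str:
    """Map dataset size to the transfer method suggested above."""
    if size_gb < 5:
        return "scp"            # small: direct copy is simplest
    if size_gb < 50:            # assumed cutoff, tune for your link speed
        return "rsync"          # large: resumable, compressed
    return "cloud-storage"      # very large: pre-stage and wget on the instance

print(transfer_strategy(1.2))   # scp
print(transfer_strategy(20))    # rsync
print(transfer_strategy(200))   # cloud-storage
```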
### Step 4: Health Check After Deploy

```shell
ssh -p $PORT -o StrictHostKeyChecking=no root@$HOST << 'CHECK'
echo "=== Deploy Health Check ==="
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
python -c "import torch; print(f'CUDA: {torch.cuda.is_available()}')"
df -h /workspace | tail -1
echo "=== Ready ==="
CHECK
```
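If you capture the health-check output programmatically rather than eyeballing it, the `nvidia-smi` CSV line can be parsed like this (the sample line is illustrative, not captured from a real host):

```python
def parse_gpu_csv(line: str) -> dict:
    """Parse one 'name, memory.total' line produced by
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader."""
    # Split on the first comma only: GPU names themselves contain spaces
    name, mem = (field.strip() for field in line.split(",", 1))
    return {"name": name, "memory_total": mem}

sample = "NVIDIA GeForce RTX 4090, 24564 MiB"
info = parse_gpu_csv(sample)
print(info["name"])           # NVIDIA GeForce RTX 4090
print(info["memory_total"])   # 24564 MiB
```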
## Output
- Optimized Docker image for fast Vast.ai pulls
- Automated deployment script with GPU/price selection
- Data transfer patterns (SCP, rsync, cloud storage)
- Post-deploy health check verification
## Error Handling

| Error | Cause | Solution |
|---|---|---|
| Docker pull timeout | Image too large (>10GB) | Use multi-stage builds; minimize image layers |
| Disk space exhausted | Insufficient disk allocation | Increase the `--disk` parameter |
| SSH timeout after deploy | Instance still loading the image | Wait longer or use a smaller base image |
| CUDA version mismatch | Image CUDA newer than host driver | Filter offers by `cuda_max_good` |
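For the CUDA mismatch case, the `cuda_max_good` filter from the table can be added to the search query the deploy script builds. A sketch of the extended query string; the `12.1` default matches the base image's CUDA version in Step 1:

```python
def build_query(gpus=1, gpu="RTX_4090", max_price=0.50,
                reliability=0.95, disk=50, min_cuda="12.1"):
    """Build a vastai search query that also excludes hosts whose
    driver supports a CUDA version older than the image needs."""
    return (f"num_gpus={gpus} gpu_name={gpu} "
            f"reliability>{reliability} dph_total<={max_price} "
            f"disk_space>={disk} cuda_max_good>={min_cuda} rentable=true")

print(build_query())
```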
## Next Steps

For event-driven workflows, see the `vastai-webhooks-events` skill.
## Examples

One-command deploy:

```shell
python deploy.py --gpu A100 --image ghcr.io/org/train:v1 --max-price 2.00 --disk 100
```

Multi-GPU deploy: set `--gpus 4` and `--gpu H100_SXM` for distributed training with `torchrun`.