# klingai-image-to-video
Generate videos from static images using Kling AI. Use when animating images, creating motion from stills, or building image-based content. Trigger with phrases like 'klingai image to video', 'kling ai animate image', 'klingai img2vid', 'animate picture klingai'.
## Install

```shell
mkdir -p .claude/skills/klingai-image-to-video && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4833" && unzip -o skill.zip -d .claude/skills/klingai-image-to-video && rm skill.zip
```

Installs to `.claude/skills/klingai-image-to-video`.
## Kling AI Image-to-Video

### Overview

Animate static images using the `/v1/videos/image2video` endpoint. Supports motion prompts, camera control, dynamic masks (motion brush), static masks, and tail images for start-to-end transitions.

Endpoint: `POST https://api.klingai.com/v1/videos/image2video`
### Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model_name | string | Yes | `kling-v1-5`, `kling-v2-1`, `kling-v2-master`, etc. |
| image | string | Yes | URL of the source image (JPG, PNG, WebP) |
| prompt | string | No | Motion description for the animation |
| negative_prompt | string | No | What to exclude |
| duration | string | Yes | `"5"` or `"10"` seconds |
| aspect_ratio | string | No | `"16:9"` (default) |
| mode | string | No | `"standard"` or `"professional"` |
| cfg_scale | float | No | Prompt adherence (0.0-1.0) |
| image_tail | string | No | End-frame image URL (mutually exclusive with masks/camera) |
| camera_control | object | No | Camera movement (mutually exclusive with masks/image_tail) |
| static_mask | string | No | Mask image URL for fixed regions |
| dynamic_masks | array | No | Motion brush trajectories |
| callback_url | string | No | Webhook for completion |
### Basic Image-to-Video

```python
import jwt, time, os, requests

BASE = "https://api.klingai.com/v1"

def get_headers():
    ak, sk = os.environ["KLING_ACCESS_KEY"], os.environ["KLING_SECRET_KEY"]
    token = jwt.encode(
        {"iss": ak, "exp": int(time.time()) + 1800, "nbf": int(time.time()) - 5},
        sk, algorithm="HS256", headers={"alg": "HS256", "typ": "JWT"}
    )
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Animate a landscape photo
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-1",
    "image": "https://example.com/landscape.jpg",
    "prompt": "Clouds slowly drifting across the sky, gentle wind rustling through trees",
    "negative_prompt": "static, frozen, blurry",
    "duration": "5",
    "mode": "standard",
})
task_id = response.json()["data"]["task_id"]

# Poll for result
while True:
    time.sleep(15)
    result = requests.get(
        f"{BASE}/videos/image2video/{task_id}", headers=get_headers()
    ).json()
    if result["data"]["task_status"] == "succeed":
        print(f"Video: {result['data']['task_result']['videos'][0]['url']}")
        break
    elif result["data"]["task_status"] == "failed":
        raise RuntimeError(result["data"]["task_status_msg"])
```
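The polling loop can be factored into a reusable helper that gives up after a timeout instead of looping forever. A sketch using the same endpoint and status values as the example above (the helper name and default timings are illustrative):

```python
import time
import requests

BASE = "https://api.klingai.com/v1"

def wait_for_video(task_id, headers, timeout=600, interval=15):
    """Poll an image2video task until it succeeds, fails, or times out; return the video URL."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        data = requests.get(
            f"{BASE}/videos/image2video/{task_id}", headers=headers
        ).json()["data"]
        if data["task_status"] == "succeed":
            return data["task_result"]["videos"][0]["url"]
        if data["task_status"] == "failed":
            raise RuntimeError(data.get("task_status_msg", "task failed"))
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

Call it as `url = wait_for_video(task_id, get_headers())`; raising on `failed` and on timeout keeps downstream code from silently hanging on a stuck task.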
### Start-to-End Transition (image_tail)

Use `image_tail` to specify both the first and last frame. Kling interpolates the motion between them.

```python
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-master",
    "image": "https://example.com/sunrise.jpg",      # first frame
    "image_tail": "https://example.com/sunset.jpg",  # last frame
    "prompt": "Time lapse of sun moving across the sky",
    "duration": "5",
    "mode": "professional",
})
```
### Motion Brush (dynamic_masks)

Draw motion paths for specific elements in the image. Up to 6 motion paths per image in v2.6.

```python
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-6",
    "image": "https://example.com/person-standing.jpg",
    "prompt": "Person walking forward naturally",
    "duration": "5",
    "dynamic_masks": [
        {
            "mask": "https://example.com/person-mask.png",  # white = selected region
            "trajectories": [
                {"x": 0.5, "y": 0.7, "t": 0.0},  # start position (normalized 0-1)
                {"x": 0.5, "y": 0.5, "t": 0.5},  # midpoint
                {"x": 0.5, "y": 0.3, "t": 1.0},  # end position
            ]
        }
    ],
})
```
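Trajectory points can be generated rather than hand-written. A small helper (illustrative, not part of the API) that interpolates a straight path between two normalized positions, reproducing the three points used above:

```python
def linear_trajectory(start, end, steps=3):
    """Evenly spaced {x, y, t} points from start to end, all coordinates normalized 0-1."""
    sx, sy = start
    ex, ey = end
    points = []
    for i in range(steps):
        f = i / (steps - 1)  # fraction of the path: 0.0 at start, 1.0 at end
        points.append({
            "x": round(sx + f * (ex - sx), 3),
            "y": round(sy + f * (ey - sy), 3),
            "t": round(f, 3),
        })
    return points

# linear_trajectory((0.5, 0.7), (0.5, 0.3)) yields the three waypoints from the example
```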
### Static Mask (freeze regions)

Keep specific areas of the image static while animating the rest.

```python
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-master",
    "image": "https://example.com/scene.jpg",
    "prompt": "Water flowing in the river, birds flying",
    "duration": "5",
    "static_mask": "https://example.com/buildings-mask.png",  # white = frozen
})
```
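Masks must match the source image dimensions, white for the selected region and black elsewhere. One way to produce a simple rectangular mask locally, assuming Pillow is installed (the function name and output path are illustrative):

```python
from PIL import Image, ImageDraw

def rect_mask(size, box, path="mask.png"):
    """Write a grayscale mask: white rectangle (selected) on black (excluded).
    `size` must equal the source image's (width, height); `box` is (x0, y0, x1, y1)."""
    mask = Image.new("L", size, 0)                  # black background = excluded
    ImageDraw.Draw(mask).rectangle(box, fill=255)   # white box = selected
    mask.save(path)
    return mask

# e.g. freeze the top half of a 1920x1080 image:
# rect_mask((1920, 1080), (0, 0, 1920, 540))
```

The resulting PNG still has to be hosted at a publicly reachable URL before it can be passed as `static_mask`.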
### Mutual Exclusivity Rules

These features cannot be combined in a single request:

| Feature | Incompatible with |
|---|---|
| image_tail | dynamic_masks, static_mask, camera_control |
| dynamic_masks / static_mask | image_tail, camera_control |
| camera_control | image_tail, dynamic_masks, static_mask |
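These conflicts can be caught client-side before the request is sent, saving a round trip. A sketch derived from the table above (the helper name is illustrative):

```python
# Each group may appear alone; mixing keys from two different groups is invalid.
EXCLUSIVE_GROUPS = [
    {"image_tail"},
    {"dynamic_masks", "static_mask"},  # masks may combine with each other
    {"camera_control"},
]

def check_exclusivity(payload):
    """Raise ValueError if the request payload mixes mutually exclusive features."""
    used = [g for g in EXCLUSIVE_GROUPS if g & payload.keys()]
    if len(used) > 1:
        names = ", ".join(sorted(k for g in used for k in g & payload.keys()))
        raise ValueError(f"mutually exclusive features combined: {names}")
```

For example, a payload containing both `image_tail` and `static_mask` raises before any API call is made, while `image` plus `image_tail` alone passes.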
### Image Requirements
| Constraint | Value |
|---|---|
| Formats | JPG, PNG, WebP |
| Max size | 10 MB |
| Min resolution | 300x300 px |
| Max resolution | 4096x4096 px |
| Mask format | PNG with white (selected) / black (excluded) |
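The size and resolution limits can be checked locally before uploading, assuming Pillow is available (the function name and limits-as-constants layout are illustrative, with values taken from the table above):

```python
from io import BytesIO
from PIL import Image

MAX_BYTES = 10 * 1024 * 1024          # 10 MB
ALLOWED_FORMATS = {"JPEG", "PNG", "WEBP"}
MIN_SIDE, MAX_SIDE = 300, 4096        # pixels

def validate_image_bytes(data):
    """Raise ValueError if the image violates the documented constraints."""
    if len(data) > MAX_BYTES:
        raise ValueError("image exceeds 10 MB")
    img = Image.open(BytesIO(data))
    if img.format not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {img.format}")
    w, h = img.size
    if w < MIN_SIDE or h < MIN_SIDE:
        raise ValueError(f"too small: {w}x{h} (min {MIN_SIDE}x{MIN_SIDE})")
    if w > MAX_SIDE or h > MAX_SIDE:
        raise ValueError(f"too large: {w}x{h} (max {MAX_SIDE}x{MAX_SIDE})")
```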
### Error Handling

| Error | Cause | Fix |
|---|---|---|
| 400 invalid image | URL unreachable or wrong format | Verify the image URL is publicly accessible |
| 400 mutual exclusivity | Combined incompatible features | Use only one feature set per request |
| task_status: failed | Image too complex or low quality | Use a higher-resolution, clearer source |
| Mask mismatch | Mask dimensions differ from source | Ensure the mask matches the source image dimensions |