dust-llm
Step-by-step guide for adding support for a new LLM in Dust. Use it when adding a new model or updating an existing one.
Install
```shell
mkdir -p .claude/skills/dust-llm && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4670" && unzip -o skill.zip -d .claude/skills/dust-llm && rm skill.zip
```
Installs to `.claude/skills/dust-llm`.
About this skill
Adding Support for a New LLM Model
This skill guides you through adding support for a newly released LLM.
Quick Reference
Files to Modify
| File | Purpose |
|---|---|
| `front/types/assistant/models/{provider}.ts` | Model ID + configuration |
| `front/lib/api/assistant/token_pricing.ts` | Pricing per million tokens |
| `front/types/assistant/models/models.ts` | Central registry |
| `front/lib/api/llm/clients/{provider}/types.ts` | Router whitelist |
| `sdks/js/src/types.ts` | SDK types |
| `front/components/providers/types.ts` | UI availability (optional) |
| `front/lib/api/llm/tests/llm.test.ts` | Integration tests |
Prerequisites
Before adding a model, gather:
- Model ID: the exact provider identifier (e.g., `gpt-4-turbo-2024-04-09`)
- Context size: total context window in tokens
- Pricing: input/output cost per million tokens
- Capabilities: vision, structured output, reasoning effort levels
- Tokenizer: a compatible tokenizer for token counting
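These items map almost one-to-one onto the config and pricing fields edited in the steps below. As a sketch, they can be collected in a small checklist type first — `NewModelPrereqs` is a hypothetical helper for illustration, not a Dust type:

```typescript
// Hypothetical helper type: gathers everything needed before editing files.
// Field names mirror the prerequisites above, not any real Dust interface.
interface NewModelPrereqs {
  modelId: string;            // exact provider identifier
  contextSize: number;        // total context window, in tokens
  inputPricePerMTok: number;  // USD per million input tokens
  outputPricePerMTok: number; // USD per million output tokens
  supportsVision: boolean;
  supportsResponseFormat: boolean;
  tokenizer: string;          // e.g. a tiktoken base name
}

const prereqs: NewModelPrereqs = {
  modelId: "gpt-4-turbo-2024-04-09",
  contextSize: 128_000,
  inputPricePerMTok: 10.0,
  outputPricePerMTok: 30.0,
  supportsVision: true,
  supportsResponseFormat: false,
  tokenizer: "cl100k_base",
};
```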
Step-by-Step: Adding an OpenAI Model
Step 1: Add Model Configuration
Edit `front/types/assistant/models/openai.ts`:

```typescript
export const GPT_4_TURBO_2024_04_09_MODEL_ID =
  "gpt-4-turbo-2024-04-09" as const;

export const GPT_4_TURBO_2024_04_09_MODEL_CONFIG: ModelConfigurationType = {
  providerId: "openai",
  modelId: GPT_4_TURBO_2024_04_09_MODEL_ID,
  displayName: "GPT 4 turbo",
  contextSize: 128_000,
  recommendedTopK: 32,
  recommendedExhaustiveTopK: 64,
  largeModel: true,
  description: "OpenAI's GPT 4 Turbo model for complex tasks (128k context).",
  shortDescription: "OpenAI's second best model.",
  isLegacy: false,
  isLatest: false,
  generationTokensCount: 2048,
  supportsVision: true,
  minimumReasoningEffort: "none",
  maximumReasoningEffort: "none",
  defaultReasoningEffort: "none",
  supportsResponseFormat: false,
  tokenizer: { type: "tiktoken", base: "cl100k_base" },
};
```
Step 2: Add Pricing
Edit `front/lib/api/assistant/token_pricing.ts`:

```typescript
const CURRENT_MODEL_PRICING: Record<BaseModelIdType, PricingEntry> = {
  // ... existing
  "gpt-4-turbo-2024-04-09": {
    input: 10.0, // USD per million input tokens
    output: 30.0, // USD per million output tokens
    cache_read_input_tokens: 1.0, // Optional: cached reads
    cache_creation_input_tokens: 12.5, // Optional: cache creation
  },
};
```
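As a sanity check on units: prices are USD per million tokens, so a request's cost works out as shown below — an arithmetic sketch only, not Dust's actual billing code:

```typescript
// Arithmetic sketch only: not Dust's billing code.
// Prices are USD per million tokens.
function requestCostUsd(
  inputTokens: number,
  outputTokens: number,
  inputPricePerMTok: number,
  outputPricePerMTok: number
): number {
  return (
    (inputTokens / 1_000_000) * inputPricePerMTok +
    (outputTokens / 1_000_000) * outputPricePerMTok
  );
}

// 100k input + 2k output tokens at the rates above:
const cost = requestCostUsd(100_000, 2_000, 10.0, 30.0);
// (0.1 * 10.0) + (0.002 * 30.0) = 1.0 + 0.06 = 1.06 USD
```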
Step 3: Register in Central Registry
Edit `front/types/assistant/models/models.ts`:

```typescript
export const MODEL_IDS = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_ID,
] as const;

export const SUPPORTED_MODEL_CONFIGS: ModelConfigurationType[] = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_CONFIG,
];
```
Step 4: Update Router Whitelist
Edit `front/lib/api/llm/clients/openai/types.ts`:

```typescript
export const OPENAI_WHITELISTED_MODEL_IDS = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_ID,
] as const;
```
Step 5: Update SDK Types
Edit `sdks/js/src/types.ts`:

```typescript
const ModelLLMIdSchema = FlexibleEnumSchema<
  // ... existing
  | "gpt-4-turbo-2024-04-09"
>();
```
Step 6: Add to UI (Optional)
Edit `front/components/providers/types.ts`:

```typescript
export const USED_MODEL_CONFIGS: readonly ModelConfig[] = [
  // ... existing
  GPT_4_TURBO_2024_04_09_MODEL_CONFIG,
] as const;
```
Step 7: Test (Mandatory)
Edit `front/lib/api/llm/tests/llm.test.ts`:

```typescript
const MODELS = {
  // ... existing
  [GPT_4_TURBO_2024_04_09_MODEL_ID]: {
    runTest: true, // Enable for testing
    providerId: "openai",
  },
};
```
Run the test:

```shell
RUN_LLM_TEST=true npx vitest --config lib/api/llm/tests/vite.config.js lib/api/llm/tests/llm.test.ts --run
```

After the test passes, set `runTest: false` to avoid expensive CI runs.
Adding Anthropic Models
Same pattern with Anthropic-specific files:
- `front/types/assistant/models/anthropic.ts` - add `CLAUDE_X_MODEL_ID` and config
- `front/lib/api/llm/clients/anthropic/types.ts` - add to `ANTHROPIC_WHITELISTED_MODEL_IDS`
- `front/types/assistant/models/models.ts` - register in the central registry
- `front/lib/api/assistant/token_pricing.ts` - add pricing
- `sdks/js/src/types.ts` - update SDK types
- Test and validate
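For illustration, a Claude entry would mirror the OpenAI config from Step 1. The model ID, context size, and capability values below are placeholders, and the local `ModelConfigurationType` alias stands in for the shared type imported in the real codebase:

```typescript
// Illustrative only: placeholder model ID and capability values.
// In the real codebase, ModelConfigurationType comes from the shared types.
type ModelConfigurationType = Record<string, unknown>;

export const CLAUDE_X_MODEL_ID = "claude-x-placeholder" as const;

export const CLAUDE_X_MODEL_CONFIG: ModelConfigurationType = {
  providerId: "anthropic",
  modelId: CLAUDE_X_MODEL_ID,
  displayName: "Claude X",
  contextSize: 200_000, // placeholder: use the provider's documented value
  largeModel: true,
  supportsVision: true,
  supportsResponseFormat: false,
  tokenizer: { type: "anthropic" }, // hypothetical tokenizer config
};
```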
Model Configuration Properties
| Property | Description |
|---|---|
| `supportsVision` | Can process images |
| `supportsResponseFormat` | Supports structured output (JSON) |
| `minimumReasoningEffort` | Min reasoning level (`"none"`, `"low"`, `"medium"`, `"high"`) |
| `maximumReasoningEffort` | Max reasoning level |
| `defaultReasoningEffort` | Default reasoning level |
| `tokenizer` | Tokenizer config for token counting |
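Taken together, these properties suggest a shape like the following — a sketch of the capability-related slice of the config type, assuming the field names from the table; the real interface lives under `front/types/assistant/models` and has more fields:

```typescript
// Sketch of the capability-related slice of the model config type.
// Field names come from the table above; the full interface has more fields.
type ReasoningEffort = "none" | "low" | "medium" | "high";

interface ModelCapabilities {
  supportsVision: boolean;
  supportsResponseFormat: boolean;
  minimumReasoningEffort: ReasoningEffort;
  maximumReasoningEffort: ReasoningEffort;
  defaultReasoningEffort: ReasoningEffort;
  tokenizer: { type: string; base?: string };
}

// Values taken from the GPT-4 Turbo example in Step 1.
const gpt4TurboCapabilities: ModelCapabilities = {
  supportsVision: true,
  supportsResponseFormat: false,
  minimumReasoningEffort: "none",
  maximumReasoningEffort: "none",
  defaultReasoningEffort: "none",
  tokenizer: { type: "tiktoken", base: "cl100k_base" },
};
```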
Validation Checklist
- Model config added to provider file
- Pricing updated (input, output, cache if applicable)
- Registered in central registry (`MODEL_IDS` + `SUPPORTED_MODEL_CONFIGS`)
- Router whitelist updated
- SDK types updated
- UI config added (if needed)
- Integration test passes
- Test disabled after validation
Troubleshooting
Model not in UI: check `USED_MODEL_CONFIGS` in `front/components/providers/types.ts`
API calls failing: verify the model ID matches the provider's exact identifier, and check the router whitelist
Token counting errors: validate the context size and tokenizer configuration
Pricing issues: ensure prices are per million tokens in USD
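Several of these failure modes come down to one file being updated and another not. A quick cross-check can catch them at once — a standalone sketch that assumes nothing about Dust's internals beyond the shapes shown in the steps above:

```typescript
// Standalone sketch: cross-check that every registered model ID also has
// a pricing entry and a router whitelist entry.
function findMissing(
  registeredIds: readonly string[],
  pricedIds: readonly string[],
  whitelistedIds: readonly string[]
): { unpriced: string[]; unwhitelisted: string[] } {
  return {
    unpriced: registeredIds.filter((id) => !pricedIds.includes(id)),
    unwhitelisted: registeredIds.filter((id) => !whitelistedIds.includes(id)),
  };
}

// Example: "gpt-4o" is registered and whitelisted but has no pricing entry.
const report = findMissing(
  ["gpt-4-turbo-2024-04-09", "gpt-4o"],
  ["gpt-4-turbo-2024-04-09"],
  ["gpt-4-turbo-2024-04-09", "gpt-4o"]
);
// report.unpriced is ["gpt-4o"]; report.unwhitelisted is []
```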
Reference
- See `front/types/assistant/models/openai.ts` and `anthropic.ts` for examples
- Provider docs: OpenAI, Anthropic, Google, Mistral