codeql
Runs CodeQL static analysis for security vulnerability detection using interprocedural data flow and taint tracking. Applicable when finding vulnerabilities, running a security scan, performing a security audit, running CodeQL, building a CodeQL database, selecting query rulesets, creating data extension models, or processing CodeQL SARIF output. NOT for writing custom QL queries or CI/CD pipeline setup.
Install
mkdir -p .claude/skills/codeql && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7081" && unzip -o skill.zip -d .claude/skills/codeql && rm skill.zip
Installs to .claude/skills/codeql
About this skill
CodeQL Analysis
Supported languages: Python, JavaScript/TypeScript, Go, Java/Kotlin, C/C++, C#, Ruby, Swift.
Skill resources: Reference files and templates are located at {baseDir}/references/ and {baseDir}/workflows/.
Essential Principles
- Database quality is non-negotiable. A database that builds is not automatically good. Always run quality assessment (file counts, baseline LoC, extractor errors) and compare against expected source files. A cached build produces zero useful extraction.
- Data extensions catch what CodeQL misses. Even projects using standard frameworks (Django, Spring, Express) have custom wrappers around database calls, request parsing, or shell execution. Skipping the create-data-extensions workflow means missing vulnerabilities in project-specific code paths.
- Explicit suite references prevent silent query dropping. Never pass pack names directly to codeql database analyze: each pack's defaultSuiteFile applies hidden filters that can produce zero results. Always generate a custom .qls suite file.
- Zero findings needs investigation, not celebration. Zero results can indicate poor database quality, missing models, wrong query packs, or silent suite filtering. Investigate before reporting clean.
- macOS Apple Silicon requires workarounds for compiled languages. Exit code 137 is an arm64e/arm64 mismatch, not a build failure. Try Homebrew arm64 tools or Rosetta before falling back to build-mode=none.
- Follow workflows step by step. Once a workflow is selected, execute it step by step without skipping phases. Each phase gates the next: skipping quality assessment or data extensions leads to incomplete analysis.
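To make the explicit-suite principle concrete, a minimal custom .qls file can simply re-import a published suite by its path inside the pack, bypassing the pack's defaultSuiteFile. This sketch assumes the standard GitHub-published Python pack; adjust the pack and suite names for your language:

```
# custom.qls: reference the suite explicitly instead of passing a bare pack name
- import: codeql-suites/python-security-extended.qls
  from: codeql/python-queries
```

It is then passed directly to analysis, e.g. codeql database analyze "$OUTPUT_DIR/codeql.db" custom.qls --format=sarif-latest --output="$OUTPUT_DIR/raw/results.sarif".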
Output Directory
All generated files (database, build logs, diagnostics, extensions, results) are stored in a single output directory.
- If the user specifies an output directory in their prompt, use it as OUTPUT_DIR.
- If not specified, default to ./static_analysis_codeql_1. If that already exists, increment to _2, _3, etc.
In both cases, always create the directory with mkdir -p before writing any files.
# Resolve output directory
if [ -n "$USER_SPECIFIED_DIR" ]; then
OUTPUT_DIR="$USER_SPECIFIED_DIR"
else
BASE="static_analysis_codeql"
N=1
while [ -e "${BASE}_${N}" ]; do
N=$((N + 1))
done
OUTPUT_DIR="${BASE}_${N}"
fi
mkdir -p "$OUTPUT_DIR"
The output directory is resolved once at the start before any workflow executes. All workflows receive $OUTPUT_DIR and store their artifacts there:
$OUTPUT_DIR/
├── rulesets.txt # Selected query packs (logged after Step 3)
├── codeql.db/ # CodeQL database (dir containing codeql-database.yml)
├── build.log # Build log
├── codeql-config.yml # Exclusion config (interpreted languages)
├── diagnostics/ # Diagnostic queries and CSVs
├── extensions/ # Data extension YAMLs
├── raw/ # Unfiltered analysis output
│ ├── results.sarif
│ └── <mode>.qls
└── results/ # Final results (filtered for important-only, copied for run-all)
└── results.sarif
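Once results/results.sarif exists, a quick per-rule summary is a useful sanity check on the run. This is a sketch assuming jq is installed; the SARIF path follows the layout above:

```shell
# Count findings per rule ID in a SARIF file; prints nothing on zero findings
sarif_rule_counts() {
  jq -r '.runs[]?.results[]? | .ruleId' "$1" | sort | uniq -c | sort -rn
}
```

Example: sarif_rule_counts "$OUTPUT_DIR/results/results.sarif". An empty summary should trigger the zero-findings investigation described above, not a clean report.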
Database Discovery
A CodeQL database is identified by the presence of a codeql-database.yml marker file inside its directory. When searching for existing databases, always collect all matches — there may be multiple databases from previous runs or for different languages.
Discovery command:
# Find ALL CodeQL databases (top-level and one subdirectory deep)
find . -maxdepth 3 -name "codeql-database.yml" -not -path "*/\.*" 2>/dev/null \
| while read -r yml; do dirname "$yml"; done
- Inside $OUTPUT_DIR: find "$OUTPUT_DIR" -maxdepth 2 -name "codeql-database.yml"
- Project-wide (for auto-detection): find . -maxdepth 3 -name "codeql-database.yml" covers databases at the project top level (./db-name/) and one subdirectory deep (./subdir/db-name/). It does not search deeper.
Never assume a database is named codeql.db — discover it by its marker file.
When multiple databases are found:
For each discovered database, collect metadata to help the user choose:
# For each database, extract language and creation time
for db in $FOUND_DBS; do
CODEQL_LANG=$(codeql resolve database --format=json -- "$db" 2>/dev/null | jq -r '.languages[0]')
CREATED=$(grep '^creationMetadata:' -A5 "$db/codeql-database.yml" 2>/dev/null | grep 'creationTime' | awk '{print $2}')
echo "$db — language: $CODEQL_LANG, created: $CREATED"
done
Then use AskUserQuestion to let the user select which database to use, or to build a new one. Skip AskUserQuestion if the user explicitly stated which database to use or to build a new one in their prompt.
Quick Start
For the common case ("scan this codebase for vulnerabilities"):
# 1. Verify CodeQL is installed
if ! command -v codeql >/dev/null 2>&1; then
echo "NOT INSTALLED: codeql binary not found on PATH"
else
codeql --version || echo "ERROR: codeql found but --version failed (check installation)"
fi
# 2. Resolve output directory
BASE="static_analysis_codeql"; N=1
while [ -e "${BASE}_${N}" ]; do N=$((N + 1)); done
OUTPUT_DIR="${BASE}_${N}"; mkdir -p "$OUTPUT_DIR"
Then execute the full pipeline: build database → create data extensions → run analysis using the workflows below.
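As a shape for that pipeline, the sketch below wires the pieces together for a hypothetical Python project. The suite file path assumes the run-analysis workflow has already generated it; this is not a substitute for executing the full workflows:

```shell
# End-to-end sketch. Assumes: Python source, codeql on PATH, and a suite
# file produced by the run-analysis workflow at $out/raw/python.qls.
run_codeql_pipeline() {
  local out="$1" src="${2:-.}"
  mkdir -p "$out/raw" "$out/results"
  # Build the database, capturing the build log for quality assessment
  codeql database create "$out/codeql.db" --language=python \
    --source-root="$src" --overwrite 2>&1 | tee "$out/build.log"
  # Analyze with an explicit .qls suite, never a bare pack name
  codeql database analyze "$out/codeql.db" "$out/raw/python.qls" \
    --format=sarif-latest --output="$out/raw/results.sarif"
}
```

Invoked as run_codeql_pipeline "$OUTPUT_DIR" ./src, with quality assessment and data-extension creation still required between the two steps.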
When to Use
- Scanning a codebase for security vulnerabilities with deep data flow analysis
- Building a CodeQL database from source code (with build capability for compiled languages)
- Finding complex vulnerabilities that require interprocedural taint tracking or AST/CFG analysis
- Performing comprehensive security audits with multiple query packs
When NOT to Use
- Writing custom queries - Use a dedicated query development skill
- CI/CD integration - Use GitHub Actions documentation directly
- Quick pattern searches - Use Semgrep or grep for speed
- No build capability for compiled languages - Consider Semgrep instead
- Single-file or lightweight analysis - Semgrep is faster for simple pattern matching
Rationalizations to Reject
These shortcuts lead to missed findings. Do not accept them:
- "security-extended is enough" - It is the baseline. Always check whether Trail of Bits packs and Community Packs are available for the language; they catch categories security-extended misses entirely.
- "security-and-quality is the broadest suite" - security-and-quality excludes all experimental/ query paths. For run-all mode, import both security-and-quality and security-experimental. The delta is 1–52 queries depending on the language.
- "The database built, so it's good" - A database that builds has not necessarily extracted well. Always run quality assessment and check file counts against expected source files.
- "Data extensions aren't needed for standard frameworks" - Even Django/Spring apps have custom wrappers that CodeQL does not model. Skipping extensions means missing vulnerabilities.
- "build-mode=none is fine for compiled languages" - It produces severely incomplete analysis. Use it only as an absolute last resort. On macOS, try the arm64 toolchain workaround or Rosetta first.
- "The build fails on macOS, just use build-mode=none" - Exit code 137 is caused by an arm64e/arm64 mismatch, not a fundamental build failure. See macos-arm64e-workaround.md.
- "No findings means the code is secure" - Zero findings can indicate poor database quality, missing models, or wrong query packs. Investigate before reporting clean results.
- "I'll just run the default suite" / "I'll just pass the pack names directly" - Each pack's defaultSuiteFile applies hidden filters and can produce zero results. Always use an explicit suite reference.
- "I'll put files in the current directory" - All generated files must go in $OUTPUT_DIR. Scattering files in the working directory makes cleanup impossible and risks overwriting previous runs.
- "Just use the first database I find" - Multiple databases may exist for different languages or from previous runs. When more than one is found, present all options to the user. Only skip the prompt when the user already specified which database to use.
- "The user said 'scan', that means they want me to pick a database" - "Scan" is not database selection. If multiple databases exist and the user didn't name one, ask.
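To illustrate the data-extensions point, a minimal sink model can mark a project's custom wrapper as a SQL injection sink using the models-as-data YAML format. The Java shape is shown here as an example; the package, class, and method names are invented, and the exact sink kind string may vary by language and pack version:

```
extensions:
  - addsTo:
      pack: codeql/java-all
      extensible: sinkModel
    data:
      # package, type, subtypes, name, signature, ext, input, kind, provenance
      - ["com.example.db", "QueryRunner", True, "runRaw", "(String)", "", "Argument[0]", "sql-injection", "manual"]
```

Saved under $OUTPUT_DIR/extensions/, a file like this teaches the taint-tracking queries that data reaching the first argument of runRaw is dangerous, closing the gap for project-specific code paths.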
Workflow Selection
This skill has three workflows. Once a workflow is selected, execute it step by step without skipping phases.
| Workflow | Purpose |
|---|---|
| build-database | Create CodeQL database using build methods in sequence |
| create-data-extensions | Detect or generate data extension models for project APIs |
| run-analysis | Select rulesets, execute queries, process results |
Auto-Detection Logic
If user explicitly specifies what to do (e.g., "build a database", "run analysis on ./my-db"), execute that workflow directly. Do NOT call AskUserQuestion for database selection if the user's prompt already makes their intent clear — e.g., "build a new database", "analyze the codeql database in static_analysis_codeql_2", "run a full scan from scratch".
Default pipeline for "test", "scan", "analyze", or similar: Discover existing databases first, then decide.
# Find ALL CodeQL databases by looking for codeql-database.yml marker file
# Search top-level dirs and one subdirectory deep
FOUND_DBS=()
while IFS= read -r yml; do
db_dir=$(dirname "$yml")
codeql resolve database -- "$db_dir" >/dev/null 2>&1 && FOUND_DBS+=("$db_dir")
done < <(find . -maxdepth 3 -name "codeql-database.yml" -not -path "*/\.*" 2>/dev/null)
echo "Found ${#FOUND_DBS[@]} existing database(s)"