python-parallelization
Transform sequential Python code into parallel/concurrent implementations. Use when asked to parallelize Python code, improve code performance through concurrency, convert loops to parallel execution, or identify parallelization opportunities. Handles CPU-bound (multiprocessing), I/O-bound (asyncio, threading), and data-parallel (vectorization) scenarios.
Install
mkdir -p .claude/skills/python-parallelization && curl -L -o skill.zip "https://mcp.directory/api/skills/download/5815" && unzip -o skill.zip -d .claude/skills/python-parallelization && rm skill.zip

Installs to .claude/skills/python-parallelization
About this skill
Python Parallelization Skill
Transform sequential Python code to leverage parallel and concurrent execution patterns.
Workflow
- Analyze the code to identify parallelization candidates
- Classify the workload type (CPU-bound, I/O-bound, or data-parallel)
- Select the appropriate parallelization strategy
- Transform the code with proper synchronization and error handling
- Verify correctness and measure expected speedup
Parallelization Decision Tree
Is the bottleneck CPU-bound or I/O-bound?
CPU-bound (computation-heavy):
├── Independent iterations? → multiprocessing.Pool / ProcessPoolExecutor
├── Shared state needed? → multiprocessing with Manager or shared memory
├── NumPy/Pandas operations? → Vectorization first, then consider numba/dask
└── Large data chunks? → chunked processing with Pool.map
I/O-bound (network, disk, database):
├── Many independent requests? → asyncio with aiohttp/aiofiles
├── Legacy sync code? → ThreadPoolExecutor
├── Mixed sync/async? → asyncio.to_thread()
└── Database queries? → Connection pooling + async drivers
Data-parallel (array/matrix ops):
├── NumPy arrays? → Vectorize, avoid Python loops
├── Pandas DataFrames? → Use built-in vectorized methods
├── Large datasets? → Dask for out-of-core parallelism
└── GPU available? → Consider CuPy or JAX
Transformation Patterns
Pattern 1: Loop to ProcessPoolExecutor (CPU-bound)
Before:

```python
results = []
for item in items:
    results.append(expensive_computation(item))
```

After:

```python
from concurrent.futures import ProcessPoolExecutor

with ProcessPoolExecutor() as executor:
    results = list(executor.map(expensive_computation, items))
```
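For many small items, per-task IPC overhead can dominate; `executor.map` accepts a `chunksize` argument that batches items into larger pickled units per worker. A minimal sketch (the `square` workload is illustrative):

```python
from concurrent.futures import ProcessPoolExecutor

def square(n):
    # Illustrative stand-in for a real CPU-bound function
    return n * n

items = range(10_000)
with ProcessPoolExecutor() as executor:
    # chunksize batches items per worker, cutting
    # inter-process communication overhead for small tasks
    results = list(executor.map(square, items, chunksize=500))
```

A reasonable starting point is `len(items) // (workers * 4)`; the optimum depends on per-item cost, so measure.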
Pattern 2: Sequential I/O to Async (I/O-bound)
Before:

```python
import requests

def fetch_all(urls):
    return [requests.get(url).json() for url in urls]
```

After:

```python
import asyncio
import aiohttp

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_one(session, url) for url in urls]
        return await asyncio.gather(*tasks)

async def fetch_one(session, url):
    async with session.get(url) as response:
        return await response.json()
```
Pattern 3: Nested Loops to Vectorization
Before:

```python
result = []
for i in range(len(a)):
    row = []
    for j in range(len(b)):
        row.append(a[i] * b[j])
    result.append(row)
```

After:

```python
import numpy as np

result = np.outer(a, b)
```
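It is worth confirming that the vectorized form matches the loop version on test inputs, for example with `np.allclose`:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])

# Loop version (reference)
loop_result = [[x * y for y in b] for x in a]

# Vectorized version
vec_result = np.outer(a, b)

# Equivalence check on the test inputs
assert np.allclose(vec_result, loop_result)
```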
Pattern 4: Mixed CPU/IO with asyncio
```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

async def hybrid_pipeline(data, urls):
    loop = asyncio.get_running_loop()
    # CPU-bound work in a process pool
    with ProcessPoolExecutor() as pool:
        processed = await loop.run_in_executor(pool, cpu_heavy_fn, data)
    # I/O-bound work with async
    results = await asyncio.gather(*[fetch(url) for url in urls])
    return processed, results
```
Parallelization Candidates
Look for these patterns in code:
| Pattern | Indicator | Strategy |
|---|---|---|
| `for item in collection` with independent iterations | No shared mutation | `Pool.map` / `executor.map` |
| Multiple `requests.get()` calls or file reads | Sequential I/O | `asyncio.gather()` |
| Nested loops over arrays | Numerical computation | NumPy vectorization |
| `time.sleep()` or blocking waits | Waiting on external resources | Threading or async |
| Large list comprehensions | Independent transforms | `Pool.map` with chunking |
Safety Requirements
Always preserve correctness when parallelizing:
- Identify shared state: variables mutated across iterations break parallelism
- Check dependencies: iteration N depending on iteration N-1 requires sequential execution
- Handle exceptions: wrap parallel code in try/except; use `executor.submit()` for granular error handling
- Manage resources: use context managers and bound the worker count to avoid exhaustion
- Preserve ordering: prefer `map()` over `submit()` when result order matters
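The granular error handling mentioned above can be sketched with `submit()` and `as_completed()`, using threads and a deliberately failing task (`risky_task` is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def risky_task(n):
    # Illustrative task that fails for one input
    if n == 3:
        raise ValueError(f"bad input: {n}")
    return n * 2

results, errors = {}, {}
with ThreadPoolExecutor(max_workers=4) as executor:
    # Map each future back to its input for reporting
    futures = {executor.submit(risky_task, n): n for n in range(5)}
    for future in as_completed(futures):
        n = futures[future]
        try:
            results[n] = future.result()  # re-raises any worker exception
        except ValueError as exc:
            errors[n] = exc  # one failure doesn't abort the rest
```

Unlike a single `executor.map()` call, which raises on the first failed item when you iterate its results, this keeps every successful result and records each failure separately.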
Common Pitfalls
- GIL trap: threading doesn't help CPU-bound Python code; use multiprocessing
- Pickle failures: lambda functions and nested classes can't be pickled for multiprocessing
- Memory explosion: ProcessPoolExecutor copies arguments to each worker process; use shared memory for large data
- Async in sync: you can't just add `async` to existing code; it requires restructuring the call chain
- Over-parallelization: parallel overhead exceeds the gains for small workloads (typically under ~1,000 items)
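For the memory-explosion pitfall, `multiprocessing.shared_memory` (Python 3.8+) lets workers attach to one named buffer instead of receiving pickled copies. A minimal sketch; for brevity the second attach happens in-process, where a real worker would attach by the same name:

```python
from multiprocessing import shared_memory

payload = b"large dataset stands in here" * 1000

# Parent: create a named shared block and write the data once
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# Worker side: attach by name instead of copying the data
worker_view = shared_memory.SharedMemory(name=shm.name)
received = bytes(worker_view.buf[:len(payload)])

# Cleanup: every attacher closes; the creator also unlinks
worker_view.close()
shm.close()
shm.unlink()
```

In a real pool you would pass only `shm.name` (a short string) to each worker, not the data itself.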
Verification Checklist
Before finalizing transformed code:
- Output matches sequential version for test inputs
- No race conditions (shared mutable state properly synchronized)
- Exceptions are caught and handled appropriately
- Resources are properly cleaned up (pools closed, connections released)
- Worker count is bounded (default or explicit limit)
- Added appropriate imports
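The first checklist item can be automated as a quick equivalence test between the sequential and parallel versions (thread-based here so the snippet stands alone; `transform` is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    # Illustrative pure function with no shared state
    return x ** 2 + 1

inputs = list(range(100))

# Sequential reference
sequential = [transform(x) for x in inputs]

# Parallel version; executor.map preserves input order
with ThreadPoolExecutor() as executor:
    parallel = list(executor.map(transform, inputs))

assert parallel == sequential
```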