memory-optimization


Optimize Python code for reduced memory usage and improved memory efficiency. Use when asked to reduce memory footprint, fix memory leaks, optimize data structures for memory, handle large datasets efficiently, or diagnose memory issues. Covers object sizing, generator patterns, efficient data structures, and memory profiling strategies.

Install

mkdir -p .claude/skills/memory-optimization && curl -L -o skill.zip "https://mcp.directory/api/skills/download/4400" && unzip -o skill.zip -d .claude/skills/memory-optimization && rm skill.zip

Installs to .claude/skills/memory-optimization

About this skill

Memory Optimization Skill

Transform Python code to minimize memory usage while maintaining functionality.

Workflow

  1. Profile to identify memory bottlenecks (largest allocations, leak patterns)
  2. Analyze data structures and object lifecycles
  3. Select optimization strategies based on access patterns
  4. Transform code with memory-efficient alternatives
  5. Verify memory reduction without correctness loss
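Step 1 of the workflow can be done with the stdlib's tracemalloc; a minimal sketch (the list-of-lists allocation stands in for the real workload):

```python
import tracemalloc

tracemalloc.start()

data = [list(range(100)) for _ in range(1_000)]  # stand-in for the real workload

# current = bytes still allocated; peak = high-water mark since start()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current: {current} bytes, peak: {peak} bytes")
```

Comparing peak against current shows how much memory was held only transiently, which is the first clue toward choosing a generator or process-and-discard pattern.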

Memory Optimization Decision Tree

What's consuming memory?

Large collections:
├── List of objects → __slots__, namedtuple, or dataclass(slots=True)
├── List built all at once → Generator/iterator pattern
├── Storing strings → String interning, categorical encoding
└── Numeric data → NumPy arrays instead of lists

Data processing:
├── Loading full file → Chunked reading, memory-mapped files
├── Intermediate copies → In-place operations, views
├── Keeping processed data → Process-and-discard pattern
└── DataFrame operations → Downcast dtypes, sparse arrays

Object lifecycle:
├── Objects never freed → Check circular refs, use weakref
├── Cache growing unbounded → LRU cache with maxsize
├── Global accumulation → Explicit cleanup, context managers
└── Large temporary objects → Delete explicitly, gc.collect()
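The circular-reference branch above can be illustrated with a sketch: a child holds only a weak back-reference to its parent, so deleting the parent actually frees it (Node here is a hypothetical example class):

```python
import gc
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self.parent = None   # a strong reference here would create a cycle
        self.children = []

root = Node('root')
child = Node('child')
root.children.append(child)
child.parent = weakref.ref(root)   # weak back-reference: no cycle

assert child.parent() is root      # dereference by calling the weakref
del root                           # refcount hits zero; parent is freed
gc.collect()
assert child.parent() is None      # dead weakref returns None
```

With a strong `child.parent = root`, the pair would survive `del root` until the cycle collector ran; the weakref makes the lifetime deterministic.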

Transformation Patterns

Pattern 1: Class to slots

Typically reduces per-instance memory by 40-60%, because instances no longer carry a per-instance __dict__:

Before:

class Point:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

After:

class Point:
    __slots__ = ('x', 'y', 'z')

    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
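The savings can be spot-checked with sys.getsizeof; this sketch compares a plain class (instance plus its __dict__) against a slots class, which has no per-instance __dict__ at all:

```python
import sys

class PlainPoint:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class SlotPoint:
    __slots__ = ('x', 'y', 'z')
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

plain = PlainPoint(1, 2, 3)
slot = SlotPoint(1, 2, 3)

# getsizeof is shallow, so count the plain instance plus its attribute dict
plain_size = sys.getsizeof(plain) + sys.getsizeof(plain.__dict__)
slot_size = sys.getsizeof(slot)
print(plain_size, slot_size)
```

Exact numbers vary by CPython version, but the slots instance is consistently the smaller of the two.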

Pattern 2: List to Generator

Avoid materializing entire sequences:

Before:

def get_all_records(files):
    records = []
    for f in files:
        records.extend(parse_file(f))
    return records

all_data = get_all_records(files)
for record in all_data:
    process(record)

After:

def get_all_records(files):
    for f in files:
        yield from parse_file(f)

for record in get_all_records(files):
    process(record)
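The difference is easy to see on the objects themselves: a generator holds only its frame, not the values, so its size stays constant regardless of how many items it will yield:

```python
import sys

squares_list = [i * i for i in range(100_000)]
squares_gen = (i * i for i in range(100_000))

list_size = sys.getsizeof(squares_list)   # hundreds of KB of pointers alone
gen_size = sys.getsizeof(squares_gen)     # a couple hundred bytes, fixed

print(list_size, gen_size)
assert sum(squares_gen) == sum(squares_list)   # same values, produced lazily
```

The trade-off noted in the verification checklist applies: a generator can be consumed only once, and iteration adds a small per-item cost.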

Pattern 3: Downcast Numeric Types

Can reduce NumPy/Pandas memory by 2-8x when the values fit in smaller types:

Before:

import pandas as pd

df = pd.read_csv('data.csv')  # Default int64, float64

After:

def optimize_dtypes(df):
    for col in df.select_dtypes(include=['int']):
        df[col] = pd.to_numeric(df[col], downcast='integer')
    for col in df.select_dtypes(include=['float']):
        df[col] = pd.to_numeric(df[col], downcast='float')
    return df

df = optimize_dtypes(pd.read_csv('data.csv'))
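The same idea in plain NumPy, assuming the values fit in the smaller type; int32 here halves the footprint of int64 with no information lost:

```python
import numpy as np

a64 = np.arange(1_000_000, dtype=np.int64)
a32 = a64.astype(np.int32)        # every value fits in int32

print(a64.nbytes, a32.nbytes)     # 8000000 vs 4000000
assert (a64 == a32).all()         # downcast is lossless for this data
```

`pd.to_numeric(..., downcast=...)` performs the equivalent fit-check automatically and keeps the original dtype when values would not fit.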

Pattern 4: String Deduplication

For repeated strings:

Before:

records = [{'status': 'active', 'type': 'user'} for _ in range(1000000)]

After:

import sys

STATUS_ACTIVE = sys.intern('active')
TYPE_USER = sys.intern('user')

records = [{'status': STATUS_ACTIVE, 'type': TYPE_USER} for _ in range(1000000)]

Or with Pandas:

df['status'] = df['status'].astype('category')
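A quick check that interning really deduplicates: a string built at runtime is a fresh object until sys.intern maps it onto the shared copy:

```python
import sys

raw = ''.join(['act', 'ive'])             # built at runtime, not auto-interned
interned = sys.intern(raw)

assert interned == 'active'
assert interned is sys.intern('active')   # one shared object, however many
                                          # records reference it
```

A million dicts holding `interned` all point at the same string object, instead of potentially holding a million copies.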

Pattern 5: Memory-Mapped File Processing

Process files larger than RAM:

import mmap
import numpy as np

# For binary data: the OS pages bytes in on demand
with open('large_file.bin', 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:4096]  # slicing reads only the pages it touches

# For NumPy arrays
arr = np.memmap('large_array.dat', dtype='float32', mode='r', shape=(1000000, 100))
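A self-contained demo (it writes a small temp file so the snippet actually runs; real use would point at your existing large file):

```python
import mmap
import os
import tempfile

# Create a demo file to map (stand-in for a file larger than RAM)
path = os.path.join(tempfile.mkdtemp(), 'demo.bin')
with open(path, 'wb') as f:
    f.write(b'x' * 1_000_000)

with open(path, 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        total = len(mm)     # file size, no bytes copied yet
        head = mm[:10]      # only the touched pages are read

print(total, head)
os.remove(path)
```

Slices and `find()` on the map behave like bytes operations, but memory is consumed only for the pages actually accessed.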

Pattern 6: Chunked DataFrame Processing

import pandas as pd

def process_large_csv(filepath, chunksize=10000):
    results = []
    for chunk in pd.read_csv(filepath, chunksize=chunksize):
        result = process_chunk(chunk)
        results.append(result)
        del chunk  # Drop the reference before the next chunk is read
    return pd.concat(results)
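The same process-and-discard pattern in pure stdlib Python, runnable end to end (process_chunk here is a hypothetical stand-in that sums a numeric column):

```python
import csv
import os
import tempfile

# Hypothetical stand-in for process_chunk: sum a numeric column
def process_chunk(rows):
    return sum(int(r['value']) for r in rows)

# Write a tiny demo CSV so the snippet runs end to end
path = os.path.join(tempfile.mkdtemp(), 'data.csv')
with open(path, 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['value'])
    writer.writeheader()
    writer.writerows({'value': i} for i in range(10))

total = 0
chunk = []
with open(path, newline='') as f:
    for row in csv.DictReader(f):
        chunk.append(row)
        if len(chunk) == 4:        # process every 4 rows, then discard them
            total += process_chunk(chunk)
            chunk = []
    if chunk:                      # flush the final partial chunk
        total += process_chunk(chunk)

print(total)  # sum of 0..9
os.remove(path)
```

Only one chunk of rows is resident at a time; the running aggregate is the only state that grows.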

Data Structure Memory Comparison

Structure               | Memory per item    | Use case
list of dict            | ~400+ bytes        | Flexible, small datasets
list of class           | ~300 bytes         | Object-oriented, small
list of __slots__ class | ~120 bytes         | Many similar objects
namedtuple              | ~80 bytes          | Immutable records
numpy.ndarray           | 8 bytes (float64)  | Numeric, vectorized ops
pandas.DataFrame        | ~10-50 bytes/cell  | Tabular, analysis
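The top rows of the table are easy to verify for a single record (shallow sizes; exact numbers vary by CPython version):

```python
import sys
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y', 'z'])
as_tuple = Point(1, 2, 3)
as_dict = {'x': 1, 'y': 2, 'z': 3}

tuple_size = sys.getsizeof(as_tuple)
dict_size = sys.getsizeof(as_dict)
print(tuple_size, dict_size)   # the namedtuple is several times smaller
```

Multiplied across millions of records, the per-item gap is what the table's ordering reflects.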

Memory Leak Detection

Common leak patterns and fixes:

Pattern             | Cause                        | Fix
Growing cache       | No eviction policy           | @lru_cache(maxsize=1000)
Event listeners     | Not unregistered             | Weak references or explicit removal
Circular references | Objects reference each other | weakref, break cycles
Global lists        | Append without cleanup       | Bounded deque, periodic clear
Closures            | Capture large objects        | Capture only needed values
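The growing-cache row in action: with maxsize set, functools.lru_cache evicts the least recently used entries instead of growing without bound:

```python
from functools import lru_cache

@lru_cache(maxsize=1000)
def expensive(n):
    return n * n

# 2000 distinct keys over 5000 calls, but the cache never holds more than 1000
for i in range(5_000):
    expensive(i % 2_000)

print(expensive.cache_info().currsize)  # 1000
```

An unbounded `@lru_cache(maxsize=None)` (or a hand-rolled dict cache) would instead retain all 2000 entries for the life of the process.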

Profiling Commands

# Object size
import sys
sys.getsizeof(obj)  # Shallow size only

# Deep size with pympler (third-party: pip install pympler)
from pympler import asizeof
asizeof.asizeof(obj)  # Includes referenced objects

# Memory profiler decorator (third-party: pip install memory-profiler)
from memory_profiler import profile
@profile
def my_function():
    pass

# Tracemalloc for allocation tracking
import tracemalloc
tracemalloc.start()
# ... code ...
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')

Verification Checklist

Before finalizing optimized code:

  • Memory usage reduced (measure with profiler)
  • Functionality preserved (same outputs)
  • No new memory leaks introduced
  • Performance acceptable (generators may add iteration overhead)
  • Code remains readable and maintainable
