# smart-ocr

Extract text from images and scanned documents using PaddleOCR, which supports 100+ languages.
## Install

```shell
mkdir -p .claude/skills/smart-ocr && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7909" && unzip -o skill.zip -d .claude/skills/smart-ocr && rm skill.zip
```

Installs to `.claude/skills/smart-ocr`.
## About this skill

### Overview

This skill enables intelligent text extraction from images and scanned documents using PaddleOCR, a leading OCR engine that supports 100+ languages. Extract text from photos, screenshots, scanned PDFs, and handwritten documents with high accuracy.
### How to Use

- Provide the image or scanned document
- Optionally specify the language(s) to detect
- I'll extract text with position and confidence data

Example prompts:

- "Extract all text from this screenshot"
- "OCR this scanned PDF document"
- "Read the text from this business card photo"
- "Extract Chinese and English text from this image"
### Domain Knowledge

#### PaddleOCR Fundamentals

```python
from paddleocr import PaddleOCR

# Initialize the OCR engine
ocr = PaddleOCR(use_angle_cls=True, lang='en')

# Run OCR on an image
result = ocr.ocr('image.png', cls=True)

# Result structure: [[box, (text, confidence)], ...]
for line in result[0]:
    box = line[0]      # [[x1,y1], [x2,y2], [x3,y3], [x4,y4]]
    text = line[1][0]  # Extracted text
    conf = line[1][1]  # Confidence score
    print(f"{text} ({conf:.2f})")
```
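The nested result structure takes some unpacking, so here is a quick sketch of flattening it into plain text. The sample boxes and strings below are invented for illustration; no OCR engine is needed to run it:

```python
# Mock result in PaddleOCR's documented shape: [[box, (text, confidence)], ...]
sample_result = [[
    [[[10, 10], [120, 10], [120, 30], [10, 30]], ("Hello", 0.98)],
    [[[10, 40], [150, 40], [150, 60], [10, 60]], ("World", 0.95)],
]]

# Join all recognized lines into plain text, top entry first
plain_text = "\n".join(line[1][0] for line in sample_result[0])
print(plain_text)  # Hello\nWorld
```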
#### Supported Languages

```python
# Common language codes (see the PaddleOCR docs for the full list)
languages = {
    'en': 'English',
    'ch': 'Chinese (Simplified)',
    'cht': 'Chinese (Traditional)',
    'japan': 'Japanese',
    'korean': 'Korean',
    'french': 'French',
    'german': 'German',
    'spanish': 'Spanish',
    'russian': 'Russian',
    'arabic': 'Arabic',
    'hindi': 'Hindi',
    'vi': 'Vietnamese',
    'th': 'Thai',
    # ... 100+ languages supported
}

# Use a specific language per engine instance
ocr = PaddleOCR(lang='ch')     # Chinese
ocr = PaddleOCR(lang='japan')  # Japanese
```

Note that each `PaddleOCR` instance recognizes one language (or language family) at a time; there is no automatic language detection, so mixed-language documents may need one pass per language.
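Since the engine takes a short code rather than a language name, a small lookup helper can make call sites friendlier. The name-to-code subset below is an assumption for illustration; verify the codes against the official PaddleOCR language table:

```python
# Assumed subset mapping of human-readable names to PaddleOCR codes
LANG_CODES = {'english': 'en', 'chinese': 'ch', 'japanese': 'japan', 'korean': 'korean'}

def resolve_lang(name, default='en'):
    """Return the PaddleOCR language code for a human-readable name."""
    return LANG_CODES.get(name.strip().lower(), default)

print(resolve_lang('Japanese'))  # japan
print(resolve_lang('Klingon'))   # falls back to: en
```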
#### Configuration Options

```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(
    # Detection settings
    det_model_dir=None,       # Custom detection model
    det_limit_side_len=960,   # Max side length for detection
    det_db_thresh=0.3,        # Binarization threshold
    det_db_box_thresh=0.5,    # Box score threshold
    # Recognition settings
    rec_model_dir=None,       # Custom recognition model
    rec_char_dict_path=None,  # Custom character dictionary
    # Angle classification
    use_angle_cls=True,       # Enable angle classification
    cls_model_dir=None,       # Custom classification model
    # Language
    lang='en',                # Language code
    # Performance
    use_gpu=True,             # Use GPU if available
    gpu_mem=500,              # GPU memory limit (MB)
    enable_mkldnn=True,       # CPU optimization
    # Output
    show_log=False,           # Suppress logs
)
```
#### Processing Different Sources

##### Image Files

```python
# Single image
result = ocr.ocr('image.png')

# Multiple images
images = ['img1.png', 'img2.png', 'img3.png']
for img in images:
    result = ocr.ocr(img)
    process_result(result)
```
##### PDF Files (Scanned)

```python
import os

from pdf2image import convert_from_path

def ocr_pdf(pdf_path):
    """OCR a scanned PDF, returning the text of all pages."""
    # Convert PDF pages to images (pdf2image requires poppler)
    images = convert_from_path(pdf_path)
    all_text = []
    for i, img in enumerate(images):
        # Save a temporary image for this page
        temp_path = f'temp_page_{i}.png'
        img.save(temp_path)
        # OCR the image
        result = ocr.ocr(temp_path)
        # Extract text
        page_text = '\n'.join(line[1][0] for line in result[0])
        all_text.append(f"--- Page {i+1} ---\n{page_text}")
        os.remove(temp_path)
    return '\n\n'.join(all_text)
```
##### URLs and Bytes

`ocr.ocr()` accepts a file path or a NumPy array, so decode raw bytes through Pillow first; passing a `BytesIO` object directly is not reliable across PaddleOCR versions:

```python
from io import BytesIO

import numpy as np
import requests
from PIL import Image

# From a URL
response = requests.get('https://example.com/image.png')
img = np.array(Image.open(BytesIO(response.content)).convert('RGB'))
result = ocr.ocr(img)

# From bytes
with open('image.png', 'rb') as f:
    img = np.array(Image.open(BytesIO(f.read())).convert('RGB'))
result = ocr.ocr(img)
```
#### Result Processing

```python
def process_ocr_result(result):
    """Process an OCR result into structured data."""
    lines = []
    for line in result[0]:
        box = line[0]
        text = line[1][0]
        confidence = line[1][1]
        # Compute an axis-aligned bounding box from the quadrilateral
        x_coords = [p[0] for p in box]
        y_coords = [p[1] for p in box]
        lines.append({
            'text': text,
            'confidence': confidence,
            'bbox': {
                'left': min(x_coords),
                'top': min(y_coords),
                'right': max(x_coords),
                'bottom': max(y_coords),
            },
            'raw_box': box,
        })
    return lines

# Sort by position (top to bottom, then left to right)
def sort_by_position(lines):
    return sorted(lines, key=lambda x: (x['bbox']['top'], x['bbox']['left']))
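The sort key can be sanity-checked on hand-made entries in the shape `process_ocr_result` produces (the coordinates below are invented):

```python
# Minimal entries mimicking process_ocr_result output
lines = [
    {'text': 'second', 'bbox': {'top': 40, 'left': 5}},
    {'text': 'first',  'bbox': {'top': 10, 'left': 5}},
    {'text': 'third',  'bbox': {'top': 40, 'left': 90}},
]

# Same key as sort_by_position: top-to-bottom, then left-to-right
ordered = sorted(lines, key=lambda x: (x['bbox']['top'], x['bbox']['left']))
print([l['text'] for l in ordered])  # ['first', 'second', 'third']
```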
#### Text Layout Reconstruction

```python
def reconstruct_layout(result, line_threshold=10):
    """Reconstruct text layout from OCR results."""
    lines = process_ocr_result(result)
    lines = sort_by_position(lines)
    # Group entries into logical lines by vertical proximity
    text_lines = []
    current_line = []
    current_y = None
    for line in lines:
        y = line['bbox']['top']
        if current_y is None or abs(y - current_y) < line_threshold:
            current_line.append(line)
            current_y = y
        else:
            # Start a new line
            text_lines.append(' '.join(l['text'] for l in current_line))
            current_line = [line]
            current_y = y
    # Add the last line
    if current_line:
        text_lines.append(' '.join(l['text'] for l in current_line))
    return '\n'.join(text_lines)
```
### Best Practices

- **Preprocess images**: improve quality (contrast, sharpness, deskew) before OCR
- **Choose the correct language**: specifying the language improves accuracy
- **Handle multi-column layouts**: process columns separately
- **Filter low confidence**: skip results below a confidence threshold
- **Batch processing**: process multiple images efficiently
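The confidence filter from the list above is a one-liner over the result structure; a minimal sketch on an invented sample:

```python
def filter_low_confidence(result, threshold=0.8):
    """Keep only OCR lines at or above the confidence threshold."""
    return [line for line in result[0] if line[1][1] >= threshold]

# Invented sample: one clean read, one garbled low-confidence read
sample_result = [[
    [[[0, 0], [50, 0], [50, 20], [0, 20]], ("Invoice", 0.97)],
    [[[0, 30], [50, 30], [50, 50], [0, 50]], ("Inv0lce", 0.42)],
]]
kept = filter_low_confidence(sample_result)
print([line[1][0] for line in kept])  # ['Invoice']
```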
### Common Patterns

#### Image Preprocessing

```python
from PIL import Image, ImageEnhance, ImageFilter

def preprocess_image(image_path):
    """Preprocess an image for better OCR."""
    img = Image.open(image_path)
    # Convert to grayscale
    img = img.convert('L')
    # Enhance contrast
    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(2.0)
    # Sharpen
    img = img.filter(ImageFilter.SHARPEN)
    # Save the preprocessed copy
    preprocessed_path = 'preprocessed.png'
    img.save(preprocessed_path)
    return preprocessed_path
```
#### Batch OCR with Progress

```python
from concurrent.futures import ThreadPoolExecutor

from tqdm import tqdm

def batch_ocr(image_paths, max_workers=4):
    """OCR multiple images in parallel.

    Note: a single shared PaddleOCR engine may not be thread-safe;
    for heavy workloads, create one engine per worker instead.
    """
    results = {}

    def process_single(img_path):
        result = ocr.ocr(img_path)
        return img_path, result

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(process_single, p) for p in image_paths]
        for future in tqdm(futures, desc="Processing OCR"):
            path, result = future.result()
            results[path] = result
    return results
```
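The executor pattern can be exercised without the real engine by substituting a stub for `ocr.ocr` (the stub and its output strings below are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_ocr(path):
    # Stub standing in for ocr.ocr(path), returning the documented shape
    return [[[[0, 0], [1, 0], [1, 1], [0, 1]], (f"text-of-{path}", 0.9)]]

def batch(paths, max_workers=2):
    """Map each path to its (stub) OCR result in parallel."""
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return dict(zip(paths, ex.map(fake_ocr, paths)))

results = batch(['a.png', 'b.png'])
print(sorted(results))  # ['a.png', 'b.png']
```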
### Examples

#### Example 1: Business Card Reader

```python
import re

from paddleocr import PaddleOCR

def read_business_card(image_path):
    """Extract contact info from a business card."""
    ocr = PaddleOCR(use_angle_cls=True, lang='en')
    result = ocr.ocr(image_path)
    # Extract all text
    all_text = [line[1][0] for line in result[0]]
    full_text = '\n'.join(all_text)
    # Parse contact info
    contact = {
        'name': None,
        'email': None,
        'phone': None,
        'company': None,
        'title': None,
        'raw_text': full_text,
    }
    # Email pattern
    email_match = re.search(r'[\w\.-]+@[\w\.-]+\.\w+', full_text)
    if email_match:
        contact['email'] = email_match.group()
    # Phone pattern
    phone_match = re.search(r'[\+\d][\d\s\-\(\)]{8,}', full_text)
    if phone_match:
        contact['phone'] = phone_match.group().strip()
    # The name is usually the first (largest) text block
    if all_text:
        contact['name'] = all_text[0]
    return contact

card_info = read_business_card('business_card.jpg')
print(f"Name: {card_info['name']}")
print(f"Email: {card_info['email']}")
print(f"Phone: {card_info['phone']}")
```
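The two regexes do the heavy lifting here, and they can be checked on plain text without any OCR; the card text below is made up:

```python
import re

card_text = "Jane Doe\nAcme Corp\njane.doe@acme.example\n+1 415 555 0100"

# Same patterns as in read_business_card above
email = re.search(r'[\w\.-]+@[\w\.-]+\.\w+', card_text)
phone = re.search(r'[\+\d][\d\s\-\(\)]{8,}', card_text)
print(email.group())          # jane.doe@acme.example
print(phone.group().strip())  # +1 415 555 0100
```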
#### Example 2: Receipt Scanner

```python
import re

from paddleocr import PaddleOCR

def scan_receipt(image_path):
    """Extract items and totals from a receipt."""
    ocr = PaddleOCR(use_angle_cls=True, lang='en')
    result = ocr.ocr(image_path)
    lines = []
    for line in result[0]:
        text = line[1][0]
        y_pos = line[0][0][1]  # top-left y coordinate
        lines.append({'text': text, 'y': y_pos})
    # Sort by vertical position
    lines.sort(key=lambda x: x['y'])
    receipt = {
        'items': [],
        'subtotal': None,
        'tax': None,
        'total': None,
    }
    for line in lines:
        text = line['text']
        # Look for totals
        if 'total' in text.lower():
            amount = re.search(r'\$?([\d,]+\.?\d*)', text)
            if amount:
                if 'sub' in text.lower():
                    receipt['subtotal'] = float(amount.group(1).replace(',', ''))
                else:
                    receipt['total'] = float(amount.group(1).replace(',', ''))
        # Look for tax
        elif 'tax' in text.lower():
            amount = re.search(r'\$?([\d,]+\.?\d*)', text)
            if amount:
                receipt['tax'] = float(amount.group(1).replace(',', ''))
        # Look for items (a line ending in a price)
        else:
            item_match = re.search(r'(.+?)\s+\$?([\d,]+\.?\d+)$', text)
            if item_match:
                receipt['items'].append({
                    'name': item_match.group(1).strip(),
                    'price': float(item_match.group(2).replace(',', '')),
                })
    return receipt
```
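The per-line item parsing reduces to a single regex, sketched here as a standalone helper checked against invented receipt lines:

```python
import re

def parse_item(text):
    """Parse 'name  $price' lines; returns (name, price) or None."""
    m = re.search(r'(.+?)\s+\$?([\d,]+\.?\d+)$', text)
    if not m:
        return None
    return m.group(1).strip(), float(m.group(2).replace(',', ''))

print(parse_item("Coffee  $4.50"))  # ('Coffee', 4.5)
print(parse_item("Thank you!"))     # None
```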