# doc-parser

Parse complex documents with IBM's docling: handles tables, figures, and multi-column layouts.
## Install

```shell
mkdir -p .claude/skills/doc-parser && curl -L -o skill.zip "https://mcp.directory/api/skills/download/3142" && unzip -o skill.zip -d .claude/skills/doc-parser && rm skill.zip
```

Installs to `.claude/skills/doc-parser`.
## About this skill

### Overview

This skill enables advanced document parsing using docling, IBM's state-of-the-art document-understanding library. Parse complex PDFs, Word documents, and images while preserving structure, extracting tables and figures, and handling multi-column layouts.
### How to Use

1. Provide the document to parse.
2. Specify what you want to extract (text, tables, figures, etc.).
3. I'll parse it and return structured data.

Example prompts:

- "Parse this PDF and extract all tables"
- "Convert this academic paper to structured markdown"
- "Extract figures and captions from this document"
- "Parse this report preserving the document structure"
### Domain Knowledge

#### docling Fundamentals

```python
from docling.document_converter import DocumentConverter

# Initialize the converter
converter = DocumentConverter()

# Convert a document
result = converter.convert("document.pdf")

# Access the parsed content
doc = result.document
print(doc.export_to_markdown())
```
#### Supported Formats

| Format | Extension | Notes |
|---|---|---|
| PDF | .pdf | Native and scanned |
| Word | .docx | Full structure preserved |
| PowerPoint | .pptx | Slides as sections |
| Images | .png, .jpg | OCR + layout analysis |
| HTML | .html | Structure preserved |
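Before handing files to the converter, it can help to filter inputs against the extensions in the table above. A minimal sketch (the helper name and extension set are illustrative, not part of docling):

```python
from pathlib import Path

# Extensions from the supported-formats table above (assumed complete here)
SUPPORTED = {".pdf", ".docx", ".pptx", ".png", ".jpg", ".html"}

def partition_inputs(paths):
    """Split candidate files into parseable and skipped, by extension."""
    ok, skipped = [], []
    for p in map(Path, paths):
        # Lower-case the suffix so .PNG and .png are treated alike
        (ok if p.suffix.lower() in SUPPORTED else skipped).append(p)
    return ok, skipped
```

The parseable list can then be fed to `DocumentConverter.convert` one file at a time.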
#### Basic Usage

```python
from docling.document_converter import DocumentConverter

# Create the converter
converter = DocumentConverter()

# Convert a single document
result = converter.convert("report.pdf")

# Access the parsed document
doc = result.document

# Export options
markdown = doc.export_to_markdown()
text = doc.export_to_text()
json_doc = doc.export_to_dict()
```
#### Advanced Configuration

```python
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import PdfPipelineOptions

# Configure the PDF pipeline
pipeline_options = PdfPipelineOptions()
pipeline_options.do_ocr = True
pipeline_options.do_table_structure = True
pipeline_options.table_structure_options.do_cell_matching = True

# Create a converter with options; per-format options are passed
# through format_options, keyed by input format
converter = DocumentConverter(
    allowed_formats=[InputFormat.PDF, InputFormat.DOCX],
    format_options={InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)},
)

result = converter.convert("document.pdf")
```
#### Document Structure

```python
from docling_core.types.doc import TableItem

# Document hierarchy
doc = result.document

# Access metadata
print(doc.name)
print(doc.origin)

# Iterate through content; iterate_items() yields (item, level) pairs
for item, level in doc.iterate_items():
    print(f"Label: {item.label}")
    if hasattr(item, "text"):
        print(f"Text: {item.text}")
    if isinstance(item, TableItem):
        print(f"Rows: {item.data.num_rows}")
```
#### Extracting Tables

```python
from docling.document_converter import DocumentConverter
from docling_core.types.doc import TableItem

def extract_tables(doc_path):
    """Extract all tables from a document."""
    converter = DocumentConverter()
    result = converter.convert(doc_path)
    doc = result.document
    tables = []
    for item, _level in doc.iterate_items():
        if isinstance(item, TableItem):
            # Convert the table to a pandas DataFrame
            table_data = item.export_to_dataframe()
            tables.append({
                'page': item.prov[0].page_no if item.prov else None,
                'dataframe': table_data,
            })
    return tables

# Usage
tables = extract_tables("report.pdf")
for i, table in enumerate(tables):
    print(f"Table {i+1} on page {table['page']}:")
    print(table['dataframe'])
```
#### Extracting Figures

```python
import os

from docling.document_converter import DocumentConverter
from docling_core.types.doc import PictureItem

def extract_figures(doc_path, output_dir):
    """Extract figures with captions."""
    converter = DocumentConverter()
    result = converter.convert(doc_path)
    doc = result.document
    figures = []
    os.makedirs(output_dir, exist_ok=True)
    for item, _level in doc.iterate_items():
        if isinstance(item, PictureItem):
            figure_info = {
                'caption': item.caption_text(doc) if item.captions else None,
                'page': item.prov[0].page_no if item.prov else None,
            }
            # Save the image if it was kept during conversion
            # (set generate_picture_images=True in the pipeline options)
            image = item.get_image(doc)
            if image is not None:
                img_path = os.path.join(output_dir, f"figure_{len(figures)+1}.png")
                image.save(img_path)
                figure_info['path'] = img_path
            figures.append(figure_info)
    return figures
```
#### Handling Multi-column Layouts

```python
from docling.document_converter import DocumentConverter

def parse_multicolumn(doc_path):
    """Parse a document with a multi-column layout."""
    converter = DocumentConverter()
    result = converter.convert(doc_path)
    doc = result.document
    # docling handles column detection automatically;
    # items come back in reading order
    structured_content = []
    for item, level in doc.iterate_items():
        content_item = {
            'label': item.label,
            'text': item.text if hasattr(item, 'text') else None,
            'level': level,
        }
        # Add provenance (page and bounding box) if available
        if item.prov:
            content_item['bbox'] = item.prov[0].bbox
            content_item['page'] = item.prov[0].page_no
        structured_content.append(content_item)
    return structured_content
```
#### Export Formats

```python
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("document.pdf")
doc = result.document

# Markdown export
markdown = doc.export_to_markdown()
with open("output.md", "w") as f:
    f.write(markdown)

# Plain text
text = doc.export_to_text()

# JSON/dict format
json_doc = doc.export_to_dict()

# HTML format (if supported by your docling version)
# html = doc.export_to_html()
```
#### Batch Processing

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

from docling.document_converter import DocumentConverter

def batch_parse(input_dir, output_dir, max_workers=4):
    """Parse multiple documents in parallel and save markdown exports."""
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(exist_ok=True)
    converter = DocumentConverter()

    def process_single(doc_path):
        try:
            result = converter.convert(str(doc_path))
            md = result.document.export_to_markdown()
            out_file = output_path / f"{doc_path.stem}.md"
            with open(out_file, 'w') as f:
                f.write(md)
            return {'file': str(doc_path), 'status': 'success'}
        except Exception as e:
            return {'file': str(doc_path), 'status': 'error', 'error': str(e)}

    docs = list(input_path.glob('*.pdf')) + list(input_path.glob('*.docx'))
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        results = list(executor.map(process_single, docs))
    return results
```
### Best Practices

- **Use an appropriate pipeline**: configure options for your document type
- **Handle large documents**: process in chunks if needed
- **Verify table extraction**: complex tables may need manual review
- **Check OCR quality**: enable OCR for scanned documents
- **Cache results**: store parsed documents for reuse
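The caching bullet above can be sketched as a content-hash cache: skip conversion entirely when a file's bytes have been parsed before. Here `parse_fn` is a stand-in for whatever conversion call you use (e.g. a docling markdown export), not a docling API:

```python
import hashlib
from pathlib import Path

def cached_parse(doc_path, cache_dir, parse_fn):
    """Parse a document, reusing a cached markdown export when the
    file's content hash matches a previous run."""
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    # Key the cache on file content, not the path, so renamed
    # copies still hit the cache
    digest = hashlib.sha256(Path(doc_path).read_bytes()).hexdigest()
    cache_file = cache / f"{digest}.md"
    if cache_file.exists():
        return cache_file.read_text()   # cache hit: skip conversion
    markdown = parse_fn(doc_path)       # cache miss: run the converter
    cache_file.write_text(markdown)
    return markdown
```

With docling, `parse_fn` could be `lambda p: DocumentConverter().convert(p).document.export_to_markdown()`.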
### Common Patterns

#### Academic Paper Parser

```python
from docling.document_converter import DocumentConverter
from docling_core.types.doc import DocItemLabel, TableItem

def parse_academic_paper(pdf_path):
    """Parse the structure of an academic paper."""
    converter = DocumentConverter()
    result = converter.convert(pdf_path)
    doc = result.document
    paper = {
        'title': None,
        'abstract': None,
        'sections': [],
        'references': [],
        'tables': [],
        'figures': [],
    }
    current_section = None
    # Labels follow docling's DocItemLabel enum; adjust for your version
    for item, _level in doc.iterate_items():
        text = item.text if hasattr(item, 'text') else ''
        if item.label == DocItemLabel.TITLE:
            paper['title'] = text
        elif item.label == DocItemLabel.SECTION_HEADER:
            if 'abstract' in text.lower():
                current_section = 'abstract'
            elif 'reference' in text.lower():
                current_section = 'references'
            else:
                paper['sections'].append({'title': text, 'content': ''})
                current_section = 'section'
        elif item.label in (DocItemLabel.TEXT, DocItemLabel.PARAGRAPH):
            if current_section == 'abstract':
                paper['abstract'] = text
            elif current_section == 'section' and paper['sections']:
                paper['sections'][-1]['content'] += text + '\n'
        elif isinstance(item, TableItem):
            paper['tables'].append({
                'caption': item.caption_text(doc) if item.captions else None,
                'data': item.export_to_dataframe(),
            })
    return paper
```
#### Report to Structured Data

```python
from docling.document_converter import DocumentConverter

def parse_business_report(doc_path):
    """Parse a business report into a structured format."""
    converter = DocumentConverter()
    result = converter.convert(doc_path)
    doc = result.document
    report = {
        'metadata': {'title': None, 'date': None, 'author': None},
        'executive_summary': None,
        'sections': [],
        'key_metrics': [],
        'recommendations': [],
    }
    # Walk the document and fill in the fields above
    for item, _level in doc.iterate_items():
        # Implement parsing logic based on the document structure
        pass
    return report
```
### Examples

#### Example 1: Parse Financial Report

```python
from docling.document_converter import DocumentConverter

def parse_financial_report(pdf_path):
    """Extract structured data from a financial report."""
    converter = DocumentConverter()
    result = converter.convert(pdf_path)
    doc = result.document
    financial_data = {
        'income_sta
```

*Content truncated.*