llamaindex
Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.
Install
mkdir -p .claude/skills/llamaindex && curl -L -o skill.zip "https://mcp.directory/api/skills/download/779" && unzip -o skill.zip -d .claude/skills/llamaindex && rm skill.zip
Installs to .claude/skills/llamaindex
About this skill
LlamaIndex - Data Framework for LLM Applications
The leading framework for connecting LLMs with your data.
When to use LlamaIndex
Use LlamaIndex when:
- Building RAG (retrieval-augmented generation) applications
- Need document question-answering over private data
- Ingesting data from multiple sources (300+ connectors)
- Creating knowledge bases for LLMs
- Building chatbots with enterprise data
- Need structured data extraction from documents
Metrics:
- 45,100+ GitHub stars
- 23,000+ repositories use LlamaIndex
- 300+ data connectors (LlamaHub)
- 1,715+ contributors
- v0.14.7 (stable)
Use alternatives instead:
- LangChain: More general-purpose, better for agents
- Haystack: Production search pipelines
- txtai: Lightweight semantic search
- Chroma: Just need vector storage
Quick start
Installation
# Starter package (recommended)
pip install llama-index
# Or minimal core + specific integrations
pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-embeddings-openai
5-line RAG example
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
# Load documents
documents = SimpleDirectoryReader("data").load_data()
# Create index
index = VectorStoreIndex.from_documents(documents)
# Query
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
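The quickstart assumes OpenAI credentials, since the default LLM and embedding model are OpenAI's; one way to provide the key (the value shown is a placeholder):
import os
# The default Settings use OpenAI for both the LLM and embeddings
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; set your real key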
Core concepts
1. Data connectors - Load documents
from llama_index.core import SimpleDirectoryReader, Document
from llama_index.readers.web import SimpleWebPageReader
from llama_index.readers.github import GithubRepositoryReader, GithubClient
# Directory of files
documents = SimpleDirectoryReader("./data").load_data()
# Web pages
reader = SimpleWebPageReader()
documents = reader.load_data(["https://example.com"])
# GitHub repository (the reader requires an authenticated GithubClient)
github_client = GithubClient(github_token="ghp_...")  # token placeholder
reader = GithubRepositoryReader(github_client=github_client, owner="user", repo="repo")
documents = reader.load_data(branch="main")
# Manual document creation
doc = Document(
    text="This is the document content",
    metadata={"source": "manual", "date": "2025-01-01"}
)
2. Indices - Structure data
from llama_index.core import VectorStoreIndex, SummaryIndex, TreeIndex
# Vector index (most common - semantic search)
vector_index = VectorStoreIndex.from_documents(documents)
# Summary index (sequential scan; formerly ListIndex)
summary_index = SummaryIndex.from_documents(documents)
# Tree index (hierarchical summary)
tree_index = TreeIndex.from_documents(documents)
# Save index
vector_index.storage_context.persist(persist_dir="./storage")
# Load index
from llama_index.core import load_index_from_storage, StorageContext
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
3. Query engines - Ask questions
# Basic query
query_engine = index.as_query_engine()
response = query_engine.query("What is the main topic?")
print(response)
# Streaming response
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("Explain quantum computing")
for text in response.response_gen:
    print(text, end="", flush=True)
# Custom configuration
query_engine = index.as_query_engine(
    similarity_top_k=3,       # Return top 3 chunks
    response_mode="compact",  # Or "tree_summarize", "simple_summarize"
    verbose=True
)
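Responses also expose the retrieved chunks, which is useful for showing citations; a minimal sketch of inspecting them:
response = query_engine.query("What is the main topic?")
for node_with_score in response.source_nodes:
    # Each entry pairs a retrieved chunk with its similarity score
    print(node_with_score.score, node_with_score.node.metadata)
    print(node_with_score.node.get_content()[:200])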
4. Retrievers - Find relevant chunks
# Vector retriever
retriever = index.as_retriever(similarity_top_k=5)
nodes = retriever.retrieve("machine learning")
# With metadata filtering (filters must be a MetadataFilters object, not a dict)
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter
retriever = index.as_retriever(
    similarity_top_k=3,
    filters=MetadataFilters(filters=[ExactMatchFilter(key="category", value="tutorial")])
)
# Custom retriever
from llama_index.core.retrievers import BaseRetriever
class CustomRetriever(BaseRetriever):
    def _retrieve(self, query_bundle):
        # Your custom retrieval logic; return a list of NodeWithScore
        nodes = []
        return nodes
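A retriever (built-in or custom) can be wrapped back into a query engine; a short sketch using RetrieverQueryEngine.from_args, which attaches a default response synthesizer:
from llama_index.core.query_engine import RetrieverQueryEngine
# Wrap any retriever in a full query engine
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query("machine learning")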
Agents with tools
Basic agent
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI
import asyncio
# Define tools (plain functions with docstrings become tools)
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b
# Create agent (recent versions construct FunctionAgent directly, without from_tools)
llm = OpenAI(model="gpt-4o")
agent = FunctionAgent(tools=[multiply, add], llm=llm)
# Use agent (workflow agents run asynchronously)
async def main():
    response = await agent.run("What is 25 * 17 + 142?")
    print(response)
asyncio.run(main())
RAG agent (document search + tools)
from llama_index.core.tools import QueryEngineTool
# Create index as before
index = VectorStoreIndex.from_documents(documents)
# Wrap query engine as tool
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="python_docs",
    description="Useful for answering questions about Python programming"
)
# Agent with document search + calculator
agent = FunctionAgent(tools=[query_tool, multiply, add], llm=llm)
# Agent decides when to search docs vs calculate
async def main():
    response = await agent.run("According to the docs, what is Python used for?")
    print(response)
asyncio.run(main())
Advanced RAG patterns
Chat engine (conversational)
# Chat with memory
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",  # Or "context", "react"
    verbose=True
)
# Multi-turn conversation
response1 = chat_engine.chat("What is Python?")
response2 = chat_engine.chat("Can you give examples?") # Remembers context
response3 = chat_engine.chat("What about web frameworks?")
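Memory can also be configured explicitly instead of relying on the default buffer; a sketch using ChatMemoryBuffer (the token limit is illustrative):
from llama_index.core.memory import ChatMemoryBuffer
# Bound the conversation history carried between turns
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)
chat_engine = index.as_chat_engine(chat_mode="context", memory=memory)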
Metadata filtering
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter
# Filter by metadata
filters = MetadataFilters(
    filters=[
        ExactMatchFilter(key="category", value="tutorial"),
        ExactMatchFilter(key="difficulty", value="beginner")
    ]
)
retriever = index.as_retriever(
    similarity_top_k=3,
    filters=filters
)
query_engine = index.as_query_engine(filters=filters)
Structured output
from pydantic import BaseModel
class Summary(BaseModel):
    title: str
    main_points: list[str]
    conclusion: str
# Get a structured response (output_cls is forwarded to the response synthesizer)
query_engine = index.as_query_engine(output_cls=Summary, response_mode="compact")
response = query_engine.query("Summarize the document")
summary = response.response  # Pydantic model instance
print(summary.title, summary.main_points)
Data ingestion patterns
Multiple file types
# Load all supported formats
documents = SimpleDirectoryReader(
    "./data",
    recursive=True,
    required_exts=[".pdf", ".docx", ".txt", ".md"]
).load_data()
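Chunking during ingestion is controlled by a node parser; a minimal sketch using SentenceSplitter (the chunk sizes here are illustrative, not tuned values):
from llama_index.core import Settings
from llama_index.core.node_parser import SentenceSplitter
# Applies globally; from_documents also accepts transformations=[splitter]
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=50)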
Web scraping
from llama_index.readers.web import BeautifulSoupWebReader
reader = BeautifulSoupWebReader()
documents = reader.load_data(urls=[
    "https://docs.python.org/3/tutorial/",
    "https://docs.python.org/3/library/"
])
Database
from llama_index.readers.database import DatabaseReader
reader = DatabaseReader(
    uri="postgresql://user:pass@localhost/db"
)
documents = reader.load_data(query="SELECT * FROM articles")
API endpoints
from llama_index.readers.json import JSONReader
import requests
# JSONReader parses local JSON files, so fetch the endpoint to disk first
with open("data.json", "w") as f:
    f.write(requests.get("https://api.example.com/data.json").text)
documents = JSONReader().load_data("data.json")
Vector store integrations
Chroma (local)
from llama_index.vector_stores.chroma import ChromaVectorStore
import chromadb
# Initialize Chroma
db = chromadb.PersistentClient(path="./chroma_db")
collection = db.get_or_create_collection("my_collection")
# Create vector store
vector_store = ChromaVectorStore(chroma_collection=collection)
# Use in index
from llama_index.core import StorageContext
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
Pinecone (cloud)
from llama_index.vector_stores.pinecone import PineconeVectorStore
from pinecone import Pinecone
# Initialize Pinecone (the older pinecone.init(...) API is deprecated)
pc = Pinecone(api_key="your-key")
pinecone_index = pc.Index("my-index")
# Create vector store
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
FAISS (fast)
from llama_index.vector_stores.faiss import FaissVectorStore
import faiss
# Create FAISS index
d = 1536  # Embedding dimension; must match your model (1536 for OpenAI's text-embedding-ada-002)
faiss_index = faiss.IndexFlatL2(d)
vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
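FAISS keeps the index in memory, so persistence matters; a sketch of saving and reloading, assuming the documented FaissVectorStore.from_persist_dir helper:
# Save (writes the FAISS index alongside the docstore)
index.storage_context.persist(persist_dir="./storage")
# Reload later
from llama_index.core import load_index_from_storage
vector_store = FaissVectorStore.from_persist_dir("./storage")
storage_context = StorageContext.from_defaults(
    vector_store=vector_store, persist_dir="./storage"
)
index = load_index_from_storage(storage_context=storage_context)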
Customization
Custom LLM
from llama_index.llms.anthropic import Anthropic
from llama_index.core import Settings
# Set global LLM
Settings.llm = Anthropic(model="claude-sonnet-4-5-20250929")
# Now all queries use Anthropic
query_engine = index.as_query_engine()
Custom embeddings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
# Use HuggingFace embeddings
Settings.embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/all-mpnet-base-v2"
)
index = VectorStoreIndex.from_documents(documents)
Custom prompt templates
from llama_index.core import PromptTemplate
qa_prompt = PromptTemplate(
    "Context: {context_str}\n"
    "Question: {query_str}\n"
    "Answer the question based only on the context. "
    "If the answer is not in the context, say 'I don't know'.\n"
    "Answer: "
)
query_engine = index.as_query_engine(text_qa_template=qa_prompt)
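Templates can also be swapped on an existing engine via the prompt-management API; a sketch (the key shown is the standard one for the text-QA template):
# Inspect which prompts the engine currently uses
print(query_engine.get_prompts().keys())
# Replace the text-QA template in place
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": qa_prompt}
)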
Multi-modal RAG
Image + text
*Content truncated.*