
AWS S3
Connects to Amazon S3 buckets to list, browse, and retrieve file contents with automatic text extraction from PDFs and other document types.
What it does
- List S3 buckets with filtering
- Browse objects within buckets
- Retrieve file contents from S3 objects
- Extract text from PDFs and documents
- Filter objects by prefix
- Handle both text and binary files
About AWS S3
AWS S3 is a community-built MCP server published by samuraikun that provides AI assistants with tools and capabilities via the Model Context Protocol. It gives access to AWS S3 storage to list buckets, browse objects, and extract text from files such as PDFs. It is categorized under cloud infrastructure and file systems.
How to install
You can install AWS S3 in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
AWS S3 is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
S3 MCP Server
An Amazon S3 Model Context Protocol (MCP) server that provides tools for interacting with S3 buckets and objects.
https://github.com/user-attachments/assets/d05ff0f1-e2bf-43b9-8d0c-82605abfb666
Features
🔌 MCP Transport Support
- ✅ STDIO Transport - Direct process communication for Claude Desktop
- ✅ HTTP Transport - REST API with Server-Sent Events for web clients
- ✅ Streamable HTTP - Real-time streaming for responsive interactions
🛠️ Available Tools
- ✅ list-buckets - List accessible S3 buckets with filtering
- ✅ list-objects - Browse objects within buckets with prefix filtering
- ✅ get-object - Retrieve object contents (text/binary support)
🐳 Deployment Options
- ✅ Local Node.js - Direct execution with npm/node
- ✅ Docker CLI - Containerized deployment with custom configuration
- ✅ Docker Compose - Full stack with MinIO for local testing
- ✅ MCP Inspector - Built-in debugging and testing interface
Overview
This MCP server allows Large Language Models (LLMs) like Claude to interact with AWS S3 storage. It provides tools for:
- Listing available S3 buckets
- Listing objects within a bucket
- Retrieving object contents
The server is built using TypeScript and the MCP SDK, providing a secure and standardized way for LLMs to interface with S3.
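Concretely, an MCP client drives these tools with JSON-RPC `tools/call` requests over the chosen transport. As a rough illustration only, here is the shape such a request might take for `list-objects`; the argument names (`bucket`, `prefix`) are assumptions for illustration, not taken from this server's published schema:

```typescript
// Sketch of an MCP tools/call request a client might send.
// Argument names ("bucket", "prefix") are illustrative assumptions.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "list-objects",
    arguments: {
      bucket: "bucket1",  // must be in the S3_BUCKETS allow-list
      prefix: "reports/", // optional key-prefix filter
    },
  },
};

console.log(JSON.stringify(request));
```

The server validates the bucket against its configured allow-list before touching S3, which is what keeps the LLM's access scoped.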
Installation
Prerequisites
- Node.js 18 or higher
- npm or yarn
- AWS credentials configured (either through environment variables or AWS credentials file)
- Docker (optional, for containerized setup)
Setup
- Install via npm:

```bash
# Install globally via npm
npm install -g aws-s3-mcp

# Or as a dependency in your project
npm install aws-s3-mcp
```

- If building from source:

```bash
# Clone the repository
git clone https://github.com/samuraikun/aws-s3-mcp.git
cd aws-s3-mcp

# Install dependencies and build
npm install
npm run build
```

- Configure AWS credentials and S3 access:

Create a .env file with your AWS configuration:

```
AWS_REGION=us-east-1
S3_BUCKETS=bucket1,bucket2,bucket3
S3_MAX_BUCKETS=5
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
```

Or set these as environment variables.
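In practice a server like this would load the .env file with a library such as dotenv; purely to illustrate the format above, here is a minimal hand-rolled parser (an assumption for illustration, not the server's actual loader):

```typescript
// Minimal .env-style parser, for illustration only
// (real code would typically use the dotenv package).
function parseEnv(contents: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // ignore malformed lines
    out[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return out;
}

const config = parseEnv(`AWS_REGION=us-east-1
S3_BUCKETS=bucket1,bucket2,bucket3
S3_MAX_BUCKETS=5`);
console.log(config.S3_BUCKETS); // bucket1,bucket2,bucket3
```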
Configuration
The server can be configured using the following environment variables:
| Variable | Description | Default |
|---|---|---|
| AWS_REGION | AWS region where your S3 buckets are located | us-east-1 |
| S3_BUCKETS | Comma-separated list of allowed S3 bucket names | (empty) |
| S3_MAX_BUCKETS | Maximum number of buckets to return in listing | 5 |
| AWS_ACCESS_KEY_ID | AWS access key (if not using default credentials) | (from AWS config) |
| AWS_SECRET_ACCESS_KEY | AWS secret key (if not using default credentials) | (from AWS config) |
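The S3_BUCKETS allow-list and S3_MAX_BUCKETS cap plausibly combine like this when serving a list-buckets call (a sketch of the behavior described in the table, not the server's actual code):

```typescript
// Hypothetical sketch of applying S3_BUCKETS and S3_MAX_BUCKETS
// to a raw bucket listing, per the configuration table above.
function filterBuckets(
  allBuckets: string[],
  s3Buckets: string,  // comma-separated allow-list; empty = allow all
  maxBuckets: number, // cap on returned results
): string[] {
  const allowed = s3Buckets
    .split(",")
    .map((b) => b.trim())
    .filter((b) => b.length > 0);
  const visible = allowed.length > 0
    ? allBuckets.filter((b) => allowed.includes(b))
    : allBuckets;
  return visible.slice(0, maxBuckets);
}

console.log(filterBuckets(["a", "b", "c", "d"], "b,c", 5)); // ["b", "c"]
```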
Running the Server
Direct Node.js Execution
The server runs with HTTP transport by default, making it easy to test and debug:
```bash
# Using npx (HTTP transport by default)
npx aws-s3-mcp

# If installed globally (HTTP transport)
npm install -g aws-s3-mcp
aws-s3-mcp

# If running from cloned repository (HTTP transport)
npm start

# Or directly (HTTP transport)
node dist/index.js

# Explicit HTTP transport
node dist/index.js --http

# STDIO transport (for Claude Desktop integration)
node dist/index.js --stdio
```
When running with HTTP transport (default), the server will start on port 3000 and provide:
- Health check endpoint: http://localhost:3000/health
- MCP endpoint: http://localhost:3000/mcp
- SSE endpoint: http://localhost:3000/sse
Docker Setup 🐳
You can run the S3 MCP server as a Docker container using either Docker CLI or Docker Compose.
Using Docker CLI
- Build the Docker image:

```bash
docker build -t aws-s3-mcp .
```

- Run the container with environment variables:

```bash
# Option 1: Pass environment variables directly
docker run -d \
  -e AWS_REGION=us-east-1 \
  -e S3_BUCKETS=bucket1,bucket2 \
  -e S3_MAX_BUCKETS=5 \
  -e AWS_ACCESS_KEY_ID=your-access-key \
  -e AWS_SECRET_ACCESS_KEY=your-secret-key \
  --name aws-s3-mcp-server \
  aws-s3-mcp

# Option 2: Use environment variables from .env file
docker run -d \
  --env-file .env \
  --name aws-s3-mcp-server \
  aws-s3-mcp
```

- Check container logs:

```bash
docker logs aws-s3-mcp-server
```

- Stop and remove the container:

```bash
docker stop aws-s3-mcp-server
docker rm aws-s3-mcp-server
```
Note: For HTTP transport (default), add -p 3000:3000 to expose the HTTP port. For STDIO transport (Claude Desktop), no port mapping is needed as it uses Docker exec for direct communication.
Using Docker Compose
- Build and start the Docker container:

```bash
# Build and start the container
docker compose up -d s3-mcp

# View logs
docker compose logs -f s3-mcp
```

- To stop the container:

```bash
docker compose down
```
Using Docker with MinIO for Testing
The Docker Compose setup includes a MinIO service for local testing:
```bash
# Start MinIO and the MCP server
docker compose up -d

# Access MinIO console at http://localhost:9001
# Default credentials: minioadmin/minioadmin
```
The MinIO service automatically creates two test buckets (test-bucket-1 and test-bucket-2) and uploads sample files for testing.
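To point your own S3 client at this MinIO instance instead of real AWS, you would override the endpoint. As a sketch, the options object you might pass to `new S3Client(...)` from @aws-sdk/client-s3 could look like the following; the endpoint and path-style settings reflect standard MinIO usage, and the credentials match the defaults above:

```typescript
// Client options you'd pass to `new S3Client(...)` (@aws-sdk/client-s3)
// to talk to the local MinIO container instead of real AWS.
const minioClientConfig = {
  region: "us-east-1",
  endpoint: "http://localhost:9000", // MinIO API port (the console is on 9001)
  forcePathStyle: true,              // MinIO expects path-style addressing
  credentials: {
    accessKeyId: "minioadmin",
    secretAccessKey: "minioadmin",
  },
};

console.log(minioClientConfig.endpoint);
```

Note that the MinIO API listens on port 9000, while the web console shown above is on 9001.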
Debugging with MCP Inspector 🔍
The run-inspector.sh script provides an easy way to test and debug the S3 MCP server using the MCP Inspector. It supports multiple transport types and deployment modes.
Quick Start
```bash
# Show all available options
./run-inspector.sh --help

# Run locally with HTTP transport (default)
./run-inspector.sh

# Run with Docker Compose and MinIO for testing
./run-inspector.sh --docker-compose
```
Transport Types
The server supports two transport protocols:
HTTP Transport
- Best for: Web-based debugging, external client connections
- Provides: REST API endpoints, Server-Sent Events (SSE)
- Ports: 3000 (HTTP), 3001+ (Inspector UI)
STDIO Transport
- Best for: Direct process communication, Claude Desktop integration
- Provides: Standard input/output communication
- Ports: None (direct process communication)
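Internally, selecting between these two transports from the --http/--stdio flags can be as simple as the following sketch (an illustration of the documented defaults, not the server's actual flag handling):

```typescript
// Hypothetical flag parsing mirroring the --http / --stdio options above.
// HTTP is the documented default when the server runs without a flag.
type Transport = "http" | "stdio";

function pickTransport(argv: string[]): Transport {
  if (argv.includes("--stdio")) return "stdio";
  return "http"; // default; also covers an explicit --http
}

console.log(pickTransport(["node", "dist/index.js", "--stdio"])); // stdio
```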
Usage Examples
1. Local Development (HTTP)
```bash
# Default: HTTP transport for local debugging
./run-inspector.sh

# Explicit HTTP transport
./run-inspector.sh --http
```
This will:
- Build the project if needed
- Start the MCP server with HTTP transport on port 3000
- Launch MCP Inspector in your browser
- Provide endpoints:
  - Health check: http://localhost:3000/health
  - MCP endpoint: http://localhost:3000/mcp
  - SSE endpoint: http://localhost:3000/sse
2. Local Development (STDIO)
```bash
# STDIO transport for local debugging
./run-inspector.sh --stdio
```
This mode directly connects the MCP Inspector to the server process using standard input/output.
3. Docker with Real AWS (STDIO)
```bash
# Create .env file with your AWS credentials
cp .env.example .env
# Edit .env with your AWS credentials

# Run with Docker using STDIO transport (default for Docker)
./run-inspector.sh --docker
```
This will:
- Build the Docker image if needed
- Start a container with your AWS credentials
- Connect MCP Inspector via Docker exec
4. Docker with Real AWS (HTTP)
```bash
# Run with Docker using HTTP transport
./run-inspector.sh --docker --http
```
This will:
- Start a containerized HTTP server on port 3000
- Connect MCP Inspector to the HTTP endpoint
- Useful for testing HTTP-based integrations
5. Docker Compose with MinIO (Testing)
```bash
# Run with MinIO for local testing (no AWS credentials needed)
./run-inspector.sh --docker-compose
```
This will:
- Start MinIO S3-compatible storage
- Create test buckets: test-bucket-1 and test-bucket-2
- Upload sample files for testing
- Start the S3 MCP server connected to MinIO
- Launch MCP Inspector
- Provide MinIO Web UI at http://localhost:9001 (login: minioadmin/minioadmin)
Advanced Options
Force Rebuild
```bash
# Force Docker image rebuild
./run-inspector.sh --docker --force-rebuild
./run-inspector.sh --docker-compose --force-rebuild
```
Debugging Tips
- Check container logs:

```bash
# For Docker CLI mode
docker logs aws-s3-mcp-server

# For Docker Compose mode
docker compose logs s3-mcp
```

- Test endpoints manually:

```bash
# Health check
curl http://localhost:3000/health

# MinIO health (Docker Compose)
curl http://localhost:9000/minio/health/live
```

- Access MinIO Web UI (Docker Compose only):
  - URL: http://localhost:9001
  - Username: minioadmin
  - Password: minioadmin
Cleanup
```bash
# Stop and remove Docker containers
docker stop aws-s3-mcp-server && docker rm aws-s3-mcp-server

# Stop Docker Compose services
docker compose down
```
---
*README truncated. [View full README on GitHub](https://github.com/samuraikun/aws-s3-mcp).*