Confluent Cloud

Official
confluentinc

Manages Kafka topics, connectors, and Flink SQL statements in Confluent Cloud through natural language commands via REST APIs.

Local (stdio)

What it does

  • Create and manage Kafka topics
  • Configure data connectors
  • Execute Flink SQL statements
  • Query streaming data pipelines
  • Monitor Kafka cluster status
  • Manage schema registry objects

Best for

  • Data engineers building streaming pipelines
  • DevOps teams managing Kafka infrastructure
  • Analytics teams querying real-time data streams

It provides a natural language interface to Confluent Cloud and supports multiple AI clients (Claude, Goose).

About Confluent Cloud

Confluent Cloud is an official MCP server published by confluentinc that provides AI assistants with tools and capabilities via the Model Context Protocol. It lets you manage Kafka data streaming with Confluent Cloud APIs and streamline Kafka stream operations using natural language and REST APIs. It is categorized under cloud infrastructure and analytics data.

How to install

You can install Confluent Cloud in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
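
If your client is not listed in the install panel, you can also launch the server manually and register it as a stdio MCP server. This is a minimal sketch; it assumes the npm package name and the -e flag shown later in this README, and the path to your .env file will differ on your machine.

# Run the MCP server locally over stdio, loading Confluent credentials from a .env file
npx -y @confluentinc/mcp-confluent -e /path/to/.env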

License

Confluent Cloud is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

mcp-confluent

An MCP server implementation that enables AI assistants to interact with Confluent Cloud REST APIs. This server allows AI tools like Claude Desktop and Goose CLI to manage Kafka topics, connectors, and Flink SQL statements through natural language interactions.

Demo

Demo animations showing the server in use from the Goose CLI and Claude Desktop are included in the GitHub README.

User Guide

Getting Started

  1. Create a .env file: Copy the provided .env.example file to .env in the root of your project:

    cp .env.example .env
    
  2. Populate the .env file: Fill in the necessary values for your Confluent Cloud environment. See the Configuration section for details on each variable.

  3. Install Node.js (if not already installed)

    • We recommend using NVM (Node Version Manager) to manage Node.js versions
    • Install and use Node.js:
    nvm install 22
    nvm use 22
    
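
With the steps above done, a quick sanity check is to confirm the Node.js version and start the server against your populated .env file. This is a sketch; the start command mirrors the npx invocation used elsewhere in this README.

# Confirm the Node.js version activated by nvm
node --version

# Start the MCP server over stdio using your .env configuration
npx @confluentinc/mcp-confluent -e .env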

Configuration

Copy .env.example to .env in the root directory and fill in your values. See the example structure below:

Example .env file structure
# .env file
BOOTSTRAP_SERVERS="pkc-v12gj.us-east4.gcp.confluent.cloud:9092"
KAFKA_API_KEY="..."
KAFKA_API_SECRET="..."
KAFKA_REST_ENDPOINT="https://pkc-v12gj.us-east4.gcp.confluent.cloud:443"
KAFKA_CLUSTER_ID=""
KAFKA_ENV_ID="env-..."
FLINK_ENV_ID="env-..."
FLINK_ORG_ID=""
FLINK_REST_ENDPOINT="https://flink.us-east4.gcp.confluent.cloud"
FLINK_ENV_NAME=""
FLINK_DATABASE_NAME=""
FLINK_API_KEY=""
FLINK_API_SECRET=""
FLINK_COMPUTE_POOL_ID="lfcp-..."
TABLEFLOW_API_KEY=""
TABLEFLOW_API_SECRET=""
CONFLUENT_CLOUD_API_KEY=""
CONFLUENT_CLOUD_API_SECRET=""
CONFLUENT_CLOUD_REST_ENDPOINT="https://api.confluent.cloud"
SCHEMA_REGISTRY_API_KEY="..."
SCHEMA_REGISTRY_API_SECRET="..."
SCHEMA_REGISTRY_ENDPOINT="https://psrc-zv01y.northamerica-northeast2.gcp.confluent.cloud"
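
Before handing the configuration to an AI client, you can verify the Kafka credentials with a direct REST call. This is a minimal sketch that assumes the standard Confluent Kafka REST v3 path (/kafka/v3/clusters); substitute the endpoint, key, and secret from your .env file.

# A 200 response listing clusters means the endpoint, API key, and secret are consistent
curl -s -u "$KAFKA_API_KEY:$KAFKA_API_SECRET" "$KAFKA_REST_ENDPOINT/kafka/v3/clusters"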

Prerequisites & Setup for Tableflow Commands

To execute Tableflow commands and manage the resources they interact with (e.g., data storage such as AWS S3 and metadata catalogs such as AWS Glue), certain IAM (Identity and Access Management) permissions and configurations are essential.

It is crucial to set up the necessary roles and policies in your cloud environment (e.g., AWS) and link them correctly within Confluent Cloud. This ensures your Flink SQL cluster, which powers Tableflow, has the required authorization to perform operations on your behalf.

Please refer to the Confluent Cloud documentation for detailed instructions on setting up these permissions and integrating with custom storage and Glue.

Ensuring these prerequisites are met will prevent authorization errors when the mcp-server attempts to provision or manage Tableflow-enabled tables.

Authentication for HTTP/SSE Transports

When using HTTP or SSE transports, the MCP server requires API key authentication to prevent unauthorized access and protect against DNS rebinding attacks. This is enabled by default.

Generating an API Key

Generate a secure API key using the built-in utility:

npx @confluentinc/mcp-confluent --generate-key

This will output a 64-character key generated using a cryptographically secure random source:

Generated MCP API Key:
================================================================
a1b2c3d4e5f6...your-64-char-key-here...
================================================================

Configuring Authentication

Add the generated key to your .env file:

# MCP Server Authentication (required for HTTP/SSE transports)
MCP_API_KEY=your-generated-64-char-key-here

Making Authenticated Requests

Include the API key in the cflt-mcp-api-Key header for all HTTP/SSE requests:

curl -H "cflt-mcp-api-Key: your-api-key" http://localhost:8080/mcp

DNS Rebinding Protection

The server includes additional protections against DNS rebinding attacks:

  • Host Header Validation: Only requests with allowed Host headers are accepted. Configure allowed hosts if needed:

    # Allow additional hosts (comma-separated)
    MCP_ALLOWED_HOSTS=localhost,127.0.0.1,myhost.local

  • Localhost Binding: The server binds to 127.0.0.1 by default (not 0.0.0.0), which helps prevent the MCP server from being exposed to the internet.
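
Putting these pieces together, a locked-down HTTP startup might look like the sketch below. It is built only from the flags and variables documented in this README; the key value is a placeholder.

# .env additions for the HTTP transport
MCP_API_KEY=your-generated-64-char-key-here
MCP_ALLOWED_HOSTS=localhost,127.0.0.1
HTTP_PORT=8080

# Launch over HTTP; the server stays bound to 127.0.0.1 unless HTTP_HOST is changed
npx @confluentinc/mcp-confluent -e .env --transport http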

Disabling Authentication (Development Only)

For local development, you can disable authentication:

# Via CLI flag
npx @confluentinc/mcp-confluent -e .env --transport http --disable-auth

# Or via environment variable
MCP_AUTH_DISABLED=true

Warning: Never disable authentication in production or when the server is network-accessible.

Environment Variables Reference

| Variable | Description | Default Value | Required |
| --- | --- | --- | --- |
| HTTP_HOST | Host to bind for HTTP transport. Defaults to localhost only for security. | "127.0.0.1" | Yes |
| HTTP_MCP_ENDPOINT_PATH | HTTP endpoint path for MCP transport (e.g., '/mcp') (string) | "/mcp" | Yes |
| HTTP_PORT | Port to use for HTTP transport (number (min: 0)) | 8080 | Yes |
| LOG_LEVEL | Log level for application logging (trace, debug, info, warn, error, fatal) | "info" | Yes |
| MCP_API_KEY | API key for HTTP/SSE authentication. Generate using --generate-key. Required when auth is enabled. | | No* |
| MCP_AUTH_DISABLED | Disable authentication for HTTP/SSE transports. WARNING: Only use in development environments. | false | No |
| MCP_ALLOWED_HOSTS | Comma-separated list of allowed Host header values for DNS rebinding protection. | "localhost,127.0.0.1" | No |
| SSE_MCP_ENDPOINT_PATH | SSE endpoint path for establishing SSE connections (e.g., '/sse', '/events') (string) | "/sse" | Yes |
| SSE_MCP_MESSAGE_ENDPOINT_PATH | SSE message endpoint path for receiving messages (e.g., '/messages', '/events/messages') (string) | "/messages" | Yes |
| BOOTSTRAP_SERVERS | List of Kafka broker addresses in the format host1:port1,host | | |

README truncated. View full README on GitHub.

Related Skills

data-engineer

Build scalable data pipelines, modern data warehouses, and real-time streaming architectures. Implements Apache Spark, dbt, Airflow, and cloud-native data platforms. Use PROACTIVELY for data pipeline design, analytics infrastructure, or modern data stack implementation.

hugging-face-cli

Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create repos, manage local cache, or run compute jobs on HF infrastructure. Covers authentication, file transfers, repository creation, cache operations, and cloud compute.

hybrid-cloud-networking

Configure secure, high-performance connectivity between on-premises infrastructure and cloud platforms using VPN and dedicated connections. Use when building hybrid cloud architectures, connecting data centers to cloud, or implementing secure cross-premises networking.

database-admin

Expert database administrator specializing in modern cloud databases, automation, and reliability engineering. Masters AWS/Azure/GCP database services, Infrastructure as Code, high availability, disaster recovery, performance optimization, and compliance. Handles multi-cloud strategies, container databases, and cost optimization. Use PROACTIVELY for database architecture, operations, or reliability engineering.

hugging-face-jobs

This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks. Should be invoked for tasks involving cloud compute, GPU workloads, or when users mention running jobs on Hugging Face infrastructure without local setup.

hugging-face-model-trainer

This skill should be used when users want to train or fine-tune language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, and model persistence. Should be invoked for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without local GPU setup.
