Kubernetes

feiskyer

Provides direct CLI-level access to Kubernetes clusters for managing resources, debugging pods, and monitoring cluster state. Uses your existing kubeconfig to connect to any K8s cluster.

Enables direct Kubernetes cluster management through kubectl command execution, providing a bridge for real-time resource administration within conversations.


What it does

  • Apply YAML manifests to create or update resources
  • Get logs from running pods
  • Execute commands inside pods
  • List and inspect any K8s resource type
  • Monitor cluster events and node status

Best for

  • DevOps engineers managing K8s deployments
  • Developers debugging application pods
  • Platform teams monitoring cluster health

  • Works with existing kubeconfig
  • 9 comprehensive tools

About Kubernetes

Kubernetes is a community-built MCP server published by feiskyer that provides AI assistants with tools and capabilities via the Model Context Protocol. It manages Kubernetes clusters in real time using kubectl commands for seamless resource administration directly within conversations. It is categorized under cloud infrastructure and developer tools.

How to install

You can install Kubernetes in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

License

Kubernetes is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

mcp-kubernetes-server


The mcp-kubernetes-server is a server implementing the Model Context Protocol (MCP) to enable AI assistants (such as Claude, Cursor, and GitHub Copilot) to interact with Kubernetes clusters. It acts as a bridge, translating natural language requests from these assistants into Kubernetes operations and returning the results.

It allows AI assistants to:

  • Query Kubernetes resources
  • Execute kubectl commands
  • Manage Kubernetes clusters through natural language interactions
  • Diagnose and interpret the states of Kubernetes resources

How It Works

The mcp-kubernetes-server acts as an intermediary between AI assistants (that support the Model Context Protocol) and your Kubernetes cluster. It receives natural language requests from these assistants, translates them into kubectl commands or direct Kubernetes API calls, and executes them against the target cluster. The server then processes the results and returns a structured response, enabling seamless interaction with your Kubernetes environment via the AI assistant.
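The translation step described above can be sketched roughly as follows. This is an illustrative outline, not the server's actual source; the helper names are ours:

```python
import shlex
import subprocess

def build_kubectl_argv(command: str) -> list[str]:
    """Turn a tool-call payload like 'get pods -n kube-system' into an argv."""
    return ["kubectl"] + shlex.split(command)

def run_kubectl(command: str) -> str:
    """Execute the command and return its output as the MCP tool result.
    Requires kubectl on PATH and a valid kubeconfig."""
    result = subprocess.run(build_kubectl_argv(command),
                            capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else result.stderr

print(build_kubectl_argv("get pods -n kube-system"))
# -> ['kubectl', 'get', 'pods', '-n', 'kube-system']
# A real call (needs a cluster): run_kubectl("get nodes")
```

Splitting with shlex rather than passing a shell string is the safer pattern here, since tool input ultimately originates from a language model.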


How To Install

Prerequisites

Before installing mcp-kubernetes-server, ensure you have the following:

  • A working Kubernetes cluster.
  • A kubeconfig file correctly configured to access your Kubernetes cluster (the server requires this file for interaction).
  • The kubectl command-line tool installed and in your system's PATH (used by the server to execute many Kubernetes commands).
  • The helm command-line tool installed and in your system's PATH (used by the server for Helm chart operations).
  • Python >= 3.11, if you plan to install and run the server directly using uvx (without Docker).

Docker

Get the kubeconfig file for your Kubernetes cluster and set it up in the mcpServers configuration (replace the src path with your kubeconfig path):

{
  "mcpServers": {
    "kubernetes": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--mount", "type=bind,src=/home/username/.kube/config,dst=/home/mcp/.kube/config",
        "ghcr.io/feiskyer/mcp-kubernetes-server"
      ]
    }
  }
}
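If you are provisioning several machines, the same snippet can be generated programmatically. The helper below reproduces the structure shown above; the function name is ours, not part of the server:

```python
import json

def docker_mcp_config(kubeconfig_path: str) -> dict:
    """Build the mcpServers entry for the Docker-based setup, binding the
    given kubeconfig into the container at the path the server expects."""
    mount = f"type=bind,src={kubeconfig_path},dst=/home/mcp/.kube/config"
    return {
        "mcpServers": {
            "kubernetes": {
                "command": "docker",
                "args": ["run", "-i", "--rm", "--mount", mount,
                         "ghcr.io/feiskyer/mcp-kubernetes-server"],
            }
        }
    }

print(json.dumps(docker_mcp_config("/home/alice/.kube/config"), indent=2))
```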

UVX

To run the server using uvx (a tool included with uv, the Python package manager), first ensure uv is installed:

Install uv

Install uv if it's not installed yet and add it to your PATH, e.g. using curl:

# For Linux and MacOS
curl -LsSf https://astral.sh/uv/install.sh | sh

Install kubectl

Install kubectl if it's not installed yet and add it to your PATH, e.g.

# For Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# For MacOS
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"

Install helm

Install helm if it's not installed yet and add it to your PATH, e.g.

curl -sSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
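Since the server shells out to kubectl and helm (and you need uv to run it via uvx), a quick way to confirm the prerequisites are discoverable is a PATH check like this (an illustrative helper, not part of the package):

```python
import shutil

def check_prereqs(tools=("kubectl", "helm", "uv")) -> dict:
    """Map each required binary to whether it can be found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

# e.g. {'kubectl': True, 'helm': True, 'uv': True} on a fully set-up machine
print(check_prereqs())
```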

Configure your MCP servers in Claude Desktop, Cursor, ChatGPT Copilot, GitHub Copilot, and other supported AI clients, e.g.:

{
  "mcpServers": {
    "kubernetes": {
      "command": "uvx",
      "args": [
        "mcp-kubernetes-server"
      ],
      "env": {
        "KUBECONFIG": "<your-kubeconfig-path>"
      }
    }
  }
}

MCP Server Options

Environment Variables

  • KUBECONFIG: Path to your kubeconfig file, e.g. /home/<username>/.kube/config.
Command-line Arguments

usage: main.py [-h] [--disable-kubectl] [--disable-helm] [--disable-write]
               [--disable-delete] [--transport {stdio,sse,streamable-http}]
               [--host HOST] [--port PORT]

MCP Kubernetes Server

options:
  -h, --help            show this help message and exit
  --disable-kubectl     Disable kubectl command execution
  --disable-helm        Disable helm command execution
  --disable-write       Disable write operations
  --disable-delete      Disable delete operations
  --transport {stdio,sse,streamable-http}
                        Transport mechanism to use (stdio or sse or streamable-http)
  --host HOST           Host to use for sse or streamable-http server
  --port PORT           Port to use for sse or streamable-http server
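The interface above can be modelled with a standard argparse parser. The sketch below is a reconstruction from the help text, not the server's source; the defaults for --host and --port are our guesses based on the example log output later in this page:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Mirror the mcp-kubernetes-server CLI options shown in the help text."""
    parser = argparse.ArgumentParser(description="MCP Kubernetes Server")
    parser.add_argument("--disable-kubectl", action="store_true",
                        help="Disable kubectl command execution")
    parser.add_argument("--disable-helm", action="store_true",
                        help="Disable helm command execution")
    parser.add_argument("--disable-write", action="store_true",
                        help="Disable write operations")
    parser.add_argument("--disable-delete", action="store_true",
                        help="Disable delete operations")
    parser.add_argument("--transport", default="stdio",
                        choices=["stdio", "sse", "streamable-http"])
    parser.add_argument("--host", default="127.0.0.1")  # assumed default
    parser.add_argument("--port", type=int, default=8000)  # assumed default
    return parser

args = build_parser().parse_args(
    ["--disable-write", "--disable-delete", "--transport", "sse"])
print(args.transport, args.disable_write)  # -> sse True
```

Combining --disable-write and --disable-delete yields a read-only server, a sensible starting point when first connecting an assistant to a production cluster.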

Usage

Once the mcp-kubernetes-server is installed and configured in your AI client (using the JSON snippets provided in the 'How to install' section for Docker or UVX), you can start interacting with your Kubernetes cluster through natural language. For example, you can ask:

What is the status of my Kubernetes cluster?

What is wrong with my nginx pod?

Verifying the server: If you're running the server with stdio transport (common for uvx direct execution), the AI client will typically start and manage the server process. For sse or streamable-http transports, the server runs independently. You would have started it manually (e.g., uvx mcp-kubernetes-server --transport sse) and should see output in your terminal indicating it's running (e.g., INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)). You can also check for any error messages in the server terminal if the AI client fails to connect.
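For the sse and streamable-http transports, you can also check programmatically that the server is accepting connections. This small probe is our own illustration; the host and port match the example log line above:

```python
import socket

def server_listening(host: str = "127.0.0.1", port: int = 8000) -> bool:
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=1.0):
            return True
    except OSError:
        return False
```

A True result only confirms the port is open; if the AI client still fails to connect, check the server terminal for protocol-level errors.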

Available Tools

The mcp-kubernetes-server provides a comprehensive set of tools for interacting with Kubernetes clusters, categorized by operation type:

Command Tools

These tools provide general command execution capabilities:

  • kubectl — Run any kubectl command and return the output. Parameters: command (string)
  • helm — Run any helm command and return the output. Parameters: command (string)
Read Tools

These tools provide read-only access to Kubernetes resources:

  • k8s_get — Fetch any Kubernetes object (or list) as JSON string. Parameters: resource (string), name (string), namespace (string)
  • k8s_describe — Show detailed information about a specific resource or group of resources. Parameters: resource_type (string), name (string, optional), namespace (string, optional), selector (string, optional), all_namespaces (boolean, optional)
  • k8s_logs — Print the logs for a container in a pod. Parameters: pod_name (string), container (string, optional), namespace (string, optional), tail (integer, optional), previous (boolean, optional), since (string, optional), timestamps (boolean, optional), follow (boolean, optional)
  • k8s_events — List events in the cluster. Parameters: namespace (string, optional), all_namespaces (boolean, optional), field_selector (string, optional), resource_type (string, optional), resource_name (string, optional), sort_by (string, optional), watch (boolean, optional)
  • k8s_apis — List all available APIs in the Kubernetes cluster. Parameters: none
  • k8s_crds — List all Custom Resource Definitions (CRDs) in the Kubernetes cluster. Parameters: none
  • k8s_top_nodes — Display resource usage (CPU/memory) of nodes. Parameters: sort_by (string, optional)
  • k8s_top_pods — Display resource usage (CPU/memory) of pods. Parameters: namespace (string, optional), all_namespaces (boolean, optional), sort_by (string, optional), selector (string, optional)
  • k8s_rollout_status — Get the status of a rollout for a deployment, daemonset, or statefulset. Parameters: resource_type (string), name (string), namespace (string, optional)
  • k8s_rollout_history — Get the rollout history for a deployment, daemonset, or statefulset. Parameters: resource_type (string), name (string), namespace (string, optional), revision (string, optional)
  • k8s_auth_can_i — Check whether an action is allowed. Parameters: verb (string), resource (string), subresource (string, optional), namespace (string, optional), name (string, optional)
  • k8s_auth_whoami — Show the subject that you are currently authenticated as. Parameters: none
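A read tool like k8s_get plausibly maps onto a kubectl get ... -o json invocation. The sketch below shows that mapping; it is illustrative only, and the real implementation may call the Kubernetes API directly instead:

```python
def k8s_get_command(resource: str, name: str = "", namespace: str = "") -> list[str]:
    """Build the kubectl invocation a k8s_get tool call would correspond to."""
    argv = ["kubectl", "get", resource]
    if name:
        argv.append(name)          # a specific object rather than a list
    if namespace:
        argv += ["-n", namespace]  # omitted -> current kubeconfig namespace
    return argv + ["-o", "json"]   # JSON output for a structured MCP response

print(k8s_get_command("deployment", "nginx", "default"))
# -> ['kubectl', 'get', 'deployment', 'nginx', '-n', 'default', '-o', 'json']
```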
Write Tools

These tools provide create, update, or patch operations on Kubernetes resources:

  • k8s_create — Create a Kubernetes resource from YAML/JSON content. Parameters: yaml_content (string), namespace (string, optional)
  • k8s_apply — Apply a configuration to a resource by filename or stdin. Parameters: yaml_content (string), namespace (string, optional)
  • k8s_expose — Expose a resource as a new Kubernetes service. Parameters: resource_type (string), name (string), port (integer), target_port (integer, optional), namespace (string, optional), protocol (string, optional), service_name (string, optional), labels (object, optional), selector (string, optional), type (string, optional)
  • k8s_run — Create and

README truncated. View full README on GitHub.

Related Skills

kubernetes-architect

Expert Kubernetes architect specializing in cloud-native infrastructure, advanced GitOps workflows (ArgoCD/Flux), and enterprise container orchestration. Masters EKS/AKS/GKE, service mesh (Istio/Linkerd), progressive delivery, multi-tenancy, and platform engineering. Handles security, observability, cost optimization, and developer experience. Use PROACTIVELY for K8s architecture, GitOps implementation, or cloud-native platform design.

mlops-engineer

Build comprehensive ML pipelines, experiment tracking, and model registries with MLflow, Kubeflow, and modern MLOps tools. Implements automated training, deployment, and monitoring across cloud platforms. Use PROACTIVELY for ML infrastructure, experiment management, or pipeline automation.

langchain-deploy-integration

Deploy LangChain integrations to production environments. Use when deploying to cloud platforms, configuring containers, or setting up production infrastructure for LangChain apps. Trigger with phrases like "deploy langchain", "langchain production deploy", "langchain cloud run", "langchain docker", "langchain kubernetes".

customerio-deploy-pipeline

Deploy Customer.io integrations to production. Use when deploying to cloud platforms, setting up production infrastructure, or automating deployments. Trigger with phrases like "deploy customer.io", "customer.io production", "customer.io cloud run", "customer.io kubernetes".

devops-iac-engineer

Implements infrastructure as code using Terraform, Kubernetes, and cloud platforms. Designs scalable architectures, CI/CD pipelines, and observability solutions. Provides security-first DevOps practices and site reliability engineering guidance.

deepgram-deploy-integration

Deploy Deepgram integrations to production environments. Use when deploying to cloud platforms, configuring production infrastructure, or setting up Deepgram in containerized environments. Trigger with phrases like "deploy deepgram", "deepgram docker", "deepgram kubernetes", "deepgram production deploy", "deepgram cloud".
