Llama.cpp Bridge

By openconstruct

Connects Claude Desktop to your local llama.cpp models, letting you chat with local LLMs directly through Claude's interface.

Local (stdio)
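Under the hood, the bridge runs as a local stdio MCP server and forwards chat requests to a running llama-server instance over HTTP. The sketch below shows roughly what that forwarding step looks like, assuming llama-server is listening on its default port (8080) and exposing its OpenAI-compatible `/v1/chat/completions` endpoint; the helper name and defaults are illustrative, not the bridge's actual code.

```python
import json
import urllib.request

LLAMA_SERVER = "http://127.0.0.1:8080"  # assumed default llama-server address

def forward_chat(prompt: str, temperature: float = 0.7, max_tokens: int = 256) -> str:
    """Forward one chat turn to the local llama-server (OpenAI-compatible endpoint)."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,   # generation parameters exposed by the bridge
        "max_tokens": max_tokens,
    }
    req = urllib.request.Request(
        f"{LLAMA_SERVER}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(forward_chat("Say hello from a local model."))
```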

What it does

  • Chat with local llama.cpp models through Claude Desktop
  • Control generation parameters like temperature and max_tokens
  • Monitor llama-server health and status (see the sketch after this list)
  • Track performance metrics and token usage
  • Test model capabilities with built-in tools
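The health and usage bullets map onto two pieces of llama-server's HTTP API: the `/health` endpoint, which returns 200 once the model is loaded, and the `usage` block included in OpenAI-compatible completion responses. Here is a minimal sketch of checking both, assuming the same local server address as above; the function names are illustrative, not the bridge's actual tools.

```python
import urllib.request

LLAMA_SERVER = "http://127.0.0.1:8080"  # assumed default llama-server address

def server_is_healthy() -> bool:
    """Check llama-server's /health endpoint; True once the model is loaded and ready."""
    try:
        with urllib.request.urlopen(f"{LLAMA_SERVER}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, or a non-200 status (e.g. 503 while loading)
        return False

def token_usage(response_body: dict) -> dict:
    """Pull token counts from an OpenAI-compatible chat completion response."""
    usage = response_body.get("usage", {})
    return {
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }

if __name__ == "__main__":
    print("healthy:", server_is_healthy())
```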

Best for

  • AI researchers running local models
  • Privacy-focused users avoiding cloud APIs
  • Developers integrating local LLMs with desktop workflows

No cloud API keys required, full conversation support, and built-in testing tools.
