
Galileo
Official
Connects to Galileo's platform for managing LLM evaluation datasets, monitoring application performance, and running experiments on language models.
Remote
What it does
- Create and manage evaluation datasets
- Set up LLM experiments and A/B tests
- Monitor model performance and observability metrics
- Analyze application logs and traces
- Manage prompt templates and versions
- Access step-by-step integration guides
Best for
- ML engineers evaluating LLM applications
- Teams running production language model services
- Developers optimizing prompt performance
- Organizations monitoring AI application quality
Full evaluation and observability platform
Streamable HTTP transport
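Because this is a remote server using Streamable HTTP transport, an MCP-capable client can typically be pointed at it with a small configuration entry rather than a local install. The sketch below is an assumption, not Galileo's documented setup: the endpoint URL and the auth header are placeholders, and the `mcpServers` shape follows the convention used by common MCP clients. Consult Galileo's integration guides for the actual endpoint and authentication details.

```json
{
  "mcpServers": {
    "galileo": {
      "type": "http",
      "url": "https://<your-galileo-mcp-endpoint>/mcp",
      "headers": {
        "Authorization": "Bearer <GALILEO_API_KEY>"
      }
    }
  }
}
```

With an entry like this in place, the client handles the Streamable HTTP session itself and exposes the server's dataset, experiment, and observability tools alongside its other MCP servers.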