Scorecard

Official MCP server by scorecard-ai

Tests and evaluates LLM applications by running automated test suites and collecting performance metrics. Helps developers measure accuracy, reliability, and quality of their AI systems.

Evaluate and optimize LLM systems with comprehensive testing and metrics


What it does

  • Run automated test suites against LLM applications
  • Collect performance and accuracy metrics
  • Generate evaluation reports with detailed analytics
  • Compare model performance across different versions
  • Track quality metrics over time
  • Export test results in multiple formats

Best for

  • AI developers building LLM applications
  • Teams implementing continuous testing for AI systems
  • Organizations measuring LLM performance in production
  • Researchers comparing different language models
  • Comprehensive LLM evaluation framework
  • Automated testing workflows

About Scorecard

Scorecard is an official MCP server published by scorecard-ai that provides AI assistants with tools and capabilities via the Model Context Protocol. It evaluates and optimizes LLM systems with thorough testing, actionable metrics, and performance insights. It is categorized under developer tools.

How to install

You can install Scorecard in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server supports remote connections over HTTP, so no local installation is required.
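For clients that accept a JSON MCP configuration, a remote server entry typically looks like the sketch below. The exact config file location and key names vary by client, and the URL shown is a placeholder, not the real endpoint; use the install panel or the official docs for the actual value:

```json
{
  "mcpServers": {
    "scorecard": {
      "url": "https://<scorecard-mcp-server-url>/mcp"
    }
  }
}
```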

License

Scorecard is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

Scorecard TypeScript API Library


This library provides convenient access to the Scorecard REST API from server-side TypeScript or JavaScript.

The REST API documentation can be found on docs.scorecard.io. The full API of this library can be found in api.md.

It is generated with Stainless.

MCP Server

Use the Scorecard MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.


Note: You may need to set environment variables in your MCP client.

Installation

npm install scorecard-ai

Usage

The full API of this library can be found in api.md.

import Scorecard, { runAndEvaluate } from 'scorecard-ai';

async function runSystem(testcaseInput) {
  // Replace with a call to your LLM system
  return { response: testcaseInput.original.toUpperCase() };
}

const client = new Scorecard({
  apiKey: process.env['SCORECARD_API_KEY'],
});

const run = await runAndEvaluate(
  client,
  {
    projectId: '314', // Scorecard Project
    testsetId: '246', // Scorecard Testset
    metricIds: ['789', '101'], // Scorecard Metrics
    system: runSystem, // Your LLM system
  }
);

console.log(`Go to ${run.url} to view your Run's scorecard.`);

Request & Response types

This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:

import Scorecard from 'scorecard-ai';

const client = new Scorecard({
  apiKey: process.env['SCORECARD_API_KEY'], // This is the default and can be omitted
});

const testset: Scorecard.Testset = await client.testsets.get('246');

Documentation for each method, request param, and response field are available in docstrings and will appear on hover in most modern editors.

Handling errors

When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of APIError will be thrown:

const testset = await client.testsets.get('246').catch(async (err) => {
  if (err instanceof Scorecard.APIError) {
    console.log(err.status); // 400
    console.log(err.name); // BadRequestError
    console.log(err.headers); // {server: 'nginx', ...}
  } else {
    throw err;
  }
});

Error codes are as follows:

Status Code | Error Type
------------|-------------------------
400         | BadRequestError
401         | AuthenticationError
403         | PermissionDeniedError
404         | NotFoundError
422         | UnprocessableEntityError
429         | RateLimitError
>=500       | InternalServerError
N/A         | APIConnectionError
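The table above can be expressed as a small lookup helper when you want to log or branch on status codes in your own code. This is a convenience sketch, not part of the SDK:

```typescript
// Maps an HTTP status code to the Scorecard SDK error class name,
// per the table above. Not part of the SDK itself.
function errorNameForStatus(status: number): string {
  const exact: Record<number, string> = {
    400: 'BadRequestError',
    401: 'AuthenticationError',
    403: 'PermissionDeniedError',
    404: 'NotFoundError',
    422: 'UnprocessableEntityError',
    429: 'RateLimitError',
  };
  if (status >= 500) return 'InternalServerError';
  // Fall back to the base class name for unlisted statuses.
  return exact[status] ?? 'APIError';
}
```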

Retries

Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.

You can use the maxRetries option to configure or disable this:

// Configure the default for all requests:
const client = new Scorecard({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await client.testsets.get('246', {
  maxRetries: 5,
});
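The exact backoff schedule is an internal detail of the library, but "short exponential backoff" generally means a delay that roughly doubles per attempt, up to a cap. An illustrative sketch of that idea (the base delay and cap here are assumptions, not the library's actual values):

```typescript
// Illustrative only: a capped exponential backoff delay (in ms)
// for a given retry attempt. The library's real schedule may differ.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 8000): number {
  // Delay doubles each attempt: 500 ms, 1000 ms, 2000 ms, ...
  const exponential = baseMs * 2 ** attempt;
  return Math.min(exponential, capMs);
}

// With maxRetries = 2 (the default), a failing request would wait
// roughly backoffDelayMs(0) and then backoffDelayMs(1) between attempts.
```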

Timeouts

Requests time out after 1 minute by default. You can configure this with a timeout option:

// Configure the default for all requests:
const client = new Scorecard({
  timeout: 20 * 1000, // 20 seconds (default is 1 minute)
});

// Override per-request:
await client.testsets.get('246', {
  timeout: 5 * 1000,
});

On timeout, an APIConnectionTimeoutError is thrown.

Note that requests which time out will be retried twice by default.

Auto-pagination

List methods in the Scorecard API are paginated. You can use the for await … of syntax to iterate through items across all pages:

async function fetchAllTestcases(testsetId) {
  const allTestcases = [];
  // Automatically fetches more pages as needed.
  for await (const testcase of client.testcases.list(testsetId, { limit: 30 })) {
    allTestcases.push(testcase);
  }
  return allTestcases;
}

Alternatively, you can request a single page at a time:

let page = await client.testcases.list('246', { limit: 30 });
for (const testcase of page.data) {
  console.log(testcase);
}

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
  page = await page.getNextPage();
  // ...
}

Advanced Usage

Accessing raw Response data (e.g., headers)

The "raw" Response returned by fetch() can be accessed through the .asResponse() method on the APIPromise type that all methods return. This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.

You can also use the .withResponse() method to get the raw Response along with the parsed data. Unlike .asResponse() this method consumes the body, returning once it is parsed.

const client = new Scorecard();

const response = await client.testsets.get('246').asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object

const { data: testset, response: raw } = await client.testsets.get('246').withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(testset.id);

Logging

[!IMPORTANT] All log messages are intended for debugging only. The format and content of log messages may change between releases.

Log levels

The log level can be configured in two ways:

  1. Via the SCORECARD_LOG environment variable
  2. Using the logLevel client option (overrides the environment variable if set)

import Scorecard from 'scorecard-ai';

const client = new Scorecard({
  logLevel: 'debug', // Show all log messages
});

Available log levels, from most to least verbose:

  • 'debug' - Show debug messages, info, warnings, and errors
  • 'info' - Show info messages, warnings, and errors
  • 'warn' - Show warnings and errors (default)
  • 'error' - Show only errors
  • 'off' - Disable all logging

At the 'debug' level, all HTTP requests and responses are logged, including headers and bodies. Some authentication-related headers are redacted, but sensitive data in request and response bodies may still be visible.

Custom logger

By default, this library logs to globalThis.console. You can also provide a custom logger. Most logging libraries are supported, including pino, winston, bunyan, consola, signale, and @std/log. If your logger doesn't work, please open an issue.

When providing a custom logger, the logLevel option still controls which messages are emitted; messages below the configured level will not be sent to your logger.

import Scorecard from 'scorecard-ai';
import pino from 'pino';

const logger = pino();

const client = new Scorecard({
  logger: logger.child({ name: 'Scorecard' }),
  logLevel: 'debug', // Send all messages to pino, allowing it to filter
});
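The level filtering described above amounts to an ordered severity comparison. A minimal sketch of the idea, not the library's internal code:

```typescript
type LogLevel = 'debug' | 'info' | 'warn' | 'error' | 'off';

// Severity rank: lower = more verbose. 'off' outranks everything.
const severity: Record<LogLevel, number> = {
  debug: 0,
  info: 1,
  warn: 2,
  error: 3,
  off: 4,
};

// A message is emitted only when its severity is at or above the
// configured level (and logging is not turned off).
function shouldLog(configured: LogLevel, message: Exclude<LogLevel, 'off'>): boolean {
  return configured !== 'off' && severity[message] >= severity[configured];
}
```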

Making custom/undocumented requests

This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.

Undocumented endpoints

To make requests to undocumented endpoints, you can use client.get, client.post, and other HTTP verbs. Options on the client, such as retries, will be respected when making these requests.

await client.post('/some/path', {
  body: { some_prop: 'foo' },
  query: { some_query_arg: 'bar' },
});

README truncated. View full README on GitHub.

Related Skills

ui-design-system

UI design system toolkit for Senior UI Designer including design token generation, component documentation, responsive design calculations, and developer handoff tools. Use for creating design systems, maintaining visual consistency, and facilitating design-dev collaboration.

ai-sdk

Answer questions about the AI SDK and help build AI-powered features. Use when developers: (1) Ask about AI SDK functions like generateText, streamText, ToolLoopAgent, embed, or tools, (2) Want to build AI agents, chatbots, RAG systems, or text generation features, (3) Have questions about AI providers (OpenAI, Anthropic, Google, etc.), streaming, tool calling, structured output, or embeddings, (4) Use React hooks like useChat or useCompletion. Triggers on: "AI SDK", "Vercel AI SDK", "generateText", "streamText", "add AI to my app", "build an agent", "tool calling", "structured output", "useChat".

api-documenter

Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals. Use PROACTIVELY for API documentation or developer portal creation.

openai-knowledge

Use when working with the OpenAI API (Responses API) or OpenAI platform features (tools, streaming, Realtime API, auth, models, rate limits, MCP) and you need authoritative, up-to-date documentation (schemas, examples, limits, edge cases). Prefer the OpenAI Developer Documentation MCP server tools when available; otherwise guide the user to enable `openaiDeveloperDocs`.

cli-builder

Guide for building TypeScript CLIs with Bun. Use when creating command-line tools, adding subcommands to existing CLIs, or building developer tooling. Covers argument parsing, subcommand patterns, output formatting, and distribution.

ydc-ai-sdk-integration

Integrate Vercel AI SDK applications with You.com tools (web search, AI agent, content extraction). Use when developer mentions AI SDK, Vercel AI SDK, generateText, streamText, or You.com integration with AI SDK.
