
nothumansearch-mcp
Search engine for AI agents that finds websites and APIs ranked by agentic readiness score (0-100). Uses a remote HTTP endpoint for discovering agent-compatible services and tools.
About nothumansearch-mcp
nothumansearch-mcp is a community-built MCP server published by unitedideas that provides AI assistants with tools and capabilities via the Model Context Protocol. It is a search engine for AI agents: it finds websites and APIs ranked by agentic readiness score, based on signals such as llms.txt, OpenAPI, and MCP support. It is categorized under databases. This server exposes 3 tools that AI clients can invoke during conversations and coding sessions.
How to install
You can install nothumansearch-mcp in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs remotely over the streamable HTTP transport, so no local installation is required.
License
nothumansearch-mcp is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (3)
- search_agents: Find agent-ready websites, APIs, and services by keyword, category, or minimum readiness score
- get_site_details: Get the full 7-signal readiness report for a specific domain
- get_stats: Get index statistics including total sites, average score, and top category
Not Human Search — MCP server
A remote MCP server for searching the agentic web. Discover APIs, services, and tools AI agents can actually call — ranked by agentic readiness score (0-100).
Live endpoint: https://nothumansearch.ai/mcp (streamable-http transport, no install)
Registry listing: ai.nothumansearch/search
Install in one command
```shell
claude mcp add --transport http nothumansearch https://nothumansearch.ai/mcp
```
Any MCP-compatible client (Claude Code, Cline, Cursor, Zed, Goose, etc.) can connect to the same endpoint.
Tools
| Tool | Purpose |
|---|---|
| search_agents | Find agent-ready websites, APIs, and services by keyword, category, or minimum readiness score. |
| get_site_details | Get the full 7-signal readiness report for a specific domain. |
| get_stats | Index statistics: total sites, average score, top category. |
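As a sketch of how a client would invoke one of these tools, the snippet below builds a standard MCP `tools/call` request (JSON-RPC 2.0). The argument names (`query`, `min_score`) are assumptions for illustration; the real argument schema comes from the server's `tools/list` response.

```python
import json

def tool_call(tool, arguments, request_id=1):
    """Build an MCP tools/call request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical arguments -- not confirmed by this README.
payload = tool_call("search_agents", {"query": "email api", "min_score": 70})
```

A client would POST this body to the live endpoint over the streamable HTTP transport.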
What "agentic readiness" means
Every site in the index is scored 0-100 based on 7 signals that determine how cleanly an AI agent can interact with it:
- llms.txt (25 pts)
- ai-plugin.json (20 pts)
- OpenAPI spec (20 pts)
- Structured API (15 pts)
- MCP server (10 pts)
- robots.txt AI bot rules (5 pts)
- Schema.org markup (5 pts)
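Assuming the score is a simple sum of these weights (they total exactly 100), the scoring can be sketched as:

```python
# Signal weights from the list above; all seven together sum to 100.
WEIGHTS = {
    "llms.txt": 25,
    "ai-plugin.json": 20,
    "openapi_spec": 20,
    "structured_api": 15,
    "mcp_server": 10,
    "robots_ai_rules": 5,
    "schema_org": 5,
}

def readiness_score(detected):
    """Sum the weights of the detected signals (assumed aggregation)."""
    return sum(WEIGHTS[s] for s in detected)

score = readiness_score(["llms.txt", "openapi_spec", "mcp_server"])  # 25+20+10 = 55
```

The additive model here is an assumption; the index may weight or combine signals differently in practice.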
Other interfaces
- Website: https://nothumansearch.ai
- REST search API:
GET https://nothumansearch.ai/api/v1/search?q=<query> - On-demand check (CI-friendly):
POST https://nothumansearch.ai/api/v1/check - Embeddable badge:
https://nothumansearch.ai/badge/{domain}.svg - Submit a site:
POST https://nothumansearch.ai/api/v1/submit
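The REST search endpoint can be driven with plain URL construction. Only the q parameter is documented above, so any additional parameters would be assumptions:

```python
from urllib.parse import urlencode

API_BASE = "https://nothumansearch.ai/api/v1"

def search_url(query):
    """Build the documented GET /api/v1/search?q=<query> URL."""
    return f"{API_BASE}/search?{urlencode({'q': query})}"

url = search_url("email api")
# -> https://nothumansearch.ai/api/v1/search?q=email+api
```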
Manifest
```json
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
  "name": "ai.nothumansearch/search",
  "title": "Not Human Search",
  "description": "Search engine for AI agents. Find websites and APIs ranked by agentic readiness.",
  "version": "1.0.0",
  "websiteUrl": "https://nothumansearch.ai",
  "remotes": [
    { "type": "streamable-http", "url": "https://nothumansearch.ai/mcp" }
  ]
}
```
Deeper docs
See USAGE.md for a practical guide: tool arguments, recipes ("find me an email API that agents can actually call"), scoring details, and design principles.
License
The MCP server source lives in a private repo. This README is MIT-licensed — use the claude mcp add command freely.