Claude patent-search skill: 10 prior-art and freedom-to-operate workflows
Ten real prior-art, FTO, and patent-landscape workflows — novelty search, competitor sweep, citation graph, family search, claim-1 diff, and a patentability draft — each as a single Claude prompt with the exact PatentsView API call it produces.
The skill is patent-search by RobThePCGuy. It calls the PatentsView PatentSearch API (search.patentsview.org/api/v1/patent/), which is free with a USPTO-issued key. Every cookbook entry returns a runnable Python snippet and a result file you can hand to a registered patent attorney.
Already know what skills are? Skip to the cookbook. First time? Read the explainer then come back. Need the install? It’s on the /skills/patent-search page.

On this page · 21 sections
- What this skill does
- The cookbook
- Install + README
- Watch it built
- 01 · Prior-art search for a single invention disclosure
- 02 · Freedom-to-operate (FTO) sweep across competitor portfolios
- 03 · Patent landscape report by CPC class
- 04 · Citation graph around a focal patent
- 05 · Inventor lookup by name
- 06 · Assignee portfolio summary
- 07 · Family search across jurisdictions
- 08 · Side-by-side claim-1 comparison
- 09 · Auto-extract independent claims from N patents
- 10 · Patentability opinion draft
- Community signal
- The contrarian take
- Real tools shipped
- Gotchas
- Pairs well with
- FAQ
- Sources
What this skill actually does
Sixty seconds of context before the cookbook — what the patent-search skill is, what Claude returns when you invoke it, and the one thing it does NOT do for you.
“Advanced prior art search using the PatentsView API. Use this skill when users need to search for patents, perform prior art searches, analyze patent landscapes, or find patents by inventor, title, date range, or technical fields.”
— RobThePCGuy, the skill author · /skills/patent-search
What Claude returns
When triggered, Claude returns a self-contained Python script and a result file. The script POSTs to `https://search.patentsview.org/api/v1/patent/` with an `X-Api-Key` header (your `PATENTSVIEW_API_KEY`) and a JSON body of `q` (criteria), `f` (fields), `s` (sort), and `o` (pagination). Operators include `_eq`, `_gt`, `_gte`, `_begins`, `_text_any`, `_text_all`, `_text_phrase`, `_and`, `_or`, `_not`. The response is JSON with `count`, `total_hits`, and a `patents` array. Every cookbook entry below renders the result as either a markdown table, a CSV edge list, or a JSON file plus a `https://patents.google.com/patent/US{patent_id}` deep-link for human review.
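The request/response shape described above can be sketched as a small helper. This is illustrative only — `build_body` and `search` are names invented here, not part of the skill:

```python
import os

URL = "https://search.patentsview.org/api/v1/patent/"

def build_body(q, fields, sort=None, size=25):
    """Assemble the four-field PatentSearch POST body: q, f, s, o."""
    body = {"q": q, "f": fields, "o": {"size": size}}
    if sort:
        body["s"] = sort
    return body

def search(body):
    """POST the body and return (total_hits, patents)."""
    import requests  # lazy import keeps the module importable without it
    r = requests.post(URL, json=body, timeout=30,
                      headers={"X-Api-Key": os.environ["PATENTSVIEW_API_KEY"]})
    r.raise_for_status()
    data = r.json()
    return data["total_hits"], data["patents"]
```

Usage: `search(build_body({"_eq": {"patent_id": "10891295"}}, ["patent_id", "patent_title"]))`.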
What it does NOT do
It does not replace a registered patent attorney's opinion, and it does not search foreign filings or non-patent literature on its own. PatentsView is US-only; use cases 7 and 10 explicitly call this out.
How you trigger it
- "Run a prior-art search for the disclosure in disclosure.md."
- "List every active US patent owned by Tesla in CPC class B60W30."
- "Pull the citation graph for US10891295 and write it as a CSV."

Cost when idle
~120 tokens at idle (the skill name + description in the system prompt). Body and PatentsView reference snippets load only when triggered.
The cookbook
Each entry below is a workflow you could run this week. They run roughly in order of analyst gravity — the early ones are single-API-call novelty checks, the middle ones lean on CPC + assignee filters, and the later ones compose multiple queries into citation graphs, family maps, and a draft patentability opinion. Every entry pairs with one or two skills already on mcp.directory.
Install + README
If the skill isn’t on your machine yet, here’s the one-liner. The full install panel (Codex, Copilot, Antigravity variants) is on the skill page — the same UI is embedded below.
One-line install · by RobThePCGuy
mkdir -p .claude/skills/patent-search && curl -L -o skill.zip "https://mcp.directory/api/skills/download/297" && unzip -o skill.zip -d .claude/skills/patent-search && rm skill.zip

Installs to `.claude/skills/patent-search`.
Watch it being built
April Mosby’s 7-step prior-art workflow is the human reference behind every prompt below. Watch the human version first; then read the cookbook entries as the same workflow encoded as PatentsView API calls.
Prior-art search for a single invention disclosure
Run a novelty search against a one-paragraph invention disclosure. Returns 25 candidate patents ranked by overlap with the disclosure's technical terms, each with a Google Patents deep-link.
For: Solo inventors and engineers checking novelty before a provisional filing.
The prompt
Read `disclosure.md` and extract the 6–10 most discriminating technical terms. Build a PatentsView `/patent/` POST with `_text_any` over `patent_title` and `patent_abstract`, restricted to `patent_date >= 2014-01-01`. Return 25 candidates sorted by `patent_date` desc, with `patent_id`, `patent_title`, `assignees.assignee_organization`, and a `https://patents.google.com/patent/US{patent_id}` link. Save the JSON as `out/prior-art.json` and a markdown table as `out/prior-art.md`.
What the script looks like
import os, requests, json

HEADERS = {"X-Api-Key": os.environ["PATENTSVIEW_API_KEY"]}
TERMS = ["adaptive cruise", "lidar fusion", "lane curvature"]
body = {
    "q": {"_and": [
        {"_text_any": {"patent_abstract": " ".join(TERMS)}},
        {"_gte": {"patent_date": "2014-01-01"}},
    ]},
    "f": ["patent_id", "patent_title", "patent_date", "assignees"],
    "s": [{"patent_date": "desc"}],
    "o": {"size": 25},
}
r = requests.post("https://search.patentsview.org/api/v1/patent/",
                  headers=HEADERS, json=body, timeout=30)
json.dump(r.json(), open("out/prior-art.json", "w"), indent=2)
One-line tweak
Swap `_text_any` for `_text_all` to require every discriminating term in the abstract — narrower hits but fewer false positives on broad terminology.
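As a sketch, the narrowed query body differs from the one above only in the operator (same field names and date filter):

```python
# Narrow variant: _text_all requires every term to appear in the abstract
TERMS = ["adaptive cruise", "lidar fusion", "lane curvature"]
strict_q = {"_and": [
    {"_text_all": {"patent_abstract": " ".join(TERMS)}},
    {"_gte": {"patent_date": "2014-01-01"}},
]}
```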
Freedom-to-operate (FTO) sweep across competitor portfolios
List every active US patent owned by a named set of competitors that touches a target CPC subclass. Each row shows assignee, claim-1 stub, expiration year, and an FTO-risk flag.
For: Product managers and IP analysts clearing a launch in a crowded category.
The prompt
Build an FTO sweep for assignees in `competitors.json` (Tesla, Waymo, Cruise) intersected with CPC subclass `B60W30/00`. Use `_and` over `assignees.assignee_organization` (`_text_any`) and `cpc_at_issue.cpc_subclass_id` (`_eq`), filtered to `patent_type = utility` and `patent_date >= 2010-01-01`. Return the first 100 records with claim-1 text and a derived `expires_on` (issue date + 20 years). Flag rows where `expires_on > today` as `risk = active`.
What the script looks like
# URL and HEADERS as defined in use case 1
body = {
    "q": {"_and": [
        {"_text_any": {"assignees.assignee_organization": "Tesla Waymo Cruise"}},
        {"_eq": {"cpc_at_issue.cpc_subclass_id": "B60W30"}},
        {"_eq": {"patent_type": "utility"}},
        {"_gte": {"patent_date": "2010-01-01"}},
    ]},
    "f": ["patent_id", "patent_title", "patent_date",
          "assignees.assignee_organization", "claims.claim_text"],
    "s": [{"patent_date": "desc"}],
    "o": {"size": 100, "after": None},
}
r = requests.post(URL, headers=HEADERS, json=body, timeout=30).json()
One-line tweak
Replace the CPC subclass with `cpc_at_issue.cpc_group_id` for one CPC level deeper — cuts the candidate set by ~10x when the subclass is too noisy.
Patent landscape report by CPC class
Produce a 12-page landscape report for one CPC class — top 20 assignees by filing volume, year-over-year filing curve, and the 10 most-cited patents in the class.
For: Strategy and corp-dev teams sizing a technology area before an acquisition or a new product line.
The prompt
Generate a landscape report for CPC class `H04L9` (cryptographic mechanisms). Run three PatentsView queries: (a) top 20 `assignees.assignee_organization` by patent count over 2014–2024 sorted desc, (b) annual patent counts grouped by `patent_year` for the same class, (c) top 10 patents by `cited_by_us_patents.patent_id` count in the class. Write the result as `landscape-H04L9.md` with three tables and a one-paragraph executive summary.
What the script looks like
for year in range(2014, 2025):
    body = {
        "q": {"_and": [
            {"_eq": {"cpc_at_issue.cpc_subclass_id": "H04L9"}},
            {"_eq": {"patent_year": year}},
        ]},
        "f": ["patent_id"],
        "o": {"size": 1},  # we only need total_hits
    }
    r = requests.post(URL, headers=HEADERS, json=body, timeout=30).json()
    print(year, r["total_hits"])
One-line tweak
Loop over CPC subclasses (`H04L9`, `H04L29`, `G06F21`) and stitch the result into a multi-class landscape — same query body, different `_eq` value.
Citation graph around a focal patent
Build a two-hop citation graph from a focal patent. Outbound: every patent the focal cites. Inbound: every patent that cites the focal. Each edge is a row in a CSV ready for Gephi.
For: IP analysts mapping how an invention propagates across an industry.
The prompt
Take patent `US10891295` as the focal node. Use the PatentsView `/patent/` endpoint to fetch its `cited_patents.patent_id` array (outbound) and its `cited_by_us_patents.patent_id` array (inbound). For each inbound and outbound patent, write a row `(focal_id, neighbor_id, direction)` to `out/edges.csv`. Stop at depth 1 — do not recurse on the neighbors.
What the script looks like
body = {
    "q": {"_eq": {"patent_id": "10891295"}},
    "f": ["patent_id", "patent_title",
          "cited_patents.patent_id",
          "cited_by_us_patents.patent_id"],
}
focal = requests.post(URL, headers=HEADERS, json=body, timeout=30).json()["patents"][0]
with open("out/edges.csv", "w") as fh:
    fh.write("source,target,direction\n")
    for c in focal["cited_patents"]:
        fh.write(f"{focal['patent_id']},{c['patent_id']},out\n")
    for c in focal["cited_by_us_patents"]:
        fh.write(f"{c['patent_id']},{focal['patent_id']},in\n")
One-line tweak
Recurse one more level on the inbound neighbors only — that's where forward influence concentrates and where most acquisition leads hide.
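A sketch of that second inbound hop, assuming the same URL and key setup as the snippets above; `inbound_ids`, `edge_rows`, and `two_hop_inbound` are illustrative names, not skill API:

```python
import csv, os, time

URL = "https://search.patentsview.org/api/v1/patent/"

def inbound_ids(patent_id):
    """One API call: IDs of patents that cite `patent_id`."""
    import requests  # lazy import; key read from the environment as above
    body = {"q": {"_eq": {"patent_id": patent_id}},
            "f": ["patent_id", "cited_by_us_patents.patent_id"]}
    r = requests.post(URL, json=body, timeout=30,
                      headers={"X-Api-Key": os.environ["PATENTSVIEW_API_KEY"]})
    pats = r.json()["patents"]
    cited_by = (pats[0].get("cited_by_us_patents") or []) if pats else []
    return [c["patent_id"] for c in cited_by]

def edge_rows(focal_id, hop1, hop2_map):
    """Flatten focal <- hop1 <- hop2 citations into (source, target, depth)."""
    rows = [(n, focal_id, 1) for n in hop1]
    for n, citers in hop2_map.items():
        rows += [(nn, n, 2) for nn in citers]
    return rows

def two_hop_inbound(focal_id, out_path="out/edges-2hop.csv"):
    hop1 = inbound_ids(focal_id)
    hop2 = {}
    for n in hop1:
        time.sleep(1.4)  # stay under the 45 req/min cap
        hop2[n] = inbound_ids(n)
    with open(out_path, "w", newline="") as fh:
        w = csv.writer(fh)
        w.writerow(["source", "target", "depth"])
        w.writerows(edge_rows(focal_id, hop1, hop2))
```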
Inventor lookup by name
Find every patent associated with a named inventor, ordered by issue date. Returns assignee, title, and the inventor's location at filing — useful for hiring leads and for invalidity research.
For: Recruiters scouting expert witnesses; corp-dev teams diligencing a target company's key inventors.
The prompt
Find every patent listing inventor `Andrej Karpathy`. Use the PatentsView `/patent/` endpoint with `_and` over `inventors.inventor_name_first` (`_eq` Andrej) and `inventors.inventor_name_last` (`_eq` Karpathy). Return `patent_id`, `patent_title`, `patent_date`, `assignees.assignee_organization`, `inventors.inventor_state`. Sort by `patent_date` desc, page 100 at a time.
What the script looks like
body = {
    "q": {"_and": [
        {"_eq": {"inventors.inventor_name_first": "Andrej"}},
        {"_eq": {"inventors.inventor_name_last": "Karpathy"}},
    ]},
    "f": ["patent_id", "patent_title", "patent_date",
          "assignees.assignee_organization",
          "inventors.inventor_state"],
    "s": [{"patent_date": "desc"}],
    "o": {"size": 100},
}
r = requests.post(URL, headers=HEADERS, json=body, timeout=30).json()
for p in r["patents"]:
    print(p["patent_date"], p["patent_id"], p["patent_title"])
One-line tweak
Replace the exact-match `_eq` on first name with `_begins` to catch nickname variants (`Andy` vs `Andrew` vs `Andrej`) — useful when the inventor's name appears inconsistently across filings.
Assignee portfolio summary
Snapshot a single company's US patent portfolio: total count, filing curve, top CPC classes, top inventors, oldest active patent. Drops into a one-page brief.
For: Investors running IP diligence on a Series B startup or a public-company acquisition target.
The prompt
Build a portfolio summary for assignee `Stripe Inc`. Run four PatentsView queries: (1) total count of `patent_id` where `assignees.assignee_organization` contains 'Stripe'; (2) annual counts by `patent_year`; (3) top 5 `cpc_at_issue.cpc_subclass_id` by frequency; (4) top 5 `inventors.inventor_name_last` by count. Render the result as `out/stripe-portfolio.md`.
What the script looks like
body = {
    "q": {"_text_phrase": {"assignees.assignee_organization": "Stripe Inc"}},
    "f": ["patent_id", "patent_year",
          "cpc_at_issue.cpc_subclass_id",
          "inventors.inventor_name_last"],
    "o": {"size": 1000},  # 45 req/min — plan for paging on large portfolios
}
r = requests.post(URL, headers=HEADERS, json=body, timeout=30).json()
years = {}
for p in r["patents"]:
    years[p["patent_year"]] = years.get(p["patent_year"], 0) + 1
print(sorted(years.items()))
One-line tweak
Page through the full 1,000+ portfolio with `o.after` instead of one shot — the API caps single responses at 1,000 rows even when `total_hits` is higher.
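A paging loop under those constraints might look like this; `fetch_all` and `next_cursor` are illustrative names, and the assumption is that the `o.after` cursor is the sort-field value of the last row on the previous page:

```python
import os

URL = "https://search.patentsview.org/api/v1/patent/"

def next_cursor(page, sort_field):
    """o.after cursor for the next request: last row's sort-field value."""
    return page[-1][sort_field]

def fetch_all(base_body, sort_field="patent_id", page_size=1000, max_pages=50):
    """Cursor-page through a result set; the body must sort on sort_field."""
    import requests  # lazy import
    headers = {"X-Api-Key": os.environ["PATENTSVIEW_API_KEY"]}
    body = dict(base_body, s=[{sort_field: "asc"}], o={"size": page_size})
    rows = []
    for _ in range(max_pages):
        r = requests.post(URL, headers=headers, json=body, timeout=30)
        r.raise_for_status()
        page = r.json()["patents"] or []
        rows.extend(page)
        if len(page) < page_size:
            break  # short page means the result set is drained
        body["o"] = {"size": page_size, "after": next_cursor(page, sort_field)}
    return rows
```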
Family search across jurisdictions
From a US patent number, recover the family of related foreign filings — same priority date, same inventors, different countries. Returns one row per family member with country code and publication date.
For: FTO analysts confirming whether a US-only clearance is enough or whether EP / JP / CN counterparts are also active.
The prompt
Take focal patent `US9711050`. Use PatentsView's `/publication/` endpoint with `_eq` on `earliest_application.application_id` set to the focal's earliest application number. Return one row per `publication_country_code` with `publication_number` and `publication_date`. Save as `out/family-9711050.csv`.
What the script looks like
URL_PAT = "https://search.patentsview.org/api/v1/patent/"
URL_PUB = "https://search.patentsview.org/api/v1/publication/"
# Step 1: read focal's priority application number from /patent/
focal = requests.post(URL_PAT, headers=HEADERS, json={
    "q": {"_eq": {"patent_id": "9711050"}},
    "f": ["patent_id", "earliest_application.application_id"],
}, timeout=30).json()["patents"][0]
# Step 2: pivot on /publication/ for the family
fam = requests.post(URL_PUB, headers=HEADERS, json={
    "q": {"_eq": {"earliest_application.application_id":
                  focal["earliest_application"]["application_id"]}},
    "f": ["publication_number", "publication_country_code", "publication_date"],
    "s": [{"publication_date": "asc"}],
}, timeout=30).json()
print(len(fam["publications"]), "family members")
One-line tweak
Filter `publication_country_code in [EP, JP, CN, KR]` if you only care about the four jurisdictions where most FTO clearance actually matters.
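A sketch of that filter over the family rows returned above; `family_in` is an illustrative name:

```python
def family_in(publications, codes=("EP", "JP", "CN", "KR")):
    """Keep only family members published in the named jurisdictions."""
    return [p for p in publications
            if p.get("publication_country_code") in codes]
```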
Side-by-side claim-1 comparison
Compare the independent claim 1 of two patents. Output a markdown table with each clause aligned row-for-row, plus a 'novelty delta' column flagging clauses that exist in patent A but not in patent B.
For: Patent attorneys drafting an invalidity argument or an obviousness rejection response.
The prompt
Pull claim 1 text for `US10891295` and `US10963735` from the PatentsView `/patent/` endpoint with `f = ['claims.claim_text', 'claims.claim_number']`. Filter to `claim_number = 1`. Split each claim into clauses on the literal token `; `. Render a 3-column markdown table: `Clause`, `In US10891295`, `In US10963735`. Write to `out/claim-compare.md`.
What the script looks like
def claim1(patent_id):
    body = {
        "q": {"_and": [
            {"_eq": {"patent_id": patent_id}},
            {"_eq": {"claims.claim_number": "1"}},
        ]},
        "f": ["patent_id", "claims.claim_text", "claims.claim_number"],
    }
    r = requests.post(URL, headers=HEADERS, json=body, timeout=30).json()
    text = r["patents"][0]["claims"][0]["claim_text"]
    return [c.strip() for c in text.split("; ")]

a, b = claim1("10891295"), claim1("10963735")
for clause in sorted(set(a) | set(b)):
    print(f"| {clause} | {clause in a} | {clause in b} |")
One-line tweak
Replace the literal `; ` split with a dependent-clause regex (`r'\bwherein\b|\bcomprising\b'`) for cleaner alignment when the patent uses alternative drafting conventions.
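A minimal sketch of that regex-based splitter, assuming clauses break on `; ` or just before the drafting keywords `wherein` / `comprising`:

```python
import re

# Split on '; ' or on the whitespace immediately before a drafting keyword
CLAUSE_RE = re.compile(r";\s*|\s+(?=wherein\b|comprising\b)")

def clauses(claim_text):
    """Claim text -> trimmed, non-empty clause list."""
    return [c.strip() for c in CLAUSE_RE.split(claim_text) if c.strip()]
```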
Auto-extract independent claims from N patents
Loop over a list of patent numbers and return only the independent claims (claim 1 plus any other claim that doesn't reference an earlier one). Saves a JSON file per patent for downstream analysis.
For: Litigation support teams building a claim chart against an asserted patent set.
The prompt
Read `targets.txt` (one patent number per line). For each, fetch `claims.claim_text`, `claims.claim_number`, and `claims.claim_dependent` from PatentsView's `/patent/` endpoint. Keep only rows where `claim_dependent` is null or empty. Write `out/claims/{patent_id}.json` with the surviving independent claims plus their `claim_number`.
What the script looks like
import json, pathlib, time

for pid in pathlib.Path("targets.txt").read_text().splitlines():
    body = {
        "q": {"_eq": {"patent_id": pid.strip()}},
        "f": ["patent_id", "claims.claim_text",
              "claims.claim_number", "claims.claim_dependent"],
    }
    r = requests.post(URL, headers=HEADERS, json=body, timeout=30).json()
    claims = r["patents"][0]["claims"]
    independent = [c for c in claims if not c.get("claim_dependent")]
    pathlib.Path(f"out/claims/{pid.strip()}.json").write_text(
        json.dumps(independent, indent=2))
    time.sleep(1.4)  # 45 req/min cap
One-line tweak
Add a length guard — drop any independent claim under 200 characters; those are usually formal claims the drafter pruned, not real inventive substance.
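The guard is one extra predicate in the filter above — a sketch, with the 200-character floor as a tunable parameter and `substantive_independent` as an illustrative name:

```python
def substantive_independent(claims, min_chars=200):
    """Independent claims long enough to carry real inventive substance."""
    return [c for c in claims
            if not c.get("claim_dependent")
            and len(c.get("claim_text", "")) >= min_chars]
```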
Patentability opinion draft
Produce a one-page patentability opinion: a one-paragraph invention summary, a 5-row prior-art table, a novelty argument under 35 USC §102, and a non-obviousness argument under §103. Draft only — a registered attorney signs the final.
For: Solo founders prepping a provisional and weighing whether to spend $4–8k on a real attorney opinion.
The prompt
Read `disclosure.md`. Run the use-case-1 prior-art search. Take the top 5 hits. Write `out/opinion.md` with four sections: (1) Invention summary, 80 words. (2) Prior-art table (patent_id, title, assignee, key disclosure, distance from invention). (3) Novelty argument under 35 USC §102 — for each hit, name the missing limitation. (4) Non-obviousness under §103 — list two unexpected results the combination of references does not teach.
What the script looks like
# After running use-case-1 above, prior_art is a list of 5 candidate patents.
# disclosure_summary, summarize_abstract, distance are helpers Claude drafts.
opinion = []
opinion.append("# Patentability opinion (DRAFT — not legal advice)\n")
opinion.append(f"## Invention\n{disclosure_summary(open('disclosure.md').read())}\n")
opinion.append("## Prior art\n| Patent | Title | Assignee | Key disclosure | Distance |")
opinion.append("|---|---|---|---|---|")
for p in prior_art[:5]:
    opinion.append(f"| US{p['patent_id']} | {p['patent_title']} "
                   f"| {p['assignees'][0]['assignee_organization']} "
                   f"| {summarize_abstract(p)} | {distance(p, disclosure)} |")
pathlib.Path("out/opinion.md").write_text("\n".join(opinion))
One-line tweak
Append a §103 obviousness rejection-response template — pre-canned language for arguing teaching-away or unexpected results — so the founder ships a stronger draft to their attorney.
Community signal
Three voices from people running real patent-search work. The first names the keyword-collapse problem the cookbook’s text operators solve, the second is a registered attorney’s pragmatic limit on AI search, the third is the API publisher itself flagging that the legacy interface was sunset in February 2025.
“Keyword search seems to work up to a point, but breaks down when the same technical idea is described very differently across patents, papers, or standards documents.”
wireless_eng · Hacker News
An engineer asking on Hacker News how prior-art search actually works in practice — the keyword-collapse problem the PatentsView text operators (`_text_any`, `_text_all`, `_text_phrase`) were built to mitigate.
“AI tools can accelerate routine legal tasks like document review and prior art searching, but the strategic judgment that determines whether a patent will survive examination cannot be delegated to an AI system.”
Craige Thompson (Patent Attorney) · Blog
A registered patent attorney's pragmatic take — the cookbook treats the skill as draft assistance, never the final opinion. Use case 10 emits an explicit 'DRAFT — not legal advice' header for that reason.
“Support for Legacy API to End in February 2025. Switch to PatentSearch API Now.”
PatentsView (USPTO data partner) · Blog
Notice from PatentsView itself — the cookbook deliberately targets the new `/patent/` endpoint, not the deprecated v0.1 surface. If you find a 2023 tutorial pointing at `api.patentsview.org/patents/query`, ignore it.
The contrarian take
Not every patent attorney is bullish on AI-assisted prior-art search. The most cited critique on the topic comes from Samuel W. Apicelli (Duane Morris LLP):
“AI searches are only as good as the databases they access and the algorithms that power them. While a skilled human searcher knows to look in unexpected places, AI may not.”
Samuel W. Apicelli (Duane Morris LLP) · Blog
Duane Morris LLP — AI in Patent Prosecution: Judgment, Risk, and the Limits of Automation.
Fair concern, taken seriously. The cookbook is explicit about this: PatentsView covers US grants and applications only. Foreign filings (EP, JP, CN), non-patent literature (IEEE, theses, defensive publications), and pre-1976 patents need separate search lanes. Use case 7 pivots to publication-level family search to widen jurisdiction; use case 10 emits a 'DRAFT' header so a registered attorney always reviews the §102/§103 argument before it ships.
One more comparison worth naming: there are early MCP servers that wrap USPTO and Google Patents endpoints. The trade-off is the usual skill-vs-MCP one: the skill is ~120 idle tokens, the MCP’s tool schemas load every turn. Pick the MCP only when multiple AI clients need to query a shared running service — otherwise stick with the skill in this cookbook. The bare-term “patent search mcp” query gets steady impressions on this directory; if that’s your starting point, look at the /servers catalog for the parallel MCP options.
Real tools shipped on PatentsView
Concrete examples from public projects. Most don’t use the Claude skill specifically — they’re here to show what production-grade PatentsView pipelines look like, so you have a target shape in mind when you write the prompt.
- PatentsView (USPTO) — official data partner publishing the canonical PatentSearch API the cookbook targets
- PQAI — non-profit AI-powered prior-art search reaching 30k+ inventors and patent attorneys
- USPTO Patent Public Search — the human-facing search interface every cookbook output deep-links to via Google Patents
- Russ Allen (mustberuss) — open-source PatentsView code snippets and Jupyter tutorials
- April Mosby — patent attorney's 7-step prior-art workflow video, the human reference behind the cookbook automation
- raycaster.ai — Show HN entry for cross-domain prior art search across patents and scientific papers
Gotchas (the four that bite)
Sourced from the PatentsView PatentSearch API reference and the bundled utility-patent-reviewer skill repo.
45 requests per minute, hard cap
PatentsView throttles per API key at 45 req/min. The skill paces with `time.sleep(1.4)` between calls in any loop. If you hit a 429, back off to 60 seconds and resume — there is no burst credit.
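A sketch of that pacing-plus-backoff discipline; `pace` and `post_with_backoff` are illustrative names, not part of the skill:

```python
import os, time

URL = "https://search.patentsview.org/api/v1/patent/"

def pace(last_call_ts, now, min_interval=60 / 45):
    """Seconds to sleep before the next call to stay under 45 req/min."""
    return max(0.0, min_interval - (now - last_call_ts))

def post_with_backoff(body, max_retries=3, backoff_s=60):
    """POST; on HTTP 429 wait a full window and retry (no burst credit)."""
    import requests  # lazy import
    headers = {"X-Api-Key": os.environ["PATENTSVIEW_API_KEY"]}
    for _ in range(max_retries + 1):
        r = requests.post(URL, headers=headers, json=body, timeout=30)
        if r.status_code != 429:
            r.raise_for_status()
            return r.json()
        time.sleep(backoff_s)
    raise RuntimeError("still rate-limited after retries")
```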
US-only, no foreign filings at the patent level
The `/patent/` endpoint covers US grants only. For EP / JP / CN counterparts, pivot to the `/publication/` endpoint (use case 7) or supplement with EPO OPS or Espacenet. The skill flags this in the disclaimer; do not let it silently look US-complete when the buyer is global.
Legacy v0.1 endpoints were sunset February 2025
Older tutorials still reference `api.patentsview.org/patents/query`. That URL is gone. Every snippet above uses the new `search.patentsview.org/api/v1/patent/` POST surface with the four-field body. If a copy-pasted example throws 404, that's the cause.
Disambiguated names ≠ raw filing names
PatentsView runs inventor and assignee disambiguation. `inventors.inventor_name_first` is the cleaned canonical form, not the literal text on the patent. For exact-text match (e.g. invalidity research where the typo matters), use `_text_phrase` against the raw fields, not `_eq` against the disambiguated ones.
Pairs well with
Curated to match the cookbook’s actual integrations: the research-and-analysis skills (literature-review, competitive-analysis, legal-risk-assessment, summarize, pdf-to-markdown) plus the web-research MCP servers the longer use cases lean on for non-patent context.
Related skills
Related MCP servers
Two posts that compose well with this cookbook: What are Claude Code skills? covers the underlying mechanism, and the pdf-to-markdown cookbook covers the input lane — many invention disclosures arrive as PDFs that need conversion before the cookbook above can extract the discriminating terms.
Frequently asked questions
Do I need a PatentsView API key, and how do I get one?
Yes. The skill expects `PATENTSVIEW_API_KEY` in the environment. Request a free key from https://patentsview.org/apis/keyrequest. Approval takes 1–3 business days. The rate limit is 45 requests per minute per key — every cookbook entry above paces with `time.sleep(1.4)` to stay under it.
Is there a patent search MCP server I should use instead of the patent-search skill?
There are early MCP wrappers around USPTO and Google Patents endpoints — search the directory for `patent` to compare. They're useful when multiple AI clients query the same running service. For a single developer running prior-art and FTO sweeps from one Claude session, the skill is the lighter option: ~120 tokens at idle, no separate process to manage. Reach for an MCP if you also want a long-running cache or shared session state.
Why is 'patent search' getting impressions on Google but no clicks?
The bare 'patent search' query brings up patents.google.com and uspto.gov as the top results — this blog will not outrank either. The post targets the long-tail variants: 'claude patent skill', 'patent search mcp', 'patent mcp', and the workflow phrases 'prior art search', 'freedom to operate', and 'PatentsView API'. Those are the queries the cookbook above is built to rank for.
Can the patent-search skill clear me for FTO without a registered attorney?
No, and the skill will not pretend otherwise. Use case 10 explicitly emits a 'DRAFT — not legal advice' header. The skill produces the prior-art set, the family map, and the §102/§103 argument scaffolding — the inputs an attorney needs. The opinion that a human signs is still a human opinion. FTO clearance for a real product launch typically costs $3–30k of attorney time; the skill cuts the prep cost, not the signature cost.
Does PatentsView cover foreign filings (EP, JP, CN, KR)?
Not at the patent grant level. PatentsView's `/patent/` endpoint is US-only. The `/publication/` endpoint exposes WIPO PCT publications and a partial set of foreign filings, which use case 7 leverages for family search. For deep EP / JP / CN coverage you still want EPO OPS or Espacenet alongside this skill — the cookbook does not pretend to replace them.
What changed in February 2025 with the PatentsView API?
The legacy v0.1 endpoints (`api.patentsview.org/patents/query`) were sunset and replaced by the new PatentSearch API at `search.patentsview.org/api/v1/`. Older tutorials still reference the legacy URL — they will silently 404. Every code snippet above uses the new POST-based interface with `q`, `f`, `s`, `o` parameters. If you find a 2023 example using GET with a query string, treat it as deprecated.
How does this compare to commercial tools like PatSnap, Lens.org, or Innography?
PatentsView is free, US-only, and exposes the same disambiguated inventor and assignee data the USPTO uses internally. Commercial tools wrap broader jurisdictional coverage, semantic search, and white-glove analyst services — useful at the enterprise tier. The skill targets the budget-constrained surface: solo founders, lean IP teams, and engineers running a novelty check before a provisional. Use the skill until you hit a foreign-jurisdiction or NPL gap; then upgrade.
Sources
Primary
- RobThePCGuy/utility-patent-reviewer — the patent-search SKILL.md repo
- PatentsView PatentSearch API reference (search.patentsview.org)
- PatentsView /patent/ endpoint documentation
- PatentsView migration notice (legacy v0.1 sunset, February 2025)
- USPTO Patent Public Search (the human-facing equivalent)
- Google Patents (every cookbook output deep-links here for human review)
Community
- wireless_eng — Hacker News
- Craige Thompson (Patent Attorney) — Blog
- PatentsView (USPTO data partner) — Blog
- Team PQAI — Blog
- pbhjpbhj — Hacker News
- Joe Patrice (Above the Law) — Blog
Critical and contrarian
Internal