MCP Apps vs OpenAI Apps SDK: AI App Standards 2026
Two standards for putting a real UI inside an AI chat. One is open and Linux-Foundation-governed; one is a vendor SDK with a store attached. They started as rivals and ended up layered. This post walks through what each actually is, how they differ, when to pick which, and where the lock-in actually bites. Every fact is sourced from the spec, the SDK docs, or a primary launch post.

TL;DR + decision tree
- If you need reach across every modern AI client — Claude, ChatGPT, Goose, VS Code Insiders, Copilot Studio, and the long tail — build to MCP Apps. It’s the open Linux-Foundation-governed standard, the one every serious host implements, and the only path that doesn’t bind you to a single vendor’s roadmap.
- If your distribution strategy is the ChatGPT app store and you need instant checkout, host modals, or first-class file pickers, build on the OpenAI Apps SDK. It sits on top of the MCP Apps spec and adds window.openai extensions that are ChatGPT-only — including requestCheckout(), which is the only built-in payment rail in the entire AI-app surface right now.
Two extra heuristics resolve most of the remaining edge cases. Build to the open fields first, always. Even if you ship the SDK build to ChatGPT’s store, keep the _meta.ui.resourceUri surface clean so the same server ports to Claude without a rewrite. Treat window.openai as progressive enhancement. Feature-detect it, degrade gracefully, never make it the only path to a result. Those two rules cover roughly 90% of portability bugs we’ve seen since launch.
Why UI-in-chat became a fight
Three years into the LLM-chat era it stopped being controversial to say plain text was a constraint, not a feature. The web ran on richer surfaces — forms, dashboards, video, canvases — and AI clients were the last consumer-grade interface stuck in a single column of bubbles. Approvals lived in a hand-rolled string template; charts arrived as base64 images that the model then had to describe; deployment forms got reduced to a thirty-line transcript of back-and-forth until either the human gave up or the model picked the wrong region. Anyone who’d shipped a real internal tool knew the chat surface was the bottleneck, not the model.
OpenAI moved first. At DevDay on October 6, 2025 it launched the Apps SDK — a way for MCP servers to ship interactive HTML widgets that ChatGPT would render inside the conversation. Apps lived in ChatGPT, used a window.openai bridge for vendor-specific affordances, and shipped with a submission flow into a future app store. It was a real product, and for the first time MCP servers had an obvious commercial distribution path. The catch was that everything ran only inside ChatGPT.
In parallel, a community project called MCP-UI was cooking the same idea as an open extension to the MCP protocol. Same architecture — sandboxed iframe, JSON-RPC bridge, declarative resources — but designed from day one to be vendor-neutral. By November 2025, the MCP steering group merged the two design lineages into SEP-1865 — the proposal that became the official MCP Apps spec, ratified as the dated specification 2026-01-26. The launch post said it directly: “MCP-UI and the OpenAI Apps SDK pioneered the patterns that MCP Apps now standardizes. We were excited to partner with both OpenAI and MCP-UI to create a shared open standard.”
That history is why the “competing standards” framing is misleading by 2026. On paper the SDK and the open spec are siblings: the SDK rides on top of MCP Apps; OpenAI’s docs explicitly tell developers to “build around the MCP Apps standard for portability, then layer on ChatGPT extensions where they improve the ChatGPT experience.” In practice the two specs solve different problems — one is the protocol contract for putting UI in chat, the other is a vendor-specific SDK plus store plus commerce rail stacked on top. The interesting questions are about where that stack helps you and where it locks you in.
Most of the confusion in 2026 came from people asking the wrong binary: which standard wins? The useful question is which layer do you build to? If you’re used to thinking in MCP-protocol terms, our What is MCP primer covers the JSON-RPC wire format both specs sit on, and the sibling MCP Apps Spec post drills into the field-by-field details of the open standard we’ll reference repeatedly below.
Side-by-side matrix
The dimensions that actually matter when picking a target — not just feature counts. Cells sourced from the official spec, OpenAI’s developer docs, and the Linux Foundation announcement.
| Dimension | MCP Apps standard | OpenAI Apps SDK |
|---|---|---|
| Governance | Linux Foundation (LF Projects, LLC); SEP process; multi-vendor steering committee | OpenAI (single-vendor); roadmap controlled internally |
| License model | Open spec, no royalties; reference impls under MIT/Apache-2 | SDK is OpenAI-published; standard fields underneath are open |
| Distribution | Install URL / MCP config — no centralised store | ChatGPT app store (approval-gated; self-serve publishing 'coming soon') |
| Hosts that support it (May 2026) | Claude (web + desktop), ChatGPT, Goose, VS Code Insiders, Copilot Studio, Cursor (partial), others | ChatGPT only (and Codex for plugin auto-creation) |
| Iframe + bridge architecture | Standard: sandboxed iframe, ui/* JSON-RPC over postMessage | Identical — SDK runs on the same architecture |
| UI declaration field | _meta.ui.resourceUri pointing at ui:// resource | _meta.ui.resourceUri (alias: _meta['openai/outputTemplate']) |
| Payments / checkout | Not in the spec; bring your own (Stripe etc.) | window.openai.requestCheckout() — OpenAI commerce rails |
| File pickers / modals | Standard ui/* methods; no host-modal primitive | window.openai.selectFiles, uploadFile, getFileDownloadUrl, requestModal |
| Security model | Host MUST sandbox iframe + enforce CSP from _meta.ui.csp | Same — sits on the open security model |
| When to choose | You want reach across clients, no vendor lock | You want ChatGPT store distribution + commerce |
Three takeaways from the matrix. First, the protocol surface is identical — iframe, postMessage, CSP, the ui:// scheme, the JSON-RPC method names. If you only look at the wire format, there is no fight. Second, distribution is where the two diverge most — ChatGPT has a discovery surface the open spec hasn’t reproduced, and that’s the single biggest reason commerce-shaped apps lean SDK-first. Third, payments are a real spec gap — window.openai.requestCheckout() is vendor-only, and there is currently no open equivalent. If your app monetises through one-click purchase, that’s a forcing function toward the SDK whether you like it or not.
MCP Apps standard — full breakdown
AI app spec
MCP Apps
Linux Foundation · Open spec (SEP-1865, ratified 2026-01-26)
What it does best
MCP Apps is the right answer when you want the same UI surface to render in every credible AI client and you care more about reach than about vendor-specific features. The spec gives you a clean architecture — a sandboxed iframe, a CSP you declare in metadata, and a small set of ui/* JSON-RPC methods for the iframe and host to talk over postMessage. Because the contract sits in the protocol rather than in any one client’s SDK, your server doesn’t care which host loads it. The launch post puts it bluntly: “for the first time, an MCP tool developer can ship an interactive experience that works across a broad range of widely-adopted clients without writing a single line of client-specific code.”
Pick this if you...
- Want your app to render in Claude, ChatGPT, Goose, VS Code Insiders, Copilot Studio and the rest of the open ecosystem with one codebase
- Need a governance model that won’t change unilaterally — the SEP process and the LF steering committee mean breaking changes get telegraphed
- Are shipping a workflow tool (dashboards, forms, approval surfaces, design canvases) where commerce isn’t the headline feature
- Want to make distribution decisions client-by-client later (your own install card, an enterprise IDE config, a curated directory entry) rather than committing to one store today
Recipe: returning UI from a tool
The minimal contract is a tool that announces it has UI by including _meta.ui.resourceUri, and a resource served under the ui:// scheme with the right MIME type. Here’s the smallest useful shape — a server that returns a chart resource whenever the plot_metrics tool fires.
```jsonc
// Tool definition (returned in tools/list)
{
  "name": "plot_metrics",
  "description": "Plot a metrics chart for a given time window",
  "_meta": {
    "ui": {
      "resourceUri": "ui://my-server/chart-widget",
      "visibility": ["model", "app"]
    }
  }
}

// UI resource (returned in resources/list)
{
  "uri": "ui://my-server/chart-widget",
  "mimeType": "text/html;profile=mcp-app",
  "name": "Metrics chart",
  "_meta": {
    "ui": {
      "csp": {
        "connectDomains": ["api.example.com"],
        "resourceDomains": ["cdn.example.com"]
      },
      "permissions": ["clipboardWrite"]
    }
  }
}
```

When the model calls plot_metrics, the host fetches the resource, mounts it in a sandboxed iframe, pipes the tool input to it via ui/notifications/tool-input, and the iframe draws. Cross-host portability comes for free as long as you stick to the standard fields. The same server shows up identically in Claude and ChatGPT.
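On the iframe side, the widget just listens for that tool-input notification on the postMessage bridge and draws. A minimal sketch: the JSON-RPC envelope is standard, but the payload field names and the routing helper here are illustrative, and drawChart stands in for your own rendering code.

```javascript
// Illustrative sketch of the iframe side. Routing is pulled into a pure
// function so the decision logic is easy to test; the payload shape is an
// assumption, not copied from the spec text.
function routeUiMessage(msg) {
  // Only react to well-formed JSON-RPC notifications from the host bridge.
  if (!msg || msg.jsonrpc !== '2.0' || typeof msg.method !== 'string') {
    return { handled: false };
  }
  if (msg.method === 'ui/notifications/tool-input') {
    // Tool input piped in by the host when plot_metrics fires.
    return { handled: true, kind: 'tool-input', input: msg.params };
  }
  // Unknown ui/* methods are ignored rather than treated as errors.
  return { handled: false };
}

// Wiring inside the iframe (browser-only):
// window.addEventListener('message', (event) => {
//   const routed = routeUiMessage(event.data);
//   if (routed.handled && routed.kind === 'tool-input') drawChart(routed.input);
// });
```

Keeping the router pure means the same logic ports untouched between hosts; only the one-line wiring at the bottom touches the browser environment.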
Skip it if...
Your distribution plan is the ChatGPT app store and your monetisation depends on instant checkout. The open spec doesn’t have a checkout primitive today, and there’s no centralised discovery surface comparable to ChatGPT’s store. You can still build to MCP Apps and submit a wrapper to OpenAI later, but if commerce is critical from day one, start with the SDK and back-port to the open spec when reach matters.
OpenAI Apps SDK — full breakdown
AI app spec
OpenAI Apps SDK
OpenAI (single-vendor) · SDK on MCP Apps (Oct 2025 launch)
What it does best
The OpenAI Apps SDK is the right answer when ChatGPT is your primary surface and you want first-class access to ChatGPT-only affordances — particularly requestCheckout(), which is the only built-in commerce rail across either spec. The SDK wraps the open MCP Apps contract and adds a window.openai object that exposes ChatGPT-specific extensions: instant checkout, host modals, file pickers, file uploads, signed download URLs, and alternative callTool/sendFollowUpMessage APIs. It also gives you the path into the ChatGPT app store, which is currently the only discovery surface in the AI-app world with consumer reach.
Pick this if you...
- Build a consumer-facing or commerce-shaped app where ChatGPT’s install base is your distribution thesis
- Need a real payment rail inside the conversation — window.openai.requestCheckout() is the only one shipping in 2026
- Want host modals, the system file picker, and the signed-URL download flow without rebuilding them in your own iframe
- Are okay with ChatGPT as the primary host and would treat support for Claude/Goose/VS Code as a nice-to-have rather than a launch requirement
Recipe: ChatGPT app with vendor extensions
The trick is to write your UI against the open MCP Apps contract first, then feature-detect window.openai for the ChatGPT-only affordances. Here’s the smallest useful shape — a widget that renders a checkout button and falls back gracefully when the host isn’t ChatGPT.
```javascript
// Inside the iframe HTML for your tool's UI resource
// (served under ui://my-server/checkout-widget)
window.addEventListener('DOMContentLoaded', () => {
  const btn = document.getElementById('buy');
  btn.addEventListener('click', async () => {
    // Feature-detect the vendor extension
    if (window.openai && typeof window.openai.requestCheckout === 'function') {
      const result = await window.openai.requestCheckout({
        items: [{ sku: 'pro-monthly', quantity: 1 }],
        currency: 'USD'
      });
      // Update the iframe based on the receipt
      renderReceipt(result);
    } else {
      // Open-spec fallback: open external checkout via the standard
      // ui/open-link method exposed on the postMessage bridge
      postRPC('ui/open-link', { url: 'https://pay.example.com/pro' });
    }
  });
});
```

That single pattern is the whole portability story. Standard fields for the surface; window.openai only behind a feature check. Stay disciplined and the same server ships to ChatGPT’s store and to every open-spec host without a rewrite.
Skip it if...
Your app needs to render in Claude, Goose, Copilot, or any IDE today and ChatGPT is just one of several targets. The SDK extensions don’t port; lean on them and you’ll either break elsewhere or write two codepaths. For multi-host apps, build to the open MCP Apps spec directly and consider the SDK only when you also want to ship into ChatGPT’s store as a secondary distribution.
How they actually differ
The wire format is shared. The interesting differences are at four boundaries: distribution, security model, where state lives, and payment rails. Each one shapes the kind of app you can credibly build on either side.
Distribution model
MCP Apps has no centralised distribution. You ship a server, you publish an install card (or a curated directory entry — that’s our category here), and clients install it via the standard MCP config. There is no equivalent of the App Store or Play Store; reach is a function of how many hosts implement the open spec and how visible your install path is. The OpenAI Apps SDK pairs the spec with a real store: ChatGPT’s app directory, which OpenAI’s docs describe as approval-gated today with “self-serve plugin publishing coming soon.” That difference is marginal in protocol terms and enormous in business terms — discoverability inside ChatGPT is the single biggest distribution lever in consumer AI right now.
Security boundary
Both specs run the iframe sandbox model described in the MCP Apps spec — a sandboxed iframe with CSP enforced from _meta.ui.csp, validated postMessage traffic, restrictive defaults when metadata is omitted, and on web hosts a sandbox-proxy iframe on a separate origin for double isolation. The vendor extensions in window.openai sit inside that same sandbox — they don’t let you escape the iframe; they expose host-mediated actions (checkout, modal, file-picker) that the host runs on the iframe’s behalf. So the security boundary is the same in both directions; what differs is the surface of host capabilities the iframe can request.
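To make the CSP enforcement concrete, here is a simplified sketch of the host's job: translating the declared _meta.ui.csp block into a Content-Security-Policy string for the iframe document, with restrictive defaults when metadata is omitted. The directive mapping below is an illustration of the idea, not the spec's normative algorithm; real hosts are stricter.

```javascript
// Simplified sketch: map declared _meta.ui.csp metadata onto CSP
// directives. Assumption: connectDomains gates network calls and
// resourceDomains gates static assets; missing lists collapse to 'none'
// so an empty declaration stays restrictive by default.
function buildIframeCsp(cspMeta = {}) {
  const list = (domains) =>
    domains && domains.length ? domains.join(' ') : "'none'";
  return [
    // Nothing loads unless a directive below explicitly allows it.
    "default-src 'none'",
    `connect-src ${list(cspMeta.connectDomains)}`,
    `script-src ${list(cspMeta.resourceDomains)}`,
    `img-src ${list(cspMeta.resourceDomains)}`,
  ].join('; ');
}
```

The useful property to notice is the direction of trust: the server declares the domains, but the host computes and enforces the policy, so a malicious widget cannot widen its own sandbox.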
Where state lives
The spec is intentionally pragmatic about state: it doesn’t mandate persistence. Iframes are re-rendered on each model turn under the current ratification (HN user sam256 flagged this directly on his backgammon demo: “the board gets redrawn each time the AI takes a turn”), and persistent views are on the roadmap but not part of 2026-01-26. The OpenAI SDK adds nothing here that the open spec doesn’t — both punt on durable iframe state today. If your app needs persistence across turns, you push state to your server (via the MCP server itself, or a side channel) and rehydrate the iframe from there.
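The rehydration pattern can be sketched in a few lines. The assumption here is that your MCP server embeds a serialized state snapshot in each tool result it returns; the snapshot shape and the "uiState" field name are illustrative, not spec-defined.

```javascript
// Sketch of rehydrate-from-server for surviving per-turn re-renders.
// Server state wins over anything the iframe cached locally, because the
// iframe itself may have just been torn down and remounted.
function rehydrate(previousSnapshot, toolResult) {
  const incoming = (toolResult && toolResult.uiState) || {};
  return { ...previousSnapshot, ...incoming };
}

// Example: a game board redrawn each model turn (the sam256 complaint).
const turn1 = rehydrate({}, { uiState: { board: ['x', null, null], turn: 1 } });
const turn2 = rehydrate(turn1, { uiState: { board: ['x', 'o', null], turn: 2 } });
```

The point of the merge order is that a repaint is then harmless: whatever the host throws away, the next tool result carries enough to rebuild.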
Payment rails
This is the single biggest functional gap. ChatGPT’s window.openai.requestCheckout() exposes OpenAI’s commerce flow — credit cards on file, Apple Pay / Google Pay where supported, a host-managed consent surface, and a receipt event posted back to the iframe. It’s a real payment primitive at the protocol layer, and the open MCP Apps spec has no equivalent. If you need in-conversation purchase, you either build on the SDK and accept ChatGPT-only commerce, or you fall back to the standard ui/open-link method and bounce the user to your own checkout page on a side tab. Both work; neither is a substitute for the other. The community is debating whether a future open-spec extension should cover payments; nothing has been merged as of May 2026.
Add up the four differences and the pattern is clear: the open spec gives you the platform and the SDK gives you the value-added surface. Apps that don’t need commerce or ChatGPT distribution gain almost nothing from the SDK. Apps that do, gain a lot. Treat the choice as a layering decision rather than a fork-vs-fork war.
Where Skills, Artifacts and Anthropic Apps fit
One source of confusion in 2026 is the proliferation of Anthropic-specific concepts that sound app-shaped but aren’t the same thing as MCP Apps. Three to keep straight: Skills, Artifacts, and the “interactive work tools in Claude” bundle Anthropic shipped on MCP Apps launch week.
Skills are a Claude packaging primitive — bundled instructions, optional resources, and a manifest that tell the model how to perform a recurring task. They live inside Claude (and Claude Code), they don’t render UI inside chat, and they predate MCP Apps by months. Skills compete with prompt templates and slash commands, not with the open spec. Our Claude Skills vs MCP vs Subagents post unpacks the boundaries between them.
Artifacts are a Claude rendering primitive — the model produces an HTML/React/Markdown artifact, Claude renders it in a side panel, and the user can iterate on it. Artifacts are model-generated surfaces (the model writes the HTML); MCP Apps are server-generated surfaces (the server ships the UI resource and the host renders it). They look similar in the chat window. They’re fundamentally different at the trust boundary: Artifacts inherit the model’s output stream; MCP Apps inherit a signed resource fetched from a known server origin.
Anthropic’s “Apps in Claude” bundle — Slack drafting, Figma diagrams, Asana timelines, Canva mockups, the ten launch partners The Register listed on MCP Apps day — is just Claude consuming the open MCP Apps spec. Same architecture, same wire format, no Anthropic-only fields. Anthropic isn’t shipping a parallel SDK with window.anthropic extensions; they’re consuming the open standard and the partner apps are portable to ChatGPT if the partners feature-detect. That asymmetry — OpenAI ships a vendor SDK on top of the open spec, Anthropic just ships the open spec — is the cleanest competitive signal in the space, and the reason MCP Apps is on track to be the default open contract for the rest of 2026.
Will one win?
The honest answer is neither, in the binary sense. What’s happening in 2026 is a layering: the open MCP Apps spec is the protocol contract every credible host implements, and the OpenAI Apps SDK is the dominant value-added stack sitting on top of it for the ChatGPT surface. Predicting one to vanish misreads the incentives — OpenAI keeps the SDK because it’s the only host with a built-in commerce rail and a store; everyone else uses the open spec because they need cross-vendor reach. Both survive because they answer different questions.
The more useful framing is by use case. For utility / workflow apps (dashboards, design canvases, deployment forms, approval surfaces), the open spec wins. Reach beats store presence; nobody’s paying for a deployment widget in a checkout flow. For consumer / commerce apps (booking, shopping, subscription onboarding), the SDK wins. The store is real distribution; instant checkout is a real conversion lever. For developer tools, the open spec wins by default — Cursor, VS Code, Claude Code, Goose, JetBrains-shaped IDEs all read it; the SDK doesn’t render in any of them. For enterprise / Copilot-shaped surfaces, Microsoft is already on the open spec; the SDK has no role there today.
One forcing function we’ll be watching: whether the open spec gets a payments extension. If SEP-xxxx for an open payment primitive lands in 2026 or 2027, a lot of the SDK’s gravitational pull evaporates, and the choice for commerce apps becomes “ChatGPT store distribution” rather than “ChatGPT store distribution and the only payments method in town.” If it doesn’t land, the SDK keeps its strongest moat. Either way, the open spec is the contract every serious server should target first; the SDK is the layer you add when ChatGPT distribution is critical and you can absorb the cost of vendor extensions.
The third scenario — a third standard emerging — looks unlikely from here. Microsoft is on the open spec. Anthropic is on the open spec. Block (Goose) is on the open spec. The community tooling sits on the open spec. A challenger would need to convince an ecosystem that already coordinated once that it should coordinate again. Not impossible. Not the way to bet today.
Common pitfalls
Treating window.openai as a required dependency
The single most common portability bug. A developer builds against the SDK, ships, then learns the iframe silently breaks in Claude because window.openai is undefined. Feature-detect every call site. Treat the SDK as progressive enhancement, not a baseline. OpenAI’s own docs say it plainly: “build around the MCP Apps standard for portability, then layer on ChatGPT extensions where they improve the ChatGPT experience.”
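One way to enforce the discipline is a guard helper at every vendor call site. In this sketch the host bridge is passed in as a parameter rather than read from window.openai directly, so the fallback path is testable outside a browser; in the iframe you would call it with window.openai, and the fallback function is your own code (for example a ui/open-link round trip).

```javascript
// Minimal guard: use the vendor extension when present, otherwise run the
// caller-supplied fallback. The return shape tagging which path ran is an
// illustrative convention, not part of either spec.
async function checkoutOrFallback(bridge, payload, fallback) {
  if (bridge && typeof bridge.requestCheckout === 'function') {
    return { via: 'openai', result: await bridge.requestCheckout(payload) };
  }
  return { via: 'fallback', result: await fallback(payload) };
}
```

In Claude, where the bridge is undefined, the same call site silently takes the fallback branch instead of throwing, which is exactly the failure mode this pitfall describes.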
Assuming SDK distribution = open-spec distribution
Listing in the ChatGPT app store does not get you onto Claude or Goose. The SDK’s distribution layer is ChatGPT-only. If reach across hosts is part of the pitch, you still need to publish an install card per non-ChatGPT host (an MCP config block, an entry in a directory like this one, a docs page in your README). Plan for two distribution surfaces from day one if you’re targeting both.
Ignoring the governance asymmetry
OpenAI controls the SDK’s roadmap unilaterally; the open spec changes through the SEP process under LF governance. That matters more than usual because UI surfaces are sticky — once you ship a widget, breaking changes are expensive. The conservative play is to build to fields the LF process governs and keep vendor extensions thin, so a single OpenAI deprecation doesn’t reach into your core surface.
Forgetting the text fallback
Both specs encourage shipping text content alongside UI. If a host doesn’t support iframes (some IDEs, some CI pipelines, some agent runners), the spec tells the host to fall back to text. Servers that return UI without a text alternative go dark in those environments. The MCP Apps spec explicitly instructs servers to “return text-only results when UI support is unavailable.” Do it.
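A tool result that degrades gracefully looks something like the sketch below: a plain text block every host can render, plus the UI surface declared in metadata for hosts that support iframes. The summary-building logic is illustrative.

```javascript
// Sketch of a tool result that never goes dark. Text-only hosts (IDEs,
// CI pipelines, agent runners) read the content block; UI-capable hosts
// mount the declared ui:// resource instead.
function buildToolResult(metrics) {
  const summary = metrics.map((m) => `${m.name}: ${m.value}`).join(', ');
  return {
    content: [{ type: 'text', text: `Metrics for the window: ${summary}` }],
    _meta: { ui: { resourceUri: 'ui://my-server/chart-widget' } },
  };
}
```

The cost of the text alternative is a few extra lines per tool; the cost of skipping it is an app that silently disappears in every host without an iframe.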
Over-rendering on every model turn
The current ratification re-renders the iframe each turn the host considers the tool result relevant. Long-lived UI state (a game board, a multi-step wizard) is best-effort right now. Either keep the iframe stateless and rehydrate from the server, or accept a re-paint per turn. Persistent views are on the roadmap but not in the dated spec; both the open standard and the SDK inherit the limit.
Schema bloat in tool definitions
Every tool that declares a UI carries _meta.ui blocks, CSP entries, permissions arrays, visibility scopes. That metadata travels in every tools/list response, which the model sees on every turn. Stack ten apps and you can silently double your tool-description token cost. The mitigation is the same as for any context bloat problem: progressive disclosure (covered in our MCP context bloat post). UI metadata isn’t free; treat it like any other prompt budget item.
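One mitigation sketch: don't ship UI metadata to hosts that can't use it. The hostSupportsUi flag below stands in for whatever capability signal your server negotiates at initialization; the exact flag, and the idea of trimming at tools/list time, are assumptions about one reasonable server design, not spec behavior.

```javascript
// Strip the _meta.ui block (CSP, permissions, visibility) from a tool list
// when the host has no UI support, keeping any other metadata intact.
function trimToolList(tools, hostSupportsUi) {
  if (hostSupportsUi) return tools;
  return tools.map((tool) => {
    if (!tool._meta || !tool._meta.ui) return tool;
    const { ui, ...rest } = tool._meta;
    const trimmed = { ...tool, _meta: rest };
    // Drop _meta entirely if the ui block was all it carried.
    if (Object.keys(rest).length === 0) delete trimmed._meta;
    return trimmed;
  });
}
```

Since the model re-reads tool descriptions on every turn, trimming at the source is cheaper than any downstream compression.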
Forgetting accessibility inside the iframe
Chat bubbles inherit Claude’s or ChatGPT’s a11y conventions. Iframes don’t. Tab order, focus management, screen-reader announcements, contrast — all your problem inside the sandbox. Test with VoiceOver and a keyboard before you ship. This is independent of which spec you pick; both inherit the iframe-isolation model.
Frequently asked questions
Are MCP Apps and the OpenAI Apps SDK competing standards or the same thing?
They started as separate efforts, then converged. The OpenAI Apps SDK launched at DevDay on October 6, 2025 as a ChatGPT-specific way to ship interactive UI from MCP servers. MCP-UI was a parallel community project. SEP-1865 merged the two designs into the open MCP Apps spec on November 21, 2025, ratified as the dated specification 2026-01-26. ChatGPT now supports the open standard plus optional window.openai extensions; Claude, Goose, and VS Code Insiders speak the open standard only. So today they're not really competing standards — they're one standard with one vendor's optional extensions on top.
Will an MCP App built for Claude render inside ChatGPT?
Yes, if you build to the standard fields. OpenAI's docs are explicit: ChatGPT supports _meta.ui.resourceUri, the ui:// URI scheme, and ui/* JSON-RPC over postMessage. The portability guidance from OpenAI itself is to 'build around the MCP Apps standard for portability, then layer on ChatGPT extensions where they improve the ChatGPT experience.' Anything that reaches into window.openai — requestCheckout, uploadFile, requestModal, the alternative callTool API — is ChatGPT-only and will silently fail on other hosts. Feature-detect those extensions and degrade gracefully.
Why does OpenAI ship a separate SDK if MCP Apps is the open standard?
The SDK predates the open spec — DevDay 2025 was three months before MCP Apps was ratified. OpenAI's Apps SDK now sits on top of the MCP Apps standard and adds ChatGPT-only affordances: instant checkout via requestCheckout, host modals via requestModal, file selection via selectFiles, and a self-service app store submission flow. Think of it as 'MCP Apps plus a layer for distribution and ChatGPT-specific UX.' The SDK is the cleanest way to ship an MCP App that also lives in ChatGPT's app store; the open spec is the cleanest way to ship one that lives everywhere else.
If I'm targeting only ChatGPT, should I use the OpenAI Apps SDK or the open MCP Apps spec?
Use the SDK. You'll get the ChatGPT app store submission flow, the window.openai extensions (instant checkout, file uploads, host modals), and OpenAI's tooling for templates and widget runtime. The SDK is built on top of MCP Apps, so you're not picking a fork — you're picking a layer. If you start with the SDK and later want to ship to Claude or Goose, the underlying MCP Apps surface should port cleanly as long as you didn't lean too hard on window.openai.
Does MCP Apps support payments the way Apps SDK does?
No, not as a standard primitive. ChatGPT's window.openai.requestCheckout() is a ChatGPT-only feature backed by OpenAI's own commerce rails. The open MCP Apps spec does not currently include a payment method; if your app needs to charge money outside ChatGPT, you handle it in your own backend (Stripe, etc.) and surface the result through the standard ui/* bridge. That's a real gap for commerce-shaped apps and one of the few places where the OpenAI SDK still has a feature the open spec hasn't matched.
How is each spec governed?
MCP Apps is governed under the Linux Foundation — Model Context Protocol joined the LF in February 2026 as a Series of LF Projects, LLC entity, with steering-committee seats including Anthropic, OpenAI, Block, and others (covered in our LF/AAIF post linked at the bottom of this page). Spec changes go through the SEP process (Standards Enhancement Proposals); SEP-1865 is the audit trail for MCP Apps. The OpenAI Apps SDK is governed by OpenAI alone. They publish breaking-change notes; they own the roadmap. That governance difference is the headline reason a serious app developer should default to the open spec and treat SDK-only features as ChatGPT extras.
Which spec has the biggest distribution today?
ChatGPT is larger by raw users (consumer accounts in the hundreds of millions versus tens of millions for Claude). But raw users is the wrong measure for an MCP app: the question is which spec gets your UI in front of the user who'd pay for it. ChatGPT's app store has a discovery surface (browse, search, recommend) the open spec hasn't replicated yet. Claude, Goose, and VS Code Insiders all support the open spec but distribute by install URL — no centralised store. If discovery matters for your app, OpenAI's SDK is currently the cleaner path. If reach across coding tools, agent platforms, and enterprise IDEs matters, the open spec wins.
What about Microsoft Copilot — does it support MCP Apps?
Copilot's MCP support shipped in 2025, and Copilot Studio added MCP Apps surface in early 2026, with the open spec as its compatibility target. There is no separate Microsoft Apps SDK in the picture today — Copilot reads the same _meta.ui.resourceUri and ui/* bridge other open-spec hosts read. That makes the open MCP Apps standard, not the OpenAI SDK, the surface that's currently in front of Microsoft 365's installed base via Copilot.
Is one spec going to win?
Not in the binary sense. The likely steady state for 2026 and 2027 is: the open MCP Apps spec is the lingua franca every serious host supports (Claude, ChatGPT, Goose, VS Code, Copilot, Cursor, Antigravity over time); OpenAI keeps shipping ChatGPT-specific extensions through window.openai because they're the only host with a commerce surface and an app store; the rest of the ecosystem treats those extensions as nice-to-haves, not requirements. The 'win' is the open spec; OpenAI's SDK is the dominant value-add on top.
What's the migration story if I built on the OpenAI Apps SDK and want to ship to Claude?
Audit your code for window.openai usage and feature-detect or strip each call site. The fields and the JSON-RPC bridge port directly — _meta.ui.resourceUri, the ui:// scheme, the ui/* methods are identical across hosts. The painful bits are requestCheckout (no open equivalent), requestModal (work around with your own iframe-rendered modal), selectFiles (use the host's standard file-input affordances), and the openai/outputTemplate alias (just rename to _meta.ui.resourceUri). Run the result against Claude's web client and VS Code Insiders before declaring portability; the spec is stable but host implementations have small quirks.
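The audit step can be partly automated. Here's a crude helper that scans a source string for window.openai member accesses so each call site can be found and feature-detected or stripped. It's regex-based and illustrative; a real audit would use an AST tool, and this misses aliased references like const ai = window.openai.

```javascript
// Find distinct window.openai.<member> call sites in a source string.
// Returns the member names sorted, one entry per distinct extension used.
function findVendorCallSites(source) {
  const pattern = /window\.openai\.([A-Za-z_$][\w$]*)/g;
  const found = new Set();
  let match;
  while ((match = pattern.exec(source)) !== null) {
    found.add(match[1]);
  }
  return [...found].sort();
}
```

Running it across a widget bundle gives you the migration checklist directly: every name it returns maps to one of the workarounds listed above.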
Sources
MCP Apps standard
- blog.modelcontextprotocol.io/posts/2026-01-26-mcp-apps — official launch post (Den Delimarsky); names launch clients (Claude, Goose, VS Code Insiders, ChatGPT) and credits the MCP-UI + OpenAI Apps SDK lineage
- ext-apps/specification/2026-01-26/apps.mdx — the dated spec; _meta.ui, the ui:// scheme, sandbox/CSP MUSTs, JSON-RPC methods
- SEP-1865: MCP Apps PR (modelcontextprotocol#1865) — merged 2025-11-21; convergence of MCP-UI and Apps SDK
- modelcontextprotocol.io/extensions/apps/overview — overview docs
OpenAI Apps SDK
- developers.openai.com/apps-sdk — “Our framework to build apps for ChatGPT”, submission flow, Codex plugin auto-creation
- developers.openai.com/apps-sdk/mcp-apps-in-chatgpt — supported open-spec fields, window.openai extensions (requestCheckout, uploadFile, requestModal, selectFiles, alternative tool APIs); the “build around the MCP Apps standard for portability” quote
- openai.com/index/introducing-apps-in-chatgpt — DevDay 2025 Apps SDK launch announcement (October 6, 2025)
Press + analysis
- The Register — Claude supports MCP Apps, presents UI within chat window (Thomas Claburn, 26 Jan 2026); names the 10 launch partners
- WorkOS — MCP Apps are here: Rendering interactive UIs in AI clients
- Steve Kinney — MCP Apps and the Missing Middle of AI Tooling
- HN 46020502 — MCP Apps proposal thread (the convergence discussion)
Related on mcp.directory
- /blog/mcp-apps-spec-2026-when-should-your-server-render-ui — the field-by-field MCP Apps deep dive
- /blog/oauth-21-for-remote-mcp-servers-streamable-http-explained-2026 — auth model for remote MCP, which both specs inherit
- /blog/mcp-foundation-linux-foundation-aaif-2026-explained — the governance side of MCP under the LF
- /blog/what-is-mcp — protocol primer
- /best-mcp-servers — curated directory roundup