OpenAI launches the Apps SDK: the birth of a ready-to-use AI app ecosystem

Read: 4 min
Date: Oct 9, 2025
Author: Massimo Falvo


OpenAI has announced applications that live inside ChatGPT conversations: you can search for accommodation on Booking.com, create slides with Canva, or browse homes on Zillow’s interactive map — all without leaving the chat. It’s a leap beyond “link-out”: interactive UIs directly within the conversation. At launch, partners include Booking.com, Canva, Coursera, Figma, Expedia, Spotify, and Zillow; others (DoorDash, OpenTable, Target, Uber, etc.) will join soon. Users can invoke an app by name or let ChatGPT suggest the right one based on context - for example, while discussing buying a house.

Why it matters

This isn’t a gadget - it’s a shift in where action happens. Action no longer takes place elsewhere (website/desktop/app) but here, in the chat. It’s the same movement the App Store triggered in 2008, except now the container isn’t a mobile OS: it’s a conversational interface aiming to reach over 800 million weekly users.

How it works

Behind the scenes is an Apps SDK that extends the Model Context Protocol (MCP), an open standard that allows AI to talk with external data and tools. The innovation is that the SDK defines not only the logic but also how results are presented to the user: the output becomes a mini web app (HTML/CSS/JS, often React) running inside an iframe within the chat. It communicates with the “container” through a host API and can call tools on your backend (which remains on the developer’s servers, not OpenAI’s).
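
To make that concrete, here is a minimal sketch of the backend half: a small MCP tool server in TypeScript built on the open-source MCP SDK. The tool name, the demo data, and the `widgetTemplate` metadata key are assumptions made for illustration; they are not the documented Apps SDK surface.

```ts
// Minimal MCP tool server sketch (TypeScript), using the open-source MCP SDK.
// The Apps SDK layers widget rendering on top of this; the metadata key below
// is illustrative, not the documented Apps SDK field name.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "hotel-demo", version: "0.1.0" });

// A tool ChatGPT can call; the structured result feeds the widget shown in the iframe.
server.tool(
  "search_hotels",
  { city: z.string(), nights: z.number().int().positive() },
  async ({ city, nights }) => {
    const results = [{ name: "Hotel Aurora", pricePerNight: 120, city }]; // demo data
    return {
      content: [{ type: "text", text: JSON.stringify({ results, nights }) }],
      // Hypothetical: where a widget-serving app would point at its UI template.
      _meta: { widgetTemplate: "ui://hotel-demo/results.html" },
    };
  }
);

await server.connect(new StdioServerTransport());
```

The point of the split is exactly what the paragraph describes: the data and business logic stay on the developer’s server, while ChatGPT orchestrates the call and renders the returned widget in the conversation.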

From a security standpoint, the widget runs in a separate, controlled space. It can only do what it’s allowed to and communicate externally within clear limits, while the app’s server follows standard security best practices. In short: deep integration with well-defined boundaries.
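
A hypothetical sketch of the widget side makes that boundary visible: code inside the iframe can only use whatever bridge the host chooses to expose. The `window.openai` name and the two members on it are assumptions for illustration, not the actual host API.

```ts
// Widget-side sketch (runs inside the sandboxed iframe). The bridge name and
// its members are assumptions; anything the host does not expose is unreachable.
type HostBridge = {
  toolOutput?: unknown;                                             // result of the tool call that opened the widget
  callTool?: (name: string, args: Record<string, unknown>) => Promise<unknown>; // follow-up calls to the backend
};

const host = (window as unknown as { openai?: HostBridge }).openai;

function render(output: unknown): void {
  const root = document.getElementById("root");
  if (root) root.textContent = JSON.stringify(output, null, 2);     // demo rendering
}

// Show the tool result that opened the widget.
render(host?.toolOutput);

// A follow-up action stays within whatever the host bridge allows.
document.getElementById("refresh")?.addEventListener("click", async () => {
  const next = await host?.callTool?.("search_hotels", { city: "Lisbon", nights: 2 });
  render(next);
});
```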

User experience: “native” components inside the chat

OpenAI focuses on consistency, simplicity, and accessibility. The design guidelines propose three formats: inline cards for quick actions, carousels for comparing multiple results, and fullscreen for rich flows (maps, editors). The goal is to reduce friction - clear action, minimal but useful info. For designers, it’s fertile ground: UX becomes conversation + component, no longer a separate interface.

Practical examples:

  • Inline cards for confirming a booking or showing an order summary, with at most two CTAs (see the sketch after this list).

  • Carousels to scroll through playlists, restaurants, or hotels.

  • Fullscreen for explorable maps or editing canvases — with the chat composer always within reach to keep “talking” to the app.
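
As a rough illustration of the first format, here is a hypothetical React inline card that sticks to the “minimal info, at most two CTAs” guideline; the component and its props are invented for this sketch, not taken from an Apps SDK component library.

```tsx
// Hypothetical inline-card component; props and styling are illustrative only.
import React from "react";

type InlineCardProps = {
  title: string;
  detail: string;
  primary: { label: string; onClick: () => void };
  secondary?: { label: string; onClick: () => void }; // optional second (and last) CTA
};

export function InlineCard({ title, detail, primary, secondary }: InlineCardProps) {
  return (
    <div role="group" aria-label={title} style={{ border: "1px solid #ddd", borderRadius: 8, padding: 12 }}>
      <strong>{title}</strong>
      <p>{detail}</p>
      <button onClick={primary.onClick}>{primary.label}</button>
      {secondary && <button onClick={secondary.onClick}>{secondary.label}</button>}
    </div>
  );
}
```

A carousel is then essentially a horizontally scrollable row of such cards, while fullscreen hands the whole surface over to the partner’s UI.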

Strategy: start strong, build network effects

Apps are already available for users outside the EEA, Switzerland, and the UK, across all plans (Free, Go, Plus, Pro). The goal is to expand to the EU “soon.” For developers, the SDK is in preview, and submissions will open later this year, with a dedicated directory and clear design and quality standards. The value is twofold: visibility (ChatGPT’s massive audience) and integration (the assistant suggests the right app at the right moment).

Conversational commerce: from “seeing” to “buying” in chat

Recently, OpenAI introduced Instant Checkout and is formalizing an Agentic Commerce Protocol - an open standard enabling payments directly within chat. This closes the last mile: discovery → decision → transaction, all inside the conversational flow. The impact? Buying behavior moves inside the assistant.
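
The protocol’s actual messages aren’t reproduced here; the TypeScript shapes below are only a hypothetical sketch of how discovery, decision, and transaction could travel through a single in-chat flow.

```ts
// Hypothetical shapes only: they mirror the discovery → decision → transaction
// stages described above, not the actual Agentic Commerce Protocol messages.
type Offer = { offerId: string; label: string; price: number; currency: string };
type Discovery = { query: string; offers: Offer[] };
type Decision = { offerId: string; quantity: number };
type Transaction = { orderId: string; status: "authorized" | "captured" | "failed" };

// One in-chat purchase, sketched against an assumed merchant backend.
async function buyInChat(
  api: { search(q: string): Promise<Discovery>; checkout(d: Decision): Promise<Transaction> },
  query: string
): Promise<Transaction> {
  const discovery = await api.search(query);                                        // discovery: the assistant surfaces offers
  const decision: Decision = { offerId: discovery.offers[0].offerId, quantity: 1 }; // decision: the user picks one
  return api.checkout(decision);                                                    // transaction: checkout completes in chat
}
```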

The right questions (to ask now)

  • How much context can the app read? Only the last message, or a broader window of the conversation? The official page says consent is requested at first connection and that more granular controls are coming, but the amount of shared context is crucial for both privacy and UX - it must be designed and communicated transparently.

  • Who wins among competing apps? If multiple apps cover the same task (e.g., food delivery), how does ChatGPT decide which to suggest? OpenAI promises to prioritize user experience, but this is worth watching closely.

  • Portability vs. integration: MCP and SDK are “open” but designed for ChatGPT. The classic tension remains: write once, run everywhere - or fully leverage the dominant platform? In 2008, native experience won. Today, “native-in-chat” might repeat that story.

From “chatbot” to application platform

If you’re a consumer or B2B service, the question isn’t whether to integrate but how. The immediate advantages are distribution (ChatGPT reports 800M+ weekly users) and contextual discovery (the app appears when relevant). But quality of experience will decide winners and losers - the guidelines reward those who cut complexity and deliver value in a single card.

In Europe, apps aren’t available yet; rollout in the EU is expected “soon.” It’s worth prototyping now - when the feature launches, teams with polished experiences will hold a competitive edge.

Conclusion

This move signals a clear shift: the interface returns to conversation, and conversation becomes the place of action. Maps, playlists, editors, checkout - all happen here, through dialogue. If this model extends beyond ChatGPT, we may find ourselves chatting with our applications far more often than we open them. It’s the embryo of a Conversational OS - an environment where AI orchestrates tools and mini-apps in real time, with us at the center.
