Teams hear phrases like MCP server, AI app, plugin, or skill and often collapse them into one idea. That creates bad architecture. A web chat interface like ChatGPT, a desktop agent like OpenAI Codex, and a project-based client like Anthropic Claude Code may all touch the same outside system, but they do not expose the same transports, permission models, or local execution boundaries.
For legal, investigative, research, and media teams, that difference matters. The question is not just whether an AI assistant can reach GitHub, Notion, a database, or an internal case system. The real question is whether that connection belongs in a remote web client, a repo-scoped desktop workflow, or a reusable skill that can travel between both.
1. Why this subject gets muddled so quickly
MCP is a protocol. A skill is a workflow definition. A client such as ChatGPT, OpenAI Codex, or Claude Code is the surface that decides what that protocol and workflow are actually allowed to do. Once those layers are mixed together, teams start making false assumptions about what can run locally, what has to stay remote, and what should be reviewed before it touches a production system.
That confusion gets worse because vendors often talk about tools in outcome language rather than systems language. A feature may sound like "connect your tools to AI," but the operational reality is narrower. Some clients are built around approved remote apps. Others are built around local projects, version-controlled config, and desktop approvals.
2. What MCP actually is, and what it is not
The official Model Context Protocol documentation describes MCP as an open-source standard for connecting AI applications to external systems. In plain terms, MCP gives an AI client a standardized way to call tools, read data sources, and work against outside services without every product inventing its own one-off connector format.
That does not mean MCP tells the model how to do the work. MCP provides access. It does not define the review process, the output standard, the reporting template, or the threshold for human signoff. Those responsibilities belong elsewhere, which is why teams that care about reliability still need a separate workflow layer.
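To make "standardized way to call tools" concrete: under the hood, MCP messages are JSON-RPC 2.0, and a client invokes a server tool with the `tools/call` method. The sketch below follows the shape defined in the MCP specification; the `search_cases` tool name and its arguments are hypothetical.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_cases",
    "arguments": { "query": "docket 24-1137", "limit": 5 }
  }
}
```

Nothing in that message says how results should be reviewed, formatted, or approved. That is exactly the gap the workflow layer has to fill.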
3. How ChatGPT web handles MCP apps, and why that matters
In OpenAI's current documentation, ChatGPT web treats full MCP support as a managed app workflow rather than a casual local integration. Developer mode lets authorized users configure a remote MCP endpoint, test it, and publish it to the workspace after review. OpenAI also documents explicit confirmation before write or modify actions, admin control over which actions remain enabled, and a frozen tool snapshot that must be refreshed before later server-side changes go live.
That is a very specific operating model. It is good for vetted internal tools, SaaS systems, and web-facing workflows where an organization wants central review and controlled access. It is not the same thing as giving a local desktop agent direct access to your machine. OpenAI's ChatGPT help docs are also explicit that ChatGPT currently connects only to remote MCP servers, not local ones.
If your workflow is mostly source-cited web research, document retrieval, and managed read access inside a browser-based interface, the design questions are close to the ones discussed in Advanced ChatGPT Deep Research Workflows for Source-Cited Briefings.
4. Where reusable skills fit in ChatGPT and OpenAI Codex
OpenAI's skills documentation draws a useful line here. A skill is a reusable, shareable workflow that tells the model how to perform a task more consistently. It can bundle instructions, examples, and even code. OpenAI also notes that skills are supported across ChatGPT, Codex, and the API, but they do not automatically sync across products yet.
That distinction matters because skills are not connectors. They are operating procedures. A strong skill can tell ChatGPT or Codex how to triage an intake folder, summarize a research packet, draft an evidence log, or prepare a client-safe briefing. The underlying tool access may differ by client, but the reasoning pattern and output standard can stay stable.
OpenAI's Codex developer documentation pushes this further by defining skills as the authoring format for reusable workflows, while plugins are the installable distribution layer. In other words, a team can keep the workflow logic in a skill, then package that skill with optional app mappings or MCP configuration when it wants a broader rollout.
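As a rough illustration of what "workflow logic in a skill" can look like, here is a minimal sketch following the SKILL.md-with-frontmatter pattern described in OpenAI's Codex skills documentation. The skill name, fields, and steps below are illustrative, not taken from any vendor example.

```markdown
---
name: evidence-log-drafter
description: Drafts an evidence log entry from an intake folder, with confidence labels.
---

# Evidence Log Drafter

1. List the files in the intake folder the user points to.
2. For each document, extract the source, date, and a one-line summary.
3. Output a table with columns: Source | Date | Summary | Confidence.
4. Flag anything that needs human review before client delivery.
```

Notice that nothing here names a connector or an endpoint. The same skill can run against different tool access depending on the client it lands in.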
5. Why desktop clients like OpenAI Codex and Claude Code support deeper project workflows
This is where desktop-oriented agents start to differ materially from a web chat window. OpenAI's Codex documentation describes direct MCP configuration for the CLI and IDE extension through `config.toml`, with support for both local STDIO servers and streamable HTTP servers. That allows a project-level agent workflow to sit next to files, scripts, templates, and private case materials in a way that a remote chat interface usually cannot.
Anthropic's Claude Code documentation describes a similar but distinct model. Claude Code supports remote HTTP servers, local STDIO servers, and project-scoped `.mcp.json` configuration that can be checked into version control for a team. Anthropic also documents approval prompts before project-scoped servers are used, which is the right kind of friction for shared workflows.
For serious operations, this is the practical divide. ChatGPT web is strongest when the organization wants managed remote connectors and workspace publication. OpenAI Codex and Claude Code become more useful when the work needs to live close to a repository, local files, repeatable scripts, or team-owned project configuration. That is also why Using OpenAI Codex Desktop for Research Ingestion, Case Management, and Custom Reports matters more as an operating model than as a generic product review.
6. The clean way to structure cross-client workflows
The most stable design is to separate the reusable workflow from the client-specific connection details. In practice, that usually means keeping three layers distinct:
- the MCP layer: server endpoint, transport, authentication, and allowed actions,
- the skill layer: task instructions, examples, output schema, and review rules,
- the client layer: which surfaces can use the workflow, whether local tools are allowed, and what approvals or admin controls apply.
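In a repository, the first two layers can be made visible as files, while the client layer mostly lives in each client's settings and admin controls. An illustrative layout, with hypothetical names:

```text
project/
├── .mcp.json            # MCP layer: endpoints, transports, allowed servers
├── skills/
│   └── evidence-log/
│       └── SKILL.md     # skill layer: instructions, examples, output schema
└── docs/
    └── approvals.md     # documents the client layer: who may enable what, where
```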
That separation prevents a common failure mode where a team builds one clever demo in ChatGPT or Codex and then assumes the same connector behavior, auth path, or write permissions will exist everywhere else. Usually they will not. The portable part is the workflow logic. The non-portable part is the trust boundary.
This is also why repo-scoped knowledge and file discipline still matter. A reusable skill is much more valuable when it can rely on structured notes, consistent folder names, and predictable artifacts, as described in Building an AI Knowledge Base with Obsidian Notes.
7. The real risk is not setup complexity. It is governance failure.
Both OpenAI and Anthropic explicitly warn about the risk of connecting unsafe or untrusted MCP servers, especially where outside content can introduce prompt injection. That warning should be taken literally. Every new connector expands the instructions and data an AI client can consume, and some of those systems may also be able to write, modify, or trigger external actions.
For research and casework teams, the controls are straightforward even if the implementation is not. Trust the server operator. Separate read access from write access. Require human approval before any external update or client delivery. Keep local project scopes narrow. Log which tools were used whenever the output feeds a report or client briefing. If the workflow touches sensitive materials, the safeguards described in Prompt Injection: The Attack That Rewrites Your AI's Instructions, Private AI Infrastructure for Sensitive Casework, and Confidence Labels and Evidence Logs for Defensible AI Research still apply.
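Those controls can be enforced in code rather than by convention. The sketch below is a hypothetical governance gate, not part of any MCP SDK: it classifies tool calls as read or write by naming convention, blocks writes until a human has approved that tool, and logs every attempt.

```python
from dataclasses import dataclass, field

# Hypothetical governance gate for MCP-style tool calls.
# Assumption: write-capable tools follow a naming convention.
WRITE_PREFIXES = ("create_", "update_", "delete_", "send_")

@dataclass
class ToolGate:
    approved_writes: set = field(default_factory=set)  # tools a human signed off on
    log: list = field(default_factory=list)            # audit trail of attempts

    def call(self, tool_name: str, handler, **kwargs):
        is_write = tool_name.startswith(WRITE_PREFIXES)
        if is_write and tool_name not in self.approved_writes:
            self.log.append((tool_name, "BLOCKED"))
            raise PermissionError(f"{tool_name} needs human approval before running")
        self.log.append((tool_name, "ALLOWED"))
        return handler(**kwargs)

gate = ToolGate()

# Reads pass through and are logged.
gate.call("search_cases", lambda query: f"results for {query}", query="docket 24-1137")

# Writes are blocked until a human approves the specific tool.
try:
    gate.call("update_case", lambda note: "ok", note="draft")
except PermissionError:
    pass

gate.approved_writes.add("update_case")   # the human signoff step
gate.call("update_case", lambda note: "ok", note="reviewed by counsel")
```

The point is not this particular wrapper; it is that read/write separation, approval gates, and an audit log are small amounts of code compared to the cost of an ungoverned connector.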
8. Bottom line
MCP servers and reusable skills are powerful together, but they solve different problems. MCP gives ChatGPT, Claude Code, or OpenAI Codex a standard way to reach tools and systems. Skills tell those clients how to apply that access in a repeatable way. The client surface decides whether the workflow stays remote, becomes repo-scoped, or is allowed to touch local files and project assets.
Teams get the best results when they stop looking for one universal AI integration and start designing deliberately for each surface. Use ChatGPT web when you need managed remote apps and workspace controls. Use OpenAI Codex or Claude Code when the work belongs near local files, repo config, and advanced desktop workflows. Keep the workflow logic portable, keep the permissions narrow, and keep the review layer human.
If you want to design a practical MCP and skills strategy across ChatGPT, OpenAI Codex, and desktop agent workflows without losing control of research quality, case files, or output standards, Daniel Powell can help define the tooling model, review gates, and deployment boundaries. Book an initial strategy call.
Sources
- Model Context Protocol: What is MCP?
- OpenAI Help: Developer mode, and MCP apps in ChatGPT [beta]
- OpenAI Help: Skills in ChatGPT
- OpenAI Help: Using Codex with your ChatGPT plan
- OpenAI Developers: Agent Skills for Codex
- OpenAI Developers: Model Context Protocol for Codex
- Anthropic Claude Code Docs: Connect Claude Code to tools via MCP