In April 2026, something unexpected happened: OpenAI published an official plugin that lets you run Codex directly inside Claude Code. Two of the biggest names in AI, historically competitors, now interoperate in the same agentic coding workflow.
This is worth paying attention to. Here's what it means and how to use it.
What Happened
Claude Code supports MCP (Model Context Protocol), Anthropic's open standard for extending AI agents with tools and integrations. OpenAI built a Claude Code plugin — officially, through their own channels — that exposes Codex capabilities as a tool within Claude Code sessions.
This means you can be running a Claude Code session, hit a task where you want Codex's specific strengths, and invoke it without leaving your workflow. The models collaborate on the same task.
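Under the hood, an MCP integration is just a server entry in Claude Code's configuration. Here's a minimal sketch of what registering a Codex-backed MCP server could look like in a project's `.mcp.json` file; the server name and command are illustrative assumptions, not the plugin's actual manifest:

```json
{
  "mcpServers": {
    "codex": {
      "command": "codex",
      "args": ["mcp-server"],
      "env": {}
    }
  }
}
```

Once a server like this is registered, its tools show up in the session and Claude can call them like any other tool. In practice the official plugin handles this wiring for you.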
Why This Is Significant
For the past few years, the AI tool market has operated on the assumption that you pick a provider and stay in their ecosystem. You use Claude or GPT-4, not both. You use Anthropic's tools or OpenAI's, not both.
That assumption is breaking down. MCP is a big part of why.
When tools expose capabilities through an open protocol, integration becomes easy. OpenAI building a Claude Code plugin isn't a sign of surrender — it's a recognition that developers work in multi-model environments and that ecosystem openness is a competitive advantage, not a liability.
Anthropic has taken the same position. Claude Code can already invoke multiple models, use external tools, browse the web, and execute code. It's designed to be a platform, not a closed product.
What Codex Is Good At
Codex (OpenAI's coding agent; the original Codex model also powered early versions of GitHub Copilot) has specific strengths that complement Claude:
Code completion and synthesis. Codex was trained extensively on code from GitHub and excels at pattern completion — given a partial function or a signature, it fills in the implementation efficiently.
Specific language idioms. For certain languages and frameworks, Codex has absorbed enough training data to generate idiomatic code that follows community conventions naturally.
Speed. Codex's smaller model variants are fast. For tasks where you want a quick, reasonable implementation rather than a deeply reasoned one, the speed-quality trade-off can be worth it.
What Claude Excels At
Claude's strengths as a coding agent are different:
- Complex reasoning about multi-file architecture
- Explaining why something should be done a certain way, not just how
- Following detailed instructions across long contexts
- Debugging with full context of the codebase
- Tasks that require judgment, trade-off analysis, or nuanced decision-making
The two models genuinely complement each other. Using Codex for fast code synthesis and Claude for architecture, review, and complex reasoning is a sensible split.
How to Use It
The plugin installs through Claude Code's standard plugin mechanism. Once installed, you can invoke Codex capabilities directly in your Claude Code session via tool calls.
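As an illustration, installing a plugin in Claude Code typically means adding its marketplace and then installing the plugin by name from inside a session. The marketplace and plugin identifiers below are placeholders, not the actual published names:

```
/plugin marketplace add openai/codex-plugin
/plugin install codex@codex-plugin
```

Check the plugin's own README for the real identifiers before installing.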
A typical workflow might look like:
- Use Claude Code to plan the architecture of a new feature
- Use the Codex plugin to generate boilerplate implementations quickly
- Return to Claude Code for review, refinement, and integration
You can also let Claude Code decide when to invoke Codex — setting up the agent to use Codex for certain classes of tasks automatically.
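One way to set that up is a short routing instruction in the project's CLAUDE.md, which Claude Code reads at session start. The rule below is only a sketch of the idea; adjust the wording and the tool name to whatever the installed plugin actually exposes:

```markdown
## Tool routing

- For boilerplate generation, single-function implementations, and quick
  scaffolding, delegate to the Codex tool before writing code yourself.
- Keep architecture decisions, multi-file refactors, and code review in
  the main Claude session.
```

Because these are plain-language instructions rather than hard rules, it's worth watching a few sessions to confirm the agent delegates the way you intended.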
The Broader Trend
This isn't isolated. The same month, early adopters reported running Cursor, Claude Code, and Codex in coordinated workflows. The "agentic coding stack" is becoming multi-model by default.
This mirrors how professional developers have always worked — using the best tool for each part of the job. A good craftsperson doesn't use only one tool. The same principle applies to AI models.
The implication for developers: stop thinking about AI coding tools as an either/or choice. The question is which models, in which combination, for which tasks in your workflow. The tooling is now flexible enough to support whatever answer makes sense for you.
What to Watch
OpenAI's plugin is a signal that model interoperability is becoming an expected feature, not a differentiator. Expect more of this — more cross-provider integrations, more open protocols, more workflows that mix models based on task requirements rather than vendor loyalty.
If you're heavily invested in one provider's ecosystem, it's worth thinking about how locked-in you actually are. The answer, increasingly, is "not very" — which is good for developers and good for the industry.