
Three protocols. Three big companies. One question everyone building agents keeps asking: which one do I actually need?

MCP (Anthropic), A2A (Google), and ACP (IBM) launched within months of each other in 2025. The tech press called it a protocol war. Builders on the ground saw something different: three tools solving three distinct problems, awkwardly marketed as competitors.

Here's the thing. The "war" framing was mostly wrong. But it wasn't completely wrong. There are real tensions, real overlaps, and real choices you need to make. This guide breaks down what each protocol does, where they genuinely compete, and what stack you should pick for your use case.

At a Glance

| | MCP | A2A | ACP |
| --- | --- | --- | --- |
| Full name | Model Context Protocol | Agent-to-Agent Protocol | Agent Communication Protocol |
| Created by | Anthropic (2024) | Google (April 2025) | IBM Research (March 2025) |
| Primary purpose | Connect agents to tools and data | Agent-to-agent task delegation | Agent-to-agent messaging |
| Transport | JSON-RPC over stdio / Streamable HTTP | JSON-RPC 2.0 over HTTPS | REST (HTTP) + WebSockets |
| Discovery | Server registry (2026) | Agent Cards (JSON metadata) | Agent descriptors via REST |
| Spec version | 2025-11-25 | 0.3.0 (RC1, Feb 2026) | Merged into A2A (Sep 2025) |
| Governance | Linux Foundation (AAIF) | Linux Foundation (LF AI & Data) | Linux Foundation (merged) |
| Monthly SDK downloads | 97M+ (Python + TypeScript) | Not publicly reported | N/A (merged) |
| Active servers/implementations | 10,000+ | 150+ contributing orgs | Folded into A2A |
| Key backers | Anthropic, OpenAI, Microsoft, Google, AWS | Google, IBM, Salesforce, SAP, Cisco | IBM (now contributing to A2A) |

MCP: The Tool Layer

MCP is the oldest of the three, and it shows. Not in a bad way. It's the most mature, most adopted, and most battle-tested protocol in this comparison.

What It Actually Does

MCP standardizes how an AI model connects to external tools, data sources, and prompts. Think of it as a universal adapter. Before MCP, every tool integration was custom: your Slack connector talked to Claude differently than your database connector. MCP gives them all the same interface.

A model host (Claude Desktop, Cursor, your custom app) connects to MCP servers. Each server exposes tools, resources, and prompt templates through a standard JSON-RPC interface. The model calls tools. The server returns results. Simple.
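The exchange is plain JSON-RPC 2.0. Here is a minimal sketch of one round trip; the tool name `query_database` and its arguments are invented for illustration, not part of any real server.

```python
import json

# A hypothetical MCP-style tool call: the host sends a JSON-RPC
# "tools/call" request, the server answers with content blocks.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",  # hypothetical tool
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}

# The server replies over the same transport (stdio or streamable
# HTTP), echoing the request id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "42"}]},
}

wire = json.dumps(request)  # what actually crosses the transport
assert json.loads(wire)["method"] == "tools/call"
assert response["id"] == request["id"]
```

Every tool, resource, and prompt a server exposes rides on this same request/response shape, which is what makes one client work against any server.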

Adoption Numbers

The scale is hard to argue with. As of early 2026, MCP has crossed 97 million monthly SDK downloads across Python and TypeScript. Over 10,000 active MCP servers are registered in the community. Every major AI provider has adopted it: Anthropic, OpenAI, Google, Microsoft, and Amazon.

In December 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation under the Linux Foundation. The AAIF launched with six co-founders (OpenAI, Anthropic, Google, Microsoft, AWS, and Block) and now has 146 members as of February 2026.

The Real Problems

MCP isn't without friction. The biggest complaint from production teams is context window bloat. Tool schemas, descriptions, and metadata all consume tokens. One documented deployment showed three MCP servers consuming 143,000 of 200,000 available tokens, leaving just 57,000 for actual conversation and reasoning. That's 72% of the context gone before the agent does anything useful.

At Ask 2026 in March, Perplexity's CTO Denis Yarats announced the company is moving away from MCP internally, citing this exact problem. Cloudflare found that exposing 2,500 API endpoints as individual MCP tools would require roughly 244,000 tokens. Their alternative approach uses about 1,000 tokens. That's a 244x reduction.
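The arithmetic behind both complaints is worth making explicit. The figures below are the ones reported above; nothing here is newly measured.

```python
# Token-budget math from the two reported cases above.
context_window = 200_000
mcp_schema_tokens = 143_000          # three servers' schemas (reported)
remaining = context_window - mcp_schema_tokens
assert remaining == 57_000           # ~71.5% of context gone up front

# Cloudflare's case: 2,500 endpoints as individual tools vs. their
# alternative approach.
naive_tokens = 244_000
alternative_tokens = 1_000
assert naive_tokens // alternative_tokens == 244  # the 244x reduction
```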

The MCP roadmap for 2026 acknowledges these issues. Priority areas include transport scalability, enterprise auth, and gateway behavior. The protocol is evolving, but it's evolving under pressure.

For a deeper look at MCP's architecture and implementation: MCP: Model Context Protocol.


A2A: The Agent Collaboration Layer

If MCP gives agents hands (tools), A2A gives them voices. It's the protocol for agents talking to other agents.

What It Actually Does

A2A handles inter-agent communication. One agent discovers another through an Agent Card (a JSON metadata document describing capabilities, authentication, and endpoints). The first agent sends a task. The receiving agent processes it and reports back through a defined lifecycle: submitted, working, input-required, auth-required, completed, failed, canceled, or rejected.
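A minimal Agent Card might look like the following. The field names track the published spec; the agent itself (name, URL, skill) is invented for illustration.

```python
import json

# A hypothetical A2A Agent Card, serialized the way a server would
# publish it for discovery. The agent and endpoint are made up.
agent_card = {
    "name": "summarizer-agent",
    "description": "Summarizes long documents into briefs",
    "url": "https://agents.example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "summarize",
            "name": "Summarize",
            "description": "Condense text to a short brief",
        }
    ],
}

published = json.dumps(agent_card, indent=2)
assert "summarizer-agent" in published
```

A client reads this card, checks the skills and auth requirements, and only then sends a task to the listed endpoint.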

This matters because real multi-agent systems need more than tool calls. A research agent doesn't call a "summarize" tool on another agent. It delegates a task, waits for results, handles failures, and may need to provide additional input mid-stream. A2A formalizes that entire workflow.

The Spec Today

A2A reached version 0.3.0 (Release Candidate 1) in February 2026. Google introduced the protocol in April 2025 and donated it to the Linux Foundation's LF AI & Data group. Over 150 organizations are contributing to the spec.

Key features in the current spec:

  • Agent Cards: JSON documents describing identity, capabilities, skills, endpoints, and auth requirements. Every A2A server must publish one.
  • Task lifecycle: Eight defined states (submitted through rejected) with streaming updates via TaskStatusUpdateEvent and TaskArtifactUpdateEvent.
  • Capability negotiation: Agents declare what they can do. Clients pick the right agent for the job.
  • Streaming: Server-sent events for long-running tasks, with the stream closing when the task hits a terminal state.
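The lifecycle boils down to a small state machine: a client streams or polls updates until the task reaches a terminal state. The state names below follow the spec; the helper function is illustrative, not part of any SDK.

```python
# Sketch of the A2A task state machine. Terminal states end the
# update stream; non-terminal states mean the client keeps listening.
TERMINAL = {"completed", "failed", "canceled", "rejected"}
NON_TERMINAL = {"submitted", "working", "input-required", "auth-required"}

def is_done(state: str) -> bool:
    """Illustrative helper: has the task reached a terminal state?"""
    return state in TERMINAL

assert is_done("completed")
assert not is_done("input-required")  # client must supply more input
```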

Where It Stands

A2A's adoption is earlier-stage than MCP's. Few production systems run it today. But the backing is serious: Google, IBM, Microsoft, AWS, Cisco, Salesforce, ServiceNow, and SAP all sit on the Technical Steering Committee. The protocol is well-positioned for enterprise multi-agent workflows, even if the tooling hasn't caught up yet.

For more on how MCP and A2A fit together: MCP and A2A Convergence.


ACP: The Protocol That Merged

ACP's story is short but important. IBM Research launched it in March 2025 as part of the BeeAI platform, an open-source system for agent interoperability.

What It Did

ACP took a different approach than A2A. It used plain REST endpoints instead of JSON-RPC, supported any MIME type for content (text, images, audio, video), and was deliberately simple enough to test with curl or Postman. Where A2A felt enterprise-grade from day one, ACP prioritized developer ergonomics.

ACP also had strong observability hooks. Security teams could stream logs, enrich threat intelligence, and route indicators through agent pipelines with full audit trails. The protocol was built with IBM's enterprise customers in mind, but the developer experience came first.

The Merger

In September 2025, ACP officially merged with A2A under the Linux Foundation. IBM's Blair joined the A2A Technical Steering Committee. The ACP team began winding down independent development and contributing directly to A2A.

The merger preserved ACP's RESTful simplicity in parts of the A2A spec while incorporating A2A's Agent Cards and task lifecycle management. Migration paths were published for existing ACP users.

This matters for one reason: if you were evaluating ACP independently, stop. It's A2A now. The ideas survived. The brand didn't.


When to Choose What

The framing most articles get wrong is treating this as a one-or-the-other decision. In most real architectures, you'll use both MCP and A2A. They operate at different layers.

Use MCP when your agent needs to connect to external tools, APIs, and data sources. It's the integration layer. If you're building an IDE assistant that queries a database, reads from a CRM, and posts to Slack, MCP standardizes all of those connections.

Use A2A when you have multiple autonomous agents that need to coordinate. A research agent delegates to a summarization agent. A planning agent farms out subtasks to specialist agents. A customer support system routes between triage, billing, and technical agents. A2A handles the delegation, lifecycle management, and capability discovery.

Use both when you're building a production multi-agent system. Your individual agents connect to tools via MCP. Your agents talk to each other via A2A. MCP provides the hands. A2A provides the voice.
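The two-layer split can be sketched in a few lines. Everything here is a hypothetical stand-in: the class, the tool, and the peer agent are invented to show the shape, not real MCP or A2A SDK calls.

```python
# Illustrative sketch: one agent with an MCP layer (its own tools)
# and an A2A layer (peer agents it delegates to). All names invented.
class ResearchAgent:
    def __init__(self, mcp_tools, a2a_peers):
        self.tools = mcp_tools   # MCP layer: this agent's "hands"
        self.peers = a2a_peers   # A2A layer: other agents' "voices"

    def run(self, query):
        # MCP: call a tool this agent owns directly.
        raw = self.tools["web_search"](query)
        # A2A: delegate work a specialist agent is better at.
        return self.peers["summarizer"].send_task({"text": raw})


# Minimal fake peer so the sketch runs end to end.
class FakePeer:
    def send_task(self, task):
        return {"state": "completed", "artifact": task["text"][:20]}


agent = ResearchAgent(
    mcp_tools={"web_search": lambda q: f"results for {q}"},
    a2a_peers={"summarizer": FakePeer()},
)
result = agent.run("protocol adoption")
assert result["state"] == "completed"
```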

Use neither when you're building a single-agent system with a small, fixed set of tools. Perplexity's move is instructive here. If you know exactly what tools your agent needs and you control both sides, direct API integration is simpler, faster, and burns fewer tokens. Not every problem needs a protocol.

What's Actually Competing

The "complementary layers" story is mostly right. But there are real tensions worth understanding.

MCP vs Direct APIs

This is the real competition right now. MCP's value proposition is standardization: write one integration, use it everywhere. The counterargument (from Perplexity, Cloudflare, and others) is that standardization has a token cost. When you know your tools, a direct API call is cheaper, faster, and gives you more context budget for actual work.

The MCP roadmap addresses this. Better registries, lazy-loading tool schemas, and gateway-level caching could cut the bloat significantly. But today, the trade-off is real. Teams running 10+ MCP servers in a single agent pipeline are feeling it.
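One way to read Cloudflare's numbers: instead of inlining thousands of schemas, expose a couple of meta-tools that let the model search for endpoints and fetch a schema only on demand. The sketch below illustrates that pattern; it is not Cloudflare's actual implementation, and the catalog entries are made up.

```python
# Illustration of the lazy-loading pattern: a tiny catalog plus two
# meta-tools replaces thousands of inlined tool schemas in context.
CATALOG = {
    "dns_records_list": "List DNS records for a zone",
    "workers_deploy": "Deploy a Worker script",
    # ...imagine ~2,500 entries living server-side, not in context
}

def search_tools(query: str) -> list[str]:
    """Meta-tool 1: the model searches instead of reading every schema."""
    return [name for name, desc in CATALOG.items()
            if query.lower() in desc.lower()]

def describe_tool(name: str) -> str:
    """Meta-tool 2: fetch one description only when actually needed."""
    return CATALOG[name]

assert search_tools("dns") == ["dns_records_list"]
```

The model's context only ever holds the two meta-tool schemas plus whatever it fetches, which is where order-of-magnitude token savings come from.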

A2A's Agent Cards vs Existing Discovery

A2A's Agent Card system is clean, but it competes with how teams already handle service discovery. If you're running agents on Kubernetes, you have service meshes. If you're on cloud functions, you have API gateways. A2A asks you to adopt another discovery mechanism on top of what you already use.

The counterargument is that Agent Cards carry richer metadata than a health check endpoint. They describe capabilities, not just availability. But the integration tax is nonzero, and teams building on existing infrastructure will weigh that.

The Governance Question

Both MCP and A2A are now under Linux Foundation governance, but through different foundations. MCP sits under the Agentic AI Foundation (AAIF). A2A sits under LF AI & Data. Their member lists overlap heavily (Google, Microsoft, IBM, AWS are in both), but they're governed separately.

This creates a convergence question. Will MCP's experimental "Tasks" primitive grow into something that overlaps with A2A's task lifecycle? Will A2A's agent descriptions start including tool schemas that overlap with MCP servers? The specs are converging at the edges, and nobody has drawn a clean boundary yet.

We covered this tension earlier: Protocol Wars: Nobody's Winning.

FAQ

Can I use MCP and A2A together?

Yes, and you probably should for multi-agent systems. MCP connects each agent to its tools. A2A connects agents to each other. They're designed for different layers of the stack. The AAIF and LF AI & Data foundations have overlapping membership specifically because the protocols are meant to coexist.

Is ACP dead?

As an independent protocol, yes. In September 2025, IBM merged ACP into A2A under the Linux Foundation. ACP's design ideas (RESTful simplicity, MIME type support, observability hooks) influenced the A2A spec. If you were using ACP, follow IBM's published migration guides to A2A.

Should I wait for the protocols to stabilize before adopting?

MCP is stable enough for production. 97 million monthly SDK downloads and 10,000+ servers say so. A2A at version 0.3.0 RC1 is still early. If you need multi-agent orchestration today, A2A gives you a head start on what's coming, but expect breaking changes. For tool integration, there's no reason to wait on MCP.

What about the context window bloat problem?

It's real but solvable. Perplexity reported 72% context consumption from just three MCP servers. The 2026 MCP roadmap includes transport scalability improvements, registry-based schema caching, and gateway optimizations that should reduce token overhead. In the meantime, practical workarounds include lazy-loading tool schemas, using fewer servers per session, and falling back to direct APIs for high-frequency tools.
