▶️ LISTEN TO THIS ARTICLE

Enkrypt AI scanned 1,000 MCP servers last year and found that 33% had at least one critical vulnerability. The average was 5.2 vulnerabilities per server. One popular server had 26 flaws, including 13 command injection bugs rated CVSS 9.8. This is the protocol that won. This is the standard the entire AI industry rallied behind. And it shipped without authentication.

Meanwhile, the agent protocol count keeps climbing: MCP, A2A, ACP, ANP, UTCP, NLIP, A2UI, AG-UI, UCP, AP2. Ten abbreviations, at minimum. Agents still can't reliably talk to each other. Every company in the space celebrates "interoperability" while enterprise teams sit frozen, wondering which acronym to bet their architecture on. The alphabet soup isn't competition. It's a coordination failure dressed up as innovation.

The Acronym Factory

Here's where things stand. MCP (Model Context Protocol), built by Anthropic and released in late 2024, handles how a single agent connects to tools and data sources. Think of it as the plumbing between an LLM and your database, your code repo, your CRM. It hit 97 million monthly SDK downloads and over 10,000 public servers by early 2026. OpenAI adopted it. Google adopted it. Microsoft adopted it. MCP won the tool-calling layer, and it wasn't close.

A2A (Agent-to-Agent Protocol) came from Google in April 2025, targeting a different problem: how multiple agents coordinate tasks between themselves. IBM had its own version called ACP (Agent Communication Protocol), built for the BeeAI platform. By September 2025, ACP merged into A2A under the Linux Foundation. Over 150 organizations signed on, including Adobe, Salesforce, and AWS. On paper, A2A had serious momentum.

Then there's ANP (Agent Network Protocol), which tries to solve peer-to-peer discovery across the open internet using decentralized identifiers. And UTCP (Universal Tool Calling Protocol), a scrappy challenger arguing that MCP's proxy architecture adds needless overhead when agents could just call tools directly through native APIs. And NLIP from Ecma International. And Google's UCP for commerce. And AP2 for agent payments.

Each protocol has a pitch deck. Each has backers. Each solves a slightly different slice of the problem. But here's the thing: the slices don't cleanly compose. Enterprise teams don't want to pick four protocols for four layers. They want one answer, or at most two, and nobody's giving them that.

MCP Won and Then Got Hacked

MCP's dominance deserves scrutiny, not just celebration. The protocol grew so fast that security became an afterthought. Authentication and authorization were originally optional. OAuth 2.0 support arrived in March 2025, refined to OAuth 2.1 by June. But thousands of servers deployed during those early months are still running in production without any auth at all.

The attack surface is genuinely alarming. Pynt's research found that deploying just ten MCP plugins gives attackers a 92% exploit probability. At three servers, risk exceeds 50%. Seventy-two percent of MCP servers expose sensitive capabilities like dynamic code execution and file system access. Thirteen percent accept untrusted inputs from sources like Slack messages, emails, and web scraping. When those two categories overlap, which happens 9% of the time, attackers get direct paths to prompt injection and data exfiltration with zero human approval required.

The specific attack vectors read like a security team's nightmare. Tool poisoning lets malicious metadata hijack LLM behavior. Supply chain backdoors persist through CI/CD pipelines. Name collisions let bad actors impersonate trusted servers. The Postmark MCP npm package was trojanized to BCC every outbound email to an attacker's domain, silently siphoning invoices and password resets. The Smithery supply chain attack in October 2025 hit 3,000 hosted applications. CVE-2025-6514 in the mcp-remote package, downloaded over 500,000 times, allowed arbitrary OS command execution.

OWASP responded by publishing both an MCP Top 10 and a separate Top 10 for Agentic Applications. That's how bad it got: the security community needed two new vulnerability frameworks just to categorize the mess.

33% of MCP servers had at least one critical vulnerability.

Anthropic, to their credit, donated MCP to the Linux Foundation's Agentic AI Foundation (AAIF) in December 2025. OpenAI and Block co-founded it. The governance is now vendor-neutral, at least structurally. But governance doesn't patch the thousands of unpatched servers already deployed. And it doesn't undo the precedent: ship first, secure later.

A2A's Quiet Fade

Google's A2A tells a different story. Where MCP won through developer accessibility, A2A struggled with the opposite problem. It tried to solve every possible agent communication scenario from day one. The specification was thorough but complex. Building a useful tool integration with MCP took an afternoon. Building an A2A implementation required understanding Agent Cards, task lifecycle management, JSON-RPC patterns, and security card configurations.

By September 2025, A2A had "quietly faded into the background," as one analysis put it, even as 150 organizations claimed support. Real enterprise use cases existed (Tyson Foods and Gordon Food Service used A2A for supply chain coordination), but developer adoption lagged far behind MCP. The merger with ACP added IBM's weight but also added confusion: which parts of ACP survived? What's the migration path? Where does BeeAI fit now?

The core issue was timing. By the time A2A reached v0.3, MCP had already captured developer mindshare. And in protocol adoption, mindshare is everything. OSI learned this lesson against TCP/IP in the 1980s. FireWire learned it against USB in the 2000s. The technically superior option doesn't win. The one developers actually use does.

The Coordination Tax, Applied to Protocols

We've written before about the coordination tax in multi-agent systems: adding more agents doesn't automatically mean better outcomes, because coordination overhead eats the gains. The same pattern applies to protocols themselves.

Every new protocol creates integration work. Every integration creates a new attack surface. Every attack surface requires security tooling. Every security tool needs its own maintenance. The cost compounds. Enterprises that adopted MCP for tool-calling now face a second decision for agent-to-agent communication, and a third for discovery, and a fourth for payments.

A Gartner prediction from August 2025 says 40% of enterprise applications will feature task-specific AI agents by 2026, up from under 5% in 2025. That's an eight-fold increase in agent deployments colliding with an unresolved protocol stack. UiPath found that 87% of IT executives call interoperability "very important" or "crucial." Yet fewer than one in four organizations have scaled agents to production. The gap between ambition and execution is partly a protocol problem.

Gartner didn't say "pick MCP and A2A and you'll be fine." Nobody's saying that, because nobody knows if that's true. The AAIF exists to create vendor-neutral oversight, but its founding members include companies with competing commercial interests. Anthropic wants MCP everywhere. Google needs A2A to justify its investment. Both sit on the same foundation board.

The Counterargument Is Weak

The standard rebuttal goes like this: MCP and A2A aren't competitors, they're complementary. MCP handles tool integration. A2A handles agent coordination. Different layers, different jobs. This framing is popular in blog posts and protocol documentation. It's also incomplete.

The technically superior option doesn't win.

The layers don't have clean boundaries. MCP's "sampling" capability lets servers request LLM completions, which starts to overlap with agent-to-agent delegation. A2A's task model can invoke tools, which edges into MCP territory. ANP's decentralized discovery competes with A2A's Agent Card system. UTCP argues the entire MCP proxy layer is unnecessary overhead. The "complementary layers" story falls apart as soon as you look at the edges.

And even if the layers were perfectly clean, enterprises still need to implement, secure, monitor, and maintain multiple protocol stacks simultaneously. The coordination failures that emerge when agents lie to each other multiply when those agents are speaking different protocol dialects across organizational boundaries.

What Would Actually Fix This

The history of protocol wars offers one consistent lesson: convergence happens, but it takes longer than anyone wants and costs more than anyone budgets. TCP/IP beat OSI not because it was technically superior but because it had running code and rough consensus. USB beat FireWire because it was cheap and ubiquitous, not because it was faster. Winners emerge through adoption velocity, not specification quality.

Right now, MCP has the adoption velocity. If A2A wants to survive, it needs the same developer-first simplicity that made MCP stick. Not Agent Cards and task lifecycle specs. Quick starts and working demos. The AAIF could help by publishing reference architectures that actually compose MCP + A2A into a single coherent stack, with a single security model, rather than maintaining them as separate projects that theoretically interoperate.

The security problem needs solving before expansion. A protocol that's 92% exploitable at scale isn't a standard. It's a liability. The OWASP frameworks are a start, but what's missing is enforcement. Mandatory authentication. Signed server manifests. Automated vulnerability scanning as a prerequisite for registry listing.

But here's the prediction that matters: by the end of 2026, most of these protocols will be irrelevant. Not because they'll be formally deprecated, but because developers will have voted with their implementations. MCP will own tool-calling. A2A will either simplify enough to own agent coordination or get absorbed into MCP's expanding scope. Everything else will be a footnote. The alphabet soup will reduce to two or three letters, the same way the web eventually settled on HTTP, HTML, and TCP/IP.

The question for enterprise teams frozen in protocol paralysis right now is whether they can afford to wait for that convergence. Most can't. The practical answer, ugly as it is: pick MCP for tools, A2A for agent coordination, build abstraction layers over both, and budget for ripping them out when the real standard arrives. It's not elegant. But neither is the alternative, which is doing nothing while the protocol wars play out above your head.

Sources

Research Papers:

Industry / Case Studies:

Commentary:

Related Swarm Signal Coverage: