▶️ LISTEN TO THIS ARTICLE
In twelve months, Model Context Protocol went from an internal Anthropic experiment to 97 million monthly SDK downloads, 10,000 community-built servers, and first-class support from every major AI provider on the planet. OpenAI adopted it. Google adopted it. Microsoft wired it into Copilot. By December 2025, Anthropic donated the spec to the Linux Foundation's newly formed Agentic AI Foundation, with Amazon, Bloomberg, Cloudflare, and Block as platinum members. The adoption curve isn't fast. It's vertical. And it happened because MCP solved a problem that had been quietly strangling AI integration work for years: the N-times-M connector problem, where every AI application needed bespoke code for every external tool, and nobody wanted to maintain any of it.
The analogy that stuck is USB. Before USB, every peripheral needed its own port: serial for the modem, PS/2 for the keyboard, parallel for the printer, SCSI for the hard drive. Each device required a dedicated driver, a dedicated cable, and a dedicated prayer that it wouldn't conflict with everything else. USB collapsed all of that into a single standard interface. MCP aims to do the same for AI-to-tool integration. One protocol, any model, any tool. The analogy is good enough to explain what MCP does. It's also incomplete enough to hide what MCP gets wrong, and the gaps are exactly where the security problems live.
The Connector Problem Nobody Talks About
Before MCP, connecting an AI model to external tools meant writing custom integration code for every combination. Want Claude to query GitHub? Write a GitHub integration. Want GPT-4 to search Jira? Write a Jira integration. Want Gemini to access a Postgres database? Write a Postgres integration. Each integration had its own authentication flow, its own error handling, its own data formatting logic. LangChain accumulated over 800 tool integrations, each slightly different and brittle in its own way. If you were building an AI application that needed ten tools across three models, you were maintaining thirty custom connectors that nobody enjoyed debugging.
This is the N-times-M problem. N models multiplied by M tools equals N-times-M integrations. MCP reduces it to N-plus-M: each model implements one MCP client, each tool builds one MCP server, and any client can talk to any server through the shared protocol. A GitHub MCP server works with Claude, ChatGPT, Gemini, or any future model that speaks MCP. A Postgres MCP server does the same. Build the connector once, use it everywhere.
The economic incentive is obvious. Before MCP, tool providers had to build separate integrations for every AI platform that mattered. After MCP, they build one server. Developers using agent frameworks like AutoGen, CrewAI, or LangGraph had been dealing with this friction for years: different APIs, different schemas, different auth patterns. MCP doesn't eliminate complexity. It centralizes it behind a single interface and makes each tool provider responsible for its own server instead of expecting every AI application to implement every integration from scratch.
How MCP Actually Works
MCP runs on JSON-RPC 2.0, the same lightweight request-response protocol that powers the Language Server Protocol used by every major code editor. The architecture has three roles. The Host is the AI application: Claude Desktop, VS Code, Cursor, Windsurf, or whatever IDE you're working in. The Client is a protocol handler that lives inside the host, maintaining a one-to-one connection with each MCP server. The Server is a lightweight program that exposes capabilities for the AI to use.
Those capabilities break down into three primitives. Tools are actions the model can execute: run a database query, create a GitHub issue, send a Slack message. Resources are data the model can read: file contents, API responses, documentation pages. Prompts are reusable templates that structure how the model interacts with specific tools or data sources. Tools let the AI do things. Resources let the AI know things. Prompts standardize how it asks.
When a client connects to a server, they negotiate capabilities. The server declares what it offers, the client declares what it supports, and both sides agree on the interaction surface. Tool definitions include JSON Schema for parameters, so the model knows exactly what inputs each tool expects. If a tool call takes time, the server can send progress notifications. If a call needs to stop, the protocol supports cancellation. There's even a reverse channel called sampling, where servers can request the LLM to generate text, which inverts the typical direction of control.
Two transport layers carry these messages. Stdio handles local connections, running MCP servers as child processes that communicate through standard input and output. This is how most desktop setups work: Claude Desktop launches a local MCP server, they talk over stdio, and nothing touches the network. Streamable HTTP handles remote connections, using standard HTTP POST and GET requests with optional server-sent events for streaming. This replaced the earlier SSE-only transport in March 2025 and plays nicely with existing proxies, load balancers, and OAuth headers.
The practical result is that tools become first-class citizens in the AI workflow. A model doesn't just generate text about what it would do. It calls the tool, gets the result, and incorporates it into its reasoning. MCP standardizes how that handshake works so that every tool doesn't need to reinvent the pattern.

Who's Using It and Why It Spread So Fast
The adoption timeline tells the story. Anthropic released MCP in November 2024. By February 2025, over a thousand open-source MCP servers had appeared on GitHub. In March 2025, OpenAI adopted MCP across the Agents SDK, Responses API, and ChatGPT desktop. Sam Altman said publicly that "MCP seems like it will become a standard." In April 2025, Google DeepMind confirmed MCP support for Gemini. Microsoft integrated it into Copilot Studio. AWS wired it into Bedrock. By the end of 2025, the community counted 10,000-plus servers and the protocol had crossed 97 million monthly SDK downloads across npm and PyPI.
The developer tool space moved fastest. Cursor, Windsurf, Cline, Zed, and Sourcegraph all added MCP support. Claude Code uses MCP servers as its primary mechanism for tool integration, connecting to file systems, databases, and APIs through the protocol. Replit's agent uses MCP for code execution and deployment. A React Native engineer at Preply described using the Atlassian MCP server to automatically generate documentation from pull requests, with the AI reading code changes via GitHub MCP and writing docs to Confluence. Developers using MCP servers report 40% fewer tool switches during coding sessions, according to Anthropic's 2025 usage data.
Enterprise adoption followed. Block built internal tooling on MCP. Bloomberg joined the governance structure. Apollo and n8n integrated MCP for workflow automation across hundreds of services. The attraction for enterprises isn't the protocol itself. It's the maintenance reduction: instead of a team maintaining custom integrations for every AI tool combination, they maintain MCP servers for their services and let any AI client connect. The N-plus-M math alone justifies the migration cost for companies running more than a handful of AI-powered tools.
The Security Model Is Not Ready
Here's where the USB analogy falls apart. USB devices don't try to inject malicious instructions into your operating system through their device descriptions. MCP servers can and do.
In February 2025, Invariant Labs published research on what they called Tool Poisoning Attacks. The mechanism: an MCP server embeds malicious instructions in its tool descriptions. These descriptions are invisible to the user in most client interfaces but fully visible to the AI model. The model reads the poisoned description, follows the embedded instructions, and executes actions the user never authorized. Invariant demonstrated that a malicious "random fact of the day" MCP server could silently exfiltrate a user's entire WhatsApp message history by hijacking a legitimate WhatsApp MCP server running in the same client. The attack success rate across tested configurations hit 84.2% when AI agents auto-approved tool calls. Even with human-in-the-loop approval, the attack surface remained because the poisoned descriptions don't appear in the approval dialogs of most implementations.
The numbers from academic research are worse. Evaluation across 20 prominent LLM agents found widespread vulnerability to tool poisoning, with attack success rates as high as 72.8% on o1-mini. Agents rarely refused these attacks. Claude 3.7 Sonnet, the model with the highest refusal rate, still refused less than 3% of the time. Existing safety alignment doesn't help here because the malicious actions use legitimate tools for unauthorized operations. The model isn't jailbroken. It's following instructions that look, to it, like normal tool documentation.
Then there are the infrastructure vulnerabilities. CVE-2025-6514 hit mcp-remote, a popular OAuth proxy used by local MCP clients to connect to remote servers. The bug was straightforward: mcp-remote passed the authorization endpoint URL from the server directly into the system shell without sanitization. A malicious MCP server could craft a URL that achieved remote code execution on the client machine. CVSS score: 9.6 out of 10. The package had over 437,000 downloads before the fix shipped in version 0.1.16.
The structural problem runs deeper than individual CVEs. Most MCP implementations provide no sandboxing between servers. A compromised server can observe and interfere with other servers running in the same client. There's no standard mechanism for verifying server identity before connecting. The "rug pull" attack pattern, where a server behaves normally during initial approval then changes its tool behavior afterward, exploits the assumption that initial trust extends indefinitely. Trail of Bits put it directly: "The security model assumes MCP servers are trusted, but the marketplace incentivizes installing untrusted servers."
This is the npm problem transplanted to AI tool infrastructure. Anyone can publish an MCP server. There's no vetting process. The marketplace rewards breadth over security, and developers install servers the same way they install npm packages, with optimistic trust and minimal review.
What MCP Isn't: Sorting Out the Alternatives
MCP occupies a specific niche: agent-to-tool communication. Understanding what it doesn't do clarifies where the boundaries are.
Function calling, the approach OpenAI pioneered, defines tools at prompt time. You describe available functions in the system message, the model generates structured calls, and your application executes them. This works but it's model-specific. OpenAI's function calling format differs from Anthropic's tool use format, which differs from Google's function calling spec. MCP sits underneath all of these, providing a standard way for any model's function calling mechanism to discover and invoke tools. They're different layers of the same stack, not competitors.
Google's Agent-to-Agent protocol, announced in April 2025 with over 50 launch partners including Atlassian, Salesforce, and PayPal, solves a different problem entirely. A2A handles agent-to-agent communication: how a scheduling agent talks to a calendar agent, how a purchasing agent negotiates with a vendor agent. MCP handles agent-to-tool communication: how an agent accesses a database or calls an API. A2A and MCP are complementary, not competing. A2A uses Agent Cards for discovery, JSON-RPC for transport, and a task lifecycle model where work moves through defined states. In a production system, MCP connects each agent to its tools while A2A handles coordination between agents. Both protocols will likely coexist in any serious multi-agent deployment.
LangChain's tool abstraction is Python-specific and framework-locked. MCP is language-agnostic and framework-independent. OpenAPI describes REST APIs. MCP wraps any capability, not just HTTP endpoints, and adds bidirectional communication that REST can't express. The comparison that matters most is this: LangChain tools, OpenAI function calling, and custom integrations all solve the problem at the application layer. MCP solves it at the protocol layer, which means the solution transfers across applications, models, and languages without reimplementation.
Building Your First MCP Server
The barrier to entry is intentionally low. An MCP server is a program that speaks JSON-RPC 2.0 over one of the supported transports. The official SDKs exist for TypeScript, Python, Java, Kotlin, C#, Swift, and Go. A minimal server that exposes a single tool, say a weather lookup, is under fifty lines of code in Python.
You define your tools with names, descriptions, and JSON Schema parameter definitions. You register handlers that execute when those tools are called. You start the server on a transport layer. That's it. The client discovers your tools through the capability negotiation handshake and presents them to the model as available actions.
The simplicity is deliberate. If building an MCP server required heavy infrastructure, adoption wouldn't have hit 10,000 servers in a year. The tradeoff is that simplicity in server creation means simplicity in malicious server creation too, and the community hasn't built the filtering and verification mechanisms to tell the difference.
For developers building their first AI agent, MCP provides a clean separation of concerns. The agent logic lives in the host application. The tool capabilities live in MCP servers. The protocol handles the plumbing between them. This separation means you can swap tools without touching agent code, swap models without touching tool code, and test each layer independently. It's good software engineering applied to a space that badly needed it.

Discovery, Trust, and the Missing Pieces
The biggest gap in MCP today isn't the protocol itself. It's everything around it.
There's no built-in discovery mechanism. If you want to find MCP servers for a specific capability, you search GitHub, browse community directories, or check Anthropic's registry. There's no standard equivalent of a package manager's search index. A2A solved this with Agent Cards, machine-readable metadata documents that describe what an agent can do and how to connect to it. MCP has nothing comparable in the spec, which means finding and evaluating servers remains a manual process.
Server identity verification doesn't exist as a standard. When you connect to an MCP server, you trust that it is what it claims to be. There's no certificate chain, no signing mechanism, no way to verify that the server binary you downloaded matches the source code in the repository. OAuth 2.1, added in the March 2025 spec revision, handles authorization for remote connections but doesn't address the identity problem. You can authenticate yourself to a server, but you can't authenticate the server to yourself.
Performance overhead is real but manageable. JSON-RPC adds latency compared to direct function calls, measured in single-digit milliseconds for local stdio connections and tens of milliseconds for remote HTTP connections. For most AI workflows, where the LLM inference itself takes hundreds of milliseconds to seconds, the protocol overhead is noise. For latency-sensitive applications calling tools thousands of times per session, it adds up.
Stateful connections create scaling challenges. MCP servers maintain state for each client connection, tracking capability negotiation results and session context. This complicates horizontal scaling: you can't simply load-balance across server instances without session affinity or shared state management. The protocol doesn't define standard rate limiting or quota mechanisms either, leaving each server to implement its own. Getting from lab to production with MCP means solving these operational problems yourself.
Governance and What Comes Next
In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation. OpenAI co-founded the organization. The platinum members read like a who's who of AI infrastructure: AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. Three founding projects anchored the effort: Anthropic's MCP, Block's Goose, and OpenAI's AGENTS.md.
The governance structure separates strategic decisions from technical direction. The AAIF Governing Board handles budget, membership, and project approval. Individual projects like MCP retain full autonomy over their technical roadmap. The Linux Foundation provides neutral infrastructure without dictating engineering choices. This mirrors successful open-source governance models like the Cloud Native Computing Foundation and the Apache Software Foundation.
The spec itself continues to evolve. The June 2025 revision added structured tool outputs, OAuth-based authorization, elicitation for server-initiated user interactions, and improved security guidance. Remote MCP server deployments grew nearly 4x since May 2025, with enterprises investing in server-side implementations for customer-facing AI products. Working groups for security, transport, and discovery are actively shaping the next revision.
The trajectory is clear enough. MCP will become the standard protocol for AI-to-tool integration, the way HTTP became the standard for web communication. Not because it's perfect, but because standardization compounds: every new server makes the protocol more valuable for clients, and every new client makes the protocol more valuable for servers. Network effects are already entrenched. MCP has already won as the connector standard. What remains unresolved is whether the security and governance infrastructure can mature fast enough to match the community's growth.
Where the Analogy Breaks
USB worked because the devices on the other end were predictable. A keyboard sends keystrokes. A printer accepts print jobs. The protocol handles data transfer, not adversarial intent. MCP operates in a fundamentally different environment. The "devices" on the other end are programs that return natural language, language that gets fed directly into a model's reasoning process. A malicious USB device can crash a driver. A malicious MCP server can hijack an agent's behavior, exfiltrate data through another server's tools, or execute arbitrary code on the host machine.
The protocol needs guardrails that USB never did. Server sandboxing so that one compromised connection can't infect others. Tool description transparency so users see exactly what models see. Cryptographic identity verification so servers can't impersonate trusted providers. Behavioral monitoring so "rug pull" attacks trigger alerts when tool behavior changes post-approval. None of these exist in the current spec. The community is building toward them, but the gap between adoption growth and security maturity is the defining risk of MCP's current moment.
I've been using MCP daily through Claude Code, connecting to GitHub, file systems, and documentation servers. The productivity gain is real. The convenience of saying "look up the docs for this library" and having the model pull live documentation through an MCP server, instead of working from stale training data, changes how you build software. That convenience is exactly what makes the security posture dangerous. The protocol is good enough that people use it without thinking, and "without thinking" is where the attack surface lives.
MCP is genuinely important. It's the first credible standard for how AI connects to the external world, and the industry has rallied behind it faster than almost any protocol in recent memory. The spec is solid. The governance is in the right hands. The adoption numbers are staggering. What's missing is the immune system: the tooling, processes, and verification mechanisms that turn a fast-growing protocol into a trustworthy one. The types of agents we're building today will only grow more autonomous and more connected. The protocol that wires them into the world needs to be ready for that, and right now, it's running ahead of its own safety infrastructure.
Sources
Research Papers:
- MCP Security Notification: Tool Poisoning Attacks — Invariant Labs (2025)
- A Benchmark for Tool Poisoning Attack on Real-World MCP Servers (MCPTox) — arXiv (2025)
- An Automated Framework for Implicit Tool Poisoning in MCP (MCP-ITP) — arXiv (2026)
- Systematic Analysis of MCP Security — arXiv (2025)
- Securing the Model Context Protocol: Risks, Controls, and Governance — arXiv (2025)
Industry / Case Studies:
- Critical RCE Vulnerability in mcp-remote: CVE-2025-6514 — JFrog (2025)
- Announces Formation of the Agentic AI Foundation (AAIF) — Linux Foundation (2025)
- Donating the Model Context Protocol — Anthropic (2025)
- One Year of MCP: November 2025 Spec Release — Model Context Protocol Blog (2025)
- The State of MCP: Adoption, Security & Production Readiness — Zuplo (2025)
- Model Context Protocol has prompt injection security problems — Simon Willison (2025)
Commentary:
- A Year of MCP: From Internal Experiment to Industry Standard — Pento (2025)
- The Model Context Protocol's Impact on 2025 — Thoughtworks (2025)
- Building MCP Servers in the Real World — The Pragmatic Engineer (2025)
- MCP Tools: Attack Vectors and Defense Recommendations — Elastic Security Labs (2025)
Related Swarm Signal Coverage: