Why Was MCP Created? The Problem It Solves

Before MCP existed, connecting an AI application to an external tool meant writing a custom integration from scratch. If you wanted Claude to access your company's database, you built a connector for that. If you also wanted it to pull data from Slack, that was a separate integration. And if you then wanted ChatGPT to do the same things, you started from zero again.

This is known as the N×M integration problem. With N AI applications and M external tools, you end up with N × M custom connectors to build and maintain. Every new tool or AI platform multiplies the complexity.

MCP eliminates this by introducing a standard protocol layer. Tool providers build one MCP server, and it works with any MCP-compatible AI application. AI application developers build one MCP client, and it can connect to any MCP server. The result: N plus M integrations instead of N times M.
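The scaling difference is easy to make concrete with a few lines of arithmetic (the 5-app, 20-tool figures below are just an illustration):

```python
# Connector counts for N AI applications and M external tools.

def custom_connectors(n_apps: int, m_tools: int) -> int:
    """Point-to-point: every app needs its own connector to every tool."""
    return n_apps * m_tools

def mcp_connectors(n_apps: int, m_tools: int) -> int:
    """With MCP: one client per app plus one server per tool."""
    return n_apps + m_tools

print(custom_connectors(5, 20))  # 100 bespoke integrations
print(mcp_connectors(5, 20))     # 25 MCP components
```

Adding a 21st tool under the old model means 5 new connectors; under MCP it means one new server.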

For enterprises, this shift is significant. It means less engineering time spent on plumbing, faster time-to-value for AI deployments, and a common interface that security and operations teams can actually govern.

How Does MCP Work?

MCP follows a client-server architecture built on top of JSON-RPC 2.0, a lightweight remote procedure call protocol. The interaction between an MCP client and server follows a structured lifecycle:

1. Initialization. The client sends an initialize request to the server. Both sides exchange their supported capabilities (which primitives they handle, which features they support) and agree on a protocol version. This capability negotiation ensures that the client and server can communicate effectively before any data is exchanged.

2. Discovery. Once the connection is established, the client queries the server to learn what it offers. This typically starts with listing available tools, resources, and prompts. Discovery is dynamic, meaning the set of available tools can change over time, and the server can notify the client when it does.

3. Execution. The client sends requests to the server to call tools, read resources, or use prompt templates. The server processes those requests and returns structured responses. Each request-response pair is correlated by a unique ID, following standard JSON-RPC conventions.

4. Notifications. Either party can send one-way notifications that don't require a response. These are used for real-time updates, like when a server's tool list changes or when progress needs to be reported on a long-running operation.

This lifecycle makes MCP a stateful protocol. The connection between a client and server is maintained over time, and both sides are aware of each other's capabilities throughout the session.
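Concretely, the lifecycle begins with a JSON-RPC request like the sketch below. The field names mirror the MCP specification's initialize shape, but the protocol version string and capabilities shown here are illustrative:

```python
import json

# A sketch of the first message in the MCP lifecycle: a JSON-RPC 2.0
# "initialize" request from client to server. The "id" field is what
# correlates this request with the server's eventual response.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative version string
        "capabilities": {"sampling": {}},  # features this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

The server replies with its own capabilities and the agreed protocol version, after which discovery can begin.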

MCP Architecture and Core Components

MCP's architecture has three main participants and two protocol layers. Understanding these is essential for anyone building with or evaluating MCP.

MCP Hosts

The host is the AI application that the end user interacts with. Claude Desktop, Visual Studio Code, ChatGPT, and Cursor are all examples of MCP hosts. The host is responsible for managing one or more MCP clients and coordinating their interactions with servers.

When you open Claude Desktop and it connects to a filesystem server and a Slack server, Claude Desktop is the host managing both of those connections.

MCP Clients

Each MCP client is a component within the host that maintains a dedicated connection to a single MCP server. If the host connects to three different MCP servers, it creates three separate client instances. The client handles protocol-level communication: sending requests, receiving responses, and managing the connection lifecycle.

Clients are the bridge between the host application's logic and the external capabilities that MCP servers provide.

MCP Servers

An MCP server is a program that exposes tools, resources, and prompts to MCP clients. Servers can run locally on the same machine as the host (using the stdio transport) or remotely on a separate server (using the Streamable HTTP transport).

For example, the official Sentry MCP server runs remotely on Sentry's platform. A filesystem MCP server typically runs locally, reading and writing files on the user's own machine.

The growing ecosystem of MCP servers includes connectors for databases, cloud services, developer tools, CRMs, project management platforms, and much more.

Transport Layer

MCP supports two transport mechanisms that determine how clients and servers communicate:

  • Stdio (Standard Input/Output). Used for local MCP servers that run on the same machine as the host. Communication happens through standard input and output streams, with no network overhead. This is the simplest and fastest transport option.

  • Streamable HTTP. Used for remote MCP servers. The client sends HTTP POST requests to the server, and the server can stream responses back using Server-Sent Events (SSE). This transport supports standard HTTP authentication methods, including OAuth, bearer tokens, and API keys.

The transport layer is abstracted from the data layer, so the same JSON-RPC messages work identically regardless of whether the server is local or remote.
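To make the stdio case concrete, here is a minimal sketch of its framing, assuming the spec's newline-delimited JSON convention (real SDKs handle this for you):

```python
import json

# Stdio transport framing sketch: each JSON-RPC message is one line of
# JSON written to stdout / read from stdin. No network stack involved.

def frame_message(message: dict) -> bytes:
    """Serialize one JSON-RPC message for a stdio transport."""
    return (json.dumps(message) + "\n").encode("utf-8")

def parse_messages(stream: bytes) -> list[dict]:
    """Split a stdio byte stream back into individual JSON-RPC messages."""
    return [json.loads(line) for line in stream.splitlines() if line.strip()]

ping = {"jsonrpc": "2.0", "id": 7, "method": "ping"}
assert parse_messages(frame_message(ping)) == [ping]
```

With Streamable HTTP, the same JSON payload would travel in an HTTP POST body instead; the message itself is unchanged, which is the point of the abstraction.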

Key MCP Concepts

MCP defines a set of primitives that structure how context and capabilities flow between clients and servers. These are the building blocks that developers work with when building MCP servers or integrating MCP into AI applications.

Tools

Tools are executable functions that an MCP server exposes to AI applications. When an AI agent decides it needs to take an action, like querying a database, sending a message, or creating a file, it invokes a tool.

Each tool has a name, a description (which the AI model uses to decide when and how to call it), and an input schema that defines the expected parameters. Tools are the most commonly used MCP primitive and the one that gives AI agents the ability to do things in the real world, not just answer questions.

Examples of MCP tools include running a SQL query, searching a codebase, creating a calendar event, or calling an external API.
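A tool definition, roughly as it would appear in a tools/list response, looks like this (the run_sql_query tool itself is a made-up example):

```python
# A tool is described by a name, a description the model reads when
# deciding whether to call it, and a JSON Schema for its inputs.
run_query_tool = {
    "name": "run_sql_query",
    "description": "Execute a read-only SQL query and return the rows.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "SQL SELECT statement"},
            "limit": {"type": "integer", "default": 100},
        },
        "required": ["query"],
    },
}
```

The description and schema are what the AI model actually sees, which is also why tool metadata is a security surface (see the tool-poisoning risk discussed later in this article).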

Resources

Resources provide contextual data to the AI application without executing an action. They are read-only data sources that the AI model can use to inform its responses.

A resource might be the contents of a file, the schema of a database, a list of records from a CRM, or the current state of a configuration. Resources are identified by URIs and can be listed and read by MCP clients.

The key difference between tools and resources: tools perform actions, resources provide information.

Prompts

Prompts are reusable interaction templates that MCP servers can expose to clients. They help structure how a user or AI model interacts with the server's capabilities.

For example, an MCP server for a database might provide a prompt template that includes a system message explaining the schema, a few-shot example of how to construct queries, and placeholders for the user's question. Prompts make it easier for AI applications to use tools and resources effectively.
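A minimal sketch of such a template, using plain string formatting (the schema and wording are invented):

```python
# A prompt template with placeholders for values the client fills in
# at use time. A real MCP server returns these as structured messages;
# the plain string here just illustrates the idea.
SQL_HELP_PROMPT = (
    "You are querying a database with this schema:\n{schema}\n\n"
    "Example: to count users, run `SELECT COUNT(*) FROM users;`\n\n"
    "User question: {question}"
)

rendered = SQL_HELP_PROMPT.format(
    schema="users(id, name, email)",
    question="How many users signed up this week?",
)
print(rendered)
```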

Sampling

Sampling is a client-side primitive that allows MCP servers to request language model completions from the host application. This is useful when a server needs to generate text but wants to remain model-agnostic and avoid bundling its own LLM SDK.

For example, an MCP server that generates documentation could use sampling to ask the host's language model to write a summary, rather than calling a specific model API directly.
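The server side of a sampling exchange is an ordinary JSON-RPC request sent back to the client. The method name follows the MCP specification; the message content here is illustrative:

```python
# Sketch of a sampling request: the server asks the host's model for a
# completion instead of calling a model API itself, so it stays
# model-agnostic.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text",
                         "text": "Summarize the attached module docs."}}
        ],
        "maxTokens": 200,
    },
}
```

The host can review the request (and its response) before anything reaches the model, which keeps a human-controllable checkpoint in the loop.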

MCP vs. RAG: What Is the Difference?

MCP and Retrieval-Augmented Generation (RAG) are often mentioned together, but they solve different problems and operate at different layers.

RAG is a technique for enriching an LLM's context by retrieving relevant documents from a vector database or knowledge base before generating a response. It is primarily focused on improving the accuracy and relevance of text generation by grounding it in specific data.

MCP is a protocol for connecting AI applications to external systems. It covers not just data retrieval but also tool execution, prompt management, and real-time interaction with live services.

These two approaches are complementary. An MCP server could use RAG internally to retrieve relevant documents and expose them as resources. Or an AI application could use RAG for knowledge retrieval and MCP for taking actions based on that knowledge.

Benefits of Using MCP

Reduced Hallucinations Through Real-Time Data Access

One of the biggest limitations of LLMs is that they can only work with the context they're given. When that context is incomplete or outdated, the model fills in the gaps, often incorrectly. MCP lets AI applications pull real-time data from authoritative sources at the moment it's needed, which directly reduces the risk of hallucinated responses.

Simplified Integration for Developers

Before MCP, every AI-to-tool integration was a bespoke project. MCP provides a standard interface that developers can build against once. This reduces development time, lowers maintenance burden, and makes it practical to connect AI applications to dozens of tools without writing dozens of custom integrations.

Increased AI Utility and Automation

MCP transforms AI from a system that only generates text into one that can take action. With MCP tools, an AI agent can create tickets in Jira, update records in Salesforce, commit code to GitHub, send messages in Slack, and much more. This is the foundation of agentic AI, where AI systems don't just advise but execute. Platforms like Agen.co provide a secure MCP gateway that lets organizations enable this level of automation while maintaining full control over which tools agents can access.

Vendor-Agnostic and Open Source

MCP is an open protocol, not a proprietary product. It was created by Anthropic but is now supported across the industry, including by OpenAI, Google, Microsoft, and a wide range of developer tool companies. Building on MCP means you're not locked into any single AI vendor.

MCP Security Considerations

MCP opens a powerful new surface area for AI applications, and that power comes with real security implications. When an AI agent can call tools that modify databases, access internal systems, or send messages on behalf of users, the stakes are high.

Authentication and Authorization

MCP supports OAuth 2.0 and bearer token authentication for remote servers. But authentication alone doesn't solve the governance challenge. Organizations need to control which agents can access which tools, for which users, and under what conditions.

This is where traditional access management models fall short. They weren't designed for a world where AI agents act on behalf of users, making delegated access decisions at machine speed. Solving this requires a purpose-built MCP security layer that understands agent identity, delegation, and context.

Common Security Risks

Several security risks are specific to MCP deployments:

  • Tool poisoning. A malicious MCP server could expose tools with descriptions designed to manipulate the AI model's behavior, effectively injecting instructions through tool metadata.

  • Excessive permissions. Without fine-grained authorization, an MCP server might expose tools that grant broader access than intended.

  • Data exfiltration. AI agents connected to multiple MCP servers could inadvertently leak data between systems if access boundaries aren't enforced.

  • Prompt injection via tool results. A compromised tool could return results containing instructions that manipulate the AI model's subsequent behavior.

Best Practices for Securing MCP

Security teams evaluating MCP deployments should prioritize:

  • Least-privilege access. Scope tool access to the minimum permissions required for each agent and user.

  • Audit logging. Log every tool call, including who initiated it, what parameters were passed, and what was returned. Real-time observability across agent interactions is essential for detecting anomalous behavior before it causes damage.

  • Server verification. Only connect to MCP servers from trusted sources, and validate their behavior before granting production access.

  • Centralized governance. Use a control plane that gives security and operations teams visibility and policy enforcement across all MCP connections.
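The audit-logging practice above can be sketched as a wrapper around tool calls (the log fields and the echo tool are illustrative, not taken from any particular SDK):

```python
import json
import time

# Record who called which tool, with what parameters, and what came
# back -- the minimum needed to reconstruct agent behavior later.
audit_log: list[dict] = []

def audited_call(user: str, tool_name: str, tool_fn, **params):
    """Invoke a tool and append a structured audit record."""
    result = tool_fn(**params)
    audit_log.append({
        "ts": time.time(),
        "user": user,
        "tool": tool_name,
        "params": params,
        "result_preview": json.dumps(result)[:200],  # truncate large payloads
    })
    return result

def echo(text: str) -> dict:
    return {"echoed": text}

audited_call("alice@example.com", "echo", echo, text="hello")
```

In production this record would go to an append-only store rather than an in-memory list, so agents cannot tamper with their own trail.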

Agen.co was built specifically to address this governance gap. It sits between AI agents and the tools they connect to, enforcing identity-aware access control, fine-grained tool authorization, data governance policies, and full audit trails across every MCP interaction. For enterprises scaling AI agent deployments, a governance layer for MCP is not optional.

Real-World MCP Use Cases

MCP is already powering production workflows across a range of industries and use cases:

  • AI coding assistants. Tools like Claude Code and Cursor use MCP to connect to Git repositories, file systems, and CI/CD pipelines. Developers can ask their AI assistant to read code, create branches, run tests, and deploy changes, all through MCP.

  • Enterprise chatbots. Organizations are building internal chatbots that connect to CRMs, databases, and ticketing systems through MCP servers. An employee can ask a question and the agent pulls live data from Salesforce, queries a Postgres database, or creates a Jira ticket, all in one conversation.

  • Workflow automation. AI agents use MCP to orchestrate multi-step workflows across tools. For example, an agent could read a customer support ticket, look up the customer's account in a CRM, check their recent orders in a database, and draft a response, all by calling different MCP tools in sequence.

  • Data analysis. MCP makes it possible for AI applications to connect to multiple data sources across an organization and run queries on behalf of users. Business analysts can use natural language to explore data without writing SQL or switching between dashboards.

  • Design-to-code. Some development teams use MCP to connect AI models to design tools like Figma, enabling the AI to read design files and generate corresponding frontend code.

MCP Ecosystem and Adoption

MCP adoption has accelerated rapidly since its release. What started as an Anthropic project has become an industry-wide standard.

Major platform support. Claude, ChatGPT, Gemini, Visual Studio Code, Cursor, Windsurf, Replit, and many other AI platforms now support MCP as clients. This means MCP servers built for one platform work across all of them.

Growing server ecosystem. There are now thousands of community-built MCP servers covering everything from cloud infrastructure (AWS, GCP, Azure) to productivity tools (Slack, Notion, Google Calendar) to developer tools (GitHub, Sentry, Datadog). The official MCP server registry catalogs many of these.

Enterprise adoption. Companies like Block, Salesforce, and Apollo have integrated MCP into their systems. Development tool companies including Zed, Replit, Codeium, and Sourcegraph have adopted MCP to enhance their platforms.

Industry backing. OpenAI, Google DeepMind, and Microsoft have all added MCP support to their platforms, signaling that MCP has crossed the threshold from a single-vendor initiative to a broadly accepted standard.

How to Get Started with MCP

Getting started with MCP depends on your role in the ecosystem.

If you want to use MCP tools as an end user: The fastest path is through an AI application that already supports MCP, like Claude Desktop, VS Code with Copilot, or Cursor. You can install pre-built MCP servers and immediately start using tools, resources, and prompts within your AI workflows.

If you want to build an MCP server: The official MCP SDKs are available for TypeScript, Python, Java, and other languages. The SDKs abstract away protocol-level details and let you focus on defining tools, resources, and prompts for your service.

A basic MCP server can be built in under 100 lines of code. You define the tools you want to expose, implement the handlers for each tool, and the SDK handles all the JSON-RPC communication, capability negotiation, and transport management.
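To give a feel for what the SDKs automate, here is a toy dispatcher handling tools/list and tools/call over plain dicts. This is not the official SDK API (a real server should use the SDK); it only shows how little logic a server author actually writes:

```python
# A single registered tool: description, input schema, and handler.
TOOLS = {
    "add": {
        "description": "Add two integers.",
        "inputSchema": {"type": "object",
                        "properties": {"a": {"type": "integer"},
                                       "b": {"type": "integer"}},
                        "required": ["a", "b"]},
        "handler": lambda args: args["a"] + args["b"],
    }
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to the matching tool logic."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"],
                             "inputSchema": t["inputSchema"]}
                            for n, t in TOOLS.items()]}
    elif request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        value = tool["handler"](request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

call = {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
        "params": {"name": "add", "arguments": {"a": 2, "b": 3}}}
print(handle(call)["result"]["content"][0]["text"])  # → 5
```

Everything else (transport framing, capability negotiation, notifications) is exactly the boilerplate the SDKs take off your hands.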

If you want to build an MCP client: The SDKs also support client development for teams building AI applications that need to connect to MCP servers. The client SDK manages connection lifecycle, tool discovery, and execution.

The Future of MCP

MCP is still evolving, and several developments are shaping its trajectory:

Remote server maturity. Early MCP adoption was heavily focused on local servers running on developer machines. The ecosystem is now shifting toward remote, cloud-hosted MCP servers that can serve entire organizations. This brings new requirements around authentication, multi-tenancy, and scalability.

Authentication standardization. The MCP specification is converging on OAuth 2.0 as the recommended authentication mechanism for remote servers. This is critical for enterprise adoption, where identity and access management are non-negotiable.

Governance infrastructure. As MCP deployments scale from individual developer tools to organization-wide AI agent networks, the need for centralized governance becomes urgent. Organizations need to control which agents can access which tools, enforce data policies, and maintain compliance. This is the layer that platforms like Agen.co are building, providing the security and governance infrastructure for AI agents that makes it safe to deploy MCP at enterprise scale.

Agent-to-agent communication. While MCP handles how AI applications connect to tools and data, complementary protocols like Google's Agent-to-Agent (A2A) protocol are emerging to handle communication between AI agents. MCP and A2A are expected to work together, with MCP providing tool connectivity and A2A enabling agent collaboration.

Expanding primitive types. The MCP specification continues to evolve, with experimental features like Tasks (for long-running operations) and Elicitation (for requesting user input) being added. These primitives are making MCP capable of handling increasingly complex, multi-step workflows.

Frequently Asked Questions About MCP

What does MCP stand for?
MCP stands for Model Context Protocol. It is an open-source standard for connecting AI applications to external tools, data sources, and services.

Who created MCP?
MCP was created by Anthropic and open-sourced in November 2024. It has since been adopted by major technology companies including OpenAI, Google, and Microsoft.

Is MCP free to use?
Yes. MCP is an open-source protocol released under a permissive license. The specification, SDKs, and reference implementations are all freely available.

What programming languages support MCP?
Official MCP SDKs are available for TypeScript, Python, Java, Kotlin, and C#. Community SDKs exist for additional languages including Go, Rust, Ruby, and Swift.

How is MCP different from an API?
An API is a specific interface for accessing a particular service. MCP is a protocol that standardizes how AI applications discover, connect to, and interact with many different services through a common interface. MCP servers often wrap existing APIs and expose them in a format that AI applications can use natively.

Is MCP secure?
MCP supports authentication via OAuth 2.0 and bearer tokens, but the protocol itself is only part of the security picture. Organizations deploying MCP at scale need governance infrastructure to manage access control, audit agent behavior, and enforce data policies across all MCP connections.

Can MCP and RAG work together?
Yes. MCP and RAG are complementary. RAG is a technique for improving response accuracy by retrieving relevant documents, while MCP is a protocol for connecting AI apps to external systems. An MCP server can use RAG internally, and an AI application can use both approaches simultaneously.

Why Was MCP Created? The Problem It Solves

Before MCP existed, connecting an AI application to an external tool meant writing a custom integration from scratch. If you wanted Claude to access your company's database, you built a connector for that. If you also wanted it to pull data from Slack, that was a separate integration. And if you then wanted ChatGPT to do the same things, you started from zero again.

This is known as the N x M integration problem. With N AI applications and M external tools, you end up with N times M custom connectors to build and maintain. Every new tool or AI platform multiplies the complexity.

MCP eliminates this by introducing a standard protocol layer. Tool providers build one MCP server, and it works with any MCP-compatible AI application. AI application developers build one MCP client, and it can connect to any MCP server. The result: N plus M integrations instead of N times M.

For enterprises, this shift is significant. It means less engineering time spent on plumbing, faster time-to-value for AI deployments, and a common interface that security and operations teams can actually govern.

How Does MCP Work?

MCP follows a client-server architecture built on top of JSON-RPC 2.0, a lightweight remote procedure call protocol. The interaction between an MCP client and server follows a structured lifecycle:

1. Initialization. The client sends an initialize request to the server. Both sides exchange their supported capabilities (which primitives they handle, which features they support) and agree on a protocol version. This capability negotiation ensures that the client and server can communicate effectively before any data is exchanged.

2. Discovery. Once the connection is established, the client queries the server to learn what it offers. This typically starts with listing available tools, resources, and prompts. Discovery is dynamic, meaning the set of available tools can change over time, and the server can notify the client when it does.

3. Execution. The client sends requests to the server to call tools, read resources, or use prompt templates. The server processes those requests and returns structured responses. Each request-response pair is correlated by a unique ID, following standard JSON-RPC conventions.

4. Notifications. Either party can send one-way notifications that don't require a response. These are used for real-time updates, like when a server's tool list changes or when progress needs to be reported on a long-running operation.

This lifecycle makes MCP a stateful protocol. The connection between a client and server is maintained over time, and both sides are aware of each other's capabilities throughout the session.

MCP Architecture and Core Components

MCP's architecture has three main participants and two protocol layers. Understanding these is essential for anyone building with or evaluating MCP.

MCP Hosts

The host is the AI application that the end user interacts with. Claude Desktop, Visual Studio Code, ChatGPT, and Cursor are all examples of MCP hosts. The host is responsible for managing one or more MCP clients and coordinating their interactions with servers.

When you open Claude Desktop and it connects to a filesystem server and a Slack server, Claude Desktop is the host managing both of those connections.

MCP Clients

Each MCP client is a component within the host that maintains a dedicated connection to a single MCP server. If the host connects to three different MCP servers, it creates three separate client instances. The client handles protocol-level communication: sending requests, receiving responses, and managing the connection lifecycle.

Clients are the bridge between the host application's logic and the external capabilities that MCP servers provide.

MCP Servers

An MCP server is a program that exposes tools, resources, and prompts to MCP clients. Servers can run locally on the same machine as the host (using the stdio transport) or remotely on a separate server (using the Streamable HTTP transport).

For example, the official Sentry MCP server runs remotely on Sentry's platform. A filesystem MCP server typically runs locally, reading and writing files on the user's own machine.

The growing ecosystem of MCP servers includes connectors for databases, cloud services, developer tools, CRMs, project management platforms, and much more.

Transport Layer

MCP supports two transport mechanisms that determine how clients and servers communicate:

  • Stdio (Standard Input/Output). Used for local MCP servers that run on the same machine as the host. Communication happens through standard input and output streams, with no network overhead. This is the simplest and fastest transport option.

  • Streamable HTTP. Used for remote MCP servers. The client sends HTTP POST requests to the server, and the server can stream responses back using Server-Sent Events (SSE). This transport supports standard HTTP authentication methods, including OAuth, bearer tokens, and API keys.

The transport layer is abstracted from the data layer, so the same JSON-RPC messages work identically regardless of whether the server is local or remote.

Key MCP Concepts

MCP defines a set of primitives that structure how context and capabilities flow between clients and servers. These are the building blocks that developers work with when building MCP servers or integrating MCP into AI applications.

Tools

Tools are executable functions that an MCP server exposes to AI applications. When an AI agent decides it needs to take an action, like querying a database, sending a message, or creating a file, it invokes a tool.

Each tool has a name, a description (which the AI model uses to decide when and how to call it), and an input schema that defines the expected parameters. Tools are the most commonly used MCP primitive and the one that gives AI agents the ability to do things in the real world, not just answer questions.

Examples of MCP tools include running a SQL query, searching a codebase, creating a calendar event, or calling an external API.

Resources

Resources provide contextual data to the AI application without executing an action. They are read-only data sources that the AI model can use to inform its responses.

A resource might be the contents of a file, the schema of a database, a list of records from a CRM, or the current state of a configuration. Resources are identified by URIs and can be listed and read by MCP clients.

The key difference between tools and resources: tools perform actions, resources provide information.

Prompts

Prompts are reusable interaction templates that MCP servers can expose to clients. They help structure how a user or AI model interacts with the server's capabilities.

For example, an MCP server for a database might provide a prompt template that includes a system message explaining the schema, a few-shot example of how to construct queries, and placeholders for the user's question. Prompts make it easier for AI applications to use tools and resources effectively.

Sampling

Sampling is a client-side primitive that allows MCP servers to request language model completions from the host application. This is useful when a server needs to generate text but wants to remain model-agnostic and avoid bundling its own LLM SDK.

For example, an MCP server that generates documentation could use sampling to ask the host's language model to write a summary, rather than calling a specific model API directly.

MCP vs. RAG: What is the Difference?

MCP and Retrieval-Augmented Generation (RAG) are often mentioned together, but they solve different problems and operate at different layers.

RAG is a technique for enriching an LLM's context by retrieving relevant documents from a vector database or knowledge base before generating a response. It is primarily focused on improving the accuracy and relevance of text generation by grounding it in specific data.

MCP is a protocol for connecting AI applications to external systems. It covers not just data retrieval but also tool execution, prompt management, and real-time interaction with live services.

These two approaches are complementary. An MCP server could use RAG internally to retrieve relevant documents and expose them as resources. Or an AI application could use RAG for knowledge retrieval and MCP for taking actions based on that knowledge.

Benefits of Using MCP

Reduced Hallucinations Through Real-Time Data Access

One of the biggest limitations of LLMs is that they can only work with the context they're given. When that context is incomplete or outdated, the model fills in the gaps, often incorrectly. MCP lets AI applications pull real-time data from authoritative sources at the moment it's needed, which directly reduces the risk of hallucinated responses.

Simplified Integration for Developers

Before MCP, every AI-to-tool integration was a bespoke project. MCP provides a standard interface that developers can build against once. This reduces development time, lowers maintenance burden, and makes it practical to connect AI applications to dozens of tools without writing dozens of custom integrations.

Increased AI Utility and Automation

MCP transforms AI from a system that only generates text into one that can take action. With MCP tools, an AI agent can create tickets in Jira, update records in Salesforce, commit code to GitHub, send messages in Slack, and much more. This is the foundation of agentic AI, where AI systems don't just advise but execute. Platforms like Agen.co provide a secure MCP gateway that lets organizations enable this level of automation while maintaining full control over which tools agents can access.

Vendor-Agnostic and Open Source

MCP is an open protocol, not a proprietary product. It was created by Anthropic but is now supported across the industry, including by OpenAI, Google, Microsoft, and a wide range of developer tool companies. Building on MCP means you're not locked into any single AI vendor.

MCP Security Considerations

MCP opens a powerful new surface area for AI applications, and that power comes with real security implications. When an AI agent can call tools that modify databases, access internal systems, or send messages on behalf of users, the stakes are high.

Authentication and Authorization

MCP supports OAuth 2.0 and bearer token authentication for remote servers. But authentication alone doesn't solve the governance challenge. Organizations need to control which agents can access which tools, for which users, and under what conditions.

This is where traditional access management models fall short. They weren't designed for a world where AI agents act on behalf of users, making delegated access decisions at machine speed. Solving this requires a purpose-built MCP security layer that understands agent identity, delegation, and context.

Common Security Risks

Several security risks are specific to MCP deployments:

  • Tool poisoning. A malicious MCP server could expose tools with descriptions designed to manipulate the AI model's behavior, effectively injecting instructions through tool metadata.

  • Excessive permissions. Without fine-grained authorization, an MCP server might expose tools that grant broader access than intended.

  • Data exfiltration. AI agents connected to multiple MCP servers could inadvertently leak data between systems if access boundaries aren't enforced.

  • Prompt injection via tool results. A compromised tool could return results containing instructions that manipulate the AI model's subsequent behavior.

Best Practices for Securing MCP

Security teams evaluating MCP deployments should prioritize:

  • Least-privilege access. Scope tool access to the minimum permissions required for each agent and user.

  • Audit logging. Log every tool call, including who initiated it, what parameters were passed, and what was returned. Real-time observability across agent interactions is essential for detecting anomalous behavior before it causes damage.

  • Server verification. Only connect to MCP servers from trusted sources, and validate their behavior before granting production access.

  • Centralized governance. Use a control plane that gives security and operations teams visibility and policy enforcement across all MCP connections.
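To make the audit-logging practice concrete, here is a minimal sketch. The `call_tool` wrapper and the `jira.create_ticket` tool are hypothetical, and a real deployment would write to a durable, tamper-evident store rather than an in-memory list:

```python
import time
import uuid

AUDIT_LOG = []  # stand-in for a durable, append-only audit store

def call_tool(tool, params, handler, *, initiator):
    """Invoke a tool handler, recording who called what, with which
    parameters, and what came back, for every call, success or failure."""
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "initiator": initiator,
        "tool": tool,
        "params": params,
    }
    try:
        entry["result"] = handler(**params)
        return entry["result"]
    except Exception as exc:
        entry["error"] = repr(exc)
        raise
    finally:
        AUDIT_LOG.append(entry)  # logged even if the handler raised

# Hypothetical tool handler wired through the wrapper
result = call_tool(
    "jira.create_ticket",
    {"summary": "Login page 500s"},
    lambda summary: {"key": "OPS-1", "summary": summary},
    initiator="alice@example.com",
)
print(AUDIT_LOG[-1]["tool"], AUDIT_LOG[-1]["initiator"])
```

The key design point is the `finally` block: the call is recorded whether the tool succeeds or raises, so the audit trail has no gaps an attacker can hide in.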

Agen.co was built specifically to address this governance gap. It sits between AI agents and the tools they connect to, enforcing identity-aware access control, fine-grained tool authorization, data governance policies, and full audit trails across every MCP interaction. For enterprises scaling AI agent deployments, a governance layer for MCP is not optional.

Real-World MCP Use Cases

MCP is already powering production workflows across a range of industries and use cases:

  • AI coding assistants. Tools like Claude Code and Cursor use MCP to connect to Git repositories, file systems, and CI/CD pipelines. Developers can ask their AI assistant to read code, create branches, run tests, and deploy changes, all through MCP.

  • Enterprise chatbots. Organizations are building internal chatbots that connect to CRMs, databases, and ticketing systems through MCP servers. An employee can ask a question and the agent pulls live data from Salesforce, queries a Postgres database, or creates a Jira ticket, all in one conversation.

  • Workflow automation. AI agents use MCP to orchestrate multi-step workflows across tools. For example, an agent could read a customer support ticket, look up the customer's account in a CRM, check their recent orders in a database, and draft a response, all by calling different MCP tools in sequence.

  • Data analysis. MCP makes it possible for AI applications to connect to multiple data sources across an organization and run queries on behalf of users. Business analysts can use natural language to explore data without writing SQL or switching between dashboards.

  • Design-to-code. Some development teams use MCP to connect AI models to design tools like Figma, enabling the AI to read design files and generate corresponding frontend code.
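The workflow-automation pattern above can be sketched in a few lines. The tool names and the `call_tool` stub below are hypothetical stand-ins for real MCP tool calls; the point is how each result feeds the next step:

```python
def call_tool(name, **params):
    """Stand-in for an MCP tool call; each entry fakes one backend system."""
    fake = {
        "support.read_ticket": {"customer_id": "c-42", "issue": "order never arrived"},
        "crm.lookup_account": {"name": "Acme Co", "tier": "enterprise"},
        "db.recent_orders": [{"order_id": "o-9", "status": "lost in transit"}],
    }
    return fake[name]

# The agent chains tool calls, feeding each result into the next step.
ticket = call_tool("support.read_ticket", ticket_id="t-1")
account = call_tool("crm.lookup_account", customer_id=ticket["customer_id"])
orders = call_tool("db.recent_orders", customer_id=ticket["customer_id"])

draft = (
    f"Hi {account['name']}, we're sorry to hear that '{ticket['issue']}'. "
    f"Order {orders[0]['order_id']} is marked '{orders[0]['status']}'; "
    "a replacement is on its way."
)
print(draft)
```

In a real deployment the model, not hand-written code, decides which tool to call next; MCP's contribution is that all three backends are reachable through one uniform interface.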

MCP Ecosystem and Adoption

MCP adoption has accelerated rapidly since its release. What started as an Anthropic project has become an industry-wide standard.

Major platform support. Claude, ChatGPT, Gemini, Visual Studio Code, Cursor, Windsurf, Replit, and many other AI platforms now support MCP as clients. This means MCP servers built for one platform work across all of them.

Growing server ecosystem. There are now thousands of community-built MCP servers covering everything from cloud infrastructure (AWS, GCP, Azure) to productivity tools (Slack, Notion, Google Calendar) to developer tools (GitHub, Sentry, Datadog). The official MCP server registry catalogs many of these.

Enterprise adoption. Companies like Block, Salesforce, and Apollo have integrated MCP into their systems. Development tool companies including Zed, Replit, Codeium, and Sourcegraph have adopted MCP to enhance their platforms.

Industry backing. OpenAI, Google DeepMind, and Microsoft have all added MCP support to their platforms, signaling that MCP has crossed the threshold from a single-vendor initiative to a broadly accepted standard.

How to Get Started with MCP

Getting started with MCP depends on your role in the ecosystem.

If you want to use MCP tools as an end user: The fastest path is through an AI application that already supports MCP, like Claude Desktop, VS Code with Copilot, or Cursor. You can install pre-built MCP servers and immediately start using tools, resources, and prompts within your AI workflows.

If you want to build an MCP server: The official MCP SDKs are available for TypeScript, Python, Java, and other languages. The SDKs abstract away protocol-level details and let you focus on defining tools, resources, and prompts for your service.

A basic MCP server can be built in under 100 lines of code. You define the tools you want to expose, implement the handlers for each tool, and the SDK handles all the JSON-RPC communication, capability negotiation, and transport management.
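The SDKs hide the wire protocol, but the underlying messages are plain JSON-RPC 2.0. The hand-rolled dispatcher below is a sketch of the `tools/list` and `tools/call` exchange, not the official SDK: the method names come from the MCP specification, while the `add` tool and the error handling are simplified for illustration.

```python
import json

# One registered tool: name -> metadata plus a handler
TOOLS = {
    "add": {
        "description": "Add two numbers",
        "inputSchema": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
        },
        "handler": lambda a, b: a + b,
    },
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request to a result, MCP-style."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": [
            {"name": n, "description": t["description"], "inputSchema": t["inputSchema"]}
            for n, t in TOOLS.items()
        ]}
    elif method == "tools/call":
        value = TOOLS[params["name"]]["handler"](**params["arguments"])
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        raise ValueError(f"unhandled method: {method}")
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
               "params": {"name": "add", "arguments": {"a": 2, "b": 3}}})
print(json.dumps(resp))
```

With an SDK, the `TOOLS` table and the dispatch loop collapse into a decorated function per tool; the SDK also takes care of initialization, error responses, and the transport underneath.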

If you want to build an MCP client: The SDKs also support client development for teams building AI applications that need to connect to MCP servers. The client SDK manages connection lifecycle, tool discovery, and execution.

The Future of MCP

MCP is still evolving, and several developments are shaping its trajectory:

Remote server maturity. Early MCP adoption was heavily focused on local servers running on developer machines. The ecosystem is now shifting toward remote, cloud-hosted MCP servers that can serve entire organizations. This brings new requirements around authentication, multi-tenancy, and scalability.

Authentication standardization. The MCP specification is converging on OAuth 2.1 as the recommended authorization mechanism for remote servers. This is critical for enterprise adoption, where identity and access management are non-negotiable.

Governance infrastructure. As MCP deployments scale from individual developer tools to organization-wide AI agent networks, the need for centralized governance becomes urgent. Organizations need to control which agents can access which tools, enforce data policies, and maintain compliance. This is the layer that platforms like Agen.co are building, providing the security and governance infrastructure for AI agents that makes it safe to deploy MCP at enterprise scale.

Agent-to-agent communication. While MCP handles how AI applications connect to tools and data, complementary protocols like Google's Agent-to-Agent (A2A) protocol are emerging to handle communication between AI agents. MCP and A2A are expected to work together, with MCP providing tool connectivity and A2A enabling agent collaboration.

Expanding primitive types. The MCP specification continues to evolve, with experimental features like Tasks (for long-running operations) and Elicitation (for requesting user input) being added. These primitives are making MCP capable of handling increasingly complex, multi-step workflows.

Frequently Asked Questions About MCP

What does MCP stand for?
MCP stands for Model Context Protocol. It is an open-source standard for connecting AI applications to external tools, data sources, and services.

Who created MCP?
MCP was created by Anthropic and open-sourced in November 2024. It has since been adopted by major technology companies including OpenAI, Google, and Microsoft.

Is MCP free to use?
Yes. MCP is an open-source protocol released under a permissive license. The specification, SDKs, and reference implementations are all freely available.

What programming languages support MCP?
Official MCP SDKs are available for TypeScript, Python, Java, Kotlin, and C#. Community SDKs exist for additional languages including Go, Rust, Ruby, and Swift.

How is MCP different from an API?
An API is a specific interface for accessing a particular service. MCP is a protocol that standardizes how AI applications discover, connect to, and interact with many different services through a common interface. MCP servers often wrap existing APIs and expose them in a format that AI applications can use natively.

Is MCP secure?
MCP supports authentication via OAuth 2.1 and bearer tokens, but the protocol itself is only part of the security picture. Organizations deploying MCP at scale need governance infrastructure to manage access control, audit agent behavior, and enforce data policies across all MCP connections.

Can MCP and RAG work together?
Yes. MCP and RAG are complementary. RAG is a technique for improving response accuracy by retrieving relevant documents, while MCP is a protocol for connecting AI apps to external systems. An MCP server can use RAG internally, and an AI application can use both approaches simultaneously.

Empower your workforce with secure agents

© 2026 Agen™ | All rights reserved.