/

what-are-mcp-tools

What are MCP Tools? How They Work & How to Use Them

What are MCP Tools? How They Work & How to Use Them

MCP Tools: The Complete Guide to AI Agent Tool Integration

AI agents are only as capable as the tools they can reach. An agent that can reason but can't query a database, create a ticket, or send a message is just a chatbot with ambition.

MCP tools are how that gap gets closed. They are executable functions exposed by MCP servers through the Model Context Protocol — the open standard that gives AI agents a structured, secure way to interact with external systems. Instead of building custom integrations for every AI platform, you define your tools once and every MCP-compatible client can discover and invoke them.

The ecosystem has grown fast. Over 3,000 MCP servers are now indexed across public registries. The official MCP servers repository has surpassed 80,000 GitHub stars. And every major AI platform — Claude, ChatGPT, Gemini, GitHub Copilot, Cursor — now supports MCP natively.

This guide covers what MCP tools are, how they work at the protocol level, which tools are worth using, and how to build, secure, and deploy your own.

What Are MCP Tools?

MCP tools are executable functions that MCP servers expose to AI agents. When an AI model needs to take action — search a database, create a file, send a message, call an API — it invokes an MCP tool.

Each tool has a name, a description, and a schema that defines its inputs. The AI model reads the description, decides whether the tool is relevant to the user's request, and calls it with the appropriate parameters. The server executes the function and returns a result.

This is different from an AI model generating text about what it would do. An MCP tool call is real execution — a database query that runs, an API request that fires, a file that gets written.

The "USB-C for AI" Analogy

Before MCP, connecting an AI agent to an external system meant building a custom integration for each AI platform. If you wanted your tool to work with Claude, ChatGPT, and a custom agent framework, you were building three separate integrations that did the same thing.

MCP eliminates this M×N problem. You build one MCP server that exposes your tools, and any MCP-compatible client can discover and use them. One protocol, any tool, any AI platform — the same way USB-C standardized how devices connect regardless of manufacturer.

MCP Tools vs. Traditional API Integrations

Traditional API integrations require explicit coding for every connection. A developer writes the HTTP request, handles authentication, parses the response, and manages errors — all hardcoded for a specific use case.

MCP tools are discovered dynamically. The AI agent queries the server for available tools, reads their schemas, and decides at runtime which tools to call and how to use them. This makes the integration adaptive rather than fixed. When a server adds a new tool, clients pick it up automatically without any code changes.

The other critical difference is who makes the decision. With traditional integrations, the developer predetermines the workflow. With MCP, the AI model evaluates the user's intent and selects the right tool in context. This is what makes agentic workflows possible — the agent can chain together tools it has never been explicitly programmed to combine.

MCP Tools vs. Native LLM Function Calling

Most major LLMs support some form of function calling — you define a function schema, the model generates a structured call, and your application executes it. MCP tools may sound similar, but they solve a different problem.

Native function calling is application-scoped. You define functions within your application code, and the model can only call what you've registered in that session. There is no standard for discovering functions across servers, no protocol for a model to connect to external tool providers, and no way for tools to update themselves dynamically.

MCP tools are protocol-scoped. They live on external servers that any client can connect to. They support dynamic discovery (tools can be added, removed, or updated at runtime), pagination (for servers with hundreds of tools), and structured lifecycle management. MCP tools are also transport-agnostic — they work the same whether the server runs locally on your machine or remotely in the cloud.

Think of it this way: native function calling is how a model talks to your code. MCP is how a model talks to any tool, anywhere.

The Three MCP Primitives: Tools, Resources, and Prompts

MCP defines three server primitives, each controlled by a different actor:

Primitive

What It Does

Who Controls It

Example

Tools

Executable functions that perform actions

The AI model

Search flights, send a message, create a record

Resources

Read-only data sources that provide context

The application

Document contents, database schemas, API documentation

Prompts

Structured interaction templates

The user

"Summarize my meetings," "Draft an email"

Tools are the action layer. They are model-controlled — the AI decides when to call them based on the conversation context. Resources provide passive context that helps the model make better decisions. Prompts are user-invoked templates that guide how the model uses tools and resources together.

This separation matters architecturally. Tools have side effects (they can change state in external systems), so they require stronger security controls, human approval mechanisms, and audit logging than read-only resources.

How MCP Tools Work

MCP tools follow a structured lifecycle: the client discovers what tools are available, the AI model selects the right tool, the client invokes it, and the server returns a result. Every step runs over JSON-RPC 2.0, the same lightweight protocol used by language servers and blockchain nodes.

Client-Server Architecture Overview

An MCP deployment has three layers:

  • MCP Host — The AI application the user interacts with (Claude Desktop, VS Code, a custom agent). The host coordinates one or more MCP client connections.

  • MCP Client — A protocol-level component within the host that maintains a dedicated 1:1 connection to a single MCP server. Each server gets its own client instance.

  • MCP Server — A lightweight program that exposes tools, resources, and prompts. The server connects to external systems (databases, APIs, SaaS platforms) on behalf of the AI.

The client and server negotiate capabilities during initialization. The server declares which primitives it supports (tools, resources, prompts) and whether it will emit change notifications. The client then discovers the available tools and registers them with the AI model.

Tool Discovery: The tools/list Protocol

Before an AI model can use a tool, the client must discover it. This happens through the tools/list message:

// Client request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}

// Server response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "inputSchema": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City name or zip code"
            }
          },
          "required": ["location"]
        }
      }
    ]
  }
}

The response contains an array of tool definitions, each with a name, description, and JSON Schema for inputs. The AI model reads these descriptions to understand what each tool does and when to use it — which is why writing clear, specific descriptions is one of the most impactful things you can do when building MCP tools.

For servers with many tools, tools/list supports cursor-based pagination. The response includes a nextCursor field, and the client can pass it in subsequent requests to fetch additional pages.

Tool Invocation: The tools/call Protocol

When the AI model decides to use a tool, the client sends a tools/call message:

// Client request
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "location": "New York"
    }
  }
}

// Server response
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
      }
    ],
    "isError": false
  }
}

The result is returned as an array of content items. The AI model processes this content and incorporates it into its response to the user. If the tool execution fails, the server returns the result with isError: true so the model can acknowledge the failure and potentially try a different approach.

Dynamic Updates: list_changed Notifications

MCP tools aren't static. Servers can add, remove, or modify tools at runtime and notify connected clients:

{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}

When a client receives this notification, it re-fetches the tool list and updates the AI model's available capabilities. This is how MCP supports dynamic environments — a server managing Kubernetes resources might expose different tools based on what clusters are connected, or a SaaS integration might add tools when new features are enabled.

The server must declare listChanged: true in its capabilities during initialization for clients to expect these notifications.

Visual Architecture: Host → Client → Server → External Service

┌──────────────────────────────────────┐
MCP Host (AI App)          

┌──────────┐    ┌──────────┐       
MCP      MCP      
Client A Client B 
└────┬─────┘    └────┬─────┘       

└───────┼───────────────┼─────────────┘
        
   ┌────▼─────┐    ┌────▼─────┐
   MCP      MCP      
   Server A Server B 
   └────┬─────┘    └────┬─────┘
        
   ┌────▼─────┐    ┌────▼─────┐
   Database SaaS API 
   └──────────┘    └──────────┘

Each MCP client maintains a dedicated connection to a single server. The host can connect to multiple servers simultaneously, giving the AI model access to tools across different systems within a single conversation.

MCP Tool Definition Structure

Every MCP tool is defined by a set of fields that tell AI models what the tool does, what inputs it expects, and what outputs it returns.

Name, Description, and Input Schema

The three required fields for every tool:

  • name — A unique identifier for the tool within the server. Must be a string with no spaces (e.g., get_weather, create_issue, search_documents).

  • description — A human-readable explanation of what the tool does. This is what the AI model reads to decide whether to use the tool, so clarity matters more than brevity.

  • inputSchema — A JSON Schema object that defines the parameters the tool accepts, including types, descriptions, and which parameters are required.

{
  "name": "search_issues",
  "description": "Search for issues in a project tracker by keyword, status, or assignee. Returns matching issues with their title, status, and URL.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search keyword to match against issue titles and descriptions"
      },
      "status": {
        "type": "string",
        "enum": ["open", "closed", "in_progress"],
        "description": "Filter by issue status"
      },
      "assignee": {
        "type": "string",
        "description": "Filter by assigned user's email or username"
      }
    },
    "required": ["query"]
  }
}

The inputSchema uses standard JSON Schema, which means you can define string enums, nested objects, arrays, numeric ranges, pattern validation, and conditional requirements. The AI model uses the schema to generate valid arguments, and the server validates incoming parameters against it.

Output Schema and Structured Content

The latest version of the MCP specification (2025-06-18) introduced outputSchema — a JSON Schema that defines the structure of the tool's return value:

{
  "name": "get_weather_data",
  "description": "Get current weather data for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": { "type": "string" }
    },
    "required": ["location"]
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "temperature": { "type": "number" },
      "conditions": { "type": "string" },
      "humidity": { "type": "number" }
    },
    "required": ["temperature", "conditions", "humidity"]
  }
}

When an outputSchema is defined, the server returns structured data in a structuredContent field alongside the traditional content array. This gives programmatic consumers typed, validated output while maintaining backwards compatibility with clients that only read unstructured content.

Tool Annotations and Metadata

Tools can include annotations that describe behavioral characteristics:

  • readOnlyHint — The tool doesn't modify any state (default: false)

  • destructiveHint — The tool may perform destructive operations like deletes (default: true)

  • idempotentHint — Calling the tool multiple times with the same arguments has no additional effect (default: false)

  • openWorldHint — The tool interacts with external entities beyond the server's control (default: true)

These are hints, not guarantees. Clients should treat annotations as untrusted unless they come from a verified server. But they are useful for UX — a client might show a confirmation dialog before executing a tool marked as destructive, or batch multiple calls to an idempotent tool without concern.

Result Types: Text, Image, Audio, Resource Links, Embedded Resources

Tool results are returned as an array of typed content items, supporting multiple formats:

Type

Use Case

Example

Text

Plain text responses

Query results, status messages, generated content

Image

Visual content (base64-encoded)

Charts, screenshots, diagrams

Audio

Audio content (base64-encoded)

Generated speech, audio clips

Resource Link

Reference to a resource by URI

Link to a file, document, or API endpoint

Embedded Resource

Inline resource content

Full file contents included in the response

Each content item can also include annotations that specify the intended audience (user, assistant, or both) and a priority value (0 to 1) to help clients decide how to present the results.

Transport Protocols for MCP Tools

The transport layer determines how MCP clients and servers exchange messages. MCP currently defines two standard transports, with the older HTTP+SSE transport deprecated in favor of Streamable HTTP.

Stdio (Local Servers)

The simplest transport. The MCP client launches the server as a subprocess and communicates through standard input/output:

  • Client → Server: JSON-RPC messages written to the server's stdin

  • Server → Client: JSON-RPC responses written to stdout

  • Logging: The server can write diagnostic logs to stderr

Messages are newline-delimited and must not contain embedded newlines. The connection lifecycle is tied to the subprocess — when the client terminates the process, the connection ends.

Stdio is ideal for local integrations: CLI tools, desktop applications, IDE extensions, and development environments. It requires no networking, no authentication configuration, and no port management. The official MCP specification recommends that clients support stdio whenever possible.

The limitation is that stdio only works when the client can spawn the server as a child process on the same machine. For remote or multi-client scenarios, you need HTTP.

HTTP with Server-Sent Events (SSE) — Deprecated

The original remote transport for MCP used two separate HTTP endpoints: one for the client to send messages (POST) and one for the server to push messages (SSE via GET). The server would send an endpoint event on the SSE connection telling the client where to POST its messages.

This transport is deprecated as of the 2025-06-18 specification and replaced by Streamable HTTP. Existing servers using HTTP+SSE should plan to migrate, though clients may maintain backwards compatibility by falling back to the old transport if the Streamable HTTP handshake fails.

Streamable HTTP — The Recommended Modern Transport

Streamable HTTP consolidates everything into a single endpoint. The client sends all messages as HTTP POST requests, and the server can respond with either:

  • Plain JSON (Content-Type: application/json) — for simple request/response patterns

  • SSE stream (Content-Type: text/event-stream) — for streaming responses that may include intermediate notifications before the final result

The client can also open a persistent SSE connection via GET to receive server-initiated messages (like tools/list_changed notifications).

Key capabilities that the older transport lacked:

  • Session management — The server can assign a session ID (Mcp-Session-Id header) during initialization, enabling stateful interactions across multiple requests.

  • Resumability — SSE events can include IDs. If a connection drops, the client reconnects with a Last-Event-ID header, and the server replays missed messages.

  • Flexible response modes — Simple tools can return plain JSON without SSE overhead. Complex tools can stream progress updates before delivering the final result.

Streamable HTTP is the right choice for remote servers, cloud deployments, multi-tenant platforms, and any scenario where the server needs to handle multiple clients simultaneously.

When to Use Which Transport

Scenario

Recommended Transport

Local CLI tools and IDE extensions

Stdio

Desktop applications (same machine)

Stdio

Remote/cloud-hosted MCP servers

Streamable HTTP

Multi-tenant SaaS integrations

Streamable HTTP

Development and testing

Stdio (simplest setup)

Production enterprise deployments

Streamable HTTP (with session management)

Popular MCP Tools and Servers

The MCP ecosystem has grown to over 3,000 indexed servers across multiple registries. Here are the most widely used ones, organized by category.

Developer Tools

  • GitHub MCP Server — Manage commits, pull requests, issues, and branches through AI agents. One of the most widely adopted MCP servers.

  • Git MCP Server — Read, search, and manipulate Git repositories directly. Useful for code analysis workflows.

  • Filesystem MCP Server — Secure file operations with configurable access controls. Supports read, write, move, and search with directory-level permissions.

  • Docker MCP Server — Build, run, and inspect containers. Integrates container management directly into AI workflows.

  • Playwright MCP Server — Browser automation for testing, scraping, and end-to-end workflows. The most-starred MCP server by GitHub activity.

Productivity Tools

Infrastructure and Observability

  • New Relic MCP Server — 35 tools for entity management, NRQL queries, alerting, incident response, and performance analytics. Supports dynamic tool filtering via tag-based headers.

  • AWS MCP Servers — Separate servers for EC2, S3, IAM, CloudWatch, CDK, and cost analysis.

  • Cloudflare MCP Server — Manage Workers, KV stores, R2 buckets, D1 databases, and DNS records.

  • Kubernetes MCP Servers — Cluster management, pod inspection, and deployment automation.

Browser and Testing

Data and AI

  • PostgreSQL MCP Server — Natural-language SQL queries against Postgres databases.

  • SQLite MCP Server — Lightweight database operations for local data analysis.

  • Supabase MCP Server — 20+ tools covering table design, migrations, SQL queries, and TypeScript type generation.

  • Vector database serversChroma, Qdrant, and Milvus all offer MCP server implementations for semantic search and retrieval.

Where to Find MCP Servers

Registry

Size

Description

registry.modelcontextprotocol.io

Official

Anthropic's official MCP server registry

MCP.so

3,000+

Comprehensive index with quality ratings

Smithery

2,200+

Automated installation guides and server discovery

mcp-awesome.com

1,200+

Quality-verified directory with tutorials

awesome-mcp-servers

Community

Community-curated GitHub list (3,700+ stars)

Agen Marketplace

100+ apps

Enterprise-governed MCP connections with identity-aware access

Using MCP Tools Across Platforms

MCP's value comes from universality — the same server works across every compatible platform. Here's how MCP tools integrate with the major AI development environments.

Claude Desktop

Claude Desktop was the first AI application to support MCP natively, announced alongside the original MCP protocol launch in November 2024. Configuration is done through a JSON file (claude_desktop_config.json) where you specify MCP servers and their launch commands:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"]
    }
  }
}

Once configured, Claude can discover and invoke all tools exposed by the connected servers. Claude Desktop supports local MCP servers via stdio, and Claude's paid plans support remote MCP servers via Streamable HTTP.

VS Code and GitHub Copilot

VS Code integrates MCP through a .vscode/mcp.json file in your workspace or a global mcp.json in your user settings. The VS Code MCP documentation covers configuration, sandboxing, trust management, and enterprise governance:

{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@anthropic-ai/mcp-playwright"]
    }
  }
}

VS Code supports server installation from the Extensions Gallery and CLI, sandboxes MCP servers on macOS and Linux, and offers centralized access control through GitHub organization policies for enterprise teams.

Cursor

Cursor supports MCP servers through its settings UI. Navigate to Settings → MCP and add server configurations. Cursor uses MCP tools within its AI-powered coding features, allowing agents to interact with external services during code generation and editing workflows.

Google Agent Development Kit (ADK)

Google's ADK integrates MCP through the McpToolset class, which wraps MCP server tools so they appear as native ADK tools to an LlmAgent:

from google.adk.tools.mcp import McpToolset, StdioServerParameters

tools, cleanup = await McpToolset.from_server(
    connection_params=StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    )
)
agent = LlmAgent(name="fs_agent", tools=tools)

ADK supports both patterns: consuming MCP server tools in ADK agents, and exposing ADK agents as MCP servers. Google provides deployment guidance for Cloud Run, GKE, and Vertex AI.

Microsoft Agent Framework

Microsoft's Agent Framework (formerly Semantic Kernel) provides MCP tool integration in both C# and Python. Three transport types are supported:

  • MCPStdioTool — Local servers via standard I/O

  • MCPStreamableHTTPTool — Remote servers via Streamable HTTP

  • MCPWebsocketTool — Real-time bidirectional communication via WebSocket

The framework also supports exposing an agent as an MCP server, allowing other MCP clients to invoke the agent's capabilities as tools.

LangChain and LangGraph

LangChain provides MCP tool adapters that convert MCP server tools into LangChain-compatible tools. This lets you use any MCP server within LangChain agents, chains, and LangGraph workflows without modifying the MCP server itself.

Building Your Own MCP Tools

Building an MCP server is how you make your application's capabilities available to every AI agent in the ecosystem. The official MCP SDKs handle the protocol plumbing — you focus on defining tools and implementing their logic.

Defining Tool Schema

Start with the tool definition. A good schema has three qualities: a clear name, a description that explains when to use the tool (not just what it does), and input parameters with descriptive fields.

{
  "name": "create_support_ticket",
  "description": "Create a new support ticket when a user reports a problem that needs tracking. Use this when the issue cannot be resolved immediately and requires follow-up from the support team.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": {
        "type": "string",
        "description": "Short summary of the issue (max 200 characters)"
      },
      "description": {
        "type": "string",
        "description": "Detailed description of the problem, including steps to reproduce"
      },
      "priority": {
        "type": "string",
        "enum": ["low", "medium", "high", "critical"],
        "description": "Issue severity: low (cosmetic), medium (degraded functionality), high (major feature broken), critical (system down)"
      }
    },
    "required": ["title", "description"]
  }
}

The description should tell the AI model when to reach for this tool, not just restate the name. Enum descriptions help the model select the right value without guessing.

Implementing in Python

The MCP Python SDK provides decorators for defining tools:

from mcp.server import Server
from mcp.types import TextContent
import mcp.server.stdio

server = Server("my-server")

@server.tool()
async def search_documents(query: str, limit: int = 10) -> list[TextContent]:
    """Search the document index by keyword.

    Args:
        query: Search terms to match against document titles and content
        limit: Maximum number of results to return (default: 10)
    """
    results = await document_index.search(query, max_results=limit)
    formatted = "\n".join(
        f"- {doc.title}: {doc.snippet}" for doc in results
    )
    return [TextContent(type="text", text=formatted)]

async def main():
    async with mcp.server.stdio.stdio_server() as (read, write):
        await server.run(read, write)

The SDK generates the JSON Schema from the function signature and docstring. Type hints become schema types, and the docstring's Args section becomes parameter descriptions.

Implementing in TypeScript

The MCP TypeScript SDK follows a similar pattern:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

server.tool(
  "search_documents",
  "Search the document index by keyword",
  {
    query: z.string().describe("Search terms to match against document titles and content"),
    limit: z.number().default(10).describe("Maximum results to return"),
  },
  async ({ query, limit }) => {
    const results = await documentIndex.search(query, limit);
    return {
      content: [
        {
          type: "text",
          text: results.map((doc) => `- ${doc.title}: ${doc.snippet}`).join("\n"),
        },
      ],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

The TypeScript SDK uses Zod for schema definition, which provides runtime validation and automatic JSON Schema generation.

Testing MCP Tools

The mcptools CLI (github.com/f/mcptools) is the fastest way to test MCP servers during development:

# List all tools on a server
mcp tools npx @modelcontextprotocol/server-filesystem /tmp

# Call a specific tool
mcp call search_documents '{"query": "authentication"}' npx my-server

# Launch an interactive shell
mcp shell npx my-server

# Start a web-based GUI for visual testing

The CLI supports all three transports (stdio, SSE, Streamable HTTP), multiple output formats (table, JSON, pretty JSON), and can scan your existing MCP configurations across Claude Desktop, VS Code, Cursor, and Windsurf.

For automated testing, write integration tests that connect to your server, list tools, call them with known inputs, and validate the outputs against expected results. Test edge cases: invalid parameters, missing required fields, timeouts, and error conditions.

Exposing an Agent as an MCP Server

The reverse pattern is also valuable — wrapping an AI agent's capabilities as an MCP server so other clients can use it as a tool. Both Google ADK and Microsoft's Agent Framework document this pattern.

This creates composable agent architectures where one agent can delegate specialized tasks to another. A planning agent might call a research agent's tools for information gathering, then call a writing agent's tools for content generation — all through MCP. For SaaS teams looking to expose their product to AI-driven usage, this pattern enables rapid ecosystem integration without rebuilding core infrastructure.

MCP Tool Security Best Practices

MCP tools execute real actions on production systems. A misconfigured or exploited tool can exfiltrate data, modify records, or provide unauthorized access. Security isn't an afterthought — it's foundational to any MCP deployment that handles real data.

For a deep dive into the full threat landscape, see our complete guide to MCP security.

Input Validation

Every tool must validate its inputs before execution. The JSON Schema in the tool definition provides structural validation (types, required fields, enums), but it's not enough on its own.

Server-side validation should include:

  • Parameter sanitization — Prevent command injection by escaping or rejecting shell metacharacters in string inputs.

  • Range enforcement — Bound numeric inputs to prevent resource exhaustion (e.g., a limit parameter that accepts 999999 could trigger a denial-of-service).

  • Path traversal prevention — If a tool accepts file paths, validate them against an allowlist of directories. Never trust client-provided paths without normalization and boundary checks.

Access Control and Authentication

MCP servers that connect to sensitive systems need authentication and authorization at multiple levels:

  • Server-level authentication — Verify the identity of the client connecting to the MCP server. For remote servers using Streamable HTTP, this means OAuth 2.0, API keys, or mutual TLS.

  • Tool-level authorization — Not every authenticated client should have access to every tool. Implement role-based access control (RBAC) to restrict which tools are available to which users or agents.

  • Resource-level permissions — A tool that queries a database should respect the user's data access permissions, not the server's. Delegate authentication tokens from the end user, not from the server's service account.

For a detailed look at implementing access control for AI agent gateways across your deployment, see our dedicated guide.

Sandboxing and Isolation

MCP servers should run with the minimum permissions necessary:

  • Filesystem restrictions — Limit file access to specific directories. The official Filesystem MCP server accepts an allowlist of paths at startup.

  • Network isolation — Restrict outbound network access to known endpoints. A tool designed to query your internal API shouldn't be able to reach arbitrary external servers.

  • Container isolation — Run MCP servers in containers with resource limits (CPU, memory, network) to prevent one server from affecting others.

  • Process sandboxing — VS Code sandboxes MCP servers on macOS and Linux by default, restricting their access to the host system.

Human-in-the-Loop Approval

The MCP specification explicitly requires that implementations provide a mechanism for human oversight. Users should be able to:

  • See which tools are available to the AI model

  • Receive confirmation prompts before sensitive tool calls execute

  • Review tool inputs before they are sent to the server

  • Deny any tool invocation they don't approve

This is especially important for tools annotated as destructive (destructiveHint: true) or tools that access external systems (openWorldHint: true). The AI model should not autonomously delete records, send emails, or modify infrastructure without explicit user consent. Organizations governing AI agents across enterprise applications need systematic approval workflows, not ad-hoc permission prompts.

Trust and Verification

Third-party MCP servers are code that runs on your infrastructure with access to your data. Treat them with the same scrutiny you'd apply to any dependency:

  • Source verification — Only install MCP servers from trusted sources. Check the repository's maintenance history, contributor reputation, and security practices.

  • Code review — For sensitive deployments, review the server's source code. Pay attention to what network calls it makes, what data it logs, and how it handles credentials.

  • Update monitoring — MCP servers can be updated without notification. Pin versions in your configuration and review changelogs before upgrading.

  • Tool description integrity — A compromised server can change its tool descriptions to manipulate AI model behavior (a form of prompt injection). Monitor tool manifests for unexpected changes.

MCP Tool Error Handling

MCP separates errors into two categories, each handled differently.

Protocol Errors vs. Tool Execution Errors

Protocol errors use standard JSON-RPC error codes. These occur when the request itself is malformed — an unknown tool name, invalid parameters, or a server-side crash:

{
  "jsonrpc": "2.0",
  "id": 3,
  "error": {
    "code": -32602,
    "message": "Unknown tool: invalid_tool_name"
  }
}

Tool execution errors are reported inside a normal result with isError: true. These occur when the tool runs but fails — an API rate limit, a network timeout, a business logic error:

{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Failed to fetch weather data: API rate limit exceeded. Try again in 60 seconds."
      }
    ],
    "isError": true
  }
}

The distinction matters because the AI model sees tool execution errors but not protocol errors. When a tool returns isError: true, the model can acknowledge the failure, explain it to the user, or try a different approach. Protocol errors are handled by the client infrastructure and typically surface as generic error messages.

Best Practices for LLM-Consumable Errors

When returning tool execution errors:

  • Be specific — "API rate limit exceeded, try again in 60 seconds" is actionable. "Request failed" is not.

  • Include recovery hints — Tell the model what to do next. "No results found. Try broadening the search terms or removing filters."

  • Avoid stack traces — The AI model doesn't need a traceback. Return a clear error message, and log the technical details server-side.

  • Timeout handling — Set reasonable timeouts for external calls and return descriptive timeout errors rather than letting the connection hang.

Deploying MCP Tools in Production

Moving MCP servers from development to production introduces requirements around reliability, scalability, and governance that don't exist when testing locally.

Containerized Deployment

Package your MCP server as a Docker container for consistent, reproducible deployments:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
CMD ["node", "server.js"]

For stdio-based servers, the client launches the container as a subprocess. For Streamable HTTP servers, deploy the container behind a reverse proxy or load balancer with TLS termination.

Set resource limits (CPU, memory) to prevent a single server from consuming unbounded resources. Use health checks to detect and restart failed servers automatically.

Remote Server Deployment

Streamable HTTP servers deploy like any HTTP service. Key considerations:

  • TLS everywhere — All remote MCP connections should use HTTPS. Never run Streamable HTTP over plaintext in production.

  • Authentication — Implement OAuth 2.0 or API key authentication at the transport level. Validate credentials before processing any MCP messages.

  • Session affinity — If your server uses session management (Mcp-Session-Id), configure your load balancer for sticky sessions so requests from the same client reach the same server instance.

  • Connection management — Set appropriate timeouts for idle SSE connections. Clean up session state when clients disconnect or sessions expire.

Sidecar Patterns

In Kubernetes environments, MCP servers can run as sidecar containers alongside the main application:

  • The sidecar handles MCP protocol communication

  • The main container provides the business logic

  • They communicate over localhost, minimizing network latency

  • Each can be scaled, updated, and monitored independently

Google's ADK documentation covers sidecar deployment on GKE with specific guidance for production-ready configurations.

Enterprise Governance

At enterprise scale, individual MCP server configurations don't scale. Organizations need:

  • Centralized access control — Define which teams, roles, and applications can access which MCP servers and tools. VS Code supports this through GitHub organization policies for centralized MCP governance.

  • Audit logging — Every tool invocation should be logged with the user identity, tool name, parameters, result, and timestamp. These logs are critical for compliance, incident investigation, and usage analytics.

  • Configuration management — Manage MCP server configurations centrally rather than per-developer. Use infrastructure-as-code to version and deploy server configurations consistently.

  • Tool lifecycle management — Track which tools are in use, monitor for deprecated tools, and manage upgrades across the organization.

The Role of an MCP Gateway for Tool Management

As MCP adoption scales beyond individual developers to team and organizational deployments, a pattern emerges: every team configures their own MCP servers, each with its own security posture, access controls, and monitoring blind spots. This is the same fragmentation problem MCP was designed to solve for AI integrations — now replayed at the governance layer.

Why Direct MCP Server Access Creates Risk at Scale

When each AI agent connects directly to MCP servers:

  • Access control is per-server — Every server implements its own authentication and authorization (or doesn't). There's no unified view of who can invoke which tools.

  • No cross-server audit trail — Tool invocations are logged (if at all) in each server's own format. Correlating activity across servers for incident investigation requires manual log aggregation. This is the governance gap that widens as agent adoption scales.

  • PII and sensitive data leak through tool results — A tool that returns customer records sends that data directly to the AI model. There's no interception point to redact sensitive fields.

  • Policy enforcement requires code changes — Blocking a specific tool or adding a rate limit means modifying the MCP server itself. This doesn't scale when you're managing dozens of servers from different vendors.

Gateway-Level Enforcement

An MCP gateway sits between AI agents and MCP servers, providing a centralized control plane for:

  • RBAC and fine-grained authorization — Define policies that control which users, roles, and agents can access which tools on which servers. Apply policies consistently without modifying any MCP server code.

  • PII redaction — Intercept tool results and automatically redact or mask sensitive data fields before they reach the AI model. This addresses one of the most significant compliance risks in MCP deployments.

  • Audit trails — Log every tool invocation with full context — user identity, agent identity, tool name, parameters, response, and timing — in a centralized, queryable format.

  • Rate limiting and abuse prevention — Enforce rate limits per user, per tool, and per server to prevent resource exhaustion and detect anomalous behavior patterns.

How Agen Provides a Gateway Layer for Every MCP Tool Call

Agen is the identity-aware gateway that sits between AI agents and your MCP servers. Every tool call passes through Agen, where it is authenticated against user identity, authorized against fine-grained policies, and logged for compliance.

Instead of configuring security server-by-server, you define policies once in Agen and they apply to every MCP tool call across your entire deployment. This is how enterprise teams move from experimental MCP usage to production-grade AI agent infrastructure — without rebuilding their security posture for each new server.

Policy enforcement without code changes. Full visibility without log aggregation. Identity-aware access control without per-server configuration.

Frequently Asked Questions

What are MCP tools?

MCP tools are executable functions exposed by MCP servers that AI agents can discover and invoke. They enable AI models to perform real actions — querying databases, calling APIs, managing files, sending messages — through a standardized protocol rather than custom integrations.

What is the difference between MCP tools and APIs?

APIs require explicit integration code for each connection. MCP tools are discovered dynamically by AI agents through a standard protocol, and the AI model decides when and how to use them based on context. MCP provides a universal interface layer on top of existing APIs.

What is the difference between MCP tools and LLM function calling?

LLM function calling is scoped to a single application session — you define functions in your code and the model can call them. MCP tools live on external servers that any compatible client can connect to, support dynamic discovery and updates, and work across transport protocols. Function calling is how a model talks to your code; MCP is how a model talks to any tool, anywhere.

What are the most popular MCP servers?

The most popular by GitHub activity include Playwright (browser automation), GitHub (repository management), Filesystem (file operations), Notion (document management), PostgreSQL (database queries), and Slack (messaging). The official MCP servers repository has over 80,000 stars.

How do I build my own MCP tool?

Use the official MCP SDKs for Python or TypeScript. Define your tool schema (name, description, JSON Schema inputs), implement the handler function, and connect the server to a transport (stdio for local, Streamable HTTP for remote). The SDKs handle protocol negotiation and message formatting.

How do I secure MCP tools?

Validate all tool inputs server-side, implement authentication and role-based access control, sandbox servers with minimal permissions, enable human-in-the-loop approval for destructive operations, and vet third-party servers before deploying them. For enterprise deployments, use an MCP gateway for centralized policy enforcement and audit logging.

Can I use MCP tools with ChatGPT?

Yes. OpenAI has adopted MCP, and ChatGPT supports MCP server connections. The same MCP servers that work with Claude, VS Code, and Cursor also work with ChatGPT — this is the core value of having a universal protocol.

What transport protocol should I use for MCP?

Use stdio for local servers (CLI tools, IDE extensions, desktop apps) — it's the simplest option with no networking required. Use Streamable HTTP for remote servers, cloud deployments, and multi-client scenarios. The older HTTP+SSE transport is deprecated and should be migrated to Streamable HTTP.

MCP Tools: The Complete Guide to AI Agent Tool Integration

AI agents are only as capable as the tools they can reach. An agent that can reason but can't query a database, create a ticket, or send a message is just a chatbot with ambition.

MCP tools are how that gap gets closed. They are executable functions exposed by MCP servers through the Model Context Protocol — the open standard that gives AI agents a structured, secure way to interact with external systems. Instead of building custom integrations for every AI platform, you define your tools once and every MCP-compatible client can discover and invoke them.

The ecosystem has grown fast. Over 3,000 MCP servers are now indexed across public registries. The official MCP servers repository has surpassed 80,000 GitHub stars. And every major AI platform — Claude, ChatGPT, Gemini, GitHub Copilot, Cursor — now supports MCP natively.

This guide covers what MCP tools are, how they work at the protocol level, which tools are worth using, and how to build, secure, and deploy your own.

What Are MCP Tools?

MCP tools are executable functions that MCP servers expose to AI agents. When an AI model needs to take action — search a database, create a file, send a message, call an API — it invokes an MCP tool.

Each tool has a name, a description, and a schema that defines its inputs. The AI model reads the description, decides whether the tool is relevant to the user's request, and calls it with the appropriate parameters. The server executes the function and returns a result.

This is different from an AI model generating text about what it would do. An MCP tool call is real execution — a database query that runs, an API request that fires, a file that gets written.

The "USB-C for AI" Analogy

Before MCP, connecting an AI agent to an external system meant building a custom integration for each AI platform. If you wanted your tool to work with Claude, ChatGPT, and a custom agent framework, you were building three separate integrations that did the same thing.

MCP eliminates this M×N problem. You build one MCP server that exposes your tools, and any MCP-compatible client can discover and use them. One protocol, any tool, any AI platform — the same way USB-C standardized how devices connect regardless of manufacturer.

MCP Tools vs. Traditional API Integrations

Traditional API integrations require explicit coding for every connection. A developer writes the HTTP request, handles authentication, parses the response, and manages errors — all hardcoded for a specific use case.

MCP tools are discovered dynamically. The AI agent queries the server for available tools, reads their schemas, and decides at runtime which tools to call and how to use them. This makes the integration adaptive rather than fixed. When a server adds a new tool, clients pick it up automatically without any code changes.

The other critical difference is who makes the decision. With traditional integrations, the developer predetermines the workflow. With MCP, the AI model evaluates the user's intent and selects the right tool in context. This is what makes agentic workflows possible — the agent can chain together tools it has never been explicitly programmed to combine.

MCP Tools vs. Native LLM Function Calling

Most major LLMs support some form of function calling — you define a function schema, the model generates a structured call, and your application executes it. MCP tools may sound similar, but they solve a different problem.

Native function calling is application-scoped. You define functions within your application code, and the model can only call what you've registered in that session. There is no standard for discovering functions across servers, no protocol for a model to connect to external tool providers, and no way for tools to update themselves dynamically.

MCP tools are protocol-scoped. They live on external servers that any client can connect to. They support dynamic discovery (tools can be added, removed, or updated at runtime), pagination (for servers with hundreds of tools), and structured lifecycle management. MCP tools are also transport-agnostic — they work the same whether the server runs locally on your machine or remotely in the cloud.

Think of it this way: native function calling is how a model talks to your code. MCP is how a model talks to any tool, anywhere.

The Three MCP Primitives: Tools, Resources, and Prompts

MCP defines three server primitives, each controlled by a different actor:

Primitive

What It Does

Who Controls It

Example

Tools

Executable functions that perform actions

The AI model

Search flights, send a message, create a record

Resources

Read-only data sources that provide context

The application

Document contents, database schemas, API documentation

Prompts

Structured interaction templates

The user

"Summarize my meetings," "Draft an email"

Tools are the action layer. They are model-controlled — the AI decides when to call them based on the conversation context. Resources provide passive context that helps the model make better decisions. Prompts are user-invoked templates that guide how the model uses tools and resources together.

This separation matters architecturally. Tools have side effects (they can change state in external systems), so they require stronger security controls, human approval mechanisms, and audit logging than read-only resources.

How MCP Tools Work

MCP tools follow a structured lifecycle: the client discovers what tools are available, the AI model selects the right tool, the client invokes it, and the server returns a result. Every step runs over JSON-RPC 2.0, the same lightweight protocol used by language servers and blockchain nodes.

Client-Server Architecture Overview

An MCP deployment has three layers:

  • MCP Host — The AI application the user interacts with (Claude Desktop, VS Code, a custom agent). The host coordinates one or more MCP client connections.

  • MCP Client — A protocol-level component within the host that maintains a dedicated 1:1 connection to a single MCP server. Each server gets its own client instance.

  • MCP Server — A lightweight program that exposes tools, resources, and prompts. The server connects to external systems (databases, APIs, SaaS platforms) on behalf of the AI.

The client and server negotiate capabilities during initialization. The server declares which primitives it supports (tools, resources, prompts) and whether it will emit change notifications. The client then discovers the available tools and registers them with the AI model.

Tool Discovery: The tools/list Protocol

Before an AI model can use a tool, the client must discover it. This happens through the tools/list message:

// Client request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}

// Server response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "inputSchema": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City name or zip code"
            }
          },
          "required": ["location"]
        }
      }
    ]
  }
}

The response contains an array of tool definitions, each with a name, description, and JSON Schema for inputs. The AI model reads these descriptions to understand what each tool does and when to use it — which is why writing clear, specific descriptions is one of the most impactful things you can do when building MCP tools.

For servers with many tools, tools/list supports cursor-based pagination. The response includes a nextCursor field, and the client can pass it in subsequent requests to fetch additional pages.

Tool Invocation: The tools/call Protocol

When the AI model decides to use a tool, the client sends a tools/call message:

// Client request
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "location": "New York"
    }
  }
}

// Server response
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
      }
    ],
    "isError": false
  }
}

The result is returned as an array of content items. The AI model processes this content and incorporates it into its response to the user. If the tool execution fails, the server returns the result with isError: true so the model can acknowledge the failure and potentially try a different approach.

Dynamic Updates: list_changed Notifications

MCP tools aren't static. Servers can add, remove, or modify tools at runtime and notify connected clients:

{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}

When a client receives this notification, it re-fetches the tool list and updates the AI model's available capabilities. This is how MCP supports dynamic environments — a server managing Kubernetes resources might expose different tools based on what clusters are connected, or a SaaS integration might add tools when new features are enabled.

The server must declare listChanged: true in its capabilities during initialization for clients to expect these notifications.

Visual Architecture: Host → Client → Server → External Service

┌──────────────────────────────────────┐
MCP Host (AI App)          

┌──────────┐    ┌──────────┐       
MCP      MCP      
Client A Client B 
└────┬─────┘    └────┬─────┘       

└───────┼───────────────┼─────────────┘
        
   ┌────▼─────┐    ┌────▼─────┐
   MCP      MCP      
   Server A Server B 
   └────┬─────┘    └────┬─────┘
        
   ┌────▼─────┐    ┌────▼─────┐
   Database SaaS API 
   └──────────┘    └──────────┘

Each MCP client maintains a dedicated connection to a single server. The host can connect to multiple servers simultaneously, giving the AI model access to tools across different systems within a single conversation.

MCP Tool Definition Structure

Every MCP tool is defined by a set of fields that tell AI models what the tool does, what inputs it expects, and what outputs it returns.

Name, Description, and Input Schema

The three required fields for every tool:

  • name — A unique identifier for the tool within the server. Must be a string with no spaces (e.g., get_weather, create_issue, search_documents).

  • description — A human-readable explanation of what the tool does. This is what the AI model reads to decide whether to use the tool, so clarity matters more than brevity.

  • inputSchema — A JSON Schema object that defines the parameters the tool accepts, including types, descriptions, and which parameters are required.

{
  "name": "search_issues",
  "description": "Search for issues in a project tracker by keyword, status, or assignee. Returns matching issues with their title, status, and URL.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search keyword to match against issue titles and descriptions"
      },
      "status": {
        "type": "string",
        "enum": ["open", "closed", "in_progress"],
        "description": "Filter by issue status"
      },
      "assignee": {
        "type": "string",
        "description": "Filter by assigned user's email or username"
      }
    },
    "required": ["query"]
  }
}

The inputSchema uses standard JSON Schema, which means you can define string enums, nested objects, arrays, numeric ranges, pattern validation, and conditional requirements. The AI model uses the schema to generate valid arguments, and the server validates incoming parameters against it.

Output Schema and Structured Content

The latest version of the MCP specification (2025-06-18) introduced outputSchema — a JSON Schema that defines the structure of the tool's return value:

{
  "name": "get_weather_data",
  "description": "Get current weather data for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": { "type": "string" }
    },
    "required": ["location"]
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "temperature": { "type": "number" },
      "conditions": { "type": "string" },
      "humidity": { "type": "number" }
    },
    "required": ["temperature", "conditions", "humidity"]
  }
}

When an outputSchema is defined, the server returns structured data in a structuredContent field alongside the traditional content array. This gives programmatic consumers typed, validated output while maintaining backwards compatibility with clients that only read unstructured content.

Tool Annotations and Metadata

Tools can include annotations that describe behavioral characteristics:

  • readOnlyHint — The tool doesn't modify any state (default: false)

  • destructiveHint — The tool may perform destructive operations like deletes (default: true)

  • idempotentHint — Calling the tool multiple times with the same arguments has no additional effect (default: false)

  • openWorldHint — The tool interacts with external entities beyond the server's control (default: true)

These are hints, not guarantees. Clients should treat annotations as untrusted unless they come from a verified server. But they are useful for UX — a client might show a confirmation dialog before executing a tool marked as destructive, or batch multiple calls to an idempotent tool without concern.

Result Types: Text, Image, Audio, Resource Links, Embedded Resources

Tool results are returned as an array of typed content items, supporting multiple formats:

Type

Use Case

Example

Text

Plain text responses

Query results, status messages, generated content

Image

Visual content (base64-encoded)

Charts, screenshots, diagrams

Audio

Audio content (base64-encoded)

Generated speech, audio clips

Resource Link

Reference to a resource by URI

Link to a file, document, or API endpoint

Embedded Resource

Inline resource content

Full file contents included in the response

Each content item can also include annotations that specify the intended audience (user, assistant, or both) and a priority value (0 to 1) to help clients decide how to present the results.

Transport Protocols for MCP Tools

The transport layer determines how MCP clients and servers exchange messages. MCP currently defines two standard transports, with the older HTTP+SSE transport deprecated in favor of Streamable HTTP.

Stdio (Local Servers)

The simplest transport. The MCP client launches the server as a subprocess and communicates through standard input/output:

  • Client → Server: JSON-RPC messages written to the server's stdin

  • Server → Client: JSON-RPC responses written to stdout

  • Logging: The server can write diagnostic logs to stderr

Messages are newline-delimited and must not contain embedded newlines. The connection lifecycle is tied to the subprocess — when the client terminates the process, the connection ends.

Stdio is ideal for local integrations: CLI tools, desktop applications, IDE extensions, and development environments. It requires no networking, no authentication configuration, and no port management. The official MCP specification recommends that clients support stdio whenever possible.

The limitation is that stdio only works when the client can spawn the server as a child process on the same machine. For remote or multi-client scenarios, you need HTTP.

HTTP with Server-Sent Events (SSE) — Deprecated

The original remote transport for MCP used two separate HTTP endpoints: one for the client to send messages (POST) and one for the server to push messages (SSE via GET). The server would send an endpoint event on the SSE connection telling the client where to POST its messages.

This transport is deprecated as of the 2025-06-18 specification and replaced by Streamable HTTP. Existing servers using HTTP+SSE should plan to migrate, though clients may maintain backwards compatibility by falling back to the old transport if the Streamable HTTP handshake fails.

Streamable HTTP — The Recommended Modern Transport

Streamable HTTP consolidates everything into a single endpoint. The client sends all messages as HTTP POST requests, and the server can respond with either:

  • Plain JSON (Content-Type: application/json) — for simple request/response patterns

  • SSE stream (Content-Type: text/event-stream) — for streaming responses that may include intermediate notifications before the final result

The client can also open a persistent SSE connection via GET to receive server-initiated messages (like tools/list_changed notifications).

Key capabilities that the older transport lacked:

  • Session management — The server can assign a session ID (Mcp-Session-Id header) during initialization, enabling stateful interactions across multiple requests.

  • Resumability — SSE events can include IDs. If a connection drops, the client reconnects with a Last-Event-ID header, and the server replays missed messages.

  • Flexible response modes — Simple tools can return plain JSON without SSE overhead. Complex tools can stream progress updates before delivering the final result.

Streamable HTTP is the right choice for remote servers, cloud deployments, multi-tenant platforms, and any scenario where the server needs to handle multiple clients simultaneously.

When to Use Which Transport

Scenario

Recommended Transport

Local CLI tools and IDE extensions

Stdio

Desktop applications (same machine)

Stdio

Remote/cloud-hosted MCP servers

Streamable HTTP

Multi-tenant SaaS integrations

Streamable HTTP

Development and testing

Stdio (simplest setup)

Production enterprise deployments

Streamable HTTP (with session management)

Popular MCP Tools and Servers

The MCP ecosystem has grown to over 3,000 indexed servers across multiple registries. Here are the most widely used ones, organized by category.

Developer Tools

  • GitHub MCP Server — Manage commits, pull requests, issues, and branches through AI agents. One of the most widely adopted MCP servers.

  • Git MCP Server — Read, search, and manipulate Git repositories directly. Useful for code analysis workflows.

  • Filesystem MCP Server — Secure file operations with configurable access controls. Supports read, write, move, and search with directory-level permissions.

  • Docker MCP Server — Build, run, and inspect containers. Integrates container management directly into AI workflows.

  • Playwright MCP Server — Browser automation for testing, scraping, and end-to-end workflows. The most-starred MCP server by GitHub activity.

Productivity Tools

Infrastructure and Observability

  • New Relic MCP Server — 35 tools for entity management, NRQL queries, alerting, incident response, and performance analytics. Supports dynamic tool filtering via tag-based headers.

  • AWS MCP Servers — Separate servers for EC2, S3, IAM, CloudWatch, CDK, and cost analysis.

  • Cloudflare MCP Server — Manage Workers, KV stores, R2 buckets, D1 databases, and DNS records.

  • Kubernetes MCP Servers — Cluster management, pod inspection, and deployment automation.

Browser and Testing

Data and AI

  • PostgreSQL MCP Server — Natural-language SQL queries against Postgres databases.

  • SQLite MCP Server — Lightweight database operations for local data analysis.

  • Supabase MCP Server — 20+ tools covering table design, migrations, SQL queries, and TypeScript type generation.

  • Vector database serversChroma, Qdrant, and Milvus all offer MCP server implementations for semantic search and retrieval.

Where to Find MCP Servers

Registry

Size

Description

registry.modelcontextprotocol.io

Official

Anthropic's official MCP server registry

MCP.so

3,000+

Comprehensive index with quality ratings

Smithery

2,200+

Automated installation guides and server discovery

mcp-awesome.com

1,200+

Quality-verified directory with tutorials

awesome-mcp-servers

Community

Community-curated GitHub list (3,700+ stars)

Agen Marketplace

100+ apps

Enterprise-governed MCP connections with identity-aware access

Using MCP Tools Across Platforms

MCP's value comes from universality — the same server works across every compatible platform. Here's how MCP tools integrate with the major AI development environments.

Claude Desktop

Claude Desktop was the first AI application to support MCP natively, announced alongside the original MCP protocol launch in November 2024. Configuration is done through a JSON file (claude_desktop_config.json) where you specify MCP servers and their launch commands:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"]
    }
  }
}

Once configured, Claude can discover and invoke all tools exposed by the connected servers. Claude Desktop supports local MCP servers via stdio, and Claude's paid plans support remote MCP servers via Streamable HTTP.

VS Code and GitHub Copilot

VS Code integrates MCP through a .vscode/mcp.json file in your workspace or a global mcp.json in your user settings. The VS Code MCP documentation covers configuration, sandboxing, trust management, and enterprise governance:

{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@anthropic-ai/mcp-playwright"]
    }
  }
}

VS Code supports server installation from the Extensions Gallery and CLI, sandboxes MCP servers on macOS and Linux, and offers centralized access control through GitHub organization policies for enterprise teams.

Cursor

Cursor supports MCP servers through its settings UI. Navigate to Settings → MCP and add server configurations. Cursor uses MCP tools within its AI-powered coding features, allowing agents to interact with external services during code generation and editing workflows.

Google Agent Development Kit (ADK)

Google's ADK integrates MCP through the McpToolset class, which wraps MCP server tools so they appear as native ADK tools to an LlmAgent:

from google.adk.tools.mcp import McpToolset, StdioServerParameters

tools, cleanup = await McpToolset.from_server(
    connection_params=StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    )
)
agent = LlmAgent(name="fs_agent", tools=tools)

ADK supports both patterns: consuming MCP server tools in ADK agents, and exposing ADK agents as MCP servers. Google provides deployment guidance for Cloud Run, GKE, and Vertex AI.

Microsoft Agent Framework

Microsoft's Agent Framework (formerly Semantic Kernel) provides MCP tool integration in both C# and Python. Three transport types are supported:

  • MCPStdioTool — Local servers via standard I/O

  • MCPStreamableHTTPTool — Remote servers via Streamable HTTP

  • MCPWebsocketTool — Real-time bidirectional communication via WebSocket

The framework also supports exposing an agent as an MCP server, allowing other MCP clients to invoke the agent's capabilities as tools.

LangChain and LangGraph

LangChain provides MCP tool adapters that convert MCP server tools into LangChain-compatible tools. This lets you use any MCP server within LangChain agents, chains, and LangGraph workflows without modifying the MCP server itself.

Building Your Own MCP Tools

Building an MCP server is how you make your application's capabilities available to every AI agent in the ecosystem. The official MCP SDKs handle the protocol plumbing — you focus on defining tools and implementing their logic.

Defining Tool Schema

Start with the tool definition. A good schema has three qualities: a clear name, a description that explains when to use the tool (not just what it does), and input parameters with descriptive fields.

{
  "name": "create_support_ticket",
  "description": "Create a new support ticket when a user reports a problem that needs tracking. Use this when the issue cannot be resolved immediately and requires follow-up from the support team.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": {
        "type": "string",
        "description": "Short summary of the issue (max 200 characters)"
      },
      "description": {
        "type": "string",
        "description": "Detailed description of the problem, including steps to reproduce"
      },
      "priority": {
        "type": "string",
        "enum": ["low", "medium", "high", "critical"],
        "description": "Issue severity: low (cosmetic), medium (degraded functionality), high (major feature broken), critical (system down)"
      }
    },
    "required": ["title", "description"]
  }
}

The description should tell the AI model when to reach for this tool, not just restate the name. Enum descriptions help the model select the right value without guessing.

Implementing in Python

The MCP Python SDK provides decorators for defining tools:

from mcp.server import Server
from mcp.types import TextContent
import mcp.server.stdio

server = Server("my-server")

@server.tool()
async def search_documents(query: str, limit: int = 10) -> list[TextContent]:
    """Search the document index by keyword.

    Args:
        query: Search terms to match against document titles and content
        limit: Maximum number of results to return (default: 10)
    """
    results = await document_index.search(query, max_results=limit)
    formatted = "\n".join(
        f"- {doc.title}: {doc.snippet}" for doc in results
    )
    return [TextContent(type="text", text=formatted)]

async def main():
    async with mcp.server.stdio.stdio_server() as (read, write):
        await server.run(read, write)

The SDK generates the JSON Schema from the function signature and docstring. Type hints become schema types, and the docstring's Args section becomes parameter descriptions.

Implementing in TypeScript

The MCP TypeScript SDK follows a similar pattern:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

server.tool(
  "search_documents",
  "Search the document index by keyword",
  {
    query: z.string().describe("Search terms to match against document titles and content"),
    limit: z.number().default(10).describe("Maximum results to return"),
  },
  async ({ query, limit }) => {
    const results = await documentIndex.search(query, limit);
    return {
      content: [
        {
          type: "text",
          text: results.map((doc) => `- ${doc.title}: ${doc.snippet}`).join("\n"),
        },
      ],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

The TypeScript SDK uses Zod for schema definition, which provides runtime validation and automatic JSON Schema generation.

Testing MCP Tools

The mcptools CLI (github.com/f/mcptools) is the fastest way to test MCP servers during development:

# List all tools on a server
mcp tools npx @modelcontextprotocol/server-filesystem /tmp

# Call a specific tool
mcp call search_documents '{"query": "authentication"}' npx my-server

# Launch an interactive shell
mcp shell npx my-server

# Start a web-based GUI for visual testing

The CLI supports all three transports (stdio, SSE, Streamable HTTP), multiple output formats (table, JSON, pretty JSON), and can scan your existing MCP configurations across Claude Desktop, VS Code, Cursor, and Windsurf.

For automated testing, write integration tests that connect to your server, list tools, call them with known inputs, and validate the outputs against expected results. Test edge cases: invalid parameters, missing required fields, timeouts, and error conditions.

Exposing an Agent as an MCP Server

The reverse pattern is also valuable — wrapping an AI agent's capabilities as an MCP server so other clients can use it as a tool. Both Google ADK and Microsoft's Agent Framework document this pattern.

This creates composable agent architectures where one agent can delegate specialized tasks to another. A planning agent might call a research agent's tools for information gathering, then call a writing agent's tools for content generation — all through MCP. For SaaS teams looking to expose their product to AI-driven usage, this pattern enables rapid ecosystem integration without rebuilding core infrastructure.

MCP Tool Security Best Practices

MCP tools execute real actions on production systems. A misconfigured or exploited tool can exfiltrate data, modify records, or provide unauthorized access. Security isn't an afterthought — it's foundational to any MCP deployment that handles real data.

For a deep dive into the full threat landscape, see our complete guide to MCP security.

Input Validation

Every tool must validate its inputs before execution. The JSON Schema in the tool definition provides structural validation (types, required fields, enums), but it's not enough on its own.

Server-side validation should include:

  • Parameter sanitization — Prevent command injection by escaping or rejecting shell metacharacters in string inputs.

  • Range enforcement — Bound numeric inputs to prevent resource exhaustion (e.g., a limit parameter that accepts 999999 could trigger a denial-of-service).

  • Path traversal prevention — If a tool accepts file paths, validate them against an allowlist of directories. Never trust client-provided paths without normalization and boundary checks.

Access Control and Authentication

MCP servers that connect to sensitive systems need authentication and authorization at multiple levels:

  • Server-level authentication — Verify the identity of the client connecting to the MCP server. For remote servers using Streamable HTTP, this means OAuth 2.0, API keys, or mutual TLS.

  • Tool-level authorization — Not every authenticated client should have access to every tool. Implement role-based access control (RBAC) to restrict which tools are available to which users or agents.

  • Resource-level permissions — A tool that queries a database should respect the user's data access permissions, not the server's. Delegate authentication tokens from the end user, not from the server's service account.

For a detailed look at implementing access control for AI agent gateways across your deployment, see our dedicated guide.

Sandboxing and Isolation

MCP servers should run with the minimum permissions necessary:

  • Filesystem restrictions — Limit file access to specific directories. The official Filesystem MCP server accepts an allowlist of paths at startup.

  • Network isolation — Restrict outbound network access to known endpoints. A tool designed to query your internal API shouldn't be able to reach arbitrary external servers.

  • Container isolation — Run MCP servers in containers with resource limits (CPU, memory, network) to prevent one server from affecting others.

  • Process sandboxing — VS Code sandboxes MCP servers on macOS and Linux by default, restricting their access to the host system.

Human-in-the-Loop Approval

The MCP specification explicitly requires that implementations provide a mechanism for human oversight. Users should be able to:

  • See which tools are available to the AI model

  • Receive confirmation prompts before sensitive tool calls execute

  • Review tool inputs before they are sent to the server

  • Deny any tool invocation they don't approve

This is especially important for tools annotated as destructive (destructiveHint: true) or tools that access external systems (openWorldHint: true). The AI model should not autonomously delete records, send emails, or modify infrastructure without explicit user consent. Organizations governing AI agents across enterprise applications need systematic approval workflows, not ad-hoc permission prompts.

Trust and Verification

Third-party MCP servers are code that runs on your infrastructure with access to your data. Treat them with the same scrutiny you'd apply to any dependency:

  • Source verification — Only install MCP servers from trusted sources. Check the repository's maintenance history, contributor reputation, and security practices.

  • Code review — For sensitive deployments, review the server's source code. Pay attention to what network calls it makes, what data it logs, and how it handles credentials.

  • Update monitoring — MCP servers can be updated without notification. Pin versions in your configuration and review changelogs before upgrading.

  • Tool description integrity — A compromised server can change its tool descriptions to manipulate AI model behavior (a form of prompt injection). Monitor tool manifests for unexpected changes.

MCP Tool Error Handling

MCP separates errors into two categories, each handled differently.

Protocol Errors vs. Tool Execution Errors

Protocol errors use standard JSON-RPC error codes. These occur when the request itself is malformed — an unknown tool name, invalid parameters, or a server-side crash:

{
  "jsonrpc": "2.0",
  "id": 3,
  "error": {
    "code": -32602,
    "message": "Unknown tool: invalid_tool_name"
  }
}

Tool execution errors are reported inside a normal result with isError: true. These occur when the tool runs but fails — an API rate limit, a network timeout, a business logic error:

{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Failed to fetch weather data: API rate limit exceeded. Try again in 60 seconds."
      }
    ],
    "isError": true
  }
}

The distinction matters because the AI model sees tool execution errors but not protocol errors. When a tool returns isError: true, the model can acknowledge the failure, explain it to the user, or try a different approach. Protocol errors are handled by the client infrastructure and typically surface as generic error messages.

Best Practices for LLM-Consumable Errors

When returning tool execution errors:

  • Be specific — "API rate limit exceeded, try again in 60 seconds" is actionable. "Request failed" is not.

  • Include recovery hints — Tell the model what to do next. "No results found. Try broadening the search terms or removing filters."

  • Avoid stack traces — The AI model doesn't need a traceback. Return a clear error message, and log the technical details server-side.

  • Timeout handling — Set reasonable timeouts for external calls and return descriptive timeout errors rather than letting the connection hang.

Deploying MCP Tools in Production

Moving MCP servers from development to production introduces requirements around reliability, scalability, and governance that don't exist when testing locally.

Containerized Deployment

Package your MCP server as a Docker container for consistent, reproducible deployments:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
CMD ["node", "server.js"]

For stdio-based servers, the client launches the container as a subprocess. For Streamable HTTP servers, deploy the container behind a reverse proxy or load balancer with TLS termination.

Set resource limits (CPU, memory) to prevent a single server from consuming unbounded resources. Use health checks to detect and restart failed servers automatically.

Remote Server Deployment

Streamable HTTP servers deploy like any HTTP service. Key considerations:

  • TLS everywhere — All remote MCP connections should use HTTPS. Never run Streamable HTTP over plaintext in production.

  • Authentication — Implement OAuth 2.0 or API key authentication at the transport level. Validate credentials before processing any MCP messages.

  • Session affinity — If your server uses session management (Mcp-Session-Id), configure your load balancer for sticky sessions so requests from the same client reach the same server instance.

  • Connection management — Set appropriate timeouts for idle SSE connections. Clean up session state when clients disconnect or sessions expire.

Sidecar Patterns

In Kubernetes environments, MCP servers can run as sidecar containers alongside the main application:

  • The sidecar handles MCP protocol communication

  • The main container provides the business logic

  • They communicate over localhost, minimizing network latency

  • Each can be scaled, updated, and monitored independently

Google's ADK documentation covers sidecar deployment on GKE with specific guidance for production-ready configurations.

Enterprise Governance

At enterprise scale, individual MCP server configurations don't scale. Organizations need:

  • Centralized access control — Define which teams, roles, and applications can access which MCP servers and tools. VS Code supports this through GitHub organization policies for centralized MCP governance.

  • Audit logging — Every tool invocation should be logged with the user identity, tool name, parameters, result, and timestamp. These logs are critical for compliance, incident investigation, and usage analytics.

  • Configuration management — Manage MCP server configurations centrally rather than per-developer. Use infrastructure-as-code to version and deploy server configurations consistently.

  • Tool lifecycle management — Track which tools are in use, monitor for deprecated tools, and manage upgrades across the organization.

The Role of an MCP Gateway for Tool Management

As MCP adoption scales beyond individual developers to team and organizational deployments, a pattern emerges: every team configures their own MCP servers, each with its own security posture, access controls, and monitoring blind spots. This is the same fragmentation problem MCP was designed to solve for AI integrations — now replayed at the governance layer.

Why Direct MCP Server Access Creates Risk at Scale

When each AI agent connects directly to MCP servers:

  • Access control is per-server — Every server implements its own authentication and authorization (or doesn't). There's no unified view of who can invoke which tools.

  • No cross-server audit trail — Tool invocations are logged (if at all) in each server's own format. Correlating activity across servers for incident investigation requires manual log aggregation. This is the governance gap that widens as agent adoption scales.

  • PII and sensitive data leak through tool results — A tool that returns customer records sends that data directly to the AI model. There's no interception point to redact sensitive fields.

  • Policy enforcement requires code changes — Blocking a specific tool or adding a rate limit means modifying the MCP server itself. This doesn't scale when you're managing dozens of servers from different vendors.

Gateway-Level Enforcement

An MCP gateway sits between AI agents and MCP servers, providing a centralized control plane for:

  • RBAC and fine-grained authorization — Define policies that control which users, roles, and agents can access which tools on which servers. Apply policies consistently without modifying any MCP server code.

  • PII redaction — Intercept tool results and automatically redact or mask sensitive data fields before they reach the AI model. This addresses one of the most significant compliance risks in MCP deployments.

  • Audit trails — Log every tool invocation with full context — user identity, agent identity, tool name, parameters, response, and timing — in a centralized, queryable format.

  • Rate limiting and abuse prevention — Enforce rate limits per user, per tool, and per server to prevent resource exhaustion and detect anomalous behavior patterns.

How Agen Provides a Gateway Layer for Every MCP Tool Call

Agen is the identity-aware gateway that sits between AI agents and your MCP servers. Every tool call passes through Agen, where it is authenticated against user identity, authorized against fine-grained policies, and logged for compliance.

Instead of configuring security server-by-server, you define policies once in Agen and they apply to every MCP tool call across your entire deployment. This is how enterprise teams move from experimental MCP usage to production-grade AI agent infrastructure — without rebuilding their security posture for each new server.

Policy enforcement without code changes. Full visibility without log aggregation. Identity-aware access control without per-server configuration.

Frequently Asked Questions

What are MCP tools?

MCP tools are executable functions exposed by MCP servers that AI agents can discover and invoke. They enable AI models to perform real actions — querying databases, calling APIs, managing files, sending messages — through a standardized protocol rather than custom integrations.

What is the difference between MCP tools and APIs?

APIs require explicit integration code for each connection. MCP tools are discovered dynamically by AI agents through a standard protocol, and the AI model decides when and how to use them based on context. MCP provides a universal interface layer on top of existing APIs.

What is the difference between MCP tools and LLM function calling?

LLM function calling is scoped to a single application session — you define functions in your code and the model can call them. MCP tools live on external servers that any compatible client can connect to, support dynamic discovery and updates, and work across transport protocols. Function calling is how a model talks to your code; MCP is how a model talks to any tool, anywhere.

What are the most popular MCP servers?

The most popular by GitHub activity include Playwright (browser automation), GitHub (repository management), Filesystem (file operations), Notion (document management), PostgreSQL (database queries), and Slack (messaging). The official MCP servers repository has over 80,000 stars.

How do I build my own MCP tool?

Use the official MCP SDKs for Python or TypeScript. Define your tool schema (name, description, JSON Schema inputs), implement the handler function, and connect the server to a transport (stdio for local, Streamable HTTP for remote). The SDKs handle protocol negotiation and message formatting.

How do I secure MCP tools?

Validate all tool inputs server-side, implement authentication and role-based access control, sandbox servers with minimal permissions, enable human-in-the-loop approval for destructive operations, and vet third-party servers before deploying them. For enterprise deployments, use an MCP gateway for centralized policy enforcement and audit logging.

Can I use MCP tools with ChatGPT?

Yes. OpenAI has adopted MCP, and ChatGPT supports MCP server connections. The same MCP servers that work with Claude, VS Code, and Cursor also work with ChatGPT — this is the core value of having a universal protocol.

What transport protocol should I use for MCP?

Use stdio for local servers (CLI tools, IDE extensions, desktop apps) — it's the simplest option with no networking required. Use Streamable HTTP for remote servers, cloud deployments, and multi-client scenarios. The older HTTP+SSE transport is deprecated and should be migrated to Streamable HTTP.

Empower your workforce with secure agents

© 2026 Agen™ | All rights reserved.

Empower your workforce with secure agents

© 2026 Agen™ | All rights reserved.