MCP Catalogs: The Complete Guide to Finding and Managing MCP Servers

MCP Catalogs: How to Find, Evaluate, and Manage MCP Servers for Enterprise AI
There are now over 10,000 publicly available MCP servers. They connect AI agents to everything from Salesforce and Jira to Postgres and Kubernetes. The ecosystem is growing fast — and finding the right server for your use case is becoming a real problem.
MCP catalogs solve that problem. They are the directories, registries, and curated collections that help developers and enterprises discover, evaluate, and manage MCP servers at scale.
This guide covers what MCP catalogs are, how they differ from registries and gateways, which ones are worth using, and what enterprise teams need to think about before connecting their agents to anything they find in a public directory.
What Is an MCP Catalog?
An MCP catalog is any organized collection of MCP (Model Context Protocol) servers that helps developers and AI platforms discover available tools and integrations. Some catalogs are simple directories — searchable lists of community-submitted servers. Others are curated, verified collections backed by vendors like Docker or Microsoft. And at the infrastructure level, the official MCP Registry provides a standardized API that other catalogs can build on top of.
The core function is the same across all of them: make it easy to find MCP servers so AI agents can connect to the tools they need.
MCP Catalog vs. MCP Registry — What's the Difference?
The terms "catalog" and "registry" are often used interchangeably, but they serve different functions in the MCP ecosystem:
MCP Registry — The infrastructure layer. A registry stores standardized metadata about MCP servers (name, description, version, installation instructions, transport configuration) and exposes it through a REST API. The official MCP Registry is the canonical example — a centralized metadata repository backed by Anthropic, GitHub, PulseMCP, and Microsoft. It is designed to be consumed by downstream aggregators, not browsed directly by end users.
MCP Catalog — The discovery layer. A catalog is a curated, browsable collection of MCP servers — often built on top of a registry. Catalogs add presentation, filtering, search, trending metrics, ratings, and editorial context. Think of the registry as the database and the catalog as the storefront.
MCP Directory / Marketplace — Often synonymous with catalog, but typically emphasizes community curation over vendor curation. PulseMCP and mcpservers.org are directories. Docker MCP Catalog is a vendor-curated catalog.
In practice, most people searching for "MCP catalogs" want a place to browse and discover servers. The registry matters when you need programmatic access, want to publish your own server, or are building infrastructure that needs a standardized API.
Why MCP Catalogs Exist
Twelve months ago, finding an MCP server meant browsing GitHub repos or asking in Discord. That worked when the ecosystem had a few hundred servers. It does not work at 10,000+.
MCP catalogs emerged to solve three problems:
Discoverability — Developers need to find servers for specific tools (Slack, Notion, Salesforce) without searching GitHub manually. Catalogs provide search, filtering, and categorization.
Evaluation — Not all MCP servers are created equal. Some are official vendor implementations. Others are weekend projects with no authentication and no maintenance. Catalogs help surface quality signals — verification status, update frequency, popularity, and transport support.
Standardization — The official MCP Registry introduced a standardized server.json format and namespace system, so every server is described consistently. This makes it possible for AI platforms to auto-discover and auto-configure servers programmatically.
Why MCP Catalogs Matter for Enterprise AI Teams
For individual developers, MCP catalogs are a convenience. For enterprise teams running AI agents at scale, they are an operational necessity — and a potential security risk.
The Tool Sprawl Problem — Agents Need Curated Access
When your organization has dozens of teams building agents, each team discovers and connects MCP servers independently. One team finds a Slack MCP server on PulseMCP. Another team finds a different one on mcpservers.org. A third team builds their own. Nobody knows which servers are in use, which are maintained, or which have been vetted by security.
This is tool sprawl — the MCP equivalent of shadow IT. And it compounds fast.
Security and Trust — Not All MCP Servers Are Created Equal
Public MCP catalogs list community-submitted servers alongside vendor-verified ones. The official MCP Registry uses namespace authentication (reverse DNS verification) to confirm that servers come from their claimed sources — but this only verifies identity, not security. A namespace-verified server can still have vulnerabilities, excessive permissions, or questionable data handling.
Enterprise teams need to evaluate MCP servers the same way they evaluate third-party dependencies: check the source, review the permissions, and assess the maintenance posture before connecting any agent. This is where MCP security becomes an infrastructure concern, not just a best practice.
Governance at Scale — Managing Hundreds of AI Agent Integrations
Discovery is the first problem. Governance is the harder one.
Once your teams start pulling MCP servers from public catalogs, you need answers to questions that no catalog can provide: Which servers are approved for production use? Which teams are allowed to use which servers? What approval workflow does a new server go through? Who is notified when an agent connects to a server that has not been vetted?
Catalogs tell you what is available. Governance tells you what is allowed. And the governance gap is growing faster than most organizations realize.
Compliance and Auditability — Knowing What Your Agents Can Reach
For regulated industries, the stakes are higher. If an AI agent connects to an MCP server that exposes customer data, you need a record of who approved that connection, when, and under what policy. Public catalogs do not provide this. Governing AI agents across enterprise apps requires purpose-built infrastructure.
The Official MCP Registry
Before diving into individual catalogs, it is worth understanding the infrastructure layer that underpins much of the ecosystem.
The official MCP Registry launched in preview in September 2025 as an open catalog and API for publicly available MCP servers. It is maintained by a working group that includes contributors from Anthropic, GitHub, PulseMCP, Block, and Microsoft — making it the closest thing the ecosystem has to a canonical source of truth.
The registry is not a browsable catalog in the traditional sense. It is a metadata repository with a REST API, designed to be consumed programmatically by downstream aggregators (catalogs, marketplaces, AI platforms) rather than browsed directly by developers.
How the Registry Ecosystem Works
The MCP Registry sits at the center of a layered ecosystem:
Package registries (npm, PyPI, Docker Hub) host the actual server code and binaries. The MCP Registry does not host code — it hosts metadata that points to packages in these registries.
The MCP Registry stores standardized metadata about each server in a server.json format: the server's unique name, where to find it, how to install it, what capabilities it exposes, and its version history.
Downstream aggregators (catalogs and marketplaces like PulseMCP, Docker MCP Catalog, and vendor-specific listings) pull metadata from the registry and add their own curation layer — search interfaces, ratings, editorial reviews, verification badges.
This layered design means the registry can stay focused on metadata accuracy and namespace integrity while catalogs compete on user experience, curation quality, and enterprise features.
The registry also supports private sub-registries. Organizations can run internal registries using the same OpenAPI specification, so only vetted, company-approved MCP servers appear in their internal catalog. This is the foundation of enterprise MCP governance.
How the Registry API Works
The registry exposes a straightforward REST API at https://registry.modelcontextprotocol.io/v0/:
List servers: GET /v0/servers?limit=10 returns paginated server metadata.
Search: GET /v0/servers?search=sql&limit=5 filters by keyword.
Pagination: cursor-based pagination handles the full catalog, with a 100-result-per-page limit.
Each server entry includes a namespace identifier (reverse DNS format like io.github.username/server-name), description, repository link, version, and package installation details.
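The query patterns above can be sketched as a small Python helper. The base URL and the limit, search, and cursor parameters come from the registry description in this section; the sample response shape (a servers list with a next cursor) is an illustrative assumption, not the exact registry schema.

```python
from urllib.parse import urlencode

REGISTRY_BASE = "https://registry.modelcontextprotocol.io/v0"

def build_servers_url(search=None, limit=10, cursor=None):
    """Build a /v0/servers query URL using the parameters described above."""
    params = {"limit": limit}
    if search:
        params["search"] = search
    if cursor:
        params["cursor"] = cursor
    return f"{REGISTRY_BASE}/servers?{urlencode(params)}"

# Trimmed, illustrative response shape -- real entries carry more fields.
sample_page = {
    "servers": [
        {"name": "io.github.example/sqlite-server",
         "description": "Query SQLite databases"},
    ],
    "metadata": {"next_cursor": None},
}

# A consumer would page until next_cursor is empty, collecting names.
names = [s["name"] for s in sample_page["servers"]]
```

A downstream catalog would loop on the cursor value until it is exhausted; the helper keeps that logic out of the URL construction.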
Publishing to the MCP Registry
Server maintainers can publish using the mcp-publisher CLI tool:
Install: brew install mcp-publisher (macOS)
Initialize: mcp-publisher init inside your server directory
Authenticate: mcp-publisher login github
Publish: mcp-publisher publish
Namespace ownership is verified through GitHub account authentication or DNS verification, ensuring only the legitimate owner of a domain or GitHub account can publish servers under that namespace. This prevents impersonation but does not guarantee security — that responsibility falls on the server maintainer and on the downstream governance layer.
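A consuming tool can sanity-check the reverse-DNS namespace format before trusting an entry. The pattern below accepts names of the shape shown above (io.github.username/server-name); the registry's actual grammar may be stricter, so treat this as an illustrative pre-filter, not the authoritative rule.

```python
import re

# Matches a lowercase reverse-DNS prefix, a slash, then a server name,
# e.g. "io.github.alice/weather-server". Illustrative only -- the
# registry spec defines the real namespace grammar.
NAMESPACE_RE = re.compile(r"^[a-z0-9]+(\.[a-z0-9-]+)+/[a-z0-9][a-z0-9._-]*$")

def is_valid_namespace(name: str) -> bool:
    """Cheap structural check before deeper verification."""
    return bool(NAMESPACE_RE.match(name))
```

Remember that passing this check says nothing about ownership; only the registry's GitHub or DNS verification establishes that.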
The Top MCP Catalogs and Registries Compared
The MCP catalog landscape splits into four categories: vendor-curated catalogs, community directories, vendor-specific catalogs, and the official registry. Here is how the major options compare.
Docker MCP Catalog
Docker's MCP Catalog is a curated collection of 300+ verified MCP servers, packaged as Docker images and distributed through Docker Hub. Every server goes through Docker's verification process, which includes provenance checks, security scanning, and versioning.
What sets Docker apart is the operational model. MCP servers are not standalone installs — they are containerized, versioned images managed through Docker's existing infrastructure. This means you get the same security, update, and lifecycle management you already use for application containers.
Docker also introduces MCP Profiles — named groupings that organize servers by project or team. A "web-dev" profile might include GitHub and Playwright servers, while a "backend" profile bundles database and monitoring tools. Profiles support both containerized servers from the catalog and remote MCP servers, so you can mix and match.
The Docker MCP Gateway routes requests from any connected client (Claude Code, Cursor, Zed, and others) to the right server, handling authentication and lifecycle management centrally.
Best for: Teams already in the Docker ecosystem who want containerized, verified MCP servers with centralized management. Strong choice for security-conscious organizations.
Microsoft MCP Catalog
Microsoft's MCP Catalog is a GitHub repository listing all official Microsoft-built MCP servers. It covers a wide range of the Microsoft ecosystem:
Cloud infrastructure — Azure, Azure Kubernetes Service (AKS), Azure DevOps
Productivity — Microsoft 365 Calendar, Mail, Copilot Chat, Teams
Developer tools — GitHub, Dev Box, Markitdown
Data and analytics — Fabric, Dataverse, Clarity, Sentinel
Each server includes installation instructions for multiple IDEs (VS Code, Visual Studio, IntelliJ, Eclipse) and supports both local (Stdio) and remote (HTTP) deployment.
This is not a general-purpose catalog — it is Microsoft's own MCP server inventory. But for enterprises running on Azure and M365, it is the authoritative source for connecting AI agents to the Microsoft stack.
Best for: Enterprises in the Microsoft ecosystem looking for official, maintained MCP servers for Azure, M365, and GitHub.
PulseMCP
PulseMCP is the largest open MCP server directory, listing over 10,000 servers and updated daily. It is a pure discovery platform — searchable, filterable, with trending metrics that show which servers are gaining adoption.
PulseMCP does not verify or curate servers. It indexes everything publicly available and lets developers filter by category, client compatibility, and popularity. The trending view is useful for gauging ecosystem momentum — which integrations are seeing the most adoption across the MCP community.
PulseMCP is also one of the backing contributors to the official MCP Registry, which means its data feeds into the broader ecosystem infrastructure.
Best for: Broad discovery and research. Useful when you need to survey the full landscape of available MCP servers or track ecosystem trends.
mcpservers.org (Awesome MCP Servers)
mcpservers.org is a community-curated collection that combines a Featured and Latest MCP server listing with an educational FAQ section explaining MCP fundamentals. It is leaner than PulseMCP, focusing on quality over quantity.
The featured section highlights well-maintained, popular servers (Bright Data, Google, Supabase), while the FAQ covers foundational questions like "What is MCP?" and "What problem does it solve?" — making it a useful starting point for teams new to the protocol.
Best for: Developers exploring MCP for the first time who want a curated entry point rather than an overwhelming directory.
Postman MCP Server List
Postman's MCP offering takes a different approach. Rather than building its own standalone catalog, Postman provides a cross-catalog search tool — a notebook workspace that queries multiple MCP catalogs (Postman's own, Glama.ai, PulseMCP) and returns unified results.
For API developers already working in Postman, this is a natural extension. You can discover MCP servers alongside your existing API collections and integrate them into your development workflow without leaving the Postman environment.
Best for: API developers already using Postman who want MCP server discovery integrated into their existing workflow.
MCP Market
MCP Market is a marketplace-style directory focused on MCP servers for Claude and Cursor. It presents a clean browsable interface with server cards showing descriptions and compatibility. Less editorial depth than PulseMCP, but a more focused browsing experience for developers using specific AI coding tools.
Best for: Developers using Claude or Cursor who want a quick, visual way to browse compatible MCP servers.
Figma MCP Catalog
Figma's MCP Catalog is an example of a vendor-specific catalog — a single company's MCP server with installation guides for 18+ MCP clients. Rather than listing many servers, it provides a comprehensive guide for connecting Figma to every major AI platform and IDE.
This pattern is becoming common: major SaaS vendors publishing their own MCP server alongside a catalog page that simplifies adoption. Shopify follows the same model with its Catalog MCP server for product discovery.
Best for: Teams that use Figma and want to connect it to their AI development workflow.
MCP Catalog Comparison Table
| Catalog | Type | Server Count | Verified / Curated | API Access | Enterprise Features |
|---|---|---|---|---|---|
| Official MCP Registry | Protocol registry | Growing | Namespace-verified | REST API (v0) | Private sub-registries |
| Docker MCP Catalog | Vendor-curated | 300+ | Verified + security-scanned | Docker Hub API | Custom catalogs, Profiles, Gateway |
| Microsoft MCP | Vendor catalog | 18+ | Official Microsoft servers | GitHub | Enterprise ecosystem (Azure, M365) |
| PulseMCP | Community directory | 10,000+ | Unverified (indexed) | Via MCP Registry | None |
| mcpservers.org | Community directory | Hundreds | Community-curated | None | None |
| MCP Market | Marketplace | Thousands | Unverified | None | None |
| Postman | Cross-catalog search | Aggregated | Mixed | Postman API | Workspace integration |
How to Evaluate MCP Servers from a Catalog
Finding an MCP server in a catalog is the easy part. Deciding whether to deploy it is where enterprise discipline matters.
Security Posture — Is the Server Verified or Community-Submitted?
The first question for any MCP server: who built it, and who verified it?
Official vendor servers (Microsoft, Docker-verified, Shopify) have the strongest trust signal. The vendor has a reputation and business incentive to maintain security.
Namespace-verified servers in the official MCP Registry confirm the publisher's identity, but not the server's security. Verification means "this person controls this GitHub account or domain" — not "this server is safe."
Community-submitted servers in open directories (PulseMCP, mcpservers.org) have no verification guarantee. Treat them like any third-party open-source dependency: review the source, check the permissions, and assess the risk.
Transport Support — Stdio vs. SSE vs. Streamable HTTP
MCP servers support different transport protocols, and the choice matters for your deployment model:
Stdio — Local process communication. The server runs on the same machine as the client. Simplest, most secure (no network exposure), but does not scale across teams.
SSE (Server-Sent Events) — One-way streaming over HTTP. Suitable for remote servers that push updates.
Streamable HTTP — The newest transport, supporting full bidirectional communication over standard HTTP. The preferred choice for production remote deployments.
If your team is deploying agents in a centralized infrastructure, prioritize servers that support Streamable HTTP. If you are running agents locally for individual developers, Stdio is fine.
Authentication and Authorization — Does It Support OAuth 2.1?
Production MCP servers should support modern authentication standards. Look for:
OAuth 2.0/2.1 support for delegated access
API key authentication as a minimum baseline
On-behalf-of attribution — the ability to track which user the agent is acting for, not just which agent is making the request
Servers that require pasting raw API keys into configuration files are a red flag for enterprise use. That is a credential management problem waiting to happen.
Tool Scope — What Actions Can It Perform?
Every MCP server exposes a set of tools — executable functions the agent can invoke. Before deploying a server, review the tool list carefully:
Does it expose read-only tools, write tools, or both?
Can the tools be scoped or restricted?
Are there any destructive operations (delete, overwrite, send) that could cause damage if an agent misuses them?
The tool scope of an MCP server directly determines the blast radius of a misbehaving agent.
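The review questions above can be operationalized as an intake-time risk classification. MCP tool definitions do not carry a standard risk field, so the labels below are an assumption: something a security team would assign per tool when approving a server, then enforce at the gateway.

```python
# Hypothetical intake metadata -- the "risk" labels are assigned
# during review, not part of the MCP tool schema.
SERVER_TOOLS = {
    "jira": [
        {"name": "search_issues", "risk": "read"},
        {"name": "create_issue", "risk": "write"},
        {"name": "delete_issue", "risk": "destructive"},
    ],
}

RISK_ORDER = {"read": 0, "write": 1, "destructive": 2}

def allowed_tools(server: str, max_risk: str) -> list[str]:
    """Return tool names at or below the permitted risk ceiling."""
    ceiling = RISK_ORDER[max_risk]
    return [
        t["name"]
        for t in SERVER_TOOLS.get(server, [])
        if RISK_ORDER[t["risk"]] <= ceiling
    ]
```

Capping an agent at "write" keeps destructive operations like delete_issue out of its reachable tool set, which is exactly the blast-radius limit the section describes.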
Maintenance and Update Frequency
Check the server's commit history, release cadence, and issue tracker. An MCP server that has not been updated in six months may have unpatched vulnerabilities, broken compatibility with the latest MCP spec, or abandoned support for newer transport protocols.
Active maintenance is a trust signal. Abandonment is a risk signal.
The Enterprise Challenge — Why Catalogs Are Not Enough
MCP catalogs solve the discovery problem. They do not solve the governance problem.
Discovery Is Solved. Governance Is Not.
With 10,000+ MCP servers indexed across public catalogs, your developers will never struggle to find an integration. But finding a server and being authorized to use it are different questions entirely.
As a 2026 Gravitee survey found, only 14.4% of organizations had full security approval for their deployed AI agents. The other 85.6% are connecting agents to MCP servers without formal governance — a pattern that mirrors the early days of shadow IT, but with autonomous systems acting on behalf of your users.
From Open Directory to Controlled Registry
The practical path for enterprise teams is to maintain both:
Public catalog access for evaluation and research — let your teams discover what is available.
A private internal registry for production use — only approved, vetted MCP servers are available to agents in your environment.
The official MCP Registry supports this model natively. Its OpenAPI spec can be implemented by private registries, so your internal catalog uses the same API contract as the public one. Agents connect to your internal registry and only see servers your security team has approved.
Access Control for MCP Server Selection
Enterprise MCP governance goes beyond "approved" and "blocked." You need role-based control over which teams can use which servers:
A finance team's agents can access the Salesforce MCP server but not the GitHub MCP server.
A development team can access GitHub and Jira but not Salesforce.
Sensitive MCP servers (database write access, external communications) require explicit approval before any agent can connect.
This is where the catalog layer ends and the gateway layer begins. Catalogs tell agents what exists. Gateways enforce what is permitted.
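The team-to-server rules above reduce to a small policy table plus an approval gate. This is a minimal sketch using the teams and servers named in this section; in a real deployment the table would live in the gateway and be wired to IAM groups rather than hard-coded.

```python
# Illustrative policy table based on the examples above.
TEAM_POLICY = {
    "finance": {"salesforce"},
    "development": {"github", "jira", "postgres-write"},
}

# Sensitive servers need explicit approval even for entitled teams.
REQUIRES_APPROVAL = {"postgres-write", "email-sender"}

def can_connect(team: str, server: str, approved: bool = False) -> bool:
    """Gateway-side check: entitlement first, then the approval gate."""
    if server not in TEAM_POLICY.get(team, set()):
        return False
    if server in REQUIRES_APPROVAL and not approved:
        return False
    return True
```

The two-step shape matters: entitlement answers "is this team ever allowed here?", while the approval flag captures the per-connection review the section calls for.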
How Agen.co Turns MCP Catalogs Into Governed Infrastructure
Agen.co sits between your AI agents and the MCP servers they connect to — whether those servers come from Docker's catalog, Microsoft's repository, PulseMCP, or your own internal builds. It provides the governance layer that public catalogs lack.
Centralized MCP server registry with approval workflows. Instead of letting teams connect to any server they find in a public catalog, Agen.co routes all MCP server access through a governed registry. New servers go through an approval workflow before they are available to agents.
Fine-grained tool authorization. Not just "can this agent access this server," but "can this agent invoke this specific tool on this server." A tool governance layer that operates at the action level, not just the connection level.
Identity-aware access control. Every agent connection inherits your existing identity infrastructure — Okta, Azure AD, Google Workspace. Agents act on behalf of authenticated users, with delegated permissions and full on-behalf-of attribution.
Real-time observability. A live dashboard showing every agent interaction across every MCP server: which tools are being called, by whom, and on behalf of which users. No more guessing what your agents are doing.
Data governance and masking. Control how data flows between agents and MCP servers. Define redaction and masking policies once and apply them across all agent connections — ensuring sensitive data does not leak through MCP tool calls.
For SaaS companies exposing products to AI-driven usage, Agen.co for SaaS provides the MCP connector with tenant isolation. For internal teams deploying agents across enterprise tools, Agen.co for Work provides governed connectivity with approval workflows and audit trails.
Building Your Own Internal MCP Catalog
For organizations that want full control over their MCP server ecosystem, building an internal catalog is increasingly practical.
Why Enterprises Need a Private MCP Registry
Public catalogs are designed for discoverability. Enterprise environments need the opposite: controlled visibility. A private MCP registry ensures that:
Only vetted servers appear in your internal catalog
Agents cannot discover or connect to unapproved servers
Server metadata is consistent with your organization's naming conventions and documentation standards
Deprecated or vulnerable servers can be removed instantly without waiting for a public catalog to update
Key Components: Server Metadata, Approval Workflow, Access Policies
A functional internal MCP catalog needs three things:
Standardized server metadata — Use the official MCP Registry's server.json format so your internal catalog is compatible with ecosystem tooling. Include server name, description, version, transport configuration, tool list, and ownership.
Approval workflow — A process for requesting, reviewing, and approving new MCP servers. This should include security review, tool scope assessment, and data handling evaluation before any server is available to agents.
Access policies — Role-based rules that determine which teams, agents, and users can discover and connect to which servers. These policies should integrate with your existing IAM infrastructure.
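An intake check for the metadata component might look like the sketch below. The required fields follow the list above; the real server.json schema is defined by the MCP Registry spec and is richer than this, so treat the field set and transport names as assumptions for illustration.

```python
# Field names follow the list in this section; the authoritative
# schema is the MCP Registry's server.json specification.
REQUIRED_FIELDS = {"name", "description", "version", "transport", "tools", "owner"}
KNOWN_TRANSPORTS = {"stdio", "sse", "streamable-http"}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry passes intake."""
    problems = [
        f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - entry.keys())
    ]
    transport = entry.get("transport")
    if transport is not None and transport not in KNOWN_TRANSPORTS:
        problems.append(f"unknown transport: {transport}")
    return problems
```

Running this at submission time catches incomplete entries before they ever reach the security review step of the approval workflow.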
Integrating with Public Catalogs
Your internal catalog does not have to start from scratch. The practical approach:
Evaluate servers from public catalogs (Docker, PulseMCP, Microsoft) using the criteria above.
Import approved server metadata into your private registry.
Monitor public catalogs for updates to servers you have already approved — version bumps, security patches, deprecation notices.
Block unapproved public catalog access at the gateway level so agents cannot bypass your internal registry.
Versioning and Lifecycle Management
MCP servers evolve. New versions add tools, change authentication requirements, or drop support for transport protocols. Your internal catalog needs lifecycle management:
Version pinning — Lock agents to specific server versions until updates are reviewed and approved.
Deprecation notices — Flag servers that are approaching end-of-life or have been superseded by vendor-maintained alternatives.
Automated health checks — Monitor server availability and flag downtime or errors before they impact agent workflows.
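Version pinning reduces to comparing the version an agent is locked to against what a public catalog has published. A minimal sketch, with hypothetical server names and versions: anything newer than the pin is flagged for review rather than rolled out automatically.

```python
# Pins recorded at approval time; names and versions are illustrative.
PINNED = {"io.github.example/jira-server": "1.4.2"}

def check_version(server: str, published: str) -> str:
    """Classify a published version against the approved pin."""
    pinned = PINNED.get(server)
    if pinned is None:
        return "unapproved"        # never reviewed; agents must not connect
    if published == pinned:
        return "ok"                # matches the reviewed version
    return "review-needed"         # version drift: queue for re-review
```

A scheduled job comparing public catalog metadata against these pins gives you the update monitoring described above without ceding control of what agents actually run.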
The Future of MCP Catalogs
The MCP catalog ecosystem is maturing fast. Several trends are shaping where it goes next.
From Static Directories to Dynamic Registries
Early MCP catalogs were static lists — curated markdown files or simple web directories. The official MCP Registry introduced dynamic, API-driven discovery. The next step is real-time capability advertisement, where MCP servers actively publish their available tools and resources to registries as they change.
The 2026 MCP Roadmap signals continued investment in extensions, transport improvements, and registry capabilities — all of which will make catalogs more dynamic and programmatically useful.
Verification Standards and Trust Scores
Today, "verified" means different things in different catalogs. Docker verification means containerized and security-scanned. MCP Registry verification means namespace-authenticated. Neither is a comprehensive trust score.
Expect catalog providers to develop more nuanced trust signals: automated security scanning, community ratings, behavioral analysis, and compliance certifications. The catalogs that provide the most reliable trust signals will become the default for enterprise adoption.
Catalog Federation — Connecting Public and Private Registries
The MCP Registry's OpenAPI spec already enables this: a federated model where public registries, vendor catalogs, and private enterprise registries all implement the same API. An agent can query its local registry, which aggregates results from approved sources — a unified discovery experience backed by distributed governance.
The Role of Gateways in Catalog Governance
Catalogs and gateways are converging. Today, they are separate layers — catalogs handle discovery, gateways handle enforcement. Increasingly, enterprise platforms are combining both: a governed catalog that not only shows you which servers are available, but enforces access control, monitors usage, and audits every connection in real time.
This convergence is where the MCP ecosystem is heading — and it is where platforms like Agen.co are already operating.
Frequently Asked Questions
What is an MCP Catalog?
An MCP catalog is an organized collection of MCP (Model Context Protocol) servers that helps developers and AI platforms discover available tools and integrations. Catalogs range from simple community directories like PulseMCP (10,000+ servers) to vendor-curated collections like the Docker MCP Catalog (300+ verified servers). They provide search, filtering, and metadata to help users find the right MCP server for their use case.
What is the official MCP Registry?
The official MCP Registry is a centralized metadata repository for publicly available MCP servers, maintained by contributors from Anthropic, GitHub, PulseMCP, and Microsoft. Available at registry.modelcontextprotocol.io, it provides a REST API for programmatic server discovery and a standardized server.json format for server metadata. It is designed to be consumed by downstream catalogs and aggregators rather than browsed directly.
How many MCP servers are available?
As of early 2026, over 10,000 MCP servers are publicly indexed across major catalogs. PulseMCP alone lists 10,000+, the Docker MCP Catalog provides 300+ verified servers, and Microsoft offers 18+ official MCP servers for Azure and M365. The ecosystem is growing rapidly, with new servers published daily.
Can I run a private MCP catalog for my organization?
Yes. The official MCP Registry's OpenAPI specification can be implemented by private registries, allowing enterprises to run internal MCP catalogs that only expose approved, vetted servers to their agents. This is the recommended approach for organizations that need governance over which MCP servers their AI agents can access.
What is the difference between an MCP catalog and an MCP gateway?
An MCP catalog handles discovery — it tells agents and developers which MCP servers exist and how to connect to them. An MCP gateway handles enforcement — it sits in the runtime path between agents and servers, controlling authentication, authorization, rate limiting, and audit logging. Catalogs answer "what is available?" Gateways answer "what is allowed?" Enterprise teams typically need both.
Are MCP servers in catalogs safe to use?
It depends on the catalog. Vendor-verified catalogs (Docker, Microsoft) apply security scanning and provenance checks. The official MCP Registry verifies namespace ownership but not server security. Community directories (PulseMCP, mcpservers.org) list servers without verification. Treat unverified MCP servers like any third-party open-source dependency — review the source code, check the permissions, and assess the maintenance posture before connecting any agent.
How do I publish my MCP server to a catalog?
For the official MCP Registry, install mcp-publisher (brew install mcp-publisher on macOS), initialize your server directory, authenticate with GitHub, and run mcp-publisher publish. Your server's namespace is verified through GitHub account ownership or DNS verification. For Docker's catalog, package your server as a Docker image and submit it through Docker Hub's verification process. Community directories typically accept submissions through GitHub pull requests or web forms.
MCP Catalogs: How to Find, Evaluate, and Manage MCP Servers for Enterprise AI
There are now over 10,000 publicly available MCP servers. They connect AI agents to everything from Salesforce and Jira to Postgres and Kubernetes. The ecosystem is growing fast — and finding the right server for your use case is becoming a real problem.
MCP catalogs solve that problem. They are the directories, registries, and curated collections that help developers and enterprises discover, evaluate, and manage MCP servers at scale.
This guide covers what MCP catalogs are, how they differ from registries and gateways, which ones are worth using, and what enterprise teams need to think about before connecting their agents to anything they find in a public directory.
What Is an MCP Catalog?
An MCP catalog is any organized collection of MCP (Model Context Protocol) servers that helps developers and AI platforms discover available tools and integrations. Some catalogs are simple directories — searchable lists of community-submitted servers. Others are curated, verified collections backed by vendors like Docker or Microsoft. And at the infrastructure level, the official MCP Registry provides a standardized API that other catalogs can build on top of.
The core function is the same across all of them: make it easy to find MCP servers so AI agents can connect to the tools they need.
MCP Catalog vs. MCP Registry — What's the Difference?
The terms "catalog" and "registry" are often used interchangeably, but they serve different functions in the MCP ecosystem:
MCP Registry — The infrastructure layer. A registry stores standardized metadata about MCP servers (name, description, version, installation instructions, transport configuration) and exposes it through a REST API. The official MCP Registry is the canonical example — a centralized metadata repository backed by Anthropic, GitHub, PulseMCP, and Microsoft. It is designed to be consumed by downstream aggregators, not browsed directly by end users.
MCP Catalog — The discovery layer. A catalog is a curated, browsable collection of MCP servers — often built on top of a registry. Catalogs add presentation, filtering, search, trending metrics, ratings, and editorial context. Think of the registry as the database and the catalog as the storefront.
MCP Directory / Marketplace — Often synonymous with catalog, but typically emphasizes community curation over vendor curation. PulseMCP and mcpservers.org are directories. Docker MCP Catalog is a vendor-curated catalog.
In practice, most people searching for "MCP catalogs" want a place to browse and discover servers. The registry matters when you need programmatic access, want to publish your own server, or are building infrastructure that needs a standardized API.
Why MCP Catalogs Exist
Twelve months ago, finding an MCP server meant browsing GitHub repos or asking in Discord. That worked when the ecosystem had a few hundred servers. It does not work at 10,000+.
MCP catalogs emerged to solve three problems:
Discoverability — Developers need to find servers for specific tools (Slack, Notion, Salesforce) without searching GitHub manually. Catalogs provide search, filtering, and categorization.
Evaluation — Not all MCP servers are created equal. Some are official vendor implementations. Others are weekend projects with no authentication and no maintenance. Catalogs help surface quality signals — verification status, update frequency, popularity, and transport support.
Standardization — The official MCP Registry introduced a standardized server.json format and namespace system, so every server is described consistently. This makes it possible for AI platforms to auto-discover and auto-configure servers programmatically.
Why MCP Catalogs Matter for Enterprise AI Teams
For individual developers, MCP catalogs are a convenience. For enterprise teams running AI agents at scale, they are an operational necessity — and a potential security risk.
The Tool Sprawl Problem — Agents Need Curated Access
When your organization has dozens of teams building agents, each team discovers and connects MCP servers independently. One team finds a Slack MCP server on PulseMCP. Another team finds a different one on mcpservers.org. A third team builds their own. Nobody knows which servers are in use, which are maintained, or which have been vetted by security.
This is tool sprawl — the MCP equivalent of shadow IT. And it compounds fast.
Security and Trust — Not All MCP Servers Are Created Equal
Public MCP catalogs list community-submitted servers alongside vendor-verified ones. The official MCP Registry uses namespace authentication (reverse DNS verification) to confirm that servers come from their claimed sources — but this only verifies identity, not security. A namespace-verified server can still have vulnerabilities, excessive permissions, or questionable data handling.
Enterprise teams need to evaluate MCP servers the same way they evaluate third-party dependencies: check the source, review the permissions, and assess the maintenance posture before connecting any agent. This is where MCP security becomes an infrastructure concern, not just a best practice.
Governance at Scale — Managing Hundreds of AI Agent Integrations
Discovery is the first problem. Governance is the harder one.
Once your teams start pulling MCP servers from public catalogs, you need answers to questions that no catalog can provide: Which servers are approved for production use? Which teams are allowed to use which servers? What approval workflow does a new server go through? Who is notified when an agent connects to a server that has not been vetted?
Catalogs tell you what is available. Governance tells you what is allowed. And the governance gap is growing faster than most organizations realize.
Compliance and Auditability — Knowing What Your Agents Can Reach
For regulated industries, the stakes are higher. If an AI agent connects to an MCP server that exposes customer data, you need a record of who approved that connection, when, and under what policy. Public catalogs do not provide this. Governing AI agents across enterprise apps requires purpose-built infrastructure.
The Official MCP Registry
Before diving into individual catalogs, it is worth understanding the infrastructure layer that underpins much of the ecosystem.
The official MCP Registry launched in preview in September 2025 as an open catalog and API for publicly available MCP servers. It is maintained by a working group that includes contributors from Anthropic, GitHub, PulseMCP, Block, and Microsoft — making it the closest thing the ecosystem has to a canonical source of truth.
The registry is not a browsable catalog in the traditional sense. It is a metadata repository with a REST API, designed to be consumed programmatically by downstream aggregators (catalogs, marketplaces, AI platforms) rather than browsed directly by developers.
How the Registry Ecosystem Works
The MCP Registry sits at the center of a layered ecosystem:
Package registries (npm, PyPI, Docker Hub) host the actual server code and binaries. The MCP Registry does not host code — it hosts metadata that points to packages in these registries.
The MCP Registry stores standardized metadata about each server in a server.json format: the server's unique name, where to find it, how to install it, what capabilities it exposes, and its version history.
Downstream aggregators (catalogs and marketplaces like PulseMCP, Docker MCP Catalog, and vendor-specific listings) pull metadata from the registry and add their own curation layer — search interfaces, ratings, editorial reviews, verification badges.
This layered design means the registry can stay focused on metadata accuracy and namespace integrity while catalogs compete on user experience, curation quality, and enterprise features.
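The metadata layer described above can be sketched in code. The record below is loosely modeled on the registry's server.json format — the field names and nesting here are illustrative approximations, not the authoritative schema, which is defined by the registry's OpenAPI specification:

```python
import json

# Illustrative metadata record loosely modeled on the registry's
# server.json format. Field names are approximate; the real schema
# is defined by the registry spec.
server_metadata = {
    "name": "io.github.example-user/weather-server",  # reverse-DNS namespace
    "description": "Exposes current weather lookups as MCP tools",
    "version": "1.2.0",
    "repository": {
        "url": "https://github.com/example-user/weather-server",
        "source": "github",
    },
    # The registry points at packages hosted elsewhere -- it never
    # hosts the code itself.
    "packages": [
        {
            "registry_name": "npm",
            "name": "@example-user/weather-mcp",
            "version": "1.2.0",
        }
    ],
}

def namespace_of(metadata: dict) -> str:
    """Return the reverse-DNS namespace portion of a server name."""
    return metadata["name"].split("/", 1)[0]

print(namespace_of(server_metadata))  # io.github.example-user
print(json.dumps(server_metadata, indent=2))
```

The namespace prefix is what the registry's authentication step verifies — everything after the slash is the maintainer's chosen server name.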
The registry also supports private sub-registries. Organizations can run internal registries using the same OpenAPI specification, so only vetted, company-approved MCP servers appear in their internal catalog. This is the foundation of enterprise MCP governance.
How the Registry API Works
The registry exposes a straightforward REST API at https://registry.modelcontextprotocol.io/v0/:
List servers: GET /v0/servers?limit=10 returns paginated server metadata.
Search: GET /v0/servers?search=sql&limit=5 filters by keyword.
Pagination: cursor-based pagination handles the full catalog, with a 100-result-per-page limit.
Each server entry includes a namespace identifier (reverse DNS format like io.github.username/server-name), description, repository link, version, and package installation details.
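A client-side sketch of these endpoints, kept offline for illustration: the URL builder matches the query parameters listed above, while the response payload is a trimmed, hypothetical example of the shape a real call returns (the live API carries fuller metadata per server plus a pagination cursor):

```python
from urllib.parse import urlencode

REGISTRY_BASE = "https://registry.modelcontextprotocol.io/v0"

def list_servers_url(search=None, limit=10, cursor=None):
    """Build a GET /v0/servers query URL with optional search and cursor."""
    params = {"limit": limit}
    if search:
        params["search"] = search
    if cursor:
        params["cursor"] = cursor
    return f"{REGISTRY_BASE}/servers?{urlencode(params)}"

# Trimmed, illustrative response shape -- not actual registry data.
sample_response = {
    "servers": [
        {"name": "io.github.alice/sqlite-server", "version": "0.3.1"},
        {"name": "io.github.bob/postgres-server", "version": "1.0.0"},
    ],
    "metadata": {"next_cursor": "abc123"},
}

names = [s["name"] for s in sample_response["servers"]]
print(list_servers_url(search="sql", limit=5))
print(names)
```

In a real integration you would fetch the URL, then pass metadata["next_cursor"] back as the cursor parameter until it is absent.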
Publishing to the MCP Registry
Server maintainers can publish using the mcp-publisher CLI tool:
Install: brew install mcp-publisher (macOS)
Initialize: mcp-publisher init inside your server directory
Authenticate: mcp-publisher login github
Publish: mcp-publisher publish
Namespace ownership is verified through GitHub account authentication or DNS verification, ensuring only the legitimate owner of a domain or GitHub account can publish servers under that namespace. This prevents impersonation but does not guarantee security — that responsibility falls on the server maintainer and on the downstream governance layer.
The Top MCP Catalogs and Registries Compared
The MCP catalog landscape splits into four categories: vendor-curated catalogs, community directories, vendor-specific catalogs, and the official registry. Here is how the major options compare.
Docker MCP Catalog
Docker's MCP Catalog is a curated collection of 300+ verified MCP servers, packaged as Docker images and distributed through Docker Hub. Every server goes through Docker's verification process, which includes provenance checks, security scanning, and versioning.
What sets Docker apart is the operational model. MCP servers are not standalone installs — they are containerized, versioned images managed through Docker's existing infrastructure. This means you get the same security, update, and lifecycle management you already use for application containers.
Docker also introduces MCP Profiles — named groupings that organize servers by project or team. A "web-dev" profile might include GitHub and Playwright servers, while a "backend" profile bundles database and monitoring tools. Profiles support both containerized servers from the catalog and remote MCP servers, so you can mix and match.
The Docker MCP Gateway routes requests from any connected client (Claude Code, Cursor, Zed, and others) to the right server, handling authentication and lifecycle management centrally.
Best for: Teams already in the Docker ecosystem who want containerized, verified MCP servers with centralized management. Strong choice for security-conscious organizations.
Microsoft MCP Catalog
Microsoft's MCP Catalog is a GitHub repository listing all official Microsoft-built MCP servers. It covers a wide range of the Microsoft ecosystem:
Cloud infrastructure — Azure, Azure Kubernetes Service (AKS), Azure DevOps
Productivity — Microsoft 365 Calendar, Mail, Copilot Chat, Teams
Developer tools — GitHub, Dev Box, Markitdown
Data and analytics — Fabric, Dataverse, Clarity, Sentinel
Each server includes installation instructions for multiple IDEs (VS Code, Visual Studio, IntelliJ, Eclipse) and supports both local (Stdio) and remote (HTTP) deployment.
This is not a general-purpose catalog — it is Microsoft's own MCP server inventory. But for enterprises running on Azure and M365, it is the authoritative source for connecting AI agents to the Microsoft stack.
Best for: Enterprises in the Microsoft ecosystem looking for official, maintained MCP servers for Azure, M365, and GitHub.
PulseMCP
PulseMCP is the largest open MCP server directory, listing over 10,000 servers and updated daily. It is a pure discovery platform — searchable, filterable, with trending metrics that show which servers are gaining adoption.
PulseMCP does not verify or curate servers. It indexes everything publicly available and lets developers filter by category, client compatibility, and popularity. The trending view is useful for gauging ecosystem momentum — which integrations are seeing the most adoption across the MCP community.
PulseMCP is also one of the backing contributors to the official MCP Registry, which means its data feeds into the broader ecosystem infrastructure.
Best for: Broad discovery and research. Useful when you need to survey the full landscape of available MCP servers or track ecosystem trends.
mcpservers.org (Awesome MCP Servers)
mcpservers.org is a community-curated collection that combines a Featured and Latest MCP server listing with an educational FAQ section explaining MCP fundamentals. It is leaner than PulseMCP, focusing on quality over quantity.
The featured section highlights well-maintained, popular servers (Bright Data, Google, Supabase), while the FAQ covers foundational questions like "What is MCP?" and "What problem does it solve?" — making it a useful starting point for teams new to the protocol.
Best for: Developers exploring MCP for the first time who want a curated entry point rather than an overwhelming directory.
Postman MCP Server List
Postman's MCP offering takes a different approach. Rather than building its own standalone catalog, Postman provides a cross-catalog search tool — a notebook workspace that queries multiple MCP catalogs (Postman's own, Glama.ai, PulseMCP) and returns unified results.
For API developers already working in Postman, this is a natural extension. You can discover MCP servers alongside your existing API collections and integrate them into your development workflow without leaving the Postman environment.
Best for: API developers already using Postman who want MCP server discovery integrated into their existing workflow.
MCP Market
MCP Market is a marketplace-style directory focused on MCP servers for Claude and Cursor. It presents a clean browsable interface with server cards showing descriptions and compatibility. Less editorial depth than PulseMCP, but a more focused browsing experience for developers using specific AI coding tools.
Best for: Developers using Claude or Cursor who want a quick, visual way to browse compatible MCP servers.
Figma MCP Catalog
Figma's MCP Catalog is an example of a vendor-specific catalog — a single company's MCP server with installation guides for 18+ MCP clients. Rather than listing many servers, it provides a comprehensive guide for connecting Figma to every major AI platform and IDE.
This pattern is becoming common: major SaaS vendors publishing their own MCP server alongside a catalog page that simplifies adoption. Shopify follows the same model with its Catalog MCP server for product discovery.
Best for: Teams that use Figma and want to connect it to their AI development workflow.
MCP Catalog Comparison Table
| Catalog | Type | Server Count | Verified / Curated | API Access | Enterprise Features |
|---|---|---|---|---|---|
| Official MCP Registry | Protocol registry | Growing | Namespace-verified | REST API (v0) | Private sub-registries |
| Docker MCP Catalog | Vendor-curated | 300+ | Verified + security-scanned | Docker Hub API | Custom catalogs, Profiles, Gateway |
| Microsoft MCP | Vendor catalog | 18+ | Official Microsoft servers | GitHub | Enterprise ecosystem (Azure, M365) |
| PulseMCP | Community directory | 10,000+ | Unverified (indexed) | Via MCP Registry | None |
| mcpservers.org | Community directory | Hundreds | Community-curated | None | None |
| MCP Market | Marketplace | Thousands | Unverified | None | None |
| Postman | Cross-catalog search | Aggregated | Mixed | Postman API | Workspace integration |
How to Evaluate MCP Servers from a Catalog
Finding an MCP server in a catalog is the easy part. Deciding whether to deploy it is where enterprise discipline matters.
Security Posture — Is the Server Verified or Community-Submitted?
The first question for any MCP server: who built it, and who verified it?
Official vendor servers (Microsoft, Docker-verified, Shopify) have the strongest trust signal. The vendor has a reputation and business incentive to maintain security.
Namespace-verified servers in the official MCP Registry confirm the publisher's identity, but not the server's security. Verification means "this person controls this GitHub account or domain" — not "this server is safe."
Community-submitted servers in open directories (PulseMCP, mcpservers.org) have no verification guarantee. Treat them like any third-party open-source dependency: review the source, check the permissions, and assess the risk.
Transport Support — Stdio vs. SSE vs. Streamable HTTP
MCP servers support different transport protocols, and the choice matters for your deployment model:
Stdio — Local process communication. The server runs on the same machine as the client. Simplest, most secure (no network exposure), but does not scale across teams.
SSE (Server-Sent Events) — One-way streaming over HTTP. Suitable for remote servers that push updates.
Streamable HTTP — The newest transport, supporting full bidirectional communication over standard HTTP. The preferred choice for production remote deployments.
If your team is deploying agents in a centralized infrastructure, prioritize servers that support Streamable HTTP. If you are running agents locally for individual developers, Stdio is fine.
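The deployment difference shows up directly in client configuration. The sketch below contrasts the two models using the "mcpServers" convention several MCP clients share — treat the exact key names as client-specific, and the endpoint URL as hypothetical:

```python
# Illustrative client configuration contrasting local and remote
# deployment. Key names mirror the common "mcpServers" convention,
# but the exact schema varies by client.
config = {
    "mcpServers": {
        # Stdio: the client spawns the server as a local subprocess.
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        },
        # Remote HTTP: the client talks to a network endpoint.
        "internal-tools": {
            "url": "https://mcp.example.internal/mcp",  # hypothetical endpoint
        },
    }
}

def transport_of(entry: dict) -> str:
    """Classify a config entry by deployment model."""
    return "stdio" if "command" in entry else "http"

for name, entry in config["mcpServers"].items():
    print(name, "->", transport_of(entry))
```

The practical consequence: stdio entries tie the server's lifecycle to one developer's machine, while HTTP entries let a central team run, patch, and monitor the server for everyone.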
Authentication and Authorization — Does It Support OAuth 2.1?
Production MCP servers should support modern authentication standards. Look for:
OAuth 2.0/2.1 support for delegated access
API key authentication as a minimum baseline
On-behalf-of attribution — the ability to track which user the agent is acting for, not just which agent is making the request
Servers that require pasting raw API keys into configuration files are a red flag for enterprise use. That is a credential management problem waiting to happen.
Tool Scope — What Actions Can It Perform?
Every MCP server exposes a set of tools — executable functions the agent can invoke. Before deploying a server, review the tool list carefully:
Does it expose read-only tools, write tools, or both?
Can the tools be scoped or restricted?
Are there any destructive operations (delete, overwrite, send) that could cause damage if an agent misuses them?
The tool scope of an MCP server directly determines the blast radius of a misbehaving agent.
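That review step can be partially automated. A minimal sketch, with entirely illustrative tool names and an assumed naming convention where destructive verbs appear in the tool name — a real implementation would inspect each tool's declared schema, not just its name:

```python
# Sketch of tool-scope review: classify a server's advertised tools
# and strip destructive ones before exposing them to an agent.
# Tool names and the verb list are illustrative assumptions.
DESTRUCTIVE_VERBS = ("delete", "drop", "overwrite", "send", "truncate")

advertised_tools = [
    "list_tickets", "read_ticket", "create_ticket",
    "delete_ticket", "send_notification",
]

def is_destructive(tool_name: str) -> bool:
    """Flag tools whose names start with, or contain, a destructive verb."""
    return any(tool_name.startswith(v) or f"_{v}" in tool_name
               for v in DESTRUCTIVE_VERBS)

def scope_tools(tools, allow_destructive=False):
    """Return the tool list an agent is actually permitted to see."""
    if allow_destructive:
        return list(tools)
    return [t for t in tools if not is_destructive(t)]

print(scope_tools(advertised_tools))
```

Even a crude filter like this makes the blast-radius question explicit: the agent only ever learns about the tools that survive the policy.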
Maintenance and Update Frequency
Check the server's commit history, release cadence, and issue tracker. An MCP server that has not been updated in six months may have unpatched vulnerabilities, broken compatibility with the latest MCP spec, or abandoned support for newer transport protocols.
Active maintenance is a trust signal. Abandonment is a risk signal.
The Enterprise Challenge — Why Catalogs Are Not Enough
MCP catalogs solve the discovery problem. They do not solve the governance problem.
Discovery Is Solved. Governance Is Not.
With 10,000+ MCP servers indexed across public catalogs, your developers will never struggle to find an integration. But finding a server and being authorized to use it are different questions entirely.
As a 2026 Gravitee survey found, only 14.4% of organizations had full security approval for their deployed AI agents. The other 85.6% are connecting agents to MCP servers without formal governance — a pattern that mirrors the early days of shadow IT, but with autonomous systems acting on behalf of your users.
From Open Directory to Controlled Registry
The practical path for enterprise teams is to maintain both:
Public catalog access for evaluation and research — let your teams discover what is available.
A private internal registry for production use — only approved, vetted MCP servers are available to agents in your environment.
The official MCP Registry supports this model natively. Its OpenAPI spec can be implemented by private registries, so your internal catalog uses the same API contract as the public one. Agents connect to your internal registry and only see servers your security team has approved.
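At its core, a private sub-registry's list endpoint just returns the same response shape as the public registry's GET /v0/servers, drawn from a vetted set. A minimal sketch — the payload shape here is illustrative, and a real implementation would follow the registry's OpenAPI spec exactly:

```python
import json

# Sketch of a private sub-registry's list handler: same response
# shape as the public GET /v0/servers, but only approved entries.
# Server names and the payload shape are illustrative.
APPROVED_SERVERS = [
    {"name": "com.example/salesforce-server", "version": "2.1.0"},
    {"name": "com.example/jira-server", "version": "1.4.2"},
]

def handle_list_servers(limit=100):
    """Return a JSON body for GET /v0/servers from the approved set."""
    page = APPROVED_SERVERS[:limit]
    body = {"servers": page, "metadata": {"count": len(page)}}
    return json.dumps(body)

response = json.loads(handle_list_servers(limit=1))
print(response["servers"][0]["name"])  # com.example/salesforce-server
```

Because the contract matches the public registry, agents and tooling built against the official API work unchanged against your internal one — they simply see a smaller, vetted catalog.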
Access Control for MCP Server Selection
Enterprise MCP governance goes beyond "approved" and "blocked." You need role-based control over which teams can use which servers:
A finance team's agents can access the Salesforce MCP server but not the GitHub MCP server.
A development team can access GitHub and Jira but not Salesforce.
Sensitive MCP servers (database write access, external communications) require explicit approval before any agent can connect.
This is where the catalog layer ends and the gateway layer begins. Catalogs tell agents what exists. Gateways enforce what is permitted.
How Agen Turns MCP Catalogs Into Governed Infrastructure
Agen.co sits between your AI agents and the MCP servers they connect to — whether those servers come from Docker's catalog, Microsoft's repository, PulseMCP, or your own internal builds. It provides the governance layer that public catalogs lack.
Centralized MCP server registry with approval workflows. Instead of letting teams connect to any server they find in a public catalog, Agen.co routes all MCP server access through a governed registry. New servers go through an approval workflow before they are available to agents.
Fine-grained tool authorization. Not just "can this agent access this server," but "can this agent invoke this specific tool on this server." A tool governance layer that operates at the action level, not just the connection level.
Identity-aware access control. Every agent connection inherits your existing identity infrastructure — Okta, Azure AD, Google Workspace. Agents act on behalf of authenticated users, with delegated permissions and full on-behalf-of attribution.
Real-time observability. A live dashboard showing every agent interaction across every MCP server: which tools are being called, by whom, and on behalf of which users. No more guessing what your agents are doing.
Data governance and masking. Control how data flows between agents and MCP servers. Define redaction and masking policies once and apply them across all agent connections — ensuring sensitive data does not leak through MCP tool calls.
For SaaS companies exposing products to AI-driven usage, Agen.co for SaaS provides the MCP connector with tenant isolation. For internal teams deploying agents across enterprise tools, Agen.co for Work provides governed connectivity with approval workflows and audit trails.
Building Your Own Internal MCP Catalog
For organizations that want full control over their MCP server ecosystem, building an internal catalog is increasingly practical.
Why Enterprises Need a Private MCP Registry
Public catalogs are designed for discoverability. Enterprise environments need the opposite: controlled visibility. A private MCP registry ensures that:
Only vetted servers appear in your internal catalog
Agents cannot discover or connect to unapproved servers
Server metadata is consistent with your organization's naming conventions and documentation standards
Deprecated or vulnerable servers can be removed instantly without waiting for a public catalog to update
Key Components: Server Metadata, Approval Workflow, Access Policies
A functional internal MCP catalog needs three things:
Standardized server metadata — Use the official MCP Registry's
server.jsonformat so your internal catalog is compatible with ecosystem tooling. Include server name, description, version, transport configuration, tool list, and ownership.Approval workflow — A process for requesting, reviewing, and approving new MCP servers. This should include security review, tool scope assessment, and data handling evaluation before any server is available to agents.
Access policies — Role-based rules that determine which teams, agents, and users can discover and connect to which servers. These policies should integrate with your existing IAM infrastructure.
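The access-policy component can be sketched as a small lookup plus an approval check. Team names, server names, and the policy shape below are all hypothetical; in production these rules would live in your IAM system, not in code:

```python
# Minimal sketch of role-based MCP server access policies.
# Teams, servers, and the policy structure are hypothetical.
POLICIES = {
    "finance": {"allowed_servers": {"salesforce"}},
    "engineering": {"allowed_servers": {"github", "jira", "postgres-write"}},
}

# Sensitive servers need an explicit approval record on top of policy.
REQUIRES_APPROVAL = {"postgres-write", "email-outbound"}

def can_connect(team, server, approved=frozenset()):
    """True only if policy allows the server and, for sensitive
    servers, an explicit approval has been recorded."""
    policy = POLICIES.get(team)
    if policy is None or server not in policy["allowed_servers"]:
        return False
    if server in REQUIRES_APPROVAL and server not in approved:
        return False
    return True

print(can_connect("finance", "salesforce"))        # True
print(can_connect("finance", "github"))            # False
print(can_connect("engineering", "postgres-write"))  # False until approved
```

The two-tier check mirrors the governance model described earlier: policy defines the ceiling of what a team may use, and approvals gate the servers with the largest blast radius.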
Integrating with Public Catalogs
Your internal catalog does not have to start from scratch. The practical approach:
Evaluate servers from public catalogs (Docker, PulseMCP, Microsoft) using the criteria above.
Import approved server metadata into your private registry.
Monitor public catalogs for updates to servers you have already approved — version bumps, security patches, deprecation notices.
Block unapproved public catalog access at the gateway level so agents cannot bypass your internal registry.
Versioning and Lifecycle Management
MCP servers evolve. New versions add tools, change authentication requirements, or drop support for transport protocols. Your internal catalog needs lifecycle management:
Version pinning — Lock agents to specific server versions until updates are reviewed and approved.
Deprecation notices — Flag servers that are approaching end-of-life or have been superseded by vendor-maintained alternatives.
Automated health checks — Monitor server availability and flag downtime or errors before they impact agent workflows.
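Version pinning in practice means comparing the pinned version against what the public catalog now publishes, and flagging drift for human review rather than auto-upgrading. A sketch with illustrative data:

```python
# Sketch of version-pin enforcement during a catalog sync: flag any
# server whose public version has moved past our pin. Data is
# illustrative, not real catalog state.
pins = {"io.github.alice/sqlite-server": "0.3.1"}
catalog_latest = {"io.github.alice/sqlite-server": "0.4.0"}

def needs_review(server):
    """True when the public catalog has moved past our pinned version."""
    pinned = pins.get(server)
    latest = catalog_latest.get(server)
    return pinned is not None and latest is not None and pinned != latest

for server in pins:
    if needs_review(server):
        print(f"{server}: pinned {pins[server]}, "
              f"latest {catalog_latest[server]} -> review")
```

A fuller implementation would parse semantic versions and distinguish patch releases (perhaps auto-approved) from major bumps (always reviewed), but the core loop is the same: sync, diff, queue for review.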
The Future of MCP Catalogs
The MCP catalog ecosystem is maturing fast. Several trends are shaping where it goes next.
From Static Directories to Dynamic Registries
Early MCP catalogs were static lists — curated markdown files or simple web directories. The official MCP Registry introduced dynamic, API-driven discovery. The next step is real-time capability advertisement, where MCP servers actively publish their available tools and resources to registries as they change.
The 2026 MCP Roadmap signals continued investment in extensions, transport improvements, and registry capabilities — all of which will make catalogs more dynamic and programmatically useful.
Verification Standards and Trust Scores
Today, "verified" means different things in different catalogs. Docker verification means containerized and security-scanned. MCP Registry verification means namespace-authenticated. Neither is a comprehensive trust score.
Expect catalog providers to develop more nuanced trust signals: automated security scanning, community ratings, behavioral analysis, and compliance certifications. The catalogs that provide the most reliable trust signals will become the default for enterprise adoption.
Catalog Federation — Connecting Public and Private Registries
The MCP Registry's OpenAPI spec already enables this: a federated model where public registries, vendor catalogs, and private enterprise registries all implement the same API. An agent can query its local registry, which aggregates results from approved sources — a unified discovery experience backed by distributed governance.
The Role of Gateways in Catalog Governance
Catalogs and gateways are converging. Today, they are separate layers — catalogs handle discovery, gateways handle enforcement. Increasingly, enterprise platforms are combining both: a governed catalog that not only shows you which servers are available, but enforces access control, monitors usage, and audits every connection in real time.
This convergence is where the MCP ecosystem is heading — and it is where platforms like Agen.co are already operating.
Frequently Asked Questions
What is an MCP Catalog?
An MCP catalog is an organized collection of MCP (Model Context Protocol) servers that helps developers and AI platforms discover available tools and integrations. Catalogs range from simple community directories like PulseMCP (10,000+ servers) to vendor-curated collections like the Docker MCP Catalog (300+ verified servers). They provide search, filtering, and metadata to help users find the right MCP server for their use case.
What is the official MCP Registry?
The official MCP Registry is a centralized metadata repository for publicly available MCP servers, maintained by contributors from Anthropic, GitHub, PulseMCP, and Microsoft. Available at registry.modelcontextprotocol.io, it provides a REST API for programmatic server discovery and a standardized server.json format for server metadata. It is designed to be consumed by downstream catalogs and aggregators rather than browsed directly.
How many MCP servers are available?
As of early 2026, over 10,000 MCP servers are publicly indexed across major catalogs. PulseMCP alone lists 10,000+, the Docker MCP Catalog provides 300+ verified servers, and Microsoft offers 18+ official MCP servers for Azure and M365. The ecosystem is growing rapidly, with new servers published daily.
Can I run a private MCP catalog for my organization?
Yes. The official MCP Registry's OpenAPI specification can be implemented by private registries, allowing enterprises to run internal MCP catalogs that only expose approved, vetted servers to their agents. This is the recommended approach for organizations that need governance over which MCP servers their AI agents can access.
What is the difference between an MCP catalog and an MCP gateway?
An MCP catalog handles discovery — it tells agents and developers which MCP servers exist and how to connect to them. An MCP gateway handles enforcement — it sits in the runtime path between agents and servers, controlling authentication, authorization, rate limiting, and audit logging. Catalogs answer "what is available?" Gateways answer "what is allowed?" Enterprise teams typically need both.
Are MCP servers in catalogs safe to use?
It depends on the catalog. Vendor-verified catalogs (Docker, Microsoft) apply security scanning and provenance checks. The official MCP Registry verifies namespace ownership but not server security. Community directories (PulseMCP, mcpservers.org) list servers without verification. Treat unverified MCP servers like any third-party open-source dependency — review the source code, check the permissions, and assess the maintenance posture before connecting any agent.
How do I publish my MCP server to a catalog?
For the official MCP Registry, install mcp-publisher (brew install mcp-publisher on macOS), initialize your server directory, authenticate with GitHub, and run mcp-publisher publish. Your server's namespace is verified through GitHub account ownership or DNS verification. For Docker's catalog, package your server as a Docker image and submit it through Docker Hub's verification process. Community directories typically accept submissions through GitHub pull requests or web forms.
Empower your workforce with secure agents

© 2026 Agen™ | All rights reserved.
Deploy anywhere