What is MCP? — Model Context Protocol Explained (2026)

MCP (Model Context Protocol) is a standard for connecting AI agents to APIs as tools. Learn how MCP works, its architecture, and how to create MCP server configs.

Definition

MCP (Model Context Protocol) is an open standard created by Anthropic for connecting AI models to external tools, data sources, and APIs. It defines a standardized communication protocol between an AI agent (the client) and a tool server (the provider), enabling agents to discover available tools, understand their capabilities, and invoke them with structured inputs and outputs.

In simpler terms: MCP is the language AI agents speak when they want to use your API. Just as HTTP standardized how web browsers talk to web servers, MCP standardizes how AI agents talk to tool providers. Without MCP, every agent-API integration requires custom code. With MCP, any compatible agent can connect to any MCP server using the same protocol.

MCP was publicly released by Anthropic in late 2024 and has since been adopted by a growing ecosystem of AI platforms, developer tools, and API providers. As of 2026, it is the most widely supported protocol for AI agent tool use.

Why MCP was created

Before MCP, connecting AI agents to external tools required custom integrations for every agent-API pair. Each AI platform had its own approach: OpenAI used function calling with JSON schemas, LangChain had its own tool abstraction, and other frameworks invented their own formats. This created an N-times-M integration problem — N agents times M tools, each requiring custom glue code.

MCP solves this by introducing a single protocol that both agents and tools can implement. An API provider builds one MCP server, and it works with every MCP-compatible agent. An agent framework implements MCP client support once, and it gains access to every MCP server in the ecosystem. This reduces the integration problem from N-times-M to N-plus-M.

The need for standardization became acute as AI agents moved from simple question-answering to autonomous task execution. Agents in 2026 need to search the web, manage calendars, query databases, send emails, deploy code, and interact with hundreds of other services. Without a standard protocol, this ecosystem cannot scale. MCP provides the foundation for scalable, interoperable AI agent tool use.

How MCP works

MCP follows a client-server architecture with three primary components:

MCP Host

The application that the user interacts with — for example, Claude Desktop, Cursor, or a custom AI assistant. The host manages the overall user experience and coordinates between the AI model and MCP clients.

MCP Client

A component within the host that maintains a connection to an MCP server. The client handles protocol communication: discovering available tools, sending tool invocations, and receiving results. Each MCP server connection is managed by its own client instance.

MCP Server

A lightweight program that exposes your API's capabilities as MCP tools. The server receives tool invocation requests from the client, translates them into API calls (REST, GraphQL, database queries, or any other backend), and returns structured results. One MCP server can expose multiple tools.
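Tool discovery is part of the protocol itself: when a client connects, it can ask the server what tools it offers. Here is a sketch of a tools/list exchange, using a hypothetical listTasks tool (the tool name and fields are illustrative; the message shape follows the MCP spec):

tools/list exchange (illustrative)
Client request:
{
  "method": "tools/list",
  "params": {}
}

Server response (abbreviated):
{
  "tools": [
    {
      "name": "listTasks",
      "description": "List tasks on a project board",
      "inputSchema": {
        "type": "object",
        "properties": {
          "boardId": { "type": "string" },
          "status": { "type": "string", "enum": ["open", "closed"] }
        },
        "required": ["boardId"]
      }
    }
  ]
}

The inputSchema is standard JSON Schema, which is what lets any MCP-compatible model understand how to call the tool without custom integration code.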

The communication flow works like this:

MCP communication flow
User: "What tasks are on the Engineering board?"

1. Host receives user message
2. Host sends message to AI model (e.g., Claude)
3. Model decides it needs the "listTasks" tool
4. MCP Client sends tool invocation to MCP Server:
   {
     "method": "tools/call",
     "params": {
       "name": "listTasks",
       "arguments": {
         "boardId": "eng-board-001",
         "status": "open"
       }
     }
   }
5. MCP Server translates to API call:
   GET https://api.projectboard.io/v2/boards/eng-board-001/tasks?status=open
6. MCP Server returns structured result to Client
7. Model formats result for the user
8. Host displays: "Here are 12 open tasks on the Engineering board..."

MCP supports two transport mechanisms: stdio (standard input/output, for local servers running as child processes) and an HTTP-based transport for remote servers. The original remote transport used Server-Sent Events (SSE); later revisions of the spec define Streamable HTTP, which supersedes it, though many clients still support SSE endpoints. The stdio transport is simpler and more common for local development tools, while the HTTP-based transports enable cloud-hosted MCP servers that any client can connect to over the network.

MCP server config example

To connect an MCP-compatible client (like Claude Desktop) to an MCP server, you use a configuration file that specifies which servers to connect to and how. Here is a realistic example:

claude_desktop_config.json
{
  "mcpServers": {
    "projectboard": {
      "command": "npx",
      "args": ["-y", "@projectboard/mcp-server"],
      "env": {
        "PROJECTBOARD_API_KEY": "pb_live_abc123..."
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "ghp_xxxxxxxxxxxx"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
      }
    }
  }
}

Each entry in mcpServers defines a server connection. The command and args fields specify how to start the server process (using stdio transport). The env field passes environment variables, such as API keys, to the server process; keep these values out of version control.

For remote servers using SSE transport, the config uses a URL instead:

Remote MCP server config
{
  "mcpServers": {
    "projectboard-remote": {
      "url": "https://mcp.projectboard.io/sse",
      "headers": {
        "Authorization": "Bearer pb_live_abc123..."
      }
    }
  }
}

Key MCP concepts

MCP defines four core primitives that servers can expose to clients:

Tools

Actions the agent can execute — the most commonly used primitive. Each tool has a name, description, and input schema. When an agent calls a tool, the MCP server executes the underlying logic (API call, database query, etc.) and returns the result. Tools are model-controlled: the AI decides when to use them.
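The flow diagram earlier shows the request side of a tools/call; the server replies with a structured result. A sketch of the result shape (per the MCP spec; the task data is illustrative):

tools/call result (illustrative)
{
  "content": [
    {
      "type": "text",
      "text": "{\"id\": \"task-123\", \"title\": \"Fix login bug\", \"status\": \"open\"}"
    }
  ],
  "isError": false
}

Setting isError to true signals a failed invocation to the model without crashing the session, so the agent can retry or report the problem.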

Resources

Read-only data sources the agent can access. Resources are identified by URIs and provide contextual information — like files, database records, or API responses — that the agent can read but not modify. Resources are typically application-controlled: the host decides when to load them into context.
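A sketch of a resources/read exchange (the URI scheme and payload are illustrative; the message shape follows the MCP spec):

resources/read exchange (illustrative)
Request:
{
  "method": "resources/read",
  "params": {
    "uri": "projectboard://boards/eng-board-001"
  }
}

Response:
{
  "contents": [
    {
      "uri": "projectboard://boards/eng-board-001",
      "mimeType": "application/json",
      "text": "{\"name\": \"Engineering\", \"taskCount\": 42}"
    }
  ]
}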

Prompts

Reusable prompt templates that the server provides to guide the AI model's behavior in specific scenarios. Prompts are user-controlled: the user selects which prompt template to activate. They are useful for standardizing how the agent interacts with a particular API or data source.
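A sketch of a prompts/get exchange, using a hypothetical "triage-board" template (the template name and text are illustrative):

prompts/get exchange (illustrative)
Request:
{
  "method": "prompts/get",
  "params": {
    "name": "triage-board",
    "arguments": { "boardId": "eng-board-001" }
  }
}

Response:
{
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Review the open tasks on board eng-board-001 and suggest priorities."
      }
    }
  ]
}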

Sampling

A mechanism that allows MCP servers to request the AI model to generate text. This enables agentic workflows where the server can ask the model to reason about intermediate results before proceeding. Sampling requires explicit user approval for security.
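Note the reversed direction here: the server sends this request to the client, which forwards it to the model after user approval. A sketch of a sampling/createMessage request (content is illustrative):

sampling/createMessage request (illustrative)
{
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Summarize these 12 open tasks in one sentence."
        }
      }
    ],
    "maxTokens": 200
  }
}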

MCP vs REST APIs

MCP does not replace REST APIs. It is a protocol layer that sits on top of your existing API, adding standardized discovery and invocation for AI agents.

A REST API defines endpoints, request/response formats, and authentication. It is designed for programmatic access by software developers who write code to call specific endpoints. MCP adds a layer that lets AI agents use those same endpoints without custom integration code.

Aspect             | REST API                   | MCP
Purpose            | Programmatic API access    | AI agent tool use
Consumer           | Developer-written code     | AI models/agents
Discovery          | Manual (read docs)         | Automatic (protocol-level)
Integration effort | Write client code per API  | Configure once, connect any server
Transport          | HTTP                       | stdio or SSE (HTTP-based)

For a deeper exploration of how MCP changes the API integration landscape, see The Future of APIs: From Developers to AI Agents.

MCP vs agent.json

agent.json and MCP serve different but complementary roles in the AI agent ecosystem:

agent.json describes capabilities. It is a static file that tells agents what your API can do, what actions are available, what inputs each action requires, and when to use each action. It is a specification format — a machine-readable description of your API's surface area.

MCP enables connection. It is a runtime protocol that defines how an agent communicates with a tool server to invoke actions, receive results, and access resources. It is the wire protocol for AI agent tool use.

In practice, these work together: an agent might discover your API through your agent.json file, then connect to it via an MCP server to actually use it. Elba generates both from a single source of truth — your API definition — ensuring they stay consistent.

Who supports MCP

MCP was created by Anthropic and is natively supported by Claude across all interfaces — Claude Desktop, the Claude API, and Claude-powered applications. Since its release, adoption has expanded significantly:

AI-powered IDEs

Cursor, Windsurf, and Cline all support MCP, allowing developers to connect their coding assistants to databases, APIs, and internal tools via MCP servers.

Agent frameworks

Popular frameworks like LangChain, CrewAI, and others have added or are adding MCP client support, enabling agents built on these frameworks to use MCP servers.

Community servers

A growing ecosystem of MCP servers provides access to services like GitHub, Slack, PostgreSQL, Notion, Linear, and hundreds of other tools. The official MCP GitHub organization maintains reference implementations.

How to create an MCP config for your API

There are two paths to making your API available via MCP:

Option 1: Build a custom MCP server

Use the official MCP SDK (available in TypeScript and Python) to build a server that exposes your API endpoints as MCP tools. This gives you full control over the tool definitions, input validation, and response formatting.

server.ts (TypeScript MCP server)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "projectboard",
  version: "1.0.0",
});

server.tool(
  "createTask",
  "Create a new task in a project board",
  {
    boardId: z.string().describe("ID of the board"),
    title: z.string().describe("Task title"),
    priority: z.enum(["low", "medium", "high", "urgent"]).optional(),
  },
  async ({ boardId, title, priority }) => {
    // Translate the tool call into a regular REST request
    const response = await fetch("https://api.projectboard.io/v2/tasks", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.PROJECTBOARD_API_KEY}`,
      },
      body: JSON.stringify({ boardId, title, priority }),
    });
    if (!response.ok) {
      // Surface API failures to the model as a tool error instead of throwing
      return {
        content: [{ type: "text", text: `API error: ${response.status}` }],
        isError: true,
      };
    }
    const task = await response.json();
    return { content: [{ type: "text", text: JSON.stringify(task) }] };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
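To use this server locally, register it in the client config the same way as the earlier examples. Assuming the compiled server lives at build/server.js (the path is illustrative):

claude_desktop_config.json (custom server entry)
{
  "mcpServers": {
    "projectboard": {
      "command": "node",
      "args": ["/path/to/build/server.js"],
      "env": {
        "PROJECTBOARD_API_KEY": "pb_live_abc123..."
      }
    }
  }
}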

Option 2: Generate with Elba

Import your OpenAPI spec or API definition into Elba, and it generates an MCP-compatible config automatically — along with agent.json and llms.txt. This approach requires no custom code and keeps all your agent-facing formats in sync. See the MCP documentation for details.

For a comprehensive overview of agent-native documentation approaches, see Best API Documentation for AI Agents.

Frequently asked questions

Is MCP open source?

Yes. MCP was created by Anthropic and released as an open specification. The protocol definition, reference implementations, and SDK libraries are all open source. Any organization can implement MCP servers or clients without licensing restrictions, and the community actively contributes to the ecosystem of MCP servers and tooling.

Does MCP replace REST APIs?

No. MCP is a protocol layer that sits on top of existing APIs — it does not replace them. Your REST, GraphQL, or RPC endpoints continue to work exactly as before. An MCP server acts as a bridge that translates between the MCP protocol and your existing API. Think of MCP as an adapter: it gives AI agents a standardized way to call your API without changing the API itself.

Which AI models and platforms support MCP?

MCP was created by Anthropic and is natively supported by Claude. It is also supported by a growing number of AI platforms and agent frameworks, including Cursor, Windsurf, Cline, and various open-source agent toolkits. Because MCP is an open standard, any platform can add support, and adoption is accelerating rapidly in the AI developer tools space.

Can I use MCP with my existing API?

Yes. You can create an MCP server that wraps your existing API endpoints as MCP tools. The MCP server handles the protocol communication with AI agents, then translates tool calls into standard HTTP requests to your API. Platforms like Elba can generate MCP server configs automatically from your OpenAPI spec or agent.json file.

Further reading

For a deeper technical walkthrough of MCP with additional examples, read our blog post MCP Explained: Model Context Protocol for API Integration. For integration details and setup guides, see the MCP documentation.

Related topics: What is agent.json? · What is llms.txt? · How to Make Your API Usable by AI Agents

Generate MCP configs with Elba
Import your API and get MCP server configs, agent.json, and llms.txt in minutes.
