
Model Context Protocol

Prereq: Build a ReAct Agent. MCP is the standardization layer on top of the agent loop.

When Claude Desktop loads ~/Library/Application Support/Claude/claude_desktop_config.json and finds {"mcpServers": {"postgres": {"command": "python", "args": ["./db_server.py"]}}}, it spawns that Python script as a subprocess, asks it over JSON-RPC for a list of tools, and registers everything that comes back as functions the model can call mid-conversation. Three lines of config, and Claude Desktop just gained the ability to query your Postgres database. Cursor reading the same config gets the same ability. So does VS Code, Zed, and your custom agent. Build the server once; every compliant client inherits the integration.

Pre-MCP, every LLM agent re-implemented its own tool-discovery handshake. Cursor had its tool spec, Claude Desktop had a different one, LangChain had a third, your in-house agent had a fourth. Tool calling was the primitive, but everyone had to ship N integrations for N clients. MCP collapses that to one open spec — three primitives (tools, resources, prompts), JSON-RPC over stdio or HTTP, official SDKs in Python, TypeScript, Go, and Rust. By 2026 there are hundreds of public MCP servers (Postgres, GitHub, Slack, Linear, every major filesystem) and every serious LLM client is a consumer. This lesson builds a working server in 30 lines and shows the protocol’s three primitives.

TL;DR

  • MCP (Model Context Protocol) is an open spec — released by Anthropic in late 2024, adopted broadly through 2025–2026 — for how an LLM client (Claude Desktop, Cursor, custom agent) discovers and calls tools / reads files / fetches prompts from an external server.
  • Three primitives: tools (functions the LLM can invoke), resources (files / URLs the LLM can read), prompts (parameterized templates the user can invoke).
  • One protocol, many transports: stdio (local subprocess), SSE / HTTP (network). Servers can be written in TypeScript, Python, Go, Rust — official SDKs in all of these.
  • The 2026 ecosystem: hundreds of public MCP servers (databases, APIs, file systems, GitHub, Slack, Linear). Claude Desktop, Cursor, VS Code, Zed, Continue.dev all consume MCP. Building an MCP server is the new “build an integration.”
  • Compared to ad-hoc tool definitions: MCP decouples the tool implementation from the client. Write once, ship everywhere.

Mental model

The client mediates; the server implements; the LLM uses. Same loop, standardized handshake.

Three primitives

Tools: callable functions the LLM may invoke. JSON-Schema input. Returns text or structured data.

Resources: read-only handles to data the LLM may fetch as context. Each has a URI (e.g., file:///path or db://table-name); listing returns IDs, reading returns content.

Prompts: pre-canned templates the user can invoke (e.g., a slash command in Claude Desktop). Server lists available prompts; user picks one; client renders + sends.
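On the wire, each primitive maps to a small set of JSON-RPC methods. A sketch of the request shapes (method names follow the MCP spec; the ids and parameter values are illustrative):

```python
import json

# Every MCP message is a JSON-RPC 2.0 request/response. Helper to build requests.
def rpc(id_, method, params=None):
    msg = {"jsonrpc": "2.0", "id": id_, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

list_tools = rpc(1, "tools/list")                       # discover tools
call_tool = rpc(2, "tools/call",                        # invoke one
               {"name": "get_weather", "arguments": {"city": "Tokyo"}})
read_resource = rpc(3, "resources/read",                # fetch a resource by URI
                    {"uri": "config://settings"})
get_prompt = rpc(4, "prompts/get",                      # render a prompt template
                 {"name": "code_review", "arguments": {"language": "python"}})

for msg in (list_tools, call_tool, read_resource, get_prompt):
    print(json.dumps(msg))
```

Each primitive also has a corresponding list method (`resources/list`, `prompts/list`) so clients can enumerate before they fetch.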

The simplest MCP server (Python)

# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    # Real impl: call a weather API
    return f"In {city}: 72°F, sunny"

@mcp.tool()
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # Note: eval is not a real sandbox even with builtins stripped; demo only.
    return str(eval(expression, {"__builtins__": None}, {}))

@mcp.resource("config://settings")
def get_settings() -> str:
    """Return application settings."""
    return '{"theme": "dark", "model": "claude-sonnet-4-5"}'

@mcp.prompt()
def code_review(language: str = "python") -> str:
    """Generate a code-review prompt for the given language."""
    return f"Please review the following {language} code for bugs and style:"

if __name__ == "__main__":
    mcp.run()

That’s a complete server. Two tools, one resource, one prompt. Run it:

python my_server.py

It listens on stdin/stdout for JSON-RPC requests from a client.
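Because the stdio framing is just one JSON-RPC message per line, you can inspect it directly. A sketch of the framing (the `initialize` request is the first message a client sends; the exact version string and fields here are illustrative):

```python
import json

# MCP's stdio transport frames each JSON-RPC message as one newline-terminated
# line of JSON, so a message itself must not contain raw newlines.
def encode_message(msg: dict) -> bytes:
    return (json.dumps(msg, separators=(",", ":")) + "\n").encode()

def decode_messages(buffer: bytes):
    """Yield each JSON-RPC message from a chunk of stdio output."""
    for line in buffer.splitlines():
        if line.strip():
            yield json.loads(line)

# The handshake opener: an initialize request (field values illustrative).
init = {
    "jsonrpc": "2.0", "id": 0, "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1"},
    },
}
print(encode_message(init))
```

In practice the SDK client handles this handshake and capability negotiation for you; the point is that there is nothing exotic underneath.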

Wiring to Claude Desktop

In ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/absolute/path/to/my_server.py"]
    }
  }
}

Restart Claude Desktop. The new tools / resources / prompts appear in the UI; Claude can now call get_weather etc. mid-conversation.
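If you register servers often, generating the entry programmatically avoids hand-editing mistakes. A small sketch with stdlib `json` (`add_server` is a hypothetical helper, not part of any SDK):

```python
import json

# Merge a new server entry into a claude_desktop_config.json-shaped dict.
# (The config file's location varies by OS; macOS path shown in the text.)
def add_server(config: dict, name: str, command: str, args: list) -> dict:
    servers = config.setdefault("mcpServers", {})
    servers[name] = {"command": command, "args": args}
    return config

cfg = add_server({}, "my-server", "python", ["/absolute/path/to/my_server.py"])
print(json.dumps(cfg, indent=2))
```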

Transports

  • stdio: server is a subprocess of the client. Cheap, secure (no network), local-only. Default for desktop integrations.
  • HTTP / SSE: server is a network service. For multi-client / hosted scenarios. Auth via OAuth or simple bearer tokens.
  • WebSocket: a full-duplex transport used by some IDE integrations; not one of the core spec’s transports (stdio and HTTP), but the protocol permits custom transports.

The protocol is the same; the transport is configurable.

A real production example: a Postgres MCP server

from mcp.server.fastmcp import FastMCP
import psycopg2

mcp = FastMCP("postgres")
conn = psycopg2.connect("postgresql://localhost/mydb")

@mcp.tool()
def query(sql: str) -> str:
    """Execute a read-only SQL query and return results."""
    if not sql.strip().lower().startswith("select"):
        return "Error: only SELECT queries allowed."
    cur = conn.cursor()
    cur.execute(sql)
    rows = cur.fetchall()
    cols = [d[0] for d in cur.description]
    return "\n".join([" | ".join(cols)] + [" | ".join(map(str, r)) for r in rows[:100]])

@mcp.resource("db://schema")
def schema() -> str:
    """Return the database schema."""
    cur = conn.cursor()
    cur.execute(
        "SELECT table_name, column_name, data_type "
        "FROM information_schema.columns WHERE table_schema='public'"
    )
    return "\n".join(f"{t}.{c} {dt}" for t, c, dt in cur.fetchall())

if __name__ == "__main__":
    mcp.run()

Now any MCP client (Claude Desktop, Cursor, your custom agent) can:

  • Read db://schema to know the database structure.
  • Call query(...) with safe SELECT statements.

Built once; works in every MCP client. This is the value proposition.
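One caveat worth seeing concretely: the `startswith("select")` guard above is thin. It rejects legitimate `WITH` queries and lets stacked statements like `SELECT 1; DROP TABLE users` through. A stricter sketch (pure validation, no database required; still heuristic — the robust fix is a read-only database role):

```python
import re

# Heuristic read-only gate: allow SELECT/WITH, forbid statement stacking
# and obvious mutating keywords. A read-only DB role is the real defense.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate)\b", re.I
)

def is_read_only(sql: str) -> bool:
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                                  # no stacked statements
        return False
    if not re.match(r"(select|with)\b", stripped, re.I):  # must start as a query
        return False
    return not FORBIDDEN.search(stripped)

print(is_read_only("SELECT * FROM users"))               # True
print(is_read_only("SELECT 1; DROP TABLE users"))        # False
```

Even this can misfire (a forbidden word inside a string literal gets rejected), which is the general lesson: validate defensively server-side, and prefer database-level privileges over string matching.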

Security considerations

MCP servers run with the privileges of the user who launches them. They can read files, hit APIs, modify state. Treat them like any other CLI tool: verify the source before installing, audit what they expose, isolate untrusted servers in containers.

Anthropic’s security guidance:

  • Scope tools narrowly. A read_file tool is fine; a run_shell tool is dangerous.
  • Distinguish read-only from mutating tools. UI clients should require confirmation for the latter.
  • Validate all inputs server-side. Don’t trust the LLM to send sensible arguments.
  • Log everything. Tool invocations are auditable events.
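Those four rules can be folded into a small dispatch wrapper. A sketch with hypothetical names (the real SDK does not provide this; it is server-author discipline):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-audit")

# Registry marks each tool read-only or mutating; mutating tools require
# an explicit confirmation callback before they run.
TOOLS = {}

def tool(name, mutates=False):
    def register(fn):
        TOOLS[name] = {"fn": fn, "mutates": mutates}
        return fn
    return register

@tool("read_file")
def read_file(path: str) -> str:
    return f"<contents of {path}>"          # stub

@tool("delete_file", mutates=True)
def delete_file(path: str) -> str:
    return f"deleted {path}"                # stub

def dispatch(name, args, confirm=lambda n, a: False):
    entry = TOOLS[name]
    log.info("tool=%s args=%r mutates=%s", name, args, entry["mutates"])  # audit log
    if entry["mutates"] and not confirm(name, args):
        return "Refused: mutating tool not confirmed."
    return entry["fn"](**args)

print(dispatch("read_file", {"path": "/tmp/x"}))
print(dispatch("delete_file", {"path": "/tmp/x"}))  # refused without confirmation
```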

Public MCP server ecosystem

By 2026 there are hundreds: GitHub, Slack, Linear, Notion, Asana, AWS, Cloudflare, Stripe, Postgres, SQLite, MongoDB, Redis, every major filesystem, browser-automation, image generation, calendar, email. The MCP servers directory (modelcontextprotocol.io) lists them.

For most use cases: don’t write your own; pick from the list. Write your own for proprietary internal systems.

Building MCP into your agent

# Pseudocode: an MCP-aware ReAct agent
from mcp.client import StdioClient

async with StdioClient(command="python", args=["./db_server.py"]) as client:
    # Discover tools
    tools = await client.list_tools()
    tools_spec = [t.to_anthropic_format() for t in tools]

    # Pass to your LLM
    response = anthropic_client.messages.create(
        model="claude-sonnet-4-5",
        tools=tools_spec,
        messages=[...],
    )

    # When the model calls a tool, dispatch via MCP
    for block in response.content:
        if block.type == "tool_use":
            result = await client.call_tool(block.name, block.input)
            # ... feed back to the model ...

Your agent becomes tool-provider-agnostic: point it at any set of MCP servers and it discovers and uses their tools automatically. This is how Claude Desktop, Cursor, and every emerging agent platform work in 2026.

Run it in your browser — toy MCP-shaped server

Simulate the MCP message handshake and tool dispatch in pure Python (no real protocol — just the shape).
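Since the interactive sandbox isn’t reproduced here, a self-contained stand-in showing the same shape (toy dicts, not real JSON-RPC):

```python
# Toy MCP-shaped exchange: client lists tools, then dispatches a call.
# No JSON-RPC framing or transport — just the message shapes.
SERVER_TOOLS = {
    "get_weather": lambda city: f"In {city}: 72°F, sunny",
}

def handle(request: dict) -> dict:
    """Server side: route a request dict to the right primitive."""
    if request["method"] == "tools/list":
        return {"tools": [{"name": n} for n in SERVER_TOOLS]}
    if request["method"] == "tools/call":
        fn = SERVER_TOOLS[request["params"]["name"]]
        return {"content": fn(**request["params"]["arguments"])}
    return {"error": "unknown method"}

# Client side: discover, then call.
tools = handle({"method": "tools/list"})
result = handle({"method": "tools/call",
                 "params": {"name": "get_weather",
                            "arguments": {"city": "Tokyo"}}})
print(tools)
print(result)
```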

The handshake is the protocol. Real MCP adds JSON-RPC framing, async, transports, capability negotiation — but the shape is what you simulated.

Quick check

Fill in the blank
The three primitive types an MCP server exposes:
Comma-separated triple.
Quick check
A team builds a custom internal tool integration for Claude Desktop, Cursor, and their own agent. Pre-MCP they had three implementations. The 2026 way:

Key takeaways

  1. MCP = open standard for tool / resource / prompt exposure to LLM clients.
  2. Three primitives (tools, resources, prompts), two transports (stdio, HTTP / SSE).
  3. Write a server once; every MCP client (Claude Desktop, Cursor, etc.) gets the integration for free.
  4. Hundreds of public MCP servers as of 2026. Default to using existing ones; write your own for proprietary systems.
  5. Security is your responsibility. Servers run with user privileges; scope narrowly, validate inputs, log everything.

Go deeper
