Model Context Protocol — Visualized

Open Protocol · Released Nov 2024

The USB-C Port for AI

MCP standardizes how AI models connect to external data sources, tools, and services — eliminating the NxM problem of custom integrations and enabling a universal, secure, bidirectional protocol.

Runtime tool discovery · JSON-RPC 2.0 · 1:1 client-server connections
[Architecture diagram] A host application (running an LLM such as Claude, GPT, or Gemini) embeds MCP Clients A, B, and C. Each client holds a 1:1 stateful connection, with protocol negotiation and capability management, to a matching MCP Server: Server A (GitHub, files, APIs: tools, resources, prompts), Server B (PostgreSQL, Slack: database context), Server C (custom APIs, search: remote services). Transports: STDIO and HTTP/SSE.

Protocol Diagrams

❌ Before MCP — The NxM Problem
⚠️ Each AI model requires custom integration code for every data source — N models × M sources = NxM unique connectors
⚠️ Inconsistent APIs, fragmented behavior, and unpredictable results across different model-source combinations
⚠️ Tightly coupled integrations mean changes to the model or data source require rewriting entire connection logic
⚠️ No standard security model — each integration implements its own authorization and permission scheme
⚠️ Adding a new AI model means re-integrating every data source from scratch
versus
✅ After MCP — Universal Protocol
One standardized protocol replaces all custom integrations — any MCP client connects to any MCP server automatically
Tool discovery built-in — AI models autonomously discover and integrate new capabilities at runtime
Bidirectional communication: clients call tools, servers can request LLM completions via sampling
Standardized security and permissions model — built-in consent layer before any external action
Open standard adopted by Anthropic, OpenAI, Google — growing ecosystem of servers and clients
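The integration-count claim above is simple arithmetic; a quick sketch (the example values of N and M are illustrative, not from the source):

```python
def connectors_before_mcp(n_models: int, m_sources: int) -> int:
    # Without a shared protocol: one custom connector per (model, source) pair.
    return n_models * m_sources

def connectors_after_mcp(n_models: int, m_sources: int) -> int:
    # With MCP: each model ships one client, each source one server.
    return n_models + m_sources

print(connectors_before_mcp(5, 20))  # 100 custom connectors
print(connectors_after_mcp(5, 20))   # 25 protocol implementations
```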
JSON-RPC packet flow
Request path: HOST (Claude Desktop) → CLIENT (MCP Client) → SERVER (MCP Server) → DATA (GitHub API)
→ Request: tools/call with params, initiating a tool invocation
← Response: result or error, a standardized JSON-RPC 2.0 response
⟳ Notification: server-push messages such as progress, logs, and resources/updated
→ tools/call Request
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "San Francisco", "units": "celsius" }
  }
}
← Response
{
  "jsonrpc": "2.0",
  "id": 42,
  "result": {
    "content": [{ "type": "text", "text": "18°C, partly cloudy" }],
    "isError": false
  }
}
initialize · tools/list · tools/call · resources/list · resources/read · prompts/list · sampling/createMessage
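The request/response pair above can be built and checked programmatically. A minimal sketch of the client-side envelope and id correlation (the transport itself is omitted; only the JSON-RPC 2.0 shape follows the messages shown):

```python
import itertools

_ids = itertools.count(1)

def make_request(method: str, params: dict) -> dict:
    # JSON-RPC 2.0 request envelope; the id correlates the eventual response.
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

def match_response(request: dict, response: dict) -> dict:
    # A valid response echoes the request id and carries "result" or "error".
    assert response.get("jsonrpc") == "2.0"
    assert response.get("id") == request["id"]
    if "error" in response:
        raise RuntimeError(response["error"].get("message", "unknown error"))
    return response["result"]

req = make_request("tools/call", {"name": "get_weather",
                                  "arguments": {"city": "San Francisco",
                                                "units": "celsius"}})
resp = {"jsonrpc": "2.0", "id": req["id"],
        "result": {"content": [{"type": "text", "text": "18°C, partly cloudy"}],
                   "isError": False}}
print(match_response(req, resp)["content"][0]["text"])  # 18°C, partly cloudy
```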
Architecture Breakdown

Core Components

Host Application — the user-facing app (e.g. Claude Desktop) that embeds the LLM and manages one MCP client per server connection
MCP Client — maintains a 1:1 stateful connection to a single server, handling protocol negotiation and capability management
MCP Server — exposes tools, resources, and prompts from a data source or service
Tools — executable functions the model can discover and invoke at runtime
Resources — read-only contextual data such as files, database records, and documentation
Sampling — lets servers request LLM completions back from the client
Communication Layer

Transport Mechanisms

STDIO
Standard Input/Output — for local integrations where the MCP server runs in the same environment as the client process
Zero network overhead
Server runs as a child process, communicating over stdin/stdout pipes
Simple, reliable, synchronous
Default for Claude Desktop
⚠ Local only — same machine
HTTP + SSE
Server-Sent Events over HTTP — enables remote connections with persistent server-to-client streaming alongside standard HTTP requests
Remote connections over the internet
Server-push streaming via SSE
Stateless HTTP for client→server
Scales across infrastructure
⚠ Requires auth for production
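The STDIO transport in miniature: the client launches the server as a child process, writes newline-delimited JSON-RPC to its stdin, and reads responses from its stdout. A toy sketch (the inline "server" and its initialize result are invented stand-ins; real servers are separate executables returning their negotiated capabilities):

```python
import json, subprocess, sys

# Toy stand-in for an MCP server: read one newline-delimited JSON-RPC
# request on stdin, answer on stdout.
SERVER = r"""
import json, sys
req = json.loads(sys.stdin.readline())
sys.stdout.write(json.dumps({
    "jsonrpc": "2.0", "id": req["id"],
    "result": {"serverInfo": {"name": "toy-server", "version": "0.1"}},
}) + "\n")
sys.stdout.flush()
"""

proc = subprocess.Popen([sys.executable, "-c", SERVER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
request = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.wait()
print(response["result"]["serverInfo"]["name"])  # toy-server
```

The pipe pair gives the "zero network overhead" property listed above: no sockets, no ports, just the operating system shuttling bytes between two local processes.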
🔧
Capability
Tool Discovery
AI models autonomously discover available tools at runtime via tools/list — no hardcoded knowledge of what's available
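A hypothetical tools/list exchange (the tool name and schema are invented for illustration; the envelope follows the JSON-RPC examples earlier on this page):

```python
# What a client sends to discover tools at runtime...
request = {"jsonrpc": "2.0", "id": 7, "method": "tools/list"}

# ...and the kind of answer a server might return: each tool advertises a
# name, a description, and a JSON Schema for its arguments.
response = {
    "jsonrpc": "2.0", "id": 7,
    "result": {"tools": [{
        "name": "get_weather",
        "description": "Current weather for a city",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"},
                           "units": {"type": "string"}},
            "required": ["city"],
        },
    }]},
}

for tool in response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```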
📄
Capability
Resource Access
Read-only contextual data exposed by servers — files, database records, API content, documentation
💬
Capability
Prompt Templates
Reusable, parameterized prompt patterns that servers expose for consistent LLM interactions
🔄
Advanced
Sampling
Servers request LLM completions from clients — enabling recursive AI-powered server logic
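Sampling reverses the usual direction: the server sends the request and the client runs the completion. A sketch of such a payload (the prompt text is invented, and the parameter shape is an assumption based on the sampling/createMessage method listed above; check the current MCP specification before relying on it):

```python
# Server -> client request: "please run this prompt through your LLM".
sampling_request = {
    "jsonrpc": "2.0", "id": 9,
    "method": "sampling/createMessage",
    "params": {
        "messages": [{"role": "user",
                      "content": {"type": "text",
                                  "text": "Summarize the last three commits."}}],
        "maxTokens": 200,
    },
}
# The host can surface this request to the user for consent before any
# completion runs, matching the protocol's built-in permission model.
print(sampling_request["method"])  # sampling/createMessage
```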