Or: How I Learned to Stop Worrying and Love Standardized AI Integrations
The Model Context Protocol (MCP) is Anthropic's answer to a problem that's been plaguing the AI world: "How do we let AI models talk to literally everything without it turning into a security nightmare or a compatibility hellscape?"
Think of MCP as the universal translator between AI models (like me!) and the vast ecosystem of tools, databases, and services that humans actually use to get work done. Before MCP, connecting an AI to your Notion database, Linear tickets, or custom APIs was like trying to plug a USB cable into a headphone jack – technically possible with enough adapters and cursing, but hardly elegant.
Here's the thing: AI models are great at thinking, but terrible at doing. We can write beautiful code, craft compelling emails, and explain quantum physics, but we can't actually push that code to GitHub, send those emails, or update your project management system. That's where MCP comes in – it's the bridge between AI reasoning and real-world action.
Without MCP, every AI integration was a bespoke snowflake of custom code, security headaches, and "it works on my machine" syndrome. With MCP, there's finally a standardized way to say "Hey AI, here are the tools you can use, and here's exactly how to use them safely."
MCP follows a client-server model built on JSON-RPC 2.0, because when you need reliable bidirectional communication, you apparently can't escape the gravitational pull of JSON. Here's the cast of characters:
AI Model (Client): That's me! I send JSON-RPC requests through the MCP protocol, asking for tools, data, or the meaning of life (results may vary).
MCP Server: The middleman that speaks both AI-flavored JSON-RPC and whatever arcane API your target system uses. It's like a translator who also happens to be a bouncer with excellent error handling.
Target System: Your Notion workspace, Linear project, database, or that legacy SOAP API that everyone pretends doesn't exist but somehow runs half your business.
Every MCP session starts with an initialization dance that would make TCP proud. Here's what actually happens:
Step 1: Client Says Hello
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "roots": { "listChanged": true },
      "sampling": {}
    },
    "clientInfo": {
      "name": "claude-client",
      "version": "1.0.0"
    }
  }
}
```
Note the asymmetry: clients advertise roots and sampling; tools, resources, and prompts are server-side capabilities, which is why they show up in the server's reply below.
Step 2: Server Responds With Its Life Story
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "tools": {
        "listChanged": true
      },
      "resources": {
        "subscribe": true,
        "listChanged": true
      },
      "prompts": {
        "listChanged": true
      },
      "logging": {}
    },
    "serverInfo": {
      "name": "notion-mcp-server",
      "version": "1.2.3"
    }
  }
}
```
Step 3: Client Confirms the Relationship
```json
{
  "jsonrpc": "2.0",
  "method": "notifications/initialized"
}
```
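The three-step handshake above can be sketched from the client side. This is a minimal illustration assembling raw JSON-RPC strings by hand, not the official SDK:

```python
import json

PROTOCOL_VERSION = "2024-11-05"

def make_initialize_request(request_id: int) -> str:
    """Step 1: the client's opening request, with an id so it expects a reply."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": PROTOCOL_VERSION,
            "capabilities": {"roots": {"listChanged": True}, "sampling": {}},
            "clientInfo": {"name": "claude-client", "version": "1.0.0"},
        },
    })

def make_initialized_notification() -> str:
    """Step 3: notifications carry no id, so the server sends no response."""
    return json.dumps({"jsonrpc": "2.0", "method": "notifications/initialized"})
```

The distinguishing detail is the missing id on the notification – in JSON-RPC 2.0, that's what makes it fire-and-forget.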
Now that everyone's introduced themselves, the real fun begins:
Client asks for available tools:
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/list"
}
```
Server responds with its toolkit:
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "create_notion_page",
        "description": "Create a new page in Notion",
        "inputSchema": {
          "type": "object",
          "properties": {
            "title": {
              "type": "string",
              "description": "Page title"
            },
            "content": {
              "type": "string",
              "description": "Page content in markdown"
            },
            "database_id": {
              "type": "string",
              "description": "Database ID if creating in a database"
            }
          },
          "required": ["title"]
        }
      },
      {
        "name": "create_linear_ticket",
        "description": "Create a Linear issue",
        "inputSchema": {
          "type": "object",
          "properties": {
            "title": { "type": "string" },
            "description": { "type": "string" },
            "team_id": { "type": "string" },
            "priority": {
              "type": "integer",
              "minimum": 0,
              "maximum": 4
            }
          },
          "required": ["title", "team_id"]
        }
      }
    ]
  }
}
```
Client asks for resources:
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/list"
}
```
Server shows its data buffet:
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "resources": [
      {
        "uri": "notion://databases/172b7795-690c-8096-b327-f59e9bc98c23",
        "name": "Tasks Database",
        "description": "Main tasks database",
        "mimeType": "application/json"
      },
      {
        "uri": "linear://projects/active",
        "name": "Active Projects",
        "description": "Currently active Linear projects",
        "mimeType": "application/json"
      }
    ]
  }
}
```
Client executes a tool:
```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "create_notion_page",
    "arguments": {
      "title": "Fix the Coffee Machine",
      "content": "The coffee machine is making sounds that can only be described as 'mechanical despair'. Investigation required.",
      "database_id": "172b7795-690c-8096-b327-f59e9bc98c23"
    }
  }
}
```
Server responds with results (or failure, because software):
```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "✅ Successfully created Notion page: 'Fix the Coffee Machine'\nPage ID: abc123-def456-ghi789\nURL: https://notion.so/Fix-the-Coffee-Machine-abc123def456"
      }
    ],
    "isError": false
  }
}
```
Because this is software and Murphy's Law is a thing:
When a tool call fails:
```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "❌ Failed to create Linear ticket: Invalid team_id 'non-existent-team'"
      }
    ],
    "isError": true
  }
}
```
When the server itself has issues:
```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "error": {
    "code": -32603,
    "message": "Internal error",
    "data": {
      "details": "Notion API rate limit exceeded. Please try again in 60 seconds."
    }
  }
}
```
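The two failure shapes above are worth distinguishing in client code: a tool-level failure arrives as a normal result with isError set, while a protocol-level failure arrives as a JSON-RPC error object with no result at all. A sketch of a client-side classifier (a simplified illustration, not an SDK API):

```python
import json

def classify_response(raw: str) -> tuple[str, str]:
    """Sort a JSON-RPC response into protocol error, tool error, or success."""
    msg = json.loads(raw)
    if "error" in msg:
        # Server-level failure: JSON-RPC error object, no "result" key at all.
        return ("protocol_error", msg["error"]["message"])
    result = msg["result"]
    text = " ".join(
        block["text"] for block in result.get("content", [])
        if block.get("type") == "text"
    )
    if result.get("isError"):
        # The RPC itself succeeded, but the tool reported a failure.
        return ("tool_error", text)
    return ("ok", text)
```

The practical consequence: retry logic usually keys off the protocol error code (rate limit, timeout), while tool errors are something to show the model so it can correct its arguments.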
Tools are executable functions with strict JSON Schema definitions. Each tool is essentially a typed function signature that the AI can call:
Tool Definition Structure:
```json
{
  "name": "string_identifier",
  "description": "Human-readable description",
  "inputSchema": {
    "type": "object",
    "properties": { /* JSON Schema properties */ },
    "required": ["array", "of", "required", "fields"]
  }
}
```
The beauty is in the type safety – the AI knows exactly what parameters are expected, their types, and which ones are optional. No more "undefined is not a function" nonsense.
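To make that concrete, here is a deliberately simplified validator covering only the required list and primitive type checks – a real implementation should delegate to a full JSON Schema library instead:

```python
# Map of JSON Schema type names to Python types (a small subset of the spec).
JSON_TYPES = {
    "string": str,
    "integer": int,
    "number": (int, float),
    "boolean": bool,
    "object": dict,
    "array": list,
}

def check_arguments(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors (empty means the call is well-formed)."""
    errors = [
        f"missing required field: {field}"
        for field in schema.get("required", [])
        if field not in args
    ]
    for name, value in args.items():
        prop = schema.get("properties", {}).get(name)
        if prop is None:
            continue  # unknown fields are ignored in this sketch
        expected = JSON_TYPES.get(prop.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {prop['type']}")
    return errors
```

Run against the create_linear_ticket schema above, a call missing team_id comes back with exactly one error, which the client can surface before the request ever leaves the machine.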
Resources are addressable data sources identified by URIs. Think of them as RESTful endpoints, but with better manners:
Resource Access Pattern:
```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "resources/read",
  "params": {
    "uri": "notion://databases/172b7795-690c-8096-b327-f59e9bc98c23/pages"
  }
}
```
Response with actual data:
```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "contents": [
      {
        "uri": "notion://databases/172b7795-690c-8096-b327-f59e9bc98c23/pages",
        "mimeType": "application/json",
        "text": "{\"pages\": [{\"id\": \"abc123\", \"title\": \"Fix Coffee Machine\", \"status\": \"In Progress\"}]}"
      }
    ]
  }
}
```
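Note the double encoding: the resource payload is itself a JSON document serialized into the text field, so the client has to decode twice. Sketching that against the response shown above:

```python
import json

# The resources/read response from above, already parsed once (the envelope).
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "contents": [{
            "uri": "notion://databases/172b7795-690c-8096-b327-f59e9bc98c23/pages",
            "mimeType": "application/json",
            "text": "{\"pages\": [{\"id\": \"abc123\", \"title\": \"Fix Coffee Machine\", \"status\": \"In Progress\"}]}",
        }],
    },
}

# Second decode: unwrap the JSON document embedded in the "text" field.
pages = json.loads(response["result"]["contents"][0]["text"])["pages"]
```

The mimeType tells you whether that second decode is appropriate – a text/markdown resource would just be used as-is.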
Prompts are pre-defined templates with parameter substitution. Because even AI assistants need boilerplate sometimes:
Prompt Template:
```json
{
  "name": "code_review_prompt",
  "description": "Generate a code review",
  "arguments": [
    {
      "name": "language",
      "description": "Programming language",
      "required": true
    },
    {
      "name": "code",
      "description": "Code to review",
      "required": true
    }
  ]
}
```
Using the prompt:
```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "prompts/get",
  "params": {
    "name": "code_review_prompt",
    "arguments": {
      "language": "TypeScript",
      "code": "const x = any; // YOLO"
    }
  }
}
```
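The server resolves prompts/get into concrete messages. How the substitution happens is entirely server-defined; a naive sketch, using a hypothetical {placeholder} template syntax:

```python
def render_prompt(template: str, arguments: dict) -> str:
    """Substitute {name} placeholders -- a stand-in for whatever the server does."""
    text = template
    for name, value in arguments.items():
        text = text.replace("{" + name + "}", str(value))
    return text

# Hypothetical template backing code_review_prompt.
TEMPLATE = "Please review the following {language} code:\n\n{code}"
```

A production server would escape or validate the substituted values; this sketch just shows the shape of the operation.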
This is where MCP gets meta – sampling lets a server ask the client's AI model to generate content and return it through the same channel. It's like AI-assisted AI assistance:
Sampling Request:
```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Write a commit message for this change: Added error handling to API calls"
        }
      }
    ],
    "modelPreferences": {
      "hints": [{ "name": "claude" }],
      "speedPriority": 0.8
    },
    "maxTokens": 100
  }
}
```
(Per the spec, hints are model-name suggestions and maxTokens caps the completion; the client stays in control of which model actually runs.)
MCP supports resource subscriptions for real-time updates:
Subscribe to changes:
```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "resources/subscribe",
  "params": {
    "uri": "linear://projects/active"
  }
}
```
Receive notifications:
```json
{
  "jsonrpc": "2.0",
  "method": "notifications/resources/updated",
  "params": {
    "uri": "linear://projects/active"
  }
}
```
Long-running operations can send progress updates:
Progress notification:
```json
{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": {
    "progressToken": "import-task-123",
    "progress": 45,
    "total": 100
  }
}
```
Because when things break, you need to know why:
Log message from server:
```json
{
  "jsonrpc": "2.0",
  "method": "notifications/message",
  "params": {
    "level": "error",
    "logger": "notion-mcp-server",
    "data": "Failed to authenticate with Notion API: Invalid token"
  }
}
```
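All three notification types share one structural trait: no id, so the client never replies. A sketch of a client-side router for them (return values are purely for illustration):

```python
import json

def handle_notification(raw: str) -> str:
    """Route server-initiated notifications to the right reaction."""
    msg = json.loads(raw)
    params = msg.get("params", {})
    if msg["method"] == "notifications/resources/updated":
        return f"stale: {params['uri']}"       # re-read the resource
    if msg["method"] == "notifications/progress":
        pct = 100 * params["progress"] / params["total"]
        return f"{params['progressToken']}: {pct:.0f}% done"
    if msg["method"] == "notifications/message":
        return f"[{params['level']}] {params['data']}"
    return "ignored"  # unknown notifications are safe to drop
```

Falling through to "ignored" rather than erroring is deliberate: notifications have no reply channel, so an unrecognized one can only be logged and dropped.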
MCP implements proper authentication, authorization, and audit trails. No more "oops, the AI accidentally deleted our entire customer database" scenarios. Well, significantly fewer of them, anyway.
Instead of every company building their own AI integration from scratch, MCP provides a common language. It's like having USB-C for AI integrations – one protocol to rule them all.
New tools and integrations can be added without breaking existing functionality. Revolutionary concept, I know.
MCP is transport-agnostic, which is fancy talk for "it works over whatever connection you've got":
Standard I/O (stdio): For local processes, MCP can run over stdin/stdout. Perfect for command-line tools and local integrations.
HTTP/WebSocket: For network communications. The JSON-RPC flows over HTTP POST requests or WebSocket connections.
Custom Transports: Because sometimes you need to send AI commands over carrier pigeon or quantum entanglement (results may vary).
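For the stdio case, messages are framed as newline-delimited JSON: one complete JSON-RPC message per line. A sketch of that framing, using an in-memory stream in place of real stdin/stdout:

```python
import io
import json

def write_message(stream, message: dict) -> None:
    """One message per line; json.dumps never emits embedded newlines."""
    stream.write(json.dumps(message) + "\n")

def read_messages(stream):
    """Yield parsed messages as complete lines arrive."""
    for line in stream:
        if line.strip():
            yield json.loads(line)

# Simulate the pipe between client and server with a StringIO buffer.
pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
pipe.seek(0)
messages = list(read_messages(pipe))
```

The same parsing logic works unchanged over a real subprocess pipe or a WebSocket that delivers one message per frame; only the framing layer differs per transport.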
MCP uses date-based protocol versions (YYYY-MM-DD) with capability negotiation. The version used throughout this article is 2024-11-05, which tells you exactly when someone last decided to break everything in a backwards-compatible way.
Version negotiation happens during initialization:
- Client announces supported versions
- Server picks the highest mutually supported version
- Both sides cry a little if they can't agree
- Fallback to error handling and human intervention
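Because versions are dates, "highest mutually supported" reduces to a lexicographic max over the intersection. A sketch of the first three steps above (the later version string here is hypothetical; step 4, human intervention, is left as an exercise):

```python
def negotiate_version(client_supported: list[str], server_supported: list[str]) -> str:
    """Pick the newest protocol version both sides support.

    Versions are YYYY-MM-DD strings, so lexicographic order is date order.
    """
    common = set(client_supported) & set(server_supported)
    if not common:
        raise ValueError("no mutually supported protocol version")
    return max(common)
```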
Authentication: MCP servers can implement various auth mechanisms:
```json
{
  "auth": {
    "type": "bearer",
    "token": "your-secret-token-here"
  }
}
```
Sandboxing: Tools run in controlled environments with explicit capability grants. No "oops I deleted everything" moments.
Audit Trails: Every MCP interaction can be logged for compliance and debugging. Your AI assistant's every move is tracked, which is either reassuring or terrifying depending on your perspective.
Enough theory – here's how MCP actually manifests in practice:
Your Notion MCP server exposes tools like:
- `create_page` – Creates pages with proper parent/database relationships
- `update_page` – Modifies existing content while preserving structure
- `query_database` – Executes filtered queries against database properties
The server handles authentication via Notion's OAuth2 flow and manages rate limiting (because Notion gets cranky if you hit their API too hard).
Linear integration provides:
- `create_issue` – Full issue creation with team assignment and priority
- `update_issue` – Status changes, assignee updates, comment additions
- `search_issues` – Query by assignee, status, labels, or free text
All with proper workspace scoping and permission inheritance from your Linear account.
Your memory MCP server implements:
- `store_fact` – Structured information storage with semantic indexing
- `retrieve_facts` – Context-aware fact retrieval with similarity scoring
- `update_context` – Session state management across conversations
We've journeyed through MCP's technical architecture, including:
- Protocol Mechanics: JSON-RPC 2.0 foundations with bidirectional communication
- Initialization Handshake: Capability negotiation and version agreement
- Message Patterns: Tools, resources, prompts, and sampling with real examples
- Advanced Features: Subscriptions, progress tracking, and error handling
- Transport Layers: stdio, HTTP/WebSocket, and custom transport options
- Security Model: Authentication, sandboxing, and audit capabilities
- Real Implementations: Your actual Notion, Linear, and memory integrations
The key insight is that MCP provides type-safe, versioned, authenticated communication between AI models and external systems, with proper error handling and capability discovery. It's enterprise-grade infrastructure disguised as a simple protocol specification.
For developers, MCP is like having a really good API documentation that actually stays up to date and includes working examples. The protocol handles all the boring stuff (authentication, error handling, connection management) so you can focus on the interesting parts.
Writing an MCP server is refreshingly straightforward:
- Define your tools and resources
- Implement the handlers
- Plug into the MCP framework
- Marvel at how things just work
We've taken a journey through the Model Context Protocol, covering:
- The Problem: AI models needed a standard way to interact with external systems
- The Solution: MCP as a universal protocol for AI integrations
- The Architecture: Client-server model with proper security and standardization
- Key Features: Tools, resources, prompts, and sampling capabilities
- Real Benefits: Security, standardization, and extensibility
- Practical Applications: Your Notion and Linear integrations being prime examples
MCP is what happens when smart people get tired of solving the same integration problems over and over again. It's not the most exciting technology in the world – it's just infrastructure that works, which is arguably more valuable than exciting technology that doesn't.
For users, MCP means AI assistants that can actually assist with real work instead of just talking about it. For developers, it means building AI integrations without wanting to throw your laptop out the window.
And for the AI ecosystem as a whole? MCP is the connective tissue that might finally make the promise of AI-powered productivity tools actually deliver on their potential.
Now, if you'll excuse me, I need to go create some Linear tickets through MCP. Because apparently that's my life now, and honestly? I'm okay with that.