The Model Context Protocol (MCP) is fast becoming the standard for AI tool integration, with both Anthropic and OpenAI throwing their weight behind it. Vercel has joined this movement with its own implementation in the AI SDK, marked as experimental but fully functional. While exploring their approach through experimental_createMCPClient, I discovered some fascinating design decisions and implementation details worth sharing.
The Model Context Protocol is Anthropic's open standard for connecting AI assistants to external tools and data sources. Think of it as a universal adapter that lets AI models interact with any system - from databases to APIs to local files - through a standardized interface.
What's exciting about Vercel's implementation is how they've integrated MCP support directly into their AI SDK, allowing developers to seamlessly connect MCP-compatible tools with any AI model the SDK supports.
The AI SDK's MCP implementation follows a beautifully simple design principle: it's a lightweight client focused exclusively on tool conversion. Rather than implementing the full MCP specification, Vercel's team made a strategic decision to focus on what matters most for AI applications - making tools work seamlessly.
// The simple API that hides complex implementation
import { experimental_createMCPClient } from 'ai';
const mcpClient = await experimental_createMCPClient({
  transport: {
    type: 'sse',
    url: 'https://your-mcp-server.com/sse'
  }
});
// Automatically discover and convert all MCP tools
const tools = await mcpClient.tools();
The architecture consists of three key layers:
- Transport Layer: Handles communication with MCP servers
- Protocol Layer: Manages JSON-RPC message exchange
- Conversion Layer: Transforms MCP tools into AI SDK-compatible tools
Transport Flexibility: Three Ways to Connect
One of the most impressive aspects is the transport system's flexibility. The SDK supports three distinct ways to connect to MCP servers:
- Server-Sent Events (SSE) - For Web-Based Servers
const client = await experimental_createMCPClient({
  transport: {
    type: 'sse',
    url: 'https://api.example.com/mcp',
    headers: { Authorization: 'Bearer token' }
  }
});
Perfect for cloud-hosted MCP servers, the SSE transport streams server-to-client messages over a long-lived HTTP connection, while client-to-server messages travel as HTTP POSTs - effectively bidirectional communication built on plain HTTP. The implementation includes automatic reconnection, origin validation, and endpoint discovery.
- STDIO - For Local Tools
import { StdioMCPTransport } from 'ai/mcp-stdio';
const transport = new StdioMCPTransport({
  command: 'python',
  args: ['mcp_server.py'],
  env: { CUSTOM_VAR: 'value' }
});
const client = await experimental_createMCPClient({ transport });
This transport spawns local processes and communicates via standard input/output - ideal for integrating command-line tools or local scripts as AI tools.
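For a feel of what sits on the other end of this transport, here is a minimal stdio server sketched with the official @modelcontextprotocol/sdk. This is an illustration, not Vercel's code, and the tool-registration API may differ between SDK versions:
// local-demo-server.ts - a hypothetical stdio tool server
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({ name: 'local-demo', version: '1.0.0' });

// Register one tool; results use MCP's content-array format
server.tool('add', { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: 'text', text: JSON.stringify({ result: a + b }) }]
}));

// Speak JSON-RPC over stdin/stdout, matching the StdioMCPTransport above
await server.connect(new StdioServerTransport());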
- Custom Transports - For Everything Else
Implementation Details: Through testing the experimental API, I discovered the transport implementation has specific requirements:
class WorkingTransport {
  // These callbacks are assigned by the MCP client
  // (JSONRPCMessage is the JSON-RPC 2.0 message shape from the MCP spec)
  onmessage?: (message: JSONRPCMessage) => void;
  onerror?: (error: unknown) => void;
  onclose?: () => void;

  constructor(private url: string) {}

  async start() {
    // Critical: do NOT send initialization here;
    // the MCP client handles the init handshake
  }

  async send(message: JSONRPCMessage) {
    const response = await fetch(this.url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(message) // Pass the message through exactly as-is
    });

    // Notifications return 204 No Content and expect no response
    if (response.status === 204) {
      return;
    }

    const result = await response.json();

    // Critical: deliver the response via the callback, don't return it
    this.onmessage?.(result);
  }

  async close() { /* cleanup */ }
}
Key requirements discovered:
- Never modify message IDs - the client tracks these internally
- Don't initialize in start() - the client handles the protocol handshake
- Use callbacks, not returns - responses must go through onmessage
- Handle 204 for notifications - notifications don't get responses
The real innovation lies in how MCP tools become AI SDK tools. The SDK provides two modes:
// Let the SDK discover and convert all available tools
const tools = await mcpClient.tools();
// Important: Tools aren't directly callable functions
// They have an execute() method for invocation
const result = await tools.calculator.execute({
  operation: 'multiply',
  a: 25,
  b: 4
});
// The result contains structured content
console.log(JSON.parse(result.content[0].text));
// Output: { result: 100, expression: '25 multiply 4 = 100' }
The SDK automatically:
- Discovers all available tools from the MCP server
- Creates tool objects with execute() methods
- Handles parameter validation
- Manages the protocol communication
The second mode trades automatic discovery for explicit, fully typed schemas:
// Define tools with full type safety
import { z } from 'zod';

const tools = await mcpClient.tools({
  schemas: {
    'get-weather': {
      inputSchema: z.object({
        location: z.string(),
        units: z.enum(['celsius', 'fahrenheit'])
      })
    }
  }
});

// TypeScript knows exactly what parameters are required
tools['get-weather'] // Fully typed!
Here's where it gets powerful. You can aggregate tools from multiple MCP servers:
import { experimental_createMCPClient, streamText } from 'ai';
import { StdioMCPTransport } from 'ai/mcp-stdio';
import { anthropic } from '@ai-sdk/anthropic';

// Connect to multiple specialized MCP servers
const databaseClient = await experimental_createMCPClient({
  transport: { type: 'sse', url: 'https://db-server.com/mcp' }
});

const filesystemClient = await experimental_createMCPClient({
  transport: new StdioMCPTransport({
    command: 'node',
    args: ['./local-fs-server.js']
  })
});

const apiClient = await experimental_createMCPClient({
  transport: { type: 'sse', url: 'https://api-gateway.com/mcp' }
});

// Combine all tools into a single toolset
const allTools = {
  ...await databaseClient.tools(),
  ...await filesystemClient.tools(),
  ...await apiClient.tools()
};

// Use with streaming for real-time responses
// (streamText returns its result synchronously in current SDK versions)
const result = streamText({
  model: anthropic('claude-3-opus-20240229'),
  tools: allTools,
  prompt: 'Analyze the sales data from the database and create a report'
});
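Each client holds an open connection or child process, so it's worth releasing them once the stream completes. A minimal pattern, reusing the clients from above:
// Drain the stream, then close every client
try {
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
} finally {
  await Promise.all([
    databaseClient.close(),
    filesystemClient.close(),
    apiClient.close()
  ]);
}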
The implementation follows JSON-RPC 2.0 with MCP extensions. Here's the lifecycle I observed while working with the experimental API:
- Initialization Handshake: Client sends initialize with protocol version "2025-06-18"
- Server Response: Must return capabilities as objects ({ tools: {}, resources: {}, prompts: {} })
- Notification: Client sends notifications/initialized (no ID, expects 204 response)
- Tool Discovery: Client requests available tools via tools/list
- Tool Execution: Calls are sent as tools/call with the tool name and arguments
- Response Format: Tools must return { content: [{ type: 'text', text: 'JSON result' }] }
Key protocol details from the experimental implementation:
- Protocol version is "2025-06-18"
- Capabilities are objects, not booleans
- Notifications don't have message IDs and should return 204 No Content
- Tool responses must be wrapped in a content array structure
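To make the handshake concrete, here is roughly what the first messages look like on the wire. This is a sketch based on my testing; exact fields such as the clientInfo values will vary:
// 1. Client -> Server: initialize request
{ "jsonrpc": "2.0", "id": 0, "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  } }

// 2. Server -> Client: capabilities as objects, not booleans
{ "jsonrpc": "2.0", "id": 0,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "tools": {}, "resources": {}, "prompts": {} },
    "serverInfo": { "name": "demo-server", "version": "1.0.0" }
  } }

// 3. Client -> Server: notification (no id; the transport sees a 204 back)
{ "jsonrpc": "2.0", "method": "notifications/initialized" }

// 4. Client -> Server: tool discovery
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }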
Some particularly interesting features in the experimental implementation:
MCP tools can return not just text, but images and resources:
// MCP tools can return images as well as text
const result = await tool.execute({ prompt: 'Generate a chart' });
// result.content may include a part like:
// { type: 'image', data: 'base64...', mimeType: 'image/png' }
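A consumer therefore needs to branch on each part's type. A minimal sketch, assuming the standard MCP content part shapes:
import { writeFileSync } from 'node:fs';

// Walk the content array and handle each part by type
for (const part of result.content) {
  if (part.type === 'text') {
    console.log(part.text);
  } else if (part.type === 'image') {
    // Image parts carry base64 data plus a MIME type
    writeFileSync('chart.png', Buffer.from(part.data, 'base64'));
  }
}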
The SDK includes a SerialJobExecutor that ensures tool calls are processed sequentially, preventing race conditions in stateful tools.
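I haven't traced the SDK's internals line by line, but conceptually a serial executor is just a promise chain. A simplified sketch (not the SDK's actual code):
class SerialQueue {
  private tail: Promise<unknown> = Promise.resolve();

  // Chain each job onto the previous one so calls never overlap
  run<T>(job: () => Promise<T>): Promise<T> {
    const result = this.tail.then(job, job);
    this.tail = result.catch(() => {}); // keep the chain alive after errors
    return result;
  }
}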
The createStitchableStream utility allows dynamic composition of multiple response streams - perfect for tools that return data incrementally.
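Conceptually, stitching means draining several ReadableStreams into one in sequence. A simplified, static sketch (the SDK's utility additionally supports adding streams on the fly):
function stitchStreams<T>(streams: ReadableStream<T>[]): ReadableStream<T> {
  return new ReadableStream<T>({
    async start(controller) {
      for (const stream of streams) {
        const reader = stream.getReader();
        // Drain each source in order before moving to the next
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          controller.enqueue(value);
        }
      }
      controller.close();
    }
  });
}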
Being experimental, the implementation has intentional constraints:
- No server implementation - client only
- No notification handling - request/response only
- No session management - stateless operation
- No resource management - tools only
- Limited to tool conversion - no prompts or logging features
These limitations are actually features - they keep the implementation focused and lightweight.
Vercel's MCP support represents a significant step toward tool interoperability in the AI ecosystem. By building MCP into the AI SDK, they've created a bridge between:
- Any AI model (OpenAI, Anthropic, Google, etc.)
- Any MCP tool (databases, APIs, local scripts)
- Any transport (HTTP, WebSocket, local processes)
This means you can write an MCP server once and use it with any AI model supported by the SDK. Or conversely, use existing MCP tools from the ecosystem with your preferred model.
While marked as experimental, this MCP implementation shows Vercel's commitment to the emerging standard for AI tool integration. As the MCP ecosystem grows alongside adoption by major players like Anthropic and OpenAI, having this built into one of the most popular AI SDKs positions developers well for the future.
The experimental prefix (experimental_createMCPClient) indicates the API may evolve, but the core functionality is solid for those ready to explore MCP's potential.
Based on my testing, here's a complete working example:
import { experimental_createMCPClient } from 'ai';
// Working transport implementation
class MCPTransport {
  constructor(url) {
    this.url = url;
  }

  async start() {
    // Empty - let the client handle initialization
  }

  async send(message) {
    const response = await fetch(this.url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(message)
    });

    if (response.status === 204) return;

    const result = await response.json();
    if (this.onmessage) {
      this.onmessage(result);
    }
  }

  async close() {}
}
// Create client and use tools
const mcpClient = await experimental_createMCPClient({
  transport: new MCPTransport('http://localhost:3456/rpc')
});

const tools = await mcpClient.tools();

// Execute a tool directly
const result = await tools.calculator.execute({
  operation: 'add',
  a: 10,
  b: 20
});

console.log(JSON.parse(result.content[0].text));
// Output: { result: 30, expression: '10 add 20 = 30' }

await mcpClient.close();
The complete demo with server implementation is available in the ai-sdk-undocumented-mcp-demo repository.
Vercel's experimental MCP support in the AI SDK is a smart bet on the future of AI tool integration. With MCP emerging as the standard across major AI platforms, Vercel's implementation provides developers with a simple, type-safe bridge between MCP servers and AI models.
Whether you're building specialized tools for your AI applications or want to tap into the growing MCP ecosystem, this experimental feature is worth exploring. The experimental status means the API may evolve, but that's part of the excitement - we're witnessing the early days of a new standard for AI interoperability.