Cline is an AI agent system built as a VSCode extension that provides intelligent coding assistance through a modular architecture. The system implements a dual-mode (Plan/Act) agent loop with robust state management, tool execution capabilities, and integration with a range of AI providers.
- Location: `src/extension.ts`
- Responsibility: VSCode extension lifecycle management, activation/deactivation, command registration
- Key Functions:
  - `activate()`: Initializes the extension, sets up webview providers, registers commands
  - `deactivate()`: Cleans up resources, disposes webview instances
  - URI handling for authentication callbacks
- Location: `webview-ui/src/`
- Technology: React-based webview using TypeScript
- Responsibility: User interface rendering, state management, user interaction handling
- Key Components:
  - `App.tsx`: Main application component with view routing
  - `ChatView`: Primary chat interface
  - `ExtensionStateContext`: React context for state management
  - gRPC client for backend communication
- Location: `src/core/controller/index.ts`
- Responsibility: Central state management, task coordination, API configuration
- Key Functions:
  - `initTask()`: Initializes new task instances
  - `handleWebviewMessage()`: Processes messages from the webview
  - `togglePlanActModeWithChatSettings()`: Manages Plan/Act mode switching
  - State persistence across VSCode sessions
- Location: `src/core/task/index.ts`
- Responsibility: Core agent logic, LLM interaction, tool execution
- Key Classes:
  - `Task`: Main agent instance managing conversation flow
  - `TaskState`: Manages task lifecycle state
  - `MessageStateHandler`: Handles message persistence
- Location: `src/core/task/ToolExecutor.ts`
- Responsibility: Tool discovery, validation, and execution
- Supported Tools:
- File operations (read, write, edit)
- Terminal command execution
- Browser automation
- MCP server integration
- Git operations
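The execution pattern behind these tools can be sketched as a validate-then-approve-then-dispatch pipeline. This is an illustrative sketch, not Cline's actual `ToolExecutor` API; the tool names, parameter checks, and the `autoApprove` flag are assumptions:

```typescript
// Hypothetical tool-dispatch sketch: validate a tool call, decide whether
// user approval is needed, then route to a handler. Names are illustrative.
type ToolName = "read_file" | "write_to_file" | "execute_command";

interface ToolCall {
  name: ToolName;
  params: Record<string, string>;
}

// Tools that mutate the workspace or run commands are treated as sensitive.
const SENSITIVE_TOOLS: ToolName[] = ["write_to_file", "execute_command"];

function validateToolCall(call: ToolCall): string | null {
  // Reject calls missing required parameters before execution.
  if (call.name !== "execute_command" && !call.params["path"]) {
    return `Missing required parameter 'path' for ${call.name}`;
  }
  return null;
}

function requiresApproval(call: ToolCall, autoApprove: boolean): boolean {
  // Sensitive tools prompt the user unless auto-approval is configured.
  return SENSITIVE_TOOLS.includes(call.name) && !autoApprove;
}
```

Validating before prompting means the user is never asked to approve a call that would fail anyway.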
- Location: `src/api/`
- Responsibility: LLM provider abstraction and management
- Supported Providers:
- Anthropic (Claude)
- OpenRouter
- OpenAI
- AWS Bedrock
- Local models (Ollama, LM Studio)
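Multiple providers are supported behind a single interface via a factory pattern. The following is a minimal sketch of that pattern; the `ApiHandler` interface and handler classes are assumptions for illustration, not the real types in `src/api/`:

```typescript
// Illustrative provider-factory sketch: each provider implements a common
// interface, and a factory maps a config value to a concrete handler.
interface ApiHandler {
  providerId: string;
  createMessage(prompt: string): string; // real handlers would stream chunks
}

class AnthropicHandler implements ApiHandler {
  providerId = "anthropic";
  createMessage(prompt: string): string {
    return `[anthropic] ${prompt}`;
  }
}

class OpenRouterHandler implements ApiHandler {
  providerId = "openrouter";
  createMessage(prompt: string): string {
    return `[openrouter] ${prompt}`;
  }
}

function buildApiHandler(provider: string): ApiHandler {
  switch (provider) {
    case "anthropic":
      return new AnthropicHandler();
    case "openrouter":
      return new OpenRouterHandler();
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}
```

The rest of the system depends only on the interface, so adding a provider means adding one handler and one factory case.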
- Location: `src/core/context/`
- Responsibility: Conversation context management, token optimization
- Key Components:
  - `ContextManager`: Handles context window management
  - `FileContextTracker`: Tracks file modifications
  - `ModelContextTracker`: Tracks model usage
- Location: `src/core/storage/`
- Responsibility: Persistent storage for tasks, history, and configuration
- Storage Types:
- Global state (VSCode globalStorage)
- Workspace state (VSCode workspaceStorage)
- Secrets (VSCode secretsStorage)
- File-based task storage
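The split between these tiers can be sketched as a routing function: secrets go to secure storage, per-folder settings to workspace state, large conversation data to files, and everything else to global state. The key names below are illustrative assumptions, not Cline's actual key set:

```typescript
// Hypothetical sketch of routing a value to the right storage tier.
type StorageTier = "global" | "workspace" | "secrets" | "file";

// Illustrative key lists; the real extension defines its own.
const SECRET_KEYS = new Set(["apiKey", "awsAccessKey"]);
const WORKSPACE_KEYS = new Set(["localRulesToggles"]);

function storageTierFor(key: string): StorageTier {
  if (SECRET_KEYS.has(key)) return "secrets"; // never in plain state
  if (WORKSPACE_KEYS.has(key)) return "workspace"; // per-folder settings
  if (key.startsWith("task/")) return "file"; // large conversation data
  return "global"; // user-wide preferences
}
```

Keeping this decision in one place prevents a credential from ever landing in the (unencrypted) global state.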
```mermaid
graph TD
    subgraph VSCode Extension Host
        subgraph Extension Layer
            EXT[Extension Entry Point]
            CMD[Command Registry]
            URI[URI Handler]
        end
        subgraph Webview Layer
            WV[Webview Provider]
            REACT[React App]
            GRPC[gRPC Client]
        end
        subgraph Controller Layer
            CTRL[Controller]
            STATE[State Manager]
            CONFIG[Config Manager]
        end
        subgraph Task Layer
            TASK[Task Instance]
            TSTATE[Task State]
            MSG[Message Handler]
        end
        subgraph Tool Layer
            TOOLS[Tool Executor]
            TERM[Terminal Manager]
            BROWSER[Browser Session]
            MCP[MCP Hub]
        end
        subgraph API Layer
            API[API Handler]
            PROVIDERS[Provider Factory]
            STREAM[Stream Manager]
        end
        subgraph Context Layer
            CTX[Context Manager]
            FILECTX[File Context]
            MODELCTX[Model Context]
        end
        subgraph Storage Layer
            GLOBAL[Global Storage]
            WORKSPACE[Workspace Storage]
            SECRETS[Secrets Storage]
            TASKSTORAGE[Task Storage]
        end
    end

    EXT --> WV
    EXT --> CMD
    EXT --> URI
    WV --> REACT
    REACT --> GRPC
    GRPC --> CTRL
    CTRL --> STATE
    CTRL --> CONFIG
    CTRL --> TASK
    TASK --> TSTATE
    TASK --> MSG
    TASK --> TOOLS
    TOOLS --> TERM
    TOOLS --> BROWSER
    TOOLS --> MCP
    TASK --> API
    API --> PROVIDERS
    API --> STREAM
    TASK --> CTX
    CTX --> FILECTX
    CTX --> MODELCTX
    CTRL --> GLOBAL
    CTRL --> WORKSPACE
    CTRL --> SECRETS
    TASK --> TASKSTORAGE
```
```
// Extension activation
activate() → WebviewProvider.create() → Controller() → Task()

// Task initialization
initTask() → new Task() → startTask() → initiateTaskLoop()
```

```typescript
class Task {
  async initiateTaskLoop(userContent: UserContent) {
    let nextUserContent = userContent
    while (!this.taskState.abort) {
      // 1. Prepare context and environment
      const [processedContent, environmentDetails] = await this.loadContext(nextUserContent)

      // 2. Make API request to LLM
      const stream = await this.attemptApiRequest(previousApiReqIndex)

      // 3. Process streaming response
      for await (const chunk of stream) {
        switch (chunk.type) {
          case "text":
            await this.presentAssistantMessage()
            break
          case "tool_use":
            await this.toolExecutor.executeTool(toolBlock)
            break
        }
      }

      // 4. Handle tool results and continue
      if (didEndLoop) {
        break
      } else {
        nextUserContent = await this.prepareNextUserContent()
      }
    }
  }
}
```

```mermaid
sequenceDiagram
    participant User
    participant Webview
    participant Controller
    participant Task
    participant LLM
    participant Tools

    User->>Webview: Input message
    Webview->>Controller: send message
    Controller->>Task: initTask()
    Task->>Task: prepare context
    Task->>LLM: API request
    LLM-->>Task: streaming response
    loop For each content block
        Task->>Task: parse content
        alt Text content
            Task->>Webview: display text
        else Tool use
            Task->>Tools: execute tool
            Tools-->>Task: tool result
            Task->>LLM: send tool result
        end
    end
    LLM-->>Task: completion
    Task->>Webview: display completion
```
- Protocol: gRPC over VSCode's postMessage API
- Message Types: Defined in the `proto/` directory
- State Synchronization: Real-time bidirectional updates
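Because `postMessage` is fire-and-forget, the gRPC-style layer must correlate requests with responses itself. A minimal sketch of that correlation, assuming a simple `{ id, method }` / `{ id, result }` message shape (the real protobuf-defined messages are richer):

```typescript
// Sketch of request/response correlation over a postMessage-like channel.
// Each request gets an id; the matching response resolves its promise.
type Pending = Map<string, (value: unknown) => void>;

function makeClient(post: (msg: { id: string; method: string }) => void) {
  const pending: Pending = new Map();
  let nextId = 0;
  return {
    request(method: string): Promise<unknown> {
      const id = String(nextId++);
      post({ id, method }); // send to the extension host
      return new Promise((resolve) => pending.set(id, resolve));
    },
    // Called when a response message arrives from the extension host.
    onMessage(msg: { id: string; result: unknown }) {
      const resolve = pending.get(msg.id);
      if (resolve) {
        pending.delete(msg.id);
        resolve(msg.result);
      }
    },
  };
}
```

Streaming responses extend this by resolving the same id multiple times through a callback rather than a one-shot promise.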
- Protocol: REST API with streaming support
- Providers: Multiple AI provider support via factory pattern
- Error Handling: Automatic retry, context window management
```
// Tool execution sequence
Task.presentAssistantMessage() → ToolExecutor.executeTool() →
  Specific Tool Implementation → Tool Result → Task.recursivelyMakeClineRequests()
```

- Purpose: Information gathering, planning, discussion
- Tools Available: `plan_mode_respond` (conversational only)
- User Interaction: Clarifying questions, plan refinement

- Purpose: Task execution, tool usage, implementation
- Tools Available: All tools except `plan_mode_respond`
- User Interaction: Tool approval, result review
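The gating described above reduces to a single filter over the tool set. A minimal sketch, in which only `plan_mode_respond` is a real Cline tool name and the others are illustrative placeholders:

```typescript
// Sketch of Plan/Act tool gating: Plan mode exposes only the
// conversational tool; Act mode exposes everything else.
type Mode = "plan" | "act";

const ALL_TOOLS = [
  "read_file",
  "write_to_file",
  "execute_command",
  "plan_mode_respond",
];

function availableTools(mode: Mode): string[] {
  return mode === "plan"
    ? ALL_TOOLS.filter((t) => t === "plan_mode_respond")
    : ALL_TOOLS.filter((t) => t !== "plan_mode_respond");
}
```

Deriving both lists from one source avoids the two modes drifting apart as tools are added.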
```
// Mode transition flow
togglePlanActModeWithChatSettings() →
  updateGlobalState("mode") →
  update API configuration →
  Task.chatSettings update →
  continue with new mode
```

- API configurations
- User preferences
- MCP server configurations
- Task history
- Conversation history
- Tool execution state
- Checkpoint information
- Context window usage
- UI preferences
- Current view (chat/settings/history)
- Input state
- Message display state
- Automatic retry with exponential backoff
- Context window management with truncation
- Provider-specific error handling
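The retry behavior can be sketched as a generic wrapper; the attempt count and base delay below are illustrative, not Cline's actual values, and the sleep function is injected so the logic is testable:

```typescript
// Sketch of retry with exponential backoff: on failure, wait
// baseDelay * 2^attempt before trying again, up to maxAttempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Delay doubles each failed attempt: 1s, 2s, 4s, ...
      await sleep(baseDelayMs * 2 ** attempt);
    }
  }
  throw lastError;
}
```

A production version would also distinguish retryable errors (rate limits, timeouts) from permanent ones (invalid API key) and give up immediately on the latter.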
- Tool validation before execution
- User approval for sensitive operations
- Rollback capabilities via checkpoints
- Task resumption from history
- Checkpoint restoration
- Conversation history recovery
- API keys stored in VSCode secrets storage
- Sensitive data never logged
- Local-only processing by default
- `.clineignore` file support
- User approval for file operations
- Configurable auto-approval settings
- Complete task history
- Token usage tracking
- Checkpoint-based change tracking
- Intelligent conversation truncation
- Token usage monitoring
- Context window optimization
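One way truncation like this can work is to always keep the original task message and the newest turns, dropping older middle messages when the token budget is exceeded. This is a simplified sketch under that assumption, not Cline's actual `ContextManager` algorithm:

```typescript
// Illustrative conversation truncation: preserve the first (task) message
// and the most recent turns; drop middle messages that no longer fit.
interface Msg {
  role: "user" | "assistant";
  tokens: number;
}

function truncate(history: Msg[], maxTokens: number, keepRecent = 2): Msg[] {
  const total = history.reduce((sum, m) => sum + m.tokens, 0);
  if (total <= maxTokens || history.length <= keepRecent + 1) return history;

  const head = history[0]; // original task message, always kept
  const tail = history.slice(-keepRecent); // newest turns, always kept
  const kept = [head, ...tail];
  let budget = kept.reduce((s, m) => s + m.tokens, 0);

  // Re-add middle messages newest-first while they still fit the budget.
  const middle = history.slice(1, -keepRecent);
  for (let i = middle.length - 1; i >= 0; i--) {
    if (budget + middle[i].tokens <= maxTokens) {
      kept.splice(1, 0, middle[i]); // insert after head, preserving order
      budget += middle[i].tokens;
    }
  }
  return kept;
}
```

Real implementations must also keep user/assistant turns paired so the provider API still sees a valid alternating conversation.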
- API response caching
- File content caching
- Model metadata caching
- Real-time message streaming
- Partial content updates
- Efficient state synchronization
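Partial content updates can be sketched as follows: each streamed text chunk grows the current in-progress message instead of appending a new one, so the UI re-renders a single growing bubble. The message shape here is an assumption for illustration:

```typescript
// Sketch of partial streaming updates: chunks extend the last partial
// message in place; a new message is only created when none is in progress.
interface UiMessage {
  text: string;
  partial: boolean;
}

function applyChunk(messages: UiMessage[], chunkText: string): UiMessage[] {
  const last = messages[messages.length - 1];
  if (last && last.partial) {
    // Update the in-progress message rather than appending a new one.
    last.text += chunkText;
    return messages;
  }
  return [...messages, { text: chunkText, partial: true }];
}

function finalize(messages: UiMessage[]): void {
  // Mark the stream as complete so the next chunk starts a fresh message.
  const last = messages[messages.length - 1];
  if (last) last.partial = false;
}
```

This keeps the state-sync payload small: most updates touch one message instead of re-sending the whole conversation.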
This architecture enables Cline to function as a sophisticated AI agent capable of complex software development tasks while maintaining user control, security, and performance.