
@AshikNesin
Forked from AndrewAltimit/!README.md
Created July 27, 2025 17:09
Claude Code and Gemini CLI Integration

Gemini CLI Integration for Claude Code MCP Server

A complete setup guide for integrating Google's Gemini CLI with Claude Code through an MCP (Model Context Protocol) server. This provides automatic second opinion consultation when Claude expresses uncertainty or encounters complex technical decisions.

Usage

See the template repository for a complete example, including Gemini CLI automated PR reviews: Example PR, Script.

[mcp-demo screenshot]

Quick Start

1. Install Gemini CLI (Host-based)

# Switch to Node.js 22.16.0
nvm use 22.16.0

# Install Gemini CLI globally
npm install -g @google/gemini-cli

# Test installation
gemini --help

# Authenticate with Google account (free tier: 60 req/min, 1,000/day)
# Authentication happens automatically on first use

2. Direct Usage (Fastest)

# Direct consultation (no container setup needed)
echo "Your question here" | gemini

# Example: Technical questions
echo "Best practices for microservice authentication?" | gemini -m gemini-2.5-pro

Host-Based MCP Integration

Architecture Overview

  • Host-Based Setup: Both MCP server and Gemini CLI run on host machine
  • Why Host-Only: Gemini CLI requires interactive authentication, and running on the host avoids Docker-in-Docker complexity
  • Communication Modes:
    • stdio (recommended): Bidirectional streaming for production use
    • HTTP: Simple request/response for testing
  • Auto-consultation: Detects uncertainty patterns in Claude responses
  • Manual consultation: On-demand second opinions via MCP tools
  • Response synthesis: Combines both AI perspectives
  • Singleton Pattern: Ensures consistent state management across all tool calls

Key Files Structure

├── gemini_mcp_server.py      # stdio-based MCP server with HTTP mode support
├── gemini_mcp_server_http.py # HTTP server implementation (imported by main)
├── gemini_integration.py     # Core integration module with singleton pattern
├── gemini-config.json        # Gemini configuration
├── start-gemini-mcp.sh       # Startup script for both modes
└── test_gemini_mcp.py        # Test script for both server modes

All files should be placed in the same directory for easy deployment.

Host-Based MCP Server Setup

stdio Mode (Recommended for Production)

# Start MCP server in stdio mode (default)
cd your-project
python3 gemini_mcp_server.py --project-root .

# Or with environment variables
GEMINI_ENABLED=true \
GEMINI_AUTO_CONSULT=true \
GEMINI_CLI_COMMAND=gemini \
GEMINI_TIMEOUT=200 \
GEMINI_RATE_LIMIT=2 \
python3 gemini_mcp_server.py --project-root .

HTTP Mode (For Testing)

# Start MCP server in HTTP mode
python3 gemini_mcp_server.py --project-root . --port 8006

# The main server automatically:
# 1. Detects the --port argument
# 2. Imports gemini_mcp_server_http module
# 3. Starts the FastAPI server on the specified port

Claude Code Configuration

stdio Configuration (Recommended)

Add to your Claude Code's MCP settings:

{
  "mcpServers": {
    "gemini": {
      "command": "python3",
      "args": ["/path/to/gemini_mcp_server.py", "--project-root", "."],
      "cwd": "/path/to/your/project",
      "env": {
        "GEMINI_ENABLED": "true",
        "GEMINI_AUTO_CONSULT": "true", 
        "GEMINI_CLI_COMMAND": "gemini"
      }
    }
  }
}

HTTP Configuration (For Testing)

{
  "mcpServers": {
    "gemini-http": {
      "url": "http://localhost:8006",
      "transport": "http"
    }
  }
}

Server Mode Comparison

Feature           | stdio Mode                 | HTTP Mode
------------------|----------------------------|-------------------------
Communication     | Bidirectional streaming    | Request/Response
Performance       | Better for long operations | Good for simple queries
Real-time updates | ✅ Supported               | ❌ Not supported
Setup complexity  | Moderate                   | Simple
Use case          | Production                 | Testing/Development

Core Features

1. Container Detection (Critical Feature)

Both server modes automatically detect if running inside a container and exit immediately with helpful instructions. This is critical because:

  • Gemini CLI requires Docker access for containerized execution
  • Running Docker-in-Docker causes authentication and performance issues
  • The server must run on the host system to access the Docker daemon
  • Detection happens before any imports to fail fast with clear error messages
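
The check described above can be sketched as follows. This is a minimal illustration (the function name is ours), but the /.dockerenv and CONTAINER_ENV signals match the check the server performs:

```python
import os
import sys

def looks_like_container() -> bool:
    # Docker creates /.dockerenv inside containers; CONTAINER_ENV is an explicit override.
    return os.path.exists("/.dockerenv") or bool(os.environ.get("CONTAINER_ENV"))

if __name__ == "__main__" and looks_like_container():
    # Fail fast with a clear message before any heavy imports run.
    print("ERROR: Gemini MCP Server cannot run inside a container!", file=sys.stderr)
    sys.exit(1)
```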

2. Uncertainty Detection

Automatically detects patterns like:

  • "I'm not sure", "I think", "possibly", "probably"
  • "Multiple approaches", "trade-offs", "alternatives"
  • Critical operations: "security", "production", "database migration"
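
A minimal sketch of this matching, using a small subset of the patterns above (the full lists appear in the integration module later in this document):

```python
import re

# A small subset of the documented patterns, for illustration only.
UNCERTAINTY = [r"\bI'm not sure\b", r"\bI think\b", r"\bpossibly\b", r"\bprobably\b"]
CRITICAL = [r"\bsecurity\b", r"\bproduction\b", r"\bdatabase migration\b"]

def detect(text: str):
    """Return (has_match, matched_patterns) across both pattern categories."""
    hits = [p for p in UNCERTAINTY + CRITICAL if re.search(p, text, re.IGNORECASE)]
    return bool(hits), hits

print(detect("I think this is safe for production"))
```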

3. MCP Tools Available

consult_gemini

Manual consultation with Gemini for second opinions or validation.

Parameters:

  • query (required): The question or topic to consult Gemini about
  • context (optional): Additional context for the consultation
  • comparison_mode (optional, default: true): Whether to request structured comparison format
  • force (optional, default: false): Force consultation even if disabled

Example:

# In Claude Code
Use the consult_gemini tool with:
query: "Should I use WebSockets or gRPC for real-time communication?"
context: "Building a multiplayer application with real-time updates"
comparison_mode: true

gemini_status

Check Gemini integration status and statistics.

Returns:

  • Configuration status (enabled, auto-consult, CLI command, timeout, rate limit)
  • Gemini CLI availability and version
  • Consultation statistics (total, completed, average time)
  • Conversation history size

Example:

# Check current status
Use the gemini_status tool

toggle_gemini_auto_consult

Enable or disable automatic Gemini consultation on uncertainty detection.

Parameters:

  • enable (optional): true to enable, false to disable. If not provided, toggles current state.

Example:

# Toggle auto-consultation
Use the toggle_gemini_auto_consult tool

# Or explicitly enable/disable
Use the toggle_gemini_auto_consult tool with:
enable: false

clear_gemini_history

Clear Gemini conversation history to start fresh.

Example:

# Clear all consultation history
Use the clear_gemini_history tool

4. Response Synthesis

  • Identifies agreement/disagreement between Claude and Gemini
  • Provides confidence levels (high/medium/low)
  • Generates combined recommendations
  • Tracks execution time and consultation ID

5. Advanced Features

Conversation History

The integration maintains conversation history across consultations:

  • Configurable history size (default: 10 entries)
  • History included in subsequent consultations for context
  • Can be cleared with clear_gemini_history tool
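
The trimming behavior can be illustrated standalone (MAX_HISTORY mirrors the max_history_entries setting; the names here are ours):

```python
MAX_HISTORY = 10  # mirrors max_history_entries (default: 10)
history = []

def record(query: str, answer: str) -> None:
    history.append((query, answer))
    # Drop the oldest entries so only the most recent MAX_HISTORY remain.
    del history[:-MAX_HISTORY]

for i in range(15):
    record(f"q{i}", f"a{i}")
```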

Uncertainty Detection API

The MCP server exposes methods for detecting uncertainty:

# Detect uncertainty in responses
has_uncertainty, patterns = server.detect_response_uncertainty(response_text)

# Automatically consult if uncertain
result = await server.maybe_consult_gemini(response_text, context)

Statistics Tracking

  • Total consultations attempted
  • Successful completions
  • Average execution time per consultation
  • Total execution time across all consultations
  • Conversation history size
  • Last consultation timestamp
  • Error tracking and timeout monitoring

Configuration

Environment Variables

GEMINI_ENABLED=true                   # Enable integration
GEMINI_AUTO_CONSULT=true              # Auto-consult on uncertainty
GEMINI_CLI_COMMAND=gemini             # CLI command to use
GEMINI_TIMEOUT=200                    # Query timeout in seconds
GEMINI_RATE_LIMIT=5                   # Delay between calls (seconds)
GEMINI_MAX_CONTEXT=4000               # Max context length
GEMINI_MODEL=gemini-2.5-flash         # Model to use
GEMINI_SANDBOX=false                  # Sandboxing isolates operations
GEMINI_API_KEY=                       # Optional (blank for free tier)
GEMINI_LOG_CONSULTATIONS=true         # Log consultation details
GEMINI_DEBUG=false                    # Debug mode
GEMINI_INCLUDE_HISTORY=true           # Include conversation history
GEMINI_MAX_HISTORY=10                 # Max history entries to maintain
GEMINI_MCP_PORT=8006                  # Port for HTTP mode (if used)
GEMINI_MCP_HOST=127.0.0.1             # Host for HTTP mode (if used)
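
Since environment variables arrive as strings, the server coerces them to typed values. A hedged sketch of that coercion (the defaults here follow the loader shown later in this document, which can differ from the example values above):

```python
import os

def load_env_config() -> dict:
    # Booleans compare the lowercased string; numbers are parsed explicitly.
    return {
        "enabled": os.getenv("GEMINI_ENABLED", "true").lower() == "true",
        "auto_consult": os.getenv("GEMINI_AUTO_CONSULT", "true").lower() == "true",
        "timeout": int(os.getenv("GEMINI_TIMEOUT", "60")),
        "rate_limit_delay": float(os.getenv("GEMINI_RATE_LIMIT", "2")),
    }
```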

Gemini Configuration File

Create gemini-config.json:

{
  "enabled": true,
  "auto_consult": true,
  "cli_command": "gemini",
  "timeout": 300,
  "rate_limit_delay": 5.0,
  "log_consultations": true,
  "model": "gemini-2.5-flash",
  "sandbox_mode": true,
  "debug_mode": false,
  "include_history": true,
  "max_history_entries": 10,
  "uncertainty_thresholds": {
    "uncertainty_patterns": true,
    "complex_decisions": true,
    "critical_operations": true
  }
}
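
When both the environment and gemini-config.json are present, the server builds the env-derived dict first and then applies the file on top, so file values win. A small sketch of that precedence (values are illustrative):

```python
import json

# Env-derived defaults (illustrative values).
config = {"timeout": 60, "model": "gemini-2.5-flash", "sandbox_mode": False}

# File contents override env-derived values via dict.update().
file_config = json.loads('{"timeout": 300, "sandbox_mode": true}')
config.update(file_config)
```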

Integration Module Core

Uncertainty Patterns (Python)

UNCERTAINTY_PATTERNS = [
    r"\bI'm not sure\b",
    r"\bI think\b", 
    r"\bpossibly\b",
    r"\bprobably\b",
    r"\bmight be\b",
    r"\bcould be\b",
    # ... more patterns
]

COMPLEX_DECISION_PATTERNS = [
    r"\bmultiple approaches\b",
    r"\bseveral options\b", 
    r"\btrade-offs?\b",
    r"\balternatives?\b",
    # ... more patterns
]

CRITICAL_OPERATION_PATTERNS = [
    r"\bproduction\b",
    r"\bdatabase migration\b",
    r"\bsecurity\b",
    r"\bauthentication\b",
    # ... more patterns
]

Basic Integration Class Structure

class GeminiIntegration:
    def __init__(self, config: Optional[Dict[str, Any]] = None):
        self.config = config or {}
        self.enabled = self.config.get('enabled', True)
        self.auto_consult = self.config.get('auto_consult', True)
        self.cli_command = self.config.get('cli_command', 'gemini')
        self.timeout = self.config.get('timeout', 60)
        self.rate_limit_delay = self.config.get('rate_limit_delay', 2)
        self.conversation_history = []
        self.max_history_entries = self.config.get('max_history_entries', 10)
        self.include_history = self.config.get('include_history', True)
        
    async def consult_gemini(self, query: str, context: str = "") -> Dict[str, Any]:
        """Consult Gemini CLI for second opinion"""
        # Rate limiting
        await self._enforce_rate_limit()
        
        # Prepare query with context and history
        full_query = self._prepare_query(query, context)
        
        # Execute Gemini CLI command
        result = await self._execute_gemini_cli(full_query)
        
        # Update conversation history
        if self.include_history and result.get("output"):
            self.conversation_history.append((query, result["output"]))
            # Trim history if needed
            if len(self.conversation_history) > self.max_history_entries:
                self.conversation_history = self.conversation_history[-self.max_history_entries:]
        
        return result
        
    def detect_uncertainty(self, text: str) -> Tuple[bool, List[str]]:
        """Detect if text contains uncertainty patterns"""
        found_patterns = []
        # Check all pattern categories
        # Returns (has_uncertainty, list_of_matched_patterns)

# Singleton pattern implementation
_integration = None

def get_integration(config: Optional[Dict[str, Any]] = None) -> GeminiIntegration:
    """Get or create the global Gemini integration instance"""
    global _integration
    if _integration is None:
        _integration = GeminiIntegration(config)
    return _integration

Singleton Pattern Benefits

The singleton pattern ensures:

  • Consistent Rate Limiting: All MCP tool calls share the same rate limiter
  • Unified Configuration: Changes to config affect all usage points
  • State Persistence: Consultation history and statistics are maintained
  • Resource Efficiency: Only one instance manages the Gemini CLI connection
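
The guarantee is easy to verify in isolation; this standalone sketch mirrors the get_integration() shape shown above:

```python
from typing import Any, Dict, Optional

class Integration:
    def __init__(self, config: Optional[Dict[str, Any]] = None):
        self.config = config or {}

_instance: Optional[Integration] = None

def get_integration(config: Optional[Dict[str, Any]] = None) -> Integration:
    # Config is honored only on the first call; later calls reuse the instance.
    global _instance
    if _instance is None:
        _instance = Integration(config)
    return _instance

a = get_integration({"enabled": True})
b = get_integration({"enabled": False})  # ignored: the instance already exists
```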

Example Workflows

Manual Consultation

# In Claude Code
Use the consult_gemini tool with:
query: "Should I use WebSockets or gRPC for real-time communication?"
context: "Building a multiplayer application with real-time updates"

Automatic Consultation Flow

User: "How should I handle authentication?"

Claude: "I think OAuth might work, but I'm not certain about the security implications..."

[Auto-consultation triggered]

Gemini: "For authentication, consider these approaches: 1) OAuth 2.0 with PKCE for web apps..."

Synthesis: Both suggest OAuth but Claude uncertain about security. Gemini provides specific implementation details. Recommendation: Follow Gemini's OAuth 2.0 with PKCE approach.

Testing

Test Both Server Modes

# Test stdio mode (default)
python3 test_gemini_mcp.py

# Test HTTP mode
python3 test_gemini_mcp.py --mode http

# Test specific server
python3 test_gemini_mcp.py --mode stdio --verbose

Manual Testing via HTTP

# Start HTTP server
python3 gemini_mcp_server.py --port 8006

# Test endpoints
curl http://localhost:8006/health
curl http://localhost:8006/mcp/tools

# Test Gemini consultation
curl -X POST http://localhost:8006/mcp/tools/consult_gemini \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the best Python web framework?"}'

Troubleshooting

Issue                     | Solution
--------------------------|------------------------------------------------
Gemini CLI not found      | Install Node.js 18+ and npm install -g @google/gemini-cli
Authentication errors     | Run gemini and sign in with a Google account
Node version issues       | Use nvm use 22.16.0
Timeout errors            | Increase GEMINI_TIMEOUT (default: 60s)
Auto-consult not working  | Check GEMINI_AUTO_CONSULT=true
Rate limiting             | Adjust GEMINI_RATE_LIMIT (default: 2s)
Container detection error | Run on the host system, not inside Docker
stdio connection issues   | Check the Claude Code MCP configuration
HTTP connection refused   | Verify port availability and firewall settings

Security Considerations

  1. API Credentials: Store securely, use environment variables
  2. Data Privacy: Be cautious about sending proprietary code
  3. Input Sanitization: Sanitize queries before sending
  4. Rate Limiting: Respect API limits (free tier: 60/min, 1000/day)
  5. Host-Based Architecture: Both Gemini CLI and MCP server run on host for auth compatibility
  6. Network Security: HTTP mode binds to 127.0.0.1 by default (not 0.0.0.0)

Best Practices

  1. Rate Limiting: Implement appropriate delays between calls
  2. Context Management: Keep context concise and relevant
  3. Error Handling: Always handle Gemini failures gracefully
  4. User Control: Allow users to disable auto-consultation
  5. Logging: Log consultations for debugging and analysis
  6. History Management: Periodically clear history to avoid context bloat
  7. Mode Selection: Use stdio for production, HTTP for testing

Use Cases

  • Architecture Decisions: Get second opinions on design choices
  • Security Reviews: Validate security implementations
  • Performance Optimization: Compare optimization strategies
  • Code Quality: Review complex algorithms or patterns
  • Troubleshooting: Debug complex technical issues
  • API Design: Validate REST/GraphQL/gRPC decisions
  • Database Schema: Review data modeling choices

gemini-config.json

{
  "enabled": true,
  "auto_consult": true,
  "cli_command": "gemini",
  "timeout": 300,
  "rate_limit_delay": 5.0,
  "max_context_length": 4000,
  "log_consultations": true,
  "model": "gemini-2.5-flash",
  "sandbox_mode": true,
  "debug_mode": false,
  "include_history": true,
  "max_history_entries": 10,
  "uncertainty_thresholds": {
    "uncertainty_patterns": true,
    "complex_decisions": true,
    "critical_operations": true
  }
}

gemini_integration.py

#!/usr/bin/env python3
"""
Gemini CLI Integration Module
Provides automatic consultation with Gemini for second opinions and validation
"""
import asyncio
import logging
import re
import time
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple

# Setup logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Uncertainty patterns that trigger automatic Gemini consultation
UNCERTAINTY_PATTERNS = [
    r"\bI'm not sure\b",
    r"\bI think\b",
    r"\bpossibly\b",
    r"\bprobably\b",
    r"\bmight be\b",
    r"\bcould be\b",
    r"\bI believe\b",
    r"\bIt seems\b",
    r"\bappears to be\b",
    r"\buncertain\b",
    r"\bI would guess\b",
    r"\blikely\b",
    r"\bperhaps\b",
    r"\bmaybe\b",
    r"\bI assume\b",
]

# Complex decision patterns that benefit from second opinions
COMPLEX_DECISION_PATTERNS = [
    r"\bmultiple approaches\b",
    r"\bseveral options\b",
    r"\btrade-offs?\b",
    r"\bconsider(?:ing)?\b",
    r"\balternatives?\b",
    r"\bpros and cons\b",
    r"\bweigh(?:ing)? the options\b",
    r"\bchoice between\b",
    r"\bdecision\b",
]

# Critical operations that should trigger consultation
CRITICAL_OPERATION_PATTERNS = [
    r"\bproduction\b",
    r"\bdatabase migration\b",
    r"\bsecurity\b",
    r"\bauthentication\b",
    r"\bencryption\b",
    r"\bAPI key\b",
    r"\bcredentials?\b",
    r"\bperformance\s+critical\b",
]


class GeminiIntegration:
    """Handles Gemini CLI integration for second opinions and validation"""

    def __init__(self, config: Optional[Dict[str, Any]] = None):
        self.config = config or {}
        self.enabled = self.config.get("enabled", True)
        self.auto_consult = self.config.get("auto_consult", True)
        self.cli_command = self.config.get("cli_command", "gemini")
        self.timeout = self.config.get("timeout", 60)
        self.rate_limit_delay = self.config.get("rate_limit_delay", 2.0)
        self.last_consultation = 0
        self.consultation_log = []
        self.max_context_length = self.config.get("max_context_length", 4000)
        self.model = self.config.get("model", "gemini-2.5-flash")
        # Conversation history for maintaining state
        self.conversation_history = []
        self.max_history_entries = self.config.get("max_history_entries", 10)
        self.include_history = self.config.get("include_history", True)

    async def consult_gemini(
        self,
        query: str,
        context: str = "",
        comparison_mode: bool = True,
        force_consult: bool = False,
    ) -> Dict[str, Any]:
        """Consult Gemini CLI for second opinion"""
        if not self.enabled and not force_consult:
            return {"status": "disabled", "message": "Gemini integration is disabled"}
        if not force_consult:
            await self._enforce_rate_limit()
        consultation_id = f"consult_{int(time.time())}_{len(self.consultation_log)}"
        try:
            # Prepare query with context
            full_query = self._prepare_query(query, context, comparison_mode)
            # Execute Gemini CLI command
            result = await self._execute_gemini_cli(full_query)
            # Save to conversation history
            if self.include_history and result.get("output"):
                self.conversation_history.append((query, result["output"]))
                # Trim history if it exceeds max entries
                if len(self.conversation_history) > self.max_history_entries:
                    self.conversation_history = self.conversation_history[
                        -self.max_history_entries:
                    ]
            # Log consultation
            if self.config.get("log_consultations", True):
                self.consultation_log.append(
                    {
                        "id": consultation_id,
                        "timestamp": datetime.now().isoformat(),
                        "query": (query[:200] + "..." if len(query) > 200 else query),
                        "status": "success",
                        "execution_time": result.get("execution_time", 0),
                    }
                )
            return {
                "status": "success",
                "response": result["output"],
                "execution_time": result["execution_time"],
                "consultation_id": consultation_id,
                "timestamp": datetime.now().isoformat(),
            }
        except Exception as e:
            logger.error(f"Error consulting Gemini: {str(e)}")
            return {
                "status": "error",
                "error": str(e),
                "consultation_id": consultation_id,
            }

    def detect_uncertainty(self, text: str) -> Tuple[bool, List[str]]:
        """Detect if text contains uncertainty patterns"""
        found_patterns = []
        # Check uncertainty patterns
        for pattern in UNCERTAINTY_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                found_patterns.append(f"uncertainty: {pattern}")
        # Check complex decision patterns
        for pattern in COMPLEX_DECISION_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                found_patterns.append(f"complex_decision: {pattern}")
        # Check critical operation patterns
        for pattern in CRITICAL_OPERATION_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                found_patterns.append(f"critical_operation: {pattern}")
        return len(found_patterns) > 0, found_patterns

    def clear_conversation_history(self) -> Dict[str, Any]:
        """Clear the conversation history"""
        old_count = len(self.conversation_history)
        self.conversation_history = []
        return {
            "status": "success",
            "cleared_entries": old_count,
            "message": f"Cleared {old_count} conversation entries",
        }

    def get_consultation_stats(self) -> Dict[str, Any]:
        """Get statistics about consultations"""
        if not self.consultation_log:
            return {"total_consultations": 0}
        completed = [e for e in self.consultation_log if e.get("status") == "success"]
        total_execution_time = sum(e.get("execution_time", 0) for e in completed)
        stats = {
            "total_consultations": len(self.consultation_log),
            "completed_consultations": len(completed),
            "average_execution_time": (
                total_execution_time / len(completed) if completed else 0
            ),
            "total_execution_time": total_execution_time,
            "conversation_history_size": len(self.conversation_history),
        }
        # Add last consultation timestamp if available
        if self.consultation_log:
            last_entry = self.consultation_log[-1]
            if last_entry.get("timestamp"):
                stats["last_consultation"] = last_entry["timestamp"]
        return stats

    async def _enforce_rate_limit(self):
        """Enforce rate limiting between consultations"""
        current_time = time.time()
        time_since_last = current_time - self.last_consultation
        if time_since_last < self.rate_limit_delay:
            sleep_time = self.rate_limit_delay - time_since_last
            await asyncio.sleep(sleep_time)
        self.last_consultation = time.time()

    def _prepare_query(self, query: str, context: str, comparison_mode: bool) -> str:
        """Prepare the full query for Gemini CLI"""
        parts = []
        if comparison_mode:
            parts.append("Please provide a technical analysis and second opinion:")
            parts.append("")
        # Include conversation history if enabled and available
        if self.include_history and self.conversation_history:
            parts.append("Previous conversation:")
            parts.append("-" * 40)
            for i, (prev_q, prev_a) in enumerate(
                self.conversation_history[-self.max_history_entries:], 1
            ):
                parts.append(f"Q{i}: {prev_q}")
                # Truncate long responses in history
                if len(prev_a) > 500:
                    parts.append(f"A{i}: {prev_a[:500]}... [truncated]")
                else:
                    parts.append(f"A{i}: {prev_a}")
                parts.append("")
            parts.append("-" * 40)
            parts.append("")
        # Truncate context if too long
        if len(context) > self.max_context_length:
            context = context[: self.max_context_length] + "\n[Context truncated...]"
        if context:
            parts.append("Context:")
            parts.append(context)
            parts.append("")
        parts.append("Current Question/Topic:")
        parts.append(query)
        if comparison_mode:
            parts.extend(
                [
                    "",
                    "Please structure your response with:",
                    "1. Your analysis and understanding",
                    "2. Recommendations or approach",
                    "3. Any concerns or considerations",
                    "4. Alternative approaches (if applicable)",
                ]
            )
        return "\n".join(parts)

    async def _execute_gemini_cli(self, query: str) -> Dict[str, Any]:
        """Execute Gemini CLI command and return results"""
        start_time = time.time()
        # Build command
        cmd = [self.cli_command]
        if self.model:
            cmd.extend(["-m", self.model])
        cmd.extend(["-p", query])  # Non-interactive mode
        try:
            process = await asyncio.create_subprocess_exec(
                *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
            )
            stdout, stderr = await asyncio.wait_for(
                process.communicate(), timeout=self.timeout
            )
            execution_time = time.time() - start_time
            if process.returncode != 0:
                error_msg = stderr.decode() if stderr else "Unknown error"
                if "authentication" in error_msg.lower():
                    error_msg += "\nTip: Run 'gemini' interactively to authenticate"
                raise Exception(f"Gemini CLI failed: {error_msg}")
            return {"output": stdout.decode().strip(), "execution_time": execution_time}
        except asyncio.TimeoutError:
            raise Exception(f"Gemini CLI timed out after {self.timeout} seconds")


# Singleton pattern implementation
_integration = None


def get_integration(config: Optional[Dict[str, Any]] = None) -> GeminiIntegration:
    """
    Get or create the global Gemini integration instance.

    This ensures that all parts of the application share the same instance,
    maintaining consistent state for rate limiting, consultation history,
    and configuration across all tool calls.

    Args:
        config: Optional configuration dict. Only used on first call.

    Returns:
        The singleton GeminiIntegration instance
    """
    global _integration
    if _integration is None:
        _integration = GeminiIntegration(config)
    return _integration

gemini_mcp_server.py

#!/usr/bin/env python3
"""
MCP Server with Gemini Integration
Provides development workflow automation with AI second opinions
"""
import asyncio
import json
import os
import sys
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple

import mcp.server.stdio
import mcp.types as types
from mcp.server import NotificationOptions, Server, InitializationOptions


# Check if running in container BEFORE any other imports or operations
def check_container_and_exit():
    """Check if running in a container and exit immediately if true."""
    if os.path.exists("/.dockerenv") or os.environ.get("CONTAINER_ENV"):
        print("ERROR: Gemini MCP Server cannot run inside a container!", file=sys.stderr)
        print(
            "The Gemini CLI requires Docker access and must run on the host system.",
            file=sys.stderr,
        )
        print("Please launch this server directly on the host with:", file=sys.stderr)
        print("  python gemini_mcp_server.py", file=sys.stderr)
        sys.exit(1)


# Perform container check immediately
check_container_and_exit()

# Now import the integration module
from gemini_integration import get_integration


class MCPServer:
    def __init__(self, project_root: str = None):
        self.project_root = Path(project_root) if project_root else Path.cwd()
        self.server = Server("gemini-mcp-server")
        # Initialize Gemini integration with singleton pattern
        self.gemini_config = self._load_gemini_config()
        # Get the singleton instance, passing config on first call
        self.gemini = get_integration(self.gemini_config)
        # Track uncertainty for auto-consultation
        self.last_response_uncertainty = None
        self._setup_tools()

    def _load_gemini_config(self) -> Dict[str, Any]:
        """Load Gemini configuration from environment or config file."""
        # Try to load .env file if it exists
        env_file = self.project_root / '.env'
        if env_file.exists():
            try:
                with open(env_file, 'r') as f:
                    for line in f:
                        line = line.strip()
                        if line and not line.startswith('#') and '=' in line:
                            key, value = line.split('=', 1)
                            # Only set if not already in environment
                            if key not in os.environ:
                                os.environ[key] = value
            except Exception as e:
                print(f"Warning: Could not load .env file: {e}")
        config = {
            'enabled': os.getenv('GEMINI_ENABLED', 'true').lower() == 'true',
            'auto_consult': os.getenv('GEMINI_AUTO_CONSULT', 'true').lower() == 'true',
            'cli_command': os.getenv('GEMINI_CLI_COMMAND', 'gemini'),
            'timeout': int(os.getenv('GEMINI_TIMEOUT', '60')),
            'rate_limit_delay': float(os.getenv('GEMINI_RATE_LIMIT', '2')),
            'max_context_length': int(os.getenv('GEMINI_MAX_CONTEXT', '4000')),
            'log_consultations': os.getenv('GEMINI_LOG_CONSULTATIONS', 'true').lower() == 'true',
            'model': os.getenv('GEMINI_MODEL', 'gemini-2.5-flash'),
            'sandbox_mode': os.getenv('GEMINI_SANDBOX', 'false').lower() == 'true',
            'debug_mode': os.getenv('GEMINI_DEBUG', 'false').lower() == 'true',
            'include_history': os.getenv('GEMINI_INCLUDE_HISTORY', 'true').lower() == 'true',
            'max_history_entries': int(os.getenv('GEMINI_MAX_HISTORY', '10')),
        }
        # Try to load from config file
        config_file = self.project_root / 'gemini-config.json'
        if config_file.exists():
            try:
                with open(config_file, 'r') as f:
                    file_config = json.load(f)
                    config.update(file_config)
            except Exception as e:
                print(f"Warning: Could not load gemini-config.json: {e}")
        return config

    def _setup_tools(self):
        """Register all MCP tools"""

        # Gemini consultation tool
        @self.server.call_tool()
        async def consult_gemini(arguments: Dict[str, Any]) -> List[types.TextContent]:
            """Consult Gemini CLI for a second opinion or validation."""
            query = arguments.get('query', '')
            context = arguments.get('context', '')
            comparison_mode = arguments.get('comparison_mode', True)
            force_consult = arguments.get('force', False)
            if not query:
                return [types.TextContent(
                    type="text",
                    text="❌ Error: 'query' parameter is required for Gemini consultation"
                )]
            # Consult Gemini
            result = await self.gemini.consult_gemini(
                query=query,
                context=context,
                comparison_mode=comparison_mode,
                force_consult=force_consult
            )
            # Format the response
            return await self._format_gemini_response(result)

        @self.server.call_tool()
        async def gemini_status(arguments: Dict[str, Any]) -> List[types.TextContent]:
            """Get Gemini integration status and statistics."""
            return await self._get_gemini_status()

        @self.server.call_tool()
        async def toggle_gemini_auto_consult(arguments: Dict[str, Any]) -> List[types.TextContent]:
            """Toggle automatic Gemini consultation on uncertainty detection."""
            enable = arguments.get('enable', None)
            if enable is None:
                # Toggle current state
                self.gemini.auto_consult = not self.gemini.auto_consult
            else:
                self.gemini.auto_consult = bool(enable)
            status = "enabled" if self.gemini.auto_consult else "disabled"
            return [types.TextContent(
                type="text",
                text=f"✅ Gemini auto-consultation is now {status}"
            )]

        @self.server.call_tool()
        async def clear_gemini_history(arguments: Dict[str, Any]) -> List[types.TextContent]:
            """Clear Gemini conversation history."""
            result = self.gemini.clear_conversation_history()
            return [types.TextContent(
                type="text",
                text=f"✅ {result['message']}"
            )]

    async def _format_gemini_response(self, result: Dict[str, Any]) -> List[types.TextContent]:
        """Format Gemini consultation response for MCP output."""
        output_lines = []
        output_lines.append("🤖 Gemini Consultation Response")
        output_lines.append("=" * 40)
        output_lines.append("")
        if result['status'] == 'success':
            output_lines.append(f"✅ Consultation ID: {result['consultation_id']}")
            output_lines.append(f"⏱️ Execution time: {result['execution_time']:.2f}s")
            output_lines.append("")
            # Display the raw response (simplified format)
            response = result.get('response', '')
            if response:
                output_lines.append("📄 Response:")
                output_lines.append(response)
        elif result['status'] == 'disabled':
            output_lines.append("ℹ️ Gemini consultation is currently disabled")
            output_lines.append("💡 Enable with: toggle_gemini_auto_consult")
        elif result['status'] == 'timeout':
            output_lines.append(f"❌ {result['error']}")
            output_lines.append("💡 Try increasing the timeout or simplifying the query")
        else:  # error
            output_lines.append(f"❌ Error: {result.get('error', 'Unknown error')}")
            output_lines.append("")
            output_lines.append("💡 Troubleshooting:")
            output_lines.append("  1. Check if Gemini CLI is installed and in PATH")
            output_lines.append("  2. Verify Gemini CLI authentication")
            output_lines.append("  3. Check the logs for more details")
        return [types.TextContent(type="text", text="\n".join(output_lines))]

    async def _get_gemini_status(self) -> List[types.TextContent]:
        """Get Gemini integration status and statistics."""
        output_lines = []
        output_lines.append("🤖 Gemini Integration Status")
        output_lines.append("=" * 40)
        output_lines.append("")
        # Configuration status
        output_lines.append("⚙️ Configuration:")
        output_lines.append(f"  • Enabled: {'✅ Yes' if self.gemini.enabled else '❌ No'}")
        output_lines.append(f"  • Auto-consult: {'✅ Yes' if self.gemini.auto_consult else '❌ No'}")
        output_lines.append(f"  • CLI command: {self.gemini.cli_command}")
        output_lines.append(f"  • Timeout: {self.gemini.timeout}s")
        output_lines.append(f"  • Rate limit: {self.gemini.rate_limit_delay}s")
        output_lines.append("")
        # Check if Gemini CLI is available
        try:
            # Test with a simple prompt rather than --version (which may not be supported)
            check_process = await asyncio.create_subprocess_exec(
                self.gemini.cli_command, "-p", "test",
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE
            )
            stdout, stderr = await asyncio.wait_for(check_process.communicate(), timeout=10)
            if check_process.returncode == 0:
                output_lines.append("✅ Gemini CLI is available and working")
                # Try to get version info from help or other means
                try:
                    help_process = await asyncio.create_subprocess_exec(
                        self.gemini.cli_command, "--help",
                        stdout=asyncio.subprocess.PIPE,
                        stderr=asyncio.subprocess.PIPE
                    )
                    help_stdout, _ = await help_process.communicate()
                    help_text = help_stdout.decode()
                    # Look for version in help output
                    if "version" in help_text.lower():
                        for line in help_text.split('\n'):
                            if 'version' in line.lower():
                                output_lines.append(f"  {line.strip()}")
                                break
                except Exception:
                    pass
            else:
                error_msg = stderr.decode() if stderr else "Unknown error"
                output_lines.append("❌ Gemini CLI found but not working properly")
                output_lines.append(f"  Command tested: {self.gemini.cli_command}")
                output_lines.append(f"  Error: {error_msg}")
                # Check for authentication issues
                if "authentication" in error_msg.lower() or "api key" in error_msg.lower():
                    output_lines.append("")
                    output_lines.append("🔑 Authentication required:")
                    output_lines.append("  1. Set GEMINI_API_KEY environment variable, or")
                    output_lines.append("  2. Run 'gemini' interactively to authenticate with Google")
        except asyncio.TimeoutError:
            output_lines.append("❌ Gemini CLI test timed out")
            output_lines.append("  This may indicate authentication is required")
        except FileNotFoundError:
            output_lines.append("❌ Gemini CLI not found in PATH")
            output_lines.append(f"  Expected command: {self.gemini.cli_command}")
            output_lines.append("")
            output_lines.append("📦 Installation:")
            output_lines.append("  npm install -g @google/gemini-cli")
            output_lines.append("  OR")
            output_lines.append("  npx @google/gemini-cli")
        except Exception as e:
            output_lines.append(f"❌ Error checking Gemini CLI: {str(e)}")
        output_lines.append("")
        # Consultation statistics
        stats = self.gemini.get_consultation_stats()
        output_lines.append("📊 Consultation Statistics:")
        output_lines.append(f"  • Total consultations: {stats.get('total_consultations', 0)}")
        completed = stats.get('completed_consultations', 0)
        output_lines.append(f"  • Completed: {completed}")
        if completed > 0:
            avg_time = stats.get('average_execution_time', 0)
            output_lines.append(f"  • Average time: {avg_time:.2f}s")
            total_time = sum(
                e.get("execution_time", 0)
                for e in self.gemini.consultation_log
if e.get("status") == "success"
)
output_lines.append(f" β€’ Total time: {total_time:.2f}s")
output_lines.append("")
output_lines.append("πŸ’‘ Usage:")
output_lines.append(" β€’ Direct: Use 'consult_gemini' tool")
output_lines.append(" β€’ Auto: Enable auto-consult for uncertainty detection")
output_lines.append(" β€’ Toggle: Use 'toggle_gemini_auto_consult' tool")
return [types.TextContent(type="text", text="\n".join(output_lines))]
def detect_response_uncertainty(self, response: str) -> Tuple[bool, List[str]]:
"""
Detect uncertainty in a response for potential auto-consultation.
This is a wrapper around the GeminiIntegration's detection.
"""
return self.gemini.detect_uncertainty(response)
async def maybe_consult_gemini(self, response: str, context: str = "") -> Optional[Dict[str, Any]]:
"""
Check if response contains uncertainty and consult Gemini if needed.
Args:
response: The response to check for uncertainty
context: Additional context for the consultation
Returns:
Gemini consultation result if consulted, None otherwise
"""
if not self.gemini.auto_consult or not self.gemini.enabled:
return None
has_uncertainty, patterns = self.detect_response_uncertainty(response)
if has_uncertainty:
# Extract the main question or topic from the response
query = f"Please provide a second opinion on this analysis:\n\n{response}"
# Add uncertainty patterns to context
enhanced_context = f"{context}\n\nUncertainty detected in: {', '.join(patterns)}"
result = await self.gemini.consult_gemini(
query=query,
context=enhanced_context,
comparison_mode=True
)
return result
return None
def run(self):
"""Run the MCP server."""
async def main():
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await self.server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="gemini-mcp-server",
server_version="1.0.0",
capabilities=self.server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
),
)
asyncio.run(main())
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="MCP Server with Gemini Integration")
parser.add_argument(
"--project-root",
type=str,
default=".",
help="Path to the project root directory"
)
parser.add_argument(
"--port",
type=int,
default=None,
help="Port for HTTP mode (if specified, runs as HTTP server instead of stdio)"
)
args = parser.parse_args()
# Check if running in container - exit with instructions if true
check_container_and_exit()
# If port is specified, run as HTTP server (for backward compatibility/testing)
if args.port:
print("Warning: Running in HTTP mode. For production, use stdio mode (no --port argument)")
# Import and run the HTTP server
try:
from gemini_mcp_server_http import run_http_server
run_http_server(args.port)
except ImportError:
print("Error: gemini_mcp_server_http.py not found", file=sys.stderr)
print("HTTP mode requires the HTTP server implementation file", file=sys.stderr)
sys.exit(1)
else:
# Run as stdio MCP server (recommended)
server = MCPServer(args.project_root)
server.run()
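The `detect_response_uncertainty` wrapper above delegates to `GeminiIntegration.detect_uncertainty`, which lives in `gemini_integration.py` and is not shown in this file. As a rough, hypothetical sketch of how such pattern matching could work (the phrase list and function shape are illustrative assumptions, not the actual implementation):

```python
import re
from typing import List, Tuple

# Hypothetical phrase list; the real gemini_integration.py may use a
# different (and likely longer) set of patterns.
UNCERTAINTY_PATTERNS = [
    r"\bi'?m not sure\b",
    r"\bnot certain\b",
    r"\bit might\b",
    r"\bpossibly\b",
    r"\bseveral (options|approaches)\b",
    r"\bconsider multiple\b",
]

def detect_uncertainty(response: str) -> Tuple[bool, List[str]]:
    """Return (has_uncertainty, matched_patterns) for a response string."""
    text = response.lower()
    matched = [p for p in UNCERTAINTY_PATTERNS if re.search(p, text)]
    return bool(matched), matched
```

Regex matching on lowercased text keeps the check cheap enough to run on every response before deciding whether a Gemini consultation is worthwhile.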
#!/usr/bin/env python3
"""
HTTP-based Gemini MCP Server
Provides REST API interface for Gemini consultation (for testing/development)
"""
import os
import sys
from pathlib import Path
from typing import Any, Dict, Optional
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import uvicorn
# Check if running in container BEFORE any other imports or operations
def check_container_and_exit():
"""Check if running in a container and exit immediately if true."""
if os.path.exists("/.dockerenv") or os.environ.get("CONTAINER_ENV"):
print("ERROR: Gemini MCP Server cannot run inside a container!", file=sys.stderr)
print(
"The Gemini CLI requires Docker access and must run on the host system.",
file=sys.stderr,
)
print("Please launch this server directly on the host with:", file=sys.stderr)
print(" python gemini_mcp_server_http.py", file=sys.stderr)
sys.exit(1)
# Perform container check immediately
check_container_and_exit()
# Now import the integration module
from gemini_integration import get_integration
# Initialize FastAPI app
app = FastAPI(
title="Gemini MCP Server (HTTP Mode)",
version="1.0.0",
description="HTTP interface for Gemini CLI integration"
)
class ConsultRequest(BaseModel):
query: str
context: Optional[str] = ""
comparison_mode: Optional[bool] = True
force: Optional[bool] = False
class ToggleRequest(BaseModel):
enable: Optional[bool] = None
class ConsultResponse(BaseModel):
status: str
response: Optional[str] = None
error: Optional[str] = None
consultation_id: Optional[str] = None
execution_time: Optional[float] = None
timestamp: Optional[str] = None
# Initialize Gemini integration
project_root = Path(os.environ.get("PROJECT_ROOT", "."))
config = {
'enabled': os.getenv('GEMINI_ENABLED', 'true').lower() == 'true',
'auto_consult': os.getenv('GEMINI_AUTO_CONSULT', 'true').lower() == 'true',
'cli_command': os.getenv('GEMINI_CLI_COMMAND', 'gemini'),
'timeout': int(os.getenv('GEMINI_TIMEOUT', '60')),
'rate_limit_delay': float(os.getenv('GEMINI_RATE_LIMIT', '2')),
'max_context_length': int(os.getenv('GEMINI_MAX_CONTEXT', '4000')),
'log_consultations': os.getenv('GEMINI_LOG_CONSULTATIONS', 'true').lower() == 'true',
'model': os.getenv('GEMINI_MODEL', 'gemini-2.5-flash'),
}
# Get the singleton instance
gemini = get_integration(config)
@app.get("/")
async def root():
"""Root endpoint showing server info"""
return {
"name": "Gemini MCP Server (HTTP Mode)",
"version": "1.0.0",
"mode": "http",
"endpoints": {
"health": "/health",
"tools": "/mcp/tools",
"consult": "/mcp/tools/consult_gemini",
"status": "/mcp/tools/gemini_status",
"toggle": "/mcp/tools/toggle_gemini_auto_consult",
"clear_history": "/mcp/tools/clear_gemini_history"
}
}
@app.get("/health")
async def health():
"""Health check endpoint"""
return {"status": "healthy", "mode": "http"}
@app.get("/mcp/tools")
async def list_tools():
"""List available MCP tools"""
return {
"tools": [
{
"name": "consult_gemini",
"description": "Consult Gemini CLI for a second opinion or validation",
"parameters": {
"query": {"type": "string", "required": True},
"context": {"type": "string", "required": False},
"comparison_mode": {"type": "boolean", "required": False, "default": True},
"force": {"type": "boolean", "required": False, "default": False}
}
},
{
"name": "gemini_status",
"description": "Get Gemini integration status and statistics",
"parameters": {}
},
{
"name": "toggle_gemini_auto_consult",
"description": "Toggle automatic Gemini consultation on uncertainty detection",
"parameters": {
"enable": {"type": "boolean", "required": False}
}
},
{
"name": "clear_gemini_history",
"description": "Clear Gemini conversation history",
"parameters": {}
}
]
}
@app.post("/mcp/tools/consult_gemini", response_model=ConsultResponse)
async def consult_gemini_endpoint(request: ConsultRequest):
"""Consult Gemini for a second opinion"""
if not request.query:
raise HTTPException(status_code=400, detail="Query parameter is required")
result = await gemini.consult_gemini(
query=request.query,
context=request.context,
comparison_mode=request.comparison_mode,
force_consult=request.force
)
return ConsultResponse(**result)
@app.get("/mcp/tools/gemini_status")
async def gemini_status_endpoint():
"""Get Gemini integration status and statistics"""
import asyncio
# Build status information
status = {
"configuration": {
"enabled": gemini.enabled,
"auto_consult": gemini.auto_consult,
"cli_command": gemini.cli_command,
"timeout": gemini.timeout,
"rate_limit": gemini.rate_limit_delay,
"model": gemini.model
},
"statistics": gemini.get_consultation_stats()
}
# Check CLI availability
try:
check_process = await asyncio.create_subprocess_exec(
gemini.cli_command, "-p", "test",
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await asyncio.wait_for(check_process.communicate(), timeout=5)
status["cli_available"] = check_process.returncode == 0
if not status["cli_available"]:
status["cli_error"] = stderr.decode() if stderr else "Unknown error"
except Exception as e:
status["cli_available"] = False
status["cli_error"] = str(e)
return status
@app.post("/mcp/tools/toggle_gemini_auto_consult")
async def toggle_auto_consult_endpoint(request: ToggleRequest):
"""Toggle automatic Gemini consultation"""
if request.enable is None:
# Toggle current state
gemini.auto_consult = not gemini.auto_consult
else:
gemini.auto_consult = bool(request.enable)
return {
"status": "success",
"auto_consult": gemini.auto_consult,
"message": f"Gemini auto-consultation is now {'enabled' if gemini.auto_consult else 'disabled'}"
}
@app.post("/mcp/tools/clear_gemini_history")
async def clear_history_endpoint():
"""Clear Gemini conversation history"""
result = gemini.clear_conversation_history()
return result
def run_http_server(port: int):
"""Run the HTTP server"""
host = os.environ.get("GEMINI_MCP_HOST", "127.0.0.1")
print(f"Starting Gemini MCP Server in HTTP mode on {host}:{port}")
print("WARNING: HTTP mode is for testing only. Use stdio mode for production.")
print(f"Access the API at: http://{host}:{port}")
print(f"Health check: http://{host}:{port}/health")
print(f"API docs: http://{host}:{port}/docs")
uvicorn.run(app, host=host, port=port)
if __name__ == "__main__":
# Check if running in container
check_container_and_exit()
# Default port from environment or 8006
default_port = int(os.environ.get("GEMINI_MCP_PORT", "8006"))
# Run server
run_http_server(default_port)
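The configuration dict above is assembled entirely from environment variables with inline defaults. The same defaulting pattern, isolated into a small standalone helper (the helper name and explicit `env` parameter are illustrative, not part of the server):

```python
import os

def load_gemini_config(env=os.environ) -> dict:
    """Build a Gemini config dict from environment variables, mirroring
    the defaults used by the HTTP server above."""
    def flag(name, default):
        # Boolean flags are the string 'true'/'false', case-insensitive.
        return env.get(name, default).lower() == 'true'

    return {
        'enabled': flag('GEMINI_ENABLED', 'true'),
        'auto_consult': flag('GEMINI_AUTO_CONSULT', 'true'),
        'cli_command': env.get('GEMINI_CLI_COMMAND', 'gemini'),
        'timeout': int(env.get('GEMINI_TIMEOUT', '60')),
        'rate_limit_delay': float(env.get('GEMINI_RATE_LIMIT', '2')),
        'model': env.get('GEMINI_MODEL', 'gemini-2.5-flash'),
    }
```

Passing `env` explicitly makes the defaulting logic trivial to unit-test with a plain dict instead of mutating the process environment.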
#!/bin/bash
# Start Gemini MCP Server in either stdio or HTTP mode
# Default to stdio mode
MODE="stdio"
# Check for --http flag
if [[ "$1" == "--http" ]]; then
MODE="http"
fi
# Set default environment variables if not already set
export GEMINI_ENABLED="${GEMINI_ENABLED:-true}"
export GEMINI_AUTO_CONSULT="${GEMINI_AUTO_CONSULT:-true}"
export GEMINI_CLI_COMMAND="${GEMINI_CLI_COMMAND:-gemini}"
export GEMINI_TIMEOUT="${GEMINI_TIMEOUT:-200}"
export GEMINI_RATE_LIMIT="${GEMINI_RATE_LIMIT:-2}"
export GEMINI_MODEL="${GEMINI_MODEL:-gemini-2.5-flash}"
export GEMINI_MCP_PORT="${GEMINI_MCP_PORT:-8006}"
export GEMINI_MCP_HOST="${GEMINI_MCP_HOST:-127.0.0.1}"
# Show current configuration
echo "πŸ€– Gemini MCP Server Startup"
echo "============================"
echo "Mode: $MODE"
echo "Environment:"
echo " GEMINI_ENABLED=$GEMINI_ENABLED"
echo " GEMINI_AUTO_CONSULT=$GEMINI_AUTO_CONSULT"
echo " GEMINI_CLI_COMMAND=$GEMINI_CLI_COMMAND"
echo " GEMINI_MODEL=$GEMINI_MODEL"
echo ""
if [[ "$MODE" == "http" ]]; then
echo "Starting HTTP server on $GEMINI_MCP_HOST:$GEMINI_MCP_PORT..."
echo ""
# Start HTTP server
    python3 gemini_mcp_server.py --port "$GEMINI_MCP_PORT"
# If server stops, show how to test it
echo ""
echo "Server stopped. To test the HTTP server:"
echo " curl http://$GEMINI_MCP_HOST:$GEMINI_MCP_PORT/health"
echo " curl http://$GEMINI_MCP_HOST:$GEMINI_MCP_PORT/mcp/tools"
else
echo "stdio mode instructions:"
echo ""
echo "To use with Claude Code, add this to your MCP settings:"
echo ""
echo '{'
echo ' "mcpServers": {'
echo ' "gemini": {'
echo ' "command": "python3",'
echo " \"args\": [\"$(pwd)/gemini_mcp_server.py\", \"--project-root\", \".\"],"
echo " \"cwd\": \"$(pwd)\","
echo ' "env": {'
echo ' "GEMINI_ENABLED": "true",'
echo ' "GEMINI_AUTO_CONSULT": "true",'
echo ' "GEMINI_CLI_COMMAND": "gemini"'
echo ' }'
echo ' }'
echo ' }'
echo '}'
echo ""
echo "Or run directly for testing:"
echo " python3 gemini_mcp_server.py --project-root ."
echo ""
echo "For HTTP mode, run: $0 --http"
fi
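The line-by-line `echo` block above works, but a heredoc is less error-prone for generating the settings JSON, and the result can be sanity-checked before pasting it into Claude Code's MCP settings. A sketch using the same file names as the script (`mcp-settings.json` is an arbitrary scratch file):

```shell
# Generate the MCP settings JSON with a heredoc instead of echo-per-line.
cat > mcp-settings.json <<EOF
{
  "mcpServers": {
    "gemini": {
      "command": "python3",
      "args": ["$(pwd)/gemini_mcp_server.py", "--project-root", "."],
      "cwd": "$(pwd)",
      "env": {
        "GEMINI_ENABLED": "true",
        "GEMINI_AUTO_CONSULT": "true",
        "GEMINI_CLI_COMMAND": "gemini"
      }
    }
  }
}
EOF

# Validate that the generated file is well-formed JSON before using it.
python3 -m json.tool mcp-settings.json > /dev/null && echo "settings OK"
```

Note that `$(pwd)` expands inside the heredoc, so the generated paths are absolute for the directory the script is run from.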
#!/usr/bin/env python3
"""
Test script for Gemini MCP Server
Tests both stdio and HTTP modes
"""
import argparse
import asyncio
import json
import os
import sys
import requests
def test_http_server(verbose=False):
"""Test HTTP server endpoints"""
host = os.environ.get("GEMINI_MCP_HOST", "127.0.0.1")
port = int(os.environ.get("GEMINI_MCP_PORT", "8006"))
base_url = f"http://{host}:{port}"
print(f"πŸ§ͺ Testing HTTP server at {base_url}")
print("=" * 50)
# Test 1: Health check
print("\n1. Testing health endpoint...")
try:
response = requests.get(f"{base_url}/health", timeout=5)
if response.status_code == 200:
print("βœ… Health check passed")
if verbose:
print(f" Response: {response.json()}")
else:
print(f"❌ Health check failed: {response.status_code}")
return False
except Exception as e:
print(f"❌ Cannot connect to server: {e}")
print("\nMake sure the HTTP server is running:")
print(" python3 gemini_mcp_server.py --port 8006")
return False
# Test 2: Root endpoint
print("\n2. Testing root endpoint...")
try:
        response = requests.get(base_url, timeout=5)
if response.status_code == 200:
print("βœ… Root endpoint accessible")
if verbose:
print(f" Endpoints: {json.dumps(response.json()['endpoints'], indent=2)}")
else:
print(f"❌ Root endpoint failed: {response.status_code}")
except Exception as e:
print(f"❌ Root endpoint error: {e}")
# Test 3: List tools
print("\n3. Testing tools listing...")
try:
response = requests.get(f"{base_url}/mcp/tools")
if response.status_code == 200:
tools = response.json()["tools"]
print(f"βœ… Found {len(tools)} tools:")
for tool in tools:
print(f" - {tool['name']}: {tool['description']}")
else:
print(f"❌ Tools listing failed: {response.status_code}")
except Exception as e:
print(f"❌ Tools listing error: {e}")
# Test 4: Gemini status
print("\n4. Testing Gemini status...")
try:
response = requests.get(f"{base_url}/mcp/tools/gemini_status")
if response.status_code == 200:
status = response.json()
print("βœ… Status endpoint working")
print(f" Gemini enabled: {status['configuration']['enabled']}")
print(f" Auto-consult: {status['configuration']['auto_consult']}")
print(f" CLI available: {status.get('cli_available', 'Unknown')}")
if not status.get('cli_available') and verbose:
print(f" CLI error: {status.get('cli_error', 'Unknown')}")
else:
print(f"❌ Status check failed: {response.status_code}")
except Exception as e:
print(f"❌ Status check error: {e}")
# Test 5: Simple consultation (if Gemini is available)
print("\n5. Testing Gemini consultation...")
try:
payload = {
"query": "What is 2+2?",
"context": "This is a test query",
"comparison_mode": False
}
response = requests.post(
f"{base_url}/mcp/tools/consult_gemini",
json=payload,
timeout=30
)
if response.status_code == 200:
result = response.json()
if result["status"] == "success":
print("βœ… Consultation successful")
if verbose:
print(f" Response preview: {result['response'][:100]}...")
print(f" Execution time: {result.get('execution_time', 0):.2f}s")
else:
print(f"⚠️ Consultation returned status: {result['status']}")
if result.get("error"):
print(f" Error: {result['error']}")
else:
print(f"❌ Consultation failed: {response.status_code}")
if verbose:
print(f" Response: {response.text}")
except Exception as e:
print(f"❌ Consultation error: {e}")
print("\n" + "=" * 50)
print("βœ… HTTP server tests completed")
return True
async def test_stdio_server(verbose=False):
"""Test stdio server basic connectivity"""
print("πŸ§ͺ Testing stdio server")
print("=" * 50)
print("\n1. Testing basic MCP protocol...")
print(" Note: Full stdio testing requires MCP client setup")
# Try to import and initialize the server
try:
from gemini_mcp_server import MCPServer
server = MCPServer()
print("βœ… Server initialization successful")
# Test configuration loading
print("\n2. Testing configuration...")
print(f" Gemini enabled: {server.gemini.enabled}")
print(f" Auto-consult: {server.gemini.auto_consult}")
print(f" CLI command: {server.gemini.cli_command}")
# Test uncertainty detection
print("\n3. Testing uncertainty detection...")
test_phrases = [
"I'm not sure about this approach",
"This is definitely the right way",
"We should consider multiple options"
]
for phrase in test_phrases:
has_uncertainty, patterns = server.detect_response_uncertainty(phrase)
status = "βœ… Uncertain" if has_uncertainty else "❌ Certain"
print(f" '{phrase[:30]}...' -> {status}")
if verbose and has_uncertainty:
print(f" Patterns: {', '.join(patterns)}")
print("\n" + "=" * 50)
print("βœ… stdio server tests completed")
print("\nFor full stdio testing, configure Claude Code with:")
print(' python3 gemini_mcp_server.py --project-root .')
except ImportError as e:
print(f"❌ Cannot import server: {e}")
print("\nMake sure you're in the correct directory with:")
print(" - gemini_mcp_server.py")
print(" - gemini_integration.py")
return False
except Exception as e:
print(f"❌ Server initialization error: {e}")
return False
return True
def main():
parser = argparse.ArgumentParser(description="Test Gemini MCP Server")
parser.add_argument(
"--mode",
choices=["http", "stdio"],
default="stdio",
help="Server mode to test (default: stdio)"
)
parser.add_argument(
"--verbose",
"-v",
action="store_true",
help="Show detailed output"
)
args = parser.parse_args()
print("πŸ€– Gemini MCP Server Test Suite")
print("==============================")
print(f"Testing mode: {args.mode}")
print("")
if args.mode == "http":
success = test_http_server(args.verbose)
else:
success = asyncio.run(test_stdio_server(args.verbose))
sys.exit(0 if success else 1)
if __name__ == "__main__":
main()
#!/usr/bin/env python3
"""Test Gemini state management with automatic history inclusion."""
import asyncio
from gemini_integration import get_integration
async def test_automatic_state():
"""Test if Gemini automatically maintains state through history."""
print("πŸ§ͺ Testing Automatic Gemini State Management")
print("=" * 50)
# Use singleton
gemini = get_integration({
'enabled': True,
'cli_command': 'gemini',
'timeout': 30,
'include_history': True, # This enables automatic history inclusion
'max_history_entries': 10,
'debug_mode': False
})
# Clear any existing history
gemini.clear_conversation_history()
print("\n1️⃣ First Question: What is 2+2?")
print("-" * 30)
try:
response1 = await gemini.consult_gemini(
query="What is 2+2?",
context="", # No context needed
force_consult=True
)
if response1.get('status') == 'success':
response_text = response1.get('response', '')
print(f"βœ… Success! Gemini responded.")
# Find and show the answer
lines = response_text.strip().split('\n')
for line in lines:
if '4' in line or 'four' in line.lower():
print(f"πŸ“ Found answer: {line.strip()[:100]}...")
break
else:
print(f"❌ Error: {response1.get('error', 'Unknown error')}")
return
except Exception as e:
print(f"❌ Exception: {e}")
return
print(f"\nπŸ“Š Conversation history size: {len(gemini.conversation_history)}")
print("\n2️⃣ Second Question: What is that doubled?")
print(" (No context provided - relying on automatic history)")
print("-" * 30)
try:
# This time, provide NO context at all - let the history do the work
response2 = await gemini.consult_gemini(
query="What is that doubled?",
context="", # Empty context - history should provide the context
force_consult=True
)
if response2.get('status') == 'success':
response_text = response2.get('response', '')
print(f"βœ… Success! Gemini responded.")
# Check if it understood the context
if '8' in response_text or 'eight' in response_text.lower():
print("πŸŽ‰ STATE MAINTAINED! Gemini understood 'that' referred to 4")
print("πŸ“ Found reference to 8 in the response")
# Find and show where 8 appears
for line in response_text.split('\n'):
if '8' in line or 'eight' in line.lower():
print(f"πŸ“ Context: {line.strip()[:100]}...")
break
else:
print("⚠️ Gemini may not have maintained state properly")
print("πŸ“ Response doesn't clearly reference 8")
print(f"First 200 chars: {response_text[:200]}...")
else:
print(f"❌ Error: {response2.get('error', 'Unknown error')}")
except Exception as e:
print(f"❌ Exception: {e}")
print(f"\nπŸ“Š Final conversation history size: {len(gemini.conversation_history)}")
# Test with history disabled
print("\n3️⃣ Testing with History Disabled")
print("-" * 30)
# Disable history
gemini.include_history = False
gemini.clear_conversation_history()
# Ask first question
await gemini.consult_gemini(
query="What is 3+3?",
context="",
force_consult=True
)
# Ask follow-up
response3 = await gemini.consult_gemini(
query="What is that tripled?",
context="",
force_consult=True
)
if response3.get('status') == 'success':
response_text = response3.get('response', '')
if '18' in response_text or 'eighteen' in response_text.lower():
print("❌ UNEXPECTED: Found 18 even without history!")
else:
print("βœ… EXPECTED: Without history, Gemini doesn't understand 'that'")
# Show what Gemini says when it doesn't have context
print(f"Response preview: {response_text[:200]}...")
print("\nβœ… Test complete!")
if __name__ == "__main__":
asyncio.run(test_automatic_state())
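The state test above relies on the `include_history` option in `gemini_integration.py`, which is not included in this gist. The likely mechanism is simple: since each CLI invocation is stateless, recent Q/A pairs are prepended to every new prompt. A hypothetical sketch of that prompt assembly (the function name, entry keys, and text format are assumptions, not the actual implementation):

```python
from typing import Dict, List

def build_prompt_with_history(history: List[Dict[str, str]],
                              query: str,
                              max_entries: int = 10) -> str:
    """Prepend the most recent Q/A exchanges to a new query so that a
    stateless per-invocation CLI still sees prior conversation context."""
    parts = []
    for entry in history[-max_entries:]:
        parts.append(f"Previous question: {entry['query']}")
        parts.append(f"Previous answer: {entry['response']}")
    parts.append(f"Current question: {query}")
    return "\n\n".join(parts)
```

With history included, a follow-up like "What is that doubled?" carries the earlier "What is 2+2?" exchange along, which is exactly what the test's second question depends on; capping at `max_entries` keeps the prompt within the CLI's context budget.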