You're evaluating three different MCP server solutions for exposing your media services (Radarr, Sonarr, Plex, Prowlarr, etc.) as MCP tools:
- IBM mcp-context-forge ⭐ RECOMMENDED
- sitbon/magg
- apollographql/apollo-mcp-server
Solution | Stars | Language | License | Latest Release | Maturity |
---|---|---|---|---|---|
mcp-context-forge | 2,655 | Python | Apache 2.0 | v0.8.0 (Oct 2025) | Production-ready |
magg | 89 | Python | AGPL-3.0 | v0.10.1 (Aug 2025) | Active development |
apollo-mcp-server | 214 | Rust | MIT | v1.0.0 (Oct 2025) | Stable |
IBM mcp-context-forge
Purpose: MCP Gateway & Registry - converts REST APIs to MCP format with enterprise-grade features
Architecture:
- Python-based server with multiple transport protocols (HTTP, JSON-RPC, WebSocket, SSE, stdio)
- Admin UI for real-time management
- Built-in authentication, retries, rate-limiting
- OpenTelemetry observability
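Because the gateway speaks standard MCP over these transports, any MCP client can connect over HTTP and enumerate whatever tools you register behind it. Below is a minimal sketch using the official `mcp` Python SDK; the `/mcp` endpoint path, the port, and the absence of auth headers are assumptions to verify against the context-forge docs for your version.

```python
# list_gateway_tools.py - minimal sketch; assumes the gateway exposes
# Streamable HTTP MCP at http://localhost:3000/mcp (verify against your install).
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

GATEWAY_URL = "http://localhost:3000/mcp"  # assumed path; adjust as needed


async def main() -> None:
    # Open the HTTP transport, run the MCP handshake, then list registered tools.
    async with streamablehttp_client(GATEWAY_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(f"{tool.name}: {tool.description}")


if __name__ == "__main__":
    asyncio.run(main())
```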
Key Features:
- ✅ REST API to MCP conversion - Perfect for Radarr, Sonarr, Plex
- ✅ Admin web UI for management
- ✅ Production-grade security (JWT, OAuth, rate limiting)
- ✅ Docker deployment ready
- ✅ Federation across multiple services
- ✅ Built-in retry logic and error handling
- ✅ OpenTelemetry monitoring
- ✅ Virtual MCP server composition
- ✅ Multiple authentication methods
Deployment:
# Docker (easiest for your LXC setup)
docker run -p 3000:3000 ghcr.io/ibm/mcp-context-forge:latest
# Or PyPI
pip install mcp-context-forge
Your Use Case Fit: ⭐⭐⭐⭐⭐ (Perfect)
- REST API services (Radarr: 7878, Sonarr: 8989, Plex: 32400)
- Docker deployment in LXC 103 or new LXC 106
- Production security needs
- Admin UI for managing multiple services
- ngrok tunnel compatibility
Pros:
- ✅ Built by IBM - enterprise support and maintenance
- ✅ Specifically designed for REST API virtualization
- ✅ Most mature solution (2655 stars)
- ✅ Comprehensive security features
- ✅ Admin UI for easy management
- ✅ Excellent documentation
Cons:
- ⚠️ Larger footprint (more features = more resources)
- ⚠️ May be overkill if you only need simple aggregation
sitbon/magg
Purpose: Meta-server that manages, aggregates, and proxies other MCP servers - a "package manager for LLM tools"
Architecture:
- Python-based aggregator
- Multiple transport modes (stdio, HTTP, hybrid)
- "Kit" system for dynamic tool loading
- FastMCP and Pydantic-based
Key Features:
- ✅ Dynamic MCP server management
- ✅ Real-time tool discovery
- ✅ Kit-based tool organization
- ✅ Smart configuration via MCP sampling
- ✅ Docker support
- ✅ RSA-based JWT authentication
- ✅ Multiple transport support
- ✅ Configuration hot-reload
- ⚠️ Requires existing MCP servers (doesn't convert REST APIs itself)
Deployment:
# PyPI
pip install magg
# Docker
docker run -p 8000:8000 sitbon/magg
Your Use Case Fit: ⭐⭐⭐ (Moderate)
- Would need separate MCP servers for each service first
- Then use Magg to aggregate them
- Two-tier architecture: REST→MCP servers→Magg→LLM
- More complex setup than context-forge
Pros:
- ✅ Dynamic tool management (install/uninstall at runtime)
- ✅ "Kit" system for organizing related tools
- ✅ Lightweight compared to context-forge
- ✅ Active development (updated Oct 2025)
- ✅ AGPL-3.0 license (copyleft)
Cons:
- ⚠️ Does NOT convert REST APIs to MCP - you'd need to build those servers first
- ⚠️ Smaller community (89 stars)
- ⚠️ AGPL-3.0 license requires sharing modifications
- ⚠️ Adds an extra layer of complexity
- ⚠️ Less documentation than context-forge
Architecture if using Magg:
Radarr REST API → Custom MCP server ─┐
Sonarr REST API → Custom MCP server ─┤
Plex REST API → Custom MCP server ───┼→ Magg → ngrok → OpenAI
Prowlarr REST API → Custom MCP server┘
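Each "Custom MCP server" box above is code you would write and maintain yourself, since magg only aggregates servers that already speak MCP. To give a sense of the tier-1 work involved, the sketch below shows the kind of raw REST call one Sonarr wrapper would need to expose as a tool; the `/api/v3/series` endpoint and `X-Api-Key` header are Sonarr's standard v3 API, and the key value is a placeholder.

```python
# sonarr_rest_call.py - the plain REST request a custom Sonarr MCP server
# would have to wrap before magg could aggregate it.
import requests

SONARR_URL = "http://localhost:8989/api/v3"
SONARR_API_KEY = "YOUR_SONARR_API_KEY"  # placeholder


def list_series() -> list[dict]:
    """Fetch all series from Sonarr's v3 REST API."""
    response = requests.get(
        f"{SONARR_URL}/series",
        headers={"X-Api-Key": SONARR_API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    for series in list_series():
        print(series.get("title"))
```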
apollographql/apollo-mcp-server
Purpose: Exposes GraphQL operations as MCP tools - a GraphQL-first approach
Architecture:
- Rust-based server
- Designed for Apollo GraphQL ecosystem
- Requires GraphQL schema and operations
Key Features:
- ✅ GraphQL to MCP conversion
- ✅ Built in Rust (performance)
- ✅ MIT license
- ✅ Standardized API access
- ⚠️ Requires GraphQL (Radarr/Sonarr/Plex expose REST, not GraphQL)
Deployment:
# Build from source
cargo build --release
Your Use Case Fit: ⭐ (Poor)
- Radarr, Sonarr, and Plex use REST APIs, not GraphQL
- You'd need to set up a GraphQL wrapper layer first
- Designed for Apollo GraphQL ecosystem
- Wrong tool for REST API services
Pros:
- ✅ Built in Rust (high performance)
- ✅ Official Apollo project
- ✅ MIT license (permissive)
- ✅ GraphQL native (if you have GraphQL APIs)
Cons:
- ❌ Your services don't use GraphQL
- ⚠️ Would require a GraphQL wrapper layer
- ⚠️ Designed for a specific ecosystem (Apollo)
- ⚠️ Requires the Rust toolchain to build
- ⚠️ Less flexible for REST APIs
Architecture if using Apollo MCP:
Radarr REST API ─┐
Sonarr REST API ─┼→ GraphQL Wrapper → Apollo MCP → ngrok → OpenAI
Plex REST API ───┘
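To make that extra layer concrete, the sketch below is a hypothetical GraphQL wrapper using Strawberry (a library chosen here for illustration, not mentioned in the original) with hardcoded data; wiring it to the real Radarr/Sonarr/Plex REST APIs would still be additional work on top of running apollo-mcp-server itself.

```python
# graphql_wrapper_sketch.py - hypothetical GraphQL layer that apollo-mcp-server
# would need in front of the REST services. Strawberry is an assumed choice.
import strawberry


@strawberry.type
class Query:
    @strawberry.field
    def movies(self) -> list[str]:
        # A real wrapper would call Radarr's REST API here;
        # hardcoded to keep the sketch self-contained.
        return ["The Matrix (1999)", "Dune (2021)"]


schema = strawberry.Schema(query=Query)

if __name__ == "__main__":
    result = schema.execute_sync("{ movies }")
    print(result.data)  # {'movies': ['The Matrix (1999)', 'Dune (2021)']}
```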
Feature | mcp-context-forge | magg | apollo-mcp-server |
---|---|---|---|
REST API Conversion | ✅ Built-in | ❌ Requires separate servers | ❌ GraphQL only |
Docker Deployment | ✅ Ready | ✅ Ready | ⚠️ Build from source |
Admin UI | ✅ Yes | ❌ No | ❌ No |
Security Features | ✅ JWT, OAuth, rate limiting | ✅ JWT | |
ngrok Compatible | ✅ Yes | ✅ Yes | ✅ Yes |
Setup Complexity | ⭐⭐ (Medium) | ⭐⭐⭐⭐ (High) | ⭐⭐⭐⭐⭐ (Very High) |
Documentation | ⭐⭐⭐⭐⭐ (Excellent) | ⭐⭐⭐ (Good) | ⭐⭐ (Limited) |
Maintenance Burden | ⭐ (Low - IBM) | ⭐⭐⭐ (Medium) | ⭐⭐⭐⭐ (High) |
Resource Usage | Medium | Low | Low |
Community Support | Large (2655 stars) | Small (89 stars) | Medium (214 stars) |
License | Apache 2.0 (permissive) | AGPL-3.0 (copyleft) | MIT (permissive) |
Why mcp-context-forge?
1. ✅ Built for Your Exact Use Case:
- Converts REST APIs directly to MCP format
- No need for intermediate servers or wrappers
- Your services (Radarr, Sonarr, Plex) all use REST APIs
2. ✅ Production-Ready Security:
- JWT authentication
- Rate limiting
- OAuth support
- Perfect for exposing services publicly via ngrok
3. ✅ Easy Management:
- Admin web UI
- Real-time monitoring
- OpenTelemetry observability
- No command-line management needed
4. ✅ Docker Integration:
- Official Docker images
- Perfect for LXC 103 or new LXC 106
- Docker Compose ready
5. ✅ Enterprise Support:
- Maintained by IBM
- Largest community (2655 stars)
- Best documentation
- Regular updates (v0.8.0 in Oct 2025)
Why not magg?
- Missing Core Feature: Does NOT convert REST APIs to MCP
- You'd need to build 4+ separate MCP servers first (Radarr, Sonarr, Plex, Prowlarr)
- Then aggregate them with Magg
- 2x the complexity with no clear benefit
- AGPL-3.0 license requires sharing modifications
Use magg if: You already have multiple MCP servers and want to dynamically manage/aggregate them.
Why not apollo-mcp-server?
- Wrong Protocol: Designed for GraphQL; your services use REST
- Would require building GraphQL wrapper layer first
- 3x the complexity: REST → GraphQL → MCP → LLM
- Rust build toolchain required
- Narrow use case (Apollo GraphQL ecosystem)
Use apollo-mcp-server if: You have Apollo GraphQL APIs and want to expose them as MCP tools.
┌─────────────────────────────────────────────────────┐
│ LXC 103 (192.168.1.175) │
│ │
│ ┌──────────────────────────────────────────────┐ │
│ │ Docker Container: mcp-context-forge │ │
│ │ Port: 3000 │ │
│ │ │ │
│ │ ┌─────────────────────────────────────┐ │ │
│ │ │ Virtual MCP Servers: │ │ │
│ │ │ │ │ │
│ │ │ • Radarr (→ localhost:7878) │ │ │
│ │ │ • Sonarr (→ localhost:8989) │ │ │
│ │ │ • Plex (→ 192.168.1.219:32400) │ │ │
│ │ │ • Prowlarr (→ localhost:9696) │ │ │
│ │ │ • qBittorrent (→ localhost:8080) │ │ │
│ │ │ • SABnzbd (→ localhost:8085) │ │ │
│ │ └─────────────────────────────────────┘ │ │
│ │ │ │
│ │ [Admin UI: localhost:3000/admin] │ │
│ └──────────────────────────────────────────────┘ │
│ ↓ │
│ Port 3000 exposed │
└─────────────────────────┬───────────────────────────┘
↓
ngrok tunnel
↓
Public HTTPS
↓
OpenAI Agent Builder / Claude Desktop
# 1. SSH into LXC 103
ssh [email protected]
pct enter 103
# 2. Create directory for mcp-context-forge
mkdir -p /opt/mcp-context-forge
cd /opt/mcp-context-forge
# 3. Create docker-compose.yml
cat > docker-compose.yml <<'EOF'
version: '3.8'
services:
  mcp-context-forge:
    image: ghcr.io/ibm/mcp-context-forge:latest
    ports:
      - "3000:3000"
    environment:
      - JWT_SECRET=your-secret-key-here
      - ADMIN_ENABLED=true
    volumes:
      - ./config:/app/config
      - ./data:/app/data
    restart: unless-stopped
    networks:
      - mcp-network
networks:
  mcp-network:
    driver: bridge
EOF
# 4. Create configuration directory
mkdir -p config data
# 5. Start the service
docker-compose up -d
# 6. Install ngrok
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | \
sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && \
echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | \
sudo tee /etc/apt/sources.list.d/ngrok.list && \
sudo apt update && sudo apt install ngrok
# 7. Configure ngrok with your auth token
ngrok config add-authtoken YOUR_NGROK_TOKEN
# 8. Create ngrok tunnel
ngrok http 3000 --log=stdout
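Before pointing ngrok (and your LLM) at the gateway, it's worth a quick reachability check from inside the LXC. The snippet below is only a sketch: `/health` is an assumed status path, so substitute whatever route your context-forge version actually exposes (the Admin UI path works as a fallback).

```python
# smoke_test.py - sketch: confirm the gateway answers locally before tunneling.
# The /health path is an assumption; adjust to your context-forge version.
import sys

import requests

GATEWAY = "http://localhost:3000"

for path in ("/health", "/admin"):
    try:
        resp = requests.get(GATEWAY + path, timeout=5)
        print(f"{path}: HTTP {resp.status_code}")
    except requests.ConnectionError:
        print(f"{path}: connection refused - is the container running?")
        sys.exit(1)
```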
Once mcp-context-forge is running, access the Admin UI at http://192.168.1.175:3000/admin and add your services:
Radarr:
{
  "name": "radarr",
  "type": "rest",
  "baseUrl": "http://localhost:7878/api/v3",
  "authentication": {
    "type": "apikey",
    "header": "X-Api-Key",
    "value": "YOUR_RADARR_API_KEY"
  }
}
Sonarr:
{
  "name": "sonarr",
  "type": "rest",
  "baseUrl": "http://localhost:8989/api/v3",
  "authentication": {
    "type": "apikey",
    "header": "X-Api-Key",
    "value": "YOUR_SONARR_API_KEY"
  }
}
Plex:
{
  "name": "plex",
  "type": "rest",
  "baseUrl": "http://192.168.1.219:32400",
  "authentication": {
    "type": "header",
    "header": "X-Plex-Token",
    "value": "YOUR_PLEX_TOKEN"
  }
}
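It saves debugging time to confirm each key or token actually authenticates before registering the service in the gateway. A sketch follows; the Radarr/Sonarr `system/status` and Plex `library/sections` endpoints are the services' standard APIs, and the placeholder credentials must be replaced with your own.

```python
# check_credentials.py - sketch: verify each service's API key/token
# before adding it to mcp-context-forge. Replace the placeholder values.
import requests

CHECKS = [
    ("radarr", "http://localhost:7878/api/v3/system/status", {"X-Api-Key": "YOUR_RADARR_API_KEY"}),
    ("sonarr", "http://localhost:8989/api/v3/system/status", {"X-Api-Key": "YOUR_SONARR_API_KEY"}),
    ("plex", "http://192.168.1.219:32400/library/sections", {"X-Plex-Token": "YOUR_PLEX_TOKEN"}),
]

for name, url, headers in CHECKS:
    try:
        resp = requests.get(url, headers=headers, timeout=5)
        status = "OK" if resp.status_code == 200 else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"error: {exc}"
    print(f"{name:8s} {status}")
```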
Once configured, your LLM will have access to tools like:
Radarr:
- radarr_list_movies() - List all movies in the library
- radarr_search_movie(title) - Search for a movie
- radarr_add_movie(title, quality_profile) - Add a movie to the download queue
- radarr_get_queue() - Check the download queue
Sonarr:
- sonarr_list_series() - List all TV series
- sonarr_search_series(title) - Search for a series
- sonarr_add_series(title, quality_profile) - Add a series to the download queue
- sonarr_get_episodes(series_id) - List episodes
Plex:
- plex_list_libraries() - List all Plex libraries
- plex_search(query) - Search Plex content
- plex_get_recently_added() - Recently added content
- plex_play_status() - Current playback status
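From a script or agent framework, invoking one of these is a standard MCP `tools/call`. The sketch below uses the `mcp` Python SDK; the tool name and the `/mcp` endpoint path mirror the examples above and are assumptions to check against what the gateway actually registers.

```python
# call_radarr_tool.py - sketch: invoke one gateway tool over MCP.
# Tool name and endpoint path are assumptions; list tools first to confirm.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

GATEWAY_URL = "http://localhost:3000/mcp"  # assumed path


async def main() -> None:
    async with streamablehttp_client(GATEWAY_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.call_tool("radarr_list_movies", {})
            for item in result.content:
                # Text content blocks carry the JSON payload returned by Radarr.
                print(getattr(item, "text", item))


if __name__ == "__main__":
    asyncio.run(main())
```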
User: "What movies do I have?" LLM: calls radarr_list_movies() → "You have 11 movies including..."
User: "Add The Matrix to my collection" LLM: calls radarr_search_movie("The Matrix") → calls radarr_add_movie() → "Added The Matrix (1999) to download queue"
User: "What's currently downloading?" LLM: calls radarr_get_queue() and sonarr_get_queue() → "Currently downloading: Dragon Ball Super episode 5..."
If you decide you want the dynamic tool management features of Magg, you'd need this architecture:
Tier 1: Create MCP Servers (one per service)
# Create simple Python MCP servers using fastmcp
pip install fastmcp
# radarr_server.py
from fastmcp import FastMCP
mcp = FastMCP("radarr")
# ... implement REST API calls as MCP tools
# sonarr_server.py
from fastmcp import FastMCP
mcp = FastMCP("sonarr")
# ... implement REST API calls as MCP tools
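Filling in one of those stubs end to end looks roughly like the sketch below; it assumes FastMCP's decorator API and Radarr's standard v3 endpoint, with a placeholder API key, so treat it as a starting point rather than a finished server.

```python
# radarr_server.py - fuller sketch of a tier-1 FastMCP wrapper for Radarr.
# Endpoint and header follow Radarr's v3 API; the API key is a placeholder.
import requests
from fastmcp import FastMCP

RADARR_URL = "http://localhost:7878/api/v3"
RADARR_API_KEY = "YOUR_RADARR_API_KEY"

mcp = FastMCP("radarr")


@mcp.tool()
def list_movies() -> list[str]:
    """Return the titles of all movies in the Radarr library."""
    resp = requests.get(
        f"{RADARR_URL}/movie",
        headers={"X-Api-Key": RADARR_API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return [movie["title"] for movie in resp.json()]


if __name__ == "__main__":
    # Defaults to stdio transport; magg can then aggregate this server.
    mcp.run()
```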
Tier 2: Run Magg to Aggregate
# Install Magg
pip install magg
# Configure Magg to discover your MCP servers
magg run --http --port 3000
Complexity: 🔴🔴🔴🔴 (Much higher)
- Need to write 4+ MCP servers
- Maintain each server's codebase
- Configure Magg to aggregate them
- Debug multiple moving parts
Benefit: Dynamic tool loading/unloading at runtime (rarely needed)
Reasons:
- Single-tier architecture (REST → MCP → LLM)
- Built-in REST API conversion
- Production-grade security
- Admin UI for management
- IBM maintenance and support
- Docker-ready
- Perfect for your ZimaBoard Proxmox setup
Setup Time: ~30 minutes
Maintenance: Low (Docker Compose + ngrok)
Complexity: ⭐⭐ (Medium)
ssh [email protected]
pct enter 103
cd /opt && git clone https://github.com/IBM/mcp-context-forge.git
cd mcp-context-forge && docker-compose up -d
Generated: October 12, 2025
Comparison: IBM mcp-context-forge vs sitbon/magg vs apollographql/apollo-mcp-server
Use Case: Expose Radarr, Sonarr, Plex, Prowlarr as MCP tools on ZimaBoard Proxmox
Recommendation: ✅ IBM mcp-context-forge (REST API conversion, production security, Docker ready)