Claude Code + Aider MCP Server + GitHub Copilot Provider

@thgaskell
Last updated April 7, 2025

  1. Clone disler/aider-mcp-server:
git clone git@github.com:disler/aider-mcp-server.git
  2. Install dependencies using uv:
cd aider-mcp-server
uv sync
  3. Override the LiteLLM dependency with the one from the experimental branch:
uv pip install git+https://github.com/BerriAI/litellm.git@litellm_dev_03_05_2025_contributor_prs
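
To confirm the override took effect, here is a minimal sanity check you can run inside the venv (for example with uv run --no-sync python check_litellm.py). Whether the contributor branch registers a github_copilot entry in litellm.provider_list is an assumption inferred from the model metadata below, so treat a False result as a prompt to inspect the branch rather than a hard failure:

# check_litellm.py — hedged sanity check that the experimental LiteLLM
# build (not the PyPI release) is the one installed in this venv.
from importlib.metadata import version

import litellm

# The installed version string should correspond to the git branch.
print("litellm version:", version("litellm"))

# Assumption: the contributor branch registers a "github_copilot"
# provider; this is inferred from the model metadata used below.
print("github_copilot provider registered:",
      "github_copilot" in litellm.provider_list)
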
  4. Replace the Aider model metadata files to support the new models (adjust the python3.12 path segments if your venv uses a different Python version).
.venv/lib/python3.12/site-packages/aider/resources/model-metadata.json
{
  "deepseek-reasoner": {
    "max_tokens": 8192,
    "max_input_tokens": 64000,
    "max_output_tokens": 8192,
    "input_cost_per_token": 0.00000055,
    "input_cost_per_token_cache_hit": 0.00000014,
    "cache_read_input_token_cost": 0.00000014,
    "cache_creation_input_token_cost": 0.0,
    "output_cost_per_token": 0.00000219,
    "litellm_provider": "deepseek",
    "mode": "chat",
    "supports_assistant_prefill": true,
    "supports_prompt_caching": true
  },
  "openrouter/deepseek/deepseek-r1": {
    "max_tokens": 8192,
    "max_input_tokens": 64000,
    "max_output_tokens": 8192,
    "input_cost_per_token": 0.00000055,
    "input_cost_per_token_cache_hit": 0.00000014,
    "cache_read_input_token_cost": 0.00000014,
    "cache_creation_input_token_cost": 0.0,
    "output_cost_per_token": 0.00000219,
    "litellm_provider": "openrouter",
    "mode": "chat",
    "supports_assistant_prefill": true,
    "supports_prompt_caching": true
  },
  "openrouter/deepseek/deepseek-r1:free": {
    "max_tokens": 8192,
    "max_input_tokens": 64000,
    "max_output_tokens": 8192,
    "input_cost_per_token": 0.0,
    "input_cost_per_token_cache_hit": 0.0,
    "cache_read_input_token_cost": 0.0,
    "cache_creation_input_token_cost": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "openrouter",
    "mode": "chat",
    "supports_assistant_prefill": true,
    "supports_prompt_caching": true
  },
  "fireworks_ai/accounts/fireworks/models/deepseek-r1": {
    "max_tokens": 160000,
    "max_input_tokens": 128000,
    "max_output_tokens": 20480,
    "litellm_provider": "fireworks_ai",
    "input_cost_per_token": 0.000008,
    "output_cost_per_token": 0.000008,
    "mode": "chat"
  },
  "fireworks_ai/accounts/fireworks/models/deepseek-v3": {
    "max_tokens": 128000,
    "max_input_tokens": 100000,
    "max_output_tokens": 8192,
    "litellm_provider": "fireworks_ai",
    "input_cost_per_token": 0.0000009,
    "output_cost_per_token": 0.0000009,
    "mode": "chat"
  },
  "o3-mini": {
    "max_tokens": 100000,
    "max_input_tokens": 200000,
    "max_output_tokens": 100000,
    "input_cost_per_token": 0.0000011,
    "output_cost_per_token": 0.0000044,
    "cache_read_input_token_cost": 0.00000055,
    "litellm_provider": "openai",
    "mode": "chat",
    "supports_function_calling": true,
    "supports_parallel_function_calling": true,
    "supports_vision": true,
    "supports_prompt_caching": true,
    "supports_system_messages": true,
    "supports_response_schema": true
  },
  "openrouter/openai/o3-mini": {
    "max_tokens": 100000,
    "max_input_tokens": 200000,
    "max_output_tokens": 100000,
    "input_cost_per_token": 0.0000011,
    "output_cost_per_token": 0.0000044,
    "cache_read_input_token_cost": 0.00000055,
    "litellm_provider": "openrouter",
    "mode": "chat",
    "supports_function_calling": true,
    "supports_parallel_function_calling": true,
    "supports_vision": true,
    "supports_prompt_caching": true,
    "supports_system_messages": true,
    "supports_response_schema": true
  },
  "openrouter/openai/gpt-4o-mini": {
    "max_tokens": 16384,
    "max_input_tokens": 128000,
    "max_output_tokens": 16384,
    "input_cost_per_token": 0.00000015,
    "output_cost_per_token": 0.0000006,
    "input_cost_per_token_batches": 0.000000075,
    "output_cost_per_token_batches": 0.0000003,
    "cache_read_input_token_cost": 0.000000075,
    "litellm_provider": "openrouter",
    "mode": "chat",
    "supports_function_calling": true,
    "supports_parallel_function_calling": true,
    "supports_response_schema": true,
    "supports_vision": true,
    "supports_prompt_caching": true,
    "supports_system_messages": true
  },
  "github_copilot/gpt-3.5-turbo": {
    "max_tokens": 4096,
    "max_input_tokens": 16384,
    "max_output_tokens": 4096,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true
  },
  "github_copilot/gpt-4": {
    "max_tokens": 8192,
    "max_input_tokens": 32768,
    "max_output_tokens": 8192,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_function_calling": true,
    "supports_system_messages": true,
    "supports_repo_map": true
  },
  "github_copilot/gpt-4o": {
    "max_tokens": 32768,
    "max_input_tokens": 128000,
    "max_output_tokens": 32768,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_function_calling": true,
    "supports_vision": true,
    "supports_system_messages": true,
    "supports_repo_map": true
  },
  "github_copilot/gpt-4o-mini": {
    "max_tokens": 16384,
    "max_input_tokens": 128000,
    "max_output_tokens": 16384,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true
  },
  "github_copilot/o1-ga": {
    "max_tokens": 128000,
    "max_input_tokens": 128000,
    "max_output_tokens": 4096,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true,
    "supports_repo_map": true
  },
  "github_copilot/o3-mini": {
    "max_tokens": 100000,
    "max_input_tokens": 200000,
    "max_output_tokens": 100000,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_function_calling": true,
    "supports_vision": true,
    "supports_system_messages": true,
    "supports_repo_map": true,
    "supports_response_schema": true
  },
  "github_copilot/claude-3.5-sonnet": {
    "max_tokens": 200000,
    "max_input_tokens": 180000,
    "max_output_tokens": 20000,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true,
    "supports_repo_map": true,
    "supports_vision": true
  },
  "github_copilot/claude-3.5-haiku": {
    "max_tokens": 200000,
    "max_input_tokens": 180000,
    "max_output_tokens": 20000,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true,
    "supports_repo_map": true,
    "supports_vision": true
  },
  "github_copilot/claude-3.7-sonnet": {
    "max_tokens": 200000,
    "max_input_tokens": 180000,
    "max_output_tokens": 20000,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true,
    "supports_repo_map": true,
    "supports_vision": true
  },
  "github_copilot/claude-3.7-sonnet-thought": {
    "max_tokens": 200000,
    "max_input_tokens": 180000,
    "max_output_tokens": 20000,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true,
    "supports_repo_map": true,
    "supports_vision": true
  },
  "github_copilot/gemini-2.0-flash": {
    "max_tokens": 1048576,
    "max_input_tokens": 1044480,
    "max_output_tokens": 4096,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "github_copilot",
    "mode": "chat",
    "supports_system_messages": true,
    "supports_repo_map": true,
    "supports_vision": true
  }
}
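
After saving the JSON, a quick check that the file parses and the Copilot entries are present can save a confusing failure later. This is a minimal sketch; adjust the python3.12 path segment to match your venv:

# validate_metadata.py — confirm the replaced model-metadata.json is
# valid JSON and list the GitHub Copilot models it declares.
import json
from pathlib import Path

path = Path(
    ".venv/lib/python3.12/site-packages/aider/resources/model-metadata.json"
)
metadata = json.loads(path.read_text())

copilot_models = sorted(
    name for name, cfg in metadata.items()
    if cfg.get("litellm_provider") == "github_copilot"
)
print(f"{len(copilot_models)} github_copilot models:")
print("\n".join(copilot_models))
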
.venv/lib/python3.12/site-packages/aider/resources/model-settings.yml
- name: gpt-3.5-turbo
  weak_model_name: gpt-4o-mini
  reminder: sys

- name: gpt-3.5-turbo-0125
  weak_model_name: gpt-4o-mini
  reminder: sys

- name: gpt-3.5-turbo-1106
  weak_model_name: gpt-4o-mini
  reminder: sys

- name: gpt-3.5-turbo-0613
  weak_model_name: gpt-4o-mini
  reminder: sys

- name: gpt-3.5-turbo-16k-0613
  weak_model_name: gpt-4o-mini
  reminder: sys

- name: gpt-4-turbo-2024-04-09
  edit_format: udiff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys

- name: gpt-4-turbo
  edit_format: udiff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys

- name: openai/gpt-4o
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true
  editor_edit_format: editor-diff

- name: openai/gpt-4o-2024-08-06
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true

- name: gpt-4o-2024-08-06
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true

- name: gpt-4o-2024-11-20
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true

- name: openai/gpt-4o-2024-11-20
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true

- name: gpt-4o
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true
  editor_edit_format: editor-diff

- name: gpt-4o-mini
  weak_model_name: gpt-4o-mini
  lazy: true
  reminder: sys

- name: openai/gpt-4o-mini
  weak_model_name: openai/gpt-4o-mini
  lazy: true
  reminder: sys

- name: gpt-4-0125-preview
  edit_format: udiff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true

- name: gpt-4-1106-preview
  edit_format: udiff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys

- name: gpt-4-vision-preview
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  reminder: sys

- name: gpt-4-0314
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true

- name: gpt-4-0613
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  reminder: sys

- name: gpt-4-32k-0613
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  reminder: sys

- name: claude-3-opus-20240229
  edit_format: diff
  weak_model_name: claude-3-5-haiku-20241022
  use_repo_map: true

- name: openrouter/anthropic/claude-3-opus
  edit_format: diff
  weak_model_name: openrouter/anthropic/claude-3-5-haiku
  use_repo_map: true

- name: claude-3-sonnet-20240229
  weak_model_name: claude-3-5-haiku-20241022

- name: claude-3-5-sonnet-20240620
  edit_format: diff
  weak_model_name: claude-3-5-haiku-20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
    max_tokens: 8192
  cache_control: true
  editor_model_name: claude-3-5-sonnet-20240620
  editor_edit_format: editor-diff

- name: anthropic/claude-3-5-sonnet-20240620
  edit_format: diff
  weak_model_name: anthropic/claude-3-5-haiku-20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
    max_tokens: 8192
  cache_control: true
  editor_model_name: anthropic/claude-3-5-sonnet-20240620
  editor_edit_format: editor-diff

- name: anthropic/claude-3-5-sonnet-20241022
  edit_format: diff
  weak_model_name: anthropic/claude-3-5-haiku-20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
    max_tokens: 8192
  cache_control: true
  editor_model_name: anthropic/claude-3-5-sonnet-20241022
  editor_edit_format: editor-diff

- name: bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
  edit_format: diff
  weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
    max_tokens: 8192
  cache_control: true
  editor_model_name: bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
  editor_edit_format: editor-diff

- name: anthropic/claude-3-5-sonnet-latest
  edit_format: diff
  weak_model_name: anthropic/claude-3-5-haiku-20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
    max_tokens: 8192
  cache_control: true
  editor_model_name: anthropic/claude-3-5-sonnet-20241022
  editor_edit_format: editor-diff

- name: claude-3-5-sonnet-20241022
  edit_format: diff
  weak_model_name: claude-3-5-haiku-20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
    max_tokens: 8192
  cache_control: true
  editor_model_name: claude-3-5-sonnet-20241022
  editor_edit_format: editor-diff

- name: anthropic/claude-3-haiku-20240307
  weak_model_name: anthropic/claude-3-haiku-20240307
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
  cache_control: true

- name: anthropic/claude-3-5-haiku-20241022
  edit_format: diff
  weak_model_name: anthropic/claude-3-5-haiku-20241022
  use_repo_map: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
  cache_control: true

- name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
  edit_format: diff
  weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
  use_repo_map: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
  cache_control: true

- name: claude-3-5-haiku-20241022
  edit_format: diff
  weak_model_name: claude-3-5-haiku-20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
  cache_control: true

- name: vertex_ai/claude-3-5-haiku@20241022
  edit_format: diff
  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
  use_repo_map: true
  extra_params:
    max_tokens: 4096

- name: claude-3-haiku-20240307
  weak_model_name: claude-3-haiku-20240307
  examples_as_sys_msg: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
  cache_control: true

- name: openrouter/anthropic/claude-3.5-sonnet
  edit_format: diff
  weak_model_name: openrouter/anthropic/claude-3-5-haiku
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  cache_control: true
  editor_model_name: openrouter/anthropic/claude-3.5-sonnet
  editor_edit_format: editor-diff

- name: openrouter/anthropic/claude-3.5-sonnet:beta
  edit_format: diff
  weak_model_name: openrouter/anthropic/claude-3-5-haiku:beta
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  cache_control: true
  editor_model_name: openrouter/anthropic/claude-3.5-sonnet:beta
  editor_edit_format: editor-diff

- name: vertex_ai/claude-3-5-sonnet@20240620
  edit_format: diff
  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  editor_model_name: vertex_ai/claude-3-5-sonnet@20240620
  editor_edit_format: editor-diff

- name: vertex_ai/claude-3-5-sonnet-v2@20241022
  edit_format: diff
  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  editor_model_name: vertex_ai/claude-3-5-sonnet-v2@20241022
  editor_edit_format: editor-diff

- name: vertex_ai/claude-3-opus@20240229
  edit_format: diff
  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
  use_repo_map: true

- name: vertex_ai/claude-3-sonnet@20240229
  weak_model_name: vertex_ai/claude-3-5-haiku@20241022

- name: command-r-plus
  weak_model_name: command-r-plus
  use_repo_map: true

- name: command-r-08-2024
  weak_model_name: command-r-08-2024
  use_repo_map: true

- name: command-r-plus-08-2024
  weak_model_name: command-r-plus-08-2024
  use_repo_map: true

- name: groq/llama3-70b-8192
  edit_format: diff
  weak_model_name: groq/llama3-8b-8192
  examples_as_sys_msg: true

- name: openrouter/meta-llama/llama-3-70b-instruct
  edit_format: diff
  weak_model_name: openrouter/meta-llama/llama-3-70b-instruct
  examples_as_sys_msg: true

- name: gemini/gemini-1.5-pro-002
  edit_format: diff
  use_repo_map: true

- name: gemini/gemini-1.5-flash-002

- name: gemini/gemini-1.5-pro
  edit_format: diff-fenced
  use_repo_map: true

- name: gemini/gemini-1.5-pro-latest
  edit_format: diff-fenced
  use_repo_map: true

- name: gemini/gemini-1.5-pro-exp-0827
  edit_format: diff-fenced
  use_repo_map: true

- name: gemini/gemini-exp-1206
  edit_format: diff
  use_repo_map: true

- name: gemini/gemini-exp-1114
  edit_format: diff
  use_repo_map: true

- name: gemini/gemini-exp-1121
  edit_format: diff
  use_repo_map: true

- name: vertex_ai/gemini-pro-experimental
  edit_format: diff-fenced
  use_repo_map: true

- name: gemini/gemini-1.5-flash-exp-0827

- name: gemini/gemini-2.0-flash-exp
  edit_format: diff
  use_repo_map: true

- name: gemini/gemini-2.0-flash
  edit_format: diff
  use_repo_map: true

- name: openrouter/deepseek/deepseek-r1
  edit_format: diff
  weak_model_name: openrouter/deepseek/deepseek-chat
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true
  use_temperature: false
  editor_model_name: openrouter/deepseek/deepseek-chat
  editor_edit_format: editor-diff

- name: openrouter/deepseek/deepseek-r1:free
  edit_format: diff
  weak_model_name: openrouter/deepseek/deepseek-r1:free
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true
  use_temperature: false
  editor_model_name: openrouter/deepseek/deepseek-r1:free
  editor_edit_format: editor-diff

- name: deepseek/deepseek-reasoner
  edit_format: diff
  weak_model_name: deepseek/deepseek-chat
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true
  use_temperature: false
  editor_model_name: deepseek/deepseek-chat
  editor_edit_format: editor-diff

- name: deepseek/deepseek-chat
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true

- name: deepseek/deepseek-coder
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true

- name: deepseek-chat
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192

- name: deepseek-coder
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true

- name: openrouter/deepseek/deepseek-coder
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true

- name: openrouter/deepseek/deepseek-chat
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true

- name: openrouter/openai/gpt-4o
  edit_format: diff
  weak_model_name: openrouter/openai/gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  examples_as_sys_msg: true
  editor_edit_format: editor-diff

- name: openai/o1-mini
  weak_model_name: openai/gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  editor_model_name: openai/gpt-4o
  editor_edit_format: editor-diff

- name: azure/o1-mini
  weak_model_name: azure/gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  editor_model_name: azure/gpt-4o
  editor_edit_format: editor-diff

- name: o1-mini
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff

- name: openai/o1-preview
  edit_format: diff
  weak_model_name: openai/gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  editor_model_name: openai/gpt-4o
  editor_edit_format: editor-diff

- name: azure/o1-preview
  edit_format: diff
  weak_model_name: azure/gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  editor_model_name: azure/gpt-4o
  editor_edit_format: editor-diff

- name: azure/o1
  edit_format: diff
  weak_model_name: azure/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  streaming: false
  editor_model_name: azure/gpt-4o
  editor_edit_format: editor-diff

- name: o1-preview
  edit_format: architect
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff

- name: openrouter/openai/o1-mini
  weak_model_name: openrouter/openai/gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  streaming: false
  editor_model_name: openrouter/openai/gpt-4o
  editor_edit_format: editor-diff

- name: openrouter/openai/o1-preview
  edit_format: diff
  weak_model_name: openrouter/openai/gpt-4o-mini
  use_repo_map: true
  use_system_prompt: false
  use_temperature: false
  streaming: false
  editor_model_name: openrouter/openai/gpt-4o
  editor_edit_format: editor-diff

- name: openrouter/openai/o1
  edit_format: diff
  weak_model_name: openrouter/openai/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  streaming: false
  editor_model_name: openrouter/openai/gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "

- name: openai/o1
  edit_format: diff
  weak_model_name: openai/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  streaming: false
  editor_model_name: openai/gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "

- name: o1
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  streaming: false
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "

- name: openrouter/qwen/qwen-2.5-coder-32b-instruct
  edit_format: diff
  weak_model_name: openrouter/qwen/qwen-2.5-coder-32b-instruct
  use_repo_map: true
  editor_model_name: openrouter/qwen/qwen-2.5-coder-32b-instruct
  editor_edit_format: editor-diff

- name: openrouter/deepseek/deepseek-r1-distill-llama-70b
  edit_format: diff
  weak_model_name: openrouter/deepseek/deepseek-chat
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true
  use_temperature: false
  editor_model_name: openrouter/deepseek/deepseek-chat
  editor_edit_format: editor-diff

- name: fireworks_ai/accounts/fireworks/models/deepseek-r1
  edit_format: diff
  weak_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
  use_repo_map: true
  use_temperature: false
  streaming: true
  editor_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
  editor_edit_format: editor-diff
  remove_reasoning: think
  extra_params:
      max_tokens: 160000

- name: fireworks_ai/accounts/fireworks/models/deepseek-v3
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
      max_tokens: 128000

- name: openai/o3-mini
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "

- name: o3-mini
  edit_format: diff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "

- name: openrouter/openai/o3-mini
  edit_format: diff
  weak_model_name: openrouter/openai/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  editor_model_name: openrouter/openai/gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "

- name: azure/o3-mini
  edit_format: diff
  weak_model_name: azure/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  editor_model_name: azure/gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "

- name: github_copilot/gpt-3.5-turbo
  weak_model_name: gpt-4o-mini
  reminder: sys
  extra_params:
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/gpt-4
  edit_format: udiff
  weak_model_name: gpt-4o-mini
  use_repo_map: true
  lazy: true
  reminder: sys
  extra_params:
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/gpt-4o
  edit_format: diff
  weak_model_name: github_copilot/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "
  extra_params:
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/gpt-4o-mini
  weak_model_name: github_copilot/gpt-4o-mini
  lazy: true
  reminder: sys
  extra_params:
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/o1-ga
  edit_format: diff
  weak_model_name: github_copilot/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  streaming: false
  editor_model_name: gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "
  extra_params:
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/o3-mini
  edit_format: diff
  weak_model_name: azure/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  editor_model_name: azure/gpt-4o
  editor_edit_format: editor-diff
  system_prompt_prefix: "Formatting re-enabled. "
  extra_params:
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/claude-3.5-sonnet
  edit_format: diff
  weak_model_name: github_copilot/claude-3.5-haiku
  use_repo_map: true
  examples_as_sys_msg: true
  cache_control: true
  editor_model_name: github_copilot/claude-3.5-sonnet
  editor_edit_format: editor-diff
  extra_params:
    max_tokens: 8192
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/claude-3.7-sonnet
  edit_format: diff
  weak_model_name: github_copilot/claude-3.5-sonnet
  use_repo_map: true
  examples_as_sys_msg: true
  cache_control: true
  editor_model_name: github_copilot/claude-3.7-sonnet
  editor_edit_format: editor-diff
  extra_params:
    max_tokens: 8192
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/claude-3.7-sonnet-thought
  edit_format: diff
  weak_model_name: github_copilot/claude-3.7-sonnet
  use_repo_map: true
  examples_as_sys_msg: true
  cache_control: true
  editor_model_name: github_copilot/claude-3.7-sonnet-thought
  editor_edit_format: editor-diff
  extra_params:
    max_tokens: 8192
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat

- name: github_copilot/gemini-2.0-flash
  edit_format: diff
  use_repo_map: true
  extra_params:
    extra_headers:
      editor-version: Neovim/0.9.0
      Copilot-Integration-Id: vscode-chat
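
The YAML file deserves the same treatment. The sketch below assumes PyYAML is importable from the venv (it ships as an Aider dependency) and simply checks that every github_copilot entry carries the Copilot headers shown above:

# validate_settings.py — confirm the replaced model-settings.yml parses
# and that each github_copilot entry includes the Copilot headers.
from pathlib import Path

import yaml  # PyYAML is pulled in as a dependency of aider

path = Path(
    ".venv/lib/python3.12/site-packages/aider/resources/model-settings.yml"
)
settings = yaml.safe_load(path.read_text())

for entry in settings:
    if entry["name"].startswith("github_copilot/"):
        headers = entry.get("extra_params", {}).get("extra_headers", {})
        missing = {"editor-version", "Copilot-Integration-Id"} - set(headers)
        if missing:
            print(f"{entry['name']}: missing {sorted(missing)}")
print("check complete")
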
  5. The first time you use a GitHub Copilot model, you will need to verify your device. Run the patched version of Aider with debugging enabled:
LITELLM_LOG=DEBUG aider --model github_copilot/claude-3.7-sonnet

This should print the GitHub device login URL along with a verification code:

Please visit https://github.com/login/device and enter code XXXX-XXXX to authenticate.

LiteLLM writes your access token to ~/.config/litellm. Once the token is in place, you can close Aider.
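
If you want to confirm the token landed before moving on, the snippet below just lists the directory; the exact filename LiteLLM uses under ~/.config/litellm is an implementation detail, so it is not assumed here:

# Inspect whatever LiteLLM persisted after the device-flow login.
from pathlib import Path

token_dir = Path.home() / ".config" / "litellm"
if token_dir.exists():
    for f in sorted(token_dir.rglob("*")):
        print(f)
else:
    print("no ~/.config/litellm yet — rerun the aider login step above")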

  6. To add aider-mcp-server to your project with Claude Code, navigate to your project directory and run:
claude mcp add aider-mcp-server -s local \
  -- \
  uv --directory "<REPLACE_WITH_AIDER_MCP_SERVER_PROJECT_PATH>" \
  run --no-sync aider-mcp-server \
  --editor-model "github_copilot/claude-3.7-sonnet" \
  --current-working-dir "<REPLACE_WITH_CURRENT_PROJECT_PATH>"

Important

You MUST pass the --no-sync flag when running uv. Otherwise, uv will replace the patched version of LiteLLM with the stable release.

Note

You can change the --editor-model to any model you want Aider to default to.

You're done!

If everything is configured correctly, Claude Code should be able to use any of the GitHub Copilot models available to you through Aider!

Remember, you'll need to tell Claude Code within your session to use the aider-mcp-server tool:

> For coding always use the aider-mcp-server tool. Create a Hello World program using any language of your choice. Verify that it works.

⏺ I'll use the aider tool to create a simple Hello World program in JavaScript. Let me do that for you.

⏺ aider-mcp-server:aider_ai_code (MCP)(ai_coding_prompt: "Create a simple Hello World program in JavaScript. The program should print \"Hello, World!\" to the console.", relative_editable_files: ["helloWorld.js"])
  ⎿  {
       "success": true,
       "diff": "diff --git a/helloWorld.js b/helloWorld.js\nindex e69de29..cbe196f 100644\n--- a/helloWorld.js\n+++
     b/helloWorld.js\n@@ -0,0 +1,2 @@\n+// Simple Hello World program\n+console.log(\"Hello, World!\");\n"
     }

⏺ Let's verify that it works by running the JavaScript file:

⏺ Bash(node helloWorld.js)…
  ⎿  Hello, World!

⏺ The program works correctly, printing "Hello, World!" to the console.

Relevant links

  • aider-mcp-server: https://github.com/disler/aider-mcp-server
  • LiteLLM experimental branch: https://github.com/BerriAI/litellm/tree/litellm_dev_03_05_2025_contributor_prs