Setting Up Qodo PR-Agent with Abacus AI on GitLab CI

This guide explains how to configure Qodo PR-Agent to use Abacus AI's RouteLLM API for automated code reviews in GitLab merge requests.

The Challenge

Qodo PR-Agent uses litellm for LLM routing. While Abacus AI provides an OpenAI-compatible API, configuring it through litellm's openai provider doesn't work correctly because:

  1. The openai provider prepends openai/ to model names (e.g., openai/gpt-5.1-codex-max)
  2. Abacus AI rejects prefixed model names with "Invalid model" errors
  3. The openai__api_base setting doesn't properly override the default OpenAI endpoint

The Solution: Use hosted_vllm Provider

The hosted_vllm provider in litellm is designed for OpenAI-compatible endpoints and:

  • Reads HOSTED_VLLM_API_BASE and HOSTED_VLLM_API_KEY environment variables
  • Strips the hosted_vllm/ prefix before sending requests to the API
  • Works seamlessly with Abacus AI's RouteLLM endpoint
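
A quick way to confirm this behaviour outside of CI is to call the endpoint through litellm directly. The sketch below assumes a local Python environment with litellm installed and ABACUS_API_KEY exported; the model name is just one of those listed later in this guide.

export HOSTED_VLLM_API_BASE="https://routellm.abacus.ai/v1"
export HOSTED_VLLM_API_KEY="$ABACUS_API_KEY"

# litellm strips the hosted_vllm/ prefix and sends "gpt-5.1-codex-max" to the endpoint
python -c '
import litellm

resp = litellm.completion(
    model="hosted_vllm/gpt-5.1-codex-max",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
'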

Prerequisites

  1. GitLab Personal Access Token with api scope
  2. Abacus AI API Key from abacus.ai

Add these as CI/CD variables in GitLab (Settings > CI/CD > Variables):

  • GITLAB_PERSONAL_ACCESS_TOKEN
  • ABACUS_API_KEY
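
Before adding the variables to CI, you can sanity-check the Abacus AI key from a terminal. This is a minimal sketch assuming the RouteLLM endpoint follows the standard OpenAI chat-completions contract (Bearer token auth, /chat/completions path):

# Verify the key and endpoint respond before wiring them into the pipeline
curl -s https://routellm.abacus.ai/v1/chat/completions \
  -H "Authorization: Bearer $ABACUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.1-codex-max", "messages": [{"role": "user", "content": "ping"}]}'

A JSON response containing a choices array indicates the key and endpoint work; an "Invalid model" or authentication error points at the model name or key.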

GitLab CI Configuration

Add this job to your .gitlab-ci.yml:

stages:
  - review

qodo_review:
  stage: review
  image:
    name: codiumai/pr-agent:latest
    entrypoint: [""]  # Required: Override default entrypoint
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - cd /app
    - echo "Running Qodo PR-Agent review..."

    # Construct MR URL
    - export MR_URL="$CI_MERGE_REQUEST_PROJECT_URL/merge_requests/$CI_MERGE_REQUEST_IID"

    # Configure GitLab provider
    - export config__git_provider="gitlab"
    - export gitlab__url="$CI_SERVER_PROTOCOL://$CI_SERVER_HOST"
    - export gitlab__PERSONAL_ACCESS_TOKEN="$GITLAB_PERSONAL_ACCESS_TOKEN"

    # Configure Abacus AI via hosted_vllm provider
    - export HOSTED_VLLM_API_BASE="https://routellm.abacus.ai/v1"
    - export HOSTED_VLLM_API_KEY="$ABACUS_API_KEY"
    - export config__model="hosted_vllm/gpt-5.1-codex-max"
    - export config__custom_model_max_tokens="272000"  # Required for unknown models
    - export config__fallback_models="[]"  # Disable fallbacks

    # PR-Agent settings (optional)
    - export pr_reviewer__require_score_review="false"
    - export pr_reviewer__require_tests_review="true"
    - export pr_reviewer__require_security_review="true"
    - export pr_reviewer__num_code_suggestions="4"

    # Run review
    - python -m pr_agent.cli --pr_url="$MR_URL" review
  allow_failure: true  # Don't block MR on review failures
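
To iterate on the configuration without pushing pipeline changes, the same command can be run locally inside the PR-Agent image. This is only a debugging sketch; the MR URL is a placeholder and the variables mirror the job above.

docker run --rm \
  --entrypoint "" \
  -e config__git_provider="gitlab" \
  -e gitlab__url="https://gitlab.com" \
  -e gitlab__PERSONAL_ACCESS_TOKEN="$GITLAB_PERSONAL_ACCESS_TOKEN" \
  -e HOSTED_VLLM_API_BASE="https://routellm.abacus.ai/v1" \
  -e HOSTED_VLLM_API_KEY="$ABACUS_API_KEY" \
  -e config__model="hosted_vllm/gpt-5.1-codex-max" \
  -e config__custom_model_max_tokens="272000" \
  -e config__fallback_models="[]" \
  codiumai/pr-agent:latest \
  python -m pr_agent.cli --pr_url="https://gitlab.com/your/repo/merge_requests/1" review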

Configuration Explained

  • entrypoint: [""] overrides the container's default CLI entrypoint so the job's shell commands can run.
  • HOSTED_VLLM_API_BASE points at Abacus AI's OpenAI-compatible endpoint.
  • HOSTED_VLLM_API_KEY holds your Abacus AI API key.
  • config__model sets the model with the hosted_vllm/ prefix, which is stripped before the API call.
  • config__custom_model_max_tokens sets the token limit for models not in litellm's default list.
  • config__fallback_models="[]" prevents fallback to default models like o4-mini.

Available Abacus AI Models

Some models available on Abacus AI RouteLLM:

  • gpt-5.1-codex-max: 272k-token context window
  • gpt-5.2-codex: 128k-token context window
  • gpt-5.1-codex: 128k-token context window

Check the Abacus AI documentation for the latest model list.

Troubleshooting

Error: "Invalid model: openai/model-name"

You're using the wrong provider. Switch from openai__* settings to HOSTED_VLLM_* environment variables.

Error: "Model X is not defined in MAX_TOKENS"

Add config__custom_model_max_tokens with the model's context window size.

Error: 404 Not Found

Ensure HOSTED_VLLM_API_BASE is set to https://routellm.abacus.ai/v1 (with /v1).

Error: 403 Unauthorized on fallback model

Set config__fallback_models="[]" to disable fallbacks to models you don't have access to.

Verifying It Works

On success, the job log should contain lines similar to the following:

Reviewing PR: https://gitlab.com/your/repo/merge_requests/1 ...
Tokens: XXXXX, total tokens over limit: 32000, pruning diff.
Setting review labels: ['Review effort X/5']
Persistent mode - updating comment ... to latest review message

Why Not Use the openai Provider?

The openai provider in litellm:

  1. Doesn't strip model prefixes before API calls
  2. Has hardcoded logic expecting api.openai.com
  3. Doesn't properly read openai__api_base for custom endpoints

The hosted_vllm provider was specifically designed for self-hosted or third-party OpenAI-compatible endpoints.
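
For contrast, a configuration along these lines (a sketch reconstructed from the error modes above, not a recommended setup, using the same section__setting naming convention as the job) is what triggers the "Invalid model" failure:

# What NOT to do: routed through litellm's openai provider, the model name
# is sent prefixed as openai/gpt-5.1-codex-max and Abacus AI rejects it;
# openai__api_base is not reliably honored either.
export openai__key="$ABACUS_API_KEY"
export openai__api_base="https://routellm.abacus.ai/v1"
export config__model="gpt-5.1-codex-max"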
