
@jdubois
Last active April 23, 2026 20:21
Best approaches to use GitHub Copilot and Java

Comparing Approaches for GitHub Copilot + Java

Overview

When GitHub Copilot analyzes or generates Java code, the approach used for code intelligence directly impacts performance (speed of interaction), output quality (correctness and idiomatic usage of Java, Spring, and related frameworks), resource consumption (tokens and premium requests), and cost.

This document compares six approaches across these dimensions.


Executive Summary

Comparison Table

| Criteria | Copilot CLI + LSP | VS Code + Copilot | IntelliJ + Copilot Plugin | AI Assistant + ACP (Copilot) | Copilot CLI + IntelliJ MCP | Copilot CLI Bare |
|---|---|---|---|---|---|---|
| Output quality | ⭐⭐ Good | ⭐⭐ Good | ⭐⭐⭐ Very good | ⭐⭐⭐ Best | ⭐⭐⭐ Best | ⭐ Lowest |
| Performance (speed) | ⭐⭐⭐ Fast | ⭐⭐⭐ Fast | ⭐⭐⭐ Fast | ⭐⭐ Medium | ⭐⭐ Medium | ⭐ Slowest |
| Token efficiency | ⭐⭐⭐ Best | ⭐⭐⭐ Very good | ⭐⭐⭐ Very good | ⭐⭐ Medium | ⭐⭐ Medium | ⭐ Worst |
| Premium requests | Low | Low-Medium | Low-Medium | Medium-High | Medium-High | Highest |
| Semantic accuracy | High | High | High | Highest | Highest | Low (text-based) |
| Setup complexity | Moderate | Low (install plugins) | Low (install plugin) | Moderate-Heavy | Heavy | None |
| Framework awareness | Basic | Basic | Deep (IntelliJ) | Deep (Spring, etc.) | Deep (Spring, etc.) | None |
| Autonomy | Fully autonomous | Developer-in-the-loop | Developer-in-the-loop | Semi-autonomous (agent mode) | Fully autonomous | Fully autonomous |
| CI/CD compatible | Yes | No | No | No | Yes | Yes |
| Agent mode | N/A | Yes | Limited | Yes | N/A | N/A |

Recommendations

  • Copilot CLI + LSP (JDTLS) — best for autonomous, scriptable Java analysis. Optimal token efficiency with semantic accuracy, low premium request consumption.
  • VS Code + Copilot — best for interactive development in VS Code. The developer acts as an intelligent filter, keeping context relevant and resource usage low.
  • IntelliJ IDEA + Copilot Plugin — best for enterprise Java developers already using IntelliJ who want Copilot's AI with IntelliJ's superior Java tooling, in a developer-in-the-loop workflow.
  • JetBrains AI Assistant + ACP (Copilot) — best for developers who want the highest-quality output by combining Copilot's AI models with IntelliJ's deep code intelligence tools and agent mode. The most feature-rich approach but with higher setup complexity and resource consumption.
  • Copilot CLI + IntelliJ MCP — justified when you need fully autonomous, deep framework-aware analysis without developer interaction.
  • Copilot CLI Bare — fallback when no other option is available. Produces the lowest-quality output and consumes the most resources.

Why Does the Approach Affect Output Quality?

The quality of AI-generated Java code depends heavily on the context the AI model receives. Better context leads to better code:

  • Framework awareness matters — an AI that can resolve Spring beans, understand @Transactional semantics, or trace dependency injection will generate code that follows framework conventions correctly. Without this, the AI may produce code that compiles but misuses frameworks (e.g., wrong scope for a Spring bean, missing @Entity annotations, incorrect JPA relationships).
  • Project structure awareness matters — an AI that sees the full project dependency tree, module structure, and existing patterns will produce code consistent with the project's architecture. Without this, it may reinvent existing utilities or use incompatible library versions.
  • Inspection and diagnostic feedback matters — approaches that feed compilation errors and inspection warnings back to the AI enable self-correction loops, resulting in code that compiles on the first try and follows project-specific coding standards.
  • Semantic accuracy prevents drift — text-based search (grep) can't distinguish a method call from a comment mentioning the same word, leading the AI to make incorrect assumptions about the codebase. Semantic tools eliminate this noise.
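The last point can be made concrete. A plain-text search over Java source matches every occurrence of a name, whether it appears in a comment, a string literal, or an actual call; only a semantic tool can tell them apart. A minimal sketch (the snippet and the name `save` are invented for illustration):

```java
import java.util.regex.Pattern;

// Why text search over-matches: "save" appears three times below, but only
// one occurrence is a real method call. The snippet is invented for illustration.
public class GrepFalsePositives {

    static final String SOURCE = String.join("\n",
        "// TODO: call save() after validation",   // comment mentioning save
        "String log = \"save failed, retrying\";", // string literal mentioning save
        "repository.save(entity);");               // the one actual call site

    static long countTextMatches() {
        // What grep sees: every textual occurrence, with no semantic filtering.
        return Pattern.compile("save").matcher(SOURCE).results().count();
    }

    public static void main(String[] args) {
        // A language server's findReferences would report exactly 1 location here.
        System.out.println("text matches = " + countTextMatches()); // prints 3
    }
}
```

Every false positive forces the AI to read extra surrounding code to disambiguate, which is exactly the token and round-trip cost the semantic approaches avoid.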

Detailed Approach Analysis

1. Copilot CLI + LSP Server (JDTLS) — Most Token-Efficient ✅

How it works: The assistant communicates with a Java Language Server (Eclipse JDTLS) that provides semantic code intelligence via the Language Server Protocol.

Key operations and their token cost:

| Operation | What's returned | Token cost |
|---|---|---|
| goToDefinition | File path + line number | ~10 tokens |
| findReferences | List of locations | ~10-50 tokens |
| hover | Type signature + Javadoc | ~50-200 tokens |
| documentSymbol | Compact outline of a file | ~50-300 tokens |
| incomingCalls | Precise list of callers | ~20-100 tokens |
| rename | All locations updated in one operation | ~50-200 tokens |
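To see why these responses are so cheap, it helps to look at the wire format. The sketch below shows roughly what a goToDefinition exchange looks like as JSON-RPC; the file paths and positions are hypothetical, and the 4-characters-per-token estimate is only a rule of thumb, but the point stands: the response is a URI plus a range, not file contents.

```java
// Sketch of the JSON-RPC traffic behind one LSP goToDefinition call.
// File paths and positions are hypothetical; the message shapes follow the
// Language Server Protocol's textDocument/definition request and response.
public class LspDefinitionSketch {

    static final String REQUEST = """
        {"jsonrpc":"2.0","id":1,"method":"textDocument/definition",
         "params":{"textDocument":{"uri":"file:///src/main/java/OrderService.java"},
                   "position":{"line":42,"character":17}}}""";

    // A typical response: a URI plus a range -- not the target file's contents.
    static final String RESPONSE = """
        {"jsonrpc":"2.0","id":1,
         "result":[{"uri":"file:///src/main/java/OrderRepository.java",
                    "range":{"start":{"line":12,"character":4},
                             "end":{"line":12,"character":24}}}]}""";

    // Crude rule of thumb: roughly 4 characters per token.
    static int approxTokens(String s) {
        return s.length() / 4;
    }

    public static void main(String[] args) {
        System.out.println("response is about " + approxTokens(RESPONSE) + " tokens");
    }
}
```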

Strengths:

  • Responses are minimal and precise by design
  • Semantic understanding — distinguishes method calls from comments and strings
  • Incremental queries — fetch only what you need
  • No false positives in search results
  • Low premium request consumption — compact responses mean fewer round-trips

Weaknesses:

  • Requires JDTLS to be installed and running
  • Limited to what LSP protocol exposes (no deep framework awareness)
  • Output quality is good but not optimal — the AI lacks Spring/framework-specific context that would help generate idiomatic code

Performance: Fast — LSP responses are near-instantaneous (sub-second), and minimal data transfer keeps round-trips quick.

Premium requests: Low — each LSP call returns a small, targeted result, keeping the number of model invocations minimal.


2. VS Code + Java Extension Pack + GitHub Copilot — Low Token Usage

How it works: GitHub Copilot runs inside VS Code, which has its own Java language server (Red Hat's Java extension, powered by Eclipse JDTLS). Copilot automatically leverages the editor's language services — open file context, diagnostics, symbols, and type information — without explicit tool calls.

Key mechanisms:

| Mechanism | What Copilot receives | Token cost |
|---|---|---|
| Active file context | Current file content (auto-attached) | Proportional to file size |
| Workspace symbols | Symbols from open/related files | Low — editor pre-filters |
| Diagnostics | Errors/warnings from the language server | ~10-50 tokens |
| Inline completions | Type-ahead suggestions (no analysis tokens) | Zero (separate model call) |
| Chat with #file / #selection | User-selected context | User-controlled |

Strengths:

  • Automatic context — Copilot gets LSP-quality information (types, diagnostics, symbols) without explicit queries
  • User-curated context — developers attach relevant files/selections, reducing noise
  • Visual feedback — developers can verify and guide the AI using the editor UI
  • No setup beyond plugins — Java extension installs JDTLS automatically
  • Rich ecosystem — test runners, debuggers, and Git integration feed additional context

Weaknesses:

  • Open file bias — Copilot primarily sees open/active files; may miss relevant code in closed files
  • Less autonomous — relies on the developer to navigate and provide context
  • Variable token cost — depends heavily on how many files are open and how the developer uses @workspace, #file, etc.
  • Not scriptable — can't be automated in CI/CD or batch workflows
  • Basic Java intelligence — VS Code's Java support is solid but less deep than IntelliJ's for framework-level analysis (Spring, Jakarta EE)

Performance: Fast — the developer controls the pace, and LSP-backed context is served quickly.

Premium requests: Low to medium — inline completions use a separate model call (not counted as premium), but chat interactions consume premium requests. The developer's guidance reduces unnecessary requests.

Output quality: Good — the AI receives accurate type information and diagnostics, producing correct code. However, framework-specific patterns (Spring configuration, JPA mappings) may not be as idiomatic as with IntelliJ-backed approaches because VS Code's Java support lacks deep framework awareness.


3. IntelliJ IDEA + GitHub Copilot Plugin — Low Token Usage

How it works: The GitHub Copilot plugin for JetBrains IDEs installs directly into IntelliJ IDEA. It provides inline code completions and a chat panel, receiving context from IntelliJ's platform — the active file, open files, diagnostics, and project structure. This is architecturally similar to VS Code + Copilot but benefits from IntelliJ's deeper Java-specific intelligence.

Key mechanisms:

| Mechanism | What Copilot receives | Token cost |
|---|---|---|
| Active file context | Current file content (auto-attached) | Proportional to file size |
| Editor context | Selected code, cursor position, surrounding code | Low — IDE pre-filters |
| Diagnostics | Errors/warnings from IntelliJ's inspection engine | ~10-50 tokens |
| Inline completions | Type-ahead suggestions (no analysis tokens) | Zero (separate model call) |
| Chat with file references | User-selected context | User-controlled |

Strengths:

  • Deep Java intelligence — IntelliJ's code analysis is more sophisticated than VS Code's for Java (Spring bean resolution, framework-aware inspections, more accurate type inference)
  • Higher output quality — IntelliJ's richer diagnostics (Spring-specific inspections, JPA validation, Hibernate checks) feed the AI better context, leading to more idiomatic framework usage in generated code
  • Familiar environment — most enterprise Java developers already use IntelliJ IDEA
  • No additional setup beyond the plugin — install the GitHub Copilot plugin, sign in, and it works
  • Rich ecosystem — IntelliJ's built-in test runners, profilers, database tools, and Git integration provide additional context

Weaknesses:

  • Same open-file bias as VS Code — Copilot primarily sees open/active files
  • Developer-in-the-loop — relies on the developer to navigate and provide context
  • Not scriptable — can't be automated in CI/CD or batch workflows
  • Plugin is separate from JetBrains AI — the GitHub Copilot plugin and JetBrains AI Assistant are separate plugins; they don't share context or capabilities
  • No MCP server access — unlike the ACP approach (below), the Copilot plugin does not get access to IntelliJ's bundled MCP server tools

Performance: Fast — same developer-paced interaction as VS Code, with IntelliJ's language services responding in sub-second times.

Premium requests: Low to medium — same profile as VS Code + Copilot. Inline completions are separate; chat interactions consume premium requests.

Output quality: Very good — IntelliJ's deeper inspections catch Spring misconfigurations, JPA mapping errors, and framework anti-patterns that VS Code would miss. The AI receives these as diagnostics and can self-correct, producing more idiomatic Java/Spring code.


4. JetBrains AI Assistant + ACP with GitHub Copilot — Best Output Quality

How it works: JetBrains AI Assistant supports the Agent Client Protocol (ACP) — a standardized protocol for agent-editor communication, analogous to how LSP standardized language server integration. Through ACP, GitHub Copilot can be connected as an external agent to the AI Assistant's chat interface. When the "Pass IntelliJ MCP server" setting is enabled, the agent gains access to all of IntelliJ's code intelligence tools via the bundled MCP server.

This creates a unique hybrid: GitHub Copilot's AI models + IntelliJ's deep code intelligence tools, with agent mode support for multi-step, multi-file tasks.

┌───────────────────────────────────────────────────────┐
│                     IntelliJ IDEA                     │
│                                                       │
│  ┌──────────────────┐     ┌─────────────────────────┐ │
│  │  AI Assistant    │────▶│  ACP Agent (Copilot)    │ │
│  │  Chat UI         │◀────│  (external process)     │ │
│  └────────┬─────────┘     └─────────────────────────┘ │
│           │                                           │
│           ▼                                           │
│  ┌──────────────────┐     ┌─────────────────────────┐ │
│  │  IntelliJ MCP    │     │  Custom MCP Servers     │ │
│  │  Server          │     │  (optional)             │ │
│  │  (built-in tools)│     │                         │ │
│  └──────────────────┘     └─────────────────────────┘ │
└───────────────────────────────────────────────────────┘

Key mechanisms:

| Mechanism | What the agent receives | Token cost |
|---|---|---|
| ACP chat messages | User prompts + IDE context | Varies |
| IntelliJ MCP tools | Full tool suite (see approach #5) | ~50-500 tokens per call |
| Agent mode execution | Multi-step task orchestration | Varies per task |
| Custom MCP servers | Additional external tools | Varies |

Strengths:

  • Highest output quality — the agent can actively query IntelliJ's code intelligence: check for compilation errors via get_file_problems, understand symbol types via get_symbol_info, verify project dependencies via get_project_dependencies, and validate against IntelliJ's 800+ Java inspections. This feedback loop means generated code is more likely to compile, follow framework conventions, and be consistent with the existing codebase
  • Best of both worlds — Copilot's models with IntelliJ's code intelligence tooling
  • Deep framework awareness via MCP tools — the agent can query get_symbol_info, get_file_problems, get_project_dependencies, etc.
  • Agent mode — supports multi-step, multi-file tasks with autonomous execution
  • Standardized protocol — ACP is designed for interoperability; any ACP-compatible agent works without custom integration
  • Flexible model choice — can combine multiple AI providers via BYOK, OAuth, and ACP simultaneously

Weaknesses:

  • Higher resource consumption — agent mode with MCP tools means more round-trips and more premium requests per task
  • Emerging ecosystem — ACP is new; GitHub Copilot's ACP compatibility may require manual acp.json configuration
  • Complex setup — requires both the JetBrains AI Assistant plugin and the ACP agent configuration
  • Requires IntelliJ IDEA running — same constraint as the IntelliJ MCP approach
  • Two-plugin conflict potential — if both the GitHub Copilot plugin and AI Assistant with Copilot-via-ACP are installed, they operate independently

Performance: Medium — agent mode tasks involve multiple MCP tool calls and model invocations, adding latency. Simple chat queries are fast, but complex multi-file tasks may take significantly longer than developer-guided approaches.

Premium requests: Medium to high — agent mode autonomously makes multiple model calls (one per reasoning step), and each MCP tool invocation may trigger follow-up model calls to process results. A single complex task can consume 5-20+ premium requests.

Output quality: Best — this is the only approach where the AI can autonomously verify its own output against IntelliJ's full inspection suite, check compilation, validate framework usage, and iterate. The self-correction loop produces code that is most likely to be correct, idiomatic, and consistent with the project.
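The self-correction loop described above can be sketched in a few lines. The interface and method names below are hypothetical stand-ins (the real agent would call IntelliJ's get_file_problems tool over MCP); the toy inspector and model exist only to illustrate the generate → inspect → fix cycle:

```java
import java.util.List;

// Hypothetical sketch of the self-correction loop: generate code, ask the IDE
// for problems (the real agent would call IntelliJ's get_file_problems MCP
// tool), and retry until the diagnostics come back clean. The Inspector and
// Model interfaces below are toy stand-ins, not real APIs.
public class SelfCorrectionLoop {

    interface Inspector { List<String> problems(String code); }

    interface Model { String fix(String code, List<String> problems); }

    static String generateUntilClean(String draft, Inspector ide, Model model, int maxRounds) {
        String code = draft;
        for (int i = 0; i < maxRounds; i++) {
            List<String> problems = ide.problems(code);
            if (problems.isEmpty()) {
                break; // diagnostics are clean -- stop iterating
            }
            code = model.fix(code, problems); // feed diagnostics back to the model
        }
        return code;
    }

    public static void main(String[] args) {
        // Toy inspector: flags a missing annotation, as an inspection might.
        Inspector ide = code -> code.contains("@Transactional")
                ? List.of()
                : List.of("Method mutates entities outside a transaction");
        // Toy model: applies exactly the fix the diagnostic asks for.
        Model model = (code, problems) -> "@Transactional\n" + code;

        System.out.println(generateUntilClean("public void checkout() { }", ide, model, 3));
    }
}
```

The maxRounds cap matters in practice: each iteration is at least one more model invocation, which is why this approach trades premium requests for output quality.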


5. Copilot CLI + IntelliJ IDEA MCP Server — Medium Token Usage

How it works: The assistant connects to a running IntelliJ IDEA instance via the Model Context Protocol (MCP), leveraging IntelliJ's code intelligence engine. Since IntelliJ IDEA 2025.2, the IDE ships with a bundled MCP server that exposes a rich set of tools.

Key operations and their token cost:

| Operation | What's returned | Token cost |
|---|---|---|
| get_symbol_info | Symbol declaration, type, docs (like Quick Documentation) | ~100-500 tokens |
| get_file_problems | Errors/warnings via IntelliJ inspections | ~50-300 tokens |
| search_in_files_by_regex | Regex search results using IntelliJ's engine | ~100-500 tokens |
| search_in_files_by_text | Text search results using IntelliJ's engine | ~100-500 tokens |
| get_project_dependencies | List of all project dependencies | ~50-200 tokens |
| get_project_modules | List of project modules with types | ~50-200 tokens |
| find_files_by_glob | File search by glob pattern | ~20-100 tokens |
| get_file_text_by_path | File contents | Proportional to file size |
| replace_text_in_file | Targeted find-and-replace result | ~20-50 tokens |
| execute_run_configuration | Run config output (exit code, stdout) | ~100-1000 tokens |
| get_run_configurations | Available run configurations | ~50-200 tokens |
| list_directory_tree | Directory tree in pseudo-graphic format | ~50-500 tokens |
| reformat_file | Apply IntelliJ code formatting | ~10-20 tokens |

Strengths:

  • Deepest semantic understanding (Spring bean resolution, framework-aware analysis)
  • Can answer complex queries in fewer round-trips
  • Rich contextual information per response
  • Same self-correction capability as ACP approach — can verify output against IntelliJ inspections

Weaknesses:

  • Responses are more verbose — includes extra metadata and context
  • JSON-wrapped responses add overhead
  • Requires IntelliJ IDEA to be running
  • Heaviest setup cost
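The envelope overhead is easy to quantify with a hedged sketch. The payload below is invented, and the exact envelope fields may differ by protocol version, but the shape of the comparison holds: the same text wrapped in a JSON-RPC/MCP-style tool result costs noticeably more characters, and therefore tokens, than the bare payload an LSP-style call would return.

```java
// Rough illustration of MCP envelope overhead: the useful payload is wrapped
// in a JSON-RPC result with a content array, so the same information costs
// more characters (and therefore tokens) than a bare result would.
// The payload is invented for demonstration.
public class McpOverheadSketch {

    static final String PAYLOAD =
        "public interface OrderRepository extends JpaRepository<Order, Long>";

    static String mcpWrapped(String payload) {
        // Shape modeled on an MCP tool result; field names may differ by version.
        return "{\"jsonrpc\":\"2.0\",\"id\":7,\"result\":{\"content\":"
             + "[{\"type\":\"text\",\"text\":\"" + payload + "\"}],\"isError\":false}}";
    }

    public static void main(String[] args) {
        int raw = PAYLOAD.length();
        int wrapped = mcpWrapped(PAYLOAD).length();
        System.out.printf("raw=%d chars, wrapped=%d chars (+%d%% envelope)%n",
                raw, wrapped, (wrapped - raw) * 100 / raw);
    }
}
```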

Performance: Medium — MCP tool calls add latency (each involves a round-trip to the running IDE), but the rich responses reduce the total number of iterations needed.

Premium requests: Medium to high — each MCP tool call generates data that the model must process, leading to follow-up model invocations. Similar to ACP but without agent mode's autonomous multi-step orchestration.

Output quality: Best — same access to IntelliJ's inspections and code intelligence as the ACP approach. The AI can verify generated code, check for framework misconfigurations, and iterate. The difference from ACP is in workflow (fully autonomous CLI vs. semi-autonomous agent mode), not in the quality ceiling.


6. Copilot CLI Bare (grep/glob/view only) — Most Token-Expensive ❌

How it works: The assistant uses only text-based tools — file search by name (glob), content search by regex (grep), and raw file reading (view).

Strengths:

  • Zero setup — works out of the box
  • No external dependencies
  • Simple and predictable

Weaknesses:

  • No semantic understanding — text search matches comments, strings, and variable names indiscriminately
  • Must read entire files or large ranges to understand structure
  • More round-trips required — iterative grep → view → grep chains to trace code flows
  • False positives require reading extra code to disambiguate
  • Rename/refactoring is error-prone without semantic analysis
  • No self-correction — cannot verify generated code against inspections or compilation

Performance: Slowest — requires the most round-trips and reads the most raw text. A task that takes 2-3 tool calls with LSP may take 10-20 with grep/view.

Premium requests: Highest — every round-trip is a premium request, and the lack of semantic tools means more iterations are needed. A single code analysis task can consume 3-10x more premium requests than LSP-backed approaches.

Output quality: Lowest — without semantic understanding, the AI may misinterpret the codebase (confusing comments with code, missing framework conventions, using the wrong patterns). Generated code is more likely to contain compilation errors, misuse frameworks, or diverge from existing code patterns.
