(generated by Copilot looking through microsoft/vscode repo)
Agent skills are not tools themselves. They are on-demand knowledge files (SKILL.md) that Copilot exposes to the LLM through additional prompt instructions and a dependency on the readFile tool. Here's exactly how it works:
Skills are SKILL.md files discovered from configured folder locations:
| Source | File pattern |
|---|---|
| Repo (Copilot) | `{repo}/.github/skills/*/SKILL.md` |
| Repo (Agents) | `{repo}/.agents/skills/*/SKILL.md` |
| Repo (Claude) | `{repo}/.claude/skills/*/SKILL.md` |
| User (Copilot) | `~/.copilot/skills/*/SKILL.md` |
| User (Claude) | `~/.claude/skills/*/SKILL.md` |
| Plugins | `{pluginRoot}/skills/*/SKILL.md` |
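As an illustrative sketch only (the function name and shape are assumptions, not the actual VS Code implementation), these locations expand to concrete directories like so:

```typescript
import * as path from 'path';
import * as os from 'os';

// Hypothetical helper: expand the skill-location table into concrete
// directories to scan for */SKILL.md subfolders.
function candidateSkillDirs(repoRoot: string, pluginRoots: string[] = []): string[] {
	const home = os.homedir();
	return [
		path.join(repoRoot, '.github', 'skills'),  // Repo (Copilot)
		path.join(repoRoot, '.agents', 'skills'),  // Repo (Agents)
		path.join(repoRoot, '.claude', 'skills'),  // Repo (Claude)
		path.join(home, '.copilot', 'skills'),     // User (Copilot)
		path.join(home, '.claude', 'skills'),      // User (Claude)
		...pluginRoots.map(r => path.join(r, 'skills')), // Plugins
	];
}
```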
`PromptFilesLocator.findAgentSkills()` scans these directories, looking for subdirectories that contain a SKILL.md file:

```typescript
// src/vs/workbench/contrib/chat/common/promptSyntax/utils/promptFilesLocator.ts#L692-L716
private async findAgentSkillsInFolder(uri: URI, token: CancellationToken): Promise<URI[]> {
	const result: URI[] = [];
	const stat = await this.fileService.resolve(uri);
	if (stat.isDirectory && stat.children) {
		for (const child of stat.children) {
			if (child.isDirectory) {
				const skillFile = joinPath(child.resource, SKILL_FILENAME);
				const skillStat = await this.fileService.resolve(skillFile);
				if (skillStat.isFile) {
					result.push(skillStat.resource);
				}
			}
		}
	}
	return result;
}
```

Source: `promptFilesLocator.ts`, L692–L716
Each SKILL.md has YAML frontmatter that gets parsed into an `IAgentSkill`:

```typescript
// src/vs/workbench/contrib/chat/common/promptSyntax/service/promptsService.ts#L336-L371
export interface IAgentSkill {
	readonly uri: URI;
	readonly storage: PromptsStorage;
	readonly name: string;
	readonly description: string | undefined;
	readonly disableModelInvocation: boolean; // if true, LLM can't auto-load it
	readonly userInvocable: boolean; // if false, hidden from / menu
	readonly when?: ContextKeyExpression; // conditional availability
	readonly pluginUri?: URI;
	readonly extension?: IExtensionDescription;
}
```

The frontmatter looks like:
```markdown
---
name: skill-name
description: "What this skill does"
user-invocable: true
disable-model-invocation: false
---
Skill prompt content here...
```

Names are sanitized (XML tags stripped, truncated to 64 characters), descriptions are truncated to 1024 characters, and the skill name must match the folder name; otherwise the skill is rejected.
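The sanitization rules just described can be sketched roughly as follows (the function names and exact regex are illustrative assumptions, not the actual VS Code helpers):

```typescript
// Hypothetical sketch of the sanitization rules: strip XML/HTML-looking
// tags from the name, cap the name at 64 chars and the description at 1024.
function sanitizeSkillName(raw: string): string {
	return raw
		.replace(/<[^>]*>/g, '') // strip anything that looks like an XML tag
		.trim()
		.slice(0, 64);           // truncate to 64 characters
}

function sanitizeSkillDescription(raw: string): string {
	return raw.trim().slice(0, 1024); // descriptions are capped at 1024 chars
}
```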
```typescript
// src/vs/workbench/contrib/chat/common/promptSyntax/service/promptsServiceImpl.ts#L1083-L1098
// Validate that the sanitized name matches the parent folder name (per agentskills.io specification)
const folderName = getSkillFolderName(uri);
if (sanitizedName !== folderName) {
	this.logger.error(`[validateAndSanitizeSkillFile] Agent skill name "${sanitizedName}" does not match folder name "${folderName}": ${uri}`);
	throw new SkillNameMismatchError(uri, sanitizedName, folderName);
}
```

Skills are provided to the LLM entirely via injected prompt text; they are not registered as tools/functions. The logic lives in `ComputeAutomaticInstructions._getCustomizationsIndex()`.
Here's exactly what gets injected into the prompt. There are two modes.

In the default mode, the LLM receives this XML-structured text as part of the system context:
```xml
<skills>
Here is a list of skills that contain domain specific knowledge on a variety of topics.
Each skill comes with a description of the topic and a file path that contains the detailed instructions.
When a user asks you to perform a task that falls within the domain of a skill, use the #tool:readFile tool to acquire the full instructions from the file URI.
<skill>
<name>testing</name>
<description>Best practices for writing unit tests</description>
<file>/path/to/workspace/.github/skills/testing/SKILL.md</file>
</skill>
<skill>
<name>api-design</name>
<description>Guidelines for REST API design</description>
<file>/path/to/workspace/.github/skills/api-design/SKILL.md</file>
</skill>
</skills>
```

When the experimental `chat.experimental.useSkillAdherencePrompt` setting is enabled, the LLM instead receives much more forceful instructions:
```xml
<skills>
Skills provide specialized capabilities, domain knowledge, and refined workflows for producing
high-quality outputs. Each skill folder contains tested instructions for specific domains like
testing strategies, API design, or performance optimization. Multiple skills can be combined
when a task spans different domains.

BLOCKING REQUIREMENT: When a skill applies to the user's request, you MUST load and read the
SKILL.md file IMMEDIATELY as your first action, BEFORE generating any other response or taking
action on the task. Use #tool:readFile to load the relevant skill(s).
NEVER just mention or reference a skill in your response without actually reading it first.
If a skill is relevant, load it before proceeding.

How to determine if a skill applies:
1. Review the available skills below and match their descriptions against the user's request
2. If any skill's domain overlaps with the task, load that skill immediately
3. When multiple skills apply (e.g., a flowchart in documentation), load all relevant skills

Examples:
- "Help me write unit tests for this module" -> Load the testing skill via #tool:readFile FIRST, then proceed
- "Optimize this slow function" -> Load the performance-profiling skill via #tool:readFile FIRST, then proceed
- "Add a discount code field to checkout" -> Load both the checkout-flow and form-validation skills FIRST

Available skills:
<skill>
<name>testing</name>
<description>Best practices for writing unit tests</description>
<file>/path/to/workspace/.github/skills/testing/SKILL.md</file>
</skill>
</skills>
```

The setting itself is defined as an experimental, experiment-driven (auto-mode) setting (source: `chat.contribution.ts`, L1049–L1060).
The entire skill system depends on the `readFile` tool being available and enabled. If the `readFile` tool is not in the enabled tools list, the skill index is not generated at all:

```typescript
// src/vs/workbench/contrib/chat/common/promptSyntax/computeAutomaticInstructions.ts#L313-L333
private _getTool(referenceName: string): { tool: IToolData; variable: string } | undefined {
	if (!this._enabledTools) {
		return undefined;
	}
	const tool = this._languageModelToolsService.getToolByName(referenceName);
	if (tool && this._enabledTools[tool.id]) {
		return { tool, variable: `#tool:${this._languageModelToolsService.getFullReferenceName(tool)}` };
	}
	return undefined;
}

// In _getCustomizationsIndex:
const readTool = this._getTool('readFile');
// ...
if (readTool) {
	// Only then do we build the skills index
}
```

So the flow is: the LLM sees the skill catalog in the prompt → recognizes a skill matches the user's request → calls the `readFile` tool with the SKILL.md file path → receives the full skill content → follows those instructions.
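To make the catalog-injection step concrete, here is a hedged sketch of how the default-mode `<skills>` index shown earlier could be rendered from discovered skills. The `SkillEntry` shape and `renderSkillsIndex` function are illustrative assumptions, not the actual `ComputeAutomaticInstructions` code:

```typescript
// Hypothetical shape of a catalog entry (illustrative, not the real DTO).
interface SkillEntry {
	name: string;
	description: string;
	file: string; // absolute path to SKILL.md
}

// Render the XML-structured catalog text that gets injected into the prompt.
function renderSkillsIndex(skills: SkillEntry[]): string {
	const entries = skills.map(s =>
		`<skill>\n<name>${s.name}</name>\n<description>${s.description}</description>\n<file>${s.file}</file>\n</skill>`
	).join('\n');
	return [
		'<skills>',
		'Here is a list of skills that contain domain specific knowledge on a variety of topics.',
		'When a user asks you to perform a task that falls within the domain of a skill, use the #tool:readFile tool to acquire the full instructions from the file URI.',
		entries,
		'</skills>',
	].join('\n');
}
```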
Not all discovered skills make it into the prompt. Several filters are applied:

| Filter | What it does |
|---|---|
| `disableModelInvocation: true` | Skill is excluded from the LLM's catalog (manual `/name` invocation only) |
| `when` clause | Context key expression must evaluate to true |
| `chat.useAgentSkills` setting | Master toggle: if false, `findAgentSkills()` returns undefined and no skills are shown |
| Debug-only skills | The troubleshoot skill requires debug logging settings to be enabled |
```typescript
// src/vs/workbench/contrib/chat/common/promptSyntax/computeAutomaticInstructions.ts#L382-L396
const modelInvocableSkills = agentSkills?.filter(skill => {
	if (skill.disableModelInvocation) { return false; }
	if (skill.when && !this._contextKeyService.contextMatchesRules(skill.when)) { return false; }
	if ((!isDebugLogEnabled || !isFileLoggingEnabled) && skill.uri.path.includes(TROUBLESHOOT_SKILL_PATH)) { return false; }
	return true;
});
```

Skills are also forwarded to the Copilot extension via `$acceptSkills`, so the chat participant itself is aware of available skills:
```typescript
// src/vs/workbench/api/browser/mainThreadChatAgents2.ts#L228-L236
private async _pushSkills(): Promise<void> {
	const skills = await this._promptsService.findAgentSkills(CancellationToken.None) ?? [];
	const dtos: ISkillDto[] = skills.map(skill => ({ uri: skill.uri }));
	this._proxy.$acceptSkills(dtos);
}
```

Skills are NOT tools. They are a prompt-engineering pattern implemented as:
- A catalog of `<skill>` entries injected into the LLM's context as XML-structured text, containing each skill's `name`, `description`, and `file` path
- Natural language instructions telling the LLM to use the `readFile` tool to load a skill's full content when the user's request matches a skill's domain
- Optional stronger adherence instructions that escalate from "you should use skills when relevant" to "BLOCKING REQUIREMENT: you MUST load the SKILL.md IMMEDIATELY as your first action"

The LLM never "calls" a skill: it reads the skill catalog from its prompt, decides if a skill is relevant, then uses the existing `readFile` tool to load the SKILL.md file, and follows those instructions for the rest of its response.
| File | Purpose |
|---|---|
| `computeAutomaticInstructions.ts` | Builds the `<skills>` XML index and injects it into the LLM prompt |
| `promptsServiceImpl.ts` | Parses SKILL.md frontmatter, validates name/description, caches results |
| `promptsService.ts` | Defines the `IAgentSkill` interface and the `IPromptsService` contract |
| `promptFilesLocator.ts` | Discovers SKILL.md files from configured folder locations |
| `copilot-customizations-spec.md` | Spec document covering all customization surfaces, including skills |
| `mainThreadChatAgents2.ts` | Pushes discovered skills to the Copilot extension host |
| `chat.contribution.ts` | Registers the `chat.useAgentSkills` and `chat.experimental.useSkillAdherencePrompt` settings |
| `config.ts` | Configuration key constants for skill locations and toggles |
All links reference commit ffa49fc in microsoft/vscode.