Build it live. Start with an empty terminal. End with a working AI pipeline. The audience watches the entire thing come together, step by step.
Core Theme: Software = Instructions + Tools. Watch it emerge.
Terminal, empty, cursor blinking.
Say this:
"I'm going to show you where we're headed. Then we'll build our way there together."
Run this one command:
$ ma task.copilot.md
Watch it execute. The AI agent runs. Output appears.
Then say:
"That markdown file just ran an AI agent. It piped data through intelligence and produced a result. But that's not magic - it's patterns. Let me show you how to build this from scratch."
Clear the terminal.
"Let's start from zero."
Type this live:
$ copilot
Show the interactive mode. Exit it.
"Interactive mode is fine for exploration. But we can't script it. We can't automate it. Let's fix that."
Type:
$ copilot -p "What's the capital of France?"
Watch it respond.
"The
-pflag. Non-interactive. Now we can script this. But look at that output - there's formatting, spinners, decoration. We need raw text."
Type:
$ copilot -p "What's the capital of France?" -s
"Silent mode. Just the answer. Just text. And text... can be piped."
Key point:
-p = prompt (non-interactive)
-s = silent (raw output)
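Because the output is raw text, it also drops into normal shell plumbing. A small sketch using only the two flags above (the filename is illustrative):
$ ANSWER=$(copilot -p "What's the capital of France?" -s)
$ echo "$ANSWER" > answer.txt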
Start building:
$ echo "hello world" | copilot -p "count the words" -s"Data in. Intelligence out. Same pipe operator Unix has used for 50 years."
Build up:
$ cat package.json | copilot -p "What does this project do?" -s
Then the real power:
$ git diff | copilot -p "Summarize these changes" -s
"Your git changes, summarized by AI. One line."
Keep building:
$ git diff | copilot -p "Review for bugs" -s
$ npm test 2>&1 | copilot -p "Why did this fail?" -s
The pattern is now clear:
data | copilot -p "instruction" -s
"Same pattern as grep, awk, sed. You already know this."
Build the chain live:
First, show single output:
$ git diff | copilot -p "List changes as bullet points" -s
Then chain it:
$ git diff | copilot -p "List changes as bullets" -s | copilot -p "Rate each: low/medium/high risk" -s
Keep building:
$ git diff | \
copilot -p "List changes as bullets" -s | \
copilot -p "Rate each: low/medium/high risk" -s | \
copilot -p "Format as PR description" -sShow the flow:
raw diff → summarize → evaluate → format
"Three AI calls. One pipeline. Each stage refines the output. This is intelligence as a stream processor."
Show the models:
$ copilot -p "Is this task SIMPLE or COMPLEX?" -s --model claude-haiku-4.5"Haiku. Fast. Cheap. 10x less cost, 5x faster. Perfect for triage."
$ copilot -p "Design a caching architecture" -s --model claude-opus-4.5"Opus. Deep. Thorough. When you need the heavy lifting."
Build the escalation pattern live:
#!/bin/bash
TASK="Refactor the authentication module"

# Triage with the fast, cheap model first
COMPLEXITY=$(echo "$TASK" | copilot -p "SIMPLE or COMPLEX? One word only." -s --model claude-haiku-4.5)

if [ "$COMPLEXITY" = "COMPLEX" ]; then
  # Escalate only when the triage says it's worth it
  echo "$TASK" | copilot -p "Execute this task" -s --model claude-opus-4.5
else
  echo "$TASK" | copilot -p "Execute this task" -s --model claude-haiku-4.5
fi
"Use the fast model to decide if you need the powerful model. Route intelligently."
The model ladder:
claude-haiku-4.5 → Fast, cheap (triage, classification)
claude-sonnet-4 → Balanced (daily work)
claude-opus-4.5 → Deep analysis (architecture, complex problems)
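For the middle rung, one illustrative "daily work" example (the prompt is made up; the flags are the same ones used above):
$ git diff | copilot -p "Write a commit message for these changes" -s --model claude-sonnet-4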
Show controlled access:
$ copilot -p "Show me recent commits" -s --allow-tool 'shell(git:*)'"We gave it permission to run git commands. Only git."
More granular:
$ copilot -p "Analyze this codebase" -s --allow-tool 'shell(git:*)' --allow-tool 'Read'"Read files. Run git. Nothing else."
The trust ladder:
# Level 1: Pure pipes (safest) - no tool access
cat code.ts | copilot -p "Find bugs" -s
# Level 2: Scoped tools
copilot -p "Check test coverage" -s --allow-tool 'shell(npm:test)'
# Level 3: Full autonomy (supervised)
copilot -p "Fix all linting errors" -s --allow-all-tools"The pipe doesn't grant permissions. You do. You control the boundaries."
Now combine everything:
Type this script live:
#!/bin/bash
# code-review-pipeline.sh
echo "Running AI code review..."
git diff main..HEAD | \
copilot -p "List every change with file:line format" -s \
--model claude-haiku-4.5 | \
copilot -p "For each: bug risk, perf impact, security concern" -s \
--model claude-sonnet-4 | \
copilot -p "Format as markdown table sorted by risk" -s \
--model claude-sonnet-4 > review.md
echo "Review saved to review.md"Run it. Show the output.
"We just built an automated code review pipeline. From scratch. In 10 lines."
Look at the pipeline we built.
"Okay, this is powerful. But be honest - will you remember these flag combinations next week? Will your team?"
Pain points:
- Complex flag combinations
- Project-specific configurations
- The same pipelines, repeated
- Sharing patterns... via Slack?
Transition:
"What if your pipelines could be... files? What if they could live in your repo?"
Create a file live:
$ touch review.copilot.md
Open it and type:
---
model: claude-sonnet-4
allow-tool:
- 'shell(git:*)'
- Read
silent: true
---
Review these code changes for:
1. Bug risks
2. Performance issues
3. Security concerns
Prioritize by severity.
Save it. Then run:
$ git diff | ma review.copilot.md
Watch it work.
"Same pipeline. Now it's a file. Now it's version-controlled. Now it's shareable."
Create another file:
---
model: claude-sonnet-4
silent: true
args:
- base_branch
- focus_area
---
## Gather Context
!`git diff {{ base_branch | default: "main" }}..HEAD`
## Analyze
Review changes with focus on {{ focus_area | default: "all areas" }}.
For each change:
- Risk level (low/medium/high)
- Category (bug fix, feature, refactor)
- Concerns if any
## Output
Format as a PR description with summary and risk table.
Run with arguments:
$ ma pr-review.copilot.md --base_branch develop --focus_area security
"Arguments. Templates. File imports. Your AI workflows, as files."
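Because the template uses default filters, the same file should also run with no arguments at all - an assumption based on the defaults shown above, not verified against the tool:
$ ma pr-review.copilot.md    # falls back to base_branch "main" and focus_area "all areas"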
Show the old vs new:
Old way:
Learn tools → Memorize flags → Write scripts → Maintain scripts → Forget flags → Re-learn
New way:
Write markdown → Describe intent → Run it → Version it → Share it
The equation we started with:
Software = Instructions + Tools
"Your prompts are your codebase now. Version them. Test them. Pipe them."
What you can do today:
- First step: copilot -p "your question" -s
- Add a pipe: git diff | copilot -p "review" -s
- Chain it: Add | copilot -p "next step" -s
- Save it: Create one .copilot.md file for something you do daily
- Share it: Commit it. Your team will thank you.
- GitHub Copilot CLI: gh copilot
- markdown-agent: github.com/johnlindquist/agents
- These slides + demos: [gist link]
"We started with an empty terminal. We ended with versioned, shareable AI pipelines. The pipe carried data for 50 years. Now it carries intelligence. Same operator. New power. Start building."
| Section | Time | Cumulative |
|---|---|---|
| Hook: Show destination, then reset | 2 min | 2 min |
| First Brick: -p and -s flags | 2 min | 4 min |
| The Pipe: Data meets intelligence | 3 min | 7 min |
| Chaining: Intelligence pipelines | 3 min | 10 min |
| Model Selection: Escalation | 2 min | 12 min |
| Permissions: Trust boundaries | 2 min | 14 min |
| Real Pipeline: Build it together | 3 min | 17 min |
| Resolution: markdown-agent reveal | 2 min | 19 min |
| Closing | 1 min | 20 min |
- -p + -s unlocks scripting - Non-interactive, raw output
- Pipes work with AI - Same | operator, now with intelligence
- Chain AI calls - copilot | copilot | copilot is valid
- Pick your model - Haiku (fast), Sonnet (balanced), Opus (deep)
- Control permissions - --allow-tool sets boundaries
- Markdown captures patterns - Version control your AI workflows
- Terminal with large font (24pt minimum)
- Git repo with uncommitted changes for git diff demos
- Empty directory for "building from scratch" feel
- copilot CLI installed and authenticated
- ma (markdown-agent) installed
- Practice the typing speed - not too fast
- Backup: Have all commands in a notes file to copy/paste if needed
- Test all commands work before the talk
- Have a sample task.copilot.md for the opening hook
task.copilot.md:
---
model: claude-sonnet-4
silent: true
---
Analyze this codebase and provide:
1. A one-sentence summary
2. The main technologies used
3. One suggestion for improvement
!`git log --oneline -5`
!`find . -name "*.ts" | head -5 | xargs head -20`

review.copilot.md:
---
model: claude-sonnet-4
allow-tool:
- 'shell(git:*)'
- Read
silent: true
---
Review these code changes for:
1. Bug risks
2. Performance issues
3. Security concerns
Prioritize by severity.

pr-review.copilot.md:
---
model: claude-sonnet-4
silent: true
args:
- base_branch
- focus_area
---
## Gather Context
!`git diff {{ base_branch | default: "main" }}..HEAD`
## Analyze
Review changes with focus on {{ focus_area | default: "all areas" }}.
For each change:
- Risk level (low/medium/high)
- Category (bug fix, feature, refactor)
- Concerns if any
## Output
Format as a PR description with summary and risk table.
"That markdown file just ran an AI agent. It's not magic - it's patterns."
"Silent mode. Just the answer. Just text. And text can be piped."
"Same pattern as grep, awk, sed. You already know this."
"Three AI calls. One pipeline. Intelligence as a stream processor."
"Use the fast model to decide if you need the powerful model."
"The pipe doesn't grant permissions. You do."
"We just built an automated code review pipeline. From scratch. In 10 lines."
"Your prompts are your codebase now. Version them. Test them. Pipe them."
"We started with an empty terminal. We ended with versioned, shareable AI pipelines."
Empty Terminal (nothing)
↓
First Command (copilot -p)
↓
Silent Mode (pipeable output)
↓
First Pipe (data | intelligence)
↓
Chained Pipes (intelligence | intelligence)
↓
Model Selection (right tool for the job)
↓
Permission Boundaries (safe automation)
↓
Full Pipeline (everything together)
↓
The Problem (will you remember this?)
↓
markdown-agent (save it as a file)
↓
Versioned AI Workflows (the destination)
The audience journey:
- Start skeptical: "This is just another AI demo"
- Build curiosity: "Okay, that pipe thing is interesting"
- Feel competent: "I could actually do this"
- See the vision: "This changes how I work"
Talk outline for Microsoft AI Dev Days
Theme: Live Coding Walkthrough - Build AI Pipelines from Scratch
Duration: 15-20 minutes