Outline of agentic AI and generative AI toolset

Workshop for Agentic AI

Things to integrate in the future

Overview

This workshop will run for 4 or 5 weeks, depending on the engagement level of the attendees and how long the group projects take.

The purpose of this workshop is to coach people on the basic usage of an agentic AI tool integrated with an IDE. You will be showing the learners how to use two tools, plus an optional third:

  1. The Cursor IDE using the latest free model. Update below when this changes.
    • Current free-use model: gpt-4.1
  2. (Optional) Claude Code, if you are willing to use your existing Claude/Anthropic subscription.
  3. RooCode with the RooFlow addition. The LLM used in the workshop is the Gemini free tier, since the basic model is free to use.

Your weekly goals are detailed below. How you achieve these goals is up to you.

Section 1: AI Fundamentals and Cursor IDE Introduction

Overall Goals

  • Discuss how LLMs are simply prediction generators, not artificial intelligence, and how humans still have to make judgements based on the prediction
  • Introduce students to LLM fundamentals and AI-assisted development concepts
  • Get students comfortable with Cursor IDE using the GPT 4.1 model and basic prompt engineering
  • Demonstrate the importance of context in professional AI-assisted development

Specific Tasks

  1. AI/LLM Fundamentals Introduction
  • Explain what LLMs are and how they work at a basic level
  • Discuss model capabilities and limitations
  • Compare different LLM providers (Gemini, GPT, Claude) and their strengths
  • Explain why GPT 4.1 is being used for this workshop (cost-effectiveness)
  2. Cursor IDE Setup and Usage
  • Demo the Cursor IDE and its AI features
  • Walk through installation and initial setup with GPT 4.1
  • Show inline AI suggestions, chat features, and code generation capabilities
  • Demonstrate cost considerations and usage limits
  3. Context and Prompt Engineering Basics
  • Explain the importance of context for professional development
  • Demonstrate low-context vs. high-context prompts with side-by-side comparisons in Cursor
  • Use a test code block example to show how context engineering impacts code quality (see the sketch after this list)
  • Show different ways to interact with AI in Cursor (inline, chat, selection-based)
  4. Agentic Features
  • Overview of the Memories feature
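
If a concrete example helps for the side-by-side comparison, something like the following works; the function and both prompts are only illustrations, not required workshop materials:

```typescript
// Example snippet learners might select in Cursor before prompting.
export function calculateTotal(items: { price: number; quantity: number }[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

// Low-context prompt (vague, forces the model to guess intent):
//   "Add discounts to this."
//
// High-context prompt (states requirements, constraints, and conventions):
//   "Extend calculateTotal to accept an optional discountRate between 0 and 1.
//    Apply the discount after summing, round to two decimal places, and throw a
//    RangeError for values outside 0-1. Keep the function pure and typed."
```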

Instructor Guidance

  • Demo Cursor so learners understand what to expect, and what the goal is for the course.
  • Show the benefits of IDE-integrated AI vs. standalone tools. Talk about how important context is for a professional software developer expecting high quality output.
  • Give a demonstration showing the difference in quality with a low-context prompt and a high-context prompt within Cursor.
  • Show different interaction modes (chat, inline suggestions, code completion).
  • If you have time, break the attendees into teams of 2 or 3 and have them collaborate on building a simple web application using Cursor's AI features. Just decide on something simple, like a task management application—it doesn't really matter.

Section 2: Advanced AI Tools - RooCode and RooFlow Introduction

Overall Goals

  • Introduce more sophisticated agentic AI tools
  • Demonstrate the benefits of specialized AI development modes
  • Establish security best practices for API usage
  • Show advanced context management capabilities

Specific Tasks

  1. RooCode Setup and Security

    • Demo the RooCode tool and its integration with VS Code
    • Walk through installation and initial setup
    • Demonstrate API key security practices (environment variables, .gitignore); see the sketch after this list
    • Show cost comparisons between different LLM providers
    • Compare RooCode capabilities to Cursor's approach
  2. RooCode Modes and RooFlow Introduction

    • Complete the high-context prompt exercise from Section 1 if not finished
    • Explain token limits and context windows and their ramifications
    • Review each RooCode custom mode and their intended use cases
    • Introduce RooFlow and its purpose for maintaining context across conversations
    • Explain how the local memory bank works and what gets stored
  3. Architect Mode Implementation

    • Introduce Architect mode and its role in development strategy
    • Demonstrate interactive strategy development for a new feature
    • Use Code mode to implement feature to show quality improvement
    • Address LLM hallucinations when they occur during the process
    • Emphasize the critical importance of code review and debugging
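
For the API key security demo, a sketch along these lines can anchor the discussion; the variable name, .env layout, and error handling are illustrative assumptions, not RooCode requirements:

```typescript
// Read the provider key from the environment instead of hard-coding it in source.
// The key lives in a local .env file that is listed in .gitignore, e.g.:
//   .env        ->  GEMINI_API_KEY=...
//   .gitignore  ->  .env
const apiKey = process.env.GEMINI_API_KEY;

if (!apiKey) {
  throw new Error("GEMINI_API_KEY is not set - add it to your local .env file");
}
```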

Group Work

  • Form teams to practice Architect mode with different feature scenarios
  • Have teams present their strategy documentation before coding begins
  • Create a friendly competition element with team presentations and winner selection

Instructor Guidance

  • Explain token limits and context windows; beginners don't understand the ramifications of having very long conversations with an LLM (a rough token-estimate sketch follows this list).
  • Introduce RooCode and show how it differs from Cursor's approach.
  • Demonstrate API key security practices.
  • Introduce RooFlow and its memory bank feature, and demonstrate how it increases context across multiple prompts.
  • Introduce and use Architect mode interactively to develop strategy and documentation for a new feature.
  • Use Code mode to implement the feature to demonstrate quality improvement. During this process, if the LLM hallucinates, discuss it with the attendees.
  • Make sure you have an extended conversation in Architect mode to highlight how important validation of the strategy is.
  • Once the Code phase is complete, make sure you review and debug any remaining issues. Be very clear that this part of the process is critical and should never be skipped.
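
To make token limits concrete, you can show a rough estimate; the four-characters-per-token figure is only a common rule of thumb for English text, and real tokenizers vary by model:

```typescript
// Very rough token estimate: ~4 characters per token for English-like text.
// Real tokenizers vary, so treat this only as a teaching approximation.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// A long back-and-forth conversation accumulates: every prior message is resent
// as context, so the window fills up even if each individual prompt is short.
const conversation = ["...previous prompts and responses..."]; // placeholder
const used = conversation.reduce((sum, msg) => sum + estimateTokens(msg), 0);
console.log(`Approximate tokens used so far: ${used}`);
```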

Section 3: Iterative Development

Overall Goals

  • Teach iterative development strategies using multiple AI modes
  • Demonstrate how to maintain oversight and control in AI-assisted development
  • Introduce advanced orchestration concepts

Specific Tasks

  1. Iterative Development with Ask Mode
    • Demonstrate how Architect and Ask modes work together for stage-by-stage development. Show how to prompt the LLM to provide one step at a time rather than one large output with a numbered list of tasks
    • Show how this approach keeps developers involved in the process instead of having the agent do all work
  2. Cross-task context
    • Show learners how to activate the memory bank feature of RooFlow
    • Show them the markdown files that get created

Group Work

  • Have teams work through iterative development exercises using Ask mode
  • Organize discussions about when to use different modes

Instructor Guidance

Use Architect and Ask mode to demonstrate how the two can be used to iteratively build a project in stages so that each step is implemented and the code can be reviewed before moving on to the next. This can be a wonderful strategy for beginners since they are involved in the process instead of having the agent do all of the work with no oversight or interaction. Give them another exercise where they will build a small project with Ask mode. When you develop the strategy, store it in a file called projectBrief.md.

Then create a new task, enter this prompt, and the learners can watch what it generates.

Activate the memory bank for this project. You can use @/projectBrief.md file as initial context

Then show them the files that get created in the memory-bank directory in their project. Also make sure to show them how to add the memory-bank directory to the .gitignore file.

Assign the learners the task of iteratively building a personal portfolio site with the Astro framework, using Architect and Ask modes.

After the learners have worked on their task, close the breakout rooms 15-20 minutes before the end of the session. Create a new task and enter the prompt:

Section 4: Working with Existing Codebases

Overall Goals

  • Teach students how to use agentic AI tools with existing projects
  • Demonstrate strategies for understanding and modifying legacy code
  • Show how to maintain code consistency and style when adding features
  • Practice debugging and refactoring existing code with AI assistance

Specific Tasks

  1. Codebase Analysis and Understanding

    • Demonstrate how to use Ask mode to analyze existing code structure
    • Show techniques for understanding unfamiliar codebases with AI assistance
    • Explain how to identify code patterns and architectural decisions
    • Practice documenting existing code functionality
  2. Feature Addition and Modification

    • Use Architect mode to develop a strategy for a new feature for the project you've decided to work on.
    • Demonstrate how to maintain existing code style and patterns
    • Show how to identify potential breaking changes before implementation
    • Develop a separate markdown file containing a list of clearly defined tasks to be implemented algorithmically
    • Provide that list to Orchestrator mode and instruct it to implement the tasks
    • Practice integrating new code with existing functionality
  3. Debugging and Refactoring

    • Use AI tools to identify and fix bugs in existing code
    • Show how to improve code quality incrementally
    • Practice writing tests for existing untested code (see the sketch after this list)
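
For the test-writing practice, a minimal sketch of the kind of test learners might add is below; the formatPrice helper is hypothetical, and teams can swap node:test for whatever runner their project already uses:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical untested helper pulled from the existing codebase.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

test("formatPrice renders whole-dollar amounts with two decimals", () => {
  assert.equal(formatPrice(500), "$5.00");
});

test("formatPrice handles arbitrary cent amounts", () => {
  assert.equal(formatPrice(1999), "$19.99");
});
```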

Group Work

  • Provide teams with a moderately complex existing codebase to analyze
  • Have teams add a new feature to the existing project while maintaining code consistency
  • Organize code review sessions focusing on integration quality

Instructor Guidance

Provide teams with an existing React application that has some technical debt and missing features. Have them use Ask mode to understand the codebase structure and identify areas for improvement. Then use Architect mode to plan additions or modifications that fit the existing patterns. Emphasize the importance of understanding before modifying, and show how AI can help with both analysis and implementation while maintaining consistency with existing code.

Section 5: AI-Driven Project

Overall Goals

  • Apply all learned skills to a comprehensive project
  • Demonstrate real-world application of agentic AI tools

Specific Tasks

  1. Capstone Project Implementation
  • Teams build a project equivalent to an NSS client-side capstone project:
    • Full CRUD implementation
    • Using json-server as the data source (see the sketch after this list)
    • React-based using either Vite or Next.js
    • Teams must pick a CSS Framework to use (Reactstrap, Bulma, Radix, etc.)
    • Must include tests written to validate AI-produced code
    • Export and submit all AI conversations used during development
    • Submit written lessons learned document reflecting on the AI development process
  • Dedicate the last hour for presentations
  • Declare a winner or let attendees vote on best project
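
If teams need a reference point for the CRUD requirement, a sketch of the calls against json-server follows; the tasks resource, port, and field names are examples only, since each team defines its own db.json:

```typescript
// Assumes json-server is running, e.g. `json-server --watch db.json --port 8088`,
// with a "tasks" collection in db.json.
const API = "http://localhost:8088";

// Create
await fetch(`${API}/tasks`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Write tests", done: false }),
});

// Read
const tasks = await fetch(`${API}/tasks`).then((res) => res.json());

// Update
await fetch(`${API}/tasks/1`, {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Write tests", done: true }),
});

// Delete
await fetch(`${API}/tasks/1`, { method: "DELETE" });
```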

Group Work

  • Form project teams and facilitate project development
  • Have teams present their projects and demonstrate their AI development process

Instructor Guidance

This is going to be a judgement call on your part. If you have time this week, consider showing the attendees the Context Portal MCP server feature of RooFlow. Explain how it uses a vector database (and what that means) instead of markdown files to work as a RAG tool. This week is dedicated to all of the teams using their agentic AI skills to build a project that is equivalent to an NSS client-side capstone project. Dedicate the last hour of this week to presentations, and either declare a winner yourself or let the attendees vote on who did the best job.

LLMs as prediction engines

At their core, LLMs are indeed sophisticated prediction engines. They're trained on vast amounts of text to predict the most likely next token (word, punctuation, etc.) given the preceding context. This process, repeated iteratively, generates coherent responses. The training involves learning statistical patterns in language - which words tend to follow others, how ideas connect, what constitutes grammatically and semantically appropriate responses.
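
A toy sketch of that loop can make the idea concrete; the predictNextToken stub stands in for the actual neural network, so everything here is illustrative rather than how any real model is implemented:

```typescript
// Stand-in for the model: in a real LLM this is a neural network scoring every
// possible next token; here it is just a stub so the loop's shape is visible.
function predictNextToken(context: string[]): string {
  return context.length < 8 ? "token" : "<end>";
}

function generate(prompt: string[], maxTokens: number): string[] {
  const tokens = [...prompt];
  for (let i = 0; i < maxTokens; i++) {
    const next = predictNextToken(tokens); // prediction conditioned on everything so far
    if (next === "<end>") break;           // stop at an end-of-sequence token
    tokens.push(next);                     // the prediction becomes part of the context
  }
  return tokens;
}

console.log(generate(["Explain", "LLMs"], 20));
```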

This prediction-based foundation has several important implications:

Limitations of pure prediction:

  • LLMs can generate plausible-sounding but factually incorrect information because they're optimizing for linguistic plausibility rather than truth
  • They may reproduce biases present in training data without understanding context or appropriateness
  • Novel reasoning beyond pattern recognition remains questionable - they excel at recombining learned patterns but may struggle with genuinely novel problem-solving
  • They lack persistent memory, goals, or understanding of real-world consequences

The human judgment layer:

This is where human oversight becomes crucial. Humans must evaluate LLM outputs for accuracy, appropriateness, ethical implications, and relevance to actual needs. We serve as the critical thinking layer that:

  • Fact-checks information against reliable sources
  • Considers context the model might miss
  • Applies domain expertise the model lacks
  • Makes value judgments about appropriate use
  • Understands real-world implications and consequences

The intelligence question:

Whether this constitutes "intelligence" depends partly on how we define the term. If intelligence requires consciousness, intentionality, or deep understanding of meaning, then LLMs may fall short. But if we define intelligence more functionally - as the ability to process information and generate useful, contextually appropriate responses - then the distinction becomes murkier.

LLMs are powerful cognitive tools that augment human intelligence rather than replace human judgment. They excel at pattern recognition, information synthesis, and generating starting points for human analysis, but they require human oversight to transform their predictions into reliable, ethical, and purposeful outcomes.
