-
Install the AI Web Chat template:
dotnet new install Microsoft.Extensions.AI.Templates
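To confirm the templates are installed, you can list them from the CLI (the "ai" argument is just a search filter):
dotnet new list ai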
-
Open Visual Studio 2022 and click "Create a new project"
Speaker note: Emphasize this is using the new AI templates in Visual Studio 2022
-
Search for and select "AI Chat Web App" template
-
Set AI options:
- AI service provider: "GitHub Models"
- Vector store: "Qdrant"
- Check "Use Aspire orchestration"
Speaker note: Explain that GitHub Models is free for development, while Qdrant is a popular vector database for semantic search
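If you prefer the command line, the same project can be created with the template's short name (assumed here to be aichatweb); rather than guessing flag names, list the provider, vector store, and Aspire options with:
dotnet new aichatweb --help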
-
Create a GitHub token:
- Go to https://github.com/settings/tokens
- Generate new token (classic)
- Add a description like "AI Models Access"
- Click "Generate token"
- Copy the token (you won't be able to see it again)
Speaker note: Mention that this token lets you access GitHub's AI models without additional cost
-
Configure the connection string:
- Right-click GenAiLab.AppHost → "Manage User Secrets"
- Add the connection string:
{
  "ConnectionStrings:openai": "Endpoint=https://models.inference.ai.azure.com;Key=YOUR-GITHUB-TOKEN"
}
Speaker note: Point out that the endpoint structure is compatible with Azure OpenAI, making migration seamless
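For context, the generated AppHost wires that secret through to the web project roughly like this (a sketch; the project type and resource names are assumptions based on the template defaults):
var builder = DistributedApplication.CreateBuilder(args);

// "openai" matches the ConnectionStrings:openai entry in user secrets
var openai = builder.AddConnectionString("openai");

builder.AddProject<Projects.GenAiLab_Web>("aichatweb-app")
       .WithReference(openai);

builder.Build().Run();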
-
Ensure Docker Desktop is running
Speaker note: Docker is required for the Qdrant vector database container
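For reference, the AppHost models Qdrant as a container resource along these lines, which is why Docker must be running (a sketch; WithDataVolume keeps the vector data between runs):
// Qdrant runs as a Docker container managed by Aspire
var vectorDb = builder.AddQdrant("vectordb")
                      .WithDataVolume();

builder.AddProject<Projects.GenAiLab_Web>("aichatweb-app")
       .WithReference(vectorDb);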
-
Run the application (F5)
Speaker note: Watch for certificate trust prompts that may appear beneath the browser window if you're using a dev box or a fresh install
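If the machine has never trusted the ASP.NET Core HTTPS development certificate, you can get that prompt out of the way before the demo:
dotnet dev-certs https --trust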
-
Explore the .NET Aspire dashboard:
- Review the running services: aichatweb-app and vectordb
- Check endpoints and logs
Speaker note: Highlight how Aspire simplifies managing distributed applications and building AI apps with .NET
-
Test the AI functionality:
- Ask "What PDF documents do you have information about?"
- Try "Tell me about survival kits"
Speaker note: Demonstrate how the app combines semantic search with AI to provide context-aware responses
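Under the hood, the grounding works by exposing search as a tool the model can invoke. A minimal sketch, assuming a hypothetical semanticSearch.SearchAsync helper rather than the template's exact code:
using Microsoft.Extensions.AI;

// Wrap the search helper as a tool; UseFunctionInvocation() lets the model call it when needed
AIFunction searchTool = AIFunctionFactory.Create(
    (string query) => semanticSearch.SearchAsync(query),
    name: "search_documents",
    description: "Searches the ingested PDF chunks for relevant passages.");

var response = await chatClient.GetResponseAsync(
    messages,
    new ChatOptions { Tools = [searchTool] });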
-
SemanticSearchRecord.cs
- Shows the vector data structure with attributes for vector DB storage:
public class SemanticSearchRecord
{
    [VectorStoreRecordKey]
    public required Guid Key { get; set; }

    [VectorStoreRecordData(IsFilterable = true)]
    public required string FileName { get; set; }

    [VectorStoreRecordVector(1536, DistanceFunction.CosineSimilarity)]
    public ReadOnlyMemory<float> Vector { get; set; }
}
Speaker note: Point out how attributes define key, filterable fields, and vector dimensions
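For illustration only, a populated record could look like this; the embedding must match the 1536 dimensions declared on the Vector property (the file name and embedding variable are made up):
var record = new SemanticSearchRecord
{
    Key = Guid.NewGuid(),
    FileName = "conference-guide.pdf",   // filterable metadata
    Vector = embedding                   // 1536-dim ReadOnlyMemory<float> from text-embedding-3-small
};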
-
Web Program.cs - Services Setup - Key AI and vector services registration
// AI services
openai.AddChatClient("gpt-4o-mini").UseFunctionInvocation();
openai.AddEmbeddingGenerator("text-embedding-3-small");

// Vector collections
builder.Services.AddQdrantCollection<Guid, IngestedChunk>("data-genailab-chunks");
builder.Services.AddQdrantCollection<Guid, IngestedDocument>("data-genailab-documents");
Speaker note: Highlight how few lines are needed to set up complex AI services
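Those registrations surface through ordinary dependency injection. A minimal sketch of consuming them (the class is made up, not part of the template):
using Microsoft.Extensions.AI;

public class DocumentSummarizer(
    IChatClient chatClient,
    IEmbeddingGenerator<string, Embedding<float>> embeddingGenerator)
{
    public async Task<string> SummarizeAsync(string text)
    {
        // Chat completion through the registered gpt-4o-mini client
        var response = await chatClient.GetResponseAsync($"Summarize in one sentence: {text}");
        return response.Text;
    }

    public async Task<ReadOnlyMemory<float>> EmbedAsync(string text)
    {
        // Embedding through the registered text-embedding-3-small generator
        var embeddings = await embeddingGenerator.GenerateAsync([text]);
        return embeddings[0].Vector;
    }
}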
-
Chat.razor
- Blazor component using IChatClient:
[Inject]
private IChatClient ChatClient { get; set; } = default!;

private async Task HandleUserMessageAsync(string userMessage)
{
    // Prepend the system prompt, then send the conversation including the new user message
    var messages = new List<ChatMessage> { new(ChatRole.System, SystemPrompt) };
    messages.AddRange(chatHistory.Select(m => new ChatMessage(m.Role, m.Content)));
    messages.Add(new ChatMessage(ChatRole.User, userMessage));

    var response = await ChatClient.GetResponseAsync(messages);
}
Speaker note: Show how the abstraction simplifies AI interaction in UI components
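The same abstraction also supports streaming, which is how the chat UI can render tokens as they arrive; a sketch rather than the component's exact code (messages is the ChatMessage list built above):
await foreach (var update in ChatClient.GetStreamingResponseAsync(messages))
{
    currentResponse += update.Text;  // append each streamed fragment
    StateHasChanged();               // re-render the Blazor component
}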
-
PDF Ingestion - CreateChunksForDocumentAsync - The core document processing function
public async Task<IEnumerable<IngestedChunk>> CreateChunksForDocumentAsync(IngestedDocument document)
{
    var ingestedChunks = new List<IngestedChunk>();
    var chunks = SplitDocumentIntoChunks(document.Content);

    foreach (var (chunk, pageNumber) in chunks)
    {
        ingestedChunks.Add(new IngestedChunk
        {
            Key = Guid.NewGuid(),
            DocumentId = document.DocumentId,
            Text = chunk,
            PageNumber = pageNumber
        });
    }

    return ingestedChunks; // Vector embeddings are generated automatically when the chunks are stored
}
Speaker note: Explain how chunking improves relevance and vectors are auto-generated
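The splitter itself isn't shown on the slide; a naive stand-in for SplitDocumentIntoChunks, just to make the data shape concrete (chunk size and the page estimate are arbitrary assumptions):
static IEnumerable<(string Chunk, int PageNumber)> SplitDocumentIntoChunks(
    string content, int chunkSize = 1000, int charsPerPage = 3000)
{
    for (int start = 0; start < content.Length; start += chunkSize)
    {
        int length = Math.Min(chunkSize, content.Length - start);
        int pageNumber = start / charsPerPage + 1;   // rough page estimate from character offset
        yield return (content.Substring(start, length), pageNumber);
    }
}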
- Use conference-specific PDFs instead: Before running the application for the first time, replace the PDFs in ChatApp.Web\wwwroot\Data with PDFs that are specific to the conference or local attractions, then ask questions about that content.
Note that if you're using GitHub Models, you can hit your token limit quickly if you ingest too many large PDFs. Use an Azure OpenAI deployment if you want to ingest more PDFs.
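Swapping providers should only require changing the secret; assuming the same connection-string shape as above, an Azure OpenAI entry would look roughly like this (resource name and key are placeholders):
{
  "ConnectionStrings:openai": "Endpoint=https://YOUR-RESOURCE.openai.azure.com/;Key=YOUR-AZURE-OPENAI-KEY"
}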
- Show off the Qdrant dashboard: The Qdrant resource has a dashboard link in the .NET Aspire dashboard where you can explore the vector data. Practice this beforehand to get familiar with it.