Kaustubh kausmeows
"""Test Team HITL API flow with streaming everywhere.
Tests:
1. Team run with streaming - pauses when member tool requires confirmation
2. Continue run with streaming - member agent streams after continue (fix for #7003)
3. Events are stored in DB (store_events=True auto-enabled by router)
4. Member responses are stored (store_member_responses=True auto-enabled by router)
"""
import json
"""
Executor HITL via /continue API (Streaming)
============================================
Tests executor-level HITL when an agent inside a Step has a tool
with requires_confirmation=True, using the AgentOS /continue API endpoint
with streaming enabled.
Flow:
gather_data -> detailed_analysis (agent with HITL tool) -> report
"""
"""
Condition with Executor HITL via /continue API
==============================================
Tests executor-level HITL when an agent inside a Condition step has a tool
with requires_confirmation=True, using the AgentOS /continue API endpoint.
Flow:
gather_data -> Condition(evaluator=True) -> report
"""
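The Condition step's role can be sketched as a function that runs its branch only when the evaluator passes. The step names follow the docstring's flow; the implementation is a hypothetical stand-in, not agno's Condition class:

```python
# Illustrative sketch: gather_data -> Condition(evaluator) -> report,
# where the conditional branch contains the confirmation-gated step.
def gather_data():
    return {"env": "production", "approved": True}

def needs_review(data):
    # Evaluator: decides whether the conditional branch runs at all.
    return data["approved"]

def hitl_step(data):
    # Stands in for the agent whose tool requires confirmation.
    return {**data, "confirmed": True}

def condition(evaluator, steps, data):
    """Run the branch steps only when the evaluator returns True."""
    if evaluator(data):
        for step in steps:
            data = step(data)
    return data

def report(data):
    return f"deployed to {data['env']} (confirmed={data.get('confirmed', False)})"

def run_flow():
    data = gather_data()
    data = condition(needs_review, [hitl_step], data)
    return report(data)
```

When the evaluator returns False, the branch (and therefore the HITL pause) is skipped entirely, which is why the test pins `evaluator=True`.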
[
{
"input": {
"input_content": "Deploy the payments app version 2.1 to the production environment. Ensure all pre-deployment checks are performed, including testing and security compliance. Provide a summary of the deployment process and any issues encountered."
},
"model": "gpt-4o",
"tools": [
{
"result": "Successfully deployed payments v2.1 to production",
"metrics": null,
[
{
"input": {
"input_content": "Call the break_model tool"
},
"model": "gpt-4o",
"tools": [
{
"result": "Tool executed successfully.",
"metrics": {
[
{
"input": {
"input_content": "Deploy the payments app version 2.1 to production. Context: The user wants v2.1 deployed to the production environment. Assume you have access to the CI/CD pipelines, production cluster, and necessary credentials. Steps to perform and expected outputs:\n\n1) Confirm the code artifact for payments app v2.1 is available (build artifact or container image). Provide artifact SHA/tag and location (registry/repo).\n2) Run pre-deployment checks: verify health of production cluster, ensure there are no ongoing deployments or blocking incidents, check required service/DB migrations and schedule if needed.\n3) Trigger the production deployment using the established pipeline. If the pipeline requires manual approval, report that and provide instructions or request approval from the user.\n4) Apply any required database migrations as part of the deployment; run in a safe manner (e.g., migrate with rollback plan). If migrations are destructive, pause and ask for explicit
"""
Parser Model — Structured Output Debug
=======================================
Proves that:
- Primary model call: response_format is None (no structured output params)
- Parser model call: response_format contains the full JSON schema (native structured output)
"""
import json
"""
Custom Retriever with RunContext
================================
Demonstrates how to pass application-controlled data (like a project_id
or file_name) into a custom retriever using RunContext.dependencies.
This is useful when:
- Your retriever needs scoping (e.g., per-project, per-tenant)
- You want to pass runtime filters that the LLM shouldn't control
"""
"""
Test Team HITL Continue Run API
===============================
This script tests the team continue run endpoint by:
1. Starting an AgentOS server with a team that has a tool requiring confirmation
2. Creating a run via the API (which will pause)
3. Continuing the run via the /continue API endpoint
"""
"""
Example: CustomEvents with Teams (respond_directly=False)
This demonstrates how CustomEvents from member agents are streamed
to the user-facing stream even when respond_directly=False.
"""
import time
from agno.agent import Agent
from agno.team import Team
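The forwarding behavior the example demonstrates can be sketched with plain generators standing in for the member and team streams. Event names echo the example's vocabulary; the mechanics are illustrative, not agno's implementation:

```python
# Stdlib sketch: member custom events surface in the user-facing stream
# even when respond_directly=False (the team writes the final response).
def member_stream():
    yield {"event": "CustomEvent", "data": "progress 50%"}
    yield {"event": "RunContent", "data": "member answer"}

def team_stream(respond_directly=False):
    for ev in member_stream():
        if ev["event"] == "CustomEvent":
            yield ev  # custom events are always forwarded
        elif respond_directly:
            yield ev  # member content passes straight through
    if not respond_directly:
        yield {"event": "RunContent", "data": "team answer"}
```

With `respond_directly=False` the member's content is consumed by the team, but its `CustomEvent` still reaches the user before the team's own response.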