When building a supervisor for a system that includes a large language model (LLM), the preferred approach usually depends on the requirements for reliability, transparency, and control.

Preferred Approach: Deterministic Logic Outside the Graph

  • Reason: Deterministic logic offers predictable and debuggable behavior, which is crucial when supervising, orchestrating, or enforcing policies around the use of LLMs.
  • Use Cases: Supervising task delegation, routing, retries, safety checks, compliance enforcement, logging, and fallback mechanisms.
  • Advantages:
    • Easier to test and verify.
    • Transparent decision-making.
    • Easier integration with existing systems.

Example (Python):

class Supervisor:
    """Deterministic supervisor that routes tasks to an LLM client."""

    def __init__(self, llm):
        self.llm = llm

    def route_task(self, task):
        # Plain branching logic: easy to test, trace, and extend with new task types.
        if task['type'] == 'summarization':
            return self.llm.summarize(task['content'])
        elif task['type'] == 'classification':
            return self.llm.classify(task['content'])
        else:
            raise ValueError(f"Unsupported task type: {task['type']}")
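
A minimal usage sketch: StubLLM below is a hypothetical stand-in that returns canned strings, representing any client that exposes the summarize and classify methods assumed above.

class StubLLM:
    """Hypothetical LLM client used only to exercise the routing logic."""
    def summarize(self, content):
        return f"Summary of: {content[:40]}..."

    def classify(self, content):
        return "category: general"

supervisor = Supervisor(StubLLM())
print(supervisor.route_task({"type": "summarization", "content": "Long article text..."}))
print(supervisor.route_task({"type": "classification", "content": "Refund request email"}))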

When to Use an LLM Supervisor

  • Reason: In cases requiring adaptive reasoning, natural language interpretation, or nuanced judgment, a second LLM can serve as a higher-level agent or "critic."
  • Use Cases: Multi-agent architectures, self-reflection, subjective scoring or ranking, creative validation.
  • Drawbacks:
    • Harder to guarantee consistent behavior.
    • Difficult to debug or trace errors.

Example (LLM as Supervisor):

system_prompt = "You are a supervisor. Decide if the assistant's answer is correct and helpful."
def llm_supervisor(supervisor_llm, user_query, assistant_answer):
    evaluation_input = f"User query: {user_query}\nAssistant answer: {assistant_answer}"
    return supervisor_llm.evaluate(system_prompt + "\n" + evaluation_input)
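
Because of the drawbacks above, free-form verdicts are hard to act on directly; deterministic code can map them to a hard decision. A minimal sketch, assuming llm_supervisor returns plain text such as "Accept" or "Reject":

def is_accepted(verdict):
    # Normalize the free-form verdict and look for an explicit accept signal.
    normalized = verdict.strip().lower()
    return normalized.startswith("accept")

# verdict = llm_supervisor(supervisor_llm, user_query, assistant_answer)
# final = assistant_answer if is_accepted(verdict) else None  # e.g. retry or escalate instead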

Hybrid Approaches

  • Use deterministic logic as the outer supervisor and invoke LLMs as subcomponents for decisions that require flexibility or understanding of natural language.
  • Example: Use rules to filter candidate outputs and an LLM to rank them based on task relevance, as sketched below.
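
A sketch of that example, assuming the LLM client exposes a rank method that returns the index of the most relevant candidate (the method name and return format are assumptions, not a specific library API):

def hybrid_select(llm, candidates, max_length=500, banned_terms=()):
    # Deterministic outer supervisor: enforce hard constraints with plain rules.
    filtered = [
        c for c in candidates
        if len(c) <= max_length and not any(term in c.lower() for term in banned_terms)
    ]
    if not filtered:
        raise ValueError("No candidate passed the deterministic filters")
    if len(filtered) == 1:
        return filtered[0]
    # Flexible inner step: let the LLM judge task relevance among the survivors.
    best_index = llm.rank(filtered)
    return filtered[best_index]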

Architecture Diagram

graph TD
  A[User Input] --> B[Deterministic Supervisor]
  B -->|Summarization| C[LLM - Summarizer]
  B -->|Classification| D[LLM - Classifier]
  B -->|Needs Evaluation| E[LLM Supervisor]
  E --> F[Decision: Accept / Reject / Revise]
  F --> G[Final Output]
  C --> G
  D --> G

This diagram represents a hybrid supervisory model, where deterministic logic handles routing and task delegation, while LLMs are used for complex evaluation and content generation.
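
A minimal sketch of that flow in code, reusing Supervisor and llm_supervisor from above; the needs_evaluation flag and the string check on the verdict are simplifying assumptions:

def handle_request(task, supervisor, supervisor_llm, needs_evaluation=False):
    # Deterministic supervisor routes the task to the appropriate LLM component.
    output = supervisor.route_task(task)
    if needs_evaluation:
        # LLM supervisor reviews the output; the caller decides how to act on the verdict.
        verdict = llm_supervisor(supervisor_llm, task['content'], output)
        if "reject" in verdict.lower():
            raise RuntimeError("Output rejected by LLM supervisor")
    return output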
