# We are heading toward a future where AI agents perform many of our daily
# routine tasks, and Anthropic, Microsoft Copilot Studio, and IBM all offer
# platforms for building them. Swarm provides a simple, clean framework for
# building and running agents in pure Python.
from swarm import Swarm, Agent

# Define your preferred model. If you don't specify one, the default is
# gpt-4o, so you can override it with a smaller model if you want.
mini_model = "gpt-4o-mini"

# Initialize a Swarm client. This is the entry point for all agents; behind
# the scenes, it manages the API calls to the LLM provider.
client = Swarm()
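
# A hedged aside (not in the original gist): Swarm wraps the OpenAI Python
# client, so it reads OPENAI_API_KEY from the environment by default. You can
# also pass a preconfigured client explicitly, e.g.:
# from openai import OpenAI
# client = Swarm(client=OpenAI())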

# Define a handoff function that transfers the conversation to Agent B.
# Returning an Agent from a function tells Swarm to hand control to it.
# Note: agent_b is defined further down; this works because the function body
# is only evaluated when the function is called at runtime.
def transfer_to_agent_b():
    return agent_b
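
# A hedged sketch (not in the original gist): a symmetric handoff would let
# Agent B route the conversation back. Agent B would also need
# functions=[transfer_to_agent_a] for Swarm to expose it as a tool:
# def transfer_to_agent_a():
#     return agent_a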

# Entry-point agent for initiating the conversation. Fundamentally, an agent
# holds instructions, functions, and extra settings.
agent_a = Agent(
    name="Agent A",
    model=mini_model,
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],
)

# Agent B
agent_b = Agent(
    name="Agent B",
    model=mini_model,
    # instructions="Always reply with 'I am Amos'",
    instructions="Only speak in Finnish.",
)
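
# A hedged aside (not in the original gist): Swarm also accepts a callable for
# instructions, which it resolves per turn with the current context_variables.
# A sketch, assuming a "user_name" key is passed in:
# def finnish_instructions(context_variables):
#     user = context_variables.get("user_name", "there")
#     return f"Only speak in Finnish. Address the user as {user}."
# agent_b = Agent(name="Agent B", model=mini_model, instructions=finnish_instructions)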

# Run the conversation. client.run() is the main entry point: it takes a
# starting agent and a list of messages, and returns once the agents are done.
response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
    # To understand what happens behind the scenes, set debug to True to see
    # the raw API responses from the LLM provider. Here, the initial prompt is
    # sent to Agent A, Agent A calls transfer_to_agent_b to hand off the
    # conversation, and the switch to Agent B means the final reply shown to
    # the user comes from Agent B.
    # debug=True,
)

# Print the last message of the conversation (Agent B's reply, in Finnish,
# after the handoff).
print(response.messages[-1]["content"])
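
# A hedged sketch (not in the original gist): the returned Response also
# exposes the currently active agent, so you can continue the conversation
# from where it left off by appending a new user message:
# messages = response.messages
# messages.append({"role": "user", "content": "Mitä kuuluu?"})
# follow_up = client.run(agent=response.agent, messages=messages)
# print(follow_up.messages[-1]["content"])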