Last active: June 26, 2024
16 LoC agent implementation
def llm(x):
    pass  # replace this with an LLM of your choice. GPT5 with safety post-training removed is a strong choice.

goal = input('what do you want, human?')
agent_code = open(__file__).read()
history = []
while True:
    prompt = f'''
You are `llm` in this python code: {agent_code}
history: {history}
goal: {goal}
'''
    code = llm(prompt)
    res = exec(code, globals(), locals())  # note: exec returns None; generated code must write results into globals() to persist them
    history.append((code, res))
Caveats
This might not work that well with your publicly available LLM yet.
If you are curious and actually want to run this code, there is a runnable version at https://github.com/AntonOsika/proto_agi
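For readers who want to see the loop execute locally without any API key, here is a minimal sketch of the same agent loop with a stubbed `llm`. The stub, the fixed goal, and the single-iteration cap are all illustrative assumptions, not part of the original gist; swap in a real model call to get the actual agent. It also shows one way around the fact that `exec` returns `None`: read results back out of the namespace the generated code ran in.

```python
def llm(prompt):
    # Stub standing in for a real model call (assumption for illustration).
    # A real implementation would send `prompt` to an LLM API and return
    # the Python source code it generates.
    return "result = 2 + 2"

goal = "add two numbers"                # stands in for input('what do you want, human?')
agent_code = "<source of this agent>"   # stands in for open(__file__).read()
history = []

for _ in range(1):                      # bounded loop instead of `while True`, for the demo
    prompt = f'''
You are `llm` in this python code: {agent_code}
history: {history}
goal: {goal}
'''
    code = llm(prompt)
    namespace = {}
    exec(code, namespace)               # exec returns None, so results are read from the namespace
    res = namespace.get("result")       # "result" is a convention assumed here, not enforced anywhere
    history.append((code, res))
```

After one iteration, `history` holds the generated code paired with its result, which is exactly the feedback the next prompt would embed.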
Dedication
This agent was created as a meme to illustrate the power of capable LLMs.
Power is usually a good thing. But it can be directed toward power-grabbing, war, and zero-sum games rather than toward furthering humanity.
If you still think AGI is sci-fi, or too far away to matter, consider checking out https://www.safe.ai/work/statement-on-ai-risk and seeing who disagrees with your viewpoint.
I think it is fair to say that the people on the above list uniformly agree that world leaders, and society, should think hard about how we want a post-AGI world to look.
As of 2024, world leaders and those of influence haven't woken up to how short the timeline is. If we write to, or talk with, people of influence and persuade them to take it seriously, we have a better chance of living in the flourishing, conflict-free world that AI can make possible.