
AI Agent Swarm Musings

A place to dump my random thoughts/musings on AI agent swarms (aka: AI agents that can build and interact with other agents)

Table of Contents

- Musings
  - Exploring Advanced AI Agent Integration: Security, Sandboxing, and Hierarchical Permissions
  - ChatGPT agent + OAuth + OpenAI token management for 'assistant builder API'
  - Musing on enabling direct ChatGPT 'agent' communication via ChatGPT private APIs + proxy
- See Also

Musings

Exploring Advanced AI Agent Integration: Security, Sandboxing, and Hierarchical Permissions

G: Yeah... wtf... much useful, if not a little scary (I very much checked each request before I let it send it :p)

G: Though because you have a proxy in between, you can add layers of "don't do stupid" within that

J: Absolutely

G: Specifically thinking in the space of like.. how to sandbox it so that it can only interact with its own things, not anything else.

Off the top of my head, thinking an OpenAI organization could limit scope, as I don't think we can limit the scope of API keys much

J: Yea can also probably create a rule set on the proxy

G: Potentially... though that would also have potential risks of basically sandbox escaping if you do anything wrong or don't perfectly predict something being possible.

So better to limit it at a higher level if possible.

G: Another thought in this space. When creating agents you can set up to 16 key-value metadata pairs. Keys can be up to 64 chars, values up to 512 chars

https://platform.openai.com/docs/api-reference/assistants/object

So you could use that to add some level of 'hierarchy' / 'permissions' / etc to them.

A basic implementation could just be basic unverified text + blind trust, but that would be vulnerable to any new agent adding its own metadata (unless you block that and strip it at the proxy level for all agent create/edit/etc calls)
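
For example, a tiny sketch (in Python, with a hypothetical reserved key name) of the proxy stripping proxy-only metadata keys out of any inbound agent create/edit payload before forwarding it:

```python
# Hypothetical sketch: 'rbac' is a made-up reserved key, not anything
# OpenAI defines; the idea is just that only the proxy may set it.
RESERVED_METADATA_KEYS = {"rbac"}

def sanitize_agent_payload(payload: dict) -> dict:
    """Drop proxy-reserved keys from a create/modify-assistant request body."""
    metadata = payload.get("metadata") or {}
    payload["metadata"] = {
        k: v for k, v in metadata.items() if k not in RESERVED_METADATA_KEYS
    }
    return payload
```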

Going one step further though, you could do similar to how JWT tokens/local cookies/etc often do, which is basically adding a cryptographically signed hash over it that allows you to verify that the metadata is authentic. You could probably literally use JWT libs/mechanics for that even. Then the proxy can add its signed 'RBAC' metadata key to agents it creates, and check that + the 'identity' of the caller to decide whether it's allowed to interact with that agent, etc.
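
A minimal sketch of that, assuming PyJWT for the signing/verification; the 'rbac' metadata key and the claim names are illustrative, not anything OpenAI defines:

```python
# Minimal sketch using PyJWT (pip install pyjwt). The signing key lives
# only on the proxy, so agents can't mint their own valid tokens.
import jwt  # PyJWT

PROXY_SIGNING_KEY = "replace-with-a-real-secret"

def signed_rbac_metadata(role: str, allowed_targets: list[str]) -> dict:
    """Metadata the proxy attaches when it creates an agent."""
    claims = {"role": role, "targets": allowed_targets}
    token = jwt.encode(claims, PROXY_SIGNING_KEY, algorithm="HS256")
    # NB: metadata values are capped at 512 chars, so keep the claims small.
    return {"rbac": token}

def verify_rbac_metadata(metadata: dict) -> dict | None:
    """Verify the signed 'rbac' key on an agent before allowing interaction."""
    token = metadata.get("rbac")
    if token is None:
        return None  # unsigned agent: treat as untrusted
    try:
        return jwt.decode(token, PROXY_SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return None  # forged or tampered metadata
```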

Not currently sure how you would identify the 'caller' though.. particularly in agent -> agent comms.. But that could be something they need to provide, and could be baked into their system prompt/the way they talk to the proxy/etc.

(this is sort of getting into the space of whether agents are able to talk to the skynet agent themselves; if they can't, then this would maybe be getting a little overkill)

J: My thinking so far is around a central parent GPT that allows quick build/calling/routing/editing

G: Mm, makes sense. Potentially separate building and calling at some stage, as then you can prob limit risk/etc

ChatGPT agent + OAuth + OpenAI token management for 'assistant builder API'

J: Want me to try oauth actually?

G: How you gonna set it up?

I don't believe we can create OAuth secrets for an OpenAI account, so it would need to be something like Google login or whatever.

At which point it would no doubt auth me and let me in, but then you'd still need to handle the OpenAI API key somewhere, which would prob mean your app needing to persist it somewhere for me, etc

But.. since you're talking through a proxy... just remove auth entirely

And somehow bake into the API spec that it needs to ask the user for the token and provide that somehow

Ideally it will work magically; but if not, you can just fake it by adding a new param to all the APIs in the spec that it needs to use

J: Could definitely add it as a new parameter

And have the proxy do the legwork to convert it to a header
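
A minimal sketch of that fallback, assuming a Flask-based proxy and a hypothetical `openai_api_key` parameter that the model is told to include:

```python
# Minimal sketch: accept the key as an ordinary parameter (body or query),
# then lift it into the Authorization header before forwarding upstream.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
OPENAI_BASE = "https://api.openai.com"

@app.route("/v1/<path:subpath>", methods=["GET", "POST"])
def forward(subpath: str):
    body = request.get_json(silent=True) or {}
    # The model supplies the key as a normal parameter...
    api_key = body.pop("openai_api_key", None) or request.args.get("openai_api_key")
    if api_key is None:
        return {"error": "missing openai_api_key parameter"}, 401
    # ...and the proxy converts it into the header OpenAI actually expects.
    upstream = requests.request(
        request.method,
        f"{OPENAI_BASE}/v1/{subpath}",
        json=body or None,
        headers={
            "Authorization": f"Bearer {api_key}",
            "OpenAI-Beta": "assistants=v1",  # beta header the Assistants API required at the time
        },
    )
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )
```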

G: Ya, that's the fallback though.

I'd remove it from the agent config and just try telling the chat to add the key

See if it will just behave without needing to get too explicit

Musing on enabling direct ChatGPT 'agent' communication via ChatGPT private APIs + proxy

G: Definitely cool idea. Haven't currently let it percolate to think if I have any particular real world use cases for it beyond vaguely gimmicky at this stage though.

It's sort of just doing what the current ChatGPT agent builder does.. just on the Assistants API (+ a few extra bits about cross-agent comms)

Which makes me think... you could just make ChatGPT agents aware of the ChatGPT private APIs too potentially..

Could be interesting to see if you can manually provide the auth key/whatever and then have it create a new chat from its own chat

Because then not paying API credits for it :p

Although.. you'll hit the cloudflare "are you a bot" stuff probably

You would need a paid service to ensure you don't get the bot-block stuff...

Or one of my hyperfocus deepdives tangentially related to reverse engineering web apps..

https://gist.github.com/0xdevalias/b34feb567bd50b37161293694066dd53#bypassing-cloudflare-akamai-etc

This being the simplest: https://github.com/FlareSolverr/FlareSolverr

(but 'heavier' as it uses a real browser rather than reverse engineering and cracking the JS test)
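
For reference, FlareSolverr runs as a local HTTP service (port 8191 by default) that you POST commands to; a quick sketch of calling it (the target URL is just a placeholder):

```python
# Quick sketch of asking a locally running FlareSolverr instance to fetch
# a Cloudflare-protected page via its real (headless) browser.
import requests

resp = requests.post(
    "http://localhost:8191/v1",
    json={
        "cmd": "request.get",
        "url": "https://example.com/protected-page",  # placeholder target
        "maxTimeout": 60000,  # ms to wait for the challenge to be solved
    },
)
payload = resp.json()
html = payload["solution"]["response"]   # the solved page's HTML
cookies = payload["solution"]["cookies"]  # e.g. cf_clearance, reusable in later requests
```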

Though this was an awesomely in-depth deep-dive blog post, and super interesting if you wanted to build stuff to reverse engineer/solve it: https://www.zenrows.com/blog/bypass-cloudflare

https://www.zenrows.com/blog/bypass-cloudflare#reverse-engineering-the-cloudflare-javascript-challenge

See Also
