require "openai"
# Initialise the OpenAI client
client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
# Helper method to call the API with a given model and prompt.
def call_llm(client, model:, prompt:, temperature: 0.7)
response = client.chat(
parameters: {
model: model,
messages: [{role: "user", content: prompt}],
temperature: temperature
}
)
response.dig("choices", 0, "message", "content").strip
end
# Define a simple LLM Node that supports forward, backward, and prompt update steps.
class LLMNode
  attr_accessor :name, :prompt, :output, :gradient

  def initialize(name, prompt)
    @name = name
    @prompt = prompt
    @output = nil
    @gradient = nil
    @call_count = 0
  end

  # Forward pass: call the LLM with the current prompt.
  def forward(client)
    # Use GPT-4o for the forward pass.
    @output = call_llm(client, model: "gpt-4o", prompt: @prompt)
  end
  # Backward pass: generate a textual gradient given a target output.
  def backward(client, target_output)
    # Create a feedback prompt for the backward engine.
    gradient_prompt = <<~PROMPT
      Given the original prompt:
      "#{@prompt}"
      and the LLM output:
      "#{@output}"
      and knowing that the desired output should be similar to:
      "#{target_output}"
      provide a concise textual gradient describing how to modify the original prompt so that it is more likely to produce the desired output.
    PROMPT

    # Use GPT-4o as the backward engine.
    @gradient = call_llm(client, model: "gpt-4o", prompt: gradient_prompt)
  end
  # Use the optimizer LLM to update the prompt based on the gradient.
  def update_prompt(client)
    optimizer_prompt = <<~PROMPT
      Given the current prompt:
      "#{@prompt}"
      and the following textual gradient:
      "#{@gradient}"
      propose a revised prompt that should generate output closer to the desired target.
    PROMPT

    @prompt = call_llm(client, model: "gpt-4o", prompt: optimizer_prompt)
  end
  private

  # Private variant of the helper used inside the class; it also logs each call.
  def call_llm(client, model:, prompt:, temperature: 0.7)
    @call_count += 1
    puts "Calling LLM (#{@call_count}) with model: #{model}, prompt: #{prompt}, temperature: #{temperature}"
    response = client.chat(
      parameters: {
        model: model,
        messages: [{role: "user", content: prompt}],
        temperature: temperature
      }
    )
    response.dig("choices", 0, "message", "content").strip
  end
end
# --- Example usage ---

# Create an LLM node with an initial prompt.
node = LLMNode.new("Role Generator", "List roles/titles of people who use cold outreach in B2B marketing?")
puts "Initial prompt:\n#{node.prompt}\n\n"

# Forward pass: get the current LLM output.
node.forward(client)
puts "LLM output:\n#{node.output}\n\n"

# Assume we have a target description for what we would like to see.
target_output = "An array of roles in array format, such as: ['Sales Development Representative', 'Account Executive', 'Marketing Manager']"

# Backward pass: generate a textual gradient.
node.backward(client, target_output)
puts "Textual gradient:\n#{node.gradient}\n\n"

# Update the prompt using the optimizer.
node.update_prompt(client)
puts "Updated prompt:\n#{node.prompt}\n\n"

# Optionally, repeat the forward-backward-update cycle in a loop for iterative improvements, as sketched below.
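# --- Iterative refinement (sketch) ---
# A minimal sketch of that loop. The fixed iteration count of 3 is an illustrative
# assumption, not part of the original example; in practice you would stop once the
# output is judged close enough to the target.
3.times do |i|
  node.forward(client)
  node.backward(client, target_output)
  node.update_prompt(client)
  puts "Iteration #{i + 1} prompt:\n#{node.prompt}\n\n"
end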