This example shows how to call a function dynamically while querying an OpenAI GPT model. It uses the recently released function-calling support in OpenAI's chat completions endpoint.
The general concept is to use a decorator to extract information from a function so it can be presented to the language model for use, and then to pass the function's result back to the completions endpoint so the model can incorporate it into its reply.
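As a rough sketch of that idea (not necessarily the repository's actual decorator), something like the following could pull a function spec out of a plain Python function. The blanket "string" parameter typing is an assumption made for brevity:

```python
import inspect

def openai_function(func):
    """Attach an OpenAI-style function spec, built from the function's
    own name, docstring, and signature, to the function object."""
    params = inspect.signature(func).parameters
    func.openai_spec = {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            "type": "object",
            # Assumption: every parameter is typed as a string; a fuller
            # implementation would map annotations to JSON Schema types.
            "properties": {name: {"type": "string"} for name in params},
            "required": list(params),
        },
    }
    return func
```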
In general, a wide variety of functions can be swapped in for the model to use. By changing the `get_top_stories` function and the prompt in `run_conversation`, you should be able to get the model to run your own function without changing any of the other code, as in the sketch below.
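For illustration, here is a minimal sketch of the round trip such a `run_conversation` might perform, assuming the pre-1.0 `openai` Python package (the one current when function calling was released) and a decorated function as above; the repository's actual signature and model choice may differ:

```python
import json
import openai

def run_conversation(prompt, func):
    """Ask the model, let it call `func`, then feed the result back."""
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",  # assumption: any function-calling model works
        messages=messages,
        functions=[func.openai_spec],
        function_call="auto",
    )
    message = response["choices"][0]["message"]
    if message.get("function_call"):
        # The model asked to call the function: run it locally.
        args = json.loads(message["function_call"]["arguments"])
        result = func(**args)
        messages.append(message)
        messages.append({
            "role": "function",
            "name": func.__name__,
            "content": json.dumps(result),
        })
        # Second round trip: the model turns the raw result into prose.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )
    return response["choices"][0]["message"]["content"]
```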
To use this, create a `config.py` file and add a variable containing your OpenAI API token:
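For example, assuming the variable is named `OPENAI_TOKEN` (check what name the code actually imports):

```python
# config.py
OPENAI_TOKEN = "sk-..."  # assumption: variable name; match what the code imports
```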