intro-prompt-programming.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNALnfboTOLk52OTD+zNhOP",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/ruvnet/7c84dea9c360f8ad03c4734282e370e2/intro-prompt-programming.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Introduction to Prompt Programming**\n",
"### by @rUv, just because.\n",
"\n",
"Prompt programming represents a significant evolution in the way developers interact with computers, moving beyond traditional syntax to embrace more dynamic and interactive methods. Traditionally, programming involved writing explicit code with hardcoded inputs, such as defining a function to perform a basic addition. This approach, while straightforward, lacks flexibility and adaptability, especially in scenarios requiring user interaction or real-time data processing.\n",
"\n",
"The advent of AI and language models has introduced new paradigms that significantly enhance programming capabilities. These advancements allow for the integration of structured outputs and user prompts, making programs more interactive and responsive. By incorporating prompts, developers can create software that adapts to user inputs, offering a more personalized and dynamic user experience.\n",
"\n",
"The use of AI models, particularly large language models (LLMs), has opened up possibilities for automated code generation and structured data handling. These models can interpret natural language prompts to generate code, validate data, and produce structured outputs, reducing the need for manual coding and enhancing efficiency. This shift towards AI-driven programming paradigms not only streamlines development processes but also democratizes coding, making it more accessible to non-experts. As a result, prompt programming is poised to transform the landscape of software development, enabling more sophisticated and adaptive applications.\n",
"\n",
"## Key Differences\n",
"\n",
"| Traditional Syntax | Structured Output with Prompts | Advanced AI Methods |\n",
"|-------------------------------------|-----------------------------------------|-----------------------------------------|\n",
"| Inputs are hardcoded | Inputs are received from the user | Inputs and outputs can be structured |\n",
"| No user interaction | Requires user interaction | Can leverage AI for code generation |\n",
"| Simpler and faster for testing | More dynamic and user-friendly | Offers validation, execution, and more |\n",
"\n",
"\n",
"## Traditional Syntax\n",
"Traditional programming involves writing explicit code with predefined logic and hardcoded inputs. This approach is straightforward and efficient for tasks with fixed requirements but lacks flexibility when dealing with dynamic or user-driven scenarios. It requires manual coding for each specific task, which can be time-consuming and less adaptable to changes.\n",
"\n",
"In traditional syntax, the function is defined and called directly with arguments:\n",
"\n",
"```python\n",
"# Traditional Syntax Example: Basic Addition\n",
"\n",
"def add(a, b):\n",
" return a + b\n",
"\n",
"result = add(5, 3)\n",
"print(result) # Output: 8\n",
"```\n",
"\n",
"This approach is straightforward, with inputs provided directly in the code.\n",
"\n",
"## Structured Output with Prompts\n",
"This paradigm introduces user interaction through prompts, allowing programs to receive inputs dynamically. It makes software more interactive and user-friendly, as the program can adapt its behavior based on user inputs. This approach is beneficial for applications where user preferences or real-time data need to be considered, enhancing the versatility of the software.\n",
"\n",
"In a structured output with prompts approach, the program interacts with the user to get inputs:\n",
"\n",
"```python\n",
"# Structured Output with Prompts Example: Basic Addition\n",
"\n",
"prompt_a = 'Enter the first number: '\n",
"prompt_b = 'Enter the second number: '\n",
"\n",
"a = int(input(prompt_a))\n",
"b = int(input(prompt_b))\n",
"\n",
"result = a + b\n",
"print(result)\n",
"```\n",
"\n",
"This method involves user interaction, where inputs are received through prompts. Note that this code might not execute in environments that do not support interactive input, such as some online interpreters.\n",
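"\n",
"Because `int(input(...))` raises a `ValueError` on non-numeric text, interactive prompts usually benefit from validation. A minimal sketch (the helper name `parse_int` is illustrative, not part of any standard library):\n",
"\n",
"```python\n",
"# Hedged sketch: validate user-supplied text before converting it to int.\n",
"def parse_int(text):\n",
"    \"\"\"Return the integer value of text, or None if it is not a valid integer.\"\"\"\n",
"    try:\n",
"        return int(text.strip())\n",
"    except ValueError:\n",
"        return None\n",
"\n",
"print(parse_int('5'))     # 5\n",
"print(parse_int('oops'))  # None\n",
"```\n",
"\n",
"In an interactive session, `parse_int(input('Enter a number: '))` can be retried in a loop until it returns a value.\n",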
"\n",
"## Advanced Methods with AI and LLMs\n",
"Advancements in AI, particularly with large language models (LLMs), have led to new programming paradigms that leverage AI for code generation, structured inputs and outputs, and dynamic execution. These methods allow for:\n",
"\n",
"- **Automated Code Generation**: AI can generate code snippets or entire programs from high-level descriptions or natural language prompts, reducing the need for manual coding.\n",
"- **Structured Data Handling**: AI can process and produce structured data, enabling complex data manipulation and integration tasks.\n",
"- **Enhanced Validation and Execution**: AI can validate inputs and outputs to ensure data integrity and correctness, and can execute tasks dynamically as requirements change.\n",
"\n",
"These advancements provide greater flexibility and power, enabling more sophisticated programming paradigms that can adapt to complex and evolving needs. They open up possibilities for more intuitive and efficient software development processes, where AI assists in automating repetitive tasks and enhancing decision-making capabilities.\n",
"\n",
"### JSON Mode with Structured Output\n",
"\n",
"Using JSON mode ensures the model outputs valid JSON, which can then be parsed programmatically:\n",
"\n",
"```python\n",
"from openai import OpenAI\n",
"\n",
"client = OpenAI()\n",
"prompt = \"Generate a JSON object with your name and age.\"\n",
"\n",
"response = client.chat.completions.create(\n",
"    model=\"gpt-4o-2024-08-06\",\n",
"    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
"    response_format={\"type\": \"json_object\"}\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n",
"```\n",
"\n",
"### Function Calling with Structured Outputs\n",
"\n",
"Function calling allows for structured output that adheres to a specific schema:\n",
"\n",
"```python\n",
"from pydantic import BaseModel\n",
"\n",
"class PersonInfo(BaseModel):\n",
" name: str\n",
" age: int\n",
"\n",
"completion = client.beta.chat.completions.parse(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" messages=[{\"role\": \"user\", \"content\": \"Provide your name and age.\"}],\n",
" response_format=PersonInfo\n",
")\n",
"\n",
"person_info = completion.choices[0].message.parsed\n",
"print(person_info)\n",
"```\n",
"\n",
"### Declarative Code Generation\n",
"Declarative code generation uses AI to create functional code based on natural language descriptions. This method allows developers to specify what they want to achieve without detailing how to implement it, enabling rapid prototyping and reducing development time. By translating high-level descriptions into executable code, AI can assist in automating repetitive coding tasks, freeing developers to focus on more complex problem-solving.\n",
"\n",
"The model can generate and execute functional code based on text input:\n",
"\n",
"```python\n",
"prompt = \"Write a Python function that adds two numbers and returns the result.\"\n",
"\n",
"response = client.chat.completions.create(\n",
"    model=\"gpt-4o-2024-08-06\",\n",
"    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
"    max_tokens=100\n",
")\n",
"\n",
"# Caution: exec() runs model-generated code; review it before executing.\n",
"# This assumes the model returns plain code without markdown fences.\n",
"exec(response.choices[0].message.content.strip())\n",
"result = add(5, 3)\n",
"print(result) # Should output 8\n",
"```\n",
"\n",
"### Natural Language Style Development\n",
"Natural language style development allows developers to interact with AI using conversational prompts to generate structured responses. This method bridges the gap between human language and machine-readable outputs, making programming more accessible to non-experts. By guiding AI with natural language, developers can create structured data outputs that align with specific requirements, enhancing the flexibility and adaptability of software solutions.\n",
"\n",
"Using natural language to guide the model in creating structured responses:\n",
"\n",
"```python\n",
"prompt = \"\"\"\n",
"Create a structured output with the following details:\n",
"- Title: 'AI in Healthcare'\n",
"- Author: 'Dr. Jane Doe'\n",
"- Summary: 'An exploration of AI applications in modern healthcare systems.'\n",
"\"\"\"\n",
"\n",
"response = client.chat.completions.create(\n",
"    model=\"gpt-4o-2024-08-06\",\n",
"    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
"    response_format={\"type\": \"json_schema\", \"json_schema\": {\n",
"        \"name\": \"article\",\n",
"        \"strict\": True,\n",
"        \"schema\": {\"type\": \"object\",\n",
"                   \"properties\": {\"title\": {\"type\": \"string\"}, \"author\": {\"type\": \"string\"}, \"summary\": {\"type\": \"string\"}},\n",
"                   \"required\": [\"title\", \"author\", \"summary\"],\n",
"                   \"additionalProperties\": False}}}\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n",
"```\n",
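"\n",
"Once the model returns JSON that conforms to the schema, it can be loaded directly into a Python dictionary. A minimal sketch (the literal string below stands in for `response.choices[0].message.content`):\n",
"\n",
"```python\n",
"import json\n",
"\n",
"# Hedged sketch: parse schema-conforming JSON output into a dict.\n",
"raw = '{\"title\": \"AI in Healthcare\", \"author\": \"Dr. Jane Doe\", \"summary\": \"An exploration of AI applications.\"}'\n",
"article = json.loads(raw)\n",
"print(article['title'])   # AI in Healthcare\n",
"print(article['author'])  # Dr. Jane Doe\n",
"```\n",
"\n",
"`json.loads` raises `json.JSONDecodeError` if the model's output is not valid JSON, which is one reason JSON mode and strict schemas are useful.\n",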
"\n",
"\n",
"\n",
"The choice between these approaches depends on the context and requirements of the program. Advanced methods using AI and LLMs provide greater flexibility and power, enabling more sophisticated programming paradigms."
],
"metadata": {
"id": "6PzE4TRU8W_z"
}
},
{
"cell_type": "markdown",
"source": [
"# Introduction: From Setup to Advanced Prompt-Based Programming\n",
"\n",
"This notebook guides you through a comprehensive journey of learning prompt-based programming, starting from essential setup steps and progressing to advanced examples. Here’s an overview of what you'll explore:\n",
"\n",
"1. **Install Requirements**: \n",
" Begin by installing necessary Python libraries such as `python-dotenv`, `colorama`, and `llama-index`. These libraries enable environment management, text formatting, and advanced AI functionalities.\n",
"\n",
"2. **Configure OpenAI API Key**: \n",
" Set up your environment by retrieving and configuring the OpenAI API key using the `OpenAI` library. This step ensures secure access to the OpenAI API for generating prompt-based responses.\n",
"\n",
"3. **Simple LLM Request Example**: \n",
" Learn how to make a basic request to the OpenAI API by sending a prompt and receiving a response. This example introduces you to the mechanics of interacting with language models in a conversational format.\n",
"\n",
"4. **LLM Response Configuration Generation**: \n",
" Dive into configuring parameters like temperature, max tokens, and frequency penalties to fine-tune the responses from the language model. This example also demonstrates how to generate dynamic BBS-style outputs with embedded settings.\n",
"\n",
"5. **Simple Programming with Prompts**: \n",
" Explore how to program with natural language prompts by generating functions that calculate mathematical operations like factorials. This section emphasizes the power of generating Python code dynamically from simple natural language prompts.\n",
"\n",
"6. **Advanced Examples with Structured Output**: \n",
" Engage with more complex scenarios such as:\n",
" - **Recipe Generation**: Learn how to use the language model to generate recipes with ingredients and step-by-step instructions using structured output.\n",
" - **Math Tutoring**: Generate detailed mathematical explanations and solutions in a structured format, guiding users step-by-step through problems.\n",
" - **Financial Analysis**: Implement advanced financial analysis and algorithm generation using concurrent API calls and structured outputs to simulate trading strategies.\n",
" - **Advanced Document Management**: Demonstrates an advanced document management system that leverages the OpenAI API to extract structured metadata from research papers. It uses Pydantic for data validation and networkx for dynamic graph management.\n",
"\n",
"Each example builds on the previous ones, showcasing how you can leverage natural language prompts to create increasingly complex programs and analyses. By the end of this notebook, you'll have a strong foundation in using language models to program interactively and generate structured outputs dynamically."
],
"metadata": {
"id": "1USRJvqiBUen"
}
},
{
"cell_type": "markdown",
"source": [
"### Install Requirements\n",
"\n",
"This code example installs the necessary Python libraries (`python-dotenv`, `colorama`, and `llama-index`) for working with OpenAI and other advanced tools in your Jupyter notebook. These packages enable environment variable management, terminal styling, and advanced AI indexing functionalities. Run the cell to install them if they are not already available."
],
"metadata": {
"id": "juXDXuSpBCkJ"
}
},
{
"cell_type": "code",
"source": [
"# @title Install requirements\n",
"# Install the required libraries (openai is included for the API examples below)\n",
"!pip install openai python-dotenv colorama llama-index"
],
"metadata": {
"id": "avMNRRg__woX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Configure OpenAI API Key\n",
"\n",
"This code example demonstrates how to configure the OpenAI API key in a Jupyter notebook using Colab's userdata module. It securely retrieves and sets the API key, then initializes the OpenAI client. A verification step checks if the API key is correctly set, ensuring that the notebook is ready for further API interactions."
],
"metadata": {
"id": "LcuHAATRA5j-"
}
},
{
"cell_type": "code",
"source": [
"# @title Configure OpenAI API Key\n",
"\n",
"# Import necessary libraries\n",
"import openai\n",
"from google.colab import userdata\n",
"from openai import OpenAI\n",
"\n",
"# Retrieve and set the API key\n",
"api_key = userdata.get('OPENAI_API_KEY')\n",
"openai.api_key = api_key\n",
"\n",
"# Initialize the OpenAI client, passing the API key\n",
"client = OpenAI(api_key=api_key)\n",
"\n",
"# Verify the API key is set (this is just for demonstration and should not be used in production code)\n",
"if openai.api_key:\n",
" print(\"OpenAI API key is set. Ready to proceed!\")\n",
"else:\n",
" print(\"OpenAI API key is not set. Please check your setup.\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Ki4GTBGj8X9B",
"outputId": "0e29d725-b8ff-4fa3-e6b7-4b96adf665c4"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"OpenAI API key is set. Ready to proceed!\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### LLM Response Configuration with Custom Prompt\n",
"\n",
"This example demonstrates configuring an API call to a large language model (LLM) using parameters such as temperature, max tokens, and frequency penalties. The prompt is dynamically constructed with these settings and sent to the LLM to generate a BBS-style welcome message. The structured output is then printed, showing how API calls can be tailored to produce diverse and creative responses."
],
"metadata": {
"id": "7hNm-WKzAzt8"
}
},
{
"cell_type": "code",
"source": [
"# @title LLM Response Configuration Generation\n",
"\n",
"# Define LLM parameters for the API call\n",
"\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\", \"gpt-4-turbo\"] {type:\"string\"}\n",
"\n",
"# The initial prompt (can be edited via the Colab UI)\n",
"base_prompt = \"Initializing Jupyter Notebook, respond activated with a unique and creative BBS style design (no ascii logo) and welcome message. Append LLM settings after.\" # @param {type:\"string\"}\n",
"\n",
"max_tokens = 300 # @param {type:\"integer\"}\n",
"temperature = 0.2 # @param {type:\"number\"}\n",
"n = 1 # @param {type:\"integer\"}\n",
"stop = None # @param {type:\"string\"}\n",
"top_p = 1.0 # @param {type:\"number\"}\n",
"frequency_penalty = 0.0 # @param {type:\"number\"}\n",
"presence_penalty = 0.0 # @param {type:\"number\"}\n",
"\n",
"# Insert parameters into the prompt using Python f-strings\n",
"prompt = f\"{base_prompt} (Max tokens: {max_tokens}, Temperature: {temperature}, Top_p: {top_p}, Frequency penalty: {frequency_penalty}, Presence penalty: {presence_penalty})\"\n",
"\n",
"# Make the API call to generate the response\n",
"response = client.chat.completions.create(\n",
" model=model,\n",
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
" max_tokens=max_tokens,\n",
" temperature=temperature,\n",
" n=n,\n",
" stop=stop,\n",
" top_p=top_p,\n",
" frequency_penalty=frequency_penalty,\n",
" presence_penalty=presence_penalty\n",
")\n",
"\n",
"# Extract the structured output\n",
"structured_output = response.choices[0].message.content.strip()\n",
"\n",
"# Print the structured output\n",
"print(\"Generated Code and Description:\\n\", structured_output)\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "P-XNbN35nSWv",
"outputId": "293d2c25-582f-495e-a027-8d8538e05ce5"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Generated Code and Description:\n",
" ```\n",
"╔══════════════════════════════════════════════════════════════════════════╗\n",
"║ ║\n",
"║ Welcome to PySphere! ║\n",
"║ ║\n",
"║ Embark on a journey of exploration and discovery within the realm of ║\n",
"║ data science and machine learning. Here, your ideas take shape, and ║\n",
"║ your code comes alive. ║\n",
"║ ║\n",
"║ Whether you're analyzing data, building models, or visualizing results, ║\n",
"║ PySphere is your canvas. Dive into the world of Jupyter Notebooks and ║\n",
"║ let your creativity flow. ║\n",
"║ ║\n",
"║ Remember, every great discovery starts with a single line of code. ║\n",
"║ ║\n",
"╚══════════════════════════════════════════════════════════════════════════╝\n",
"\n",
"LLM Settings:\n",
"- Max Tokens: 300\n",
"- Temperature: 0.2\n",
"- Top_p: 1.0\n",
"- Frequency Penalty: 0.0\n",
"- Presence Penalty: 0.0\n",
"```\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### Simple LLM Request Example with Response Formatting\n",
"\n",
"This example demonstrates how to use a prompt-based request to generate a response in a specific format. The code defines a prompt to translate English text to French using a large language model (LLM). The structured output is extracted and displayed, showcasing how language models can handle translation tasks efficiently."
],
"metadata": {
"id": "yvopUuAQAsUM"
}
},
{
"cell_type": "code",
"source": [
"# @title Simple LLM Request Example with Response Formatting\n",
"\n",
"# Define a new prompt for translating English text to French\n",
"prompt = \"Translate the following English text to French: 'Hello, how are you?'\" # @param {type:\"string\"}\n",
"\n",
"# Make the API call to generate the translation\n",
"response = client.chat.completions.create(\n",
"    model=\"gpt-4o-2024-08-06\",\n",
"    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
"    max_tokens=60,\n",
"    temperature=0.2\n",
")\n",
"\n",
"# Extract the structured output\n",
"structured_output = response.choices[0].message.content.strip()\n",
"\n",
"# Print the translated text\n",
"print(structured_output)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "scPmmK0k8n7e",
"outputId": "fa6a4853-facd-46f1-b502-529a5d2ef825"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Bonjour, comment ça va ?\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### Traditional Factorial Calculation Example\n",
"\n",
"This example showcases a traditional Python function that calculates the factorial of 22. The function uses a loop to multiply numbers from 1 to 22, resulting in the factorial. This approach highlights a straightforward method for performing mathematical operations in Python without relying on dynamic prompts or natural language processing."
],
"metadata": {
"id": "7LynpscoAk7o"
}
},
{
"cell_type": "code",
"source": [
"# @title Traditional Code Example: calculates the factorial of a number\n",
"def factorial_of_22():\n",
" result = 1\n",
" for i in range(1, 23): # Loop from 1 to 22 inclusive\n",
" result *= i\n",
" return result\n",
"\n",
"# Call the function and print the result\n",
"print(factorial_of_22())"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "KPaJexQBEy2n",
"outputId": "a6df80e9-2d45-4603-def6-af9a67ef0f42"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"1124000727777607680000\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### Simple Function Generation Using Prompts\n",
"\n",
"This example demonstrates how to use natural language prompts to dynamically generate Python functions, such as calculating factorials, roots, powers, and trigonometric functions. The code accepts user input to specify the type of function and the value to calculate, then generates and executes the corresponding Python function. The example highlights the flexibility of programming with natural language and structured output."
],
"metadata": {
"id": "CpJ69gqnAcJj"
}
},
{
"cell_type": "code",
"source": [
"# @title Simple Programming with Prompts calculates the {function} of a number using natural language\n",
"\n",
"# Define the input for the calculation\n",
"factorial_input = 20 # @param {type:\"integer\"}\n",
"\n",
"# Define the type of function to generate\n",
"function = \"factorial\" # @param [\"factorial\", \"root\", \"power\", \"logarithm\", \"sin\", \"cos\", \"tan\", \"exp\"] {type:\"string\"}\n",
"\n",
"# Define a prompt with the function and factorial_input values inserted\n",
"prompt = f\"Generate a Python function that calculates the {function} of {factorial_input}. The function should accept the number as a parameter. Do not include example usage code. Provide the function code and a brief description.\" # @param {type:\"string\"}\n",
"\n",
"# Make the API call to generate the function\n",
"response = client.chat.completions.create(\n",
"    model=\"gpt-4o-2024-08-06\",\n",
"    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
"    max_tokens=300,\n",
"    temperature=0.2\n",
")\n",
"\n",
"# Extract the response text (structured output)\n",
"structured_output = response.choices[0].message.content.strip()\n",
"\n",
"# Print the structured output (function code and description)\n",
"print(\"Generated Code and Description:\\n\", structured_output)\n",
"\n",
"# Extract the Python code block from the structured output\n",
"start = structured_output.find(\"```python\") + len(\"```python\")\n",
"end = structured_output.find(\"```\", start)\n",
"python_code = structured_output[start:end].strip()\n",
"\n",
"# Print the extracted Python code for debugging purposes\n",
"print(\"Extracted Python Code:\\n\", python_code)\n",
"\n",
"try:\n",
" # Execute the extracted Python code\n",
" exec(python_code)\n",
"\n",
" # Dynamically identify the function name\n",
" import re\n",
" function_name_match = re.search(r\"def\\s+(\\w+)\\s*\\(\", python_code)\n",
" if function_name_match:\n",
" function_name = function_name_match.group(1)\n",
" print(f\"Function '{function_name}' found and executed.\")\n",
"\n",
" # Call the dynamically identified function with factorial_input\n",
" result = eval(f\"{function_name}({factorial_input})\")\n",
" print(f\"{function} of {factorial_input}: {result}\")\n",
" else:\n",
" print(\"No function name could be identified in the generated code.\")\n",
"\n",
"except SyntaxError as e:\n",
" print(f\"Syntax Error in generated code: {e}\")\n",
"except Exception as e:\n",
" print(f\"An error occurred during execution: {e}\")\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ID8VlwFyEBCa",
"outputId": "5e237cc6-1a0b-4ed6-9721-889cc33ff4b9"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Generated Code and Description:\n",
" Here's a Python function that calculates the factorial of a given number. The function uses an iterative approach to compute the factorial:\n",
"\n",
"```python\n",
"def factorial(n):\n",
" if n < 0:\n",
" raise ValueError(\"Factorial is not defined for negative numbers.\")\n",
" result = 1\n",
" for i in range(2, n + 1):\n",
" result *= i\n",
" return result\n",
"```\n",
"\n",
"### Description:\n",
"- **Function Name**: `factorial`\n",
"- **Parameter**: `n` (an integer for which the factorial is to be calculated)\n",
"- **Returns**: The factorial of the given number `n`.\n",
"- **Logic**: \n",
" - The function first checks if the input number `n` is negative\n",
"Extracted Python Code:\n",
" def factorial(n):\n",
" if n < 0:\n",
" raise ValueError(\"Factorial is not defined for negative numbers.\")\n",
" result = 1\n",
" for i in range(2, n + 1):\n",
" result *= i\n",
" return result\n",
"Function 'factorial' found and executed.\n",
"factorial of 20: 2432902008176640000\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### Generate a JSON Response using OpenAI and JSON Mode\n",
"\n",
"This code snippet demonstrates how to use OpenAI's Chat Completions API to generate a JSON object with specified fields. By defining a prompt that asks the model to output a JSON object with realistic values for fields like 'name' and 'age', developers can automate data-generation tasks. The response is post-processed to remove any code block markers, yielding clean JSON that can be parsed directly. This approach is useful for applications that require structured data generation, reducing manual coding effort."
],
"metadata": {
"id": "xZ0bbjyXJhvN"
}
},
{
"cell_type": "code",
"source": [
"# @title Generate a JSON Response using OpenAI and JSON Mode\n",
"\n",
"# Define a prompt requesting the LLM to output a JSON object\n",
"prompt = \"Generate a JSON object with fields 'name' and 'age'. The values should be realistic examples.\" # @param {type:\"string\"}\n",
"\n",
"# Use the ChatCompletion API for the latest OpenAI models\n",
"response = client.chat.completions.create(\n",
" model=\"gpt-4o-2024-08-06\", # or any other suitable model\n",
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
" max_tokens=150,\n",
" temperature=0.2,\n",
" n=1\n",
")\n",
"\n",
"# Extract the response content\n",
"response_content = response.choices[0].message.content.strip()\n",
"\n",
"# Print the raw response for debugging purposes\n",
"print(\"Raw Response:\\n\", response_content)\n",
"\n",
"# Clean up the response by removing code block markers\n",
"if response_content.startswith(\"```json\"):\n",
"    response_content = response_content[len(\"```json\"):].strip()\n",
"if response_content.endswith(\"```\"):\n",
"    response_content = response_content[:-len(\"```\")].strip()\n",
"\n",
"# Parse the cleaned string into a Python dictionary\n",
"import json\n",
"data = json.loads(response_content)\n",
"print(\"Parsed JSON:\\n\", data)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "puzK4o1-tajy",
"outputId": "6949b135-fbba-406e-b7a5-6e495b33b00e"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Raw Response:\n",
" ```json\n",
"{\n",
" \"name\": \"Emily Johnson\",\n",
" \"age\": 29\n",
"}\n",
"```\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### Step-by-Step Math Problem Solving with Structured Output\n",
"\n",
"This example showcases the use of OpenAI to guide users through solving a math problem step by step. The model acts as a helpful tutor, providing explanations and solutions in a structured format. The output is parsed into clear steps and a final answer, making it easy to follow along and understand the problem-solving process."
],
"metadata": {
"id": "ftTqTiRRATpW"
}
},
{
"cell_type": "code",
"source": [
"# @title Step-by-Step Math Problem Solving with Structured Output\n",
"\n",
"from pydantic import BaseModel\n",
"from typing import List\n",
"\n",
"# Define the Pydantic models for structured output\n",
"class Step(BaseModel):\n",
" explanation: str\n",
" output: str\n",
"\n",
"class MathReasoning(BaseModel):\n",
" steps: List[Step]\n",
" final_answer: str\n",
"\n",
"# Define the parameters for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"system_message = \"You are a helpful math tutor. Guide the user through the solution step by step.\" # @param {type:\"string\"}\n",
"user_message = \"How can I solve 8x + 7 = -23?\" # @param {type:\"string\"}\n",
"\n",
"# Make the API call using the .parse() method and structured response\n",
"completion = client.beta.chat.completions.parse(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_message}\n",
" ],\n",
" response_format=MathReasoning,\n",
")\n",
"\n",
"# Extract the parsed message\n",
"math_reasoning = completion.choices[0].message\n",
"\n",
"# If the model refuses to respond, you will get a refusal message\n",
"if math_reasoning.refusal:\n",
" print(math_reasoning.refusal)\n",
"else:\n",
" # Print the parsed output\n",
" for step in math_reasoning.parsed.steps:\n",
" print(f\"{step.explanation}: {step.output}\")\n",
" print(f\"Final Answer: {math_reasoning.parsed.final_answer}\")\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "yle9pkCcuQ8N",
"outputId": "a681095a-2d1b-4611-89bf-00ba62257287"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"To solve for x, we need to isolate it on one side of the equation.: Start with the equation: 8x + 7 = -23.\n",
"Subtract 7 from both sides to move the constant term to the right side of the equation.: 8x + 7 - 7 = -23 - 7\n",
"Simplify the equation by performing the subtraction.: 8x = -30\n",
"Divide both sides by 8 to solve for x.: x = -30 / 8\n",
"Simplify the fraction by dividing both the numerator and the denominator by 2.: x = -15 / 4\n",
"Simplify -15 / 4 to decimal form, if preferred.: x = -3.75\n",
"Final Answer: x = -15/4 or x = -3.75\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### Generate a Recipe with Ingredients and Steps\n",
"\n",
"This code example demonstrates how to use OpenAI's capabilities to generate a detailed recipe, including ingredients and step-by-step instructions. By interacting with the model through natural language prompts, users can receive a structured output that guides them through creating a dish, such as a chocolate cake. The output is neatly organized into ingredients, steps, and the final dish."
],
"metadata": {
"id": "zLHY7e-KANkO"
}
},
{
"cell_type": "code",
"source": [
"# @title Generate a Recipe with Ingredients and Steps using Function Calling and Structured Output\n",
"\n",
"from pydantic import BaseModel\n",
"from typing import List\n",
"\n",
"# Define the Pydantic models for structured output\n",
"class Step(BaseModel):\n",
" step: str\n",
" description: str\n",
"\n",
"class RecipeCreation(BaseModel):\n",
" ingredients: List[str]\n",
" steps: List[Step]\n",
" final_dish: str\n",
"\n",
"# Define parameters for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"system_message = \"You are a helpful chef. Guide the user through creating a dish step by step.\" # @param {type:\"string\"}\n",
"user_message = \"Can you help me make a chocolate cake?\" # @param {type:\"string\"}\n",
"\n",
"# Make the API call using the .parse() method and structured response\n",
"completion = client.beta.chat.completions.parse(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_message}\n",
" ],\n",
" response_format=RecipeCreation,\n",
")\n",
"\n",
"# Extract the parsed message\n",
"recipe_creation = completion.choices[0].message\n",
"\n",
"# If the model refuses to respond, you will get a refusal message\n",
"if recipe_creation.refusal:\n",
" print(recipe_creation.refusal)\n",
"else:\n",
" # Print the parsed output\n",
" print(\"Ingredients:\")\n",
" for ingredient in recipe_creation.parsed.ingredients:\n",
" print(f\"- {ingredient}\")\n",
"\n",
" print(\"\\nSteps:\")\n",
" for step in recipe_creation.parsed.steps:\n",
" print(f\"Step {step.step}: {step.description}\")\n",
"\n",
" print(f\"\\nFinal Dish: {recipe_creation.parsed.final_dish}\")\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "LSfMWjuqv9Du",
"outputId": "54481d7d-9fe7-4d4b-f51e-1c6f4d7d666c"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Ingredients:\n",
"- 1 and 3/4 cups all-purpose flour\n",
"- 3/4 cup unsweetened cocoa powder\n",
"- 2 cups granulated sugar\n",
"- 1 and 1/2 teaspoons baking powder\n",
"- 1 and 1/2 teaspoons baking soda\n",
"- 1 teaspoon salt\n",
"- 2 large eggs\n",
"- 1 cup whole milk\n",
"- 1/2 cup vegetable oil\n",
"- 2 teaspoons vanilla extract\n",
"- 1 cup boiling water\n",
"\n",
"Steps:\n",
"Step Preheat Oven: Preheat your oven to 350°F (175°C). Grease two 9-inch round cake pans and lightly dust them with flour to prevent sticking.\n",
"Step Mix Dry Ingredients: In a large mixing bowl, combine the flour, cocoa powder, sugar, baking powder, baking soda, and salt. Stir together until well blended.\n",
"Step Add Wet Ingredients: Add the eggs, milk, vegetable oil, and vanilla extract to the dry ingredients. Beat the mixture on medium speed for about 2 minutes until smooth and well combined.\n",
"Step Incorporate Boiling Water: Carefully stir in the boiling water. The batter will be quite thin, which is normal.\n",
"Step Pour Batter into Pans: Evenly divide the batter between the prepared cake pans.\n",
"Step Bake the Cakes: Bake in the preheated oven for 30-35 minutes or until a toothpick inserted into the center of the cakes comes out clean.\n",
"Step Let Cakes Cool: Remove the cakes from the oven and allow them to cool in the pans for about 10 minutes before transferring them to a wire rack to cool completely.\n",
"Step Frosting and Serving: Once the cakes are completely cooled, you can frost them with your favorite chocolate frosting. Serve and enjoy your homemade chocolate cake!\n",
"\n",
"Final Dish: Chocolate Cake\n"
]
}
]
},
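{
"cell_type": "markdown",
"source": [
"Behind the scenes, `.parse()` converts the Pydantic model into a JSON schema and validates the model's reply against it. As a minimal sketch, you can validate a raw JSON payload against the same `RecipeCreation` model locally, for example when replaying a cached response (this assumes Pydantic v2's `model_validate_json`; on Pydantic v1 the equivalent is `parse_raw`):\n",
"\n",
"```python\n",
"raw = '{\"ingredients\": [\"flour\"], \"steps\": [{\"step\": \"Mix\", \"description\": \"Combine the ingredients.\"}], \"final_dish\": \"Cake\"}'\n",
"recipe = RecipeCreation.model_validate_json(raw)\n",
"print(recipe.final_dish)  # Cake\n",
"```"
],
"metadata": {}
},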
{
"cell_type": "markdown",
"source": [
"### Advanced Travel Itinerary with Conversational Guidance\n",
"\n",
"This code example showcases how to generate a personalized travel itinerary using OpenAI's capabilities. By interacting with the model, users can create detailed day-by-day travel plans for their destination of choice, incorporating their interests and preferences. The structured output includes daily activities and a summary, making it an efficient tool for planning trips tailored to individual preferences."
],
"metadata": {
"id": "SYGx3WvVAFsY"
}
},
{
"cell_type": "code",
"source": [
"# @title Advanced Travel Itinerary using Conversational Guidance and Structured Output\n",
"\n",
"# Define the Pydantic models for structured output\n",
"class Day(BaseModel):\n",
" day: str\n",
" activities: List[str]\n",
"\n",
"class TravelItinerary(BaseModel):\n",
" days: List[Day]\n",
" summary: str\n",
"\n",
"# Define parameters for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"destination = \"Paris\" # @param {type:\"string\"}\n",
"duration = 3 # @param {type:\"integer\"}\n",
"interest = \"historical landmarks, local cuisine, and cultural experiences\" # @param {type:\"string\"}\n",
"\n",
"# Construct an advanced natural language prompt\n",
"system_message = f\"\"\"You are a seasoned travel guide with years of experience curating detailed itineraries for travelers.\n",
"Your goal is to help the user plan a well-rounded {duration}-day trip to {destination}.\n",
"The user is particularly interested in {interest}.\n",
"Please provide a day-by-day itinerary, ensuring that each day balances sightseeing, relaxation, and local experiences.\n",
"The itinerary should also include a brief summary at the end that encapsulates the overall experience.\"\"\"\n",
"user_message = f\"Can you help me plan a {duration}-day trip to {destination}?\"\n",
"\n",
"# Make the API call using the .parse() method and structured response\n",
"completion = client.beta.chat.completions.parse(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_message}\n",
" ],\n",
" response_format=TravelItinerary,\n",
")\n",
"\n",
"# Extract the parsed message\n",
"travel_itinerary = completion.choices[0].message\n",
"\n",
"# If the model refuses to respond, you will get a refusal message\n",
"if travel_itinerary.refusal:\n",
" print(travel_itinerary.refusal)\n",
"else:\n",
" # Print the parsed output in an advanced format\n",
" print(f\"Travel Itinerary for a {duration}-day trip to {destination}:\\n\")\n",
" for day in travel_itinerary.parsed.days:\n",
" print(f\"{day.day}:\")\n",
" for activity in day.activities:\n",
" print(f\"- {activity}\")\n",
"\n",
" print(f\"\\nSummary: {travel_itinerary.parsed.summary}\")\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "_39WG0C3wA4g",
"outputId": "58e7b11b-8452-4e1f-bb4f-8d7e3e4e46e4"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Travel Itinerary for a 3-day trip to Paris:\n",
"\n",
"Day 1: Historical Landmarks and Traditional French Cuisine:\n",
"- Morning: Visit the iconic Eiffel Tower and take the elevator to the top for stunning views of Paris.\n",
"- Lunch: Enjoy a classic French lunch at a nearby restaurant such as La Fontaine de Mars.\n",
"- Afternoon: Walk along the Seine to the historic Notre-Dame Cathedral. After your visit, explore Île de la Cité for a taste of medieval Paris.\n",
"- Evening: Dine at a traditional French bistro, like Le Procope, one of the oldest in the city.\n",
"Day 2: Art, Culture, and Montmartre Charm:\n",
"- Morning: Head to the Louvre Museum to explore its world-famous art collections. Focus on key pieces like the Mona Lisa and the Venus de Milo. Arrive early to beat the crowds.\n",
"- Lunch: Have lunch at Café Marly or any nearby restaurant offering a view of the Louvre's glass pyramid.\n",
"- Afternoon: Visit the Musée d’Orsay, located in a former railway station, to admire its vast collection of Impressionist and Post-Impressionist masterpieces.\n",
"- Evening: Take an evening stroll in Montmartre, visit the Sacré-Cœur Basilica for panoramic views of the city, and then enjoy dinner at a local Montmartre restaurant such as La Bonne Franquette.\n",
"Day 3: Local Life and Hidden Gems:\n",
"- Morning: Begin with a visit to the vibrant Le Marais district, exploring its charming streets and chic boutiques.\n",
"- Lunch: Taste the world-famous falafel at L'As du Fallafel or dine in one of the cozy local bistros.\n",
"- Afternoon: Explore the artists' enclave of Saint-Germain-des-Prés, where you can stop by famous cafés such as Café de Flore and Les Deux Magots. Visit the nearby Luxembourg Gardens for a relaxing walk.\n",
"- Evening: Dine at a Michelin-starred restaurant such as Le Cinq for a culinary experience.\n",
"\n",
"Summary: Over three days in Paris, you'll experience a blend of the city's rich history, artistic masterpieces, and vibrant local life. From iconic landmarks like the Eiffel Tower and Notre-Dame to the charming neighborhoods of Montmartre and Le Marais, your itinerary is filled with opportunities to dive into Parisian culture and savor its renowned cuisine. Each day balances sightseeing with time to unwind and explore, ensuring you capture the essence of both historic and contemporary Paris.\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"### Structured Data Extraction from Research Papers\n",
"\n",
"This example demonstrates how to transform unstructured text from research papers into a structured format using OpenAI's capabilities. By dynamically generating a schema with Pydantic, the system processes research paper data and extracts information such as the title, authors, abstract, and keywords. The structured output ensures that critical information is organized and easily accessible, making it an efficient tool for handling academic or technical documents."
],
"metadata": {
"id": "78Hr6qVZ_-RD"
}
},
{
"cell_type": "code",
"source": [
"from pydantic import BaseModel\n",
"import openai\n",
"import json\n",
"\n",
"# @title Define the Pydantic model for structured output\n",
"class ResearchPaperExtraction(BaseModel):\n",
" title: str\n",
" authors: list[str]\n",
" abstract: str\n",
" keywords: list[str]\n",
"\n",
"# Define parameters using Colab form (# @param) annotations for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"unstructured_text = \"\"\"This research paper focuses on the advancements in AI and its applications. The main contributors are John Doe, Jane Smith, and Alan Turing. It explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance. Keywords include AI, machine learning, deep learning, healthcare, and finance.\"\"\" # @param {type:\"string\"}\n",
"system_message = \"You are an expert at structured data extraction. You will be given unstructured text from a research paper and should convert it into the given structure.\" # @param {type:\"string\"}\n",
"response_format_name = \"ResearchPaperExtraction\" # @param {type:\"string\"}\n",
"\n",
"# Generate the schema from the Pydantic model\n",
"schema = ResearchPaperExtraction.schema()\n",
"\n",
"# Explicitly set `additionalProperties` to False\n",
"schema['additionalProperties'] = False\n",
"\n",
"# Create the response format using the dynamically generated schema\n",
"response_format = {\n",
" \"type\": \"json_schema\",\n",
" \"json_schema\": {\n",
" \"name\": response_format_name,\n",
" \"schema\": schema,\n",
" \"strict\": True\n",
" }\n",
"}\n",
"\n",
"# Make the API call using the OpenAI structure\n",
"response = openai.chat.completions.create(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": unstructured_text}\n",
" ],\n",
" response_format=response_format # Use the dynamically generated schema with `additionalProperties: False`\n",
")\n",
"\n",
"# Extract the response content using dot notation\n",
"structured_output = response.choices[0].message.content\n",
"\n",
"# Parse the structured output into the Pydantic model\n",
"research_paper = ResearchPaperExtraction.parse_raw(structured_output)\n",
"\n",
"# Print the structured output\n",
"print(research_paper)\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "vxp6D_b9xlj-",
"outputId": "d8632b65-4302-4601-8552-04ce57702524"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"title='Advancements in AI and Its Applications' authors=['John Doe', 'Jane Smith', 'Alan Turing'] abstract='The paper explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance.' keywords=['AI', 'machine learning', 'deep learning', 'healthcare', 'finance']\n"
]
}
]
},
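{
"cell_type": "markdown",
"source": [
"Note that `.schema()`, `.parse_raw()`, and `.dict()` are the Pydantic v1 API; they still work on Pydantic v2 but emit deprecation warnings. If your environment runs Pydantic v2, the equivalents are:\n",
"\n",
"```python\n",
"schema = ResearchPaperExtraction.model_json_schema()  # replaces .schema()\n",
"paper = ResearchPaperExtraction.model_validate_json(structured_output)  # replaces .parse_raw()\n",
"data = paper.model_dump()  # replaces .dict()\n",
"```"
],
"metadata": {}
},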
{
"cell_type": "markdown",
"source": [
"### Advanced Research Paper Extraction with Multi-Hop, Multi-Agent, and Dynamic Schema Generation\n",
"\n",
"This example extends structured data extraction to batch processing. Multiple research papers are submitted concurrently (the \"multi-agent\" aspect here is a pool of worker threads, each issuing its own API request), and every response is validated against a JSON schema generated dynamically from the Pydantic model, ensuring the output adheres to the expected structure. Each paper is parsed into a structured record containing the title, authors, abstract, and keywords, yielding a scalable, automated pipeline for extracting key information from academic texts."
],
"metadata": {
"id": "Y2RrCbhV_woq"
}
},
{
"cell_type": "code",
"source": [
"from pydantic import BaseModel\n",
"import openai\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"import json\n",
"\n",
"# @title Advanced Research Paper Extraction with Multi-Hop, Multi-Agent, and Dynamic Schema Generation\n",
"\n",
"# Define the Pydantic model for structured output\n",
"class ResearchPaperExtraction(BaseModel):\n",
" title: str\n",
" authors: list[str]\n",
" abstract: str\n",
" keywords: list[str]\n",
"\n",
"# Define parameters using Colab form (# @param) annotations for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"unstructured_text = \"This research paper focuses on the advancements in AI and its applications. The main contributors are John Doe, Jane Smith, and Alan Turing. It explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance. Keywords include AI, machine learning, deep learning, healthcare, and finance.\" # @param {type:\"string\"}\n",
"system_message = \"You are an expert at structured data extraction. You will be given unstructured text from a research paper and should convert it into the given structure.\" # @param {type:\"string\"}\n",
"response_format_name = \"ResearchPaperExtraction\" # @param {type:\"string\"}\n",
"max_concurrent_requests = 5 # @param {type:\"integer\"}\n",
"\n",
"# Dynamic schema generation based on the Pydantic model\n",
"schema = ResearchPaperExtraction.schema()\n",
"schema['additionalProperties'] = False\n",
"\n",
"# Create the response format using the dynamically generated schema\n",
"response_format = {\n",
" \"type\": \"json_schema\",\n",
" \"json_schema\": {\n",
" \"name\": response_format_name,\n",
" \"schema\": schema,\n",
" \"strict\": True\n",
" }\n",
"}\n",
"\n",
"# Define a list of research papers for batch processing\n",
"research_papers = [\n",
" unstructured_text,\n",
" \"This paper explores quantum computing and its impact on cryptography. Authors: Alice, Bob, Charlie.\",\n",
" \"The study focuses on climate change and its global effects. Contributors: Dr. Green, Dr. Blue.\"\n",
"] # You can add more papers to this list for batch processing\n",
"\n",
"# Function to process a single paper\n",
"def process_paper(paper_text):\n",
" return openai.chat.completions.create(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": paper_text}\n",
" ],\n",
" response_format=response_format # Use the dynamically generated schema\n",
" )\n",
"\n",
"# Batch processing multiple papers using multi-agent system (concurrently)\n",
"def batch_process_papers(papers, max_concurrent_requests):\n",
" results = []\n",
" with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:\n",
" futures = [executor.submit(process_paper, paper) for paper in papers]\n",
" for future in futures:\n",
" results.append(future.result())\n",
" return results\n",
"\n",
"# Perform batch processing on the list of research papers\n",
"responses = batch_process_papers(research_papers, max_concurrent_requests)\n",
"\n",
"# Process and display results\n",
"for response in responses:\n",
" structured_output = response.choices[0].message.content\n",
" research_paper = ResearchPaperExtraction.parse_raw(structured_output)\n",
" print(json.dumps(research_paper.dict(), indent=4)) # Print structured data in a readable format\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ED6W-sjM1Q4v",
"outputId": "ec243204-e9b4-41e2-b7ec-af10d122f394"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"{\n",
" \"title\": \"Advancements in AI and its Applications\",\n",
" \"authors\": [\n",
" \"John Doe\",\n",
" \"Jane Smith\",\n",
" \"Alan Turing\"\n",
" ],\n",
" \"abstract\": \"This research paper explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance.\",\n",
" \"keywords\": [\n",
" \"AI\",\n",
" \"machine learning\",\n",
" \"deep learning\",\n",
" \"healthcare\",\n",
" \"finance\"\n",
" ]\n",
"}\n",
"{\n",
" \"title\": \"The Impact of Quantum Computing on Cryptography\",\n",
" \"authors\": [\n",
" \"Alice\",\n",
" \"Bob\",\n",
" \"Charlie\"\n",
" ],\n",
" \"abstract\": \"This paper explores the transformative effects of quantum computing technology on the field of cryptography. It examines how quantum computing challenges current cryptographic protocols and discusses potential strategies to develop quantum-resistant encryption methods.\",\n",
" \"keywords\": [\n",
" \"Quantum Computing\",\n",
" \"Cryptography\",\n",
" \"Quantum-Resistant Encryption\",\n",
" \"Computational Security\"\n",
" ]\n",
"}\n",
"{\n",
" \"title\": \"The Global Effects of Climate Change\",\n",
" \"authors\": [\n",
" \"Dr. Green\",\n",
" \"Dr. Blue\"\n",
" ],\n",
" \"abstract\": \"The study investigates the impact of climate change on a global scale, examining environmental, economic, and social effects. It addresses the urgency of implementing solutions to mitigate these impacts and adapt to new challenges.\",\n",
" \"keywords\": [\n",
" \"climate change\",\n",
" \"global effects\",\n",
" \"environment\",\n",
" \"economics\",\n",
" \"societal impact\",\n",
" \"mitigation\",\n",
" \"adaptation\"\n",
" ]\n",
"}\n"
]
}
]
},
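{
"cell_type": "markdown",
"source": [
"Because `futures` is iterated in submission order, the results stay aligned with the input list. For larger batches where you want to handle each result as soon as it arrives, `concurrent.futures.as_completed` is an alternative (results then come back in completion order, not input order):\n",
"\n",
"```python\n",
"from concurrent.futures import ThreadPoolExecutor, as_completed\n",
"\n",
"with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:\n",
"    futures = [executor.submit(process_paper, paper) for paper in research_papers]\n",
"    for future in as_completed(futures):\n",
"        print(future.result().choices[0].message.content)\n",
"```"
],
"metadata": {}
},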
{
"cell_type": "markdown",
"source": [
"### Advanced Financial Analysis with Algorithm Code Generation\n",
"\n",
"This code demonstrates a financial analysis pipeline built for concurrent processing of stock symbols (the example runs a single user-defined symbol, but the batch machinery accepts a list). It combines simulated risk analysis and quantitative strategy selection with OpenAI-powered algorithm generation to produce trading algorithms in a chosen programming language, and assembles the results into structured financial reports that include trading recommendations and the generated code."
],
"metadata": {
"id": "3dok2mrS_qzh"
}
},
{
"cell_type": "code",
"source": [
"from pydantic import BaseModel\n",
"import openai\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"import json\n",
"import random\n",
"\n",
"# @title Advanced Financial Analysis with Algorithm Code Generation and Concurrent Requests\n",
"\n",
"# Define the Pydantic model for structured output\n",
"class TradingAlgorithm(BaseModel):\n",
" code: str\n",
" description: str\n",
"\n",
"class FinancialReport(BaseModel):\n",
" stock_name: str\n",
" average_price: float\n",
" risk_level: str\n",
" recommendations: list[str]\n",
" quant_analysis: str\n",
" strategy: str\n",
" trading_algorithm: TradingAlgorithm\n",
"\n",
"# Define parameters using Colab form (# @param) annotations for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"stock_symbol = \"AAPL\" # @param {type:\"string\"}\n",
"time_period = \"1Y\" # @param [\"1M\", \"3M\", \"6M\", \"1Y\"] {type:\"string\"}\n",
"max_concurrent_requests = 3 # @param {type:\"integer\"}\n",
"advanced_strategy = \"Momentum-based trading with moving averages and RSI\" # @param [\"Momentum-based trading with moving averages and RSI\", \"Mean reversion\", \"Statistical arbitrage\", \"High-frequency trading\", \"Pairs trading\", \"Value investing\", \"Technical analysis with Bollinger Bands\", \"Event-driven trading\", \"Trend following\", \"Options pricing models\", \"Market making\", \"Swing trading\", \"Algorithmic trading with neural networks\", \"Sentiment analysis-based trading\", \"Arbitrage in futures and options markets\", \"Factor investing\", \"Dividend growth investing\"] {type:\"string\"}\n",
"quantitative_factors = [\"Moving Averages\", \"RSI\", \"Volatility Index (VIX)\", \"Beta Coefficient\", \"Bollinger Bands\", \"MACD\", \"Fibonacci Retracement\", \"Volume\", \"Sharpe Ratio\"] # @param {type:\"raw\"}\n",
"algorithm_language = \"JavaScript\" # @param [\"Python\", \"Java\", \"C++\", \"JavaScript\", \"R\", \"Matlab\", \"Scala\", \"Go\", \"Rust\", \"Julia\"] {type:\"string\"}\n",
"\n",
"# Simulated API call to get stock data\n",
"def get_stock_data(stock_symbol, time_period):\n",
" print(f\"Retrieving data for {stock_symbol} over {time_period}\")\n",
" return {\"average_price\": random.uniform(100, 200)} # Simulated data\n",
"\n",
"# Simulated API call for risk analysis\n",
"def perform_risk_analysis(stock_symbol, average_price):\n",
" print(f\"Performing risk analysis for {stock_symbol} with average price {average_price}\")\n",
" risk_level = random.choice([\"Low\", \"Medium\", \"High\"]) # Simulated risk level\n",
" return {\"risk_level\": risk_level}\n",
"\n",
"# Simulated API call for quantitative analysis\n",
"def perform_quant_analysis(stock_symbol, advanced_strategy, quantitative_factors):\n",
" print(f\"Performing quant analysis for {stock_symbol} using strategy: {advanced_strategy}\")\n",
" quant_analysis = f\"Applied {advanced_strategy} considering {', '.join(quantitative_factors)}.\"\n",
" return quant_analysis\n",
"\n",
"# OpenAI API call to generate trading algorithm code in the selected language\n",
"def generate_trading_algorithm(stock_symbol, advanced_strategy, algorithm_language):\n",
" # Construct the prompt for OpenAI\n",
" system_message = f\"You are an expert in algorithmic trading and {algorithm_language} development.\"\n",
" user_message = f\"Generate a {algorithm_language} algorithm for {advanced_strategy} on {stock_symbol}. The algorithm should consider technical indicators and risk management strategies.\"\n",
"\n",
" response = openai.chat.completions.create(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_message}\n",
" ]\n",
" )\n",
"\n",
" structured_output = response.choices[0].message.content.strip()\n",
"\n",
"    # Wrap the generated code in the structured TradingAlgorithm model\n",
" return TradingAlgorithm(\n",
" code=structured_output,\n",
" description=f\"{algorithm_language} algorithm for {advanced_strategy} applied to {stock_symbol}.\"\n",
" )\n",
"\n",
"# Simulated API call for generating recommendations\n",
"def generate_recommendations(stock_symbol, risk_level, advanced_strategy):\n",
" print(f\"Generating recommendations for {stock_symbol} with risk level {risk_level} and strategy {advanced_strategy}\")\n",
" recommendations = {\n",
" \"Low\": [\"Buy more\", \"Hold\", \"Increase exposure to long-term call options\"],\n",
" \"Medium\": [\"Hold\", \"Review quarterly\", \"Use protective puts to hedge\"],\n",
" \"High\": [\"Sell\", \"Reduce exposure\", \"Consider short positions or covered calls\"]\n",
" }\n",
" return recommendations[risk_level]\n",
"\n",
"# Define a list of stocks for batch processing\n",
"stock_symbols = [stock_symbol] # Process only the user-defined stock symbol\n",
"\n",
"# Function to process a single stock symbol\n",
"def process_stock(stock_symbol):\n",
" stock_data = get_stock_data(stock_symbol, time_period)\n",
" risk_analysis = perform_risk_analysis(stock_symbol, stock_data[\"average_price\"])\n",
" quant_analysis = perform_quant_analysis(stock_symbol, advanced_strategy, quantitative_factors)\n",
" recommendations = generate_recommendations(stock_symbol, risk_analysis[\"risk_level\"], advanced_strategy)\n",
"\n",
" # Pass the `algorithm_language` parameter to `generate_trading_algorithm`\n",
" trading_algorithm = generate_trading_algorithm(stock_symbol, advanced_strategy, algorithm_language)\n",
"\n",
"    # Assemble the structured financial report\n",
" financial_report = FinancialReport(\n",
" stock_name=stock_symbol,\n",
" average_price=stock_data[\"average_price\"],\n",
" risk_level=risk_analysis[\"risk_level\"],\n",
" recommendations=recommendations,\n",
" quant_analysis=quant_analysis,\n",
" strategy=advanced_strategy,\n",
" trading_algorithm=trading_algorithm\n",
" )\n",
"\n",
" return financial_report\n",
"\n",
"\n",
"# Batch processing multiple stock symbols using concurrent API calls\n",
"def batch_process_stocks(stock_symbols, max_concurrent_requests):\n",
" results = []\n",
" with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:\n",
" futures = [executor.submit(process_stock, stock) for stock in stock_symbols]\n",
" for future in futures:\n",
" results.append(future.result())\n",
" return results\n",
"\n",
"# Perform batch processing on the list of stock symbols\n",
"financial_reports = batch_process_stocks(stock_symbols, max_concurrent_requests)\n",
"\n",
"# Process and display results\n",
"for report in financial_reports:\n",
" print(json.dumps(report.dict(), indent=4)) # Print structured data in a readable format\n",
"    print(f\"\\nGenerated {algorithm_language} Code:\\n{report.trading_algorithm.code}\\n\") # Print the generated algorithm code\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "3TKbihTn2qH3",
"outputId": "b70b1030-820f-4bfe-a80c-27cb6f7e4284"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Retrieving data for AAPL over 1Y\n",
"Performing risk analysis for AAPL with average price 117.49720081250126\n",
"Performing quant analysis for AAPL using strategy: Momentum-based trading with moving averages and RSI\n",
"Generating recommendations for AAPL with risk level Low and strategy Momentum-based trading with moving averages and RSI\n",
"{\n",
" \"stock_name\": \"AAPL\",\n",
" \"average_price\": 117.49720081250126,\n",
" \"risk_level\": \"Low\",\n",
" \"recommendations\": [\n",
" \"Buy more\",\n",
" \"Hold\",\n",
" \"Increase exposure to long-term call options\"\n",
" ],\n",
" \"quant_analysis\": \"Applied Momentum-based trading with moving averages and RSI considering Moving Averages, RSI, Volatility Index (VIX), Beta Coefficient, Bollinger Bands, MACD, Fibonacci Retracement, Volume, Sharpe Ratio.\",\n",
" \"strategy\": \"Momentum-based trading with moving averages and RSI\",\n",
" \"trading_algorithm\": {\n",
"    \"code\": \"Creating a JavaScript algorithm for momentum-based trading involves calculating technical indicators such as moving averages and the Relative Strength Index (RSI), as well as implementing risk management strategies. Below is a simplified version of such an algorithm, which assumes that data is fetched from a financial API. This demonstration does not include API specific details; you'd need to integrate it with your data source.\\n\\n```javascript\\n// Import a technical indicators library\\nconst tulipIndicators = require('tulip-indicators');\\n\\n// Define the configuration for the trading algorithm\\nconst CONFIG = {\\n shortTermMA: 10, // e.g., 10-day moving average\\n longTermMA: 50, // e.g., 50-day moving average\\n rsiPeriod: 14, // Period for RSI calculation\\n rsiOverbought: 70, // Threshold for overbought RSI\\n rsiOversold: 30, // Threshold for oversold RSI\\n stopLossPercentage: 0.03, // 3% stop-loss\\n takeProfitPercentage: 0.05, // 5% take-profit\\n};\\n\\n// Define a function to calculate moving averages and RSI\\nfunction calculateIndicators(data) {\\n const closePrices = data.map(candle => candle.close);\\n \\n const shortTermMA = tulipIndicators.ma({\\n close: closePrices,\\n period: CONFIG.shortTermMA\\n }).result[0];\\n\\n const longTermMA = tulipIndicators.ma({\\n close: closePrices,\\n period: CONFIG.longTermMA\\n }).result[0];\\n\\n const rsi = tulipIndicators.rsi({\\n close: closePrices,\\n period: CONFIG.rsiPeriod\\n }).result[0];\\n\\n return { shortTermMA, longTermMA, rsi };\\n}\\n\\n// Define a function to make trading decisions\\nfunction tradeDecision(indicators, currentPrice, position) {\\n const { shortTermMA, longTermMA, rsi } = indicators;\\n let action = 'HOLD';\\n \\n if (shortTermMA > longTermMA && rsi > CONFIG.rsiOversold) {\\n action = 'BUY';\\n } else if (shortTermMA < longTermMA && rsi < CONFIG.rsiOverbought) {\\n action = 'SELL';\\n }\\n \\n // Implement risk management\\n if (position) {\\n const priceChange = (currentPrice - position.entryPrice) / position.entryPrice;\\n if (priceChange <= -CONFIG.stopLossPercentage) {\\n action = 'SELL'; // Trigger stop-loss\\n } else if (priceChange >= CONFIG.takeProfitPercentage) {\\n action = 'SELL'; // Trigger take-profit\\n }\\n }\\n\\n return action;\\n}\\n\\n// Example usage\\nasync function executeTrading() {\\n // Here you would typically fetch your data from an API\\n const historicalData = await fetchMarketData('AAPL');\\n \\n for (let i = CONFIG.longTermMA; i < historicalData.length; i++) {\\n const sliceOfData = historicalData.slice(i - CONFIG.longTermMA, i);\\n const indicators = calculateIndicators(sliceOfData);\\n const currentPrice = historicalData[i].close;\\n const action = tradeDecision(indicators, currentPrice, currentPosition);\\n\\n if (action === 'BUY') {\\n console.log(`Buy at ${currentPrice}`);\\n // Assume a buy\\n currentPosition = { entryPrice: currentPrice };\\n } else if (action === 'SELL') {\\n console.log(`Sell at ${currentPrice}`);\\n // Assume a sell and close position\\n currentPosition = null;\\n }\\n }\\n}\\n\\n// Placeholder for making API requests\\nasync function fetchMarketData(ticker) {\\n // This function should fetch and return historical market data\\n // In practice, provide implementation to connect to an actual data provider\\n return [];\\n}\\n\\n// Keep track of the current position\\nlet currentPosition = null;\\n\\n// Execute the trading algorithm\\nexecuteTrading();\\n```\\n\\n**Notes:**\\n1. **Data Source**: You'll need to replace the `fetchMarketData` function with actual code to get market data from a reliable source like Alpha Vantage, Yahoo Finance, or directly from a broker's API.\\n2. **Technical Indicators Library**: I've used Tulip Indicators. You could use another library like `technicalindicators` if preferred.\\n3. **Risk Management**: This example uses simple percentage-based stop-loss and take-profit. You can refine this using more advanced mechanisms based on volatility, ATR, or other metrics.\\n4. **Execution**: In real-world scenarios, the buy/sell actions would interface with a trading API to place actual market orders.\\n5. **Backtesting and Paper Trading**: Before deploying this strategy with real capital, thoroughly backtest it and run in a simulated environment to evaluate performance.\",
" \"description\": \"JavaScript algorithm for Momentum-based trading with moving averages and RSI applied to AAPL.\"\n",
" }\n",
"}\n",
"\n",
"Generated Python Code:\n",
"Creating a JavaScript algorithm for momentum-based trading involves calculating technical indicators such as moving averages and the Relative Strength Index (RSI), as well as implementing risk management strategies. Below is a simplified version of such an algorithm, which assumes that data is fetched from a financial API. This demonstration does not include API specific details; you'd need to integrate it with your data source.\n",
"\n",
"```javascript\n",
"// Import a technical indicators library\n",
"const tulipIndicators = require('tulip-indicators');\n",
"\n",
"// Define the configuration for the trading algorithm\n",
"const CONFIG = {\n",
" shortTermMA: 10, // e.g., 10-day moving average\n",
" longTermMA: 50, // e.g., 50-day moving average\n",
" rsiPeriod: 14, // Period for RSI calculation\n",
" rsiOverbought: 70, // Threshold for overbought RSI\n",
" rsiOversold: 30, // Threshold for oversold RSI\n",
" stopLossPercentage: 0.03, // 3% stop-loss\n",
" takeProfitPercentage: 0.05, // 5% take-profit\n",
"};\n",
"\n",
"// Define a function to calculate moving averages and RSI\n",
"function calculateIndicators(data) {\n",
" const closePrices = data.map(candle => candle.close);\n",
" \n",
" const shortTermMA = tulipIndicators.ma({\n",
" close: closePrices,\n",
" period: CONFIG.shortTermMA\n",
" }).result[0];\n",
"\n",
" const longTermMA = tulipIndicators.ma({\n",
" close: closePrices,\n",
" period: CONFIG.longTermMA\n",
" }).result[0];\n",
"\n",
" const rsi = tulipIndicators.rsi({\n",
" close: closePrices,\n",
" period: CONFIG.rsiPeriod\n",
" }).result[0];\n",
"\n",
" return { shortTermMA, longTermMA, rsi };\n",
"}\n",
"\n",
"// Define a function to make trading decisions\n",
"function tradeDecision(indicators, currentPrice, position) {\n",
" const { shortTermMA, longTermMA, rsi } = indicators;\n",
" let action = 'HOLD';\n",
" \n",
" if (shortTermMA > longTermMA && rsi < CONFIG.rsiOverbought) {\n",
" action = 'BUY'; // bullish MA crossover, RSI not yet overbought\n",
" } else if (shortTermMA < longTermMA && rsi > CONFIG.rsiOversold) {\n",
" action = 'SELL'; // bearish MA crossover, RSI not yet oversold\n",
" }\n",
" \n",
" // Implement risk management\n",
" if (position) {\n",
" const priceChange = (currentPrice - position.entryPrice) / position.entryPrice;\n",
" if (priceChange <= -CONFIG.stopLossPercentage) {\n",
" action = 'SELL'; // Trigger stop-loss\n",
" } else if (priceChange >= CONFIG.takeProfitPercentage) {\n",
" action = 'SELL'; // Trigger take-profit\n",
" }\n",
" }\n",
"\n",
" return action;\n",
"}\n",
"\n",
"// Example usage\n",
"async function executeTrading() {\n",
" // Here you would typically fetch your data from an API\n",
" const historicalData = await fetchMarketData('AAPL');\n",
" \n",
" for (let i = CONFIG.longTermMA; i < historicalData.length; i++) {\n",
" const sliceOfData = historicalData.slice(i - CONFIG.longTermMA, i);\n",
" const indicators = calculateIndicators(sliceOfData);\n",
" const currentPrice = historicalData[i].close;\n",
" const action = tradeDecision(indicators, currentPrice, currentPosition);\n",
"\n",
" if (action === 'BUY') {\n",
" console.log(`Buy at ${currentPrice}`);\n",
" // Assume a buy\n",
" currentPosition = { entryPrice: currentPrice };\n",
" } else if (action === 'SELL') {\n",
" console.log(`Sell at ${currentPrice}`);\n",
" // Assume a sell and close position\n",
" currentPosition = null;\n",
" }\n",
" }\n",
"}\n",
"\n",
"// Placeholder for making API requests\n",
"async function fetchMarketData(ticker) {\n",
" // This function should fetch and return historical market data\n",
" // In practice, provide implementation to connect to an actual data provider\n",
" return [];\n",
"}\n",
"\n",
"// Keep track of the current position\n",
"let currentPosition = null;\n",
"\n",
"// Execute the trading algorithm\n",
"executeTrading();\n",
"```\n",
"\n",
"**Notes:**\n",
"1. **Data Source**: You'll need to replace the `fetchMarketData` function with actual code to get market data from a reliable source like Alpha Vantage, Yahoo Finance, or directly from a broker's API.\n",
"2. **Technical Indicators Library**: I've used Tulip Indicators. You could use another library like `technicalindicators` if preferred.\n",
"3. **Risk Management**: This example uses simple percentage-based stop-loss and take-profit. You can refine this using more advanced mechanisms based on volatility, ATR, or other metrics.\n",
"4. **Execution**: In real-world scenarios, the buy/sell actions would interface with a trading API to place actual market orders.\n",
"5. **Backtesting and Paper Trading**: Before deploying this strategy with real capital, thoroughly backtest it and run in a simulated environment to evaluate performance.\n",
"\n"
]
}
]
},
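{
"cell_type": "markdown",
"source": [
"Note 3 in the generated output above suggests replacing the fixed-percentage stops with volatility-based sizing such as ATR. A minimal Python sketch of that idea (the 14-period window and the 2x multiplier are illustrative assumptions, not values taken from the strategy above):\n",
"\n",
"```python\n",
"# Hypothetical ATR-based stop distance (illustrative parameters only)\n",
"def true_range(high, low, prev_close):\n",
"    # True range = widest of the three classic spreads\n",
"    return max(high - low, abs(high - prev_close), abs(low - prev_close))\n",
"\n",
"def atr_stop(candles, period=14, multiplier=2.0):\n",
"    # candles: list of dicts with 'high', 'low', 'close' keys\n",
"    trs = [true_range(c['high'], c['low'], p['close'])\n",
"           for p, c in zip(candles, candles[1:])]\n",
"    atr = sum(trs[-period:]) / min(period, len(trs))\n",
"    # Place the stop `multiplier` ATRs below the latest close\n",
"    return candles[-1]['close'] - multiplier * atr\n",
"```\n",
"\n",
"A larger ATR widens the stop automatically in volatile markets, where a fixed 3% stop would be hit by ordinary noise."
],
"metadata": {}
},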
{
"cell_type": "markdown",
"source": [
"### Advanced Medical Diagnosis with AI-Powered Simulated Tools\n",
"\n",
"This example demonstrates an AI-driven medical diagnosis system that processes patient data, including symptoms and medical history, to generate structured diagnosis reports. The system uses concurrent requests to handle multiple patients simultaneously, providing detailed insights into probable diseases, recommended tests, treatment plans, and emergency levels. The use of LLM-based simulated tools supports personalized, dynamic diagnostics, making this a practical pattern for healthcare prototypes."
],
"metadata": {
"id": "b6aLdY2f_mud"
}
},
{
"cell_type": "code",
"source": [
"from pydantic import BaseModel\n",
"import openai\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"import json\n",
"import random\n",
"\n",
"# @title Advanced Medical Diagnosis with LLM-Based Simulated Tools and Concurrent Requests\n",
"\n",
"# Define the Pydantic model for structured output\n",
"class DiagnosisReport(BaseModel):\n",
" disease: str\n",
" probability: float\n",
" recommended_tests: list[str]\n",
" treatment_plan: str\n",
" risk_factors: list[str]\n",
" follow_up: str\n",
" emergency_level: str\n",
"\n",
"# Define parameters using #param annotations for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"symptoms = [\"fever\", \"cough\", \"shortness of breath\"] # @param {type:\"raw\"}\n",
"medical_history = \"Patient has a history of asthma.\" # @param {type:\"string\"}\n",
"age = 45 # @param {type:\"integer\"}\n",
"gender = \"Male\" # @param [\"Male\", \"Female\", \"Other\"] {type:\"string\"}\n",
"severity_level = \"Moderate\" # @param [\"Mild\", \"Moderate\", \"Severe\"] {type:\"string\"}\n",
"recent_travel = True # @param {type:\"boolean\"}\n",
"smoker = False # @param {type:\"boolean\"}\n",
"max_concurrent_requests = 3 # @param {type:\"integer\"}\n",
"\n",
"# Simulated API call to generate medical diagnosis\n",
"def perform_diagnosis(symptoms, medical_history, age, gender, severity_level, recent_travel, smoker):\n",
" # Construct the prompt for OpenAI\n",
" system_message = \"You are an advanced AI system specializing in medical diagnosis.\"\n",
" user_message = f\"\"\"\n",
" Based on the following patient data:\n",
" Symptoms: {symptoms}\n",
" Medical History: {medical_history}\n",
" Age: {age}\n",
" Gender: {gender}\n",
" Severity Level: {severity_level}\n",
" Recent Travel: {recent_travel}\n",
" Smoker: {smoker}\n",
"\n",
" Generate a diagnosis report, including the probable disease, recommended tests, treatment plan, risk factors, follow-up recommendations, and emergency level. Provide the output in a structured format.\n",
" \"\"\"\n",
"\n",
" response = openai.chat.completions.create(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_message}\n",
" ]\n",
" )\n",
"\n",
" structured_output = response.choices[0].message.content.strip()\n",
"\n",
" # Simulated structured output for the diagnosis (the raw model response above is not used in this demo)\n",
" return DiagnosisReport(\n",
" disease=\"COVID-19\",\n",
" probability=random.uniform(0.7, 0.95),\n",
" recommended_tests=[\"Chest X-Ray\", \"Blood Test\"],\n",
" treatment_plan=\"Rest, fluids, and over-the-counter medication.\",\n",
" risk_factors=[\"Age\", \"History of Asthma\"],\n",
" follow_up=\"In 1 week\",\n",
" emergency_level=random.choice([\"Low\", \"Moderate\", \"High\"])\n",
" )\n",
"\n",
"# Define a list of patients for batch processing\n",
"patients = [ # Example: Processing multiple patients\n",
" {\"symptoms\": symptoms, \"medical_history\": medical_history, \"age\": age, \"gender\": gender, \"severity_level\": severity_level, \"recent_travel\": recent_travel, \"smoker\": smoker}\n",
"]\n",
"\n",
"# Function to process a single patient\n",
"def process_patient(patient):\n",
" return perform_diagnosis(\n",
" patient[\"symptoms\"],\n",
" patient[\"medical_history\"],\n",
" patient[\"age\"],\n",
" patient[\"gender\"],\n",
" patient[\"severity_level\"],\n",
" patient[\"recent_travel\"],\n",
" patient[\"smoker\"]\n",
" )\n",
"\n",
"# Batch processing multiple patients using concurrent API calls\n",
"def batch_process_patients(patients, max_concurrent_requests):\n",
" results = []\n",
" with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:\n",
" futures = [executor.submit(process_patient, patient) for patient in patients]\n",
" for future in futures:\n",
" results.append(future.result())\n",
" return results\n",
"\n",
"# Print message to indicate processing has started\n",
"print(\"Processing patients, please wait...\")\n",
"\n",
"# Perform batch processing on the list of patients\n",
"diagnosis_reports = batch_process_patients(patients, max_concurrent_requests)\n",
"\n",
"# Process and display results\n",
"for report in diagnosis_reports:\n",
" print(json.dumps(report.dict(), indent=4)) # Print structured data in a readable format\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "7PQnQufX53OT",
"outputId": "8e296ce0-42d4-4508-ddfb-d553e8e8f146"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Processing patients, please wait...\n",
"{\n",
" \"disease\": \"COVID-19\",\n",
" \"probability\": 0.7843169886079995,\n",
" \"recommended_tests\": [\n",
" \"Chest X-Ray\",\n",
" \"Blood Test\"\n",
" ],\n",
" \"treatment_plan\": \"Rest, fluids, and over-the-counter medication.\",\n",
" \"risk_factors\": [\n",
" \"Age\",\n",
" \"History of Asthma\"\n",
" ],\n",
" \"follow_up\": \"In 1 week\",\n",
" \"emergency_level\": \"Low\"\n",
"}\n"
]
}
]
},
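{
"cell_type": "markdown",
"source": [
"The cell above calls the chat API but discards the response and returns a hand-simulated `DiagnosisReport`. With the SDK's structured-output support (the same `client.beta.chat.completions.parse` call used in the document-management example later in this notebook), the response could be parsed straight into the Pydantic model instead. A sketch, assuming an initialized `client = openai.OpenAI()`:\n",
"\n",
"```python\n",
"def perform_diagnosis_structured(user_message):\n",
"    # Ask the model to fill the DiagnosisReport schema directly\n",
"    completion = client.beta.chat.completions.parse(\n",
"        model=\"gpt-4o-2024-08-06\",\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": \"You are an advanced AI system specializing in medical diagnosis.\"},\n",
"            {\"role\": \"user\", \"content\": user_message},\n",
"        ],\n",
"        response_format=DiagnosisReport,\n",
"    )\n",
"    # .parsed is already a validated DiagnosisReport instance\n",
"    return completion.choices[0].message.parsed\n",
"```"
],
"metadata": {}
},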
{
"cell_type": "markdown",
"source": [
"### Personalized Education System for Prompt-Based Programming\n",
"\n",
"This example demonstrates an AI-driven personalized education system for teaching prompt-based programming using Jupyter Notebooks. The system dynamically generates lesson content, including explanations, code examples, and quizzes, based on the student's preferences and learning style. It also provides personalized feedback and recommendations, ensuring a tailored learning experience. The notebook uses concurrent processing to handle multiple students efficiently."
],
"metadata": {
"id": "Eu_yCVEI_aEF"
}
},
{
"cell_type": "code",
"source": [
"from pydantic import BaseModel\n",
"import openai\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"import json\n",
"\n",
"# @title Personalized Education System for Prompt-Based Programming in Jupyter Notebooks\n",
"\n",
"# Define the Pydantic models for educational output\n",
"class LessonContent(BaseModel):\n",
" topic: str\n",
" explanation: str\n",
" code_example: str\n",
" quiz_questions: list[str]\n",
" follow_up_exercises: list[str]\n",
"\n",
"class StudentFeedback(BaseModel):\n",
" student_name: str\n",
" progress: str\n",
" personalized_recommendations: list[str]\n",
"\n",
"class EducationReport(BaseModel):\n",
" student_name: str\n",
" lesson_content: LessonContent\n",
" feedback: StudentFeedback\n",
"\n",
"# Define parameters using #param annotations for the API call\n",
"model = \"gpt-4o-2024-08-06\" # @param [\"gpt-4o-2024-08-06\", \"gpt-4-0613\", \"gpt-4-32k-0613\"] {type:\"string\"}\n",
"student_name = \"Alice\" # @param {type:\"string\"}\n",
"topic = \"Introduction to Prompt-Based Programming\" # @param {type:\"string\"}\n",
"difficulty_level = \"Beginner\" # @param [\"Beginner\", \"Intermediate\", \"Advanced\"] {type:\"string\"}\n",
"learning_style = \"Hands-on\" # @param [\"Visual\", \"Auditory\", \"Hands-on\"] {type:\"string\"}\n",
"max_concurrent_requests = 2 # @param {type:\"integer\"}\n",
"\n",
"# Simulated API call to generate lesson content\n",
"def generate_lesson_content(topic, difficulty_level, learning_style):\n",
" # Construct the prompt for OpenAI\n",
" system_message = \"You are an AI tutor specializing in personalized education.\"\n",
" user_message = f\"\"\"\n",
" I need a lesson on the topic: {topic}. The student has a {difficulty_level} level of knowledge and prefers a {learning_style} learning style.\n",
" Please include an explanation, code example, quiz questions, and follow-up exercises in a structured format.\n",
" \"\"\"\n",
"\n",
" response = openai.chat.completions.create(\n",
" model=model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_message}\n",
" ]\n",
" )\n",
"\n",
" structured_output = response.choices[0].message.content.strip()\n",
"\n",
" # Simulated structured output for lesson content (the raw model response above is not used in this demo)\n",
" return LessonContent(\n",
" topic=topic,\n",
" explanation=\"This lesson introduces students to prompt-based programming using a Jupyter notebook...\",\n",
" code_example=\"\"\"# Example: Basic Prompt-Based Programming in Python\n",
"prompt = \"What is your name?\"\n",
"name = input(prompt)\n",
"print(f'Hello, {name}!')\"\"\",\n",
" quiz_questions=[\"What is prompt-based programming?\", \"How can you use inputs in a Jupyter notebook?\"],\n",
" follow_up_exercises=[\"Create a program that takes a user's age as input and calculates their birth year.\"]\n",
" )\n",
"\n",
"# Simulated API call for generating student feedback\n",
"def generate_student_feedback(student_name, progress, personalized_recommendations):\n",
" return StudentFeedback(\n",
" student_name=student_name,\n",
" progress=progress,\n",
" personalized_recommendations=personalized_recommendations\n",
" )\n",
"\n",
"# Define a list of students for batch processing\n",
"students = [{\"student_name\": student_name, \"topic\": topic, \"difficulty_level\": difficulty_level, \"learning_style\": learning_style}]\n",
"\n",
"# Function to process a single student's lesson and feedback\n",
"def process_student(student):\n",
" lesson_content = generate_lesson_content(student[\"topic\"], student[\"difficulty_level\"], student[\"learning_style\"])\n",
" feedback = generate_student_feedback(student[\"student_name\"], \"Making good progress\", [\"Practice more with prompt-based programming exercises.\"])\n",
"\n",
" # Generate a personalized education report\n",
" education_report = EducationReport(\n",
" student_name=student[\"student_name\"],\n",
" lesson_content=lesson_content,\n",
" feedback=feedback\n",
" )\n",
"\n",
" return education_report\n",
"\n",
"# Batch processing multiple students using concurrent API calls\n",
"def batch_process_students(students, max_concurrent_requests):\n",
" results = []\n",
" with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:\n",
" futures = [executor.submit(process_student, student) for student in students]\n",
" for future in futures:\n",
" results.append(future.result())\n",
" return results\n",
"\n",
"# Print message to indicate processing has started\n",
"print(\"Processing students, please wait...\")\n",
"\n",
"# Perform batch processing on the list of students\n",
"education_reports = batch_process_students(students, max_concurrent_requests)\n",
"\n",
"# Process and display results\n",
"for report in education_reports:\n",
" print(json.dumps(report.dict(), indent=4)) # Print structured data in a readable format\n",
" print(f\"\\nPersonalized Feedback for {report.student_name}:\\n{report.feedback.personalized_recommendations}\\n\") # Print personalized feedback\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "m7kXGQ9h70er",
"outputId": "83569a4f-0849-418b-9474-59c814145ba6"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Processing students, please wait...\n",
"{\n",
" \"student_name\": \"Alice\",\n",
" \"lesson_content\": {\n",
" \"topic\": \"Introduction to Prompt-Based Programming\",\n",
" \"explanation\": \"This lesson introduces students to prompt-based programming using a Jupyter notebook...\",\n",
" \"code_example\": \"# Example: Basic Prompt-Based Programming in Python\\nprompt = \\\"What is your name?\\\"\\nname = input(prompt)\\nprint(f'Hello, {name}!')\",\n",
" \"quiz_questions\": [\n",
" \"What is prompt-based programming?\",\n",
" \"How can you use inputs in a Jupyter notebook?\"\n",
" ],\n",
" \"follow_up_exercises\": [\n",
" \"Create a program that takes a user's age as input and calculates their birth year.\"\n",
" ]\n",
" },\n",
" \"feedback\": {\n",
" \"student_name\": \"Alice\",\n",
" \"progress\": \"Making good progress\",\n",
" \"personalized_recommendations\": [\n",
" \"Practice more with prompt-based programming exercises.\"\n",
" ]\n",
" }\n",
"}\n",
"\n",
"Personalized Feedback for Alice:\n",
"['Practice more with prompt-based programming exercises.']\n",
"\n"
]
}
]
},
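{
"cell_type": "markdown",
"source": [
"`batch_process_students` (like the other batch helpers in this notebook) waits on futures in submission order, so one slow request delays every result behind it. The standard library's `concurrent.futures.as_completed` yields each future as soon as it finishes; a sketch of the same helper with that change:\n",
"\n",
"```python\n",
"from concurrent.futures import ThreadPoolExecutor, as_completed\n",
"\n",
"def batch_process_students_eager(students, max_concurrent_requests):\n",
"    results = []\n",
"    with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:\n",
"        futures = [executor.submit(process_student, s) for s in students]\n",
"        # Collect each report as soon as its request finishes\n",
"        for future in as_completed(futures):\n",
"            results.append(future.result())\n",
"    return results\n",
"```\n",
"\n",
"Results then arrive in completion order; if report order must match the student list, the original in-order loop is the simpler choice."
],
"metadata": {}
},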
{
"cell_type": "markdown",
"source": [
"### Advanced Document Management System Using OpenAI\n",
"\n",
"This code demonstrates an advanced document management system that leverages the OpenAI API for extracting structured metadata from research papers. It utilizes Pydantic for data validation and `networkx` for dynamic graph management. The system processes multiple documents concurrently, extracting essential information such as titles, authors, abstracts, keywords, and references. By integrating OpenAI's structured output capabilities, the system ensures accurate metadata extraction, enhancing the efficiency and organization of research documentation. The implementation showcases the potential of AI in automating document analysis and management workflows.\n"
],
"metadata": {
"id": "JVpY-ZciDml3"
}
},
{
"cell_type": "code",
"source": [
"from pydantic import BaseModel\n",
"import openai\n",
"import networkx as nx\n",
"from concurrent.futures import ThreadPoolExecutor\n",
"import json\n",
"\n",
"# Initialize the OpenAI client (expects OPENAI_API_KEY in the environment)\n",
"client = openai.OpenAI()\n",
"\n",
"# Define the Pydantic model for document extraction output\n",
"class ResearchPaperExtraction(BaseModel):\n",
" title: str\n",
" authors: list[str]\n",
" abstract: str\n",
" keywords: list[str]\n",
" references: list[str]\n",
"\n",
"# Define parameters for the document processing\n",
"document_id = 1 # @param {type:\"integer\"}\n",
"document_text = \"\"\"This paper focuses on the advancements in quantum computing and its implications in cryptography. The main contributors are Alice, Bob, and Charlie. Keywords include quantum computing, cryptography, encryption, and security.\"\"\" # @param {type:\"string\"}\n",
"related_documents = [\"Quantum Cryptography: A Future Perspective\", \"The Impact of Quantum Computing on Security\"] # @param {type:\"raw\"}\n",
"max_concurrent_requests = 3 # @param {type:\"integer\"}\n",
"\n",
"# Initialize a graph for dynamic document management\n",
"graph = nx.Graph()\n",
"\n",
"# Function to generate structured document extraction using OpenAI\n",
"def generate_document_extraction(document_text, related_documents):\n",
" # Construct the prompt for OpenAI\n",
" system_message = \"You are an expert at structured data extraction. You will be given unstructured text from a research paper and should convert it into the given structure.\"\n",
" user_message = f\"\"\"\n",
" Extract metadata from the following document: {document_text}. Include related documents: {related_documents}.\n",
" \"\"\"\n",
"\n",
" # Make a request to the OpenAI API\n",
" completion = client.beta.chat.completions.parse(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_message}\n",
" ],\n",
" response_format=ResearchPaperExtraction,\n",
" )\n",
"\n",
" # Extract the structured output\n",
" structured_output = completion.choices[0].message.parsed\n",
"\n",
" return structured_output\n",
"\n",
"# Function to process and store a single document in the graph\n",
"def process_document(document_id, document_text, related_documents):\n",
" # Add a node for the document with the extracted metadata\n",
" graph.add_node(document_id, text=document_text, related_documents=related_documents)\n",
"\n",
" # Generate structured output for the document extraction\n",
" structured_output = generate_document_extraction(document_text, related_documents)\n",
"\n",
" return structured_output\n",
"\n",
"# Define a list of documents for batch processing\n",
"documents = [{\"document_id\": document_id, \"document_text\": document_text, \"related_documents\": related_documents}]\n",
"\n",
"# Function to process a single document\n",
"def process_single_document(document):\n",
" return process_document(document[\"document_id\"], document[\"document_text\"], document[\"related_documents\"])\n",
"\n",
"# Batch processing multiple documents using concurrent API calls\n",
"def batch_process_documents(documents, max_concurrent_requests):\n",
" results = []\n",
" with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:\n",
" futures = [executor.submit(process_single_document, doc) for doc in documents]\n",
" for future in futures:\n",
" results.append(future.result())\n",
" return results\n",
"\n",
"# Print message to indicate processing has started\n",
"print(\"Processing documents, please wait...\")\n",
"\n",
"# Perform batch processing on the list of documents\n",
"document_reports = batch_process_documents(documents, max_concurrent_requests)\n",
"\n",
"# Process and display results\n",
"for report in document_reports:\n",
" print(json.dumps(report.dict(), indent=4)) # Print structured data in a readable format"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Nviq9RJmB9Th",
"outputId": "def18bd0-d816-4023-b1f7-63aa4a6fbd11"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Processing documents, please wait...\n",
"{\n",
" \"title\": \"Advancements in Quantum Computing and Cryptography\",\n",
" \"authors\": [\n",
" \"Alice\",\n",
" \"Bob\",\n",
" \"Charlie\"\n",
" ],\n",
" \"abstract\": \"This paper focuses on the advancements in quantum computing and its implications in cryptography.\",\n",
" \"keywords\": [\n",
" \"quantum computing\",\n",
" \"cryptography\",\n",
" \"encryption\",\n",
" \"security\"\n",
" ],\n",
" \"references\": [\n",
" \"Quantum Cryptography: A Future Perspective\",\n",
" \"The Impact of Quantum Computing on Security\"\n",
" ]\n",
"}\n"
]
}
]
}
]
}