ruv-final.ipynb
{ | |
"cells": [ | |
{ | |
"cell_type": "markdown", | |
"metadata": { | |
"id": "view-in-github", | |
"colab_type": "text" | |
}, | |
"source": [ | |
"<a href=\"https://colab.research.google.com/gist/ruvnet/5cf24851841a120198f43e9639dba7a5/ruv-final.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": { | |
"id": "SkBkDqS0oFYf" | |
}, | |
"source": [ | |
"# Introduction to the Recursive Unified Validators (rUv) MoE Toolkit\n", | |
"\n", | |
"Reimagined by @rUv, the Recursive Unified Validators (rUv) toolkit redefines the landscape of Ai optimization.\n", | |
"\n", | |
"## **What's the future of software look like?**\n", | |
"### Introducing rUv MoE Toolkit: Powering software with self-learning & auto-enhancement. Supercharging older models, drastically boosting Ai performance and significantly reducing costs.\n", | |
"\n", | |
"Imagine if you could give super powers to older, less capable Ai models. The difference in performance, cost and capabilities is drastically different between the newest models and smaller or older models.\n", | |
"\n", | |
"**What if there was a way to automatically & easily tune older, cheaper, less capable models to greatly improve them?**\n", | |
"\n", | |
"My approach uses what I'm calling Recursive Unified Validators (rUv). It's an AI optimization framework that leverages a Mixture of Experts (MoE) with a self-optimization & training methodology. It reimagines AI optimization by combining reinforcement learning, self-optimization/hyper-tuning, and an autonomous self evolving architecture.\n", | |
"\n", | |
"Built using DSPy, rUv allows for seamless integration of expert modules and facilitates the creation of powerful AI systems.\n", | |
"\n", | |
"# Core Benefits\n", | |
"- **It evolves as it learns, auto-optimizing itself:** Using an internal teleprompter, it can create its own internal prompts on the fly, learning new things based any information / data / requests made to it.\n", | |
"\n", | |
"- **Efficiency through Resource Optimization:** rUv optimizes computational resources by dynamically selecting the most relevant & intelligent expert models for each task.\n", | |
"- **Hyper-Tuning:** Each model is hyper optimized for specific topic or domain using a automatic fine tuning based on internal reward system (reinforcement learning with human feedback).\n", | |
"\n", | |
"- **Accuracy via Tailored Outputs:** The framework generates tailored outputs by leveraging the specialized knowledge of multiple expert models. Automatically selects the best expert by testing for the best results.\n", | |
"\n", | |
"- **Flexibility with Versatile Application:** rUv can be applied to a wide range of domains and tasks, making it a versatile tool for various AI applications.\n", | |
"\n", | |
"- **Insight Generation through Continuous Learning:** The self-learning capabilities of rUv enable it to continuously generate valuable insights and improve its performance over time.\n", | |
"- *Automation:* rUv is great for automating various actions or task that require the application to learn and adjust. Think self driving software.\n", | |
"\n", | |
"## Novel Features\n", | |
"- **Reinforcement Learning and Self-Optimization Self-Learning capabilities:** rUv continuously learns and improves its performance through reinforcement learning techniques.\n", | |
"- **Self-Optimizing Architecture:** The framework dynamically adjusts its architecture to adapt to different tasks and optimize its performance.\n", | |
"- **Dynamic Expert Model Selection Mixture of Experts (MoE) approach:** rUv employs a MoE approach, where multiple expert models are trained to specialize in different domains or tasks.\n", | |
"- **Context-aware selection:** The gating model dynamically selects the most relevant expert model based on the input context.\n", | |
"\n", | |
"- **Enhanced Performance through Hyper-Tuning** rUv allows for fine-grained control over various hyperparameters, enabling users to tune the system for optimal performance based on their specific requirements.\n", | |
"\n", | |
"- **Adaptable Architecture for Output Generation** The framework generates comprehensive outputs by combining the knowledge and capabilities of multiple expert models, resulting in more accurate and diverse results.\n", | |
"\n", | |
"- **Auto Completion of Content or Code:** The rUv MoE Toolkit supports output continuation to generate more comprehensive responses. It recursively prompts the expert models to extend their outputs until a satisfactory level of completeness is achieved, as determined by checking for proper conclusion markers like context, grammar or code syntax, periods, exclamation points, or question marks at the end of the generated text.\n", | |
"\n", | |
"## Uses\n", | |
"\n", | |
"- **Business Analysis**: Offers detailed evaluations of market trends, investment opportunities, and technology impacts.\n", | |
"- **Code Development**: Assists in the generation, review, and optimization of code across various programming languages.\n", | |
"- **Creative Writing**: Enhances story creation, scriptwriting, and content development with innovative AI insights.\n", | |
"- **Academic Research**: Supports comprehensive analyses of complex topics, backed by up-to-date references and data.\n", | |
"\n", | |
"# Technical Configuration Overview\n", | |
"\n", | |
"## The Recursive Unified Validators (rUv) Toolkit operates under a set of technical parameters critical to its functionality.\n", | |
"\n", | |
"The rUv parameters are pivotal in dictating the system's behavior and output quality.\n", | |
"\n", | |
"Adjusting these settings enables precise control over the toolkit, ensuring optimal operation across various scenarios by fine-tuning efficiency, enhancing accuracy, and maintaining flexibility for a range of applications.\n", | |
"\n", | |
"\n", | |
"### 🤖 Number of Expert Models\n", | |
"\n", | |
"- **Purpose**: Determines the range and specialization of the expert models within the system.\n", | |
"- **Impact**: More experts increase topic coverage but require additional computational resources.\n", | |
"- **Configurable Range**: Typically between 3 to 12, with higher values offering greater diversity and specialization.\n", | |
"\n", | |
"### 🔄 Minimum Number of Iterations\n", | |
"\n", | |
"- **Purpose**: Ensures a meaningful exploration and refinement process by running the system for a sufficient number of iterations.\n", | |
"- **Impact**: Higher iteration counts allow for more thorough output refinement and system adaptation.\n", | |
"- **Configurable Range**: Common settings range from 3 for quick tasks to 15 for in-depth refinement.\n", | |
"\n", | |
"### 📈 Learning Rate\n", | |
"\n", | |
"- **Purpose**: Adjusts the speed at which the system adapts by controlling the step size of expert value updates.\n", | |
"- **Impact**: Balances between fast adaptation and stability. Higher rates increase speed but may lead to instability.\n", | |
"- **Configurable Range**: Varied from 0.05 for slow, stable learning to 0.5 for rapid adaptation.\n", | |
"\n", | |
"### 💰 Discount Factor\n", | |
"\n", | |
"- **Purpose**: Weighs the importance of future rewards in the system's decision-making process.\n", | |
"- **Impact**: Higher factors prioritize long-term success, while lower factors focus on immediate outcomes.\n", | |
"- **Configurable Range**: From 0.8, emphasizing short-term gains, to 0.99, focusing on long-term rewards.\n", | |
"\n", | |
"### 🔍 Exploration Rate\n", | |
"\n", | |
"- **Purpose**: Manages the exploration-exploitation trade-off by varying the system's willingness to try different experts.\n", | |
"- **Impact**: Higher exploration rates foster diversity and adaptability, whereas lower rates optimize for current knowledge.\n", | |
"- **Configurable Range**: Ranges from 0.05 for minimal exploration to 0.5 for aggressive exploration of new strategies.\n", | |
"\n", | |
"These parameters provide the foundation for tailoring the rUv Toolkit to specific needs, ensuring optimal performance across a wide array of applications.\n", | |
"\n", | |
"### License\n", | |
"\n", | |
"rUv is made available under the MIT License, supporting open-source collaboration and innovation.\n", | |
"\n" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"metadata": { | |
"id": "opxYHr7qoFYg" | |
}, | |
"source": [ | |
"# Getting Set-Up\n", | |
"\n", | |
"As we'll start to see below, **DSPy** can routinely teach powerful models like `GPT-3.5` and local models like `T5-base` or `Llama2-13b` to be much more reliable at complex tasks. **DSPy** will compile the _same program_ into different few-shot prompts and/or finetunes for each LM.\n", | |
"\n", | |
"Let's begin by setting things up.\n", | |
"\n", | |
"## The snippet below will also install **DSPy** if it's not there already." | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": null, | |
"metadata": { | |
"id": "sJW5o0r9oFYh" | |
}, | |
"outputs": [], | |
"source": [ | |
"%load_ext autoreload\n", | |
"%autoreload 2\n", | |
"\n", | |
"import sys\n", | |
"import os\n", | |
"\n", | |
"try: # When on google Colab, let's clone the notebook so we download the cache.\n", | |
" import google.colab\n", | |
" repo_path = 'dspy'\n", | |
" !git -C $repo_path pull origin || git clone https://github.com/stanfordnlp/dspy $repo_path\n", | |
"except:\n", | |
" repo_path = '.'\n", | |
"\n", | |
"if repo_path not in sys.path:\n", | |
" sys.path.append(repo_path)\n", | |
"\n", | |
"# Set up the cache for this notebook\n", | |
"os.environ[\"DSP_NOTEBOOK_CACHEDIR\"] = os.path.join(repo_path, 'cache')\n", | |
"\n", | |
"import pkg_resources # Install the package if it's not installed\n", | |
"if not \"dspy-ai\" in {pkg.key for pkg in pkg_resources.working_set}:\n", | |
" !pip install -U pip\n", | |
" !pip install dspy-ai\n", | |
" !pip install openai~=0.28.1\n", | |
" # !pip install -e $repo_path\n", | |
"\n", | |
"import dspy" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"## Configure your OpenAi Key or use another LLM" | |
], | |
"metadata": { | |
"id": "HC2Da441aCuq" | |
} | |
}, | |
{ | |
"cell_type": "code", | |
"source": [ | |
"# Install the OpenAI library (uncomment if needed)\n", | |
"# !pip install openai\n", | |
"\n", | |
"# Import necessary libraries\n", | |
"import openai\n", | |
"from google.colab import userdata\n", | |
"\n", | |
"# Retrieve and set the API key\n", | |
"api_key = userdata.get('OPENAI_API_KEY')\n", | |
"openai.api_key = api_key\n", | |
"\n", | |
"# Verify the API key is set (this is just for demonstration and should not be used in production code)\n", | |
"if openai.api_key:\n", | |
" print(\"OpenAI API key is set. Ready to proceed!\")\n", | |
"else:\n", | |
" print(\"OpenAI API key is not set. Please check your setup.\")\n" | |
], | |
"metadata": { | |
"colab": { | |
"base_uri": "https://localhost:8080/" | |
}, | |
"id": "k3Y7ksBIG2IW", | |
"outputId": "498ce9f8-4334-48bd-83f3-509541994135" | |
}, | |
"execution_count": null, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"name": "stdout", | |
"text": [ | |
"OpenAI API key is set. Ready to proceed!\n" | |
] | |
} | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"### 1] Getting Started\n", | |
"\n", | |
"We'll start by setting up the language model (LM) and retrieval model (RM). **DSPy** supports multiple API and local models. In this notebook, we'll work with GPT-3.5 (`gpt-3.5-turbo`) and the retriever `ColBERTv2`.\n", | |
"\n", | |
"To make things easy, we've set up a ColBERTv2 server hosting a Wikipedia 2017 \"abstracts\" search index (i.e., containing first paragraph of each article from this [2017 dump](https://hotpotqa.github.io/wiki-readme.html)), so you don't need to worry about setting one up! It's free." | |
], | |
"metadata": { | |
"id": "dVnyqY4o88Hb" | |
} | |
}, | |
{ | |
"cell_type": "code", | |
"source": [ | |
"turbo = dspy.OpenAI(model='gpt-3.5-turbo')\n", | |
"colbertv2_wiki17_abstracts = dspy.ColBERTv2(url='http://20.102.90.50:2017/wiki17_abstracts')\n", | |
"\n", | |
"dspy.settings.configure(lm=turbo, rm=colbertv2_wiki17_abstracts)" | |
], | |
"metadata": { | |
"id": "tiRb9e1Nr2VZ" | |
}, | |
"execution_count": null, | |
"outputs": [] | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"### Recursive Unified Validators (rUv) with Mixture of Experts (MoE) Approach\n", | |
"\n", | |
"The Recursive Unified Validators (rUv) concept, when applied within the DSPy framework, introduces a sophisticated approach to leveraging a Mixture of Experts (MoE) for complex problem-solving. This methodology is particularly effective in scenarios where a single model or \"expert\" is insufficient to address the multifaceted nature of the task at hand. By orchestrating a dynamic selection among a pool of specialized experts based on the input context, rUv aims to enhance both the accuracy and efficiency of the solution.\n", | |
"\n", | |
"In the context of DSPy, each expert is encapsulated within a declarative signature, defining the expected inputs and outputs for that expert's domain of knowledge. Here's an example of how an expert signature might be defined:\n", | |
"\n", | |
"```python\n", | |
"class rUv(dspy.Signature):\n", | |
" input_field = dspy.InputField()\n", | |
" output_field = dspy.OutputField(desc=\"Expert output\")\n", | |
"```\n", | |
"\n", | |
"This `rUv` serves as a blueprint for individual expert models, specifying the structure of the data they will receive and produce. The `input_field` represents the data or context that is fed into the expert, while the `output_field` describes the expert's output, which in this case, is generically described as \"Expert output\". This abstraction allows for a modular design where experts can be seamlessly integrated into the rUv system.\n", | |
"\n", | |
"The rUv system itself is designed to recursively validate and refine the outputs of these experts. It employs a selector mechanism to dynamically choose the most appropriate expert(s) for a given input. This selection process is crucial for handling diverse and complex inputs that may require specialized knowledge or processing. Once an expert is selected, its output is then subject to validation and refinement processes, ensuring that the final output meets the desired criteria of accuracy and coherence.\n", | |
"\n", | |
"The integration of rUv with MoE in DSPy facilitates a powerful, flexible approach to tackling challenging problems. It allows for the leveraging of specialized knowledge across various domains, ensuring that the most suitable expertise is applied to each aspect of the problem. This methodology not only enhances the system's overall performance but also its adaptability to new, unforeseen challenges.\n" | |
], | |
"metadata": { | |
"id": "kTyE7TyB_END" | |
} | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"# Configure your Experts" | |
], | |
"metadata": { | |
"id": "ATO6tTbJBbc7" | |
} | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"## Click the ▶️ below to active each settings. Look for a ✅ *mark*" | |
], | |
"metadata": { | |
"id": "7QZsdnvE2c7k" | |
} | |
}, | |
{ | |
"cell_type": "code", | |
"source": [ | |
"#@title Configuration Settings - Expert Models and Iterations { display-mode: \"form\" }\n", | |
"import ipywidgets as widgets\n", | |
"from IPython.display import display\n", | |
"\n", | |
"num_experts_widget = widgets.IntSlider(\n", | |
" value=3, min=1, max=32, step=1,\n", | |
" description='Number of expert models to use',\n", | |
" style={'description_width': 'initial'},\n", | |
" layout=widgets.Layout(width='40%')\n", | |
")\n", | |
"\n", | |
"min_iterations_widget = widgets.IntSlider(\n", | |
" value=3, min=1, max=100, step=1,\n", | |
" description='Minimum number of iterations to run',\n", | |
" style={'description_width': 'initial'},\n", | |
" layout=widgets.Layout(width='40%')\n", | |
")\n", | |
"\n", | |
"config_form_1 = widgets.VBox([\n", | |
" widgets.HTML(\"<h2>🤖 Number of Expert Models</h2>\"),\n", | |
" widgets.HTML(\"<p>Determines the diversity and specialization of the expert models. More experts can cover a wider range of topics but may require more computational resources.</p>\"),\n", | |
" num_experts_widget,\n", | |
" widgets.HTML(\"<p><em>Examples:</em></p>\"),\n", | |
" widgets.HTML(\"<ul>\"\n", | |
" \"<li><code>num_experts = 3</code>: Use a small number of expert models for a focused approach. Suitable for simpler tasks or when computational resources are limited.</li>\"\n", | |
" \"<li><code>num_experts = 5</code>: Use a moderate number of expert models to balance diversity and computational efficiency. Appropriate for most general-purpose applications.</li>\"\n", | |
" \"<li><code>num_experts = 8</code>: Use a higher number of expert models for increased diversity and specialization. Beneficial for complex tasks that require expertise in multiple domains.</li>\"\n", | |
" \"<li><code>num_experts = 12</code>: Use a large number of expert models for highly diverse and specialized knowledge. Suitable for advanced applications with ample computational resources.</li>\"\n", | |
" \"</ul>\"),\n", | |
" widgets.HTML(\"<h2>🔄 Minimum Number of Iterations</h2>\"),\n", | |
" widgets.HTML(\"<p>Ensures that the system runs for a sufficient number of iterations to generate meaningful outputs and updates.</p>\"),\n", | |
" min_iterations_widget,\n", | |
" widgets.HTML(\"<p><em>Examples:</em></p>\"),\n", | |
" widgets.HTML(\"<ul>\"\n", | |
" \"<li><code>min_iterations = 3</code>: Run the system for a minimum of 3 iterations. Suitable for quick prototyping or when a small number of iterations is sufficient.</li>\"\n", | |
" \"<li><code>min_iterations = 6</code>: Run the system for at least 6 iterations. Provides a balance between efficiency and allowing the system to refine its outputs.</li>\"\n", | |
" \"<li><code>min_iterations = 10</code>: Run the system for a minimum of 10 iterations. Allows for more comprehensive refinement and adaptation of the expert models.</li>\"\n", | |
" \"<li><code>min_iterations = 15</code>: Run the system for an extended number of iterations. Beneficial when the task requires significant fine-tuning and improvement over time.</li>\"\n", | |
" \"</ul>\")\n", | |
"])\n", | |
"\n", | |
"display(config_form_1)" | |
], | |
"metadata": { | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 869, | |
"referenced_widgets": [ | |
"b62d1817420440eebdd355b1a5b1881c", | |
"a3241775143b42aabec79452ec72db83", | |
"9ed09c22674c46229b2a95213ff63cd7", | |
"96bdef1cb33b425ca5c7f1a708ff8089", | |
"c162030f65e94a48964698f930766e60", | |
"62c8a82b75c04a4bb0b9ea2bb02dcad5", | |
"309afada93db4c739860a239d51c14d9", | |
"29c87ea3108943ad8fbedf2157e955d0", | |
"86f8eefffb604eb6848d986668a607e0", | |
"89f6007681d14dcca41d16cc5d531575", | |
"e7c2627ce00e461299bd45cd5c539ea8", | |
"dd989f046aee48d7968acb13eaa57e88", | |
"0f32288dfda24e17b72b5faa5a37f37b", | |
"0cd57b77fa514b8cb4a972cc5ff6d4d8", | |
"5d0c70a45eff45f58b854720279bbe32", | |
"3702e4f7d0414e60902be299cd9c6f8f", | |
"ef3e8a90e3ed46d38c685de358ba86f1", | |
"70e180bc184147109b9d50b75895cc21", | |
"41338ec82c1f4325a1e93f1860cc89de", | |
"279ad65e88d142dba2f8ccda10c6f83d", | |
"eee3f319087748d39bf447813a3ccad0", | |
"774570b3713740f8ba1a52ad793eeab1", | |
"61157d2d7e6b4850a5c71ff6715eeb88", | |
"5d69bbd9440b468b9b076bc4d6e98185", | |
"4acd883d2af744f9aac1cf022d731b6b", | |
"bd4c8c977688495f9c5073872c29efa3", | |
"6c81e7a21c314af89dbc3fc8b37d9932", | |
"031f9bad574147269583543eb0fe66c4", | |
"75d85358d1914595bc7b347ebeefdcce", | |
"3ee905f583b841f4bc80d11c2326b92d", | |
"8a880b69d2d6475c998987e488a44fa5", | |
"f728ffddcf744e3b82ef80a269927a15" | |
] | |
}, | |
"id": "ZBPjozuU9pxG", | |
"outputId": "965f9c7f-898e-4318-e1aa-f1e4060b90ce" | |
}, | |
"execution_count": null, | |
"outputs": [ | |
{ | |
"output_type": "display_data", | |
"data": { | |
"text/plain": [ | |
"VBox(children=(HTML(value='<h2>🤖 Number of Expert Models</h2>'), HTML(value='<p>Determines the diversity and s…" | |
], | |
"application/vnd.jupyter.widget-view+json": { | |
"version_major": 2, | |
"version_minor": 0, | |
"model_id": "b62d1817420440eebdd355b1a5b1881c" | |
} | |
}, | |
"metadata": {} | |
} | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"source": [ | |
"#@title Configuration Settings - Learning Parameters { display-mode: \"form\" }\n", | |
"learning_rate_widget = widgets.FloatSlider(\n", | |
" value=0.1, min=0.01, max=0.5, step=0.01,\n", | |
" description='Learning rate for updating expert values',\n", | |
" style={'description_width': 'initial'},\n", | |
" layout=widgets.Layout(width='40%')\n", | |
")\n", | |
"\n", | |
"discount_factor_widget = widgets.FloatSlider(\n", | |
" value=0.99, min=0.8, max=0.99, step=0.01,\n", | |
" description='Discount factor for future rewards',\n", | |
" style={'description_width': 'initial'},\n", | |
" layout=widgets.Layout(width='40%')\n", | |
")\n", | |
"\n", | |
"exploration_rate_widget = widgets.FloatSlider(\n", | |
" value=0.2, min=0.05, max=0.5, step=0.05,\n", | |
" description='Exploration rate for selecting experts',\n", | |
" style={'description_width': 'initial'},\n", | |
" layout=widgets.Layout(width='40%')\n", | |
")\n", | |
"\n", | |
"config_form_2 = widgets.VBox([\n", | |
" widgets.HTML(\"<h2>📈 Learning Rate</h2>\"),\n", | |
" widgets.HTML(\"<p>Controls the step size of the value updates. Higher learning rates lead to faster adaptation but may cause instability, while lower learning rates result in slower but more stable learning.</p>\"),\n", | |
" learning_rate_widget,\n", | |
" widgets.HTML(\"<p><em>Examples:</em></p>\"),\n", | |
" widgets.HTML(\"<ul>\"\n", | |
" \"<li><code>learning_rate = 0.05</code>: Use a low learning rate for cautious and gradual updates. Suitable when stability is a priority and slower adaptation is acceptable.</li>\"\n", | |
" \"<li><code>learning_rate = 0.1</code>: Use a moderate learning rate for balanced updates. Provides a good trade-off between adaptation speed and stability (default).</li>\"\n", | |
" \"<li><code>learning_rate = 0.2</code>: Use a higher learning rate for faster adaptation. Beneficial when quick adjustments are needed, but may introduce some instability.</li>\"\n", | |
" \"<li><code>learning_rate = 0.5</code>: Use an aggressive learning rate for rapid adaptation. Suitable for scenarios where fast convergence is desired, but careful monitoring is required to avoid instability.</li>\"\n", | |
" \"</ul>\"),\n", | |
" widgets.HTML(\"<h2>💰 Discount Factor</h2>\"),\n", | |
" widgets.HTML(\"<p>Determines the importance of future rewards. Higher discount factors give more weight to future rewards, while lower discount factors prioritize immediate rewards.</p>\"),\n", | |
" discount_factor_widget,\n", | |
" widgets.HTML(\"<p><em>Examples:</em></p>\"),\n", | |
" widgets.HTML(\"<ul>\"\n", | |
" \"<li><code>discount_factor = 0.8</code>: Use a lower discount factor to prioritize short-term rewards. Suitable when immediate outcomes are more important than long-term considerations.</li>\"\n", | |
" \"<li><code>discount_factor = 0.9</code>: Use a moderate discount factor to balance short-term and long-term rewards. Provides a good trade-off for most applications.</li>\"\n", | |
" \"<li><code>discount_factor = 0.95</code>: Use a higher discount factor to give more emphasis to future rewards. Beneficial when long-term performance is a key objective.</li>\"\n", | |
" \"<li><code>discount_factor = 0.99</code>: Use a very high discount factor to strongly prioritize future rewards. Suitable for tasks where long-term success is crucial (default).</li>\"\n", | |
" \"</ul>\"),\n", | |
" widgets.HTML(\"<h2>🔍 Exploration Rate</h2>\"),\n", | |
" widgets.HTML(\"<p>Balances the trade-off between exploiting the current best expert and exploring potentially better experts. Higher exploration rates encourage trying different experts, while lower rates focus on the current best expert.</p>\"),\n", | |
" exploration_rate_widget,\n", | |
" widgets.HTML(\"<p><em>Examples:</em></p>\"),\n", | |
" widgets.HTML(\"<ul>\"\n", | |
" \"<li><code>exploration_rate = 0.05</code>: Use a low exploration rate to heavily focus on the current best expert. Suitable when the system has already converged to a good solution and stability is desired.</li>\"\n", | |
" \"<li><code>exploration_rate = 0.1</code>: Use a moderate exploration rate to occasionally explore alternative experts. Provides a balance between exploitation and exploration.</li>\"\n", | |
" \"<li><code>exploration_rate = 0.3</code>: Use a higher exploration rate to more frequently try different experts. Beneficial when the optimal expert is uncertain and more exploration is needed.</li>\"\n", | |
" \"<li><code>exploration_rate = 0.5</code>: Use an aggressive exploration rate to prioritize exploring new experts over exploiting the current best. Suitable for tasks with a high degree of uncertainty or when the system needs to adapt to changing conditions.</li>\"\n", | |
" \"</ul>\")\n", | |
"])\n", | |
"\n", | |
"display(config_form_2)" | |
], | |
"metadata": { | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 1000, | |
"referenced_widgets": [ | |
"a1a694d78186448dadee815fd5a19af0", | |
"0a8b4a68c9d34f319cd2c0e49f660bcf", | |
"bb556759c4d441158b9aa87d3db0e71f", | |
"661ae8e7386349daa33c9d5d54962818", | |
"51087a4f3dea42e59c97bfd97c4db7a5", | |
"d72a600a5aab435a9820bc1d8e32be6c", | |
"fead9805ac6a405d9636f54f98d4f4c3", | |
"a2bd8ab0d6ae4ce1bd5c2bd01bddbe43", | |
"30584c3c266d41f7817cb0ececf5e941", | |
"dc5257471be9457fbe9cf454dd8b34e7", | |
"80f2f2ae617349dab6db97a18700f9ee", | |
"b179b217661d456db3b59ef02abdcfab", | |
"332beedfadc74a4a95ac9753fd8b3ca0", | |
"edaa25849ecb49d7ad2d98b6977aeecc", | |
"2db2452fed464fac96050c22558a1899", | |
"4c550d4c8aa447e9aa9f7019996cbfbb", | |
"8041d4bf9955495893d4ada4daed387e", | |
"085e3e897679415a84ff8f3341be74e2", | |
"c5835f1ccbab4b288b97e9517e8ca82c", | |
"bfe83c92d7e244dabd421a37862808c6", | |
"a3ac06235c4f4027b5309498ff35c8b3", | |
"d6d5e7cf6cfb45a18d4892434a4ed16f", | |
"7c89e8e68bab424284cfedd17d9283e8", | |
"a694eda30fe342a0b6303789d658eb44", | |
"4bf6ff2be919466a84ee442f585ab4db", | |
"61749680118845839f83275a6e75e280", | |
"6afb3fa777ee4485a79efad968821ee3", | |
"2a16f24a58be4bb2b55d5d057e8dba30", | |
"69b183de25264659a1cd0e67673ac09c", | |
"aa8d785ccf414e65b2f4189ce179d6c3", | |
"e5ab8d3855ec4d5296c83d1b3cadbee5", | |
"b34ad3298cb340b9996734fb9d4e4161", | |
"18f63332aa37430984da20bc5a49670c", | |
"24936b0a0c1f4ffd97ea8f269e0e92aa", | |
"27a054ace5614c0c9a5c6c25eefebed9", | |
"d1945e9b05be4cc7be15fde78d799a29", | |
"82487143b8294ba98caf6f4a914e5883", | |
"0e78500d1a0a44d0876a7159903d6cb6", | |
"5015d3188478422a8595de88df764b65", | |
"5477889420be4bb89c29683a1ae2c1f8", | |
"5cb084d2eb2645e4823f8d8e153f4b93", | |
"045ba4e692e147cbb3737e91e6915e12", | |
"925c3c659a034cd6b8236e9b35c755dd", | |
"91f69828da594e91ba07ca30f5863ac6", | |
"10c7f5bb43274557a88edfe121a31cd0", | |
"0e00121e9f1145568c380df5c3ca6e7a", | |
"ae40b66470d542d5bf455f6df22f3386" | |
] | |
}, | |
"id": "WDgH0Xve-LTK", | |
"outputId": "029158b8-90e9-460f-86ac-fe224e81a251" | |
}, | |
"execution_count": null, | |
"outputs": [ | |
{ | |
"output_type": "display_data", | |
"data": { | |
"text/plain": [ | |
"VBox(children=(HTML(value='<h2>📈 Learning Rate</h2>'), HTML(value='<p>Controls the step size of the value upda…" | |
], | |
"application/vnd.jupyter.widget-view+json": { | |
"version_major": 2, | |
"version_minor": 0, | |
"model_id": "a1a694d78186448dadee815fd5a19af0" | |
} | |
}, | |
"metadata": {} | |
} | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"## The Prompts\n", | |
"To introduce a dynamic and user-friendly template selection for configuring context, prompt, and guidance settings, the following code integrates a dropdown widget with pre-defined templates.\n", | |
"\n", | |
"These templates cover various scenarios like Business Analysis, Code Development, Story Creation, and TV/Movie Script writing, offering a structured approach to initializing the inputs for generating specialized content.\n", | |
"\n", | |
" The dropdown selection triggers an update in the context, prompt, and guidance text areas, reflecting the specific requirements of the chosen template. This setup not only streamlines the configuration process but also ensures that users can easily tailor the system to their immediate needs without manual input adjustments.\n" | |
], | |
"metadata": { | |
"id": "uanmk7vSwv7J" | |
} | |
}, | |
{ | |
"cell_type": "code", | |
"source": [ | |
"import ipywidgets as widgets\n", | |
"\n", | |
"# Define the templates\n", | |
"templates = {\n", | |
" \"Business Analysis\": {\n", | |
" \"context\": \"Acme Corporation, a leading multinational conglomerate, is actively exploring strategic investment opportunities in emerging technologies to maintain its competitive edge and drive future growth. The board of directors has convened a special committee to conduct a comprehensive analysis of the technological landscape and identify the most promising areas for investment. The committee seeks in-depth insights and recommendations on which cutting-edge technologies have the potential to revolutionize Acme's core industries and create new market opportunities over the next decade.\",\n", | |
" \"prompt\": \"Conduct a thorough evaluation of the potential impact and investment viability of four key emerging technologies: artificial intelligence (AI), blockchain, quantum computing, and biotechnology. For each technology, provide a detailed assessment of its current state of development, major players in the field, and projected market growth. Analyze the specific applications and use cases within Acme's core industries, highlighting the potential benefits, challenges, and disruptions each technology could bring. Consider factors such as scalability, regulatory landscape, talent availability, and competitive dynamics when assessing the investment viability of each technology. Provide clear recommendations on which technologies Acme should prioritize for investment, along with a proposed allocation of resources and a high-level roadmap for integration into the company's existing operations.\",\n", | |
" \"guidance\": \"Provide a comprehensive and well-structured analysis, focusing on delivering clear, concise, and actionable insights. Use industry-specific terminology and cite relevant data and examples to support your recommendations. Maintain an objective and analytical tone throughout the report.\"\n", | |
" },\n", | |
" \"Application Planning\": {\n", | |
" \"context\": \"Acme Software Solutions is developing a new web application for task management and collaboration. The application aims to streamline project management processes and enhance team productivity. The development team is in the early stages of the project and seeks guidance on architecting a scalable and maintainable solution.\",\n", | |
" \"prompt\": \"Design a high-level architecture for the task management and collaboration web application. Consider factors such as user authentication, data storage, real-time updates, and integration with third-party services. Provide recommendations on the choice of frontend and backend technologies, along with a justification for each selection. Outline the key components of the application, including the user interface, database schema, and API endpoints. Discuss potential challenges and propose strategies for addressing them, such as performance optimization, security considerations, and error handling. Finally, provide a roadmap for the development process, including milestones and deliverables.\",\n", | |
" \"guidance\": \"Provide a clear and concise architectural overview, focusing on the key design decisions and their rationale. Use technical terminology and diagrams where appropriate to illustrate the system architecture. Ensure that the recommendations align with industry best practices and consider the long-term maintainability and scalability of the application.\"\n", | |
" },\n", | |
" \"Source Code Generation\": {\n", | |
" \"context\": \"The development team at Acme Software Solutions is tasked with automating parts of their workflow, specifically focusing on generating source code for repetitive tasks and common software patterns. The team aims to enhance productivity and reduce manual coding errors.\",\n", | |
" \"prompt\": \"Create a Python script that automates the generation of source code for a simple REST API. The API should support basic CRUD (Create, Read, Update, Delete) operations for managing user information. Consider aspects such as request handling, response formatting, and data storage. Include error handling and input validation to ensure the robustness of the API. Provide comments within the code to explain the functionality and decisions made during development.\",\n", | |
" \"guidance\": \"Ensure the source code is clean, modular, and follows Python best practices. Use appropriate libraries and frameworks, such as Flask or FastAPI, to simplify the implementation. Structure the code to allow for easy extension and maintenance. Include detailed comments to aid understanding and future development. The final script should offer a clear example of how to structure a basic REST API in Python, serving as a template for further customization and expansion.\"\n", | |
" },\n", | |
" \"SQL Generation\": {\n", | |
" \"context\": \"The analytics team at Acme Corporation needs to frequently extract insights from their customer database. To streamline their analysis, they require an automated solution for generating SQL queries based on specific analytical requirements. This solution should accommodate various types of queries, such as data retrieval, aggregation, and filtering based on dynamic inputs.\",\n", | |
" \"prompt\": \"Develop a Python function that generates SQL queries for extracting user data from a 'customers' table. The function should accept parameters for selecting fields, setting conditions, and defining aggregation operations (e.g., COUNT, AVG). For example, if the user needs to find the average age of users in New York, the function should produce the appropriate SQL query. Include error handling to manage invalid inputs and ensure the generated SQL is valid and efficient.\",\n", | |
" \"guidance\": \"Focus on creating a flexible and robust function capable of handling a variety of query requirements. Ensure the function is well-documented, with examples demonstrating how to call it with different parameters. Use string formatting or templating libraries like Jinja2 to construct the SQL queries dynamically. Incorporate best practices for avoiding SQL injection vulnerabilities, such as using parameterized queries. The output should be an executable SQL query string, ready for use with a database connection.\"\n", | |
" },\n", | |
" \"Story Creation\": {\n", | |
" \"context\": \"Acme Publishing House is seeking fresh and engaging story ideas for its upcoming anthology series. The anthology will feature short stories across various genres, including science fiction, fantasy, mystery, and romance. The editorial team is looking for unique and captivating storylines that will resonate with a diverse audience.\",\n", | |
" \"prompt\": \"Generate a collection of five original story ideas, each belonging to a different genre. For each story idea, provide a brief synopsis that captures the main plot, characters, and themes. The stories should have compelling hooks, well-developed protagonists, and unexpected twists. Consider the target audience for each genre and tailor the stories accordingly. Provide a title for each story and a short explanation of why it would be a good fit for the anthology. Additionally, suggest potential authors or writing styles that could bring each story to life.\",\n", | |
" \"guidance\": \"Deliver creative and imaginative story ideas that showcase originality and depth. Use vivid descriptions and engaging language to capture the essence of each story. Ensure that the stories have a clear structure and narrative arc, with well-defined conflicts and resolutions. Provide enough detail to give the editorial team a strong sense of each story's potential, while leaving room for further development and interpretation.\"\n", | |
" },\n", | |
" \"TV/Movie Script\": {\n", | |
" \"context\": \"Acme Productions is developing a new television series that explores the lives of a group of friends navigating their careers, relationships, and personal growth in a bustling city. The series aims to capture the authentic experiences and challenges faced by young professionals in contemporary society. The writing team is brainstorming ideas for the pilot episode and seeks guidance on crafting a compelling script.\",\n", | |
" \"prompt\": \"Develop a detailed outline for the pilot episode of the television series. Introduce the main characters, their backgrounds, and their relationships with each other. Establish the central conflict or theme that will drive the narrative throughout the episode. Create a series of scenes that showcase the characters' personalities, aspirations, and struggles. Incorporate realistic dialogue and relatable situations that resonate with the target audience. Consider the pacing and structure of the episode, including key moments of tension, humor, and emotional depth. Provide a clear resolution or cliffhanger that sets the stage for future episodes.\",\n", | |
" \"guidance\": \"Craft a script outline that balances character development, plot progression, and thematic exploration. Use a mix of dialogue, action, and description to bring the scenes to life. Ensure that the characters have distinct voices and motivations that fuel their actions and interactions. Pay attention to the overall tone and style of the series, creating a consistent and engaging narrative. Provide enough detail to guide the writing process while allowing room for creative interpretation and collaboration among the writing team.\"\n", | |
" }\n", | |
"}\n", | |
"\n", | |
"# Create the dropdown widget\n", | |
"template_dropdown = widgets.Dropdown(\n", | |
" options=list(templates.keys()),\n", | |
" value=list(templates.keys())[0],\n", | |
" description='Template:',\n", | |
" layout=widgets.Layout(width='40%')\n", | |
")\n", | |
"\n", | |
"# Create the context, prompt, and guidance widgets\n", | |
"context_widget = widgets.Textarea(\n", | |
" value=templates[template_dropdown.value]['context'],\n", | |
" placeholder='Enter the context here',\n", | |
" description='Context:',\n", | |
" layout=widgets.Layout(width='40%', height='150px')\n", | |
")\n", | |
"\n", | |
"prompt_widget = widgets.Textarea(\n", | |
" value=templates[template_dropdown.value]['prompt'],\n", | |
" placeholder='Enter the prompt here',\n", | |
" description='Prompt:',\n", | |
" layout=widgets.Layout(width='40%', height='200px')\n", | |
")\n", | |
"\n", | |
"max_tokens_widget = widgets.IntSlider(\n", | |
" value=100,\n", | |
" min=100,\n", | |
" max=2000,\n", | |
" step=50,\n", | |
" description='Max Tokens:',\n", | |
" layout=widgets.Layout(width='40%'),\n", | |
" style={'description_width': 'initial'}\n", | |
")\n", | |
"\n", | |
"guidance_widget = widgets.Textarea(\n", | |
" value=templates[template_dropdown.value]['guidance'],\n", | |
" placeholder='Enter guidance for the model',\n", | |
" description='Guidance:',\n", | |
" layout=widgets.Layout(width='40%', height='100px')\n", | |
")\n", | |
"\n", | |
"# Define the on_template_change function\n", | |
"def on_template_change(change):\n", | |
" context_widget.value = templates[change.new]['context']\n", | |
" prompt_widget.value = templates[change.new]['prompt']\n", | |
" guidance_widget.value = templates[change.new]['guidance']\n", | |
"\n", | |
"# Observe the dropdown value change\n", | |
"template_dropdown.observe(on_template_change, names='value')\n", | |
"\n", | |
"# Create the configuration form\n", | |
"config_form_3 = widgets.VBox([\n", | |
" template_dropdown,\n", | |
" context_widget,\n", | |
" prompt_widget,\n", | |
" max_tokens_widget,\n", | |
" guidance_widget\n", | |
"])\n", | |
"\n", | |
"display(config_form_3)" | |
], | |
"metadata": { | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 543, | |
"referenced_widgets": [ | |
"a356162964e34b95bcb8e954d4ec683b", | |
"369229298bd0490494a5c0ee1a9c1e50", | |
"f99fe275b3494e399a1969a6b55638b1", | |
"9b0720d786224e27a2518209b6148ae5", | |
"e5f8db235feb4ceebcaa29a08fdac923", | |
"1b7e18d715ff4fe28fc60ac8e5760d31", | |
"6b3a1ade2832450caee860e298f2a41a", | |
"84f542806b3a4dc1a5fda1f22636301e", | |
"ce2e521f9f5b492f940a814d8672ae01", | |
"92008cb3084046508211d9351d4b483d", | |
"71b84a7a136448118a0b1127e0ded14f", | |
"544331156f82422e8944aa5688c1238d", | |
"6b1b21a0670b4407b249d9ad604cd0a2", | |
"27668c65119547d7abcde7e18cec117d", | |
"ebc4a82495e0412aa1a3990fe7b898ba", | |
"db54b88135e249a3af3819072013cfab", | |
"217aa3ead273475ea1e1d858dce97114" | |
] | |
}, | |
"id": "RqiePYzc-cxN", | |
"outputId": "a35d3120-b15b-4789-fd05-1f40db5e8b06" | |
}, | |
"execution_count": null, | |
"outputs": [ | |
{ | |
"output_type": "display_data", | |
"data": { | |
"text/plain": [ | |
"VBox(children=(Dropdown(description='Template:', layout=Layout(width='40%'), options=('Business Analysis', 'Ap…" | |
], | |
"application/vnd.jupyter.widget-view+json": { | |
"version_major": 2, | |
"version_minor": 0, | |
"model_id": "a356162964e34b95bcb8e954d4ec683b" | |
} | |
}, | |
"metadata": {} | |
} | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"# Recursive Unified Validators (rUv) MoE Toolkit Source Code\n", | |
"\n", | |
"This version maintains the foundational structure from the previous iteration, continuing to leverage DSPy for dynamic interaction with AI models like GPT-3.5-turbo and ColBERTv2.\n", | |
"\n", | |
"Notable enhancements include refined mechanisms for generating and evaluating expert model outputs, with improved logging for monitoring and debugging.\n", | |
"\n", | |
"The introduction of advanced features such as output continuation and intrinsic reward assessment underscores the toolkit's evolution towards more autonomous, context-sensitive operations. The system's architecture has been further optimized for adaptive learning, enabling more sophisticated and nuanced expert model integration and selection processes.\n" | |
], | |
"metadata": { | |
"id": "3EOFKZuephZ1" | |
} | |
}, | |
{ | |
"cell_type": "code", | |
"source": [ | |
"import dspy\n", | |
"import logging\n", | |
"import time\n", | |
"import random\n", | |
"from typing import List\n", | |
"\n", | |
"\n", | |
"#initial DSPy\n", | |
"turbo = dspy.OpenAI(model='gpt-3.5-turbo')\n", | |
"colbertv2_wiki17_abstracts = dspy.ColBERTv2(url='http://20.102.90.50:2017/wiki17_abstracts')\n", | |
"\n", | |
"# Configure logging\n", | |
"logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')\n", | |
"\n", | |
"class rUv(dspy.Signature):\n", | |
" \"\"\"\n", | |
" Recursive Unified Validators (rUv): Generate expert model outputs.\n", | |
" This is the primary function that generates outputs from multiple expert models.\n", | |
" \"\"\"\n", | |
" context = dspy.InputField(desc=\"The current context\")\n", | |
" prompt = dspy.InputField(desc=\"A prompt to guide the language model\")\n", | |
" max_tokens = dspy.InputField(desc=\"Maximum number of tokens to generate\", default=\"1500\")\n", | |
" temperature = dspy.InputField(desc=\"Temperature for sampling (higher values make output more random)\", default=\"0.1\")\n", | |
" top_k = dspy.InputField(desc=\"Top K words to sample from (higher values consider more words)\", default=\"100\")\n", | |
" top_p = dspy.InputField(desc=\"Top P probability threshold (higher values make output more diverse)\", default=\"0.9\")\n", | |
" frequency_penalty = dspy.InputField(desc=\"Frequency penalty (higher values penalize frequent words)\", default=\"0.0\")\n", | |
" presence_penalty = dspy.InputField(desc=\"Presence penalty (higher values penalize repeated words)\", default=\"0.0\")\n", | |
" output = dspy.OutputField(desc=\"The generated expert model output\")\n", | |
" teleprompter = dspy.InputField(desc=\"Additional context or instructions for the language model\", default=\"matter of fact\")\n", | |
"\n", | |
"class GatingModel(dspy.Signature):\n", | |
" \"\"\"Assess expert model relevance.\"\"\"\n", | |
" context = dspy.InputField(desc=\"The current context\")\n", | |
" expert_outputs = dspy.InputField(desc=\"Serialized string of generated expert model outputs\")\n", | |
" selected_expert = dspy.OutputField(desc=\"The index of the selected expert model\")\n", | |
" teleprompter = dspy.InputField(desc=\"Additional context or instructions for the language model\", default=\"Select the index of the most relevant expert for the given context.\")\n", | |
"\n", | |
"class IntrinsicRewardModel(dspy.Signature):\n", | |
" \"\"\"Evaluate the agent's performance intrinsically.\"\"\"\n", | |
" context = dspy.InputField(desc=\"The current context\")\n", | |
" expert_outputs = dspy.InputField(desc=\"Serialized string of generated expert model outputs\")\n", | |
" selected_expert_index = dspy.InputField(desc=\"The index of the selected expert model\")\n", | |
" intrinsic_reward = dspy.OutputField(desc=\"The intrinsic reward for the agent's performance\")\n", | |
"\n", | |
"class MixtureOfExperts:\n", | |
" def __init__(self, num_experts: int = 4, min_iterations: int = 4, learning_rate: float = 0.1, discount_factor: float = 0.99, exploration_rate: float = 0.2):\n", | |
" \"\"\"\n", | |
" Initialize the MixtureOfExperts class with default values if the previous code cell isn't run.\n", | |
"\n", | |
" Args:\n", | |
" num_experts (int, optional): Number of expert models to use. Defaults to 4.\n", | |
" min_iterations (int, optional): Minimum number of iterations to run. Defaults to 4.\n", | |
" learning_rate (float, optional): Learning rate for updating expert values. Defaults to 0.1.\n", | |
" discount_factor (float, optional): Discount factor for future rewards. Defaults to 0.99.\n", | |
" exploration_rate (float, optional): Exploration rate for selecting experts. Defaults to 0.2.\n", | |
" \"\"\"\n", | |
" self.num_experts: int = num_experts\n", | |
" self.expert_outputs: List[str] = []\n", | |
" self.min_iterations: int = min_iterations\n", | |
" self.learning_rate: float = learning_rate\n", | |
" self.discount_factor: float = discount_factor\n", | |
" self.exploration_rate: float = exploration_rate\n", | |
" self.expert_values: List[float] = [0.0] * num_experts\n", | |
" self.expert_architectures: List[dict] = [self.initialize_expert_architecture() for _ in range(num_experts)]\n", | |
" self.gating_architecture: dict = self.initialize_gating_architecture()\n", | |
" self.selected_experts_history: List[int] = []\n", | |
"\n", | |
" def initialize_expert_architecture(self) -> dict:\n", | |
" \"\"\"Initialize the architecture of an expert model.\"\"\"\n", | |
" return {\"num_layers\": random.randint(1, 5), \"hidden_size\": random.randint(32, 256)}\n", | |
"\n", | |
" def initialize_gating_architecture(self) -> dict:\n", | |
" \"\"\"Initialize the architecture of the gating model.\"\"\"\n", | |
" return {\"num_layers\": random.randint(1, 5), \"hidden_size\": random.randint(32, 256)}\n", | |
"\n", | |
" def generate_expert_outputs(self, context: str, prompt: str, max_tokens: int, guidance: str) -> str:\n", | |
" \"\"\"Generate expert outputs based on the given context and prompt.\"\"\"\n", | |
" logging.info(\"Starting to generate expert outputs...\")\n", | |
" print(\"Generating expert outputs...\")\n", | |
"\n", | |
" try:\n", | |
" generate_expert = dspy.Predict(rUv)\n", | |
" select_expert = dspy.Predict(GatingModel)\n", | |
" evaluate_intrinsic_reward = dspy.Predict(IntrinsicRewardModel)\n", | |
" except Exception as e:\n", | |
" logging.error(\"Error initializing DSPy Predict functions: %s\", e)\n", | |
" return \"Failed to initialize expert models.\"\n", | |
"\n", | |
" for i in range(self.num_experts):\n", | |
" print(f\"Generating output for Expert {i+1}/{self.num_experts}...\")\n", | |
" logging.info(f\"Generating output for Expert {i+1}...\")\n", | |
"\n", | |
" try:\n", | |
" expert_prompt = f\"Expert {i+1}: {prompt}\"\n", | |
"\n", | |
" # Determine the desired output length based on the previous values\n", | |
" if self.expert_values[i] < 0.2:\n", | |
" output_length = \"short\"\n", | |
" elif self.expert_values[i] < 0.5:\n", | |
" output_length = \"medium\"\n", | |
" else:\n", | |
" output_length = \"long\"\n", | |
"\n", | |
" expert_output = \"\"\n", | |
" while True:\n", | |
" partial_output = generate_expert(\n", | |
" context=context,\n", | |
" prompt=expert_prompt,\n", | |
" max_tokens=str(max_tokens),\n", | |
" temperature=str(random.uniform(0.7, 1.2)),\n", | |
" top_k=str(random.randint(30, 70)),\n", | |
" top_p=str(random.uniform(0.8, 1.0)),\n", | |
" frequency_penalty=str(random.uniform(0.0, 0.5)),\n", | |
" presence_penalty=str(random.uniform(0.0, 0.5)),\n", | |
" teleprompter=f\"Focus on your area of expertise. Provide a {output_length} response using a {random.choice(['formal', 'casual', 'technical'])} tone.\"\n", | |
" ).output\n", | |
" expert_output += partial_output\n", | |
"\n", | |
" if self.check_output_completeness(expert_output):\n", | |
" break\n", | |
"\n", | |
" expert_prompt = f\"Expert {i+1} (continued): {prompt}\\n{expert_output}\"\n", | |
"\n", | |
" self.expert_outputs.append(expert_output)\n", | |
"\n", | |
" print(f\"LLM Parameters for Expert {i+1}:\")\n", | |
" print(f\"Max Tokens per chunk: {max_tokens}\")\n", | |
" print(f\"Temperature: {random.uniform(0.7, 1.2)}\")\n", | |
" print(f\"Top K: {random.randint(30, 70)}\")\n", | |
" print(f\"Top P: {random.uniform(0.8, 1.0)}\")\n", | |
" print(f\"Frequency Penalty: {random.uniform(0.0, 0.5)}\")\n", | |
" print(f\"Presence Penalty: {random.uniform(0.0, 0.5)}\")\n", | |
" print(\"------------------------\")\n", | |
"\n", | |
" except Exception as e:\n", | |
" logging.error(\"Error generating output for Expert %d: %s\", i+1, e)\n", | |
" continue\n", | |
"\n", | |
" logging.info(f\"Output for Expert {i+1} generated.\")\n", | |
" time.sleep(1)\n", | |
"\n", | |
" try:\n", | |
" serialized_expert_outputs = ','.join(self.expert_outputs)\n", | |
"\n", | |
" if random.random() < self.exploration_rate:\n", | |
" selected_expert_index = random.randint(0, self.num_experts - 1)\n", | |
" else:\n", | |
" selected_expert_index = select_expert(context=context, expert_outputs=serialized_expert_outputs, teleprompter=\"Select the index of the most relevant expert for the given context.\").selected_expert\n", | |
" selected_expert_index = int(selected_expert_index) if selected_expert_index.isdigit() else 0\n", | |
"\n", | |
" # Penalize selection of recently chosen experts\n", | |
" if selected_expert_index in self.selected_experts_history:\n", | |
" selected_expert_index = random.randint(0, self.num_experts - 1)\n", | |
" except Exception as e:\n", | |
" logging.error(\"Error selecting the most relevant expert: %s\", e)\n", | |
" return \"Failed to select the most relevant expert.\"\n", | |
"\n", | |
" if selected_expert_index < 0 or selected_expert_index >= len(self.expert_outputs):\n", | |
" selected_expert_index = 0\n", | |
"\n", | |
" self.selected_experts_history.append(selected_expert_index)\n", | |
"\n", | |
" try:\n", | |
" intrinsic_reward_str = evaluate_intrinsic_reward(context=context, expert_outputs=serialized_expert_outputs, selected_expert_index=str(selected_expert_index)).intrinsic_reward\n", | |
"\n", | |
" # Extract numeric reward value from the string\n", | |
" if \"highly effective\" in intrinsic_reward_str.lower():\n", | |
" intrinsic_reward = 1.0\n", | |
" elif \"effective\" in intrinsic_reward_str.lower():\n", | |
" intrinsic_reward = 0.7\n", | |
" elif \"moderate\" in intrinsic_reward_str.lower():\n", | |
" intrinsic_reward = 0.5\n", | |
" elif \"poor\" in intrinsic_reward_str.lower():\n", | |
" intrinsic_reward = 0.2\n", | |
" else:\n", | |
" intrinsic_reward = 0.0\n", | |
" except Exception as e:\n", | |
" logging.error(\"Error evaluating intrinsic reward: %s\", e)\n", | |
" intrinsic_reward = 0.0\n", | |
"\n", | |
" print(\"Expert output generation complete!\")\n", | |
" logging.info(\"All expert outputs have been generated and the most relevant expert has been selected.\")\n", | |
"\n", | |
" return self.expert_outputs[selected_expert_index], selected_expert_index, intrinsic_reward\n", | |
"\n", | |
" def check_output_completeness(self, output: str) -> bool:\n", | |
" \"\"\"Check if the output ends with a proper conclusion.\"\"\"\n", | |
" if output.endswith(\".\") or output.endswith(\"!\") or output.endswith(\"?\"):\n", | |
" return True\n", | |
" return False\n", | |
"\n", | |
" def update_expert_values(self, selected_expert_index: int, reward: float):\n", | |
" \"\"\"Update the value estimate of the selected expert based on the received reward.\"\"\"\n", | |
" self.expert_values[selected_expert_index] += self.learning_rate * (reward + self.discount_factor * max(self.expert_values) - self.expert_values[selected_expert_index])\n", | |
"\n", | |
" def update_expert_architecture(self, expert_index: int):\n", | |
" \"\"\"Update the architecture of the specified expert model.\"\"\"\n", | |
" self.expert_architectures[expert_index] = self.initialize_expert_architecture()\n", | |
"\n", | |
" def update_gating_architecture(self):\n", | |
" \"\"\"Update the architecture of the gating model.\"\"\"\n", | |
" self.gating_architecture = self.initialize_gating_architecture()\n", | |
"\n", | |
" def check_termination_condition(self, iteration: int, total_reward: float) -> bool:\n", | |
" \"\"\"Check if the termination condition is met based on the iteration and total reward.\"\"\"\n", | |
" if iteration >= self.min_iterations and total_reward >= 10.0:\n", | |
" return True\n", | |
" return False\n", | |
"\n", | |
" def update_exploration_rate(self, iteration: int):\n", | |
" \"\"\"Update the exploration rate based on the current iteration.\"\"\"\n", | |
" self.exploration_rate = max(0.1, 1.0 - (iteration / self.min_iterations))\n", | |
"\n", | |
"# Example usage with adjustments for self-improvement and intrinsic motivation\n", | |
"context = \"Acme Corporation is exploring investment opportunities in emerging technologies. The board seeks insights into which technologies could potentially transform their industry over the next decade.\"\n", | |
"prompt = \"Evaluate the potential impact and investment viability of artificial intelligence (AI), blockchain, quantum computing, and biotechnology.\"\n", | |
"\n", | |
"# Get values from widgets\n", | |
"num_experts = num_experts_widget.value\n", | |
"min_iterations = min_iterations_widget.value\n", | |
"learning_rate = learning_rate_widget.value\n", | |
"discount_factor = discount_factor_widget.value\n", | |
"exploration_rate = exploration_rate_widget.value\n", | |
"context = context_widget.value\n", | |
"prompt = prompt_widget.value\n", | |
"max_tokens = max_tokens_widget.value\n", | |
"guidance = guidance_widget.value\n", | |
"\n", | |
"# Instantiate MixtureOfExperts with widget values\n", | |
"moe = MixtureOfExperts(\n", | |
" num_experts=num_experts,\n", | |
" min_iterations=min_iterations,\n", | |
" learning_rate=learning_rate,\n", | |
" discount_factor=discount_factor,\n", | |
" exploration_rate=exploration_rate\n", | |
")\n", | |
"for iteration in range(moe.min_iterations):\n", | |
" final_output, selected_expert_index, intrinsic_reward = moe.generate_expert_outputs(context, prompt, max_tokens, guidance)\n", | |
"\n", | |
" print(f\"Iteration {iteration+1} - Selected Expert: {selected_expert_index}, Intrinsic Reward: {intrinsic_reward}\")\n", | |
" print(\"Expert Values:\", moe.expert_values)\n", | |
" print(\"Final Expert Output:\")\n", | |
" print(final_output)\n", | |
" print(\"------------------------\")\n", | |
"\n", | |
" # Get reward from an external expert\n", | |
" expert_reward = float(input(f\"Enter expert reward for iteration {iteration+1}: \"))\n", | |
"\n", | |
" # Combine intrinsic and expert rewards\n", | |
" total_reward = intrinsic_reward + expert_reward\n", | |
"\n", | |
" print(f\"Expert Reward: {expert_reward}, Total Reward: {total_reward}\")\n", | |
"\n", | |
" # Update the value estimate of the selected expert\n", | |
" moe.update_expert_values(selected_expert_index, total_reward)\n", | |
"\n", | |
" # Update expert and gating architectures for self-improvement\n", | |
" if random.random() < 0.2:\n", | |
" moe.update_expert_architecture(selected_expert_index)\n", | |
" if random.random() < 0.1:\n", | |
" moe.update_gating_architecture()\n", | |
"\n", | |
" # Update exploration rate based on the current iteration\n", | |
" moe.update_exploration_rate(iteration)\n", | |
"\n", | |
" # Check termination condition\n", | |
" if moe.check_termination_condition(iteration, total_reward):\n", | |
" print(\"Termination condition met. Stopping the process.\")\n", | |
" break" | |
], | |
"metadata": { | |
"id": "W42acv02jBva", | |
"colab": { | |
"base_uri": "https://localhost:8080/" | |
}, | |
"outputId": "c2ad8d71-2c66-4f00-f62d-52edfbf39c9f" | |
}, | |
"execution_count": null, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"name": "stdout", | |
"text": [ | |
"Generating expert outputs...\n", | |
"Generating output for Expert 1/5...\n", | |
"LLM Parameters for Expert 1:\n", | |
"Max Tokens per chunk: 100\n", | |
"Temperature: 1.1332587161457235\n", | |
"Top K: 32\n", | |
"Top P: 0.9830388581598246\n", | |
"Frequency Penalty: 0.4846254455168328\n", | |
"Presence Penalty: 0.46250359452569334\n", | |
"------------------------\n", | |
"Generating output for Expert 2/5...\n", | |
"LLM Parameters for Expert 2:\n", | |
"Max Tokens per chunk: 100\n", | |
"Temperature: 1.100431727680847\n", | |
"Top K: 57\n", | |
"Top P: 0.8261864052371326\n", | |
"Frequency Penalty: 0.05766838068219726\n", | |
"Presence Penalty: 0.23045925903293224\n", | |
"------------------------\n", | |
"Generating output for Expert 3/5...\n", | |
"LLM Parameters for Expert 3:\n", | |
"Max Tokens per chunk: 100\n", | |
"Temperature: 0.7101652480793152\n", | |
"Top K: 31\n", | |
"Top P: 0.8054642673532337\n", | |
"Frequency Penalty: 0.018006329559420386\n", | |
"Presence Penalty: 0.08152148485919464\n", | |
"------------------------\n", | |
"Generating output for Expert 4/5...\n", | |
"LLM Parameters for Expert 4:\n", | |
"Max Tokens per chunk: 100\n", | |
"Temperature: 0.7818407384390405\n", | |
"Top K: 55\n", | |
"Top P: 0.82234975542765\n", | |
"Frequency Penalty: 0.07942774549010995\n", | |
"Presence Penalty: 0.47307463016381535\n", | |
"------------------------\n", | |
"Generating output for Expert 5/5...\n", | |
"LLM Parameters for Expert 5:\n", | |
"Max Tokens per chunk: 100\n", | |
"Temperature: 0.7992177385363244\n", | |
"Top K: 59\n", | |
"Top P: 0.9022164193478114\n", | |
"Frequency Penalty: 0.4585617982849777\n", | |
"Presence Penalty: 0.05136062904581068\n", | |
"------------------------\n", | |
"Expert output generation complete!\n", | |
"Iteration 1 - Selected Expert: 3, Intrinsic Reward: 0.0\n", | |
"Expert Values: [0.0, 0.0, 0.0, 0.0, 0.0]\n", | |
"Final Expert Output:\n", | |
"Based on the comprehensive evaluation of the four key emerging technologies, I recommend prioritizing investment in artificial intelligence (AI) and blockchain. AI has reached a mature stage of development with significant market growth potential, especially in automation, predictive analytics, and personalized customer experiences. Blockchain, on the other hand, offers transformative solutions in supply chain management, secure transactions, and decentralized applications. Quantum computing and biotechnology show promise but require further research and development before widespread adoption. Acme should allocate resources to AI and blockchain integration, leveraging existing talent and partnerships to drive innovation and competitive advantage.\n", | |
"------------------------\n" | |
] | |
} | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"## What's happening?\n", | |
"The expert reward is an external evaluation of the quality and relevance of the output generated by the selected expert model during each iteration of the Recursive Unified Validators (rUv) process.\n", | |
"\n", | |
"It plays a crucial role in guiding the learning and adaptation of the expert models over time. Here's how the expert reward affects the output:\n", | |
"\n", | |
"- **Feedback mechanism:** The expert reward serves as a feedback signal that indicates how well the selected expert model performed in generating a relevant and high-quality output for the given context and prompt. It allows the system to assess the effectiveness of each expert model based on external evaluation.\n", | |
"\n", | |
"- **Updating expert values:** The expert reward is used to update the value estimate of the selected expert model. The update_expert_values method in the MixtureOfExperts class adjusts the value of the selected expert based on the received reward, the learning rate, and the discount factor. This update helps the system learn which experts are more reliable and valuable for specific contexts over time.\n", | |
"\n", | |
"- **Reinforcement learning:** The expert reward is combined with the intrinsic reward (generated by the IntrinsicRewardModel) to calculate the total reward for each iteration. This total reward is used to guide the reinforcement learning process, where the system learns to select the most appropriate expert models based on their historical performance and the current context.\n", | |
"\n", | |
"- **Balancing exploration and exploitation:** The expert reward influences the balance between exploration and exploitation in the expert selection process. If an expert consistently receives high rewards, it is more likely to be selected in future iterations (exploitation). However, the system also maintains an exploration rate to occasionally select random experts and explore potentially better options (exploration).\n", | |
"\n", | |
"- **Termination condition:** The expert reward contributes to the total reward, which is used to check the termination condition for the rUv process. If the total reward exceeds a certain threshold and the minimum number of iterations is reached, the process may terminate early, indicating that a satisfactory output has been generated.\n", | |
"\n", | |
"By providing an external evaluation of the generated outputs, the expert reward helps the rUv system learn and adapt over time. It guides the selection and improvement of expert models, ensuring that the most relevant and high-quality outputs are generated for the given context and prompt.\n", | |
"\n", | |
"The expert reward is typically provided by a human evaluator or a separate evaluation model that assesses the quality and relevance of the generated outputs.\n", | |
"\n", | |
"The reward value is usually a numeric score that represents the level of satisfaction or effectiveness of the output. It's important to note that the expert reward is specific to each iteration and expert model. It allows for fine-grained feedback and adaptation, enabling the system to continuously improve its performance and generate more relevant and coherent outputs over time." | |
], | |
"metadata": { | |
"id": "P01jYzIZNKV1" | |
} | |
}, | |
{ | |
"cell_type": "markdown", | |
"source": [ | |
"# Simple rUv Version\n", | |
"The basic version, good with framrworks like FASTAPI.\n", | |
"\n", | |
"Please note that the max_tokens parameter may need to be adjusted based on the desired output length, as there isn't an auto-complete option in this version. Experiment with different values to find the optimal token limit for your specific use case.\n" | |
], | |
"metadata": { | |
"id": "B_2HAkL7Hw_F" | |
} | |
}, | |
{ | |
"cell_type": "code", | |
"source": [ | |
"import dspy\n", | |
"import logging\n", | |
"import random\n", | |
"\n", | |
"# Initialize DSPy\n", | |
"turbo = dspy.OpenAI(model='gpt-3.5-turbo')\n", | |
"colbertv2_wiki17_abstracts = dspy.ColBERTv2(url='http://20.102.90.50:2017/wiki17_abstracts')\n", | |
"\n", | |
"# Configure logging\n", | |
"logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')\n", | |
"\n", | |
"class rUv(dspy.Signature):\n", | |
" \"\"\"\n", | |
" Recursive Unified Validators (rUv): Generate expert model outputs.\n", | |
" This is the primary function that generates outputs from multiple expert models.\n", | |
" \"\"\"\n", | |
" context = dspy.InputField(desc=\"The current context\")\n", | |
" prompt = dspy.InputField(desc=\"A prompt to guide the language model\")\n", | |
" max_tokens = dspy.InputField(desc=\"Maximum number of tokens to generate\", default=\"1500\")\n", | |
" temperature = dspy.InputField(desc=\"Temperature for sampling (higher values make output more random)\", default=\"0.1\")\n", | |
" top_k = dspy.InputField(desc=\"Top K words to sample from (higher values consider more words)\", default=\"100\")\n", | |
" top_p = dspy.InputField(desc=\"Top P probability threshold (higher values make output more diverse)\", default=\"0.9\")\n", | |
" frequency_penalty = dspy.InputField(desc=\"Frequency penalty (higher values penalize frequent words)\", default=\"0.0\")\n", | |
" presence_penalty = dspy.InputField(desc=\"Presence penalty (higher values penalize repeated words)\", default=\"0.0\")\n", | |
" output = dspy.OutputField(desc=\"The generated expert model output\")\n", | |
" teleprompter = dspy.InputField(desc=\"Additional context or instructions for the language model\", default=\"matter of fact\")\n", | |
"\n", | |
"def generate_expert_output(context: str, prompt: str, max_tokens: int, guidance: str) -> str:\n", | |
" \"\"\"Generate expert output based on the given context and prompt.\"\"\"\n", | |
" logging.info(\"Starting to generate expert output...\")\n", | |
" print(\"Generating expert output...\")\n", | |
"\n", | |
" try:\n", | |
" generate_expert = dspy.Predict(rUv)\n", | |
" except Exception as e:\n", | |
" logging.error(\"Error initializing DSPy Predict function: %s\", e)\n", | |
" return \"Failed to initialize expert model.\"\n", | |
"\n", | |
" try:\n", | |
" expert_output = generate_expert(\n", | |
" context=context,\n", | |
" prompt=prompt,\n", | |
" max_tokens=str(max_tokens),\n", | |
" temperature=str(random.uniform(0.7, 1.2)),\n", | |
" top_k=str(random.randint(30, 70)),\n", | |
" top_p=str(random.uniform(0.8, 1.0)),\n", | |
" frequency_penalty=str(random.uniform(0.0, 0.5)),\n", | |
" presence_penalty=str(random.uniform(0.0, 0.5)),\n", | |
" teleprompter=f\"Focus on your area of expertise. Provide a response using a {random.choice(['formal', 'casual', 'technical'])} tone.\"\n", | |
" ).output\n", | |
" except Exception as e:\n", | |
" logging.error(\"Error generating expert output: %s\", e)\n", | |
" return \"Failed to generate expert output.\"\n", | |
"\n", | |
" print(\"Expert output generation complete!\")\n", | |
" logging.info(\"Expert output has been generated.\")\n", | |
"\n", | |
" return expert_output\n", | |
"\n", | |
"# Example usage\n", | |
"context = \"Acme Corporation is exploring investment opportunities in emerging technologies. The board seeks insights into which technologies could potentially transform their industry over the next decade.\"\n", | |
"prompt = \"Evaluate the potential impact and investment viability of artificial intelligence (AI), blockchain, quantum computing, and biotechnology.\"\n", | |
"max_tokens = 1000\n", | |
"guidance = \"Provide a detailed analysis of each technology, considering factors such as market potential, adoption rates, and regulatory landscape.\"\n", | |
"\n", | |
"expert_output = generate_expert_output(context, prompt, max_tokens, guidance)\n", | |
"print(\"Expert Output:\")\n", | |
"print(expert_output)" | |
], | |
"metadata": { | |
"colab": { | |
"base_uri": "https://localhost:8080/" | |
}, | |
"id": "Lc5DeTe5HtCw", | |
"outputId": "4d3d87c6-af04-4bfb-dc0f-086a2f449ded" | |
}, | |
"execution_count": null, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"name": "stdout", | |
"text": [ | |
"Generating expert output...\n", | |
"Expert output generation complete!\n", | |
"Expert Output:\n", | |
"Artificial Intelligence (AI) has already shown significant potential to transform industries across the board. From improving customer service through chatbots to optimizing supply chains with predictive analytics, AI is poised to revolutionize how businesses operate. In terms of investment viability, AI startups continue to attract substantial funding, indicating strong market interest in this technology.\n", | |
"\n", | |
"Blockchain, known for its secure and transparent nature, has the potential to disrupt industries like finance, healthcare, and supply chain management. Its decentralized ledger system can enhance data security, streamline transactions, and reduce fraud. As more companies explore blockchain applications, investing in this technology could yield substantial returns in the long run.\n", | |
"\n", | |
"Quantum computing, although still in its early stages, holds immense promise for solving complex problems that are beyond the capabilities\n" | |
] | |
} | |
] | |
} | |
], | |
"metadata": { | |
"kernelspec": { | |
"display_name": "py39", | |
"language": "python", | |
"name": "python3" | |
}, | |
"language_info": { | |
"codemirror_mode": { | |
"name": "ipython", | |
"version": 3 | |
}, | |
"file_extension": ".py", | |
"mimetype": "text/x-python", | |
"name": "python", | |
"nbconvert_exporter": "python", | |
"pygments_lexer": "ipython3", | |
"version": "3.9.17" | |
}, | |
"orig_nbformat": 4, | |
"colab": { | |
"provenance": [], | |
"include_colab_link": true | |
}, | |
"widgets": { | |
"application/vnd.jupyter.widget-state+json": { | |
"b62d1817420440eebdd355b1a5b1881c": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "VBoxModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "VBoxModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "VBoxView", | |
"box_style": "", | |
"children": [ | |
"IPY_MODEL_a3241775143b42aabec79452ec72db83", | |
"IPY_MODEL_9ed09c22674c46229b2a95213ff63cd7", | |
"IPY_MODEL_96bdef1cb33b425ca5c7f1a708ff8089", | |
"IPY_MODEL_c162030f65e94a48964698f930766e60", | |
"IPY_MODEL_62c8a82b75c04a4bb0b9ea2bb02dcad5", | |
"IPY_MODEL_309afada93db4c739860a239d51c14d9", | |
"IPY_MODEL_29c87ea3108943ad8fbedf2157e955d0", | |
"IPY_MODEL_86f8eefffb604eb6848d986668a607e0", | |
"IPY_MODEL_89f6007681d14dcca41d16cc5d531575", | |
"IPY_MODEL_e7c2627ce00e461299bd45cd5c539ea8" | |
], | |
"layout": "IPY_MODEL_dd989f046aee48d7968acb13eaa57e88" | |
} | |
}, | |
"a3241775143b42aabec79452ec72db83": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_0f32288dfda24e17b72b5faa5a37f37b", | |
"placeholder": "", | |
"style": "IPY_MODEL_0cd57b77fa514b8cb4a972cc5ff6d4d8", | |
"value": "<h2>🤖 Number of Expert Models</h2>" | |
} | |
}, | |
"9ed09c22674c46229b2a95213ff63cd7": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_5d0c70a45eff45f58b854720279bbe32", | |
"placeholder": "", | |
"style": "IPY_MODEL_3702e4f7d0414e60902be299cd9c6f8f", | |
"value": "<p>Determines the diversity and specialization of the expert models. More experts can cover a wider range of topics but may require more computational resources.</p>" | |
} | |
}, | |
"96bdef1cb33b425ca5c7f1a708ff8089": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "IntSliderModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "IntSliderModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "IntSliderView", | |
"continuous_update": true, | |
"description": "Number of expert models to use", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_ef3e8a90e3ed46d38c685de358ba86f1", | |
"max": 32, | |
"min": 1, | |
"orientation": "horizontal", | |
"readout": true, | |
"readout_format": "d", | |
"step": 1, | |
"style": "IPY_MODEL_70e180bc184147109b9d50b75895cc21", | |
"value": 5 | |
} | |
}, | |
"c162030f65e94a48964698f930766e60": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_41338ec82c1f4325a1e93f1860cc89de", | |
"placeholder": "", | |
"style": "IPY_MODEL_279ad65e88d142dba2f8ccda10c6f83d", | |
"value": "<p><em>Examples:</em></p>" | |
} | |
}, | |
"62c8a82b75c04a4bb0b9ea2bb02dcad5": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_eee3f319087748d39bf447813a3ccad0", | |
"placeholder": "", | |
"style": "IPY_MODEL_774570b3713740f8ba1a52ad793eeab1", | |
"value": "<ul><li><code>num_experts = 3</code>: Use a small number of expert models for a focused approach. Suitable for simpler tasks or when computational resources are limited.</li><li><code>num_experts = 5</code>: Use a moderate number of expert models to balance diversity and computational efficiency. Appropriate for most general-purpose applications.</li><li><code>num_experts = 8</code>: Use a higher number of expert models for increased diversity and specialization. Beneficial for complex tasks that require expertise in multiple domains.</li><li><code>num_experts = 12</code>: Use a large number of expert models for highly diverse and specialized knowledge. Suitable for advanced applications with ample computational resources.</li></ul>" | |
} | |
}, | |
"309afada93db4c739860a239d51c14d9": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_61157d2d7e6b4850a5c71ff6715eeb88", | |
"placeholder": "", | |
"style": "IPY_MODEL_5d69bbd9440b468b9b076bc4d6e98185", | |
"value": "<h2>🔄 Minimum Number of Iterations</h2>" | |
} | |
}, | |
"29c87ea3108943ad8fbedf2157e955d0": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_4acd883d2af744f9aac1cf022d731b6b", | |
"placeholder": "", | |
"style": "IPY_MODEL_bd4c8c977688495f9c5073872c29efa3", | |
"value": "<p>Ensures that the system runs for a sufficient number of iterations to generate meaningful outputs and updates.</p>" | |
} | |
}, | |
"86f8eefffb604eb6848d986668a607e0": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "IntSliderModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "IntSliderModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "IntSliderView", | |
"continuous_update": true, | |
"description": "Minimum number of iterations to run", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_6c81e7a21c314af89dbc3fc8b37d9932", | |
"max": 100, | |
"min": 1, | |
"orientation": "horizontal", | |
"readout": true, | |
"readout_format": "d", | |
"step": 1, | |
"style": "IPY_MODEL_031f9bad574147269583543eb0fe66c4", | |
"value": 3 | |
} | |
}, | |
"89f6007681d14dcca41d16cc5d531575": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_75d85358d1914595bc7b347ebeefdcce", | |
"placeholder": "", | |
"style": "IPY_MODEL_3ee905f583b841f4bc80d11c2326b92d", | |
"value": "<p><em>Examples:</em></p>" | |
} | |
}, | |
"e7c2627ce00e461299bd45cd5c539ea8": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_8a880b69d2d6475c998987e488a44fa5", | |
"placeholder": "", | |
"style": "IPY_MODEL_f728ffddcf744e3b82ef80a269927a15", | |
"value": "<ul><li><code>min_iterations = 3</code>: Run the system for a minimum of 3 iterations. Suitable for quick prototyping or when a small number of iterations is sufficient.</li><li><code>min_iterations = 6</code>: Run the system for at least 6 iterations. Provides a balance between efficiency and allowing the system to refine its outputs.</li><li><code>min_iterations = 10</code>: Run the system for a minimum of 10 iterations. Allows for more comprehensive refinement and adaptation of the expert models.</li><li><code>min_iterations = 15</code>: Run the system for an extended number of iterations. Beneficial when the task requires significant fine-tuning and improvement over time.</li></ul>" | |
} | |
}, | |
"dd989f046aee48d7968acb13eaa57e88": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"0f32288dfda24e17b72b5faa5a37f37b": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"0cd57b77fa514b8cb4a972cc5ff6d4d8": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"5d0c70a45eff45f58b854720279bbe32": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"3702e4f7d0414e60902be299cd9c6f8f": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"ef3e8a90e3ed46d38c685de358ba86f1": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"70e180bc184147109b9d50b75895cc21": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "SliderStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "SliderStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "initial", | |
"handle_color": null | |
} | |
}, | |
"41338ec82c1f4325a1e93f1860cc89de": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"279ad65e88d142dba2f8ccda10c6f83d": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"eee3f319087748d39bf447813a3ccad0": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"774570b3713740f8ba1a52ad793eeab1": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"61157d2d7e6b4850a5c71ff6715eeb88": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"5d69bbd9440b468b9b076bc4d6e98185": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"4acd883d2af744f9aac1cf022d731b6b": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"bd4c8c977688495f9c5073872c29efa3": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"6c81e7a21c314af89dbc3fc8b37d9932": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"031f9bad574147269583543eb0fe66c4": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "SliderStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "SliderStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "initial", | |
"handle_color": null | |
} | |
}, | |
"75d85358d1914595bc7b347ebeefdcce": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"3ee905f583b841f4bc80d11c2326b92d": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"8a880b69d2d6475c998987e488a44fa5": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"f728ffddcf744e3b82ef80a269927a15": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"a1a694d78186448dadee815fd5a19af0": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "VBoxModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "VBoxModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "VBoxView", | |
"box_style": "", | |
"children": [ | |
"IPY_MODEL_0a8b4a68c9d34f319cd2c0e49f660bcf", | |
"IPY_MODEL_bb556759c4d441158b9aa87d3db0e71f", | |
"IPY_MODEL_661ae8e7386349daa33c9d5d54962818", | |
"IPY_MODEL_51087a4f3dea42e59c97bfd97c4db7a5", | |
"IPY_MODEL_d72a600a5aab435a9820bc1d8e32be6c", | |
"IPY_MODEL_fead9805ac6a405d9636f54f98d4f4c3", | |
"IPY_MODEL_a2bd8ab0d6ae4ce1bd5c2bd01bddbe43", | |
"IPY_MODEL_30584c3c266d41f7817cb0ececf5e941", | |
"IPY_MODEL_dc5257471be9457fbe9cf454dd8b34e7", | |
"IPY_MODEL_80f2f2ae617349dab6db97a18700f9ee", | |
"IPY_MODEL_b179b217661d456db3b59ef02abdcfab", | |
"IPY_MODEL_332beedfadc74a4a95ac9753fd8b3ca0", | |
"IPY_MODEL_edaa25849ecb49d7ad2d98b6977aeecc", | |
"IPY_MODEL_2db2452fed464fac96050c22558a1899", | |
"IPY_MODEL_4c550d4c8aa447e9aa9f7019996cbfbb" | |
], | |
"layout": "IPY_MODEL_8041d4bf9955495893d4ada4daed387e" | |
} | |
}, | |
"0a8b4a68c9d34f319cd2c0e49f660bcf": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_085e3e897679415a84ff8f3341be74e2", | |
"placeholder": "", | |
"style": "IPY_MODEL_c5835f1ccbab4b288b97e9517e8ca82c", | |
"value": "<h2>📈 Learning Rate</h2>" | |
} | |
}, | |
"bb556759c4d441158b9aa87d3db0e71f": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_bfe83c92d7e244dabd421a37862808c6", | |
"placeholder": "", | |
"style": "IPY_MODEL_a3ac06235c4f4027b5309498ff35c8b3", | |
"value": "<p>Controls the step size of the value updates. Higher learning rates lead to faster adaptation but may cause instability, while lower learning rates result in slower but more stable learning.</p>" | |
} | |
}, | |
"661ae8e7386349daa33c9d5d54962818": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "FloatSliderModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "FloatSliderModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "FloatSliderView", | |
"continuous_update": true, | |
"description": "Learning rate for updating expert values", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_d6d5e7cf6cfb45a18d4892434a4ed16f", | |
"max": 0.5, | |
"min": 0.01, | |
"orientation": "horizontal", | |
"readout": true, | |
"readout_format": ".2f", | |
"step": 0.01, | |
"style": "IPY_MODEL_7c89e8e68bab424284cfedd17d9283e8", | |
"value": 0.1 | |
} | |
}, | |
"51087a4f3dea42e59c97bfd97c4db7a5": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_a694eda30fe342a0b6303789d658eb44", | |
"placeholder": "", | |
"style": "IPY_MODEL_4bf6ff2be919466a84ee442f585ab4db", | |
"value": "<p><em>Examples:</em></p>" | |
} | |
}, | |
"d72a600a5aab435a9820bc1d8e32be6c": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_61749680118845839f83275a6e75e280", | |
"placeholder": "", | |
"style": "IPY_MODEL_6afb3fa777ee4485a79efad968821ee3", | |
"value": "<ul><li><code>learning_rate = 0.05</code>: Use a low learning rate for cautious and gradual updates. Suitable when stability is a priority and slower adaptation is acceptable.</li><li><code>learning_rate = 0.1</code>: Use a moderate learning rate for balanced updates. Provides a good trade-off between adaptation speed and stability (default).</li><li><code>learning_rate = 0.2</code>: Use a higher learning rate for faster adaptation. Beneficial when quick adjustments are needed, but may introduce some instability.</li><li><code>learning_rate = 0.5</code>: Use an aggressive learning rate for rapid adaptation. Suitable for scenarios where fast convergence is desired, but careful monitoring is required to avoid instability.</li></ul>" | |
} | |
}, | |
"fead9805ac6a405d9636f54f98d4f4c3": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_2a16f24a58be4bb2b55d5d057e8dba30", | |
"placeholder": "", | |
"style": "IPY_MODEL_69b183de25264659a1cd0e67673ac09c", | |
"value": "<h2>💰 Discount Factor</h2>" | |
} | |
}, | |
"a2bd8ab0d6ae4ce1bd5c2bd01bddbe43": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_aa8d785ccf414e65b2f4189ce179d6c3", | |
"placeholder": "", | |
"style": "IPY_MODEL_e5ab8d3855ec4d5296c83d1b3cadbee5", | |
"value": "<p>Determines the importance of future rewards. Higher discount factors give more weight to future rewards, while lower discount factors prioritize immediate rewards.</p>" | |
} | |
}, | |
"30584c3c266d41f7817cb0ececf5e941": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "FloatSliderModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "FloatSliderModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "FloatSliderView", | |
"continuous_update": true, | |
"description": "Discount factor for future rewards", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_b34ad3298cb340b9996734fb9d4e4161", | |
"max": 0.99, | |
"min": 0.8, | |
"orientation": "horizontal", | |
"readout": true, | |
"readout_format": ".2f", | |
"step": 0.01, | |
"style": "IPY_MODEL_18f63332aa37430984da20bc5a49670c", | |
"value": 0.99 | |
} | |
}, | |
"dc5257471be9457fbe9cf454dd8b34e7": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_24936b0a0c1f4ffd97ea8f269e0e92aa", | |
"placeholder": "", | |
"style": "IPY_MODEL_27a054ace5614c0c9a5c6c25eefebed9", | |
"value": "<p><em>Examples:</em></p>" | |
} | |
}, | |
"80f2f2ae617349dab6db97a18700f9ee": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_d1945e9b05be4cc7be15fde78d799a29", | |
"placeholder": "", | |
"style": "IPY_MODEL_82487143b8294ba98caf6f4a914e5883", | |
"value": "<ul><li><code>discount_factor = 0.8</code>: Use a lower discount factor to prioritize short-term rewards. Suitable when immediate outcomes are more important than long-term considerations.</li><li><code>discount_factor = 0.9</code>: Use a moderate discount factor to balance short-term and long-term rewards. Provides a good trade-off for most applications.</li><li><code>discount_factor = 0.95</code>: Use a higher discount factor to give more emphasis to future rewards. Beneficial when long-term performance is a key objective.</li><li><code>discount_factor = 0.99</code>: Use a very high discount factor to strongly prioritize future rewards. Suitable for tasks where long-term success is crucial (default).</li></ul>" | |
} | |
}, | |
"b179b217661d456db3b59ef02abdcfab": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_0e78500d1a0a44d0876a7159903d6cb6", | |
"placeholder": "", | |
"style": "IPY_MODEL_5015d3188478422a8595de88df764b65", | |
"value": "<h2>🔍 Exploration Rate</h2>" | |
} | |
}, | |
"332beedfadc74a4a95ac9753fd8b3ca0": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_5477889420be4bb89c29683a1ae2c1f8", | |
"placeholder": "", | |
"style": "IPY_MODEL_5cb084d2eb2645e4823f8d8e153f4b93", | |
"value": "<p>Balances the trade-off between exploiting the current best expert and exploring potentially better experts. Higher exploration rates encourage trying different experts, while lower rates focus on the current best expert.</p>" | |
} | |
}, | |
"edaa25849ecb49d7ad2d98b6977aeecc": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "FloatSliderModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "FloatSliderModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "FloatSliderView", | |
"continuous_update": true, | |
"description": "Exploration rate for selecting experts", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_045ba4e692e147cbb3737e91e6915e12", | |
"max": 0.5, | |
"min": 0.05, | |
"orientation": "horizontal", | |
"readout": true, | |
"readout_format": ".2f", | |
"step": 0.05, | |
"style": "IPY_MODEL_925c3c659a034cd6b8236e9b35c755dd", | |
"value": 0.2 | |
} | |
}, | |
"2db2452fed464fac96050c22558a1899": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_91f69828da594e91ba07ca30f5863ac6", | |
"placeholder": "", | |
"style": "IPY_MODEL_10c7f5bb43274557a88edfe121a31cd0", | |
"value": "<p><em>Examples:</em></p>" | |
} | |
}, | |
"4c550d4c8aa447e9aa9f7019996cbfbb": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "HTMLModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "HTMLModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "HTMLView", | |
"description": "", | |
"description_tooltip": null, | |
"layout": "IPY_MODEL_0e00121e9f1145568c380df5c3ca6e7a", | |
"placeholder": "", | |
"style": "IPY_MODEL_ae40b66470d542d5bf455f6df22f3386", | |
"value": "<ul><li><code>exploration_rate = 0.05</code>: Use a low exploration rate to heavily focus on the current best expert. Suitable when the system has already converged to a good solution and stability is desired.</li><li><code>exploration_rate = 0.1</code>: Use a moderate exploration rate to occasionally explore alternative experts. Provides a balance between exploitation and exploration.</li><li><code>exploration_rate = 0.3</code>: Use a higher exploration rate to more frequently try different experts. Beneficial when the optimal expert is uncertain and more exploration is needed.</li><li><code>exploration_rate = 0.5</code>: Use an aggressive exploration rate to prioritize exploring new experts over exploiting the current best. Suitable for tasks with a high degree of uncertainty or when the system needs to adapt to changing conditions.</li></ul>" | |
} | |
}, | |
"8041d4bf9955495893d4ada4daed387e": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"085e3e897679415a84ff8f3341be74e2": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"c5835f1ccbab4b288b97e9517e8ca82c": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"bfe83c92d7e244dabd421a37862808c6": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"a3ac06235c4f4027b5309498ff35c8b3": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"d6d5e7cf6cfb45a18d4892434a4ed16f": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"7c89e8e68bab424284cfedd17d9283e8": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "SliderStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "SliderStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "initial", | |
"handle_color": null | |
} | |
}, | |
"a694eda30fe342a0b6303789d658eb44": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"4bf6ff2be919466a84ee442f585ab4db": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"61749680118845839f83275a6e75e280": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"6afb3fa777ee4485a79efad968821ee3": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"2a16f24a58be4bb2b55d5d057e8dba30": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"69b183de25264659a1cd0e67673ac09c": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"aa8d785ccf414e65b2f4189ce179d6c3": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"e5ab8d3855ec4d5296c83d1b3cadbee5": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"b34ad3298cb340b9996734fb9d4e4161": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"18f63332aa37430984da20bc5a49670c": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "SliderStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "SliderStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "initial", | |
"handle_color": null | |
} | |
}, | |
"24936b0a0c1f4ffd97ea8f269e0e92aa": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"27a054ace5614c0c9a5c6c25eefebed9": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"d1945e9b05be4cc7be15fde78d799a29": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"82487143b8294ba98caf6f4a914e5883": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"0e78500d1a0a44d0876a7159903d6cb6": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"5015d3188478422a8595de88df764b65": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"5477889420be4bb89c29683a1ae2c1f8": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"5cb084d2eb2645e4823f8d8e153f4b93": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"045ba4e692e147cbb3737e91e6915e12": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"925c3c659a034cd6b8236e9b35c755dd": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "SliderStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "SliderStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "initial", | |
"handle_color": null | |
} | |
}, | |
"91f69828da594e91ba07ca30f5863ac6": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"10c7f5bb43274557a88edfe121a31cd0": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"0e00121e9f1145568c380df5c3ca6e7a": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"ae40b66470d542d5bf455f6df22f3386": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"a356162964e34b95bcb8e954d4ec683b": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "VBoxModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "VBoxModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "VBoxView", | |
"box_style": "", | |
"children": [ | |
"IPY_MODEL_369229298bd0490494a5c0ee1a9c1e50", | |
"IPY_MODEL_f99fe275b3494e399a1969a6b55638b1", | |
"IPY_MODEL_9b0720d786224e27a2518209b6148ae5", | |
"IPY_MODEL_e5f8db235feb4ceebcaa29a08fdac923", | |
"IPY_MODEL_1b7e18d715ff4fe28fc60ac8e5760d31" | |
], | |
"layout": "IPY_MODEL_6b3a1ade2832450caee860e298f2a41a" | |
} | |
}, | |
"369229298bd0490494a5c0ee1a9c1e50": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DropdownModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DropdownModel", | |
"_options_labels": [ | |
"Business Analysis", | |
"Application Planning", | |
"Source Code Generation", | |
"SQL Generation", | |
"Story Creation", | |
"TV/Movie Script" | |
], | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "DropdownView", | |
"description": "Template:", | |
"description_tooltip": null, | |
"disabled": false, | |
"index": 0, | |
"layout": "IPY_MODEL_84f542806b3a4dc1a5fda1f22636301e", | |
"style": "IPY_MODEL_ce2e521f9f5b492f940a814d8672ae01" | |
} | |
}, | |
"f99fe275b3494e399a1969a6b55638b1": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "TextareaModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "TextareaModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "TextareaView", | |
"continuous_update": true, | |
"description": "Context:", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_92008cb3084046508211d9351d4b483d", | |
"placeholder": "Enter the context here", | |
"rows": null, | |
"style": "IPY_MODEL_71b84a7a136448118a0b1127e0ded14f", | |
"value": "Acme Corporation, a leading multinational conglomerate, is actively exploring strategic investment opportunities in emerging technologies to maintain its competitive edge and drive future growth. The board of directors has convened a special committee to conduct a comprehensive analysis of the technological landscape and identify the most promising areas for investment. The committee seeks in-depth insights and recommendations on which cutting-edge technologies have the potential to revolutionize Acme's core industries and create new market opportunities over the next decade." | |
} | |
}, | |
"9b0720d786224e27a2518209b6148ae5": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "TextareaModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "TextareaModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "TextareaView", | |
"continuous_update": true, | |
"description": "Prompt:", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_544331156f82422e8944aa5688c1238d", | |
"placeholder": "Enter the prompt here", | |
"rows": null, | |
"style": "IPY_MODEL_6b1b21a0670b4407b249d9ad604cd0a2", | |
"value": "Conduct a thorough evaluation of the potential impact and investment viability of four key emerging technologies: artificial intelligence (AI), blockchain, quantum computing, and biotechnology. For each technology, provide a detailed assessment of its current state of development, major players in the field, and projected market growth. Analyze the specific applications and use cases within Acme's core industries, highlighting the potential benefits, challenges, and disruptions each technology could bring. Consider factors such as scalability, regulatory landscape, talent availability, and competitive dynamics when assessing the investment viability of each technology. Provide clear recommendations on which technologies Acme should prioritize for investment, along with a proposed allocation of resources and a high-level roadmap for integration into the company's existing operations." | |
} | |
}, | |
"e5f8db235feb4ceebcaa29a08fdac923": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "IntSliderModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "IntSliderModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "IntSliderView", | |
"continuous_update": true, | |
"description": "Max Tokens:", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_27668c65119547d7abcde7e18cec117d", | |
"max": 2000, | |
"min": 100, | |
"orientation": "horizontal", | |
"readout": true, | |
"readout_format": "d", | |
"step": 50, | |
"style": "IPY_MODEL_ebc4a82495e0412aa1a3990fe7b898ba", | |
"value": 100 | |
} | |
}, | |
"1b7e18d715ff4fe28fc60ac8e5760d31": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "TextareaModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_dom_classes": [], | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "TextareaModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/controls", | |
"_view_module_version": "1.5.0", | |
"_view_name": "TextareaView", | |
"continuous_update": true, | |
"description": "Guidance:", | |
"description_tooltip": null, | |
"disabled": false, | |
"layout": "IPY_MODEL_db54b88135e249a3af3819072013cfab", | |
"placeholder": "Enter guidance for the model", | |
"rows": null, | |
"style": "IPY_MODEL_217aa3ead273475ea1e1d858dce97114", | |
"value": "Provide a comprehensive and well-structured analysis, focusing on delivering clear, concise, and actionable insights. Use industry-specific terminology and cite relevant data and examples to support your recommendations. Maintain an objective and analytical tone throughout the report." | |
} | |
}, | |
"6b3a1ade2832450caee860e298f2a41a": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": null | |
} | |
}, | |
"84f542806b3a4dc1a5fda1f22636301e": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"ce2e521f9f5b492f940a814d8672ae01": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"92008cb3084046508211d9351d4b483d": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": "150px", | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"71b84a7a136448118a0b1127e0ded14f": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"544331156f82422e8944aa5688c1238d": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": "200px", | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"6b1b21a0670b4407b249d9ad604cd0a2": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
}, | |
"27668c65119547d7abcde7e18cec117d": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": null, | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"ebc4a82495e0412aa1a3990fe7b898ba": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "SliderStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "SliderStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "initial", | |
"handle_color": null | |
} | |
}, | |
"db54b88135e249a3af3819072013cfab": { | |
"model_module": "@jupyter-widgets/base", | |
"model_name": "LayoutModel", | |
"model_module_version": "1.2.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/base", | |
"_model_module_version": "1.2.0", | |
"_model_name": "LayoutModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "LayoutView", | |
"align_content": null, | |
"align_items": null, | |
"align_self": null, | |
"border": null, | |
"bottom": null, | |
"display": null, | |
"flex": null, | |
"flex_flow": null, | |
"grid_area": null, | |
"grid_auto_columns": null, | |
"grid_auto_flow": null, | |
"grid_auto_rows": null, | |
"grid_column": null, | |
"grid_gap": null, | |
"grid_row": null, | |
"grid_template_areas": null, | |
"grid_template_columns": null, | |
"grid_template_rows": null, | |
"height": "100px", | |
"justify_content": null, | |
"justify_items": null, | |
"left": null, | |
"margin": null, | |
"max_height": null, | |
"max_width": null, | |
"min_height": null, | |
"min_width": null, | |
"object_fit": null, | |
"object_position": null, | |
"order": null, | |
"overflow": null, | |
"overflow_x": null, | |
"overflow_y": null, | |
"padding": null, | |
"right": null, | |
"top": null, | |
"visibility": null, | |
"width": "40%" | |
} | |
}, | |
"217aa3ead273475ea1e1d858dce97114": { | |
"model_module": "@jupyter-widgets/controls", | |
"model_name": "DescriptionStyleModel", | |
"model_module_version": "1.5.0", | |
"state": { | |
"_model_module": "@jupyter-widgets/controls", | |
"_model_module_version": "1.5.0", | |
"_model_name": "DescriptionStyleModel", | |
"_view_count": null, | |
"_view_module": "@jupyter-widgets/base", | |
"_view_module_version": "1.2.0", | |
"_view_name": "StyleView", | |
"description_width": "" | |
} | |
} | |
} | |
} | |
}, | |
"nbformat": 4, | |
"nbformat_minor": 0 | |
} |
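
For readers scanning the widget state above: the `exploration_rate` and `discount_factor` sliders (see the HTML descriptions embedded in the `FloatSliderModel` entries) parameterize how the toolkit trades off exploiting its current best expert against exploring alternatives, and how heavily future rewards count in the internal reward signal. The following is a minimal, hypothetical sketch of an epsilon-greedy selector with a discounted value update that shows how those two slider values could be used; the `ExpertSelector` class, the learning rate, and the expert names are assumptions for illustration, not the notebook's actual implementation.

```python
import random

# Minimal sketch (not the rUv toolkit's actual code) of how the two slider
# parameters above could drive expert selection:
#   - exploration_rate: probability of trying a random expert (epsilon-greedy)
#   - discount_factor: weight given to estimated future value when updating
#     an expert's running value estimate
# `ExpertSelector` and the expert names below are hypothetical.

class ExpertSelector:
    def __init__(self, experts, exploration_rate=0.2, discount_factor=0.99):
        self.experts = list(experts)
        self.exploration_rate = exploration_rate
        self.discount_factor = discount_factor
        self.values = {name: 0.0 for name in self.experts}  # running value estimates

    def select(self):
        # With probability `exploration_rate`, explore a random expert;
        # otherwise exploit the expert with the highest current estimate.
        if random.random() < self.exploration_rate:
            return random.choice(self.experts)
        return max(self.experts, key=self.values.get)

    def update(self, expert, reward, next_best_value=0.0):
        # Discounted target: immediate reward plus discounted estimate of
        # future value, blended into the running estimate with a fixed
        # learning rate of 0.1 (an illustrative choice).
        target = reward + self.discount_factor * next_best_value
        self.values[expert] += 0.1 * (target - self.values[expert])


# Example usage with made-up expert names and a dummy reward score:
selector = ExpertSelector(["business_analysis", "sql_generation"],
                          exploration_rate=0.2, discount_factor=0.99)
chosen = selector.select()
selector.update(chosen, reward=0.7,
                next_best_value=max(selector.values.values()))
```

With the slider defaults shown above (`exploration_rate = 0.2`, `discount_factor = 0.99`), roughly one request in five would be routed to a randomly chosen expert for exploration, while value updates remain strongly weighted toward long-term reward.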